M.I. Prieto Barrio1, A. Martínez Gordon1, A. Leal Matilla1, M.N. González García1, L. de Pereda Fernández2
This study explores the expanding role of Artificial Intelligence (AI) in the conduct and productivity of emerging AI-empowered graduate candidates through a Framework designed to Observe Cognitive Usage in Scientists-in-training (FOCUS). This multifaceted approach assesses AI’s capabilities and limitations across key research stages, beginning with a comprehensive bibliographic analysis of literature search methods that compares conventional database queries with several AI-assisted platforms. This phase quantifies the total number of references cited in each work, their relevance and accuracy, and the utility of AI-generated summaries, an aspect previously unexplored. Building on this foundation, AI’s predictive capacity was also probed through a pre-experimental feasibility analysis that simulates potential research outcomes from existing or provided data. This analysis aims to optimize resource allocation by clearly distinguishing feasible projections, aligned with existing evidence, data or hypotheses, from speculative outputs or “hallucinations”, before resources are committed. Furthermore, human-led interpretations of existing experimental data were continuously compared with those generated by AI, enabling a critical evaluation of the alignment (or divergence) between AI’s earlier simulated predictions and the experimental results. This phase performed a side-by-side review of feature recognition and statistical pattern detection against traditional methods involving domain experts, with the aim of revealing discrepancies in trend identification, reasoning or outlier detection.
Lastly, this study addressed the profound implications of AI integration for researcher development, specifically investigating its impact on critical-thinking skills and the dependence on AI tools that could foster cognitive complacency, diminished analytical depth or uncritical acceptance of AI outputs, conceptualized in the literature as “cognitive debt”. Reliance on automated tools was determined by surveying whether graduate students accepted AI outputs unquestioningly, requiring human oversight, or applied selective scrutiny, enhancing analytical rigor through progressive iterative validation (prompt feedback). Through this comprehensive analysis, the proposed study seeks to illuminate the transformative potential of AI in academic research while highlighting the need for cautious implementation to maintain academic integrity and rigor, contributing to a deeper understanding of AI’s role in modern research landscapes.
Keywords: Artificial Intelligence, higher education, literature review, feasibility analysis, cognitive debt.