L. Malita
Since 2023, Artificial Intelligence (AI) has no longer been a discipline confined to a few specialized academic areas. Today, through the widespread adoption of generative AI, almost every field can benefit from the opportunities it offers for academic purposes, including writing, teaching, research, and administrative tasks.
This paper examines the deep research capabilities of several widely used and well-known AI assistants: ChatGPT, Gemini, Grok, DeepSeek, Manus and Perplexity.
Since the beginning of 2025, a number of new AI assistants, many developed in China, have entered use among academic actors. This paper therefore investigates both "old" tools that have recently gained deep research capabilities and new entrants that combine common capabilities with novel functionalities.
The study employs a comparative methodology, assessing these tools across several criteria: length and structure of responses, coverage and source quality, multi-step reasoning and depth of analysis, multimodal analysis capabilities, performance on specific research tasks, writing quality, citation and referencing, and AI hallucination. Findings reveal that while each AI assistant offers unique strengths, each also presents limitations, notably concerning factual accuracy and the risk of hallucination. The study concludes that no single AI tool is a flawless solution for student-led deep research, advocating instead for a hybrid approach and critical engagement with these technologies, alongside the promotion of digital literacy, to ensure responsible and academically rigorous use in higher education. For educators and institutions, the implications highlight the necessity of understanding these tools' capabilities and limitations for effective integration into academic workflows.
Keywords: AI Assistants, Deep Research, Higher Education, Comparative Study, Hallucination.