M. Birkenkrahe
The rise of generative AI marks a pivotal moment in the evolution of research practice. Beyond its obvious gains in speed and access, AI introduces deeper shifts in how knowledge is conceived, produced, and validated—particularly through systems that simulate understanding without possessing it. As these systems become embedded in scholarly workflows, they risk amplifying publication volume while eroding trust and blurring traditional markers of originality and rigor. This talk presents a cognitive model of how researchers engage with generative AI across the research life cycle, moving between instrumental and generative modes. We argue that AI is more than a tool or a muse—it is a structural participant in the research process, shaping inquiry through fluent but uncomprehending simulation. Recognizing this, we call for a reflective integration of AI that foregrounds intellectual labor and meaning-making. The implications extend to pedagogy, authorship norms, and the design of AI-aware research infrastructure. Ultimately, thriving in the age of AI will require not just technical fluency, but critical literacy. AI reflects and refracts the values of the research communities that use it—and how we respond will shape the future of scholarship.
Keywords: Generative Artificial Intelligence, Scenario Analysis, Understanding, Research Tools, Cave Allegory.