ALL CLEAR: COPILOT AND HIGHER EDUCATION
L. Zizka
EHL Hospitality Business School / HES-SO University of Applied Sciences and Arts Western Switzerland (SWITZERLAND)
Artificial Intelligence (AI) has long been touted as a panacea for producing knowledge. Fed with millions of texts and sources, AI generators (Gen AI) have skyrocketed in popularity as they seamlessly produce copious amounts of text in seconds. The user enters a prompt, and the output is practically simultaneous. AI tools have made their way into every corner of business, including marketing and communications. The question revolves around the quality of the text that is generated. Should professionals use these Gen AI tools or avoid them like the plague?

In the past ten years, Gen AI has become 'smarter,' using human feedback to improve with each new version. One such tool is Copilot, developed by Microsoft as an add-on to existing Microsoft 365 packages. Compared with its more popular Gen AI competitor, ChatGPT, Copilot is more effective at contextualizing content at the company level, i.e., for business writing. Further, it is 'safer' than many Gen AI tools that share personal information in their responses to any user.

Nonetheless, one area in which Gen AI has thrived is academia. During the COVID-19 pandemic, we were obliged to teach and learn remotely and to engage with technology to do so. However, using technology for teaching and learning is different from using Gen AI, which offers new possibilities to create content (for students and faculty members) and to assess it (for faculty). This study aims to test Copilot's capability to complete an academic writing task when given a specific prompt. The author has yet to see any study investigating Copilot's potential in academic writing tasks; this paper attempts to fill that gap.

The project began with one prompt: Write a 500-word response to the following position: "This paper posits that robots cannot replace humans in the hospitality industry; thus, hoteliers should invest more in promoting the human touch." The response must include five academic journal articles cited in-text and listed in a reference list in APA 7th edition format. This prompt was run 50 times in a row to test the consistency and quality of the responses. Each response was saved in a separate file, and the researcher noted observations about the output. To build on these initial observations, the researcher ran all responses through WordStat to identify common themes. At this stage, early findings show many similarities between the texts, suggesting that students in the same class using the same prompt could face issues with similarity checks or academic integrity. Further, the output is not presented in essay format; rather, it resembles a business document that lists the main ideas with bullet points. This is not the format one would expect of an academic writing assignment. These results suggest that Copilot may be useful in the workplace but, in its current state, is not effective in an educational setting.
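For readers who wish to replicate the similarity observation, a minimal sketch follows. It is illustrative only: the study itself relied on WordStat for the thematic analysis, and the directory name and per-response text files below are assumptions rather than the actual research setup. The sketch uses only Python's standard library to compute a rough pairwise similarity ratio between saved responses; consistently high ratios across many pairs would echo the similarity concern noted above.

# Illustrative sketch only: the study used WordStat for its thematic analysis.
# The directory name and per-response .txt files are hypothetical.
from difflib import SequenceMatcher
from itertools import combinations
from pathlib import Path

RESPONSE_DIR = Path("copilot_responses")  # hypothetical folder holding the 50 saved responses

def load_responses(directory):
    # Read every saved response into a {filename: text} mapping.
    return {p.name: p.read_text(encoding="utf-8") for p in sorted(directory.glob("*.txt"))}

def pairwise_similarity(texts):
    # Rough similarity ratio (0.0 to 1.0) for every pair of responses.
    scores = []
    for (name_a, text_a), (name_b, text_b) in combinations(texts.items(), 2):
        scores.append((name_a, name_b, SequenceMatcher(None, text_a, text_b).ratio()))
    return sorted(scores, key=lambda item: item[2], reverse=True)

if __name__ == "__main__":
    # Print the ten most similar pairs of responses.
    for name_a, name_b, ratio in pairwise_similarity(load_responses(RESPONSE_DIR))[:10]:
        print(f"{name_a} vs {name_b}: {ratio:.2f}")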

This project will help faculty better understand whether and how Copilot could be useful for pedagogical tasks. What should they be looking for? Are there specific phrases or patterns that could suggest the use of AI? The findings will also stimulate discussion and debate regarding the use of Gen AI in the classroom.

Keywords: Higher Education Institutions, AI Generators, Copilot, Academic Writing, Consistency, Quality.