EVALUATING ALTERNATIVES TO ESSAY-BASED ASSESSMENT IN A WORLD WHERE GENERATIVE AI IS PERVASIVE
K. Meehan, S. Wilson, W. Farrelly
Atlantic Technological University (IRELAND)
The impact of Generative Artificial Intelligence (GenAI) in recent years has meant that some traditional assessment techniques need to be re-evaluated. While GenAI tools can greatly assist student research and learning, the ability of these tools to generate content with minimal input or understanding raises serious concerns regarding academic integrity. This is particularly problematic for traditional assessment methods where students are required to produce written essays or reports. The potential for a student to submit content that is not commensurate with their level of understanding or the effort they have applied means that, as educators, we must carefully consider alternative assessment strategies.

It is through the lens of improving academic integrity in assessment that this action research project has been approached. In an effort to promote student autonomy, the problem area was discussed with a cohort of 22 undergraduate computing students taking a module on "AI and Machine Learning". The students had a very clear understanding of the issues relating to the inappropriate use of GenAI and agreed that traditional essay-based strategies were no longer an appropriate means of assessment. The students were presented with various alternative assessment strategies, such as presentations, annotated bibliographies, podcasts and debates. A guided in-class discussion allowed the merits of each assessment method to be evaluated. The learning outcomes were presented to the students, who were then given the autonomy to select the methods best suited to assessing them. The assessment approaches selected by the students were a podcast and an oral debate. These methods offer an additional benefit, as they have great potential for assessing a student's understanding, transversal skills (communication, presentation, collaboration) and higher-order skills such as critical thinking and problem solving.

The feedback from this cohort of students was largely positive, both regarding the autonomy of selecting the assessment techniques and the specific assessment strategies that were employed. Some disadvantages were observed: some students were nervous and lacked public speaking skills, and for one student whose first language was not English the real-time nature of the assessment was more challenging. It is clear that the feasibility of utilising these assessment strategies will depend largely on the level of study, the cohort size and other considerations such as disabilities or language barriers within a group. However, in this action research project the change of assessment strategy provided a more authentic assessment, and academic integrity was not a challenge. Further research is required to determine whether these results are generalisable to larger cohorts or different academic domains.

Keywords: Academic Integrity, Generative AI, Artificial Intelligence, Podcast, Debate, Autonomy.