AI VS HUMANS - EVALUATING CODING PLATFORMS FOR EDUCATION USING LAURILLARD'S CONVERSATIONAL FRAMEWORK
R. Osborne
One of the many outcomes of the COVID-19 pandemic of 2020 was a shift to online teaching and learning. At University College London (UCL), part of that shift meant the rapid acquisition of an online coding platform, CoCalc, so that students learning to code could continue their studies while engaging both with each other and with their tutors. However, once the pandemic subsided, the question was raised: was this still the right platform to support the teaching of coding in a post-pandemic world? It was therefore decided to re-evaluate our needs for an online coding platform.
Coding platforms are not, however, simply teaching environments. Critically, they are also very much learning environments, i.e. environments where students engage with each other, and independently of their tutors, to learn. With that in mind, the evaluation was planned as a joint student-staff initiative, or, as UCL calls it, a ChangeMakers project, rather than as a purely technical or staff-focused evaluation. We wanted to know not only what it meant to teach but also what it meant to learn with different online coding platforms.
Diana Laurillard's Conversational Framework is an educational model developed to improve teaching and learning in higher education. It conceptualises learning as a dialogue or conversation between teacher and learner that unfolds across multiple levels of engagement. The framework describes six types of learning activity: Acquisition, Inquiry, Practice, Production, Discussion and Collaboration. Together, these six learning types encapsulate all the learning that might happen in any educational context, and so can potentially act as a framework for evaluating coding platforms. Our plan was to evaluate coding platforms, together with students, against these six learning types.
Almost simultaneously with these plans, Generative AI appeared on the scene in late 2022. These new artificial intelligence tools seemed both to threaten existing practices and to offer an exciting new future in which thinking could be shared with machines; however, the AIs were also riddled with hallucinations, so there was much uncertainty and doubt over their long-term value. It was therefore decided to blend AI evaluations with human evaluations, not only to tease out which online coding platform might best suit our needs but also to determine whether Generative AI could play a useful role in the overall evaluation process.
This talk will discuss the “multi-dimensional landscape report” that was produced as a result of this project, including:
- How we developed quantitative judgements for the six learning types to allow both humans and AIs to rate each of the final coding platforms (Codio, CoCalc, GitHub Classroom, Noteable)
- Our qualitative approach to evaluation, using a peer dialogue technique for humans and prompt development for the AIs
- The role that the different generative AIs (ChatGPT, Gemini, Copilot) played in the project and how they informed our decisions
The talk will be of particular interest to those comparing online coding platforms, those exploring how the Conversational Framework (and in particular its six learning types) can be ‘operationalised’ as an evaluation framework, and those weighing the pros and cons of using AIs alongside humans in this type of evaluation.
Keywords: Digital, Technology, Higher Education, Evaluation, Coding, Conversational Framework, Learning Types, Students, ChangeMakers.