KEEPING AI GENERATED COURSE TEXT ON TOPIC: A TWO STAGE APPROACH TO CONFIRM SUBJECT FOCUS
C. Edwards
The Open University (UNITED KINGDOM)
When they begin creating a new course, many institutions start by defining a set of intended learning outcomes; the materials and assessments are then written to align with this list. In this project, we have begun to use these learning outcomes in two ways which together appear to provide a robust approach to confirming whether AI generated text is sufficiently on topic.

Once learning outcomes are defined, prompts can be developed for generative AI to produce course text that delivers them. In addition to manually checking that the resulting text is a suitable response to the prompts, we find we can also use a new prompt to ask the large language model (LLM) what learning outcomes it would associate with the text. By comparing this generated list of learning outcomes with the original, we are triangulating the generated text. Whilst we would not expect the exact wording of the two lists to match, any difference in their content would indicate a subject area that may be over- or under-represented.

The second approach is a method to mitigate the randomness that AI generated text exhibits in response to prompts. If we use the same prompt repeatedly, we expect a different response each time. Therefore, if we use an LLM to generate a table mapping which learning outcomes are covered by a text, we would not expect it to produce the same table every time, and some of these tables will contain incorrect entries. This method uses repetition to create multiple tables that are then aggregated to reduce the impact of random errors in the mapping.

We describe each of these approaches, give examples, and consider whether their combination provides a relatively robust means of helping to ensure that generated text is focussed and covers what is needed.
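As an illustration only, the triangulation step could be scripted along the following lines. This is a minimal Python sketch, not the authors' implementation; the OpenAI-style client, the model name and the prompt wording are assumptions introduced here for clarity.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is available in the environment

    def infer_learning_outcomes(course_text, model="gpt-4o"):
        # Ask the LLM which learning outcomes it would associate with the text.
        prompt = ("List, one per line, the intended learning outcomes you would "
                  "associate with the following course text:\n\n" + course_text)
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        # One outcome per line; strip bullet characters and blank lines.
        return [line.lstrip("-* ").strip()
                for line in response.choices[0].message.content.splitlines()
                if line.strip()]

The inferred list is then compared with the original learning outcomes; outcomes appearing in only one of the two lists point to subject areas that may be over- or under-represented in the generated text.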
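The aggregation step in the second approach can likewise be sketched in Python. Again, this is an assumption about how such an aggregation might be coded rather than the authors' actual method: each repeated run of the mapping prompt yields a table of learning-outcome/section pairs, and only the pairs that a sufficient fraction of runs agree on are kept.

    from collections import Counter

    def aggregate_mappings(tables, threshold=0.5):
        # Each table maps (learning_outcome_id, section_id) -> True/False,
        # as produced by one run of the mapping prompt.
        votes = Counter()
        for table in tables:
            for cell, covered in table.items():
                if covered:
                    votes[cell] += 1
        # Keep a mapping only if at least `threshold` of the runs marked it,
        # which damps the random errors of any single run.
        return {cell for cell, count in votes.items()
                if count / len(tables) >= threshold}

    # Example: three repeated runs, one of which contains a spurious entry.
    runs = [
        {("LO1", "Section A"): True, ("LO2", "Section A"): False},
        {("LO1", "Section A"): True, ("LO2", "Section A"): True},  # random error
        {("LO1", "Section A"): True, ("LO2", "Section A"): False},
    ]
    print(aggregate_mappings(runs))  # {('LO1', 'Section A')}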

Keywords: GenAI, curriculum authoring, quality.