HARNESSING GENERATIVE AI FOR PROJECT MANAGEMENT CERTIFICATION: DEVELOPING HIGH-QUALITY QUESTION BANKS WITH PROMPT ENGINEERING
L. Meyer, A. Dannecker
University of Applied Sciences & Arts Northwestern Switzerland (SWITZERLAND)
The integration of generative AI into education offers transformative opportunities to streamline the creation of learning materials, particularly for certification preparation. This paper focuses on the development of a question bank tailored for the IPMA Level D Project Management certification using ChatGPT. The core of this research lies in prompt engineering: the paper details how prompts were designed to elicit relevant, high-quality questions from a 383-page study book comprising 29 chapters. Prompts were crafted to generate open-ended and single-choice questions with corresponding answers, including detailed explanations and references to specific page numbers in the source material.

A key aspect of this study is ensuring that the output adheres to a specific structural format compatible with Learning Management Systems (LMS) such as Moodle. Questions were generated in XML format to enable their direct import into Moodle. This required prompts to be engineered carefully, not only to yield content of the appropriate complexity but also to conform to the technical specifications for seamless import. This structural requirement plays a pivotal role in assessing the efficiency of using generative AI, as deviations from the required format can significantly increase post-processing time.
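To illustrate the structural target the prompts must hit, the snippet below assembles a minimal single-choice question in the Moodle XML import format. It is a sketch only: the question content, the helper name, and the page reference are illustrative, not taken from the study's actual question bank.

```python
import xml.etree.ElementTree as ET

def build_single_choice(name, stem, correct, distractors, feedback, page_ref):
    """Assemble one Moodle-XML multichoice question with a single correct answer."""
    q = ET.Element("question", type="multichoice")
    ET.SubElement(ET.SubElement(q, "name"), "text").text = name
    qt = ET.SubElement(q, "questiontext", format="html")
    ET.SubElement(qt, "text").text = f"{stem} (see p. {page_ref})"
    ET.SubElement(q, "single").text = "true"
    # The correct answer carries fraction="100"; distractors get fraction="0".
    for text, fraction in [(correct, "100")] + [(d, "0") for d in distractors]:
        a = ET.SubElement(q, "answer", fraction=fraction)
        ET.SubElement(a, "text").text = text
        fb = ET.SubElement(a, "feedback")
        ET.SubElement(fb, "text").text = feedback
    return q

# A Moodle import file wraps all questions in a single <quiz> root element.
quiz = ET.Element("quiz")
quiz.append(build_single_choice(
    "Stakeholder basics",
    "Which document identifies the project stakeholders?",
    "Stakeholder register",
    ["Risk log", "Gantt chart"],
    "Explained in the chapter on stakeholder management.",
    42,
))
print(ET.tostring(quiz, encoding="unicode"))
```

A file of this shape can be imported directly via Moodle's question-bank import dialog, which is why format deviations in the AI output translate straight into post-processing effort.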

The prompt engineering methodology included iterative refinement to balance detail, difficulty, and style while meeting structural demands. Key insights are provided on how initial prompts were structured and adjusted based on output quality, addressing challenges such as ambiguous phrasing, misaligned references, overly simplistic questions, and format inconsistencies.
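The iterative approach described above can be sketched as a parameterised prompt template whose constraints (difficulty, page citations, output format) are tightened between refinement rounds. The wording and parameter names below are illustrative assumptions, not the study's actual prompts.

```python
# Hypothetical template: each refinement iteration adds or sharpens a
# constraint (e.g. exact page citations, Moodle-XML-only output).
PROMPT_TEMPLATE = """\
You are preparing IPMA Level D exam questions.
From chapter {chapter} (pages {pages}) of the study book, generate
{n} single-choice questions with four options each.
For every question: mark the correct option, add a short explanation,
and cite the exact page number. Return the result as Moodle XML only.
"""

def render_prompt(chapter: str, pages: str, n: int) -> str:
    """Fill the template for one chapter of the study book."""
    return PROMPT_TEMPLATE.format(chapter=chapter, pages=pages, n=n)

print(render_prompt("Stakeholder Management", "41-58", 5))
```

Keeping the template parameterised per chapter makes the refinement loop repeatable: a fix that resolves, say, misaligned page references in one chapter is applied uniformly across all 29.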

The quality of the AI-generated outputs is currently under expert review, with ongoing analyses focusing on:
- The proportion of questions requiring no or minor adjustments to content or format.
- Cases needing substantial rewriting for clarity, alignment, or adherence to the required structure.
- The number of questions deemed unsuitable for inclusion due to technical or content-related issues.

A time-based comparison is also underway to evaluate the efficiency of the generative AI approach. This involves measuring the time required for reviewing and refining AI-generated questions, including adjustments to ensure Moodle compatibility, against an estimate of the manual effort required for crafting and structuring comparable questions from scratch. Preliminary findings suggest significant time savings, especially in meeting structural requirements for LMS integration.

While this paper emphasizes the technical aspects of prompt engineering, quality assurance, and structural considerations, an outlook for future studies is provided. This includes evaluating participants' feedback on the usability and effectiveness of the question bank as a preparation tool, alongside their performance in the certification process. Metrics such as participant satisfaction, perceived usefulness, and examination outcomes will be assessed to offer deeper insights into the impact of generative AI tools in educational contexts.

The findings so far highlight the importance of expert review in ensuring both the quality and structural integrity of AI-generated content, underscoring the potential of generative AI to reduce workload while maintaining robust standards and technical compatibility for certification materials.

Keywords: Prompt Engineering, Generative AI, IPMA Certification, Question Bank Development, Moodle Integration, AI-Generated Content Review, Educational Technology, Learning Material Efficiency, Certification Preparation, Quality Assurance.

Event: INTED2025
Session: Assessment in the age of AI
Session time: Tuesday, 4th of March from 08:30 to 10:00
Session type: ORAL