J.F. Sanjuan Estrada¹, M. Lupión Lorente¹, N. Calvo Cruz², P. Martinez Ortigosa¹
In the context of higher education, automated assessment has emerged as a key strategy for optimizing teaching in technical subjects, particularly in cloud computing environments. However, in the era of artificial intelligence (AI), the assessment of out-of-class activities faces significant challenges due to students' extensive use of AI-based tools. If not properly managed, this phenomenon can compromise the development of essential technical competencies, limiting the acquisition of fundamental practical skills.
This article presents an automated assessment system designed to ensure an objective, precise, and reproducible measurement of student performance in Google Cloud Platform (GCP) and OpenStack environments. The solution is based on the automated verification and validation of configurations using Linux scripts, evaluating critical aspects such as network connectivity, web service deployment, and security configuration. This approach ensures that each student correctly configures and operates cloud resources, mitigating the risks of over-reliance on generative AI tools.
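The verification described above could be sketched as a Linux shell script of this kind; the host address, ports, weights, and specific checks below are illustrative assumptions, not the authors' actual test suite:

```shell
#!/bin/bash
# Hypothetical verification script for a student's cloud deployment.
# HOST, WEB_PORT, the checks, and the penalty weights are assumptions
# made for illustration only.

HOST="${1:-10.0.0.10}"   # student VM address (assumed)
WEB_PORT="${2:-80}"      # expected web service port (assumed)
PENALTY=0

# check <description> <weight> <command...>
# Runs the command silently; on failure, reports it and accumulates
# the weight into the total penalty.
check() {
  local desc="$1" weight="$2"; shift 2
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc (-$weight)"
    PENALTY=$((PENALTY + weight))
  fi
}

# Network connectivity
check "network connectivity" 2 ping -c 1 -W 2 "$HOST"
# Web service deployment
check "web service on port $WEB_PORT" 3 \
  curl -fsS --max-time 5 "http://$HOST:$WEB_PORT/"
# Security configuration (example: password logins disabled)
check "SSH password authentication disabled" 1 \
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$HOST" \
  "grep -q '^PasswordAuthentication no' /etc/ssh/sshd_config"

echo "TOTAL PENALTY: $PENALTY"
```

Because each check is an ordinary command with a pass/fail exit status, new criteria can be added as one-line `check` calls without restructuring the script.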
One of the most innovative features of the system is the automatic assignment of penalties, allowing the detection and quantification of configuration errors, thus ensuring a transparent and objective grading process. Additionally, the integration of personalized forms facilitates project customization for each student, while the mandatory submission of demonstration videos provides empirical evidence of their work. This approach not only reinforces practical learning but also makes it more challenging for students to use AI indiscriminately without truly understanding the underlying processes.
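The penalty mechanism could translate detected errors into a grade along these lines; the 10-point scale, the log format, and the zero floor are illustrative assumptions rather than the authors' actual rubric:

```shell
#!/bin/bash
# Hypothetical grading step: convert a verification log into a final
# grade. The 10-point scale and the "FAIL: ... (-N)" log format are
# assumptions for illustration.

MAX_GRADE=10

# Sum the weights of all FAIL lines, e.g. "FAIL: web service (-3)"
sum_penalties() {
  awk '/^FAIL/ { gsub(/[^0-9]/, "", $NF); total += $NF }
       END { print total + 0 }'
}

# Subtract accumulated penalties from the maximum, flooring at zero
final_grade() {
  local penalty grade
  penalty=$(sum_penalties)
  grade=$((MAX_GRADE - penalty))
  [ "$grade" -lt 0 ] && grade=0
  echo "$grade"
}

printf 'PASS: network connectivity\nFAIL: web service (-3)\nFAIL: firewall rule (-2)\n' \
  | final_grade   # prints 5
```

Keeping the per-check weights in the log itself makes the deduction for each configuration error visible to the student, which is what makes the grading transparent.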
From a pedagogical perspective, the system offers multiple advantages: it reduces faculty workload, standardizes evaluation, and enhances student feedback, significantly optimizing teaching resources. However, its implementation requires the development of dynamic and robust scripts, capable of adapting to different configuration scenarios and ensuring security in cloud computing environments.
Finally, this case study demonstrates that, although students may use AI tools to solve problems, automated assessment prevents them from passing activities unless the resources are correctly configured, ensuring that acquired competencies are verifiable and aligned with the demands of the technological market. This model represents an advancement in digital education, offering a scalable and replicable method for the assessment of technical subjects in the context of digital transformation.
Keywords: Automated assessment, Cloud computing education, Artificial intelligence in learning, Objective grading and evaluation.