EVALUATING CHATGPT-POWERED ASSISTANTS' IMPACT ON STUDENT SATISFACTION IN ONLINE HIGHER EDUCATION ENVIRONMENTS
M.A. Cabeza-Rodríguez
Universidad Francisco de Vitoria (SPAIN)
This research examined students’ perceptions and satisfaction regarding a ChatGPT 3.5–based virtual assistant deployed across 21 online university subjects. A total of 391 students participated by completing the validated COMUNICA questionnaire, covering four constructs: Virtual Assistant Efficiency, Learning Impact, Skill Development, and Technical/Accessibility Aspects. The study employed both quantitative and qualitative methods.

Methodological Approach:
This study used a convenience sample of students (391 of 3,419 enrolled) at a Spanish online university. Data were collected from two Applied Teaching Innovation Projects (PIDAs), in which 21 teachers integrated ChatGPT 3.5 into their online classrooms. The primary instrument, the COMUNICA questionnaire, contained four items per construct on a five-point Likert scale. Statistical analyses included descriptive statistics, tests for gender differences, Exploratory Factor Analysis (EFA), and Confirmatory Factor Analysis (CFA). Additionally, open comments were analyzed through both automated sentiment analysis and inductive coding for deeper contextual insight.
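The internal-consistency check reported below (Cronbach's alpha) can be sketched as follows. The Likert responses here are synthetic and purely illustrative, not the study's data; only the formula itself is standard.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative example: four correlated five-point Likert items for 391 respondents
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(391, 1))    # shared underlying "attitude" per respondent
noise = rng.integers(-1, 2, size=(391, 4))  # small item-level deviations
scores = np.clip(base + noise, 1, 5)        # keep responses on the 1-5 scale
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")  # correlated items yield a high alpha
```

With positively correlated items the summed-scale variance dominates the item variances, which is why alpha approaches 1 for a coherent construct.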

Key Findings:
Quantitative results revealed that female students rated Virtual Assistant Efficiency slightly higher than male students, indicating a potential gender-based difference in attitudes toward the technology. Confirmatory Factor Analysis (CFA) supported a single-latent-variable model with excellent fit indices; factor loadings ranged from .89 to .95, showing that each observed dimension strongly reflects overall student satisfaction. Internal consistency was satisfactory (Cronbach's alpha ≥ .8). While the four constructs captured distinct aspects of the students' experience, the data suggested they represent facets of the same overarching satisfaction construct.

Qualitative analysis of optional student comments (n=185) generally supported the quantitative results. Machine learning–based sentiment evaluation showed a predominantly positive tone (66% on a normalized scale). Students praised the assistant for resolving doubts promptly, clarifying challenging concepts, and fostering learning autonomy.
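The normalized percentage reported above can be illustrated with a minimal sketch: assuming a lexicon-based scorer that emits per-comment polarity in [-1, 1] (the per-comment values below are hypothetical, not the study's data), each score is mapped to a 0–100% scale and averaged.

```python
def normalize_polarity(score: float) -> float:
    """Map a polarity score in [-1, 1] to a 0-100 percentage scale."""
    if not -1.0 <= score <= 1.0:
        raise ValueError("polarity must lie in [-1, 1]")
    return (score + 1.0) / 2.0 * 100.0

# Hypothetical per-comment polarity scores; averaging the normalized
# values gives an overall corpus-level sentiment figure.
comment_polarities = [0.6, 0.8, -0.2, 0.4]
overall = sum(normalize_polarity(p) for p in comment_polarities) / len(comment_polarities)
print(f"{overall:.0f}%")  # prints 70% for these illustrative values
```

On this scale 50% is neutral, so a corpus-level score above 50% indicates a predominantly positive inclination.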

Discussion:
While most students perceived the virtual assistant positively, challenges remain. For instance, some respondents noted that the assistant needed more robust usage guidance and greater subject-matter accuracy. Technical and accessibility features, although adequately rated, showed weaker associations with overall satisfaction, suggesting further optimization is needed. The gender imbalance in the sample (68% women) and the non-probabilistic sampling limit the generalizability of the findings. Moreover, the study did not control for external variables such as socioeconomic background or prior digital literacy. Although a few students mentioned ChatGPT-4's more comprehensive responses, only the 3.5 version was officially integrated for data collection.

Conclusions:
By indicating that Virtual Assistant Efficiency is the strongest determinant of satisfaction, the study underscores the potential of AI-based conversational tools in online higher education. Overall, the study’s findings reinforce the promise of AI-driven virtual assistants, emphasizing the need for ongoing enhancement in technical, pedagogical, and ethical dimensions to ensure they serve as effective, equitable, and accessible learning supports.

Keywords: Artificial intelligence, chatbot, ChatGPT, higher education, online teaching, technology.

Event: EDULEARN25
Track: Innovative Educational Technologies
Session: Generative AI in Education
Session type: VIRTUAL