ABSTRACT
DESIGN AND CHALLENGES OF OPEN LARGE LANGUAGE MODEL FRAMEWORKS (OPEN LLM): A SYSTEMATIC LITERATURE MAPPING
I.M. García López, M.S. Ramírez Montoya, J.M. Molina Espinosa
Tecnologico de Monterrey (MEXICO)
Analyzing open large language model (OLLM) frameworks is crucial for understanding how AI models are managed and regulated. This study examines evidence published between 2019 and 2024 on OLLM frameworks that integrate AI, using a systematic mapping review of 227 articles indexed in Scopus and Web of Science. After applying inclusion, exclusion, and quality criteria, the analysis revealed four significant findings: the need to improve customization and accuracy, challenges in latency and efficiency, the importance of reliability and security, and the complexity of operational management (LLMOps).

AI has transformed numerous industries and the way people interact with technology. Large language models (LLMs) excel at understanding and generating coherent text. Trained on vast amounts of data, these models perform tasks such as language translation and content creation. OLLMs make these tools more accessible and customizable across industries, including education and healthcare. However, regulation and supervision are crucial to avoid misinformation, bias, and privacy violations. This study aims to analyze the scientific evidence on OLLM frameworks, focusing on research areas, characteristics, and topics. A systematic mapping of articles published from January 2019 to May 2024 provides an overview of the state of the art and identifies gaps and opportunities for future research.

The study employed systematic literature mapping, a broad review of primary studies that identifies the available evidence. The process comprised planning, execution, and reporting phases, applying criteria to select high-impact articles published between 2019 and 2024.

The results show that China and the United States lead in publications on OLLM frameworks, reflecting their significant investments in research and development, with strong output also from Germany and France. Latin American countries need larger research budgets and stronger university research capacity to increase their global presence. OLLMs prove versatile in applications such as book recommendation, ethics, bioinformatics, and data privacy, underscoring their broad applicability and impact. Methodologies such as machine learning, generative language models, data preprocessing, evaluation metrics, and system integration are critical for ensuring accurate, efficient, and reliable model performance. Challenges in scalability, performance, robustness, security, interpretability, and adaptability were identified, along with solutions including optimization techniques, transparency, regulatory frameworks, and ethical guidelines.

As the use of AI grows, LLMs and OLLMs are becoming key tools for a wide range of applications, but their technical and ethical challenges must be addressed. Four critical areas require improvement: customization and accuracy, latency and efficiency, reliability and security, and complexity in operational management (LLMOps). Open LLM frameworks have immense potential but face complex challenges, and addressing these issues is crucial for maximizing their effectiveness and applicability. Future research should expand the literature mapping, investigate optimization techniques, and explore interdisciplinary collaborations to enrich the development and implementation of these models.

Keywords: OLLM, AI, frameworks, higher education, reliability.