NEURAL RADIANCE FIELDS IN VIRTUAL REALITY: A LITERATURE REVIEW ON REAL-TIME 3D RECONSTRUCTION AND RENDERING
J.H. Moolman, F. Boyle, J. Walsh
Munster Technological University (IRELAND)
This literature review investigates the potential of Neural Radiance Fields (NeRFs) to revolutionise educational experiences by enabling near real-time three-dimensional (3D) asset creation from two-dimensional (2D) images and video, particularly within immersive environments such as Virtual Reality (VR) and Augmented Reality (AR). As digital transformation accelerates across the education sector, there is an urgent need to identify scalable, efficient, and realistic technologies that support interactive teaching, experiential learning, and accessible simulation environments. NeRFs, originally developed for high-fidelity 3D reconstruction and novel view synthesis, are now being adapted to meet the performance demands of educational use cases, from online laboratories to vocational training and spatially aware learning experiences.

Despite their promise, standard NeRF implementations are computationally intensive and resource-heavy, posing challenges for real-time deployment in classroom or headset-based settings. To address these limitations, the review explores a broad range of recent innovations and frameworks proposed to optimise NeRF workflows. These include adaptive rendering techniques such as foveated rendering, which exploits the characteristics of human visual perception to reduce rendering overhead, alongside optimised data structures such as multi-resolution hash grids, sparse voxel octrees, and scene partitioning approaches that enable scalable, parallelisable model generation. Hybrid rendering methods, which combine NeRFs with traditional rasterisation engines, are also analysed for their potential to maintain photorealism while significantly reducing latency.

Accelerated NeRF variants such as Instant-NGP and RT-NeRF, together with quality-focused, anti-aliased formulations such as Mip-NeRF, are critically assessed for their ability to enable real-time interaction and dynamic content generation on consumer-grade devices and standalone VR headsets. These tools offer new possibilities for educators and learners to rapidly digitise physical environments, create immersive digital twins for science, engineering, and design education, and support responsive, interactive feedback in simulation-based learning. The integration of NeRFs into platforms such as Unity and Unreal Engine is examined, highlighting their importance in bridging the gap between AI-generated content and practical deployment within educational virtual worlds.

The review highlights emerging applications in educational VR where NeRFs are used to replicate complex environments for training in safety-critical domains, including healthcare, emergency response, and industrial maintenance. Through case studies and comparative evaluation, this review outlines how optimised NeRF pipelines can support pedagogical innovation by lowering barriers to 3D content creation, increasing the realism and interactivity of digital learning environments, and promoting deeper engagement through immersive, experiential methods.

Ultimately, this review presents NeRF technology as a key enabler of digital transformation in education. By harnessing AI to convert everyday media into richly detailed, navigable 3D models, educators and learners can co-create authentic learning experiences that were previously resource- or expertise-prohibitive. The findings point towards a future where real-time 3D reconstruction is not just a technical capability, but a foundational component of a modern, inclusive, and scalable digital education infrastructure.

Keywords: Neural Radiance Fields (NeRFs), Real-Time 3D Reconstruction, Virtual Reality (VR), Rendering, Computational Optimisation, Immersive 3D Environments.

Event: EDULEARN25
Track: Innovative Educational Technologies
Session: Virtual & Augmented Reality
Session type: VIRTUAL