The integration of multimodal brain-computer interfaces (BCIs) with immersive learning environments represents a transformative approach to personalized education. This study presents a novel framework that synergizes neurophysiological data acquisition, dynamic virtual-physical scene generation, and AI-driven educational agents into an adaptive learning ecosystem. We leverage non-invasive BCIs incorporating electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), eye-tracking, and electromyography (EMG) to capture cognitive and behavioral signals in real time. These inputs drive a closed-loop system in which coaching and strategic agents, developed on the Coze platform with the DeepSeek-R1 large language model, provide personalized instructional support and dynamic difficulty adjustment. Implemented within Unity-based extended reality (XR) environments, the architecture was empirically validated with 387 engineering students and yielded significant gains in learning efficiency and metacognitive engagement: the experimental group showed a 31% improvement in debugging efficiency and a 76% rate of independent strategy formulation. The study further explores applications in special education through emotion recognition via fNIRS and eye-tracking. Despite these promising outcomes, challenges remain in computational latency, multimodal data fusion, and neural data privacy, necessitating future advances in asynchronous processing and encryption protocols.
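The closed-loop dynamic difficulty adjustment described above can be sketched as follows. This is a minimal illustration only: the feature names, fusion weights, and load thresholds are hypothetical assumptions for exposition, not values or code from the study's implementation.

```python
# Hypothetical sketch of closed-loop difficulty adjustment driven by
# multimodal neurophysiological features. Weights and thresholds are
# illustrative assumptions, not parameters reported in the study.

def cognitive_load_index(eeg_theta_beta, fnirs_hbo, pupil_dilation,
                         weights=(0.5, 0.3, 0.2)):
    """Fuse normalized multimodal features (each scaled to [0, 1]) into a
    single cognitive-load index by weighted sum."""
    features = (eeg_theta_beta, fnirs_hbo, pupil_dilation)
    return sum(w * f for w, f in zip(weights, features))

def adjust_difficulty(current_level, load, low=0.35, high=0.65):
    """Keep the learner inside a target load band: ease the task when the
    index signals overload, raise the challenge when it signals underload."""
    if load > high:
        return max(1, current_level - 1)  # overload -> step difficulty down
    if load < low:
        return current_level + 1          # underload -> step difficulty up
    return current_level                  # within band -> hold steady
```

In a deployed system the fused index would be recomputed on each acquisition window and fed back to the XR environment, closing the sense-adapt loop the abstract describes.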
Share and Cite
Xie, T. (2025) Multimodal Brain-Computer Interface for Adaptive Virtual Learning Environments: A Synergistic Architecture of Brain-AI Fusion and Educational Agents. Global Education Bulletin, 2(5), 11-16.
