From “Scanning” to “Shaping”: An Ethical Risk and Governance Mechanism Study of AI-Empowered Brain-Computer Interfaces Based on the Reference of the Large-Scale Game SOMA

Weixiang Gan, Mengfei Xiao*, Sikun Chen, Tara Ahmed Mohammed, Xiaolin Song
Graduate School of Business, SEGi University, Petaling Jaya, Selangor 47810, Malaysia
*Corresponding email: seanphydiyas@gmail.com
https://doi.org/10.71052/srb2024/MBUT2104

With the accelerating convergence of artificial intelligence (AI) and brain-computer interface (BCI) technologies, the focal point of ethical risk is shifting from system security and technical accuracy toward deeper questions of subjectivity: who counts as a human being, and who ought to be recognized as a rights-bearing subject. Drawing on the science-fiction game SOMA, distributed on Steam, as a high-density vehicle of ethical imagination, this study systematically analyzes the governance dilemmas that AI-empowered BCIs may raise. The findings are threefold. First, when a mind is replicated with high fidelity as a digital copy, institutional grey zones emerge around identity continuity and the definition of rights-bearing subjecthood, which can readily trigger identity appropriation and the drift of responsibility attribution. Second, once readout-oriented interfaces are combined with AI inference capabilities, neural data may be continuously reinterpreted as expandable cues to the mind, exposing mental privacy over the long term. In such contexts, one-time consent mechanisms cannot cover downstream and future uses, generating risks of latent discrimination and distorted opportunity allocation. Third, if closed-loop write-in systems operate persistently under an intervention rationale framed as being for the individual’s own good, they directly implicate autonomy and psychological integrity; where exit options and enforceable accountability designs are absent, the legitimacy of governance and societal trust will be severely undermined. Building on these analyses, the article proposes three actionable governance mechanisms. First, it calls for clear regimes of identity designation, authorization, and revocation for mind copies. Second, it recommends high-sensitivity tiered governance of neural inference, clarifying inference boundaries and introducing dynamic consent mechanisms. Third, it argues for institutionalizing meaningful human final control over closed-loop interventions, establishing red-line scenario constraints, and constructing an auditable chain of responsibility, thereby providing an institutional pathway for responsible innovation in AI-empowered brain-computer interfaces.


Share and Cite
Gan, W., Xiao, M., Chen, S., Mohammed, T. A., Song, X. (2025) From “Scanning” to “Shaping”: An Ethical Risk and Governance Mechanism Study of AI-Empowered Brain-Computer Interfaces Based on the Reference of the Large-Scale Game SOMA. Scientific Research Bulletin, 2(4), 80-92.
https://doi.org/10.71052/srb2024/MBUT2104

Published: 26/01/2026