Exploring the Formation Mechanisms and Preventive Strategies of Cognitive-Degrading Content on the Xiaohongshu Platform

Weixiang Gan1, Mengfei Xiao1, *, Naiqian Zhang2
1Graduate School of Business, SEGi University, Petaling Jaya, Selangor 47810, Malaysia
2Chongqing University of Arts and Sciences, Chongqing 402160, China
*Corresponding email: 282584787@qq.com
https://doi.org/10.71052/jsdh/YDTC2843

As social media becomes deeply embedded in everyday life, continuously reshaping cognitive structures and consumer decision-making pathways, Xiaohongshu has evolved into a major platform through which users obtain lifestyle advice and form value judgments. Beneath the appearance of a flourishing content ecosystem, however, a highly concealed and systematically harmful form of manipulative information has been spreading. This study conceptualizes such information as “cognitive-degrading content”: content that imitates scientific discourse, logical reasoning, authoritative endorsement, and moral appeal as rational symbolic systems, constructing an argumentative shell that appears rigorous but is in fact fallacious and thereby covertly weakening users’ critical thinking and evidence-evaluation abilities. Based on a structured analysis of content forms, the study identifies five typical types of cognitive-degrading content: pseudo-scientific marketing built on terminology stacking, fallacious evaluations that induce cognitive shortcuts, fabricated authority and selective quotation, extreme emotional manipulation combined with identity binding and conspiratorial attribution, and immersive fictional realities. To explain how such content is continuously produced and algorithmically amplified, the study constructs a dual-level analytical framework spanning the micro and macro levels. At the micro level, dual-process theory is introduced to show how cognitive-degrading content systematically activates the intuitive judgment associated with System 1 while suppressing the rational scrutiny associated with System 2.
At the macro level, drawing on the attention economy and the political economy of algorithms, the study demonstrates that recommendation logics centered on completion rates and interaction volume structurally reward emotionalized content with low cognitive cost and high dramatic intensity, forming a self-reinforcing diffusion cycle. Building on this diagnosis, the study proposes a multi-actor collaborative governance pathway: algorithmic restructuring and the introduction of quality weighting at the platform level, benefit constraints and professional boundary norms at the creator level, and enhanced media literacy and participatory co-governance mechanisms at the user level. The theoretical contribution of this study lies in detaching cognitive-degrading content from explanations grounded in individual moral deviance and reconstructing it as a structural information-ecology problem jointly shaped by platform institutions, creators’ rational choices, and users’ cognitive constraints, thereby offering systematic policy implications for information governance on lifestyle-oriented platforms.

References
[1] Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Amazeen, M. A. (2022) The psychological drivers of misinformation belief and spread. Nature Reviews Psychology, 1, 13-29.
[2] Tomassi, A., Falegnami, A., Romano, E. (2024) Mapping automatic social media information disorder: The role of bots and AI in spreading misleading information in society. PLOS One, 19(5), e0303183.
[3] Sultan, M., Tump, A. N., Ehmann, N., Lorenz-Spreen, P., Hertwig, R., Gollwitzer, A., & Kurvers, R. H. (2024) Susceptibility to online misinformation: A systematic meta-analysis of demographic and psychological factors. Proceedings of the National Academy of Sciences, 121(47), e2409329121.
[4] Sun, Y., Xie, J. (2024) Do heuristic cues affect misinformation sharing? Evidence from a meta-analysis. Journalism & Mass Communication Quarterly, 10776990241284597.
[5] Shin, D., Shin, E. Y. (2025) Cascading falsehoods: mapping the diffusion of misinformation in algorithmic environments. AI & Society, 1-18.
[6] Zenone, M., Kenworthy, N., Maani, N. (2023) The social media industry as a commercial determinant of health. International Journal of Health Policy and Management, 12, 6840.
[7] Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., Rand, D. G. (2021) Shifting attention to accuracy can reduce misinformation online. Nature, 592, 590-595.
[8] Denniss, E., Freeman, M., et al. (2025) Social media and the spread of misinformation. Health Promotion International, 40(2), daaf023.
[9] Melchior, C., Oliveira, M. (2022) Health-related fake news on social media platforms: a systematic literature review. New Media & Society, 24(6), 1500-1522.
[10] Fick, J., Rudolph, L., Hendriks, F. (2025) Jargon avoidance in the public communication of science: Single- or double-edged sword for information evaluation? Learning and Instruction, 98, 102121.
[11] French, A. M., Storey, V. C., Wallace, L. (2025) The impact of cognitive biases on the believability of fake news. European Journal of Information Systems, 34(1), 72-93.
[12] Elnaggar, O., Arelhi, R., Coenen, F., Hopkinson, A., Mason, L., Paoletti, P. (2023) An interpretable framework for sleep posture change detection and postural inactivity segmentation using wrist kinematics. Scientific Reports, 13(1), 18027.
[13] Dan, V., Arendt, F. (2021) Visual misinformation: the persuasive power of staged reality. Communication Research, 48(7), 1002-1024.
[14] Cho, Y. Y., Woo, H. (2025) Heuristic and systematic processing on social media: pathways from literacy to fact-checking behavior. Journalism and Media, 6(4), 198.
[15] Wang, R., Yang, H., Wang, Y., Zhai, X. (2025) Understanding how users identify health misinformation in short videos: an integrated analysis using PLS-SEM and fsQCA. Frontiers in Public Health, 13, 1713794.
[16] Geels, J., Graßl, P., Schraffenberger, H., Tanis, M., Kleemans, M. (2024) Virtual lab coats: the effects of verified source information on social media post credibility. PLOS One, 19(5), e0302323.
[17] Inwood, O., Zappavigna, M. (2024) The legitimation of screenshots as visual evidence in social media: YouTube videos spreading misinformation and disinformation. Visual Communication, 14703572241255664.
[18] Shahbazi, M., Bunker, D. (2024) Social media trust: Fighting misinformation in the time of crisis. International Journal of Information Management, 77, 102780.
[19] Van Knippenberg, D., Van Kleef, G. A. (2016) Leadership and affect: Moving the hearts and minds of followers. Academy of Management Annals, 10(1), 799-840.
[20] Guess, A., Nagler, J., Tucker, J. (2019) Less than you think: prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), eaau4586.
[21] Swire-Thompson, B., Lazer, D. (2022) Public health and online misinformation: challenges and recommendations. Annual Review of Public Health, 43, 49-69.
[22] Bakshy, E., Messing, S., Adamic, L. A. (2015) Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.
[23] Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W., Starnini, M. (2021) The echo chamber effect on social media. Proceedings of the National Academy of Sciences of the United States of America, 118(9), e2023301118.
[24] Mancosu, M., Marchi, L., Pellegrini, G. (2020) The emotional underpinnings of fake news acceptance and sharing: Evidence from Italy. Journal of Trust Research, 10(2), 97-123.
[25] Bessi, A., Zollo, F., Del Vicario, M., Puliga, M., Scala, A., Caldarelli, G., Quattrociocchi, W. (2016) Users polarization on Facebook and YouTube. PLOS One, 11(8), e0159641.
[26] Shu, K., Wang, S., Liu, H. (2022) Disinformation, misinformation, and fake news in social media. IEEE Data Engineering Bulletin, 45(1), 3-15.
[27] Mialon, M., Swinburn, B., Sacks, G. (2021) A proposed approach to systematically identify and monitor the commercial determinants of health. Globalization and Health, 17, 16.
[28] Slater, M. D., Long, M., Ford, V. (2023) Narrative persuasion, emotion, and entertainment: a meta-analysis. Communication Research, 50(1), 3-27.
[29] Da Silva, S. (2023) Dual-process theory: a review of System 1 and System 2 thinking. Psych, 5(4), 611-628.
[30] Ku, Y. (2025) Dual-process models of social media information processing: heuristic and systematic pathways. Computers in Human Behavior, 153, 108190.
[31] Subramanian, H., Mitra, S., Ransbotham, S. (2021) Capturing value in platform business models that rely on user-generated content. Organization Science, 32(3), 804-823.
[32] Sharma, V., Bray, K. E., Kumar, N., Grinter, R. E. (2022) Romancing the algorithm: Navigating constantly, frequently, and silently changing algorithms for digital work. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1-29.
[33] Fehrer, J. A., Woratschek, H., Brodie, R. J. (2018) A systemic logic for platform business models. Journal of Service Management, 29(4), 546-568.
[34] Wu, S., Cheng, H., Qin, Q. (2024) Physical delivery network optimization based on ant colony optimization neural network algorithm. International Journal of Information Systems and Supply Chain Management (IJISSCM), 17(1), 1-18.

Share and Cite
Gan, W., Xiao, M., Zhang, N. (2025) Exploring the Formation Mechanisms and Preventive Strategies of Cognitive-Degrading Content on the Xiaohongshu Platform. Journal of Social Development and History, 1(3), 101-113. https://doi.org/10.71052/jsdh/YDTC2843

Published

15/01/2026