Through framing, the media construct the social meaning of artificial intelligence (AI) risks, shaping public perception and influencing policy responses. Clarifying the thematic characteristics and emotional expression of AI risk reporting can provide theoretical support and empirical evidence for improving AI risk governance. Grounded in framing theory, this paper proposes a three-dimensional “Scope-Perspective-Tone” analytical model and applies natural language processing methods, including LDA topic modeling, sentiment analysis, and linguistic vagueness (hedging) detection, to systematically analyze AI risk news texts, revealing the mechanisms and evolution of media framing. The results show that AI risk coverage has shifted from technical to sociopolitical risks: early reporting centered on technical problems such as data privacy and judicial bias, whereas recent reporting increasingly addresses sociopolitical issues such as algorithmic discrimination, election interference, and psychological manipulation. Different types of media outlets differ significantly in reporting stance and focus, reflecting the construction of diverse perspectives in risk communication. Moreover, the media generally adopt negative emotional tones, supplemented by hedging language, which amplifies public anxiety about and alertness to AI risks. The findings indicate that the media are not merely conduits of information but active constructors of the social meaning of AI-related risks.
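To make the “Tone” dimension concrete, the combination of sentiment scoring and hedging detection described above could, in outline, resemble a lexicon-based scorer such as the sketch below. The word lists and the function `tone_profile` are illustrative assumptions for demonstration only, not the study's actual lexicons or implementation, which would rely on full sentiment resources and trained models.

```python
# Illustrative sketch only: the tiny lexicons below are assumptions,
# standing in for the full sentiment and hedging resources a real
# analysis of AI risk news coverage would use.

NEGATIVE_WORDS = {"risk", "threat", "bias", "manipulation", "danger", "harm"}
POSITIVE_WORDS = {"benefit", "progress", "innovation", "opportunity"}
HEDGE_WORDS = {"may", "might", "could", "possibly", "reportedly", "allegedly"}

def tone_profile(text: str) -> dict:
    """Count negative, positive, and hedging cues in one news sentence."""
    tokens = [t.strip(".,;:!?\"'()").lower() for t in text.split()]
    return {
        "negative": sum(t in NEGATIVE_WORDS for t in tokens),
        "positive": sum(t in POSITIVE_WORDS for t in tokens),
        "hedges": sum(t in HEDGE_WORDS for t in tokens),
    }

sentence = "AI could pose a serious threat, and bias may harm voters."
print(tone_profile(sentence))  # counts of each cue type in the sentence
```

A profile like this, aggregated over many articles, is one simple way a negative tone paired with frequent hedging could be quantified and compared across media types.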