In recent years, the rapid development and widespread application of generative artificial intelligence (GenAI) technology have, on the one hand, brought opportunities for socio-economic transformation and upgrading, while on the other hand rapidly giving rise to new risks. In the environment of digital metamorphosis, accurately identifying GenAI security risk points is an important safeguard for GenAI security governance. Through programmatic grounded-theory analysis of policy texts, this paper codes the content of 53 policy documents and constructs an "Endogenous-Companion-Application" (ECA) model comprising three core categories, 11 main categories, 31 subcategories, and 92 basic categories. Based on the four stages of digital metamorphosis and their characteristic phases of cocooning, pupation, metamorphosis, and ecologization, the paper identifies and analyzes the risk points GenAI faces at each stage, and proposes a full-process dynamic governance strategy encompassing risk perception, early warning, monitoring, tiered risk management, and response.