人工智能在促进社会创新发展的同时也会带来技术安全隐忧,平衡人工智能创新发展与安全风险之间的结构性矛盾对于充分释放人工智能应用价值具有重要意义。基于英国人工智能监管政策文本,文章对英国人工智能监管的价值目标、基本原则、组织体系、技术工具及制度框架进行分析。研究发现,英国形成了“支持创新的人工智能监管”模式,该模式通过兼顾安全保护与创新发展的目标设定,遵循透明、安全、公平、合法、责任与补偿等基本原则,构建了独立监管、协调监管与行业监管“三维一体”组织架构,运用监管沙盒的安全保护技术工具,在国家战略指引、操作性行为规范与行业自律准则构成的制度框架下实现了人工智能的创新安全发展。对我国人工智能监管而言,应完善人工智能监管的价值导向,构建立体化的人工智能监管组织体系,强化人工智能监管的技术防护,构建多层次的人工智能监管制度。
While artificial intelligence promotes social innovation and development, it also raises technical security concerns. Balancing the structural tension between AI innovation and security risks is crucial for fully unleashing AI's application value. Based on the UK's AI regulatory policy texts, this paper analyzes the value objectives, basic principles, organizational system, technical tools, and institutional framework of AI regulation in the UK. The study finds that the UK has developed a "pro-innovation AI regulation" model. Through goal-setting that balances safety protection with innovation, adherence to basic principles such as transparency, safety, fairness, legality, accountability, and redress, a "three-in-one" organizational structure integrating independent, coordinated, and sectoral regulation, and the use of regulatory sandboxes as a technical tool for safety protection, this model achieves innovative yet secure AI development within an institutional framework composed of national strategic guidance, operational codes of conduct, and industry self-regulatory guidelines. For China's AI regulation, it is advisable to refine the value orientation of AI regulation, build a multi-dimensional regulatory organizational system, strengthen technical safeguards, and establish a multi-level regulatory institutional framework.