While artificial intelligence promotes social innovation and development, it also raises technical security concerns. Balancing the structural tension between AI innovation and security risks is crucial for fully unleashing AI's application value. Based on the UK's AI regulatory policy texts, this paper systematically analyzes the value objectives, basic principles, organizational system, technical tools, and institutional framework of AI regulation in the UK. The analysis shows that the UK has developed an "innovation-supporting AI regulation" model. Within an institutional framework guided by national strategy, this model achieves innovative yet secure AI development through goal-setting that balances safety protection and innovation; adherence to basic principles such as transparency, safety, fairness, legality, accountability, and redress; the establishment of a "three-in-one" organizational structure comprising independent regulation, coordinated regulation, and sectoral regulation; and the use of regulatory sandboxes as a technical tool for safety protection. For China, it is advisable to refine the value orientation, build a multi-tiered organizational system, strengthen technical safeguards, and establish a multi-level institutional framework for AI regulation.
Wu Zhongcan, Hao Wenqiang. Artificial Intelligence Regulation Supporting Innovation: Lessons from the UK and Policy Implications[J]. Library & Information, 2025, 45(05): 61-72.
DOI: 10.11968/tsyqb.1003-6938.2025058