ARTICLE

Volume 3, Issue 9

20 July 2025

AI Security and Personal Information Protection: Challenges and Countermeasures

Shaogang Wu 1
1 Nanning Vocational and Technical University, China
© 2025 by the Author. Licensee Art and Design, USA. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
Abstract

The widespread application of artificial intelligence technology poses many new challenges to personal information protection. Drawing on research and data published through the end of 2024, this paper systematically reviews the principal risks facing personal information in AI environments and proposes an integrated protection strategy at the technical, managerial, and legal levels. The study finds that generative AI heightens the likelihood of privacy leakage through covert data collection and uncontrollable model training. The paper further proposes a solution that integrates privacy-enhancing technologies, tiered governance mechanisms, and a dynamic compliance system, with the aim of building a secure and trustworthy environment for AI applications [1,3,2,5].
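
As an illustration only (not taken from the article itself), the sketch below shows one widely used privacy-enhancing technique of the kind the abstract alludes to: releasing a numeric statistic under differential privacy via the Laplace mechanism. The function name, the example count, and the epsilon value are assumptions chosen for this sketch.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric query.

    Adds Laplace noise with scale = sensitivity / epsilon, the standard
    construction for epsilon-differential privacy on numeric queries.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a user count before it enters an AI pipeline.
# A counting query changes by at most 1 when one person is added or removed,
# so its sensitivity is 1. epsilon = 0.5 is an illustrative privacy budget.
true_count = 1283
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Noisy count released: {private_count:.1f}")
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; techniques of this kind can be combined with federated learning (see references [5] and [11]) so that raw personal data never leaves the data holder.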

Keywords
AI security
Personal information protection
Privacy-enhancing technologies
Data governance
References

 [1] Wang Liming. Strengthening Personality Rights Legislation to Safeguard People's Better Life[J]. Journal of Sichuan University (Philosophy and Social Science Edition), 2018(3): 5-10.
 [2] Zhang Xinbao. On the Legal Path of Personal Information Protection[J]. Chinese Journal of Law, 2023, 45(1): 98-115.
 [3] IDC. Worldwide Artificial Intelligence Spending Guide 2024[R]. 2024.
 [4] McKinsey Global Institute. The State of AI in 2024: Generative AI's Breakout Year[R]. 2024.
 [5] Yang Qiang, Liu Yang. Federated Learning: Algorithms and Applications[J]. Chinese Journal of Computers, 2020, 43(5): 897-909.
 [6] State Council. Opinions on Building a Basic Data System to Better Leverage the Role of Data Elements[Z]. 2022.
 [7] Fredrikson, M., et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing[C]. Proceedings of the 23rd USENIX Security Symposium, 2014: 17-32.
 [8] Brown, T., et al. Language Models are Few-Shot Learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
 [9] Shokri, R., et al. Membership Inference Attacks Against Machine Learning Models[C]. 2017 IEEE Symposium on Security and Privacy (SP), 2017: 3-18.
 [10] McMahan, B., et al. Learning Private Neural Networks Using Differential Privacy[C]. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016: 308-318.
 [11] Yang, Q., et al. Federated Machine Learning: Concept and Applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.
 [12] Supreme People's Court. Interpretation on Several Issues Concerning the Application of Law in Trying Civil Cases Related to the Processing of Personal Information Using Facial Recognition Technology[Z]. 2021.
