On June 26, 2025, the French Data Protection Authority (CNIL) announced the launch of the Privacy Auditing of AI Models (PANAME) project, which it will lead and provide the legal framework for, with the goal of developing a tool that can test the confidentiality of artificial intelligence models (“AI Model(s)”).
This initiative comes against the backdrop of EU-level efforts to regulate AI, notably the European Data Protection Board’s (EDPB) adoption of an opinion “on certain data protection aspects related to the processing of personal data in the context of AI models” [1].
The opinion notes that determining whether an AI Model is truly anonymous—and therefore whether the General Data Protection Regulation (GDPR) applies—requires, among other things, assessing the Model’s resilience to attacks on data confidentiality.
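One common confidentiality test of this kind is a membership inference attack, which probes whether an attacker can tell if a given record was part of a model’s training data. The sketch below is purely illustrative: the toy “model”, data, and threshold are assumptions for demonstration, not PANAME’s methodology or API.

```python
# Minimal sketch of a loss-threshold membership inference attack, the kind
# of confidentiality test a tool like PANAME might automate. All names,
# data, and the threshold here are illustrative assumptions.
import random

random.seed(0)

def model_loss(example, training_set):
    # Stand-in "model": a 1-nearest-neighbour regressor that memorises
    # its training set, so training members get near-zero loss.
    x, y = example
    nearest = min(training_set, key=lambda e: abs(e[0] - x))
    return (nearest[1] - y) ** 2

# Toy data: y = 2x + noise; first half used for "training".
data = [(x, 2 * x + random.gauss(0, 0.5)) for x in range(40)]
members, non_members = data[:20], data[20:]

def infer_membership(example, training_set, threshold=0.1):
    # Attack rule: a low loss suggests the example was in training.
    return model_loss(example, training_set) < threshold

true_pos = sum(infer_membership(e, members) for e in members)
true_neg = sum(not infer_membership(e, members) for e in non_members)
accuracy = (true_pos + true_neg) / len(data)
print(f"attack accuracy: {accuracy:.2f}")  # well above 0.5 => leakage
```

An attack accuracy meaningfully above 0.5 (random guessing) indicates the model leaks information about its training data, which would weigh against treating it as anonymous under the EDPB’s analysis.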
To make it easier for organisations to test the confidentiality of their AI Model(s), the CNIL, ANSSI, PEReN, and the IPoP project of the PEPR Cybersecurity have launched the PANAME project [2].
Testing phases are planned to ensure the tool is developed in line with real-world use cases. In addition, the CNIL has announced the upcoming publication of recommendations relating to the analysis of a Model’s resistance to attacks on data confidentiality and to the documentation of that analysis.
These measures complement the code of practice for general-purpose AI, published on July 10, 2025, which will be discussed in a future article.

