Legal news

Launch of the PANAME project to help stakeholders analyse the confidentiality of their AI model(s)

On June 26, 2025, the French Data Protection Authority (CNIL) announced the launch of the Privacy Auditing of AI Models (PANAME) project — which it will lead and legally frame — with the goal of developing a tool that can test the confidentiality of artificial intelligence models (“AI Model(s)”).

This initiative comes against the backdrop of EU-level efforts to regulate AI, notably the adoption of an opinion by the European Data Protection Board (EDPB) “on certain data protection aspects related to the processing of personal data in the context of AI models” [1].

The opinion notes that determining whether an AI Model is truly anonymous—and therefore whether the General Data Protection Regulation (GDPR) applies—requires, among other things, assessing the Model’s resilience to attacks on data confidentiality.

To make it easier for organisations to test the confidentiality of their AI Model(s), the CNIL, ANSSI, PEReN, and the IPoP project of the PEPR Cybersecurity have launched the PANAME project, which will [2]:

  • develop “[…] a software library, wholly or partly open source, designed to standardise how model confidentiality is tested”; and
  • enable “[…] an efficient, low-cost implementation of certain technical confidentiality assessment tests that actors in the AI ecosystem may carry out to evaluate an AI model’s GDPR compliance”.

Testing phases are planned to ensure the tool is developed in line with real-world use cases. In addition, the CNIL has announced the upcoming publication of recommendations relating to the analysis of a Model’s resistance to attacks on data confidentiality, and to the documentation of that analysis.

These measures are in addition to the code of practice for general-purpose AI, published on July 10, 2025, which will be discussed in a future article.