Bonn, 10 July 2025
Press release 10/2025
BfDI launches public consultation on AI models
Developing practice-relevant approaches by involving the expert community

AI models, especially large language models (LLMs), can reproduce personal information from their training data both verbatim and in paraphrased form. Outputs from AI systems can therefore allow conclusions to be drawn about specific individuals. This problem has concerned regulators worldwide since the release of ChatGPT and the resulting widespread availability of new AI offerings for consumers.
‘AI models are revolutionizing numerous industries while also presenting us with major challenges in terms of transparency, security, and data protection,’ says Prof. Dr. Louisa Specht-Riemenschneider. According to the Federal Commissioner for Data Protection and Freedom of Information (BfDI), numerous questions regarding the protection of personal data in the training of AI models will continue to arise in 2025.
With a public consultation process, the BfDI aims to help ensure that practical knowledge about AI models is available to everyone for regulatory assessment. Experiences, challenges, and solutions from practice are to be made visible in order to develop practical and promising approaches under data protection law.
‘Insights from practice enable us to make better data protection recommendations.’ – Prof. Dr. Louisa Specht-Riemenschneider
‘I want to avoid unrealistic and unclear regulations – but for that I rely on the support of providers and researchers,’ says Louisa Specht-Riemenschneider. The BfDI also uses public consultations to ensure legal certainty for all stakeholders through transparency and dialogue.
You can find all further details in the consultation procedure ‘Protection of personal data in AI models’.