The 30th Activity Report by the State Commissioner for Data Protection and Freedom of Information of North Rhine-Westphalia (LDI NRW) sheds light on the current challenges companies face when using artificial intelligence (AI). The report focuses on ensuring compliance with data protection regulations while deploying generative AI systems and critically examines the use of emotion recognition software.
Generative AI Models: A Legal Minefield for Data Protection
The LDI NRW emphasizes that generative AI systems—such as large language models (LLMs)—often process vast amounts of personal data without the data subjects' knowledge, frequently through the automated collection of publicly available web content. While companies may rely on the "legitimate interest" basis under Article 6(1)(f) of the GDPR, the LDI NRW makes it clear that a careful balancing of interests is essential: simply making data publicly available online does not automatically justify its use for training AI models.
According to the report, a particularly problematic issue is the risk of so-called inference errors: AI systems can generate false or misleading information about individuals, whose origin and accuracy often can no longer be traced. Deleting or correcting personal data is technically complex and, for generative models, remains largely unsolved in practice, since it often requires retraining the entire model.
Emotion Recognition Software: Strong Criticism and Clear Boundaries
One serious case highlighted in the report: a call-center operator used AI-based emotion recognition software to analyze the moods of employees and customers from voice data, without obtaining informed consent and without conducting a data protection impact assessment. The LDI NRW regards this as a severe intrusion into personal rights and has therefore prohibited further use; a sanction is also under consideration.
Obligations for Companies: Clear Rules for the Use of AI
The report underscores that data protection remains non-negotiable in the age of artificial intelligence. Companies developing or deploying AI systems are required to:
- Establish a clear legal basis for every processing of personal data,
- Effectively implement data subject rights such as access, rectification, and erasure,
- Conduct data protection impact assessments (DPIAs) at an early stage.
Especially for generative AI systems, the principles of "privacy by design" and "privacy by default" must be upheld. In cases of doubt, companies are advised to seek specialized legal counsel.
Conclusion
The activity report makes it clear that artificial intelligence must not come at the expense of the rights and freedoms of individuals. For businesses, data protection remains mandatory: it is essential both to avoid fines and to sustainably maintain the trust of customers and staff.