Artificial Intelligence (AI)

Mobile apps, search engine queries, social networking, video and music streaming, and online shopping, to name just a few, generate an immense amount of data every day, and the volume grows year by year. This massive increase in data volume ("Big Data") is accompanied by new developments in the fields of machine learning and artificial intelligence (AI).


  • Experience: With over 1000 clients and numerous complex AI certification projects, we are your AI compliance partner. 
  • Customized Solutions: We develop AI compliance solutions tailored to the needs of your company. 
  • Interdisciplinary Expertise: Our team comprises experienced data protection experts, computer scientists, and legal professionals who provide you with personalized advice.


Our AI consulting services include the development of corporate policies for AI use, data protection impact assessments, and training employees on the legal, technical, and ethical aspects of the deployed AI systems. Additionally, we certify AI systems and ensure compliance with the GDPR.

AI Consulting

  • Development and Implementation of an AI Policy
  • Data Protection Impact Assessment for Deployed AI Systems
  • Employee Training
  • Consulting on Technical and Legal Implementation of "Privacy by Design"
  • Risk Assessment and Derivation of Necessary Measures in Accordance with the AI Regulation (AI Act)
Learn more about our AI Consulting Service

AI Certification

  • Technical and Legal Evaluation of the Deployed AI System 
  • The criteria catalogue is based on the General Data Protection Regulation (GDPR) and incorporates current court rulings, the TDDDG, and guidance from the European Data Protection Board (EDPB) and the data protection authorities. 
  • Certification confirms compliance with the ePrivacyseal criteria catalogue, which encompasses the requirements of EU data protection law under the GDPR. 
  • Accredited and expert-selected technical and legal assessors.
Learn more about ePrivacyseal

ePrivacyseal certifications: artificial intelligence

ePrivacy has certified the following AI applications and products with the ePrivacyseal for data protection and data security:

What is Artificial Intelligence?

Simply explained, AI is the attempt to transfer human learning and thinking to computers or robots, thus giving them intelligence. Instead of being programmed for each new requirement, an AI can find answers and solve partial problems independently. In other words, the computer "learns" autonomously via special algorithms and analyzes huge amounts of data for this purpose. Very often personal data are used for this process.
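As a toy illustration of the "learning" described above (invented data, not any particular product), a model can start with no knowledge of a rule such as y = 2x + 1 and recover it from example data alone, instead of having the rule programmed in:

```python
# Toy example: a model "learns" the rule y = 2x + 1 from data
# via gradient descent, instead of being explicitly programmed with it.

data = [(x, 2 * x + 1) for x in range(10)]  # training examples

w, b = 0.0, 0.0   # model parameters, initially knowing nothing
lr = 0.01         # learning rate

for _ in range(2000):        # repeated exposure to the data
    for x, y in data:
        pred = w * x + b     # current guess
        err = pred - y
        w -= lr * err * x    # nudge parameters to reduce the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # close to 2 and 1
```

The point of the sketch is only the principle: the rule is never written into the program; it emerges from the data, which is why the data itself (often personal data) becomes central.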

Therefore, artificial intelligence is very closely related to the topic of data protection and has to fulfill the high requirements of data protection when it comes to personal data.

Artificial Intelligence and GDPR - Consistency or contradiction?

Since self-learning systems access large amounts of data and also make automated decisions, the risk of violating the rights and freedoms of affected persons increases when personal data is used.

The European General Data Protection Regulation sets out in art. 5 GDPR the principles of data processing that must also be observed by AI systems: lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, and storage limitation.

Legal requirements include, for example, purpose limitation in accordance with art. 5(1)(b) GDPR or compliance with the principle of data minimization in accordance with art. 5(1)(c) GDPR. Technical and organizational standards are necessary according to art. 24 and 25 GDPR and concern, among other things, questions of pseudonymization and anonymization.
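As a hedged sketch of what pseudonymization under art. 25 GDPR can look like in practice (key handling and field names here are illustrative, not a prescribed method), a direct identifier can be replaced with a keyed hash:

```python
import hashlib
import hmac

# Illustrative key; in practice it must be stored separately from the data.
SECRET_KEY = b"store-separately-from-the-data"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash.

    The mapping cannot be reversed without the key, and the key is
    stored apart from the dataset. That separation is what makes this
    pseudonymization rather than anonymization (art. 4(5) GDPR).
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "diagnosis": "J45"}
record["user_id"] = pseudonymize(record["user_id"])  # identifier replaced
```

Pseudonymized data remains personal data under the GDPR, since re-identification stays possible for whoever holds the key; only genuine anonymization removes the data from the GDPR's scope.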

The principle that personal data must be processed in a manner that is comprehensible to the data subject is also important (art. 5(1)(a) GDPR). Traceability and explainability are essential aspects when using AI systems.

The GDPR therefore also applies to AI systems. However, some of the principles, such as data minimization and storage limitation, are not necessarily in line with the processing of large amounts of data - as required in the area of self-learning systems.

The tasks of the data protection experts:

If personal data is involved, AI systems must comply with the GDPR principles without exception. Through data protection by design under art. 25 GDPR, that is, technical and organizational measures built in at an early stage, controllers must ensure that the principles for the processing of personal data from art. 5 GDPR are implemented. High standards also apply to Privacy by Design and Privacy by Default as well as to the requirement for data minimization. Violations are subject to significant monetary sanctions.

Our data protection experts from ePrivacy check whether the measures implemented in your company are suitable to meet the high requirements of data protection.

7 data protection requirements that ePrivacy checks when you're running AI applications:

  • AI may not turn people into objects
    Is there a right for intervention by a real person?
  • AI may only be used for constitutionally legitimate purposes and may not be repurposed beyond them
    Does the data collection comply with this approach?
  • AI must be transparent, comprehensible and explainable
    Are the rights of affected persons (transparency, information) sufficiently implemented?
  • AI must avoid discrimination
    Can discrimination be ruled out, and are there mechanisms for risk monitoring?
  • The principle of data minimization applies to AI
    Does the collection of personal data comply with the principle of data minimization?
  • AI needs accountability
    Is a responsible person appointed, and does this person ensure that the GDPR principles are observed?
  • AI needs technical and organizational standards
    What measures are used to ensure an appropriate level of security?
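The data minimization check in the list above can be sketched in a few lines (the field names are invented for the example): an AI component receives only the attributes on a purpose-specific allowlist, and everything else is dropped before processing.

```python
# Sketch of enforcing data minimization: an AI component only ever
# sees the fields it needs for its stated purpose (illustrative names).

ALLOWED_FIELDS = {"age_band", "region"}  # defined per purpose, not per convenience

def minimize(record: dict) -> dict:
    """Drop every attribute that is not on the purpose-specific allowlist."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"name": "A. Mueller", "age_band": "40-49", "region": "HH",
       "email": "a.mueller@example.com"}
minimized = minimize(raw)  # only age_band and region survive
```

The design choice is deliberate: an allowlist of needed fields, rather than a blocklist of forbidden ones, so that any newly added attribute is excluded by default (Privacy by Default).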

TreuMed Research Project

Development of a data custodianship using the example of distributed artificial intelligence in medicine

ePrivacy is part of a major research project funded by the German Federal Ministry of Education and Research. The objective of this research is to develop a data custodianship for federated artificial intelligence in medicine, in cooperation with the University of Hamburg and the Medical University of Greifswald. A large amount of medical data can only be used to a very limited extent due to high data protection requirements, yet this highly sensitive data is essential for drug and therapy research. The research project deals with processing big medical data in compliance with data protection regulations in Germany. This research is also part of the European data strategy.

There are many so-called privacy-preserving technologies (PPTs) to ensure data protection for users. These technologies involve data aggregation and advanced cryptographic techniques: data anonymization, differential privacy, secure multi-party computation (SMC), and homomorphic encryption (HE). 
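As one example among these technologies, differential privacy can be sketched in a few lines (a simplified illustration, not the project's implementation): a count query over sensitive records is answered with Laplace noise calibrated to the privacy parameter epsilon, so that no single individual measurably changes the result.

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.

    One person joining or leaving the dataset changes a count by at
    most 1, so Laplace noise with scale 1/epsilon yields epsilon-
    differential privacy for this query.
    """
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of Laplace(0, 1/epsilon)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# e.g. report how many patients match a query without exposing any individual
noisy = dp_count(1342, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; choosing it is a policy decision, not a purely technical one.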

If you achieve legally compliant data anonymization, i.e. non-personal data, you do not need to comply with the GDPR, which only governs the processing of personal data. In practice, data anonymization is very hard to accomplish because most applications need an identifier.

Federated learning is a promising approach to mitigating the risk of data breaches, since personal data stays on the user's device. However, the technology is still at an early stage, and more research is needed.
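The federated idea can be sketched as follows (a simplified illustration with invented data, not the TreuMed implementation): each client trains a small model on data that never leaves it and shares only the model parameters, which a central server averages.

```python
# Minimal federated-averaging sketch in pure Python: the server sees
# model parameters, never the clients' raw data.

def local_update(w, data, lr=0.01):
    """One pass of gradient descent on a client's private data (model y = w*x)."""
    for x, y in data:
        w -= lr * (w * x - y) * x
    return w

clients = [                      # each list stays on its owner's device
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
]

w_global = 0.0
for _ in range(50):                                    # communication rounds
    local = [local_update(w_global, d) for d in clients]
    w_global = sum(local) / len(local)                 # server averages parameters

# w_global approaches 3, the relation y = 3x shared by all clients
```

Note that shared parameters can still leak information about the training data, which is why federated learning is often combined with the other PPTs above rather than used alone.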


Do you have questions or recommendations for us?

We are glad to receive your comments.