ePrivacyseal AI

Most HR tech systems that use AI are classified as high-risk under the EU AI Act (Chapter III, Article 6 in conjunction with Annex III), especially when they support selection processes, e.g. in recruiting and career planning. They must meet strict requirements, and violations are subject to fines of up to 7% of global annual turnover.

ePrivacy and People Partner have jointly developed a cutting-edge certification program that enables HR tech providers to comply with the major applicable laws. The audit combines requirements from the EU AI Act, the GDPR, and labor law, creating a trustworthy AI management framework for HR. The awarded seal prepares your business for entry into the EU market and demonstrates your commitment to protecting personal data.

We will publish the criteria catalog here shortly and offer the new seal starting in Q1 2025.


The certification process comprises a comprehensive technical, legal, and organizational assessment.

  1. Definition: Target of Evaluation / Scope of Certification
  2. Preparatory Consulting
  3. Workshop
  4. Optimization
  5. Audit
  6. Awarding of the Seal

AI Act

The AI Act takes a risk-based approach and defines legal as well as technical requirements. These depend on whether an AI system is classified as minimal-risk, limited-risk, or high-risk. Beyond that, some particularly harmful applications are considered to pose an unacceptable risk and are prohibited outright. The evaluation of individuals based on their social behavior (social scoring) and the creation of facial recognition databases through indiscriminate harvesting of facial images from the internet fall into this category and are banned.

High-risk AI systems, such as those used in HR processes or in the medical field, can pose significant dangers to health, safety, or fundamental rights. Such systems must therefore undergo thorough examination and evaluation before they are brought to market. A conformity assessment procedure under the AI Act is the process through which providers of high-risk AI systems demonstrate that their systems meet the defined requirements before they are placed on the market or put into operation. Certification can be one component of this assessment procedure.

The type of conformity assessment procedure to be conducted depends on the specific category of the high-risk AI system: depending on that category, providers either carry out an assessment based on internal control or must involve a notified body.

In the near future, compliance with the AI Act will be mandatory for all organizations that develop, deploy, or use AI systems. The AI Act is an additional legal framework that exists alongside the GDPR. The GDPR is technology-neutral and already applies to AI systems; this will continue to be the case under the AI Act.


Feel free to ask us about the new ePrivacyseal AI and learn more about how its implementation works in practice.

Do you have questions or recommendations for us?

We would be glad to receive your comments.