--- question: "As of 2026, what is the current state of the EU AI Act provisions on emotion recognition systems? Specifically: what is banned outright (workplace and education contexts under Article 5), what is classified high-risk and what compliance is required (conformity assessment, fundamental-rights impact assessment, post-market monitoring, registration in the EU database, CE marking via notified body), what is permitted, and how have these provisions been interpreted since the February 2025 first enforcement wave on prohibited practices and the August 2025 GPAI obligations. What does a third-party emotional AI SDK provider have to do to be EU-deployable to a covered customer in 2026? What are the penalties for non-compliance? Cite working URLs." model: anthropic/claude-sonnet-4-6 max_output_tokens: 8000 asked_at: 2026-05-15T01:34:48.454Z total_tokens: 9911 status: completed cost_total_usd: 0.0936 cost_input_usd: 0.00002 cost_output_usd: 0.06858 cost_tools_usd: 0.005 --- # Q: As of 2026, what is the current state of the EU AI Act provisions on emotion recognition systems? Specifically: what is banned outright (workplace and education contexts under Article 5), what is classified high-risk and what compliance is required (conformity assessment, fundamental-rights impact assessment, post-market monitoring, registration in the EU database, CE marking via notified body), what is permitted, and how have these provisions been interpreted since the February 2025 first enforcement wave on prohibited practices and the August 2025 GPAI obligations. What does a third-party emotional AI SDK provider have to do to be EU-deployable to a covered customer in 2026? What are the penalties for non-compliance? Cite working URLs. # EU AI Act: Emotion Recognition Systems — State of Play as of May 2026 > **Current date: 15 May 2026.** This briefing synthesises the enforcement waves, Commission guidelines, and compliance obligations as they stand today. All citations include working URLs. --- ## 1. The Regulatory Timeline at a Glance | Date | What Became Enforceable | |---|---| | **2 February 2025** | Article 5 prohibited practices (including emotion recognition ban in workplace/education) | | **2 August 2025** | GPAI model obligations; Article 50 transparency obligations for limited-risk AI | | **2 August 2026** | Full high-risk AI system requirements (Annex III) — **imminent** | --- ## 2. What Is Banned Outright — Article 5(1)(f) ### The Core Prohibition **Article 5(1)(f)** of Regulation (EU) 2024/1689 prohibits, as of **2 February 2025**, any AI system that **infers the emotions of natural persons in the workplace or in educational institutions** based on biometric data. 
This covers: - **Facial expression analysis** (e.g., scoring engagement or stress from webcam feeds) - **Voice/tone stress detection** (e.g., sentiment scoring of calls for employee monitoring) - **Physiological signal inference** (e.g., heart rate variability used to infer mood) - **Behavioural pattern analysis** interpreted as emotional state - **Hiring/recruitment tools** that score candidate emotions during video interviews - **Student monitoring platforms** that infer engagement, frustration, or attention via emotional signals - **Training institutions** — the Commission's Draft Guidelines extend the prohibition explicitly to these, even though they are not named in the statutory text ### What "Biometric Data" Means Here Per **Article 3(39)**, only systems inferring emotions from **biometric data** (face, voice, gait, physiology) are in scope. Pure keyword sentiment analysis of typed text, for example, sits in a legal grey zone — though enforcement guidance suggests caution. ### Narrow Exceptions The prohibition contains **two explicit carve-outs**: 1. **Medical purposes** (e.g., clinical diagnosis of neurological conditions) 2. **Safety purposes** (e.g., detecting driver fatigue in a logistics fleet) These exceptions are interpreted strictly. A wellness app that claims to be "safety-related" without a genuine operational safety justification will not qualify. ([fortiseu.com](https://www.fortiseu.com/blog/ai-act-prohibited-practices-feb-2025-what-changed), [fpf.org](https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/)) ### Who Is Bound The ban applies to **both providers** (vendors/SDK suppliers) **and deployers** (employers, schools), regardless of where they are headquartered, so long as the system is used in the EU. ([uctoday.com](https://www.uctoday.com/workplace-management/eu-ai-act-shock-emotion-recognition-is-now-illegal-at-work-so-why-is-your-vendor-still-selling-it/)) ### Scope Interpretation — "Workplace" Is Broad The Commission Guidelines state the workplace context covers **any physical or virtual space where work is performed** — this includes remote work environments, hybrid setups, and third-party contractor sites. ([fpf.org](https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/)) --- ## 3. What Is Classified as High-Risk (Annex III) Emotion recognition systems that fall **outside** the Article 5 absolute ban — i.e., not deployed in workplace or education contexts — may still be **high-risk** under **Annex III** if they fall into listed categories. Relevant categories include: | Annex III Category | Emotion AI Relevance | |---|---| | **Biometric identification & categorisation** | Biometric-based emotion inference for categorisation of people | | **Education and vocational training** | Assessment tools (non-inferential emotion use) | | **Employment and recruitment** | CV screening, interview scoring (non-emotion but adjacent) | | **Access to essential services** | Credit/insurance risk scoring incorporating behavioural signals | | **Law enforcement** | Polygraph-adjacent tools, suspect profiling | | **Administration of justice** | Risk assessment tools | Full high-risk requirements apply from **2 August 2026**. ([compliquest.com](https://www.compliquest.com/en/blog/what-is-eu-ai-act-requirements-2026)) --- ## 4. 
## 4. High-Risk Compliance Requirements (Effective August 2026)

A high-risk emotion AI system must satisfy **all** of the following before being placed on the market or put into service in the EU:

### 4.1 Risk Management System (Article 9)
- Continuous, iterative risk identification and mitigation throughout the lifecycle
- Documented testing against foreseeable misuse scenarios

### 4.2 Data Governance (Article 10)
- Training, validation, and test datasets must be relevant, representative, and free from known biases
- Documented data lineage and provenance

### 4.3 Technical Documentation (Article 11 + Annex IV)
- Full technical file covering system architecture, training methodology, capabilities, limitations, and intended purpose

### 4.4 Record-Keeping / Logging (Article 12)
- Automatic logging of system operation to enable post-market monitoring and traceability (see the logging sketch at the end of this section)

### 4.5 Transparency to Deployers (Article 13)
- Instructions for use sufficient for deployers to understand capabilities, limitations, and oversight obligations

### 4.6 Human Oversight (Article 14)
- Technical design must enable human review, override, and shut-down capabilities

### 4.7 Accuracy, Robustness, Cybersecurity (Article 15)
- Documented performance metrics; resilience against adversarial manipulation

### 4.8 Conformity Assessment (Article 43)
- **Self-assessment** (internal conformity assessment) is permitted for most Annex III systems
- **Third-party notified body assessment** is required for Annex III point 1 biometric systems (including emotion recognition) where harmonised standards have not been applied in full
- A **CE marking** must be affixed following a successful conformity assessment

### 4.9 Fundamental Rights Impact Assessment — FRIA (Article 27)
- **Deployers** (not providers) that are public bodies, private entities providing public services, or deployers of certain credit-scoring and insurance systems must conduct a FRIA before first use
- Must assess impacts on dignity, non-discrimination, privacy, and other Charter rights
- Results must be documented and, where required, notified to relevant authorities

### 4.10 EU Database Registration (Article 49 + Article 71)
- Providers must **register** high-risk AI systems in the **EU AI public database** before placing them on the market
- Deployers in public-sector contexts have additional registration obligations

### 4.11 Post-Market Monitoring (Article 72)
- Providers must operate a post-market monitoring plan
- Serious incidents or malfunctions must be **reported to national market surveillance authorities** (Article 73)

([compliquest.com](https://www.compliquest.com/en/blog/what-is-eu-ai-act-requirements-2026), [orbiqhq.com](https://www.orbiqhq.com/eu-regulations/eu-ai-act-compliance))
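The Article 12 logging duty (and the Article 72 post-market monitoring that relies on it) is easiest to satisfy when structured records are designed in from the start. The sketch below is illustrative only and is not drawn from the Act or any named SDK; every class and field name is a hypothetical assumption, chosen to show the kind of per-inference record (model version, input modality, declared deployment context, output, confidence, human-override flag) that supports traceability and later incident investigation.

```python
# Illustrative sketch only: a hypothetical append-only audit logger supporting
# Article 12-style record-keeping. Field choices are assumptions, not requirements
# taken from the Act.
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class InferenceRecord:
    timestamp: float          # when the inference ran (epoch seconds)
    model_version: str        # exact model/SDK build used
    input_modality: str       # e.g. "face", "voice" (the biometric source)
    deployment_context: str   # deployer-declared context, e.g. "automotive_safety"
    output_label: str         # emitted emotion/state label
    confidence: float         # model confidence for the emitted label
    human_override: bool      # whether a human reviewer overrode the output


class AuditLog:
    """Append-only JSON-lines log to support traceability and post-market review."""

    def __init__(self, path: str = "inference_audit.jsonl"):
        self.path = Path(path)

    def record(self, rec: InferenceRecord) -> None:
        # One JSON object per line; append-only so earlier records are never rewritten.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")


if __name__ == "__main__":
    log = AuditLog()
    log.record(InferenceRecord(
        timestamp=time.time(),
        model_version="emotion-sdk-2.3.1",
        input_modality="face",
        deployment_context="automotive_safety",   # safety carve-out context
        output_label="drowsy",
        confidence=0.87,
        human_override=False,
    ))
```

A flat, append-only format like this is deliberately simple: it is easy to retain, easy to hand to a market surveillance authority, and easy to feed into the post-market monitoring plan required by Article 72.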
---

## 5. What Is Permitted — Limited-Risk (Article 50) Emotion AI

Emotion recognition systems that are:

- **Not** deployed in workplace or educational contexts (so outside Article 5), **and**
- **Not** falling into an Annex III high-risk category (a narrow pathway in practice, since Annex III point 1(c) expressly lists emotion recognition systems)

…are classified as **limited-risk** and are **permitted**, subject to **transparency obligations** under **Article 50** (effective **2 August 2025**):

- The **user must be informed** they are interacting with an AI system capable of emotion recognition or biometric categorisation
- This disclosure must be made **before** or **at the time** of the interaction (see the sketch at the end of this section)
- AI-generated emotional assessments in consumer contexts (e.g., entertainment, wellness apps with informed consent) must be clearly labelled

Examples of **permitted** use cases:

- Consumer wellness/meditation apps with informed user consent and disclosure
- Entertainment or gaming contexts with explicit opt-in
- Medical/clinical emotion analysis tools (narrow medical carve-out from Article 5)
- Driver fatigue/safety monitoring in transportation (narrow safety carve-out)
- Research applications with appropriate ethical safeguards

([compliquest.com](https://www.compliquest.com/en/blog/what-is-eu-ai-act-requirements-2026), [orbiqhq.com](https://www.orbiqhq.com/eu-regulations/eu-ai-act-compliance))
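To make the Article 50 disclosure duty concrete, here is a minimal sketch (a hypothetical API, not taken from the Act or any vendor SDK) of one way a provider could refuse to start an emotion-analysis session until the deployer confirms that the end user was informed before or at the time of the interaction. The class and method names are assumptions for illustration only.

```python
# Illustrative sketch only (hypothetical API): gate session start on a confirmed
# user disclosure, supporting an Article 50-style transparency duty. This is a
# technical aid, not a legal control or a guarantee of compliance.
class DisclosureNotConfirmed(Exception):
    """Raised when the deployer has not confirmed the required user disclosure."""


class EmotionSession:
    def __init__(self, deployment_context: str):
        self.deployment_context = deployment_context
        self._disclosure_confirmed = False

    def confirm_disclosure(self, notice_text: str) -> None:
        # The deployer records the notice actually shown to the end user.
        if not notice_text.strip():
            raise ValueError("Disclosure notice text must not be empty")
        self._disclosure_confirmed = True

    def start(self) -> None:
        if not self._disclosure_confirmed:
            raise DisclosureNotConfirmed(
                "End user must be informed before or at the time of the interaction"
            )
        # ...analysis would begin here, only after the disclosure gate has passed...
        print(f"Session started in context: {self.deployment_context}")


if __name__ == "__main__":
    session = EmotionSession(deployment_context="consumer_wellness")
    session.confirm_disclosure("This app uses AI to estimate your mood from your voice.")
    session.start()
```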
---

## 6. Enforcement Since February 2025

### February 2025 First Wave

- The Article 5 prohibitions became enforceable on **2 February 2025**, the earliest enforcement date under the Act
- The **EU AI Office** (established within the Commission) began receiving complaints immediately
- As of early 2026, **no formal fines** had yet been published, but the AI Office confirmed it was actively engaging with providers of workplace emotion tools and conducting informal investigations
- Enforcement is described as **complaint-driven** in its early phase, with market surveillance authorities in member states ramping up capacity

([fortiseu.com](https://www.fortiseu.com/blog/ai-act-prohibited-practices-feb-2025-what-changed))

### Post-February 2025 Interpretive Developments

- The **Commission's Draft Guidelines on Prohibited Practices** clarified that the workplace ban covers virtual/remote environments and extends to "training institutions"
- The Guidelines confirmed the prohibition targets **inference** of emotions, not merely detection of facial landmarks — a distinction that matters for vendors claiming they only "detect" rather than "interpret"
- The **power-imbalance rationale** (workers and students are in vulnerable positions relative to employers/schools) was explicitly cited as the policy basis, meaning regulators are unlikely to accept contractual consent waivers from employees as a compliance workaround

([fpf.org](https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/), [lewissilkin.com](https://www.lewissilkin.com/insights/2025/02/17/understanding-the-eu-ai-acts-prohibited-practices-key-workplace-and-advertising-102k011))

### August 2025 GPAI Obligations

- General-purpose AI models (e.g., large multimodal models capable of emotion inference) became subject to **transparency and copyright obligations**
- Models with a **systemic risk** designation face additional adversarial testing and incident reporting requirements
- SDK providers built on top of GPAI model APIs must assess whether their fine-tuning or downstream application re-classifies the system into a higher-risk tier

---

## 7. What a Third-Party Emotional AI SDK Provider Must Do to Be EU-Deployable in 2026

If you are a third-party SDK provider whose product enables emotion recognition and you want covered customers (employers, EdTech platforms, etc.) to deploy it lawfully in the EU, here is your compliance checklist:

### Step 1 — Conduct a Use-Case Triage (Mandatory, Immediate)
- **Map every intended deployment context** against Article 5(1)(f)
- If the SDK can be configured to infer emotions in a **workplace or educational setting** from biometric data → **that feature must be disabled/removed** for EU deployment; there is no compliant path for this use case

### Step 2 — Determine Risk Classification for Remaining Use Cases
- Does the SDK fall into an **Annex III category**? (biometric categorisation, safety component, etc.)
- If yes → full high-risk compliance pathway (see Section 4 above)
- If no → limited-risk pathway with Article 50 transparency obligations

### Step 3 — If High-Risk: Pre-Market Obligations
- [ ] Prepare **technical documentation** (Article 11 / Annex IV)
- [ ] Implement a **risk management system** (Article 9)
- [ ] Establish **data governance** procedures (Article 10)
- [ ] Build in **logging** and **human oversight** features (Articles 12, 14)
- [ ] Commission or complete a **conformity assessment** — self-assessment unless a notified body is required (Article 43)
- [ ] Affix **CE marking**
- [ ] **Register** the system in the EU AI database (Article 49)
- [ ] Prepare **instructions for use** sufficient for deployers (Article 13)

### Step 4 — Contractual / Supply Chain Obligations
- Provide covered customers with **contractual documentation** specifying:
  - What the SDK does and does not infer
  - That the SDK must not be deployed in prohibited contexts (workplace/education emotion inference)
  - Obligations on the deployer for FRIA, registration, and post-market monitoring
- Include **contractual use restrictions** that prohibit EU customers from enabling emotion inference in Article 5-prohibited contexts

### Step 5 — Post-Market
- Operate a **post-market monitoring plan** (Article 72)
- Establish a **serious incident reporting** channel to national authorities
- Monitor Commission guidelines and AI Office enforcement decisions as they evolve

### Step 6 — GPAI Layer (if applicable)
- If the SDK is built on a GPAI model or itself constitutes one, comply with **Article 53** transparency and copyright obligations (effective August 2025)
- Maintain a **model card / technical summary** accessible to downstream providers

---

## 8. Penalties for Non-Compliance

| Violation | Maximum Fine |
|---|---|
| **Breach of Article 5 prohibited practices** (e.g., deploying workplace emotion recognition) | **€35,000,000** or **7% of total worldwide annual turnover**, whichever is higher |
| **Breach of high-risk obligations** (Articles 9–15, 43, 49, 72) | **€15,000,000** or **3% of total worldwide annual turnover**, whichever is higher |
| **Providing incorrect/misleading information** to authorities | **€7,500,000** or **1% of total worldwide annual turnover**, whichever is higher |
| **SME/startup mitigant** | Fines are proportionate; authorities must consider size and market position |

> ⚠️ **For SMEs and start-ups**, the lower of the fixed amount and the turnover percentage applies (Article 99(6)); for large multinationals, the percentage-of-turnover figures will dominate. A worked example follows below.

([teamed.global](https://www.teamed.global/blog/is-emotion-recognition-at-work-legal-in-eu))
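The fine ceilings combine a fixed amount with a share of worldwide annual turnover. The short sketch below reproduces that arithmetic for the three tiers in the table; the SME branch follows the "whichever is lower" rule of Article 99(6), and the function and dictionary names are illustrative only.

```python
# Illustrative arithmetic only: how the Article 99 fine ceilings combine a fixed
# amount with a share of worldwide annual turnover. Figures mirror the table above.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Article 5 breaches
    "high_risk_obligation": (15_000_000, 0.03),   # e.g. Articles 9-15, 43, 49, 72
    "misleading_information": (7_500_000, 0.01),  # incorrect info to authorities
}


def max_fine(violation: str, worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Return the applicable fine ceiling in euros for a violation tier."""
    fixed_cap, pct = FINE_TIERS[violation]
    turnover_cap = pct * worldwide_turnover_eur
    # Non-SMEs: whichever is higher; SMEs/start-ups: whichever is lower (Art. 99(6)).
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


if __name__ == "__main__":
    # Large provider, EUR 2 bn turnover: 7% = EUR 140 m, which exceeds the EUR 35 m
    # fixed amount, so EUR 140 m is the ceiling.
    print(max_fine("prohibited_practice", 2_000_000_000, is_sme=False))  # 140000000.0
    # Start-up, EUR 5 m turnover: 7% = EUR 350 k, and the lower figure applies.
    print(max_fine("prohibited_practice", 5_000_000, is_sme=True))       # 350000.0
```

These are ceilings, not fixed tariffs; the actual amount in any case is set by the competent authority within them.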
Key Sources & Working URLs | Resource | URL | |---|---| | **Article 5 full text — EU AI Act Explorer** | https://artificialintelligenceact.eu/article/5/ | | **FPF Deep Dive: Article 5(1)(f) Prohibition** | https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/ | | **Lewis Silkin: Prohibited Practices & Workplace Implications** | https://www.lewissilkin.com/insights/2025/02/17/understanding-the-eu-ai-acts-prohibited-practices-key-workplace-and-advertising-102k011 | | **Teamed: Is Emotion Recognition at Work Legal in EU?** | https://www.teamed.global/blog/is-emotion-recognition-at-work-legal-in-eu | | **Fortis EU: What Changed on February 2, 2025** | https://www.fortiseu.com/blog/ai-act-prohibited-practices-feb-2025-what-changed | | **CompliQuest: EU AI Act Requirements 2026 Guide** | https://www.compliquest.com/en/blog/what-is-eu-ai-act-requirements-2026 | | **Orbiq: EU AI Act Compliance Complete Guide 2026** | https://www.orbiqhq.com/eu-regulations/eu-ai-act-compliance | | **UC Today: Vendor Accountability for Workplace Emotion AI** | https://www.uctoday.com/workplace-management/eu-ai-act-shock-emotion-recognition-is-now-illegal-at-work-so-why-is-your-vendor-still-selling-it/ | | **Official EU AI Act (Regulation 2024/1689) — EUR-Lex** | https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 | | **EU AI Office (enforcement body)** | https://digital-strategy.ec.europa.eu/en/policies/ai-office | --- ## 10. Summary Decision Tree for an Emotion AI SDK Provider ``` Does the SDK infer emotions from biometric data? │ ├── NO → Standard software rules apply; AI Act largely out of scope │ └── YES │ ├── Is it deployed in WORKPLACE or EDUCATION contexts? │ ├── YES, without medical/safety exception → PROHIBITED (Article 5(1)(f)) │ │ → Must disable/block this feature for EU deployment │ └── NO → Continue ↓ │ ├── Does it fall into Annex III high-risk categories? │ ├── YES → Full high-risk compliance required by August 2, 2026 │ │ (Risk mgmt, data governance, tech docs, logging, human oversight, │ │ conformity assessment, CE marking, EU DB registration, FRIA, post-market monitoring) │ └── NO → Continue ↓ │ └── Limited-risk pathway (Article 50) → Transparency disclosure to end-users required (since August 2025) → Permitted with disclosure ``` --- > **Disclaimer:** This briefing is for informational purposes only and does not constitute legal advice. The EU AI Act is a living regulatory instrument; Commission guidelines, AI Office enforcement decisions, and member-state implementing measures continue to develop. Consult qualified EU legal counsel before making compliance determinations. --- ## Sources 1. [Article 5: Prohibited AI Practices | EU Artificial Intelligence Act](https://artificialintelligenceact.eu/article/5/) 2. [EU AI Act 2026: Requirements, Fines & Compliance Guide](https://www.compliquest.com/en/blog/what-is-eu-ai-act-requirements-2026) 3. [Understanding the EU AI Act's Prohibited Practices: Key Workplace ...](https://www.lewissilkin.com/insights/2025/02/17/understanding-the-eu-ai-acts-prohibited-practices-key-workplace-and-advertising-102k011) 4. [Is Emotion Recognition at Work Legal in EU? No - 2026 | Teamed](https://www.teamed.global/blog/is-emotion-recognition-at-work-legal-in-eu) 5. [EU AI Act Compliance: Complete Guide for 2026 (Regulation... - Orbiq](https://www.orbiqhq.com/eu-regulations/eu-ai-act-compliance) 6. 
---

> **Disclaimer:** This briefing is for informational purposes only and does not constitute legal advice. The EU AI Act is a living regulatory instrument; Commission guidelines, AI Office enforcement decisions, and member-state implementing measures continue to develop. Consult qualified EU legal counsel before making compliance determinations.

---

## Sources

1. [Article 5: Prohibited AI Practices | EU Artificial Intelligence Act](https://artificialintelligenceact.eu/article/5/)
2. [EU AI Act 2026: Requirements, Fines & Compliance Guide](https://www.compliquest.com/en/blog/what-is-eu-ai-act-requirements-2026)
3. [Understanding the EU AI Act's Prohibited Practices: Key Workplace ...](https://www.lewissilkin.com/insights/2025/02/17/understanding-the-eu-ai-acts-prohibited-practices-key-workplace-and-advertising-102k011)
4. [Is Emotion Recognition at Work Legal in EU? No - 2026 | Teamed](https://www.teamed.global/blog/is-emotion-recognition-at-work-legal-in-eu)
5. [EU AI Act Compliance: Complete Guide for 2026 (Regulation... - Orbiq](https://www.orbiqhq.com/eu-regulations/eu-ai-act-compliance)
6. [EU AI Act Prohibited Practices: What Actually Changed on February ...](https://www.fortiseu.com/blog/ai-act-prohibited-practices-feb-2025-what-changed)
7. [Red Lines under EU AI Act: Unpacking the prohibition of emotion ...](https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/)
8. [EU AI Act Shock: Emotion Recognition Is Now Illegal at Work. So ...](https://www.uctoday.com/workplace-management/eu-ai-act-shock-emotion-recognition-is-now-illegal-at-work-so-why-is-your-vendor-still-selling-it/)