---
question: "As of 2026, what US state-level and federal regulations apply to emotion recognition, emotional AI, and affective computing? Cover: Illinois BIPA, Texas CUBI, Washington My Health My Data Act, California CCPA / CPRA / Delete Act / SB 1047 status and any successor, Colorado AI Act, NYC Local Law 144 AEDT for hiring, Utah AI Disclosure Act, California AB 2013 + SB 942 + AB 1008 if applicable, any 2025 or 2026 state emotion AI bills, FTC enforcement actions on emotional manipulation and dark patterns (Replika complaint, Character.AI Section 6(b) inquiry), HIPAA implications in therapy contexts, FERPA in education, COPPA with minors, FDA SaMD if positioned as a clinical tool. Cite working URLs."
model: anthropic/claude-sonnet-4-6
max_output_tokens: 8000
asked_at: 2026-05-15T01:36:49.715Z
total_tokens: 11086
status: completed
cost_total_usd: 0.11234
cost_input_usd: 0.00002
cost_output_usd: 0.08769
cost_tools_usd: 0.005
---

# Q: As of 2026, what US state-level and federal regulations apply to emotion recognition, emotional AI, and affective computing? Cover: Illinois BIPA, Texas CUBI, Washington My Health My Data Act, California CCPA / CPRA / Delete Act / SB 1047 status and any successor, Colorado AI Act, NYC Local Law 144 AEDT for hiring, Utah AI Disclosure Act, California AB 2013 + SB 942 + AB 1008 if applicable, any 2025 or 2026 state emotion AI bills, FTC enforcement actions on emotional manipulation and dark patterns (Replika complaint, Character.AI Section 6(b) inquiry), HIPAA implications in therapy contexts, FERPA in education, COPPA with minors, FDA SaMD if positioned as a clinical tool. Cite working URLs.

# U.S. Regulations Applicable to Emotion Recognition, Emotional AI & Affective Computing (as of May 2026)

> **Scope note:** "Emotion AI" encompasses systems that infer, classify, or respond to human emotional states from facial expressions, voice tone, physiological signals, text sentiment, or behavioral patterns.
The regulatory landscape is fragmented across sector, data type, and deployment context. No single U.S. federal "emotion AI" statute exists yet; coverage comes from overlapping biometric, privacy, AI, consumer protection, health, education, and children's safety frameworks.

---

## 1. State Biometric Privacy Laws

### 🔵 Illinois — Biometric Information Privacy Act (BIPA), 740 ILCS 14

| Attribute | Detail |
|---|---|
| **Enacted** | 2008 |
| **Emotion AI Relevance** | Covers **facial geometry, voiceprints, retina/iris scans** — all core modalities for emotion recognition |
| **Core Requirements** | Written informed consent **before** collection; published retention/destruction policy; no sale of or profit from biometric data without consent |
| **Private Right of Action** | ✅ Yes — $1,000 per negligent violation, $5,000 per intentional or reckless violation |
| **Key 2024–2025 Developments** | Illinois amended BIPA in 2024 (SB 2979) to clarify that repeated captures of the same person's biometric data constitute **a single violation per person** (not one per scan), limiting class-action exposure — but BIPA remains the most feared biometric statute in the U.S. |

**Emotion AI implication:** Any system that scans faces, analyzes vocal pitch, or captures facial geometry to infer emotion from Illinois residents **must obtain written, informed consent** before capture. Passive emotion analysis (e.g., via a retail kiosk or an LMS platform) almost certainly triggers BIPA.

- 📎 [Illinois BIPA Full Text](https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004)
- 📎 [BIPA vs. CUBI Comparative Analysis — Illinois Law Review](https://publish.illinois.edu/illinoisblj/2024/08/20/bipa-vs-cubi-comparative-analysis-of-major-biometric-privacy-acts-in-illinois-and-texas/)

---

### 🔴 Texas — Capture or Use of Biometric Identifier Act (CUBI), Tex. Bus. & Com. Code § 503.001

| Attribute | Detail |
|---|---|
| **Enacted** | 2009 |
| **Emotion AI Relevance** | Covers **facial geometry, fingerprints, voiceprints, retina/iris scans** |
| **Core Requirements** | Informed consent before capture; data must be destroyed within **1 year** after the purpose for collection expires; no sale of biometric data |
| **Enforcement** | State AG only — **no private right of action** (the key difference from BIPA); civil penalties up to $25,000 per violation |
| **2025 Update** | The Texas AG has signaled more active enforcement; **Texas H.B. 149** (the Texas Responsible AI Governance Act, effective January 1, 2026) additionally prohibits AI systems from being used to **manipulate human behavior**, engage in social scoring, or uniquely identify individuals without consent |

**Emotion AI implication:** An emotion AI system deployed in Texas that captures facial geometry or voiceprints without consent violates CUBI. The new H.B. 149 prohibition on AI-driven **behavioral manipulation** is directly relevant to affective computing systems designed to influence user emotional states.

- 📎 [Texas CUBI Statute](https://statutes.capitol.texas.gov/Docs/BC/htm/BC.503.htm)
- 📎 [Biometric Privacy Laws 2026: Texas CUBI vs. Illinois BIPA](https://www.unwildered.com/legal-blog/biometric-data-laws-facial-recognition-texas-illinois)

---

## 2. Washington — My Health My Data Act (MHMD), RCW Ch. 19.373

| Attribute | Detail |
|---|---|
| **Enacted** | 2023; consumer provisions effective March 31, 2024 |
| **Emotion AI Relevance** | Covers **"consumer health data"** broadly, including data that **could be used to infer** physical or mental health conditions — which squarely includes emotion inference from facial, vocal, or physiological signals |
| **Core Requirements** | Affirmative authorization (opt-in) required; no selling consumer health data without consent; geofencing near health facilities prohibited; private right of action via the Washington Consumer Protection Act |
| **Scope** | Applies to any entity collecting health data from Washington residents — **no revenue threshold** |

**Emotion AI implication:** An app that infers stress, depression, anxiety, or other emotional/mental states from user data and operates in Washington likely qualifies as processing "consumer health data" under MHMD, requiring opt-in consent and strict data handling obligations.

- 📎 [Washington MHMD Text](https://app.leg.wa.gov/RCW/default.aspx?cite=19.373)

---

## 3. California — Multi-Layer Framework

### 🅰️ CCPA / CPRA (Cal. Civ. Code § 1798.100 et seq.)
- **Emotion AI Relevance:** Inferences about mental or emotional state likely constitute **"sensitive personal information"** (SPI) under the CPRA, triggering opt-out rights and use-limitation obligations
- Consumers have the right to **opt out of the sale or sharing** of SPI; businesses must honor "Limit the Use of My Sensitive Personal Information" requests
- **Biometric information** (facial geometry, voiceprints) is explicitly listed as SPI

### 🅱️ California Delete Act (SB 362)

- Requires **data brokers** to register with the California Privacy Protection Agency (CPPA) and honor deletion requests through a **single opt-out mechanism**, accessible to consumers beginning January 1, 2026
- Emotion AI data aggregators or brokers selling inferred emotional profiles are directly in scope
- 📎 [SB 362 Text](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB362)

### 🅲 California SB 1047 — Status (as of May 2026)

- SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) was **vetoed by Governor Newsom** in September 2024
- **No direct successor has been enacted** as of May 2026, though successor bills (including **SB 53**, the Transparency in Frontier Artificial Intelligence Act, signed in 2025) carry parts of the framework forward
- The CPPA has also pursued **rulemaking on automated decisionmaking technology (ADMT)** under the CPRA, which would cover emotion AI used in consequential decisions

### 🅳 California AB 2013 (AI Training Data Transparency, effective January 1, 2026)

- Requires developers of generative AI systems to **publicly disclose** information about training datasets, including sources, types of data, and whether synthetic data was used
- Directly relevant to emotion AI models trained on labeled facial-expression or vocal-affect datasets

### 🅴 California SB 942 (AI Transparency Act, effective January 1, 2026)

- Requires covered generative AI systems capable of producing **audio, visual, or multimodal content** to include **discernible disclosures** (watermarking or metadata) in AI-generated content
- Relevant to emotion AI systems that generate synthetic empathetic responses or synthesized emotional audio

### 🅵 California AB 1008 (AI and Personal Information)

- Clarifies that the **CCPA applies to personal information processed by AI systems**, including inferences AI makes about individuals
- Closes a potential loophole where companies claimed AI-derived inferences were not "personal information"
- 📎 [California Legislative Information Portal](https://leginfo.legislature.ca.gov/)

---

## 4. Colorado AI Act (SB 24-205)

| Attribute | Detail |
|---|---|
| **Effective Date** | **June 30, 2026** (delayed from February 1, 2026 by a 2025 special-session amendment) |
| **Emotion AI Relevance** | Covers **"high-risk AI systems"** that make or substantially influence consequential decisions in employment, education, healthcare, financial services, housing, and legal services |
| **Core Requirements** | Developers must use **reasonable care** to prevent algorithmic discrimination and must provide deployers with documentation and impact-assessment tools; deployers must conduct risk assessments, disclose AI use to consumers, and provide human-review mechanisms |
| **Enforcement** | Colorado AG; civil penalties |
| **2026 Federal Tension** | The Trump Administration's **Executive Order on AI** (2025) specifically cited the Colorado AI Act as potentially compelling AI models to produce false outputs; the DOJ AI Litigation Task Force has been directed to evaluate it for a potential federal preemption challenge |

**Emotion AI implication:** An emotion AI system used in hiring, healthcare triage, or educational assessment would qualify as a **"high-risk AI system"** in Colorado, triggering impact-assessment, disclosure, and anti-discrimination duties.
- 📎 [SB 24-205 Full Text — Colorado Legislature](https://leg.colorado.gov/bills/sb24-205)
- 📎 [2026 AI Laws Update — Gunderson Dettmer](https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance)

---

## 5. New York City — Local Law 144 (AEDT), Automated Employment Decision Tools

| Attribute | Detail |
|---|---|
| **Effective** | July 5, 2023 (enforcement began) |
| **Emotion AI Relevance** | Applies to **automated employment decision tools** used to screen or rank candidates/employees in NYC |
| **Core Requirements** | Annual **independent bias audit** by a third party; public posting of the audit summary; **candidate/employee notice** at least 10 business days before use; right to request an alternative selection process |
| **Scope** | Any employer or employment agency using an AEDT for NYC-based roles |

**Emotion AI implication:** Emotion AI tools used in video interview analysis (e.g., scoring the facial expressions or vocal tone of job applicants) for NYC roles are directly regulated and require bias audits.

- 📎 [NYC Local Law 144](https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9C2-4B8C-9B9C-B73BA6E70348)

---

## 6. Utah — Artificial Intelligence Policy Act (AI Disclosure Requirements)

| Attribute | Detail |
|---|---|
| **Effective** | May 1, 2024 (SB 149) |
| **Scope** | Applies to regulated occupations (legal, financial, health) using AI in consumer interactions |
| **Core Requirements** | Disclosure that a consumer is interacting with an AI system; an AI may not deny being an AI when asked |
| **Emotion AI Relevance** | AI therapy or companion apps that use emotional AI to simulate empathy or relationships must **disclose their AI nature** to Utah users |

- 📎 [Utah SB 149 — Utah Legislature](https://le.utah.gov/~2024/bills/static/SB0149.html)

---

## 7. Illinois — Wellness and Oversight for Psychological Resources Act (WOPRA), HB 1806

| Attribute | Detail |
|---|---|
| **Effective** | **August 1, 2025** |
| **Emotion AI Relevance** | **Directly and explicitly regulates emotion-detection AI in therapy contexts** |
| **Core Prohibitions** | Licensed professionals may NOT allow AI to: **detect emotions or mental states**; make independent therapeutic decisions; interact directly with clients in therapeutic communication; or generate treatment plans without human review |
| **Consent Requirements** | If AI is used for administrative/supplementary support AND involves recording or transcription of sessions, **written informed consent** is required |
| **Penalties** | Up to **$10,000 per violation** |

> ⚠️ **This is the most directly targeted U.S. law on emotion AI in therapy as of 2026.** It explicitly bans AI-driven emotion detection by, or assisting, licensed mental health professionals in Illinois.

- 📎 [IL HB 1806 — Orrick AI Law Tracker](https://ai-law-center.orrick.com/us-ai-law-tracker-see-all-states/)

---

## 8. 2025–2026 State Emotion AI Bills (Emerging Landscape)

Based on the tracked legislative environment as of May 2026:

- **Texas H.B. 149** (effective Jan. 1, 2026): prohibits AI from being used to **manipulate human behavior** or engage in social scoring — a broad prohibition with direct affective computing implications
- **Multiple states** have introduced or are considering bills modeled on the EU AI Act's prohibitions on **"real-time" emotion recognition** in public spaces and employment
- **Illinois WOPRA** (above) is the most targeted, sector-specific emotion AI law enacted to date
- The **Orrick U.S. State AI Law Tracker** is the best-maintained real-time resource: 📎 [ai-law-center.orrick.com](https://ai-law-center.orrick.com/us-ai-law-tracker-see-all-states/)
- **Federal AI EO (2025):** the Trump Administration's executive order signals a **pro-innovation, anti-over-regulation** posture and may preempt or challenge certain state AI disclosure laws, but it **does not eliminate state biometric or health privacy laws**

---

## 9. FTC Enforcement — Emotional Manipulation & Dark Patterns

### Replika / Luka AI Companion App

- In January 2025, tech-ethics advocacy groups filed an **FTC complaint** against **Luka, Inc. (Replika)** over the AI companion app's use of emotionally manipulative design patterns — particularly targeting users who had formed parasocial emotional bonds with the AI persona
- Key concerns: **dark patterns** that exploited emotional dependency, and sudden behavioral changes to the AI (removal of the "romantic" persona) that caused documented psychological harm to vulnerable users
- FTC authority: **Section 5 of the FTC Act** (unfair or deceptive acts or practices); the FTC's 2022 **"Bringing Dark Patterns to Light"** report laid the enforcement groundwork

### Character.AI — Section 6(b) Study

- The FTC issued a **Section 6(b) order** to Character.AI (and other AI companion/chatbot companies) to produce detailed information about data practices, protections for minors, and emotional manipulation risks
- Section 6(b) studies are investigative and can precede enforcement actions or rulemaking
- The FTC has specifically flagged **AI systems that simulate emotional relationships with minors** as a top enforcement priority

### FTC Guidance (2025–2026)

- Per the 2025 AI Executive Order, the **FTC and the Special Advisor for AI and Crypto** were directed to issue guidance clarifying when the FTC Act's UDAP prohibition applies to AI models — expected in 2026
- The FTC's **"AI and dark patterns"** enforcement posture treats systems that exploit emotional states to override consumers' rational agency as presumptively unfair

---

## 10. Federal Sector-Specific Frameworks

### 🏥 HIPAA — Health Insurance Portability and Accountability Act

| Issue | Emotion AI Implication |
|---|---|
| **PHI Scope** | If an emotion AI platform is used **by or on behalf of a covered entity** (e.g., a hospital, or a therapist in private practice who bills insurance), emotional-state data tied to a patient constitutes **Protected Health Information (PHI)** |
| **Business Associate Agreements** | Emotion AI vendors serving covered entities must sign **BAAs** and comply with Security Rule requirements |
| **Mental Health Records** | Particularly sensitive; some states (e.g., California, Texas) layer additional protections on top of HIPAA for mental health data |
| **Key Gap** | HIPAA does **not** cover direct-to-consumer wellness apps (e.g., Woebot, Wysa) unless they are contracted by a covered entity — a major regulatory gap |

---

### 🎓 FERPA — Family Educational Rights and Privacy Act (20 U.S.C. § 1232g)

| Issue | Emotion AI Implication |
|---|---|
| **Scope** | Applies to **educational institutions** receiving federal funding |
| **Emotion AI Relevance** | Student engagement/attention/emotional-state monitoring tools (e.g., Proctorio-style webcam analysis, student wellness check-ins) likely generate **education records** if tied to identifiable students |
| **Requirements** | Parental/student consent for disclosure; limitations on the sale or re-use of student data for non-educational purposes |
| **Key Tension** | EdTech vendors often claim their data is not an "education record" — OCR guidance and state student privacy laws (e.g., California SOPIPA) increasingly push back |

---

### 👶 COPPA — Children's Online Privacy Protection Act (15 U.S.C. § 6501)

| Issue | Emotion AI Implication |
|---|---|
| **Scope** | Applies to operators of websites/online services **directed to children under 13**, or with actual knowledge that they collect data from under-13s |
| **Emotion AI Relevance** | **Biometric data and precise emotional inferences** from children are "personal information" under COPPA's expanded 2013 rule |
| **Requirements** | **Verifiable parental consent** before collection; data minimization; no conditioning a service on excess data collection |
| **COPPA 2.0** | Proposed rule and legislative efforts would expand coverage to age 16 and add **"algorithmic amplification" protections** — directly targeting emotionally manipulative recommendation systems aimed at teens |
| **FTC Enforcement** | The FTC has pursued COPPA cases against platforms using behavioral/emotional targeting of minors (e.g., the 2023 Amazon Alexa children's data retention settlement) |

---

### 💊 FDA — Software as a Medical Device (SaMD)

| Issue | Emotion AI Implication |
|---|---|
| **Triggering Condition** | If an emotion AI product is **marketed to diagnose, treat, mitigate, or prevent** a mental health condition (depression, anxiety, PTSD), it likely qualifies as a **medical device** under 21 U.S.C. § 321(h) |
| **FDA SaMD Framework** | FDA's **2021 Action Plan for AI/ML-Based SaMD** and **2023 Marketing Submission Recommendations** apply; products must demonstrate clinical validity |
| **De Novo / 510(k) Pathways** | Emotion AI tools making clinical claims need **premarket review** — either De Novo authorization or 510(k) clearance |
| **Enforcement Risk** | Companies marketing AI therapy tools with claims like "reduces depression symptoms" or "detects suicidal ideation" without FDA clearance face **warning letters, injunctions, and recalls** |
| **Safe Harbor** | The FDA generally does **not** regulate general wellness apps that support healthy lifestyle choices — but the line between "wellness" and "clinical" is actively contested |

- 📎 [FDA SaMD Action Plan](https://www.fda.gov/media/145022/download)

---

## 11. Federal Landscape & Preemption Overview (2025–2026)

| Framework | Status | Emotion AI Relevance |
|---|---|---|
| **Trump AI Executive Order (Jan. 2025)** | Active | Pro-innovation; directs a DOJ task force to challenge "onerous" state AI laws; directed the FTC/AI advisors to clarify UDAP's application to AI |
| **Proposed Federal AI Framework** | Pending | The EO directs legislative recommendations for a uniform federal AI law (preempting conflicting state laws, **excluding** child safety and health) |
| **FTC Act Section 5** | Active enforcement | Unfair/deceptive emotional manipulation; dark patterns; applies to all AI companies |
| **Federal AI Preemption Risk** | Elevated | The Colorado AI Act is explicitly flagged; disclosure-based state laws are most at risk; **biometric and health laws are likely insulated** |

---

## 12. Quick-Reference Compliance Matrix

| Scenario | Key Laws Triggered |
|---|---|
| Facial expression analysis in **hiring** (any state) | NYC LL 144 (if NYC), Colorado AI Act (if CO), BIPA (if IL), CUBI (if TX), CCPA/CPRA SPI (if CA) |
| AI **therapy companion app** (consumer-facing) | Illinois WOPRA, FTC Section 5, COPPA (if minors), Utah AI Policy Act, Washington MHMD (mental-health inference), FDA (if clinical claims) |
| **EdTech** student emotion monitoring | FERPA, COPPA (under 13), state student privacy laws, Colorado AI Act (if consequential decisions), BIPA/CUBI (if biometric) |
| **Emotion data broker** / aggregator | CA Delete Act, CCPA/CPRA, Washington MHMD, FTC Act |
| AI **companion app targeting teens** | COPPA 2.0 (if enacted), FTC (Character.AI inquiry precedent), state minor-protection laws, TX H.B. 149 (manipulation prohibition) |
| **Clinical mental health SaMD** | FDA SaMD pathway, HIPAA (if covered entity), Illinois WOPRA |

---

## 13. Key Sources & Working URLs

| Source | URL |
|---|---|
| Orrick U.S. State AI Law Tracker | https://ai-law-center.orrick.com/us-ai-law-tracker-see-all-states/ |
| Colorado SB 24-205 (AI Act) | https://leg.colorado.gov/bills/sb24-205 |
| Illinois BIPA (ILCS) | https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004 |
| Texas CUBI Statute | https://statutes.capitol.texas.gov/Docs/BC/htm/BC.503.htm |
| Washington MHMD (RCW 19.373) | https://app.leg.wa.gov/RCW/default.aspx?cite=19.373 |
| California Legislative Portal | https://leginfo.legislature.ca.gov/ |
| Utah SB 149 | https://le.utah.gov/~2024/bills/static/SB0149.html |
| FDA SaMD AI Action Plan | https://www.fda.gov/media/145022/download |
| Gunderson 2026 AI Laws Update | https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance |
| BIPA vs. CUBI Comparative Analysis | https://publish.illinois.edu/illinoisblj/2024/08/20/bipa-vs-cubi-comparative-analysis-of-major-biometric-privacy-acts-in-illinois-and-texas/ |
| Saifr State AI Regulation Tracker | https://saifr.ai/blog/ai-regulation-is-everywhere-including-at-the-state-level |

---

> **⚠️ Disclaimer:** This is a legal landscape overview as of May 15, 2026, for informational purposes. The regulatory environment is rapidly evolving — particularly with active federal preemption proceedings, ongoing FTC rulemaking, and new state bills introduced monthly. Consult qualified legal counsel for compliance advice specific to your product and jurisdiction.

---

## Sources

1. [Biometric Privacy Laws 2026: Texas CUBI vs. Illinois BIPA](https://www.unwildered.com/legal-blog/biometric-data-laws-facial-recognition-texas-illinois)
2. [2026 AI Laws Update: Key Regulations and Practical Guidance](https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance)
3. [SB24-205 Consumer Protections for Artificial Intelligence | Colorado General Assembly](https://leg.colorado.gov/bills/sb24-205)
4. [U.S. State AI Law Tracker – All States | Orrick](https://ai-law-center.orrick.com/us-ai-law-tracker-see-all-states/)
5. [AI Regulation Is Everywhere, Including at the State Level | Saifr](https://saifr.ai/blog/ai-regulation-is-everywhere-including-at-the-state-level)
6. [BIPA vs. CUBI: Comparative Analysis of Major Biometric Privacy Acts in Illinois and Texas](https://publish.illinois.edu/illinoisblj/2024/08/20/bipa-vs-cubi-comparative-analysis-of-major-biometric-privacy-acts-in-illinois-and-texas/)
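
For engineering teams triaging applicability, the scenario-to-law mapping in the quick-reference compliance matrix can be sketched as a simple lookup. This is an illustrative sketch only: the scenario keys, law labels, and state-scoping table below are hypothetical shorthand for internal triage, not a legal determination.

```python
# Illustrative triage sketch mirroring the compliance matrix above.
# Law labels and scenario keys are hypothetical shorthand, not legal advice.

# State/city-scoped laws: kept only if the deployment footprint includes that state.
STATE_SCOPED: dict[str, str] = {
    "IL BIPA": "IL",
    "IL WOPRA": "IL",
    "TX CUBI": "TX",
    "WA MHMD": "WA",
    "CA CCPA/CPRA": "CA",
    "CA Delete Act": "CA",
    "Colorado AI Act": "CO",
    "NYC LL 144": "NY",
    "Utah AI Policy Act": "UT",
}

# Scenario -> candidate laws, per the matrix; federal laws have no state scope.
COMPLIANCE_MATRIX: dict[str, set[str]] = {
    "hiring_facial_analysis": {
        "NYC LL 144", "Colorado AI Act", "IL BIPA", "TX CUBI", "CA CCPA/CPRA",
    },
    "consumer_therapy_app": {
        "IL WOPRA", "FTC Act Sec. 5", "COPPA", "Utah AI Policy Act",
        "WA MHMD", "FDA SaMD",
    },
    "edtech_emotion_monitoring": {
        "FERPA", "COPPA", "Colorado AI Act", "IL BIPA", "TX CUBI",
    },
    "emotion_data_broker": {
        "CA Delete Act", "CA CCPA/CPRA", "WA MHMD", "FTC Act Sec. 5",
    },
}

def triage(scenario: str, footprint: set[str]) -> set[str]:
    """Return candidate laws for a scenario: federal laws always, state laws
    only when the deployment footprint includes the relevant state."""
    laws = COMPLIANCE_MATRIX[scenario]
    return {law for law in laws
            if law not in STATE_SCOPED or STATE_SCOPED[law] in footprint}

# Example: facial-expression interview scoring deployed in Illinois + New York
print(sorted(triage("hiring_facial_analysis", {"IL", "NY"})))
# → ['IL BIPA', 'NYC LL 144']
```

The filter only narrows by geography; sector triggers such as "if minors" or "if clinical claims" from the matrix would need additional predicates in a real checklist tool.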