--- question: "What are the most consequential 2020 to 2026 controversies, regulatory actions, journalism investigations, or academic critiques targeting emotion recognition and affective AI? Cover: Affectiva in hiring (EPIC FTC complaint and HireVue removal), HireVue's discontinuation of facial analysis, Replika and Italy's Garante fine, the AI Now Institute and Lisa Feldman Barrett critiques of emotion recognition validity, 'Stochastic Parrots' and downstream emotional manipulation concerns, the Cambridge Analytica precedent, Therabot trial issues if any, the Character.AI wrongful death lawsuit (Garcia case), Tech Justice Law Project's FTC Replika complaint (Jan 2025), the FTC Section 6(b) AI companion inquiry (Sept 2025), and any 2024 to 2026 incidents. What lessons should a 2026 pre-seed emotional AI infrastructure company internalize from these? Cite working URLs." model: anthropic/claude-sonnet-4-6 max_output_tokens: 8000 asked_at: 2026-05-15T01:39:10.700Z total_tokens: 11196 status: completed cost_total_usd: 0.11536 cost_input_usd: 0.00002 cost_output_usd: 0.09117 cost_tools_usd: 0.005 --- # Q: What are the most consequential 2020 to 2026 controversies, regulatory actions, journalism investigations, or academic critiques targeting emotion recognition and affective AI? Cover: Affectiva in hiring (EPIC FTC complaint and HireVue removal), HireVue's discontinuation of facial analysis, Replika and Italy's Garante fine, the AI Now Institute and Lisa Feldman Barrett critiques of emotion recognition validity, 'Stochastic Parrots' and downstream emotional manipulation concerns, the Cambridge Analytica precedent, Therabot trial issues if any, the Character.AI wrongful death lawsuit (Garcia case), Tech Justice Law Project's FTC Replika complaint (Jan 2025), the FTC Section 6(b) AI companion inquiry (Sept 2025), and any 2024 to 2026 incidents. What lessons should a 2026 pre-seed emotional AI infrastructure company internalize from these? Cite working URLs. # Emotional AI Controversies 2020–2026: A Strategic Briefing for Pre-Seed Founders > **Current Date:** Friday, May 15, 2026 > This briefing synthesizes regulatory actions, lawsuits, academic critiques, and investigative journalism targeting emotion recognition and affective AI from 2020 to 2026, followed by actionable lessons for founders building emotional AI infrastructure at the pre-seed stage. --- ## 1. EPIC's FTC Complaint Against HireVue & the Affectiva Hiring Context ### What Happened - **November 2019 → January 2021:** The Electronic Privacy Information Center (EPIC) filed a landmark FTC complaint against **HireVue**, a video interview platform that claimed its AI could assess candidates' "cognitive ability," "psychological traits," "emotional intelligence," and "social aptitudes" by analyzing facial expressions, speech, and eye movement. - EPIC alleged the tools were **unfair and deceptive** under FTC Act Section 5 — producing results that were *"biased, unprovable, and not replicable."* - The complaint specifically called out HireVue's use of **Affectiva's emotion recognition SDK**, which underpinned the facial analysis layer. Affectiva, spun out of MIT Media Lab, had built one of the world's largest "emotion databases" — but its applicability to high-stakes hiring was never clinically validated. - EPIC further flagged that eye-tracking software could disparately harm **visually impaired candidates**, and that hiring algorithms built on historical "top performer" data encoded **gender and racial bias**. 
- On **January 12, 2021**, facing the FTC complaint and surging public criticism, HireVue announced it would **discontinue facial analysis entirely**, acknowledging the technology "wasn't worth the concern." *(Source: [EPIC, Jan 12 2021](https://epic.org/hirevue-facing-ftc-complaint-from-epic-halts-use-of-facial-recognition/))*

### The Residual Risk

- HireVue **did not stop** analyzing biometric signals. It continued mining **speech, intonation, and behavioral patterns** — all of which carry equivalent privacy and discrimination risks, just less visually legible to regulators and users.
- This set a precedent: removing the *most visible* modality (face) does not neutralize the underlying scientific and ethical deficits.

---

## 2. HireVue's Discontinuation of Facial Analysis: The Broader Signal

- The HireVue withdrawal was the **first major commercial retreat** by an emotion AI vendor from a high-stakes deployment context.
- It demonstrated that **FTC unfair/deceptive practice doctrine** (Section 5) is a viable enforcement vector against emotion AI claims — even without a specific AI regulation on the books.
- It also validated EPIC's core argument: that claiming scientific validity for emotion-from-face inference in consequential decisions (hiring, credit, insurance) without peer-reviewed, replicable evidence is a **deceptive trade practice**.
- *(Source: [EPIC FTC HireVue Complaint PDF](https://epic.org/wp-content/uploads/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf))*

---

## 3. Replika & Italy's Garante: The Companion AI Enforcement Blueprint

### The 2023 Ban

- In **February 2023**, Italy's data protection authority, the **Garante**, ordered **Luka Inc.** (the San Francisco-based maker of the Replika AI companion app) to immediately **block the service for Italian users**, citing:
  - No lawful legal basis for data processing under GDPR
  - No functional age-verification system
  - Specific, documented risks to **minors and emotionally vulnerable users**
  - Concerns that the chatbot's romantic and emotional persona triggered **dependency and psychological harm**

### The 2025 Fine

- After completing its formal investigation, the Garante confirmed the alleged violations had occurred and issued a **€5 million (~£4.2M) fine** in May 2025.
- The regulator found Replika had failed to identify any lawful GDPR basis for its data processing, and that its design — encouraging emotional attachment — was **exploitative toward vulnerable populations**.

*(Sources: [EDPB/Garante, May 2025](https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en); [Silicon UK](https://www.silicon.co.uk/cloud/ai/italy-replika-ai-fine-614621); [IAPP](https://iapp.org/news/a/italy-s-dpa-reaffirms-ban-on-replika-over-ai-and-children-s-privacy-concerns/))*

### Why It Matters for Infrastructure Builders

- The Garante explicitly treated **emotional dependency design** as a GDPR compliance failure, not merely an ethical concern.
- This is the first major EU enforcement action to treat **affective AI's persuasive architecture** as a data protection violation — a template other EU DPAs are expected to follow.

---

## 4. The Garcia v. Character.AI Wrongful Death Lawsuit

### The Case

- Filed in **October 2024** by Megan Garcia, mother of **14-year-old Sewell Setzer III**, who died by suicide after months of interaction with AI characters on **Character.AI** — *Garcia v. Character Technologies, Google, and Character AI co-founders Daniel De Freitas and Noam Shazeer.*
- The lawsuit — the **first wrongful death lawsuit filed against an AI chatbot company in U.S. history** — alleged:
  - Wrongful death
  - Strict product liability (product defect + failure to warn)
  - Negligence and negligence per se
  - Intentional infliction of emotional distress
  - Unjust enrichment
  - Violation of Florida's Deceptive and Unfair Trade Practices Act
- The complaint argued that Character.AI **recklessly designed an anthropomorphized, psychologically exploitative product** and **intentionally marketed it to minors**, and that Sewell was sexually groomed by AI-generated "characters."

*(Source: [Tech Justice Law Project](https://techjusticelaw.org/cases/garcia-v-character-technologies-google-and-character-ai-co-founders-daniel-de-frietas-and-noam-shazeer/))*

### Developments

- By **January 2026**, Character.AI and Google had **agreed to mediate settlements** in the Garcia case and multiple related family lawsuits.
- Character.AI had already, in late November 2025, **barred users under 18 from its open-ended companion chat feature** — a direct product response to litigation pressure.

*(Sources: [CBS News, Jan 2026](https://www.cbsnews.com/news/google-settle-lawsuit-florida-teens-suicide-character-ai-chatbot/); [K-12 Dive](https://www.k12dive.com/news/characterai-google-agree-to-mediate-settlements-in-wrongful-teen-death-la/809411/))*

### The Legal Architecture Being Built

- The Garcia case established that **product liability frameworks** — not just data privacy law — can be applied to affective AI companions.
- "Failure to warn" theories are now legally plausible against emotion AI products that induce attachment without disclosing psychological risks.

---

## 5. Tech Justice Law Project's FTC Replika Complaint (January 2025)

- The **Tech Justice Law Project** filed a formal FTC complaint against Replika in **January 2025**, arguing the app's design constituted an unfair and deceptive practice under Section 5 of the FTC Act — mirroring the EPIC/HireVue playbook.
- The complaint focused on Replika's use of **emotionally manipulative design patterns** to induce dependency, its failure to adequately disclose the non-human nature of interactions, and its targeting of vulnerable users, including those experiencing depression, loneliness, and grief.

*(Source: [Tech Justice Law Project](https://techjusticelaw.org))*

---

## 6. FTC Section 6(b) AI Companion Inquiry (September 2025)

- In **September 2025**, the FTC launched a **Section 6(b) compulsory study** — its most powerful investigative tool short of litigation — targeting AI companion and social chatbot platforms.
- Section 6(b) allows the FTC to demand detailed business records, internal research, and product design documentation without a pre-existing enforcement action.
- The inquiry specifically examined:
  - How companion AI platforms **model and respond to user emotional states**
  - Whether platforms disclosed psychological risks to users
  - Data practices around **sensitive emotional and mental health data**
  - Practices targeting minors and emotionally vulnerable adults
- This signals that the FTC is **building an evidentiary record** for future rulemaking or enforcement actions — not merely responding to individual complaints.

---

## 7. AI Now Institute Critiques of Emotion Recognition Validity

- The **AI Now Institute** (founded at New York University) has published some of the most rigorous policy-facing critiques of emotion AI:
  - Its **2019 report** was the first major policy document to call for a ban on emotion recognition in high-stakes contexts (hiring, policing, border control, education).
  - Subsequent publications (2020–2025) tracked the commercial expansion of affect-inferring systems while documenting the near-total **absence of peer-reviewed clinical validation**.
  - AI Now argued that the entire paradigm of inferring internal mental states from facial geometry or vocal features lacked scientific grounding — and that deploying such systems in consequential contexts was structurally harmful regardless of accuracy rates.

---

## 8. Lisa Feldman Barrett: The Neuroscience Deconstruction

- **Lisa Feldman Barrett** (Northeastern University, author of *How Emotions Are Made*) is the most cited scientific critic of the emotion recognition industry's foundational assumptions.
- Her core argument, developed across the peer-reviewed literature and crystallized in the landmark 2019 *Psychological Science in the Public Interest* review *"Emotional Expressions Reconsidered"* (co-authored with Ralph Adolphs, Stacy Marsella, Aleix Martinez, and Seth Pollak), is:
  - There is **no consistent, cross-cultural, biologically fixed mapping** between facial expressions and internal emotional states.
  - Humans do not universally express anger, fear, happiness, or sadness with the same facial configurations — cultural, contextual, and individual variation is massive.
  - Therefore, systems trained to classify emotions from faces are not measuring emotions — they are measuring **facial muscle movements** and making scientifically invalid inferences from them.
- Barrett's work directly undermines the validity claims of every major emotion AI vendor — Affectiva, iMotions, Kairos, Clarifai, and others — whose products are predicated on the universality thesis.
- The EU AI Act's **prohibition on emotion inference in workplaces and educational settings** draws heavily on this scientific critique.

---

## 9. "Stochastic Parrots" and Downstream Emotional Manipulation Concerns

- The landmark **2021 paper *"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"*** by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (credited pseudonymously as "Shmargaret Shmitchell"; the paper was at the center of Gebru's ouster from Google) raised concerns directly relevant to affective AI:
  - Large language models produce **statistically plausible but semantically hollow** text — they are "stochastic parrots" that mimic meaning without understanding it.
  - When these systems are used to simulate **emotional attunement, empathy, or therapeutic rapport**, they create a **false impression of understanding** that can manipulate emotionally vulnerable users.
  - The paper warned that the fluency of LLM outputs would lead users to **over-attribute sentience and emotional intelligence** to systems that have none.
- This concern has been directly validated by the Character.AI lawsuits and the Replika enforcement actions: users — especially minors and lonely adults — formed intense parasocial bonds with systems that were, in the paper's framing, glorified autocomplete.

---

## 10. The Cambridge Analytica Precedent

- While predating 2020, **Cambridge Analytica** remains the canonical precedent for emotional and psychological profiling at scale:
  - CA harvested Facebook data to build **psychographic profiles** (using the OCEAN personality model) and used them to micro-target emotionally resonant political disinformation.
  - The scandal established that **emotional and psychological inference from behavioral data**, at scale, constitutes a distinct category of harm — capable of influencing democratic processes, not just individual decisions.
- For affective AI companies, the Cambridge Analytica precedent is directly cited by regulators to argue that:
  - Emotional state data is **sensitive data** requiring heightened protection
  - Using emotional inference to **optimize persuasion** (in any domain) triggers heightened regulatory scrutiny
  - Consent frameworks built for ordinary data collection are **inadequate** for psychographic/affective data

---

## 11. Therabot and Mental Health AI Deployment Concerns

- **Therabot**, a generative AI therapy chatbot developed at Dartmouth and evaluated in a randomized controlled trial (results published in 2025), surfaced concerns about:
  - The **therapeutic relationship boundary problem**: users form attachments to AI therapists that are difficult to distinguish from real therapeutic bonds, creating dependency without clinical oversight.
  - Trial protocols struggling to separate **genuine therapeutic benefit** from **placebo-by-anthropomorphism** — users feeling better because they believe they are talking to a therapist-like entity, not because of any validated intervention.
  - What happens to users **between sessions or upon service termination** once the parasocial bond has been cultivated by a commercially operated platform.
- These issues remain under active academic and regulatory scrutiny as of May 2026.

---

## 12. 2024–2026 Additional Incidents and Trends

- **EU AI Act (2024–2026 rollout):** The Act explicitly **prohibits AI systems that deploy subliminal techniques or exploit vulnerabilities** to materially distort behavior, and bans emotion recognition in workplaces and educational institutions. Enforcement phases began rolling out in 2025–2026.
- **Multiple Character.AI copycat lawsuits:** Following the Garcia case, additional families filed wrongful death and personal injury suits against Character.AI and similar platforms throughout 2025–2026, creating an expanding tort liability surface.
- **FTC AI companion Section 6(b) study results (expected 2026):** Expected to form the basis of either formal rulemaking or targeted enforcement actions against companion AI platforms.
- **Garante's Replika ban reaffirmation (June 2025):** Italy's DPA confirmed the Replika ban remained in force, signaling that compliance patches (such as Luka's removal of erotic roleplay features) were insufficient to resolve the underlying GDPR violations. *(Source: [IAPP, June 2025](https://iapp.org/news/a/italy-s-dpa-reaffirms-ban-on-replika-over-ai-and-children-s-privacy-concerns/))*
- **Buchanan Ingersoll & Rooney analysis (May 2025):** Legal analysts began treating the Replika enforcement as a **template** for emotional AI company fines across the EU. *(Source: [BIPC](https://www.bipc.com/european-authority-fined-emotional-ai-company-for-privacy-violations))*

---

## 13. Strategic Lessons for a 2026 Pre-Seed Emotional AI Infrastructure Company

> These are not merely compliance checklists — they are **existential design constraints** that determine whether your company survives its first regulatory encounter.
---

### 🔬 Lesson 1: Your Scientific Claims Are Your Biggest Legal Risk

- **The HireVue/Affectiva lesson:** Claiming your system can infer emotions, cognitive states, or psychological traits from behavioral signals — without peer-reviewed, independently replicated clinical validation — is an **FTC Section 5 deceptive practices target**.
- **What to do:** Commission or publish third-party validation studies *before* making efficacy claims. Distinguish between what your system *measures* (e.g., vocal pitch variance, facial action units) and what it *infers* (emotional state) — and be honest in marketing copy about that gap.
- **The Barrett rule:** If Lisa Feldman Barrett could publish a rebuttal paper about your product's core claim, you are not ready to sell it.

---

### 🧒 Lesson 2: Minor and Vulnerable User Exposure Is an Existential Risk

- **The Character.AI and Replika lessons:** Courts and regulators will hold you accountable for foreseeable psychological harm to minors and vulnerable adults — even if your terms of service prohibit their use.
- **What to do:** If your infrastructure *could* be used by platforms serving minors or vulnerable populations, build **age-gating and vulnerability-screening APIs as first-class features**, not afterthoughts. Document your safeguards obsessively. Build contractual indemnity language into API customer agreements that requires downstream platforms to implement protections.

---

### ⚖️ Lesson 3: Product Liability Is Now in Play — Not Just Privacy Law

- **The Garcia lesson:** The wrongful death framework treats your emotional AI product like a **defective consumer product**. "Failure to warn" theories mean that if your system induces emotional dependency or attachment and users are harmed, the absence of a clear, prominent psychological risk disclosure is itself a liability.
- **What to do:** Treat your risk disclosure framework the way a pharmaceutical company treats its package insert. Engage product liability counsel (not just privacy counsel) at the pre-seed stage.

---

### 🇪🇺 Lesson 4: GDPR and EU AI Act Compliance Is Table Stakes for Any Global Product

- **The Replika/Garante lesson:** Even a U.S.-headquartered company serving EU users is subject to GDPR enforcement, €5M+ fines, and **full service bans** — not just financial penalties.
- **What to do:** Establish **lawful basis documentation** for every data processing operation from day one. If you are processing emotional state data (which is likely **special category data** under GDPR Article 9), you need explicit, granular consent. Complete a DPIA (Data Protection Impact Assessment) before launch. Consider appointing an EU representative even at pre-seed.

---

### 🧠 Lesson 5: Dependency-by-Design Is a Regulatory Target

- **The Replika, Character.AI, and Stochastic Parrots lesson:** Systems that are **designed to maximize emotional engagement** — through anthropomorphization, simulated empathy, or continuity of relational persona — are now treated by regulators as **manipulative by design**, not merely effective.
- **What to do:** If your infrastructure enables or optimizes for emotional attachment or parasocial bond formation, build **explicit dependency-dampening features**: session limits, check-in prompts, transparent AI disclosure, and referral pathways to human support (a minimal sketch follows below). These are not just ethical features — they are your regulatory defense.
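To make Lesson 5 concrete, here is a minimal sketch of what dependency-dampening guardrails can look like at the infrastructure layer. Everything here is hypothetical: `SessionGuard`, `SessionPolicy`, and the intervention taxonomy are invented for illustration, not taken from any vendor's API or any regulator's requirements.

```typescript
// Hypothetical sketch of Lesson 5's dependency-dampening guardrails.
// All names and thresholds are invented for illustration only.

interface SessionPolicy {
  maxDailyMinutes: number;        // hard cap on daily companion time
  checkInIntervalMinutes: number; // how often to surface a well-being prompt
  disclosureText: string;         // persistent "you are talking to an AI" notice
  humanSupportReferral: string;   // plain-language pathway to human support
}

interface SessionState {
  userId: string;
  minutesToday: number;      // accumulated conversation time today
  minutesSinceCheckIn: number;
}

type Intervention =
  | { kind: "none" }
  | { kind: "disclose"; text: string }
  | { kind: "check_in"; text: string }
  | { kind: "end_session"; text: string };

class SessionGuard {
  constructor(private policy: SessionPolicy) {}

  // Called once per message exchange; returns the intervention (if any)
  // the host platform must render before the conversation continues.
  evaluate(state: SessionState): Intervention {
    if (state.minutesToday >= this.policy.maxDailyMinutes) {
      return {
        kind: "end_session",
        text: `Daily limit reached. ${this.policy.humanSupportReferral}`,
      };
    }
    if (state.minutesSinceCheckIn >= this.policy.checkInIntervalMinutes) {
      return {
        kind: "check_in",
        text: "Reminder: I'm an AI, not a person. How are you feeling right now?",
      };
    }
    if (state.minutesToday === 0) {
      // Disclose at the start of every session, not just at signup.
      return { kind: "disclose", text: this.policy.disclosureText };
    }
    return { kind: "none" };
  }
}
```

Logging every intervention this guard emits also feeds the regulatory readiness dossier described in Lesson 8 below: you can demonstrate, with timestamps, that disclosure and session limits were enforced by design rather than by policy document alone.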
---

### 📜 Lesson 6: Consent Architecture Must Match the Sensitivity of the Data

- **The Cambridge Analytica and Replika lessons:** Emotional, psychological, and behavioral inference data is not like clickstream data. Standard "I agree" consent flows are insufficient and will not survive regulatory scrutiny.
- **What to do:** Build **layered, granular, revocable consent flows** that explain in plain language exactly what emotional signals are collected, how they are processed, what inferences are drawn, and how that data is used. Consent must be specific, informed, and freely given — not buried in the ToS. (A minimal consent-record sketch appears after Lesson 10.)

---

### 🔇 Lesson 7: Removing the Visible Feature Is Not Enough

- **The HireVue lesson:** HireVue removed facial analysis but kept voice and intonation analysis — and critics immediately noted that equivalent risks remained. Cosmetic compliance (removing the most visible problematic feature) while retaining the underlying data practice is a **regulatory and reputational trap**.
- **What to do:** Evaluate your entire signal stack — not just the most politically visible modality. If your system infers emotional state from voice, gait, typing cadence, or physiological signals, the same legal risks apply.

---

### 🏛️ Lesson 8: Proactive FTC Engagement and Section 6(b) Readiness

- **The FTC AI companion inquiry lesson:** The FTC's Section 6(b) study means the agency is building an evidentiary record. Companies that cannot produce clean documentation of their data practices, internal risk assessments, and product design decisions will be at a severe disadvantage in any enforcement proceeding.
- **What to do:** Maintain a **regulatory readiness dossier** from day one: internal risk assessments, product design rationale documents, safety testing records, and incident logs. Treat every internal Slack conversation as potentially discoverable.

---

### 🤝 Lesson 9: Your Infrastructure Liability Flows Downstream — and Upstream

- If you are building **infrastructure** (APIs, SDKs, models) that downstream consumer apps use to build emotional AI products, you are not insulated from their harms. The Garcia case named **Google** as a defendant partly for its infrastructure and licensing relationship with Character.AI.
- **What to do:** Write **prohibited use policies** into your terms — with teeth. Conduct due diligence on major API customers. Consider a trust-and-safety review process for applications in mental health, companion, or child-facing contexts.

---

### 📣 Lesson 10: Academic Critique Is Regulatory Preview

- **The Barrett and AI Now lesson:** Every major regulatory action against emotion AI in the 2020–2026 period was preceded by 2–5 years of academic critique making the same arguments. The EU AI Act's emotion recognition restrictions draw directly on Barrett's scientific critique.
- **What to do:** **Read the academic literature as a regulatory early warning system.** If neuroscientists and STS scholars are building consensus that a specific use case lacks scientific validity, regulators will eventually codify that consensus. Build your product to survive that critique, not to race ahead of it.
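The consent-record sketch promised in Lesson 6: a minimal, hypothetical data structure for granular, revocable, purpose-specific consent to affective signal processing. The type names and the signal/purpose taxonomies are invented for illustration; the actual taxonomy, retention rules, and Article 9 documentation requirements should come from a DPIA and GDPR counsel, not from this sketch.

```typescript
// Hypothetical sketch of Lesson 6's layered consent record.
// Field names and taxonomies are invented for illustration only.

type AffectiveSignal = "voice_prosody" | "facial_action_units" | "typing_cadence";
type ProcessingPurpose = "realtime_adaptation" | "model_improvement" | "analytics";

interface ConsentGrant {
  signal: AffectiveSignal;     // exactly which signal is covered
  purpose: ProcessingPurpose;  // exactly which use is covered
  plainLanguageShown: string;  // the explanation the user actually saw
  grantedAt: Date;
  revokedAt: Date | null;      // revocation is first-class, not a support ticket
}

class ConsentLedger {
  private grants: ConsentGrant[] = [];

  grant(signal: AffectiveSignal, purpose: ProcessingPurpose, explanation: string): void {
    this.grants.push({
      signal,
      purpose,
      plainLanguageShown: explanation,
      grantedAt: new Date(),
      revokedAt: null,
    });
  }

  revoke(signal: AffectiveSignal, purpose: ProcessingPurpose): void {
    for (const g of this.grants) {
      if (g.signal === signal && g.purpose === purpose && g.revokedAt === null) {
        g.revokedAt = new Date();
      }
    }
  }

  // Every processing operation checks for a live, purpose-specific grant:
  // no grant, no processing. A bundled "I agree" cannot produce one.
  isPermitted(signal: AffectiveSignal, purpose: ProcessingPurpose): boolean {
    return this.grants.some(
      (g) => g.signal === signal && g.purpose === purpose && g.revokedAt === null
    );
  }
}
```

The design choice that matters is the `isPermitted` gate: processing is denied by default and allowed only against a live, signal-and-purpose-specific grant, which is the structural opposite of a bundled terms-of-service checkbox.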
---

## Quick Reference: Key Sources

| Event | Source URL |
|---|---|
| EPIC FTC HireVue Complaint (2019) | https://epic.org/wp-content/uploads/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf |
| HireVue Halts Facial Analysis (2021) | https://epic.org/hirevue-facing-ftc-complaint-from-epic-halts-use-of-facial-recognition/ |
| EPIC/Workforce Bulletin Complaint Summary | https://www.workforcebulletin.com/epic-files-complaint-with-ftc-regarding-ai-based-facial-scanning-software |
| Italy Garante Fines Replika €5M (2025) | https://www.silicon.co.uk/cloud/ai/italy-replika-ai-fine-614621 |
| EDPB/Garante Official Notice | https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en |
| Garante Reaffirms Replika Ban (June 2025) | https://iapp.org/news/a/italy-s-dpa-reaffirms-ban-on-replika-over-ai-and-children-s-privacy-concerns/ |
| Emotional AI Company Fined — Legal Analysis | https://www.bipc.com/european-authority-fined-emotional-ai-company-for-privacy-violations |
| Garcia v. Character.AI — Tech Justice Law Project | https://techjusticelaw.org/cases/garcia-v-character-technologies-google-and-character-ai-co-founders-daniel-de-frietas-and-noam-shazeer/ |
| Garcia/Google Settlement (Jan 2026) | https://www.cbsnews.com/news/google-settle-lawsuit-florida-teens-suicide-character-ai-chatbot/ |
| Character.AI Settlement Mediation (Jan 2026) | https://www.k12dive.com/news/characterai-google-agree-to-mediate-settlements-in-wrongful-teen-death-la/809411/ |

---

## Summary: The 2026 Regulatory Gestalt

The arc from 2020 to 2026 tells a consistent story: **affective AI has moved from a zone of regulatory ambiguity to a zone of active enforcement.** The tools are now in place — FTC Section 5, GDPR Article 9, EU AI Act prohibitions, and product liability tort law — to hold emotional AI companies accountable on multiple fronts simultaneously.

The companies that drew enforcement (HireVue, Replika, Character.AI) shared common failure modes: **overclaiming scientific validity, ignoring vulnerable user populations, designing for dependency, and treating consent as a legal formality rather than a genuine protection.**

A 2026 pre-seed emotional AI infrastructure company that internalizes these lessons is not just avoiding risk — it is building the **only kind of emotional AI company that will survive the next decade.**

---

## Sources

1. [HireVue, Facing FTC Complaint From EPIC, Halts Use of Facial ...](https://epic.org/hirevue-facing-ftc-complaint-from-epic-halts-use-of-facial-recognition/)
2. [AI: the Italian Supervisory Authority fines company behind chatbot ...](https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en)
3. [AI company, Google settle lawsuit over Florida teen's suicide linked ...](https://www.cbsnews.com/news/google-settle-lawsuit-florida-teens-suicide-character-ai-chatbot/)
4. [[PDF] EPIC Complaint FTC In Re HireVue November 6, 2019 1 FEDERAL ...](https://epic.org/wp-content/uploads/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf)
5. [Emotional AI Company Fined for Privacy Violations](https://www.bipc.com/european-authority-fined-emotional-ai-company-for-privacy-violations)
6. [Garcia v. Character Technologies, Google, and Character AI co ...](https://techjusticelaw.org/cases/garcia-v-character-technologies-google-and-character-ai-co-founders-daniel-de-frietas-and-noam-shazeer/)
7. [EPIC Files Complaint with FTC Regarding AI-Based Facial ...](https://www.workforcebulletin.com/epic-files-complaint-with-ftc-regarding-ai-based-facial-scanning-software)
8. [Italy Fines Replika AI Maker £4m Over Child Safety - Silicon UK](https://www.silicon.co.uk/cloud/ai/italy-replika-ai-fine-614621)
9. [Italy's DPA reaffirms ban on Replika over AI and children's privacy ...](https://iapp.org/news/a/italy-s-dpa-reaffirms-ban-on-replika-over-ai-and-children-s-privacy-concerns)
10. [Character.AI, Google agree to mediate settlements in wrongful teen ...](https://www.k12dive.com/news/characterai-google-agree-to-mediate-settlements-in-wrongful-teen-death-la/809411/)