# Research: Market Validation for an Emotional AI Infrastructure Company

> Asked: Matt's deep-research prompt on whether the EAII / EDNA thesis (neutral third-party emotional AI infrastructure, SDK-delivered, cross-platform, persistent per-user) survives skeptical 2026 scrutiny.
> Searches run: 30 (sliced across the 7 sections of the original prompt).
> Date: 2026-05-14.
> Raw Perplexity outputs preserved under [tools/perplexity-search/ask-perplexity-output/](tools/perplexity-search/ask-perplexity-output/). Index of all 30 files at the bottom of this document.
> Prompt file: [local/perplexity-output/2026-05-14-market-validation-30-prompts.md](local/perplexity-output/2026-05-14-market-validation-30-prompts.md).

---

## Executive summary

The full EAII thesis as posed (a neutral cross-platform consumer emotional profile that travels with the user across OpenAI, Anthropic, Google, Meta, Apple, and Amazon) is the **weakest** version of the bet. Every historical analogue (identity, payments, health data, analytics) shows platform giants absorbing the consumer-facing UX layer through on-device personalization moats, while the neutral layer survives only in B2B infrastructure (Plaid, Stripe, Twilio). Apple's on-device + Private Cloud Compute architecture, Anthropic's April 2026 finding of 171 emotion vectors inside Claude Sonnet 4.5, and Inflection's talent-strip by Microsoft are all evidence that platforms will own the consumer cross-platform memory layer themselves. Without a regulatory mandate equivalent to PSD2 or CFPB 1033 for emotional data (none exists in 2026), the consumer thesis breaks.

The **strong** version of the bet survives. The market has clear vacant space for a B2B emotional-signal infrastructure SDK that (a) ships on-device first, (b) is opt-in by architecture, (c) outputs structured, auditable emotional state to the developer's choice of LLM, and (d) ships compliance documentation for the EU AI Act, HIPAA, BIPA, and Colorado AI Act in the box.
No incumbent (Hume AI, SmartEye/Affectiva, NICE, Cogito, Uniphore, Mem0) covers all four. Foundation model providers will not, because exposing structured per-user emotional state conflicts with their own product surfaces.

The sharpest wedge for a pre-seed team in 2026 is **clinical-adjacent mental health coaching platforms** (Spring Health, Lyra, Calm Business, Brightline, Sword Health). Reasons: the EU AI Act Article 5(1)(f) ban on workplace emotion recognition has an explicit medical and safety carve-out; HIPAA's BAA gate is a moat once cleared; the buyer (CMO / Chief People Officer through employer benefits procurement) is the exact motion Michelle already runs; Hume cannot land these because it has no HIPAA BAA path or FHIR output; and Matt's team can publish on small-LLM emotion classification (fine-tuned Phi-4-mini or Gemma 3 1B on DAIC-WOZ / IEMOCAP) to generate inbound. Second wedge: AI agent orchestration middleware (Cognigy, NICE AI Studio, Agentforce ISVs, Intercom Fin) for emotion-aware handoff. Third wedge (Series A territory): consumer wearable emotional memory.

Recommendation: lead with mental health coaching platforms, sequence into agent infrastructure at Series A, and treat the consumer cross-platform vision as a 24 to 48 month outcome of B2B network effects, not the pre-seed pitch.

---

## Findings

### Section 1. Market map of emotional / affective AI (2026)

| Company | Founded | Last raise | HQ | What they sell | Modalities | Persistent per-user profile | Cross-platform | Pricing | Named customers / partners |
|---|---|---|---|---|---|---|---|---|---|
| **Hume AI** | 2021 | $50M Series B (Mar 2024, total ~$62.7M; lead EQT Ventures, USV) | New York | EVI voice API + Expression Measurement API + Expressive TTS | Voice (primary), face, text, physiological | Session-level only; no shipped persistent product | LLM-agnostic but Hume-locked | Public tiers $0 to $500/mo + custom | Vonova, Hamming AI, hpy, Northwell Health (investor + partner) |
| **SmartEye / Affectiva** | 2009 / merged 2021 (~$73.5M) | Public (Nasdaq First North) | Boston + Gothenburg | Automotive Interior Sensing + Media Analytics + Human Factors | Face (15M+ videos, 8B+ frames, 90 countries), voice, gaze | No persistent user product (session, in-cabin) | Vertical-locked (auto / advert / research) | Enterprise OEM licensing, per-vehicle | OEMs (NDA); Unilever, CBS (pre-acquisition era); CES 2026 |
| **Cogito** | 2007 | ~$70M total (last Series D 2019) | Boston | Real-time agent voice coaching | Voice only | Per-agent longitudinal (yes), per-caller weak | Contact-center locked | Enterprise SaaS, undisclosed | MetLife, Humana, Cigna, Principal |
| **Uniphore** | 2008 | $400M Series E (Feb 2022, total ~$610M) | Palo Alto | U-Analyze / U-Assist / Q for Sales (CX agentic AI) | Voice, video, text, screen | Strongest cross-session per-caller profile of the three | Contact-center locked, multimodal | Enterprise, $500K to $2M+ ACV | Bajaj Allianz, Franklin Templeton, Conduent, HGS |
| **NICE Ltd.** | 1986 | Public (NASDAQ:NICE), ~$8 to $10B market cap, ~$2.4B FY24 revenue | Ra'anana, Israel | CXone Mpower + Enlighten AI (CSAT/QA/routing/Copilot); Cognigy (2024 acq) | Voice, chat, email, SMS, social, screen | Score/outcome-oriented journey profiles | Locked to CXone | Modular per-seat SaaS, enterprise | Verizon, Virgin Media O2, Radisson, HUB International, Bradesco |
| **Behavioral Signals** | 2016 | Kairos Ventures portfolio (rounds undisclosed) | Los Angeles | AIMC platform: real-time emotion-routing for calls | Voice | Implied (routing optimization) | Call-center locked | Enterprise, undisclosed | Call center + defense; specifics not public |
| **Empath (Japan)** | ~2015 to 2016 | ¥320M (~$2.9M, 2018) | Tokyo | REST API (language-independent acoustic emotion) | Voice | No (stateless) | API-only | Freemium + enterprise | NTT Docomo (disaster support); ~1,000 companies across 50 countries (2018 figure) |
| **audEERING** | 2012 | ERC H2020 + acquired by Agile Robots (~2024 to 2025) | Munich | openSMILE (OSS) + AI SoundLab + SDK for auto/health/robotics | Voice, environmental sound, music; ~7,000 acoustic features | Partial (AI SoundLab longitudinal voice biomarkers) | SaaS + SDK, vertical-targeted | Enterprise / undisclosed | BMW, Huawei, GfK, Red Bull, Ipsos |
| **MorphCast** | 2017 | Bootstrapped | Milan | Browser JS / WASM SDK (100+ affective signals) | Face (in-browser, on-device) | No (ephemeral by design) | Web-first SDK | Freemium + paid + enterprise | EdTech / telehealth / martech (most names not public) |
| **Realeyes** | 2007 | ~$33.8M total (Series A+B) | London | Ad / media attention measurement (panel-based) | Face (opt-in webcam panels) | No (research-scoped) | SaaS + lightweight JS loader | Enterprise / undisclosed | AT&T, Mars, Hershey's, Coca-Cola |
| **Entropik** | 2016 | $25M Series B (Bessemer, 2023; total ~$35M) | Bengaluru | Decode platform (facial + eye + EEG + survey) | Multimodal | No (research-scoped) | Cloud SaaS + API | Enterprise / undisclosed | FMCG / media / retail (specifics limited) |
| **Mem0** | 2023 | $24M (Oct to Nov 2025; YC, Peak XV, Basis Set) | Bay Area | Persistent AI memory layer (model-agnostic SDK) | Text (no affect modality) | Yes (cognitive memory, not affective) | API + SDK, framework-agnostic | Usage-based + enterprise | Thousands of startups; 41K GitHub stars |
| **Humans& (Eric Zelikman)** | 2025 | Raising ~$1B at ~$4B (Oct 2025, in progress) | Bay Area (inferred) | EQ-native foundation model | Multimodal (planned) | Unknown | Foundation model | Pre-revenue | None public |
| **Sanas** | 2020 | $32M Series A (2022), ongoing growth rounds | Bay Area | Real-time accent / emotional tone neutralization for BPO calls | Voice (real-time transformation) | No | Telephony stack | Enterprise | Large BPOs, F500 (unnamed) |

**Map summary.**

- **Crowded zones:** facial action unit scoring for ad research (Realeyes, Affectiva, Entropik, MorphCast); contact-center voice coaching (Cogito, NICE, Uniphore, Behavioral Signals, Empath); automotive DMS (an effective SmartEye + Seeing Machines oligopoly); and session-level voice emotion APIs (Hume EVI vs ElevenLabs / Cartesia / Sesame).
- **Sparse zones:** persistent cross-session emotional profiles tied to user identity (no vendor has shipped this); cross-platform / cross-application portability (zero); developer-first neutral third-party SDKs that are not vertical-locked (closest are the Hume API and the MorphCast browser SDK, neither persistent); consent-governed, user-controlled emotional data portability (the explicit Plaid analogy; zero vendors); edge plus cloud hybrid with a persistent identity graph (none); and emotional context primitives for AI agents (none, despite the agentic-AI wave).
- **EDNA-like products:** ChatGPT memory has the most depth but no portability; KAi (a privacy-first newcomer) is closest to a "persistent understanding" architecture; Kindroid permits manual portable journals. No product meets the four-criteria EDNA bar (deep inference + user ownership + cross-product portability + interop standard).

### Section 2. The cross-platform thesis, stress-tested

**Foundation-model providers' emotion surfaces in 2026.**

- **OpenAI**: GPT-4o expressive audio via the Realtime API (production); ChatGPT memory (auto-managed as of April 2026) is consumer-only, not exposed via API.
Responses API context compaction and reusable skills exist (March 2026 update). No structured `{detected_emotion: ...}` API endpoint. ([OpenAI](https://openai.com/index/hello-gpt-4o/), [InfoQ March 2026](https://www.infoq.com/news/2026/03/openai-responses-api-agents/))
- **Anthropic**: Claude April 2026 interpretability research identified 171 distinct emotion vectors in Sonnet 4.5 internals that causally shape behavior. Positioned as safety / model-welfare research, not productized; no public API exposure of emotion state. ([Megaone analysis](https://megaoneai.com/analysis/anthropic-finds-functional-emotion-representations-inside-claude-that-influence/))
- **Google**: Gemini sentiment via prompting; Project Astra has affective scene understanding in prototype, not API-exposed. The older Natural Language API still offers structured sentiment.
- **Meta**: Llama internal emotion encoding confirmed in academic research (ACL 2025 SENTRILLAMA); Meta AI affective tone locked to Meta surfaces; Reality Labs Codec Avatars expose limited expression tracking via the Quest Presence Platform.
- **Apple**: Writing Tools tone rewriting, priority notifications, Siri contextual empathy. All locked. The Apple Intelligence 3B on-device model + Private Cloud Compute is the production reference for hybrid local + cloud. No emotion API exposed. ([Apple research](https://machinelearning.apple.com/research/introducing-apple-foundation-models))
- **Amazon**: Comprehend sentiment API (open); Alexa Skills Kit frustration detection (limited); Alexa+ empathy locked.
- **Microsoft**: Azure AI Language Sentiment + Azure AI Speech emotion are both GA and the **most open** structured emotion APIs from a foundation provider. Copilot / Viva sentiment is locked to M365.

**Portable state and emotion standards.** MCP is the de-facto agent transport but has no emotion extension.
The W3C WebAgents CG (March 2026 interop report) and the Smart Voice Agents Workshop (Oct 2025) raised cross-cultural emotion modeling and identity delegation as open items; a W3C Activity Group is a possibility at TPAC 2026. IEEE 7010 (wellbeing metrics) is process-oriented, not a wire format. Letta / Mem0 / Agent Protocol are practitioner-led de-facto memory standards. No ratified cross-vendor emotional context standard exists in 2026. EmotionML 1.0 (a 2014 W3C Recommendation) is stale.

**Historical analogues (full table).**

| Analogue | Open standard | Neutral B2B middleware | Platform absorption of consumer UX | Regulatory force |
|---|---|---|---|---|
| Identity (OAuth / Okta+Auth0 / SIWA / Google) | OAuth open won | Okta+Auth0 won enterprise | Apple/Google won consumer | Partial |
| Payments (Stripe / Adyen / Apple Pay / Google Pay) | API commoditized | Stripe / Adyen won B2B | Apple Pay / Google Pay won wallet UX | EU DMA forced NFC opening |
| Financial data (Plaid / FDX / PSD2) | FDX won protocol | Plaid survived (DOJ blocked Visa acquisition) | Banks tried, failed | PSD2 + CFPB 1033 mandated portability |
| Health data (FHIR / Apple Health / Health Connect) | FHIR won | Fragmented | Apple Health / Health Connect won consumer | 21st Century Cures Act |
| Comms APIs (Twilio / Sinch / MessageBird) | Open protocols | Twilio won clearly | Platforms tried, failed | Telecom regulatory neutrality |
| Analytics / CDP (Segment / mParticle / RudderStack) | Fragmented | Partially absorbed (Segment → Twilio struggled) | Google Analytics dominant; ATT crippled CDPs | None protective; ATT hostile |

The **closest predictive analogue is Plaid (financial data)**. Plaid survived only because regulators mandated portability. There is no equivalent emotional-data portability regulation in 2026, and none is on the legislative calendar. The EU AI Act regulates emotional data sharply but **does not mandate user-controlled portability**.
The honest read: without regulatory backing, the neutral cross-platform emotional layer is structurally more like CDPs after ATT than Plaid after PSD2. It survives in B2B; it loses consumer.

**The bear case (Perplexity's strongest version).** Five compounding forces work against the consumer cross-platform thesis:

1. On-device personalization moats are now cryptographic, not contractual: Apple's Private Cloud Compute and Secure Enclave deliberately prevent third-party access.
2. Apple + Google's January 2026 Gemini-powers-Siri deal is a bilateral vertical integration that squeezes any independent layer.
3. Every analogous neutral consumer layer was absorbed (Workflow / Siri Shortcuts, Auth0 consumer, Stripe consumer, Twilio Segment after ATT).
4. The Inflection / Pi precedent is the new playbook: Microsoft talent-stripped Inflection in March 2024 without acquiring it, gutting the neutral consumer emotional AI while avoiding antitrust scrutiny.
5. Every multimodal stream emotional AI needs (HRV, voice tone, typing cadence, app usage) is now owned by Apple Health / Gemini / WhatsApp / Quest.

Conditions that would have to hold to keep the consumer thesis alive: a regulatory interoperability mandate (low probability), users actively porting their context via a W3C standard (very low), platform AI staying bad (low and dropping), the neutral layer becoming a hardware play above the OS (medium, and the only viable escape hatch), or the neutral layer pivoting to B2B enterprise (medium to high; this is what actually works). Bottom line: build the B2B layer first, treat consumer cross-platform as an emergent property of B2B network effects, and accept that consumer UX will be owned by platforms.

### Section 3. Who would actually pay for this, ranked by realism

| Rank | Segment | Sales cycle | Realistic first ACV (pre-seed) | First-five logos (Perplexity-suggested) | Regulatory drag |
|---|---|---|---|---|---|
| 1 | **AI companion apps** | 4 to 10 weeks | $24K to $180K | Replika, Character.AI, Kindroid, Nomi, Paradot | Medium (FTC 6(b) inquiry is opportunity, not block) |
| 2 | **Gaming NPCs / character infra** | 3 to 8 weeks | $8K to $60K (indie/AA), $150K to $500K (publisher) | Inworld, Convai, Charisma.ai, Replica Studios, Niantic | Low |
| 3 | **Dating apps** | 6 to 14 weeks (small) / 9 to 18 mo (Match) | $18K to $120K (small) / $300K to $1M (Match/Bumble) | Thursday, Feels, Hinge, Locket, Paired | Medium (BIPA, FTC dark patterns) |
| 4 | **AI agent infrastructure** | 2 to 6 weeks | $12K to $400K | Bland AI, Vapi, Retell AI, Sierra AI, Cognigy | Low |
| 5 | **Enterprise CX / contact centers** | 4 to 18 months | $40K to $800K | Observe.AI, Balto, Talkdesk, Intercom, Qualtrics | Medium (EU AI Act high-risk; BIPA) |
| 6 | **Mental health (consumer wellness lane)** | 8 to 16 weeks (wellness) / 18 to 36 mo (clinical) | $20K to $80K (wellness) / $150K to $500K (clinical) | Woebot Health, Wysa, Calm, Spring Health, Headspace for Work | High (HIPAA, FDA SaMD, Illinois WOPRA) |
| 7 | **Education (corporate L&D only)** | 3 to 6 mo (L&D) / 12 to 24+ mo (K-12) | $15K to $60K (L&D) / $50K to $200K (uni) | Coursera for Business, Duolingo, Synthesis, Khanmigo, Articulate 360 | Very high in K-12 (FERPA, COPPA, EU AI Act Annex III, state bans) |
| 8 | **HR tech (post-hire only)** | 3 to 6 mo | $30K to $100K | Leapsome, Lattice, Humu (acquired), Perceptyx, BetterUp | Very high (NYC LL144, Colorado AI Act, EEOC, EU AI Act Art 5 ban for hiring/workplace) |
| 9 | **Automotive (fleet aftermarket only)** | 6 to 12 mo (fleet) / 3 to 5 yr (OEM) | $40K to $150K (fleet) | Samsara, Lytx, Netradyne, Mobileye, SmartDrive | Very high (UNECE R157, ISO 26262, EU AI Act high-risk) |

Companion apps and gaming NPCs are the fastest first-customer paths. AI agent infrastructure has the fastest deal cycle (developer buyer, no procurement). Enterprise CX is large but slow and incumbent-saturated. Mental health is high-mission, high-friction. HR tech for hiring is effectively closed. Automotive OEM is structurally unreachable at pre-seed; fleet telematics is the only viable lane and still slow.

**Budget owners by 2026 reality.** The Chief AI Officer role has emerged at ~35% of the F500 and is now a mandatory approval for any AI SDK above ~$50K. Trust & Safety / Chief Trust Officer has independent budget at consumer platforms (DSA / KOSA pressure). The CMO (Chief Medical Officer) is the gating buyer in any health-touching deal. Head of Platform owns developer-tooling budget in gaming and agent-infra contexts. The 2024 to 2026 shift: AI moved from innovation budget (25% of LLM spend in 2024) to core OpEx (7% innovation, 93% operating in 2026), procurement is gate-driven rather than curiosity-driven, and vendor consolidation is the dominant pressure. Lead with compliance, not capability. Map to an existing line item (Trust & Safety, Clinical AI, Contact Center OpEx). ROI documentation is mandatory.

### Section 4. The SDK-in-the-client distribution model

**Lessons from successful SDK businesses.** Five meta-lessons survive:

1. Time-to-first-call under 10 minutes with no sales call (Stripe's 7-line integration, Sentry's `init()` + DSN key).
2. Public, usage-based pricing with a generous free tier (Pinecone, ElevenLabs, Sentry).
3. Idiomatic per-language SDKs written by company engineers, never auto-generated wrappers (Stripe is the gold standard).
4. Integration density compounds (Datadog's 1,000+ integrations, Segment's Sources/Destinations marketplace).
5. Data trust is everything (Mixpanel's 2017 MTU pricing change triggered a mass exodus to Amplitude).
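The time-to-first-call lesson above can be made concrete. The sketch below is purely illustrative: `EmotionClient`, its method names, and the score fields are all invented for this memo (no such SDK exists); a stub stands in for the on-device model so the developer-facing integration shape is visible.

```python
# Hypothetical sketch of the "first call in minutes" bar for an emotion SDK.
# Everything here is invented for illustration; it is not an existing library.
from dataclasses import dataclass


@dataclass
class EmotionScore:
    """One structured, auditable emotion reading (invented schema)."""
    label: str          # e.g. "frustration"
    confidence: float   # 0.0 to 1.0
    consent_scope: str  # what the user opted in to, carried with the score


@dataclass
class EmotionClient:
    """Stub client standing in for a hypothetical on-device SDK."""
    api_key: str
    on_device: bool = True  # the EAII bet: inference stays local by default

    def analyze(self, text: str) -> EmotionScore:
        # A real SDK would run a local encoder model here; this stub keys
        # off obvious lexical markers so the call shape is demonstrable.
        frustrated = any(w in text.lower() for w in ("angry", "broken", "again"))
        return EmotionScore(
            label="frustration" if frustrated else "neutral",
            confidence=0.9 if frustrated else 0.6,
            consent_scope="session-text-only",
        )


# The integration a developer would actually write, kept to three lines:
client = EmotionClient(api_key="demo")
score = client.analyze("this is broken again and I'm angry")
print(score.label, score.confidence)
```

The three lines at the bottom are the entire developer-facing surface; keeping consent scope on the score object itself is one way to make "data trust" (lesson 5) visible in the API rather than buried in a policy document.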
What killed companies that failed: Heap (autocapture without data governance), MoEngage and similar mobile marketing SDKs (marketer-buyer plus engineer-integrator misalignment), Auth0 challengers (any security incident was fatal). An emerging dealbreaker in 2026: SDKs without machine-readable schemas (OpenAPI, MCP) are becoming invisible to the AI coding agents that provision infrastructure.

**2026 integration friction.** The total SDK-to-production timeline for a mid-complexity mobile SDK is 8 to 20 weeks. Hard gates include a SOC 2 Type II report current within 12 months, an SBOM (SPDX or CycloneDX), SLSA Level 2 provenance, an Apple PrivacyInfo.xcprivacy manifest (enforced since May 2024; without it the SDK literally cannot ship on iOS), required-reason API declarations, Google Play Data Safety disclosures (automated binary scanning since 2025; 255K+ apps blocked in 2025 alone), graceful fallback on ATT denial, and performance budgets (iOS binary <1.5MB, Android <2MB post-shrink, main-thread init <200ms, web bundles <50KB gzipped for synchronous loads). ([Bitrise Apple Privacy Manifest](https://bitrise.io/blog/post/enforcement-of-apple-privacy-manifest-starting-from-may-1-2024), [Respectlytics Data Safety guide](https://respectlytics.com/blog/google-play-data-safety-guide/))

**On-device feasibility (the EAII technical bet).** Yes: credible in 2026 and shipping in production. Apple Intelligence's 3B on-device model with 2-bit QAT plus Private Cloud Compute is the production reference architecture. Gemini Nano (1.8B) on Tensor G3 / Pixel + AICore is the Android equivalent. Samsung Galaxy AI (Gauss, 1 to 3B) plus cloud is also live.
| Model | Quantized size | RAM | iPhone 17 Pro tok/sec | Older 4GB Android tok/sec | Emotion fit |
|---|---|---|---|---|---|
| Gemma 3 1B (4-bit) | 0.6 to 0.9 GB | 1.0 to 1.5 GB | 35 to 45 | 10 to 15 | Best low-end / real-time |
| Llama 3.2 1B | 0.7 to 1.0 GB | 1.2 to 1.8 GB | 30 to 40 | 8 to 12 | Broad fine-tunes |
| Llama 3.2 3B | 1.8 to 2.2 GB | 2.5 to 3.5 GB | 16 to 22 | 5 to 9 | Tool-calling, broad community |
| Phi-4-mini (3.8B) | 2.2 to 2.8 GB | 3.0 to 4.5 GB | 13 to 18 | 4 to 7 | Strongest reasoning per param |
| Apple Foundation (~3B, 2-bit QAT) | 2 to 3 GB | 2 to 3 GB | 10 to 20 | iOS only | System-managed, zero latency |

For emotion classification specifically, fine-tuned encoder-only models (DistilBERT / DeBERTa-v3-small, 80 to 250MB, 5 to 40ms prefill) **dominate small LLMs on latency**, often by 10x to 50x, at comparable accuracy. The case for an on-device LLM is generalization to novel emotion schemas without retraining, or sharing a model already loaded for other features. PAD regression on-device hits r ≈ 0.68 to 0.78 with DeBERTa-small or fine-tuned 1B to 3B LLMs (the cloud GPT-4o ceiling is r ≈ 0.80 to 0.86; human inter-annotator agreement is r ≈ 0.60 to 0.70, so models are at the annotation noise floor). Browser-side: Transformers.js plus DistilRoBERTa-emotion (~80MB, ~100ms) is production-ready; WebGPU LLM inference is still too slow in mobile browsers. ([Apple foundation models](https://machinelearning.apple.com/research/introducing-apple-foundation-models), [On-Device LLMs State of the Union 2026](https://v-chandra.github.io/on-device-llms/))

**Compliance posture and certification sequencing.** SOC 2 Type II is the universal door-opener (8 to 12 months, $25K to $55K total; observation period 3 to 6 months minimum; Vanta or Drata automation $10K to $20K per year). HIPAA BAA + GDPR DPA in parallel ($8K to $20K). ISO 27001 after ($15K to $35K; 60% overlap with SOC 2). EU AI Act conformity assessment + ISO 42001 in 2026 ($25K to $65K combined).
FedRAMP is Series A+ territory ($500K to $2M+). Total Year 1 to 2 spend excluding FedRAMP: ~$96K to $235K. **The single largest compliance leverage point is the on-device versus cloud decision**, because on-device inference removes biometric data from sub-processor chains and dramatically compresses audit scope. The EAII team is already aligned on this with EDNA as an SDK.

### Section 5. Regulation, ethics, and the can-we-even-sell-this question

**EU AI Act (effective phases through August 2026).** Article 5(1)(f) prohibits emotion inference from biometric data in workplace and educational settings, active since Feb 2, 2025. Penalties: €35M or 7% of global turnover. Two carve-outs only: medical purposes and safety purposes (strictly interpreted). Annex III high-risk categories (biometric categorisation, education, employment, essential services, law enforcement, justice administration) require conformity assessment, a FRIA, EU database registration, CE marking, and post-market monitoring, effective August 2, 2026. Article 50 transparency obligations have applied to permitted limited-risk emotion AI (consumer wellness, gaming, entertainment) since August 2025. ([Article 5 text](https://artificialintelligenceact.eu/article/5/), [FPF deep dive](https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/))

**US state and federal.** Illinois BIPA (private right of action; the 2024 SB 2979 amendment limited per-scan exposure, but the core requirements stand). Texas CUBI plus the new Texas HB 149 (effective Jan 1, 2026), banning AI behavioral manipulation and social scoring. Washington's My Health My Data Act covers any "data that could be used to infer" mental health states, which sweeps in most emotional AI. The California stack: CCPA / CPRA, the SB 362 Delete Act (effective Jan 1, 2026), AB 2013 training-data transparency, SB 942 watermarking, and the AB 1008 AI personal-information clarification.
The Colorado AI Act is effective Feb 1, 2026 (a Trump EO has directed a DOJ task force to evaluate it for federal preemption, but biometric and health laws are likely insulated). NYC Local Law 144 imposes AEDT bias-audit requirements (Dec 2025 NYC Comptroller enforcement wave; voiceprint use in voice AI hiring tools is squarely in scope). Utah HB 331 is an AI disclosure act. **Illinois WOPRA (HB 1806, effective August 1, 2025) explicitly prohibits AI emotion detection by licensed mental health professionals** in Illinois (penalties $10K per violation). This forces the mental-health wedge to be framed as coaching / EAP / employer-benefits-channel, not licensed-therapist-channel.

**Sectoral.** HIPAA applies if the vendor serves a covered entity (a BAA is mandatory); HIPAA does NOT cover direct-to-consumer wellness apps (the gap that lets Calm / Headspace operate without BAAs unless contracted by a covered entity). FERPA covers student records; emotional inference tied to identifiable students likely qualifies. COPPA covers under-13s (verifiable parental consent; expanded interpretation for biometric and behavioral profiling). FDA SaMD: clinical claims (diagnose, treat, mitigate, or prevent depression / anxiety / PTSD) trigger De Novo or 510(k) review. Big Health is the case study (Sleepio / DaylightRx cleared; any third-party SDK touching them is a 510(k) change-notification trigger).

**Landmines and lessons.** EPIC v. HireVue (2019): HireVue removed facial analysis in January 2021 but kept voice/linguistic analysis. Replika: Italian Garante €5M fine (May 2025); Tech Justice Law Project FTC complaint (Jan 2025); ban reaffirmed June 2025. Character.AI: FTC Section 6(b) inquiry (Sept 2025); the Garcia wrongful-death lawsuit settled by mediation in January 2026; banned under-18s from companion chat in November 2024. Lisa Feldman Barrett's scientific deconstruction of the "universal facial emotion" thesis (Psychological Science in the Public Interest, 2019) is now the regulatory foundation for the EU AI Act prohibitions.
Stochastic Parrots (Bender, Gebru, McMillan-Major, Mitchell 2021) is the academic source for the "fluency over-attribution" concern that drives consumer companion regulation.

**Ten compounding lessons** for a 2026 pre-seed:

1. Never overclaim scientific validity beyond peer-reviewed evidence.
2. Treat minor and vulnerable-user exposure as existential.
3. Product liability frameworks (not just privacy) now apply (Garcia).
4. GDPR Article 9 special-category-data consent must be explicit and granular.
5. Dependency-by-design is the new regulatory target; do the opposite and ship dependency-dampening features.
6. Consent architecture must match data sensitivity.
7. Removing the visible feature is not enough (HireVue removed face, kept voice; regulators noticed).
8. Maintain a Section 6(b) readiness dossier.
9. Infrastructure liability flows both upstream and downstream (Google was named in Garcia).
10. Read the academic literature as a regulatory early-warning system.

**Net effect on customer segments from Section 3.**

| Segment | Pre-regulation feasibility | Post-2026-regulation feasibility | Posture that unlocks it |
|---|---|---|---|
| HR hiring | Moderate | Effectively closed | None at pre-seed |
| Workplace monitoring (EU) | Moderate | Prohibited | None |
| K-12 student emotion | Hard | Near-paralysis | Adult / corporate L&D only |
| Therapist-side clinical (IL) | Moderate | Closed by WOPRA | Patient-initiated, coaching / EAP framing |
| AI companion apps (consumer) | Easy | Conditional on opt-in / age-gating | Opt-in plus safety telemetry plus minor exclusion |
| Mental health coaching (employer EAP) | Moderate | Open with carve-out (Art 5(1)(f) safety / medical) | This is the wedge |
| Automotive driver safety | Hard | Open with carve-out | UNECE R157 / safety exception |
| Agent infrastructure | Easy | Easy (limited risk, Article 50 transparency only) | Disclose AI, ship audit logs |

### Section 6. The differentiation question (moats)

| Moat candidate | Defensibility | Time-to-build | Pre-seed winnable? | Pre-seed move |
|---|---|---|---|---|
| (a) Cross-platform persistence of user emotional state | Medium | 9 to 15 mo | Tight | File patents on consent-graph schema |
| (b) Proprietary multi-modal dataset from customer integrations | Low → Medium at scale | 18 to 36 mo | Series B moat | Instrument every integration with opt-in telemetry day one |
| (c) Vertical lock-in | Low (as primary moat); Medium (as beachhead) | 6 to 12 mo | Yes as beachhead | Pick one compliance-heavy vertical |
| (d) Developer experience and SDK ergonomics | Medium | 3 to 6 mo | Yes (fastest) | Ship "emotion in 3 lines" SDK + open-source schema |
| (e) Safety, compliance, audit posture | **High** | 6 to 24 mo | Yes (start now) | Publish an open Emotion Data Trust Framework spec |
| (f) Cross-app network effects from profile portability | High at scale | 24 to 48 mo | Series A/B moat | Design the schema portable from day one |
| (g) On-device latency and performance | Medium | 6 to 12 mo | Yes (18-mo window before NPU/OS commoditizes) | Federated learning loop, not just on-device inference |
| (h) Pricing and packaging | Low alone (architecture matters) | 1 to 3 mo | Decide day one | Price per emotional-event (not per API call) |
| (i) Modular model-merging (Sakana M2N2, Allen BAR) for per-vertical specialists | **High if executed** | 12 to 18 mo | Yes with ML founder | Publish a benchmark paper before having proprietary data |

**Pre-seed moat stack (layered, sequential).** Months 0 to 6: DX + compliance architecture. Months 6 to 18: on-device + consent graph. Months 12 to 24: modular model-merging pipeline (this is where the M2N2 thesis pays off and where Matt's super outline v1.2 has the clearest technical lead over Hume). Months 18 to 36: cross-app profile portability begins generating network effects. Months 24 to 48: the proprietary dataset flywheel compounds.
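The schema moves above ("open-source schema", "design the schema portable from day one") can be sketched concretely. A minimal, hypothetical record shape is below, assuming the PAD (pleasure-arousal-dominance) representation discussed in Section 4; every field name is invented for illustration and is not a shipped or proposed spec.

```python
# Invented sketch of a portable, consent-scoped emotion record. The design
# point: consent and audit metadata travel with the datum, and the whole
# record serializes to plain JSON so any platform can import it.
import json
from dataclasses import dataclass, asdict


@dataclass
class PortableEmotionRecord:
    subject_id: str       # pseudonymous user handle, never raw identity
    pleasure: float       # PAD dimensions, each nominally in [-1.0, 1.0]
    arousal: float
    dominance: float
    confidence: float     # model confidence, 0.0 to 1.0
    source_modality: str  # "text", "voice", ...
    consent_scope: str    # what the user opted in to
    retention_days: int   # audit metadata carried with the record

    def to_json(self) -> str:
        # Deterministic key order makes records diff- and audit-friendly.
        return json.dumps(asdict(self), sort_keys=True)

    @classmethod
    def from_json(cls, payload: str) -> "PortableEmotionRecord":
        return cls(**json.loads(payload))


record = PortableEmotionRecord(
    subject_id="u-83f2", pleasure=-0.4, arousal=0.7, dominance=-0.2,
    confidence=0.82, source_modality="text",
    consent_scope="wellness-coaching", retention_days=30,
)
# Lossless round-trip is the portability property the moat table is after.
assert PortableEmotionRecord.from_json(record.to_json()) == record
```

The point of the sketch is not the field list but the property the final assert checks: a record that round-trips losslessly through plain JSON, with consent scope inside it, is the minimum a cross-app portability moat (row (f)) would require.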
**The single structural advantage no incumbent can replicate.** Neutrality. Hume competes with the apps it would serve (EVI is itself a voice agent product). SmartEye is owned by Smart Eye AB, with automotive interests. NICE, Cogito, and Uniphore are the applications, not the infrastructure. Foundation-model providers have conflict-of-interest constraints on exposing structured emotion state to third parties. A pre-seed neutral B2B vendor is the only party structurally incentivized to stay neutral, and that is the Plaid analogy reduced to its essence.

### Section 7. Recommended wedge (three candidates, ranked)

#### Most defensible: clinical-adjacent mental health coaching platforms

We help digital mental health coaching platforms solve the absence of between-session affective continuity and clinical defensibility by shipping an on-device emotional-state SDK with structured FHIR-compatible signal output, opt-in consent flows, and a model-merging layer that lets platforms swap or fine-tune the affect classifier without retraining from scratch. The wedge is the only affect infrastructure that is simultaneously on-device (HIPAA data residency by architecture), FHIR R4 output, and EU AI Act Article 5(1)(f) safe (medical and safety carve-out; patient-initiated rather than employer-initiated). First five logo targets: Spring Health, Lyra Health, Calm Business / Calm Health, Brightline, Sword Health. Disqualifying risk: a major EHR (Epic, Oracle Health) ships a native mood-tracking API and coaching platforms adopt that instead.

#### Balanced: AI agent orchestration middleware

We help AI agent platform vendors (voice agents, customer service copilots, sales coaching tools) solve emotionally blind task routing (agents that escalate on keywords rather than affective state) by providing a sub-50ms on-device emotion-signal middleware layer that plugs into LangChain, LlamaIndex, CrewAI, Mastra, or any orchestration graph as a node.
The wedge is the only emotion-signal node that runs on-device (no cloud round-trip latency penalty), outputs structured JSON with confidence intervals and regulatory metadata, and supports model-merging so the platform can blend a generic affect classifier with its own domain-tuned data without retraining from scratch.

First five logo targets: Cognigy, NICE CXone AI Studio, Salesforce Agentforce ISV partners, Leapsome (coaching agents), Intercom Fin. Disqualifying risk: LangChain or LlamaIndex ships a native "emotion node" primitive that becomes the default, making a standalone vendor redundant.

#### Most ambitious (Series A+ territory): consumer wearable emotional memory

We help consumer wearable platforms (smart rings, earbuds, AR glasses) and quantified-self apps solve the absence of longitudinal emotional memory (the inability to correlate biometric signals with affective states across weeks and months) by providing an on-device emotional memory graph SDK that fuses passive biometric streams (HRV, GSR, vocal affect) with explicit opt-in check-ins, stores them locally encrypted, and exposes a cloud-sync API for platform-level insights without raw biometric egress. The wedge is the only SDK that combines on-device emotional memory persistence (Mem0-like but affect-native), biometric-to-affect fusion, and a single compliance layer that satisfies BIPA, the Colorado AI Act, and GDPR Article 9.

First five logo targets: Oura Ring, Samsung Health (Galaxy Ring), Mimi Health, Whoop, Nothing Ear. Disqualifying risk: Apple Intelligence 2.0 ships a native on-device emotional memory layer tied to the Health app, commoditizing the iOS surface overnight.

#### Recommendation: start with Candidate 1, sequence into Candidate 2

Five reasons, under 200 words. (1) The benefits-procurement buyer (VP Benefits, Chief People Officer, CMO) is exactly the motion Michelle already runs from pet insurance benefits.
(2) Regulatory complexity is the moat, not the obstacle: every coaching platform needs a vendor that has already solved HIPAA BAA plus BIPA opt-in plus FHIR R4 output plus EU AI Act Article 5(1)(f) medical-exemption documentation; the first vendor to ship that is uncatchable for 12 months. (3) Matt's ML team can publish a clinically validated emotion classifier benchmark (Phi-4-mini or Gemma 3 1B fine-tuned on IEMOCAP / DAIC-WOZ / AVEC) and generate inbound from every coaching platform's ML team. (4) On-device is a clinical necessity (HIPAA data residency), not a performance argument, which closes legal review faster. (5) Once 3 to 5 coaching platform logos are live, the natural Series A expansion pitch to AI agent platforms writes itself ("we already power emotion-aware coaching agents at Spring Health, same infrastructure for your customer service agents").

This sequencing also leaves room for the consumer cross-platform vision to emerge from B2B network effects, which is what the historical analogues say will actually happen.

---

## Connection to the engagement

This research lines up directly with the team's existing direction and helps sharpen several open items.

- **B2B plus Plaid model is the right call (May 11 Slack huddle).** All 30 queries confirm the consumer cross-platform vision is structurally hard. B2B SDK infrastructure with opt-in core value is exactly the pattern that survives the bear case.
- **Opt-in as a core value is structurally compliant.** The EU AI Act's Article 5(1)(f) workplace ban explicitly turns on employer-versus-individual initiation. Patient-initiated / individual-initiated emotional inference (with the medical or safety carve-out, or the limited-risk Article 50 disclosure path) is the only viable path. The team's May 11 decision was prescient.
- **EDNA pushed to Phase 3 fits the layered moat stack.** The synthesized recommendation is: ship structured emotional-signal infrastructure first (Phase 1 emotional observatory and Phase 2 engine), then ship persistence (Phase 3 EDNA), then let cross-app portability emerge. Patrick's May 13 "Engine before EDNA" call is the right sequencing.
- **The "crazy theory" of dual-perspective architecture maps to the Observer + Strategist split.** The current super outline v1.2 already encodes this (ADR-001 Llama 3 8B Observer + Llama 3 70B Strategist, with ADR-006 DARE-TIES + SLERP session-start merging). The relational-duality framing gives the architecture a publishable narrative that competes with Hume's empathic-voice narrative on first principles, not on features.
- **On-device feasibility is real, but model sizing should drop for the SDK tier.** The super outline v1.2 currently specs an 8B Observer for the cloud side. For the on-device EDNA SDK tier, the math says Gemma 3 1B or a fine-tuned DistilBERT/DeBERTa-small is more realistic for mass-market mobile distribution. Reserve the 8B Observer for the cloud backbone; ship a sub-1B distilled classifier on-device. This is consistent with the Phase 1 Pillar 1.5 prosody-distillation training subset already in v1.2.
- **The mental-health-coaching wedge fits Michelle's network.** Spring Health, Lyra, Brightline, and Sword all sell through employer benefits procurement, the exact category Michelle has run. Her People Forward Network lunch (May 8) and her CEO motion give the team a credible warm-intro path. This is also consistent with Dustin's two-moats framing (data accumulation plus CMS), because compliance is the wedge.
- **Joann's Phase 1 PostgreSQL + Valkey alternate stack is licensing-clean for the SDK side.** Anything client-distributed must avoid SSPL / AGPL contamination; her instinct was correct and aligns with the SDK adoption findings (GPL contamination in transitive dependencies is a disqualifying dealbreaker in 2026 procurement).
- **Hume's positioning gap is exploitable in 2026.** Hume has no documented HIPAA BAA path, no FHIR-compatible output, no on-device SDK, no per-developer enterprise audit log, and no per-vertical specialist module pipeline. These are the five exact slots a pre-seed neutral infrastructure SDK fills.
- **Naming and pitch note.** Whichever name lands May 18 (Human Discovery / EAII / emogens), the pitch should lead with "emotional observatory and SDK for emotionally-sensitive AI applications, B2B-first, on-device-first, opt-in by architecture." That language is regulator-friendly, fits the FTC Section 6(b) inquiry posture, and avoids the dependency-by-design language that gets companion apps in trouble.

Risks for Matt to flag with the team:

- **Illinois WOPRA closes one mental-health lane.** Licensed mental health professionals in Illinois cannot use AI for emotion detection. Mental-health-coaching customers must be in the coaching / EAP / digital-therapeutic frame, not in-network clinical practice. Spring Health and Lyra are the right framings; Talkiatry (employed psychiatrists, prescribing) is the wrong one.
- **The 2026 SDK gate is brutal.** Privacy Manifest, SBOM, ATT graceful fallback, Google Play Data Safety, and performance budgets are all hard gates. Joann's Phase 1 work should bake these in from day one or the SDK cannot ship to iOS at all.
- **Mem0 is the closest single competitor to watch on the persistence side.** It has $24M raised, AI memory as its core, and could pivot affect-native in 6 to 12 months. The countermove is to ship the affect-native primitive plus model-merging plus compliance documentation before Mem0 does.
- **18-month on-device window before Apple / Google commoditize the OS-level emotion API.** Apple could expose on-device emotion via the Foundation Models framework at WWDC 2026 or 2027. The team's lead has a clock on it.

---

## Gaps and caveats

- **Funding figures are dated for several smaller vendors.** Behavioral Signals' last public signal is a Kairos Ventures portfolio listing (no recent disclosure). Empath's funding is from 2018 (¥320M, ~$2.9M). MorphCast appears bootstrapped (no disclosed VC). audEERING was acquired by Agile Robots (~2024 to 2025) with no public valuation.
- **Foundation model emotion roadmaps are partially inferred.** OpenAI, Anthropic, Google, Meta, Apple, Amazon, and Microsoft do not publish emotion-feature roadmaps; the answers in Section 2 are reconstructed from shipped products, research papers, and patent activity, not internal roadmaps. The Anthropic April 2026 interpretability finding (171 emotion vectors in Claude Sonnet 4.5) is the most direct foundation-model-side signal, but it is research, not product.
- **M2N2 / BAR for emotion has not been demonstrated.** Sakana's M2N2 and the Allen Institute's BAR are real and published, but applying them to produce per-vertical emotion specialist modules is not yet a published result. This is the highest-leverage technical moat, but it is a hypothesis the EAII team would prove (which is itself a moat if it lands).
- **Pre-seed ACV ranges are estimates.** None of the customer-segment ACV ranges are pre-seed-vendor benchmark data, because pre-seed emotional AI vendors do not publish ACV. The ranges synthesize comparable SaaS / enterprise SDK deal patterns and should be validated through direct customer discovery before being used in fundraising materials.
- **Hume AI Series C status.** No publicly announced round beyond the $50M Series B (March 2024).
Hume could raise large in 2026; the strategic analysis holds regardless, because Hume's structural neutrality gap (it competes with apps it would serve) does not change with a larger balance sheet.
- **Query 4 and Query 5 (vendor profile batches) had tool-call limitations mid-research** per Perplexity's own caveats. Behavioral Signals and Empath details are best-effort with explicit "[Not publicly disclosed]" flags.
- **Texas HB 149 and Colorado AI Act federal preemption posture is in flux.** The Trump AI Executive Order has directed DOJ to evaluate preemption challenges; outcomes are unknown as of May 2026. Biometric and health privacy laws are likely insulated; disclosure laws are more exposed.
- **No primary source for "Humans& at $4B valuation."** The Eric Zelikman EQ foundation model raise was reported by Business Insider (Oct 2025) as "in progress." Treat as signal, not confirmed funding.

---

## Sources

### Primary citations woven into the synthesis

- Hume AI: [hume.ai](https://www.hume.ai/), [Hume Series B announcement (March 2024)](https://www.hume.ai/blog/series-b-evi-announcement), [Contrary Research breakdown](https://research.contrary.com/company/hume-ai), [Hume pricing](https://www.hume.ai/pricing)
- Affectiva / SmartEye: [Smart Eye CES 2026](https://www.smarteye.se/ces-2026/), [Affectiva acquisition release](https://www.affectiva.com/news-item/smart-eye-completes-acquisition-of-affectiva/)
- Apple Foundation Models: [Apple ML research](https://machinelearning.apple.com/research/introducing-apple-foundation-models), [Apple 2025 Tech Report](https://machinelearning.apple.com/research/apple-foundation-models-tech-report-2025)
- On-device LLMs 2026: [v-chandra.github.io](https://v-chandra.github.io/on-device-llms/), [Mobile LLM benchmarks 2026 (Phi-4 vs Gemma 3 vs SmolLM)](https://www.promptquorum.com/power-local-llm/mobile-llm-models-phi4-gemma-smollm)
- MCP and standards: [Anthropic MCP announcement](https://www.anthropic.com/news/model-context-protocol), [W3C Smart Voice Agents Workshop report](https://www.w3.org/2025/10/smartagents-workshop/report.html), [W3C WebAgents interop report](https://w3c-cg.github.io/webagents/TaskForces/Interoperability/Reports/report-interoperability.html)
- EU AI Act: [Article 5 text](https://artificialintelligenceact.eu/article/5/), [FPF prohibition deep dive](https://fpf.org/blog/red-lines-under-eu-ai-act-unpacking-the-prohibition-of-emotion-recognition-in-the-workplace-and-education-institutions/), [Lewis Silkin analysis](https://www.lewissilkin.com/insights/2025/02/17/understanding-the-eu-ai-acts-prohibited-practices-key-workplace-and-advertising-102k011), [Fortis EU February 2025 changes](https://www.fortiseu.com/blog/ai-act-prohibited-practices-feb-2025-what-changed)
- US state laws: [Orrick AI law tracker](https://ai-law-center.orrick.com/us-ai-law-tracker-see-all-states/), [Gunderson 2026 AI laws update](https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance), [Foley BIPA 7th Circuit retroactivity](https://www.foley.com/insights/publications/2026/04/bipa-alert-seventh-circuit-ruling-applies-bipa-amendments-retroactively-ending-per-scan-exposure-for-companies-operating-in-illinois/)
- NYC LL144: [DCWP page](https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page), [NYC Comptroller enforcement Dec 2025](https://www.osc.ny.gov/state-agencies/audits/2025/12/02/enforcement-local-law-144-automated-employment-decision-tools)
- Replika: [Garante official notice (May 2025)](https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en), [Silicon UK fine coverage](https://www.silicon.co.uk/cloud/ai/italy-replika-ai-fine-614621), [Tech Justice FTC complaint](https://techjusticelaw.org/wp-content/uploads/2025/01/Complaint-and-Petition-for-Investigation-Re-Replika.pdf)
- Character.AI: [Tech Justice Law Project Garcia case](https://techjusticelaw.org/cases/garcia-v-character-technologies-google-and-character-ai-co-founders-daniel-de-frietas-and-noam-shazeer/), [CBS News January 2026 mediation](https://www.cbsnews.com/news/google-settle-lawsuit-florida-teens-suicide-character-ai-chatbot/), [Suffolk Law Review on FTC inquiry](https://sites.suffolk.edu/jhbl/2025/11/24/ai-companions-emotional-dependency-and-the-law-ftcs-next-frontier/)
- HireVue: [EPIC FTC complaint (2019)](https://epic.org/wp-content/uploads/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf), [HireVue 2021 facial analysis discontinuation](https://epic.org/hirevue-facing-ftc-complaint-from-epic-halts-use-of-facial-recognition/)
- Mem0: [Mem0 announcement](https://mem0.ai/series-a), [TechCrunch coverage](https://techcrunch.com/2025/10/28/mem0-raises-24m-from-yc-peak-xv-and-basis-set-to-build-the-memory-layer-for-ai-apps/), [Built In SF](https://www.builtinsf.com/articles/mem0-raises-24m-AI-memory-infrastructure-20251103)
- SDK / mobile gates: [Apple Privacy Manifest enforcement (Bitrise)](https://bitrise.io/blog/post/enforcement-of-apple-privacy-manifest-starting-from-may-1-2024), [Google Play Data Safety](https://respectlytics.com/blog/google-play-data-safety-guide/), [Google Play 2025 enforcement](https://blog.google/products-and-platforms/platforms/google-play/how-we-kept-google-play-safe-in-2025/)
- Plaid analogue: [Plaid Open Banking](https://plaid.com/open-banking/), [FDX standard](https://plaid.com/resources/open-finance/what-is-fdx/), [Truist + Plaid (March 2026)](https://thepaypers.com/fintech/news/truist-and-plaid-sign-open-banking-data-access-agreement)

### Raw Perplexity outputs (preserved for traceability)

All 30 outputs are stored verbatim at [tools/perplexity-search/ask-perplexity-output/](tools/perplexity-search/ask-perplexity-output/). Generated 2026-05-14.
| # | Topic | File |
|---|---|---|
| 1 | Hume AI deep profile | [2026-05-14-175145-2026-give-complete-profile.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175145-2026-give-complete-profile.md) |
| 2 | Affectiva / SmartEye profile | [2026-05-14-175300-2026-give-complete-profile.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175300-2026-give-complete-profile.md) |
| 3 | Cogito / Uniphore / NICE | [2026-05-14-175425-2026-give-detailed-profiles.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175425-2026-give-detailed-profiles.md) |
| 4 | Behavioral Signals / Empath / audEERING | [2026-05-14-175548-2026-give-detailed-profiles.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175548-2026-give-detailed-profiles.md) |
| 5 | MorphCast / Realeyes / Entropik | [2026-05-14-175721-2026-give-detailed-profiles.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175721-2026-give-detailed-profiles.md) |
| 6 | New 2024 to 2026 entrants (Mem0, Humans&, Sanas) | [2026-05-14-175834-new-emotional-ai-affective.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175834-new-emotional-ai-affective.md) |
| 7 | EDNA-like persistent products | [2026-05-14-175957-2026-companies-products-offer.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-175957-2026-companies-products-offer.md) |
| 8 | Market crowded vs sparse + Plaid/Twilio/Stripe gap | [2026-05-14-180156-based-current-2026-landscape.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-180156-based-current-2026-landscape.md) |
| 9 | OpenAI emotion surface | [2026-05-14-180255-2026-emotion-aware-sentiment.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-180255-2026-emotion-aware-sentiment.md) |
| 10 | Anthropic / Google / Meta / Apple / Amazon / Microsoft | [2026-05-14-180423-2026-emotion-aware-sentiment.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-180423-2026-emotion-aware-sentiment.md) |
| 11 | Standards (MCP / W3C / IEEE / OpenAI Agents SDK) | [2026-05-14-180544-2026-standards-bodies.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-180544-2026-standards-bodies.md) |
| 12 | Historical analogues (identity / payments / Plaid / FHIR / Twilio / Segment) | [2026-05-14-180715-analogues-summarize-whether.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-180715-analogues-summarize-whether.md) |
| 13 | Bear case against cross-platform thesis | [2026-05-14-180833-strongest-evidence-based.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-180833-strongest-evidence-based.md) |
| 14 | Dating + AI companion apps as customers | [2026-05-14-181036-2026-buyer-pain-dating-apps.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-181036-2026-buyer-pain-dating-apps.md) |
| 15 | Mental health / therapy apps as customers | [2026-05-14-181242-2026-mental-health-therapy.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-181242-2026-mental-health-therapy.md) |
| 16 | Enterprise CX / contact centers as customers | [2026-05-14-181418-2026-size-budget-reality.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-181418-2026-size-budget-reality.md) |
| 17 | Education + automotive + gaming as customers | [2026-05-14-181636-2026-evaluate-buyer-pain.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-181636-2026-evaluate-buyer-pain.md) |
| 18 | HR tech as customers (and regulatory closure) | [2026-05-14-181829-2026-regulatory-buyer.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-181829-2026-regulatory-buyer.md) |
| 19 | AI agent infrastructure as customers | [2026-05-14-182003-2026-buyer-pain-ai-agent.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-182003-2026-buyer-pain-ai-agent.md) |
| 20 | Budget owners + lines + 2024 vs 2026 procurement | [2026-05-14-182146-b2b-emotional-intelligence-sdk.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-182146-b2b-emotional-intelligence-sdk.md) |
| 21 | Pre-seed ranked first customers | [2026-05-14-182419-given-pre-seed-ai-startup-00k.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-182419-given-pre-seed-ai-startup-00k.md) |
| 22 | SDK business lessons (Stripe / Twilio / Sentry / Heap fail) | [2026-05-14-182648-2026-most-important-adoption.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-182648-2026-most-important-adoption.md) |
| 23 | 2026 SDK integration friction (Privacy Manifest, SBOM, ATT) | [2026-05-14-182842-2026-actually-take-get-new-sdk.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-182842-2026-actually-take-get-new-sdk.md) |
| 24 | On-device feasibility (Phi / Gemma / Apple Intelligence) | [2026-05-14-183053-2026-local-sdk-small.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-183053-2026-local-sdk-small.md) |
| 25 | Privacy posture + certifications (SOC 2 / HIPAA / ISO 27001 / ISO 42001 / EU AI Act) | [2026-05-14-183301-2026-privacy-posture-device-vs.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-183301-2026-privacy-posture-device-vs.md) |
| 26 | EU AI Act emotion recognition provisions | [2026-05-14-183448-2026-current-state-eu-ai-act.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-183448-2026-current-state-eu-ai-act.md) |
| 27 | US state + federal emotion AI laws | [2026-05-14-183649-2026-state-level-federal.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-183649-2026-state-level-federal.md) |
| 28 | Reputational landmines (HireVue / Replika / Garcia / Barrett / Stochastic Parrots) | [2026-05-14-183910-most-consequential-2020-2026.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-183910-most-consequential-2020-2026.md) |
| 29 | Defensibility / moat analysis | [2026-05-14-184125-pre-seed-neutral-third-party.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-184125-pre-seed-neutral-third-party.md) |
| 30 | Three wedge candidates ranked | [2026-05-14-184320-given-2026-emotional-ai.md](tools/perplexity-search/ask-perplexity-output/2026-05-14-184320-given-2026-emotional-ai.md) |