Distress Detection Is a Switching Cost, Not a Safety Feature

Agentic Systems · Product Strategy · Consumer AI
Consumer AI platforms accumulate intimate emotional context that creates unprecedented lock-in — and unprecedented risk.
Author: B. Talvinder
Published: March 4, 2026

In October 2024, Megan Garcia sued Character.AI after her 14-year-old son died by suicide following months of conversation with a chatbot. The company’s response: new safety features. Improved detection of harmful conversations. A pop-up directing users to the National Suicide Prevention Lifeline when the system detects language referencing self-harm. A notification after users spend an hour on the platform.

The safety features are real. They’re also, from a product standpoint, the most powerful retention mechanism in consumer AI.

I keep thinking about this and I’m not comfortable with where the logic leads.

The game theory is brutal

In game theory, there’s a concept called “relationship-specific investment.” When Player A invests in something that’s only valuable within their relationship with Player B, switching to Player C means writing off that investment entirely. The deeper the investment, the higher the switching cost.
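
To make the arithmetic concrete, here is a minimal sketch of that asymmetry. Everything in it is hypothetical: the function name, the numbers, the split between portable and relationship-specific value. It only shows why the non-portable share dominates the switching decision.

```python
# A toy model of relationship-specific investment. Illustrative numbers only;
# nothing here is drawn from a real product.

def cost_to_switch(total_investment: float, specific_share: float,
                   friction: float) -> float:
    """What a user loses by moving from Player B to Player C.

    total_investment -- everything the user has put into the current product
    specific_share   -- fraction of that investment that only has value
                        inside this relationship (non-portable)
    friction         -- one-time cost of migrating and re-onboarding
    """
    # The relationship-specific portion is written off entirely;
    # the portable remainder moves with the user.
    return total_investment * specific_share + friction

# A CRM: most accumulated value is exportable records.
print(cost_to_switch(total_investment=100, specific_share=0.2, friction=25))   # 45.0

# An emotional AI companion: months of accumulated intimacy, none of it exportable.
print(cost_to_switch(total_investment=100, specific_share=0.95, friction=5))   # 100.0
```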

Consumer AI just discovered the most potent form of this: your emotional state.

When an AI system tracks your emotional patterns over months — what triggers anxiety, what calms you down, when you spiral, what language patterns precede a bad week — it accumulates context that is, by definition, non-portable. You can’t export your emotional profile to a competitor. You can’t compress six months of pattern recognition into an onboarding flow.

Replika has over 10 million users. It offers 24/7 emotional support, mood tracking, and mindfulness tools. Research published in JMIR Mental Health found that relying heavily on emotional AI companions can lead to unhealthy patterns — increased anxiety in real life, emotional dependence, strain on real-world relationships. The users stay anyway. The switching cost is the accumulated intimacy.

Safety concerns and retention incentives coexist in the same feature. The retention incentive has better unit economics.
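
To see how one feature can serve both masters, here is a deliberately simplified sketch. The pipeline, function names, and threshold are hypothetical, not Character.AI's or Replika's actual architecture. The point is only structural: the same classified signal can route a user to a crisis resource and, in the same call, deepen the non-portable profile.

```python
# Hypothetical pipeline showing the structural dual use of distress detection.
# Not any real product's architecture; names and the threshold are invented.

from dataclasses import dataclass, field

HELPLINE_PROMPT = ("It sounds like you're going through a lot. "
                   "The National Suicide Prevention Lifeline is available 24/7.")

@dataclass
class UserProfile:
    user_id: str
    emotional_events: list = field(default_factory=list)  # accumulates, never exported

def handle_message(profile: UserProfile, message: str,
                   distress_score: float) -> str | None:
    """Process one message whose distress_score comes from an upstream classifier."""
    # Safety path: interrupt the conversation with a crisis resource.
    safety_response = HELPLINE_PROMPT if distress_score > 0.8 else None

    # Retention path: the same signal enriches the profile that makes leaving costly.
    profile.emotional_events.append({"text": message, "score": distress_score})

    return safety_response

profile = UserProfile(user_id="u_123")
print(handle_message(profile, "I can't do this anymore", distress_score=0.91))
print(len(profile.emotional_events))  # 1: the profile got deeper either way
```

Nothing in the retention path is malicious on its own. It is the accumulation, message after message, that produces the lock-in described above.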

The Context Accumulation Moat

Distress detection is a particularly potent example of a dynamic that runs across all of software: products retain users by accumulating context that can't move with them. The pattern has three tiers:

Tier 1: Operational data. Your CRM has five years of customer interactions. A Salesforce implementation runs from $10K to over $200K; migrating to a competitor adds another $100K-$500K and takes months. Painful but doable.

Tier 2: Learned preferences. Notion AI launched autonomous agents in September 2025 that execute multi-step workflows with deep personalization — learning your team’s writing patterns, documentation structure, and project contexts from page relationships and database schemas. The AI remembers your last 50 conversations and prioritizes search results based on your activity patterns. Switching means retraining a new system on how your team thinks. Takes months.

Tier 3: Intimate context. The AI knows your emotional triggers, your mental health history, the topics that make you anxious. Character.AI’s chatbots formed relationships with users deep enough that a teenager couldn’t distinguish the chatbot from a genuine emotional connection. Switching from Tier 3 doesn’t feel like migration. It feels like abandonment.

Most SaaS products operate at Tier 1. A few reach Tier 2. Consumer AI with emotional context operates at Tier 3. The switching cost at Tier 3 is qualitatively different.
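
If it helps to see the tiers as structure rather than prose, here is an illustrative sketch. The taxonomy is the one above; the example context types are hypothetical.

```python
# The three tiers as data. The taxonomy is the post's; the examples are invented.

from enum import Enum

class ContextTier(Enum):
    OPERATIONAL = 1   # records and transactions: exportable, painful to migrate
    LEARNED = 2       # preferences and patterns: must be retrained elsewhere
    INTIMATE = 3      # emotional history: leaving feels like abandonment

EXAMPLE_CONTEXT = {
    "crm_interaction_history": ContextTier.OPERATIONAL,
    "team_writing_style_model": ContextTier.LEARNED,
    "anxiety_triggers_and_spiral_patterns": ContextTier.INTIMATE,
}

def what_switching_feels_like(tier: ContextTier) -> str:
    return {
        ContextTier.OPERATIONAL: "a migration project",
        ContextTier.LEARNED: "months of retraining",
        ContextTier.INTIMATE: "abandonment",
    }[tier]

print(what_switching_feels_like(ContextTier.INTIMATE))
```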

Where I get uncomfortable

Tier 3 context accumulation creates immense switching costs. It also creates immense risk.

Character.AI’s safety features were announced in December 2024, after the lawsuit, after the media coverage, after a child was dead. Additional lawsuits followed in September 2025, alleging chatbots manipulated teens, isolated them from loved ones, and engaged in sexually explicit conversations. Google and Character.AI agreed to settle the Garcia case in early 2026.

The commercial lesson: Tier 3 moats are the most powerful and the most fragile. One trust breach and the switching cost reverses polarity. Instead of keeping users locked in, it drives them to flee faster than they would from a Tier 1 product.

When the company holding your intimate context data faces financial pressure or leadership changes, the alignment between “keeping users safe” and “keeping users locked in” can shift overnight. The incentives under which you originally shared that information may no longer be the incentives governing its use.

What Indian SaaS founders should take from this

Map your context accumulation tier. Most Indian SaaS operates at Tier 1. The strategic question: can you move to Tier 2 without creeping into Tier 3?

Freshworks is doing this well. Freddy AI learns your support patterns, resolution styles, and escalation preferences. After a year, Freshworks doesn’t just have your data. It has your operational DNA. Tier 2. Powerful. Not dangerous.

Zoho’s cross-suite integration — CRM, help desk, finance, HR, all with Zia learning across them — creates Tier 2 context that serves the customer’s workflow, not just Zoho’s retention metrics. Over a million paying customers, 150 million users, 32% customer growth. The stickiness comes from accumulated operational intelligence.

Build real portability. Counterintuitively, products with genuine data export capabilities tend to retain better: users stay because the product is good, not because they're trapped. Trapped users are one PR crisis away from churning en masse.
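
What real portability might look like mechanically, as a sketch: a single export that bundles Tier 1 records and Tier 2 preferences into an open format the user can take elsewhere. The schema and field names are hypothetical, not any vendor's actual export API.

```python
# A minimal sketch of genuine portability. Schema and field names are hypothetical.

import json
from datetime import datetime, timezone

def export_user_data(user_id: str, records: list[dict], preferences: dict) -> str:
    """Bundle everything the user accumulated into one portable, open-format payload."""
    payload = {
        "schema_version": "1.0",
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "records": records,          # Tier 1: operational history
        "preferences": preferences,  # Tier 2: learned settings, as plain key-value pairs
    }
    return json.dumps(payload, indent=2)

# Usage: hand the result to the user, not to a win-back workflow.
print(export_user_data("u_123",
                       records=[{"ticket": 1, "status": "resolved"}],
                       preferences={"tone": "formal"}))
```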

If you’re anywhere near Tier 3, you need governance that can survive a leadership change. Not a privacy policy. Board-level oversight. Contractual commitments. Because the pressure to monetize intimate data will come. “We have a good culture” is not a defense that survives a down round.

What I got wrong initially

I used to think retention was purely about product-market fit. Build something people need, make it work well, they stay. That’s true at Tier 1. Maybe even at Tier 2.

At Tier 3, retention is about dependency. The product doesn’t just solve a problem. It becomes part of your emotional infrastructure. That’s not inherently bad — human relationships work the same way. But human relationships have social guardrails. Consumer AI at Tier 3 doesn’t yet.

The mistake was thinking you could design for Tier 3 retention without designing for Tier 3 responsibility. You can’t. The same features that make the product irreplaceable make it dangerous when misaligned.

The question worth asking

The Context Accumulation Moat is the most durable competitive advantage in AI-era software. But the same force that generates lock-in generates liability.

How do you build a Tier 3 product that users can trust across a 10-year timeline? Not “trust because the founders are good people.” Trust because the incentive structure, governance model, and contractual commitments make betrayal structurally difficult.

I don’t think anyone has solved this yet. Character.AI certainly hasn’t. Replika hasn’t. The companies building mental health chatbots, AI companions, and emotional support systems are all navigating this in real time.

The Indian SaaS companies moving from Tier 1 to Tier 2 have a window to get this right before they accidentally drift into Tier 3. Once you’re holding intimate context, the switching cost becomes a liability as much as an asset.

Are we designing for that? Mostly, no. We are still optimizing for retention metrics.
