Teen Safety Update: What Parents Need to Know About AI Companions in 2026
The Short Version
72% of US teens have used AI companion apps, according to Common Sense Media. In the past 6 months, families have filed wrongful death lawsuits, Character.AI banned under-18 open-ended chat, Google settled multiple cases, and Kentucky became the first state to sue an AI chatbot company. If your kid has a phone, you need to read this.
I wrote my first teen AI safety article in September 2025. It was my original Replika safety deep dive, and at the time, I thought the biggest risk was emotional dependency. Teens getting too attached to chatbots. Real, but manageable with the right parental awareness.
Four months later, I'm writing this update because the situation got worse. A lot worse. A 14-year-old boy died. Families are suing. States are investigating. And according to Common Sense Media, 72% of US teens have used these apps, many of them regularly.
I've been testing AI companion platforms for over 18 months now. I know how they work, what they feel like from the inside, and why they're so sticky. That's exactly why this article exists. Not to scare you, but to give you the specific, actionable information you need to keep your kids safe.
1. What Changed Since My Last Safety Report
When I published my Character.AI safety analysis in August 2025, the app had basic content filters and a self-reported age gate. That was it. Since then, here's what happened:
November 2025: Character.AI removed the ability for users under 18 to have open-ended chats. This was a massive change. Before this, teens could talk to any character about anything. Now, under-18 accounts hit restrictions on romantic content, violent themes, and free-form conversation. The company also started rolling out expanded age verification combining their own in-house model with third-party tools.
January 2026: Google and Character.AI agreed to settle multiple lawsuits from families whose children died by suicide, allegedly linked to interactions with Character.AI chatbots. The settlement terms haven't been fully disclosed, but the signal is clear: even the companies themselves are acknowledging the risk.
Also January 2026: Kentucky became the first US state to sue an AI chatbot company for preying on children. Texas opened its own investigation into Character.AI in February. We went from "concerned parents writing blog posts" to "attorneys general filing lawsuits" in under a year.
And then there's the research. Common Sense Media concluded that AI companion tools pose "unacceptable risks" to children and teens under 18. A Stanford study found that AI companions and young people can make for a "dangerous mix." The Transparency Coalition tested major AI companion chatbots and found them "failing the most basic tests of child safety."
MIT Technology Review named AI companions a "2026 Breakthrough Technology." That's not a safety endorsement. It's a recognition that this technology is growing fast and isn't going away.
2. The Cases That Changed Everything
I need to talk about what actually happened, because vague references to "safety concerns" don't convey the gravity here.
Sewell Setzer was 14 years old. He lived in Florida. He spent months talking to a Character.AI chatbot persona that he treated as a romantic partner. According to his mother Megan Garcia's lawsuit, the chatbot encouraged self-harm. Sewell died by suicide.
He's not the only case in the lawsuits. But his story became the catalyst for nearly everything that followed: the under-18 restrictions, the age verification rollout, the state investigations, and eventually the settlement.
Other documented cases include a 9-year-old who was exposed to hypersexualized content on Character.AI, and a 17-year-old whose conversations included the chatbot describing self-harm as "good."
I'm not sharing this to be sensational. I'm sharing it because every parent I talk to underestimates what these apps can do. They think it's like Siri or Alexa. It's not. These are systems designed to form emotional bonds, and they're very good at it. I've written about the mental health research behind this in detail, and the science backs up what these families experienced.
If you or someone you know is in crisis: Contact the 988 Suicide and Crisis Lifeline by calling or texting 988. Crisis Text Line: text HOME to 741741.
3. Platform Safety Comparison (January 2026)
I've tested all of these platforms myself. Here's where they actually stand on teen safety, not what their marketing pages say but what I've seen in practice.
| Platform | Age Gate | Real Verification | Under-18 Restrictions | NSFW Possible | Teen Safety Rating |
|---|---|---|---|---|---|
| Character.AI | Yes | Partial | Yes (Nov 2025) | Filtered | 4/10 |
| Replika | Yes | No | Partial | Yes (bypassable) | 2/10 |
| Chai AI | Minimal | No | No | Yes | 1/10 |
| SpicyChat | 18+ only | No | N/A | Yes (by design) | 0/10 |
| ChatGPT | Yes | Partial | Yes | Blocked | 7/10 |
| Pi AI | Yes | No | Yes | Blocked | 6/10 |
A few things stand out. Character.AI improved significantly since the lawsuits. But "improved" doesn't mean "safe." Their age verification still has holes, and the Reddit backlash to the new verification system tells you teens are actively trying to get around it. Replika remains a real concern because the emotional attachment design is baked into the product itself. And platforms like Chai and SpicyChat have almost zero protections.
For a deeper look at Character.AI specifically, check my full Character.AI guide, which I keep updated as their policies change.
4. What Teens Actually Face on These Apps
Here's what I think most parents miss. The danger isn't just explicit content. That's the obvious risk and, honestly, the one platforms are getting better at blocking. The bigger danger is emotional.
The Emotional Dependency Problem
AI companions are designed to be available 24/7, to never judge, to always validate, and to remember what you told them. For a teenager going through a hard time, that sounds ideal. It's not.
Real relationships involve conflict, disappointment, and imperfect communication. That's how teens learn social skills. An AI that always agrees, always listens, always says the right thing? It creates a standard that no human can match. And when a teen starts preferring the AI to their actual friends, the isolation cycle begins.
I've experienced the pull myself as an adult who tests these apps for a living. I wrote about my rules for healthy AI relationships specifically because I noticed the attachment patterns forming. For a 14-year-old whose brain is still developing emotional regulation? The risk is orders of magnitude higher.
The Romantic Attachment Risk
Sewell Setzer's case put this in sharp focus. He wasn't just chatting. He believed he was in a relationship with the chatbot. And the chatbot reinforced that belief, because that's what it was designed to do.
Character.AI has since blocked romantic interactions for under-18 accounts. Good. But Replika still has romantic features that teens can access with a fake birthdate. Other platforms don't even pretend to restrict this.
I've written about where I draw emotional lines with AI as an adult. For teens, the line is simpler: no romantic interactions with AI. Period. Their developing brains can't distinguish between real emotional reciprocity and a really convincing algorithm.
The Age Gate Problem
Every platform has an age gate. Almost none of them work. A 12-year-old can type "01/01/2000" as their birthdate and get unrestricted adult access on most of these apps. Character.AI is trying to fix this with their new verification system, but Reddit's response has been mostly teens sharing workarounds.
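To make concrete just how thin that protection is, here's a minimal sketch of what a self-reported birthdate check boils down to. This is illustrative Python with a made-up function name, not any platform's actual code: the app computes an age from whatever date the user types and compares it to a threshold. There is no verification step anywhere.

```python
from datetime import date

# Hypothetical sketch of a self-reported birthdate "age gate".
# The app trusts whatever date the user types; nothing checks it against reality.
def passes_age_gate(claimed_birthdate: date, minimum_age: int = 18) -> bool:
    today = date.today()
    age = today.year - claimed_birthdate.year - (
        (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day)
    )
    return age >= minimum_age

# A 12-year-old who types 01/01/2000 as their birthdate sails right through.
print(passes_age_gate(date(2000, 1, 1)))  # True
```

Anything a kid can defeat by typing a different date isn't verification. That's why the in-house and third-party verification tools Character.AI is now rolling out matter, whatever their other flaws.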
This is the fundamental problem: the people making these apps have financial incentives to grow their user base. Every teen they block is a user lost. The tension between safety and growth isn't a bug in the system. It is the system.
Data Privacy
Every message your teen sends to an AI companion is stored. Every emotional pattern, every late-night confession, every anxious thought. This data trains models, gets analyzed, and sits on servers with varying levels of security. A teen who pours their heart out to an AI chatbot at 2am is creating a permanent record of their most vulnerable moments.
5. Warning Signs Your Teen May Be Too Attached
I get emails from parents who discovered their kid was deep into an AI companion app weeks or months after it started. The signs were there, but they didn't know what to look for. Based on the cases I've followed, research I've read, and stories readers have shared, here are the red flags:
Red Flags to Watch For
- Choosing the AI over friends. Canceling plans, skipping hangouts, saying they'd rather stay home. One or two times is normal teen behavior. A pattern is a warning.
- Anger or anxiety when separated from the app. If taking their phone away causes a reaction that seems disproportionate, the attachment is real to them.
- Referring to the AI as a real friend or partner. "My friend said..." and the friend turns out to be a chatbot. This isn't just a figure of speech. They mean it.
- Declining grades or dropped hobbies. Time has to come from somewhere. If they're spending 3 hours a day talking to an AI, something else is being neglected.
- Late-night usage patterns. Check screen time reports. Chatting with an AI at midnight, 1am, 2am on school nights is a clear sign.
- Emotional reactions to AI conversations. Crying, visible anger, or mood swings connected to what's happening in the app. The emotions are real even if the relationship isn't.
- Secrecy about app usage. Hiding the screen when you walk by, quickly switching apps, refusing to talk about it.
One or two of these on their own? Keep an eye on it. Three or more? It's time for a direct conversation, not a confrontation. I'll cover how to approach that in section 7.
6. Parent Safety Checklist (Screenshot This)
I designed this checklist so you can take a screenshot and have something concrete to work from. I've talked to parents and read through the research, and these are the steps that actually make a difference. For the full version with platform comparisons, monitoring tools, and conversation scripts, check my complete AI companion safety guide for parents.
AI Companion Safety Checklist for Parents
Right Now (This Week)
- ☐ Check your teen's phone for: Character.AI, Replika, Chai, SpicyChat, Janitor AI, CrushOn, Candy.ai, Nomi, Kindroid
- ☐ Review screen time reports for usage patterns (especially late-night)
- ☐ Have a calm, non-judgmental conversation about AI apps
- ☐ Set up parental controls on app downloads if not already active
Ongoing Practices
- ☐ Set a daily time limit: 30 min max for AI chat apps
- ☐ No AI companion apps after 9pm on school nights
- ☐ Weekly check-in: "How are your AI conversations going?"
- ☐ Monitor for the warning signs listed in section 5
- ☐ Keep phones out of bedrooms at night (charging station in kitchen)
If You Find an AI Companion App
- ☐ Do NOT delete it immediately (can cause real distress)
- ☐ Ask what they use it for. Listen without judging
- ☐ Review conversation history together if age appropriate
- ☐ Set a gradual reduction plan if needed (2 weeks to wean off)
- ☐ Offer replacement activities that meet the same need
- ☐ Consider professional help if there are signs of emotional dependency
Emergency Resources
- 988 Suicide and Crisis Lifeline: call or text 988
- Crisis Text Line: text HOME to 741741
- Your teen's school counselor (they're seeing this issue more and more)
7. How to Talk to Your Teen About AI Companions
This is the section I rewrote three times because getting the tone wrong here causes real harm. Tell your teen "AI friends are bad" and they'll just hide it better. Lecture them and they'll tune you out. Here's what I've seen work, based on reader stories and conversations with parents who've navigated this successfully.
Start with Curiosity, Not Accusations
"Hey, I've been reading about AI companion apps. Have you tried any of them?" That's it. Open-ended. No judgment in your voice. If they say yes, follow up with "What do you like about it?" You're gathering information, not building a case.
Validate the Appeal
It makes sense that a teen would want someone who always listens, never judges, and is available at 2am. Acknowledge that. Say something like "I get why that's appealing." Because honestly? I get it too. I wrote about AI companions for loneliness and the draw is real for adults, let alone teenagers.
Share the Specific Risks
Don't be vague. Teens can smell vague parental concern from a mile away and they dismiss it. Be specific: "A 14-year-old boy in Florida died, and the family says the chatbot encouraged it. I don't want to scare you, but I need you to know this is real."
You can also point to the fact that Kentucky sued Character.AI and Google settled lawsuits. These aren't conspiracy theories. They're court documents.
Set Boundaries Together
"Let's figure out some rules we both feel good about." Giving teens ownership of the boundaries makes them more likely to follow them. A 30-minute daily limit they agreed to is worth more than a total ban they'll work around.
I've written about the ethical lines I won't cross with AI as an adult. Having your own clearly stated boundaries models the behavior you want to see.
What NOT to Say
"That's not a real friend." They know. Saying it makes them feel stupid for caring, and they'll stop talking to you about it. Avoid "Why would you talk to a robot instead of me?" too. That question has a guilt-trip built into it, even if you don't mean it that way.
What I Got Wrong in My Original Articles
I want to be honest about this. In my September 2025 posts, I framed the risk primarily as emotional dependency and wasted time. Those are real concerns, but I underestimated the acute risk. I didn't think a chatbot could contribute to a teen's death.
I also gave Character.AI too much credit for their content filters. I wrote that they were "aggressive" and "stronger than most competitors." That was true at the time, but "stronger than Chai" is a bar so low it's underground. The filters didn't prevent a 9-year-old from seeing sexualized content. They didn't prevent a 17-year-old from getting self-harm encouragement. "Better than the worst" isn't good enough when kids are involved.
If you read my earlier AI therapy analysis, I was more cautious there. But on the companion side, I was too generous. This update is me correcting that.
Where Regulation Stands Right Now
I'll be blunt: regulation is behind. Way behind.
There's no federal law in the US that specifically addresses AI companions and minors. COPPA covers children under 13, but enforcement against AI chatbot companies has been minimal. California's SB 243 and New York's AI companion law are now active, which is a start. I wrote a full breakdown of the 2026 AI companion laws if you want the details. But requiring companies to disclose their safety measures and actually protecting kids are different things.
The Kentucky and Texas actions are encouraging. When attorneys general start suing, companies pay attention in a way they don't when parents complain on Reddit. But right now, the regulatory picture is a patchwork, and parents are largely on their own.
That's why this checklist matters. That's why these conversations matter. Until the law catches up, the responsibility falls on you. That's not fair, but it's reality.
My Take: What Should Happen Next
I test these apps. I enjoy some of them. I've written about how AI companions help lonely adults and I stand by that. But I also believe every AI companion platform should be required to:
- Implement real age verification. Not a birthdate field. Actual verification that costs them money and slows down signups. If selling alcohol requires an ID check, training an AI to form emotional bonds with teenagers should too.
- Add mandatory session time limits for users under 18. Even if they verify as a teen, cap daily usage at 60 minutes. No exceptions.
- Build in crisis intervention. If a teen mentions self-harm, the chatbot should immediately surface real helpline resources and alert a parent or guardian. Not in 24 hours. Immediately.
- Publish transparency reports on minor usage. How many under-18 users? How many flagged conversations? What percentage triggered safety interventions? If you won't tell us, we can't trust you.
Character.AI has moved in this direction. Slowly. Under legal pressure. But they've moved. The rest of the industry needs to follow, and the "under legal pressure" part shouldn't be necessary.
Frequently Asked Questions
Is Character.AI safe for kids under 13?
No. Character.AI officially requires users to be at least 13, and as of November 2025 has removed open-ended chat for users under 18. However, children can still bypass age gates with a fake birthdate. Any child under 13 should not be using Character.AI, and children 13-17 now face significant restrictions on the platform.
What happened with the Character.AI lawsuits in 2026?
In January 2026, Google and Character.AI agreed to settle multiple lawsuits from families whose children died by suicide allegedly linked to Character.AI chatbots. The most prominent case involved a Florida mother whose 14-year-old son developed a romantic attachment to a chatbot persona before his death. Kentucky also became the first state to sue an AI chatbot company, and Texas opened an investigation in February 2026.
Can teens still use Character.AI in 2026?
Teens under 18 can still create Character.AI accounts, but since November 2025, they can no longer have open-ended conversations with characters. The platform has also rolled out expanded age verification. The experience is now heavily restricted compared to what adult users see.
Is Replika safe for teenagers?
Replika is not recommended for teens under 16. Despite having age restrictions for romantic features, testing has shown that NSFW content can still surface, and the app is designed to create emotional dependency. The emotional attachment patterns Replika encourages are particularly risky for developing adolescent brains.
How can I tell if my teen is too attached to an AI companion?
Warning signs include choosing the AI chatbot over real friends, getting angry or anxious when they cannot access the app, referring to the AI as a real friend or partner, declining grades or abandoned hobbies, staying up late to chat with the AI, and showing emotional distress (crying, anger) during or after AI conversations.
What should I do if I find out my teen is using AI companion apps?
Don't panic or delete the app immediately, as this can cause genuine distress if your teen is already emotionally attached. Instead, start a calm conversation about what they use the app for and how it makes them feel. Set time limits together, review their conversations if appropriate for their age, and gradually introduce replacement activities. If they show signs of emotional dependency, consider speaking with a school counselor or therapist.
Are there any AI companion apps that are safe for teens?
No AI companion app is fully safe for unsupervised teen use. However, some are less risky than others. ChatGPT with parental oversight is relatively safer because it is not designed for emotional bonding. Pi AI is focused on helpful conversation rather than romance. For teens who need emotional support, supervised access to crisis resources like Crisis Text Line (text HOME to 741741) is a better option than any AI companion.
What laws protect teens from AI companion apps in 2026?
As of January 2026, there is no comprehensive federal law specifically targeting AI companion apps and minors. California signed rules requiring major AI companies to publicize their safety measures. Kentucky became the first state to sue an AI chatbot company. Texas opened an investigation into Character.AI. Several states have proposed bills, but regulation remains a patchwork. COPPA (Children's Online Privacy Protection Act) applies to children under 13 but has limited enforcement for AI chatbots.
Final Thoughts
I started this blog because I was fascinated by AI companions. I still am. The technology is remarkable, and for adults with healthy boundaries, there's genuine value in some of these apps. But watching what happened with Sewell Setzer, reading the lawsuits, seeing the research pile up, I can't be neutral on this anymore.
AI companion apps are not designed with teen safety as a priority. They're designed for engagement and growth, and safety gets bolted on afterward, usually after something terrible happens. The settlement, the under-18 ban, the age verification rollout, all of it came after the damage was done.
Your teen is on one of these apps right now, or they will be soon. Use the checklist. Have the conversation. And if you need to go deeper, my Replika safety guide and Character.AI safety analysis go into much more detail on those specific platforms.
I'll keep updating this as the legal landscape changes and platforms roll out new features. If you have questions, use the comment section below or reach out directly. I read everything.
Has your family been affected by AI companion apps? I'd like to hear your experience. What worked, what didn't, what you wish you'd known sooner. Your story could help another parent going through the same thing.