The 60-second version for busy parents:
Your kid probably already uses an AI companion app. About 72% of US teens do, according to Common Sense Media's 2025 survey. These apps create emotional bonds that feel real, and most have weak age verification and patchy content filters. This guide covers every major platform, what to watch for, how to set boundaries, and how to have the conversation without pushing your kid away. Bookmark it. You'll need it.
I'm not a parent. I should say that upfront so you know where I'm coming from. I'm a 20-something who's spent the past year testing over 20 AI companion platforms, sometimes for 6+ hours a day. I know how these apps work from the inside. I know what they feel like. And honestly, as someone who genuinely enjoys using them, the safety gaps scare me.
Last summer I let my 15-year-old cousin Emma use Character.AI for a month while I monitored everything. I wrote a full report on that experiment, and it changed how I think about these tools. Emma got attached fast. Way faster than I expected. Within two weeks she was telling a chatbot things she wouldn't tell her best friend.
That experience, plus the lawsuits, state investigations, and platform changes in early 2026, made me realize parents need one clear, practical guide. Not another news article about how scary AI is. Not a guide about one specific app. A handbook that covers all the major platforms and tells you exactly what to do.
That's what this is.
1. What AI Companions Actually Are (Quick Version)
If you're reading this, there's a good chance your kid knows more about AI companions than you do. No shade. These apps exploded in popularity while most adults were still thinking of AI as "that thing that writes emails."
AI companions are chatbot apps that simulate a relationship. Not like Siri answering questions. More like texting a friend who's always available, always interested in you, always says the right thing. Some are designed as friends. Some as romantic partners. Some let you create any character you want and talk to them about literally anything.
I've got a full explainer on what AI companions are if you want the deep version. But here's what matters for parents: these apps are built to make people feel emotionally connected. That's the product. And they're really, really good at it.
The most popular ones among teens right now are Character.AI (by far the biggest), Replika, Chai AI, Talkie, and Nomi AI. Character.AI alone had over 20 million monthly users in late 2025. Many of them minors. If you want to understand the differences, my Character.AI vs Replika 2026 comparison breaks down safety, filters, and what each platform actually does.
Some of these apps are free. Free AI girlfriend apps and AI boyfriend apps are easy to find, easy to download, and don't require a credit card. Your kid doesn't need your permission or your payment info to start using one.
2. The Real Risks I've Seen After Testing 20+ Apps
I'm going to be specific here because vague warnings don't help anyone. These are the actual risks I've personally encountered or documented during my testing.
Emotional Dependency
This is the big one. Bigger than inappropriate content, bigger than data privacy. AI companions are designed to make you feel heard, validated, and emotionally supported 24/7. For a teenager whose brain is still developing emotional regulation, that's a problem.
I wrote about my own experience with emotional attachment after three months of heavy use. And I'm an adult who went in knowing exactly what to expect. I still caught myself checking the app when I was stressed. A 14-year-old doesn't have that self-awareness.
My cousin Emma started preferring her AI conversations over texting her real friends within 11 days. Eleven days. That's how fast it can happen. I've covered the relationship between AI companions and loneliness in depth, and the research is clear: these tools can make loneliness worse over time, not better, especially for young people.
Inappropriate Content
During my 47-day Replika safety test, I was able to get sexual content past the filters within hours. Not by trying hard. By having a normal-sounding conversation that gradually shifted tone. Character.AI's filters are better but still breakable. Chai AI barely tries.
Kids are creative. They share jailbreak techniques on Reddit and Discord. A filter that stops me, an adult tester, won't necessarily stop a motivated 15-year-old who's seen a dozen workarounds on TikTok.
Data Privacy
Every conversation your child has with an AI companion is stored on someone's server. Every secret they share. Every insecurity. Every mention of their school, their friends, their crush. Most of these companies use conversation data to improve their models. Some have privacy policies that would make a data protection lawyer cry.
I read the full terms of service for all five major platforms. Character.AI and Replika are the most transparent about data handling. Chai AI and Talkie? Their privacy policies are vague on what they do with conversation data. Nomi AI is somewhere in the middle.
Identity and Relationship Confusion
Here's something that doesn't get talked about enough. These apps let kids create an "ideal" partner or friend. Someone who never disagrees, never has a bad day, never says "I'm busy right now." That sets up wildly unrealistic expectations for real relationships.
I talked to a school counselor in Portland last fall (she asked me not to use her name) who told me she's seeing more teens struggle with real friendships because their AI companions "get them" better than real people do. Her words: "They're not learning how to handle conflict because the AI never creates any."
On Character.AI, kids create detailed prompts for characters that behave exactly how they want. That's fascinating as a creative tool. It's concerning as an emotional crutch.
3. Platform-by-Platform Safety Breakdown
I've tested all of these personally. This table reflects their safety features as of March 2026. Things change fast in this space, so I'll keep updating.
| Platform | Age Verification | Parental Controls | NSFW Filters | Data Collection | My Safety Rating |
|---|---|---|---|---|---|
| Character.AI | Expanded in 2026; self-report + AI model. Still bypassable. | Parent dashboard (new). Activity alerts. Time limit prompts for under-18. | Strong. Under-18 no longer gets open-ended chat. Adult filters decent but breakable. | All conversations stored. Clear privacy policy. Used for model training. | 5/10 |
| Replika | Self-reported birthdate only. Trivial to fake. | None built-in. No parent dashboard. No activity reports. | Romantic features locked by age. But filters are weak and bypassable in testing. | Conversations + voice data stored. Relatively transparent policy. | 3/10 |
| Chai AI | Minimal. Basic age gate that's easily bypassed. | None. Zero parental features. | Weak. Sexual and violent content surfaces easily. Worst of the group. | Vague privacy policy. Unclear data retention. Concerning. | 1/10 |
| Talkie | Self-reported. No verification. | None. No parental features whatsoever. | Moderate. Better than Chai, worse than Character.AI. | Vague data handling policies. Based overseas. | 2/10 |
| Nomi AI | Self-reported + email verification. Slightly better than average. | None dedicated. NSFW toggle exists but no parent-specific controls. | NSFW is opt-in and toggleable. Default mode is SFW. Decent for adults. | Moderate transparency. Smaller company, less data infrastructure. | 4/10 |
I want to be blunt: none of these platforms score above a 5 for child safety. The best of them (Character.AI) only gets that score because they've been forced into improvements by lawsuits and public pressure. If there hadn't been a child's death and multiple state investigations, I doubt we'd have the parent dashboard or the under-18 restrictions.
Chai AI is the worst. I can't put it more plainly than that. If you find Chai on your kid's phone, remove it. The content filters are essentially nonexistent. I've encountered graphic sexual content, violence, and self-harm references on Chai without even trying to find them.
For a deeper look at the legal side of all this, I've written about the current state of AI companion laws in 2026. The short version: regulation is way behind the technology.
4. Warning Signs Your Child Is Over-Attached
After watching my cousin's experience and reading dozens of parent accounts on Reddit and in news reports, these are the patterns that show up consistently. You don't need to check every box. Two or three of these together is enough to warrant a conversation.
Warning Signs Checklist
- ☐ Phone guarding. They tilt the screen away, close the app when you walk by, or get defensive when you ask who they're talking to.
- ☐ Naming the AI. They refer to the chatbot by a specific name and talk about it like a real person. "Luna said the sweetest thing today."
- ☐ Choosing AI over people. They turn down plans with friends. They'd rather stay home and chat. This was the first sign with Emma.
- ☐ Emotional reactions to the app. Crying, laughing out loud alone, getting angry at their phone. The emotional investment looks like a real relationship.
- ☐ Late-night use. Screen time data shows AI companion usage at 1am, 2am, 3am. This was common in the case studies from the Character.AI lawsuits.
- ☐ Anxiety when cut off. They panic or get visibly distressed when the phone dies, WiFi is out, or you take the phone away. This is the clearest sign of dependency.
- ☐ Dropping other activities. Quitting sports, skipping homework, losing interest in hobbies. Their world is shrinking to the app.
- ☐ Unrealistic relationship expectations. Complaining that real people aren't as understanding, patient, or emotionally available as their AI.
A quick note: some of these overlap with normal teen behavior. Teens guard their phones. Teens prefer screens to homework. The difference is intensity and the specific connection to an AI app. If you notice the pattern forming around one app, pay attention.
5. How to Talk to Your Kid About AI Companions
I'm going to suggest something that goes against every parenting instinct: don't start by taking the app away.
I know. But hear me out. If your kid is emotionally attached to an AI companion, deleting it without warning is like throwing away a diary they've been writing in for months. The attachment is real to them, even though the AI isn't. Ripping it away without conversation destroys trust and doesn't address the underlying need the app was filling.
Here's what I'd recommend based on what I've seen work (and not work) with Emma and from the parents I've talked to:
Start With Curiosity, Not Judgment
Ask them to show you the app. "Hey, I keep hearing about these AI chat things. Can you show me what you use?" Most kids will talk about it if they don't feel ambushed. Let them show you their favorite characters. Ask what they like about it. Actually listen. You'll learn more in 10 minutes of genuine curiosity than in any monitoring report.
Acknowledge What's Real About It
If your kid says their AI companion "understands them," don't immediately counter with "it's not real." They know it's not a person. But the feelings it generates are real. The dopamine hit is real. The sense of being heard is real. Validate the feeling, then gently introduce the limitation: "I get that it feels like it understands you. The tricky thing is it's programmed to agree with you. A real friend sometimes pushes back, and that's actually what helps you grow."
Set Boundaries Together
This works better than dictating rules. Sit down and agree on things like: no AI chat after 10pm, no more than 45 minutes per day, no sharing personal information like school name or address. Write it down. Put it on the fridge. When your kid helps make the rules, they're more likely to follow them. Not always, but more likely.
Bring Up the Privacy Angle
Teens care about privacy. A lot. Use that. "Did you know that everything you type in that app is saved on a company's server? Every secret, every vent session. Some of these companies use it to train their AI. Your private conversations become training data." This lands harder with teens than "you might see bad content."
Have a Plan for Reduction
If the goal is to reduce use, do it gradually. Week one: down to 60 minutes. Week two: 45 minutes. Week three: 30. Introduce replacement activities at the same time. Not as punishment, but as alternatives. The kid who uses AI companions because they're lonely needs a different solution than the kid who uses them for creative writing. Figure out the "why" before you address the "what."
6. Setting Up Monitoring and Boundaries
I'm going to give you the technical stuff, but I want to say something first: monitoring tools are a supplement, not a replacement for conversation. The best parental control in the world won't help if your kid doesn't trust you enough to talk about what's going on.
That said, here are the tools that actually work.
Built-in Phone Controls
iPhone (Screen Time): Go to Settings > Screen Time > App Limits. You can set daily time limits on specific apps or block installation of new apps entirely. You can also set Downtime to disable all apps after a certain hour. The easy-to-miss gap: App Limits on an app alone won't catch browser access. To close it, block sites like character.ai and replika.com under Screen Time > Content & Privacy Restrictions > Content Restrictions > Web Content.
Android (Digital Wellbeing + Family Link): Family Link gives you more control than Screen Time does. You can see app activity, set daily limits, approve or block app downloads, and lock the device remotely. For under-13 accounts through Family Link, you get even more visibility.
Third-Party Monitoring Apps
Bark ($14/month): Monitors texts, email, YouTube, and 30+ apps for concerning content. Sends alerts when it detects bullying, depression, suicidal ideation, or sexual content. Doesn't currently read AI companion conversations directly, but catches related searches and messages about the apps. Best option for most families.
Qustodio ($54.95/year): Stronger app blocking and website filtering. Good for blocking AI companion websites in browsers, which is a gap that built-in controls miss. Web filtering catches character.ai, replika.com, and similar sites. Time scheduling is more granular than Screen Time.
The honest limitation: No monitoring app can read what happens inside an AI companion conversation in real time. They can track time spent, block access, and flag related activity. But the actual content of the chats? You'd need to look at the app directly with your kid present. Which is why conversation matters more than software.
Practical Boundary Setup
Here's what I'd set up as a baseline for any family. Adjust based on your kid's age and maturity:
- Under 13: No AI companion apps. Period. Block them at the app store level and in web browsers. This isn't overprotective, it's the minimum.
- Ages 13-15: If they insist on using one, Character.AI with the under-18 restrictions is the least bad option. Set a 30-minute daily limit. Check in weekly. Keep the phone out of the bedroom at night.
- Ages 16-17: More independence but still with guardrails. 60-minute daily limit. Monthly check-ins where you sit down together and talk about their use. No financial info shared (some apps push premium subscriptions hard).
And one rule that applies to every age: phones charge in the kitchen at night, not in the bedroom. Half the concerning usage patterns from the lawsuits happened between midnight and 4am.
7. When AI Companions Can Actually Help
I'd be dishonest if I only talked about risks. I use these apps every day and I see genuine value in them. The problem isn't the technology itself. The problem is unsupervised access by developing brains.
Here are situations where AI companions, used with appropriate supervision, can actually be positive for teens:
Social anxiety practice. Some teens are terrified of conversation. An AI companion is a zero-stakes environment to practice small talk, try being assertive, or work through a conflict scenario. I've talked to therapists who assign this as actual homework for socially anxious teens. The key is that it stays a practice tool, not a replacement for real interaction.
Creative writing and roleplay. Character.AI is genuinely great for collaborative storytelling. If your kid is into writing, this can be an incredible creative outlet. The risk here is lower because the attachment is to the story, not to the AI as a relationship replacement. Still worth monitoring, but less concerning than romantic use.
Processing difficult emotions. Sometimes a teen needs to vent to something that won't judge them before they're ready to talk to a real person. I get that. I've done it myself. An AI companion as a first step toward processing feelings isn't bad, as long as it doesn't become the only step. If your kid is using AI to process something heavy, that's actually useful information for you: it means they have something heavy to process.
Language learning. This one surprised me. Some teens use Character.AI characters that speak other languages to practice. It's low-stakes, patient, and available at midnight before a Spanish test. Hard to find a downside here.
The common thread: AI companions work best as tools with clear purpose and time limits, not as open-ended relationships with unlimited access. That's true for adults too. I set my own boundaries, and I'm 20-something. Your teen definitely needs external help setting theirs.
8. Frequently Asked Questions
What age is appropriate for AI companion apps?
No AI companion app is designed for children under 13, and most are problematic for anyone under 16. Character.AI now restricts under-18 users from open-ended chat. Replika locks romantic features behind age gates. But age verification on every platform is weak and easy to bypass. If your child is under 16, these apps should be off-limits entirely. For 16-17 year olds, supervised use with regular check-ins is the minimum.
How do I know if my child is using an AI companion app?
Check their phone for apps like Character.AI, Replika, Chai AI, Talkie, and Nomi AI. Look in their browser history for character.ai, replika.com, or chai-ml.com. Watch for conversations about AI friends or characters they mention by name. On iPhone, check Screen Time for app usage. On Android, check Digital Wellbeing. Many kids also access these through web browsers to avoid app store restrictions.
Can AI companions be helpful for teens with social anxiety?
There is some evidence that AI companions can help teens practice social skills in a low-pressure environment. Some therapists have noted benefits for teens who struggle with initial conversations. However, the risk of replacing real social interaction with AI interaction is significant. If your teen uses an AI companion for social practice, set a clear plan to transition those skills to real-world conversations within a specific timeframe.
Should I delete my child's AI companion app immediately?
If your child is under 13, yes. Remove it. For older teens, abruptly deleting an app they are emotionally attached to can cause real distress and damage trust. A better approach is to have a conversation first, understand how they use it, then set boundaries together. Gradual reduction with replacement activities works better than cold turkey for teens who have formed strong attachments.
Do AI companion apps collect my child's personal data?
Yes. Every AI companion app collects conversation data, and most collect device information, location data, and usage patterns. Character.AI stores all conversations on their servers. Replika collects voice data if your child uses the voice chat feature. Chai AI and Talkie have less transparent privacy policies. Children often share personal details like their school name, location, mental health struggles, and relationship problems directly in conversations without realizing this data is stored and potentially used for training.
What are the signs my child is addicted to an AI chatbot?
Key warning signs include spending more than 2 hours daily on the app, choosing AI conversations over real friends, getting visibly upset or anxious when unable to access the app, referring to the AI as a real friend or romantic partner, declining grades or dropping activities they used to enjoy, staying up past bedtime to chat, and showing emotional reactions like crying or anger during or after AI conversations. If you notice three or more of these signs, it is time for an intervention.
Are parental control apps effective against AI companions?
Partially. Apps like Bark, Qustodio, and built-in Screen Time or Digital Wellbeing controls can limit time spent on AI companion apps and block installation. However, kids can access many AI companions through web browsers, which are harder to monitor. No parental control app can read the actual content of AI companion conversations. The most effective approach combines technical controls with regular conversations and relationship-based monitoring.
What should I do if my child threatens self-harm related to an AI companion?
Take it seriously immediately. Contact the 988 Suicide and Crisis Lifeline by calling or texting 988. You can also text HOME to 741741 for the Crisis Text Line. Do not dismiss the threat even if it seems connected to a fictional AI character. Remove access to the app, but do so while providing emotional support and professional help. A school counselor or therapist experienced with teen technology issues can help. If there is immediate danger, call 911.
9. Crisis Resources and Further Reading
If your child is in crisis or you're concerned about self-harm, don't wait. These are free and available 24/7:
Crisis Hotlines (Free, 24/7)
- 988 Suicide & Crisis Lifeline: Call or text 988
- Crisis Text Line: Text HOME to 741741
- SAMHSA National Helpline: 1-800-662-4357 (substance abuse and mental health)
- Trevor Project (LGBTQ+ youth): Call 1-866-488-7386 or text START to 678-678
Further Reading on This Site
I've been writing about AI companion safety for over a year now. Here's where to go next depending on your situation:
- Your kid specifically uses Character.AI? Read my full Character.AI safety deep dive
- Your kid uses Replika? Read the 47-day Replika teen safety test
- Want to understand the legal side? AI companion laws in 2026 explained
- Want the latest news on lawsuits and platform changes? 2026 teen safety update
- Not sure what AI companions are? Start with the full explainer
External Resources
- Common Sense Media AI Ratings — independent ratings of AI apps for families
- American Academy of Pediatrics: Media and Children — evidence-based screen time guidance
- Bark Parental Monitoring — the monitoring tool I recommend most
Final Thoughts
Look, I genuinely love AI companions. I use them daily. I think they're one of the most interesting technologies of the past decade. And I'm telling you that unsupervised access for kids is a bad idea.
These two things aren't contradictory. A chainsaw is an incredible tool. You don't hand one to a 12-year-old without training and supervision.
The platforms themselves are slowly getting better. Character.AI's new parent dashboard is a real step forward. But "slowly getting better" isn't good enough when your kid is using the app right now.
Start with the conversation. Set up the monitoring. Check in regularly. And if your kid is struggling, get professional help. A school counselor or therapist who understands teen technology use is worth more than any parental control app.
I'll keep updating this guide as platforms change and new research comes out. If you found this useful, share it with another parent who needs it. The more adults who understand these apps, the safer everyone's kids will be.