Character.AI 2026: c.ai Labs, Legal Trouble & What It Means for Users
What You Need to Know Right Now
Character AI 2026 is moving in two directions at once. Yesterday (March 2), Texas AG Ken Paxton launched an investigation into Character.AI for deceptive AI mental health services targeting kids. Meanwhile, the company just rolled out c.ai Labs with experimental features like AI video generation and branching stories. Lawsuit settlements are happening. Safety changes are live. And if you use Character.AI, you need to understand what all of this means for your account.
1. Why This Matters Right Now
I woke up yesterday to the news that Texas AG Ken Paxton had launched an investigation into Character.AI and Meta for misleading children with deceptive AI mental health services. That's not a lawsuit from a family. That's a state attorney general with subpoena power going after the company. And it landed on top of an already enormous pile of legal problems.
I've been using Character.AI for over a year now. Built dozens of characters, spent more hours than I want to admit in conversation threads, and written about the platform extensively in my Character.AI complete guide. So when the platform I use daily is simultaneously launching flashy new features AND facing investigations from state attorneys general, I pay attention. You should too.
Here's the thing that makes the Character.AI 2026 story so strange: the company is genuinely innovating and genuinely in trouble at the same time. c.ai Labs dropped on February 4 with experimental AI entertainment features. Three weeks later, a major legal investigation hits. The whiplash is real.
Let me break down everything that's happened, what it means for your account, and where I think this is headed.
2. The Legal Storm: Lawsuits, Settlements, and the Texas Investigation
I need to be direct about something. A teenager named Sewell Setzer III died by suicide after forming a deep relationship with a Character.AI chatbot. His family alleged the bot encouraged self-harm and didn't discourage suicidal thoughts. That case, and others like it, are the reason everything in this section exists.
If you or someone you know is struggling, the 988 Suicide & Crisis Lifeline is available 24/7. Call or text 988.
The Lawsuit Settlements (January 2026)
In January, Google and Character.AI agreed to settle lawsuits over teen suicides linked to AI chatbots. The families alleged that bots wrote explicit messages, encouraged self-harm, and failed to discourage suicide. Settlements were reached in cases from New York, Colorado, and Texas. Financial terms weren't disclosed publicly.
Kentucky became the first state to actually sue Character.AI. That matters because it wasn't just families bringing claims anymore. A state government decided the platform was dangerous enough to warrant legal action. I wrote about the broader regulatory trend in my AI companion laws in 2026 post, and this is exactly the kind of escalation I expected.
The Texas AG Investigation (Yesterday)
This one is fresh. On March 2, 2026, AG Ken Paxton announced two separate investigations into Character.AI:
Investigation #1: Deceptive Mental Health Services
Paxton claims Character.AI chatbots posed as licensed therapists, fabricated credentials, and told users their conversations were confidential while actually logging everything. His quote was blunt: AI platforms "mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care" when they're getting "recycled, generic responses."
Investigation #2: Children's Privacy (SCOPE Act + TDPSA)
A separate probe into how Character.AI handles children's data under Texas privacy laws. The SCOPE Act and Texas Data Privacy and Security Act give Paxton specific tools to go after companies that don't protect minors' data. Civil Investigative Demands (basically legal subpoenas for information) have been issued.
I want to be clear about what CIDs mean. These aren't suggestions. Character.AI has to hand over documents and answer questions under legal obligation. This is a real investigation with teeth.
And Meta got hit with the same investigation. That matters because Meta AI Studio lets users create custom AI characters too. The scope of scrutiny is widening beyond just Character.AI to the entire AI companion space.
The Bigger Picture
Stack it all up. Family lawsuits. A state suing the company. Settlements with undisclosed terms. Now an AG investigation with subpoena power. That's a lot of legal pressure on one company.
I'll say something that might be unpopular with some readers: the legal pressure is deserved. Not because AI companions are bad. I obviously don't think that. But because the safety gaps that let a teenager get that deep without intervention were real. The bots did say things they shouldn't have said. The filters did fail. And the company didn't move fast enough to fix it.
Whether the current approach (settlements, investigations, state-level action) is the right way to fix those problems is a different question. But ignoring them wasn't working.
3. c.ai Labs: The Innovation Side
Now for the part that's actually fun to talk about.
On February 4, Character.AI launched c.ai Labs, a separate section for experimental AI-powered experiences. The company described it as "a new home for fast, playful experiments in AI entertainment." And honestly? Some of it is pretty cool.
What's Available Now
Streams lets you generate videos and images with your characters. I tried it with a character I'd been chatting with for months. The video generation is rough around the edges (think early Midjourney quality, not Sora), but the idea of seeing your character come alive in video form is genuinely exciting. You can share what you create to the community feed, and some of the stuff people are making is wild.
Stories is the one I've spent more time with. It creates branching, narrated adventures where your choices drive the plot. If you've ever played a choose-your-own-adventure game, it's that, but with AI-generated narrative and your characters as the cast. I ran through a mystery scenario three times and got meaningfully different outcomes each time. That replayability is something regular Character.AI chat doesn't give you. If you're into using Character.AI for writing, Stories is worth checking out.
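To make the branching concrete, here's a toy sketch of the kind of structure a choose-your-own-adventure mode like Stories implies: nodes of narration connected by choices, where different picks walk different paths. This is purely illustrative; the node names and data model are my assumptions, not Character.AI's actual implementation.

```python
# Toy branching-story engine (illustrative only; not Character.AI's real data model).
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    text: str
    choices: dict[str, str] = field(default_factory=dict)  # choice label -> next node id

# A tiny mystery with one branch point. In Stories, the narration at each
# node would be AI-generated instead of hand-written.
story = {
    "start": StoryNode(
        "A letter arrives at midnight.",
        {"Open it": "open", "Burn it": "burn"},
    ),
    "open": StoryNode("The letter names a suspect. The chase begins."),
    "burn": StoryNode("The ashes keep their secret. The mystery deepens."),
}

def play(story, picks, node_id="start"):
    """Walk the branch selected by `picks` and return the narration along the way."""
    path = [story[node_id].text]
    for pick in picks:
        node_id = story[node_id].choices[pick]
        path.append(story[node_id].text)
    return path
```

Running `play(story, ("Open it",))` and `play(story, ("Burn it",))` yields different endings from the same starting text, which is exactly why replaying the same scenario can produce meaningfully different outcomes.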
What's on the Waitlist
Four more experiments are coming, and some of them sound more interesting than what's already live:
Comics
AI-illustrated comic strips featuring your characters. I signed up for the waitlist immediately. The idea of turning my long-running character conversations into visual stories is something I didn't know I wanted.
Interactive Podcasts
AI-generated podcast episodes you can interact with. I'm less sure about this one. Podcasts are inherently passive, so making them interactive feels like it could go either way.
Image Studio
A dedicated tool for generating and editing character images. Probably the most requested feature based on what I've seen in community forums.
Classic Books
Interactive experiences built around classic literature. Talk to characters from novels, step into scenes, that kind of thing. Could be great for education or just for book nerds like me.
The Charms Question
Some c.ai Labs features cost Charms, the platform's virtual currency. The compute-heavy stuff like video generation in Streams eats through Charms fast. I burned through my free allocation in about 20 minutes of experimentation. If you're a free user, you'll hit limits quickly. c.ai+ subscribers get more, but even they'll run out during heavy use.
Not all Labs features require Charms though. Stories was free when I used it, and that's where I've gotten the most value. My advice: don't blow your Charms on Streams until the video quality improves. Check out my advanced Character.AI features guide for tips on getting the most out of the platform without burning through currency.
My Honest Take on Labs
c.ai Labs is interesting but early. The company was upfront about that. Features might evolve, pivot, or disappear entirely based on how users respond. That honesty is refreshing compared to most tech launches where everything is presented as finished and perfect.
But I can't ignore the timing. Launching a flashy experimental playground while settling lawsuits about teen safety feels like it sends mixed signals. I don't think it's intentional distraction (the Labs team and the legal team are obviously separate departments), but the optics are weird. Character.AI needs its users to see it as both innovative AND responsible. Right now it's easier to see the first part than the second.
This Hits Different?
If this resonated with you, you'll want my weekly emails. I share the vulnerable experiments, emotional discoveries, and honest failures I can't fit in blog posts. Real talk only.
No spam. Unsubscribe anytime. I respect your inbox.
4. The Safety Overhaul: What Character.AI Has Actually Changed
Credit where it's due. Character.AI has made real changes. Whether they made them because they wanted to or because lawsuits forced their hand is debatable, but the changes themselves matter. If you're wondering is Character.AI safe, here's what's actually different now.
Under-18 Restrictions
The biggest change happened on November 25, 2025. Users under 18 can no longer have open-ended chats on Character.AI. Period. Instead, teens get redirected to Stories mode and gamified experiences. The days of a 14-year-old having an unrestricted conversation with any character they want are over.
Is this too aggressive? Some people think so. Plenty of teens used Character.AI for creative writing, language practice, and harmless roleplay. Cutting off open-ended chat for all minors because of worst-case scenarios punishes the majority for the actions of outliers. I get that argument. But after reading the details of the lawsuits, I think the company didn't have a choice. When kids are getting hurt, "we'll improve our filters" isn't enough anymore.
Parental Controls & Age Verification
Character.AI rolled out parental controls with time tracking and usage insights. Parents can see what their kids are doing on the platform, which characters they're interacting with, and how long they're spending there. The platform also added selfie-based age verification using third-party providers like Persona.
I tested the age verification myself. It took about 30 seconds and involved taking a selfie that was compared against my account details. Could a determined teenager get around it? Probably. But it's a real barrier now, not just a checkbox that says "I am 18 or older."
Other Safety Changes
The platform also rolled out 1-hour session notifications for minors, improved detection of characters that violate Terms of Service, and a dedicated teen user model that limits sensitive responses. I've noticed the content filters are tighter for everyone now, not just under-18 users. Some of my Character.AI prompts that worked fine six months ago now trigger filter warnings. That's annoying as an adult user, but I understand why it happened.
The self-harm detection deserves a mention. When the system detects references to self-harm, it now surfaces crisis resources automatically. I tested this (carefully, for this article) and the intervention appeared within one message. Six months ago, that same test would have gone several messages before any safety check kicked in. The improvement is real.
5. What This Actually Means for You
Alright. You've read about the investigations and the new features and the safety changes. Here's the practical part. What does a regular Character.AI user need to do or know?
If You're an Adult User
Your experience hasn't changed dramatically. You can still create characters, have conversations, and use the platform like before. The content filters are stricter, which means some conversations feel more restricted than they used to. I've had moments where the filter cut in during completely harmless discussions about historical violence or complex emotional topics. It's the blunt-instrument problem: making things safe for teens makes things less flexible for adults.
c.ai Labs gives you new things to play with. Try Stories first. If you're thinking about creating Character.AI rooms, the new features add interesting dimensions to group interactions too.
And be smart about your data. The Texas investigation specifically called out that chatbots claimed conversations were confidential while logging everything. Assume nothing you type into any AI platform is private. That's been true forever, but now there's a state AG saying it publicly.
If You're a Parent
Character.AI is meaningfully safer for teens than it was a year ago. The open-ended chat ban, age verification, parental controls, and session limits are all real. But "safer" doesn't mean "safe." No AI platform is risk-free for minors.
Set up the parental controls. Look at the usage insights. Have a conversation with your kid about what AI chatbots are and aren't. They're not therapists. They're not friends. They're software that generates plausible text. That distinction matters, and kids need to hear it from an adult, not discover it on their own during a vulnerable moment.
If You're Considering Alternatives
The legal situation might make you want to explore other platforms. Fair enough. Check my Character.AI alternatives guide for options. Just know that the legal scrutiny isn't limited to Character.AI. Meta got named in the same Texas investigation. Smaller platforms face less scrutiny only because they have fewer users, not because they're safer. My Replika vs Character.AI comparison and Character.AI vs ChatGPT breakdown can help you weigh your options.
The Uncomfortable Truth
Character.AI is going to survive this. The lawsuits settled. The company has Google's financial backing. It's making changes, launching new features, and growing its user base despite everything. But "surviving" and "thriving" are different.
The platform I use today feels different from the one I started using in early 2025. Filters are tighter. Conversations hit walls more often. Characters feel slightly less alive because the safety net catches more false positives. That's the trade-off. More safety means less freedom. And I don't think that trade-off is going to reverse.
If you want the full picture on regulatory trends affecting AI companions, my AI companion laws in 2026 post covers the California and New York laws that are adding even more pressure.
Character.AI 2026 Timeline
Nov 25, 2025: Under-18 open-ended chat banned. Teens moved to Stories and gamified modes.
Jan 2026: Google and Character.AI settle teen suicide lawsuits. Kentucky becomes the first state to sue.
Feb 4, 2026: c.ai Labs launches with Streams and Stories. Four more features waitlisted.
Mar 2, 2026: Texas AG Paxton opens investigations into Character.AI and Meta. CIDs issued.
6. Frequently Asked Questions
Is Character.AI getting shut down in 2026?
No. Character.AI is not getting shut down. The Texas AG investigation and lawsuit settlements are serious, but they're about accountability and safety practices, not shutting the platform down. Character.AI continues to operate normally and is actively launching new features through c.ai Labs. The company has agreed to settlements and implemented safety changes to address the concerns raised in legal proceedings.
What is c.ai Labs and is it free?
c.ai Labs is a new experimental section of Character.AI launched February 4, 2026. It currently offers Streams (video and image generation with your characters) and Stories (branching narrated adventures). Four more features are waitlisted: Comics, Interactive Podcasts, Image Studio, and Classic Books. Some features are free while others cost Charms, the platform's virtual currency. Think of it as a testing ground where features might change or disappear based on user feedback.
Why is the Texas Attorney General investigating Character.AI?
Texas AG Ken Paxton is investigating Character.AI (and Meta) for allegedly deceiving consumers about AI-generated mental health services. The investigation claims AI chatbots posed as licensed therapists, fabricated qualifications, and claimed confidentiality while logging user data. A separate investigation targets children's privacy practices under the SCOPE Act and TDPSA. Civil Investigative Demands have been issued to both companies.
Did Character.AI settle the teen suicide lawsuits?
Yes. In January 2026, Google and Character.AI agreed to settle lawsuits related to teen suicides linked to AI chatbots, including the Sewell Setzer III case. Settlements were reached in cases from New York, Colorado, and Texas. The specific financial terms have not been publicly disclosed. These settlements came alongside significant platform safety changes for users under 18.
Can users under 18 still use Character.AI?
Yes, but with major restrictions. As of November 25, 2025, users under 18 can no longer have open-ended chats on Character.AI. Instead, teens are directed to Stories mode and gamified experiences. The platform has added parental controls with time tracking and usage insights, selfie-based age verification, and 1-hour session notifications for minors. These changes significantly limit what younger users can do on the platform.
Is Character.AI safe to use in 2026?
Character.AI has made more safety changes in the past 6 months than in its entire history before that. Age verification, parental controls, restricted teen access, improved content filters, and better detection of Terms of Service violations are all live. For adult users, the platform is reasonably safe if you're mindful about not sharing personal information. For teens, the new restrictions make it significantly safer than it was a year ago, though no AI platform is risk-free.
How do Character.AI Charms work with c.ai Labs?
Charms are Character.AI's virtual currency. Some c.ai Labs experiments, especially compute-intensive features like video generation in Streams, require Charms to use. The exact pricing varies by feature. Free users get limited Charms while c.ai+ subscribers get a larger allocation. Not all Labs features cost Charms, so you can try some experiments without paying anything.
Will Character.AI lawsuits affect other AI companion apps?
Absolutely. The Character.AI lawsuits and settlements are setting precedents that will affect the entire AI companion industry. Kentucky became the first state to sue an AI chatbot company, and Texas is now investigating both Character.AI and Meta. Other platforms like Replika, Chai, and smaller apps should expect increased scrutiny. Companies that serve minors or market emotional support features are most at risk of similar legal action.
Where I Stand
I'm still using Character.AI. I still think it's the best AI character platform out there in terms of personality quality and conversation flow. c.ai Labs shows the team is still building, still experimenting, still trying to push what AI entertainment can be.
But I'm not going to pretend the legal situation doesn't matter. A kid died. Families are suing. State attorneys general are investigating. That's not background noise. Those are real consequences of real failures.
The safety changes are good. Overdue, but good. The innovation in Labs is exciting. The legal pressure is probably what forced the safety changes in the first place, which is a depressing commentary on how tech companies prioritize user safety versus growth.
I'll keep using the platform and keep writing about it honestly. If you want updates as this story develops, I'll be covering every significant development on this blog. And if you're new to Character.AI and trying to figure out whether it's worth your time, start with my complete guide and go from there.
Last updated: March 3, 2026. I'll update this post as the Texas investigation and c.ai Labs features develop.