
AI Companion Laws 2026: What CA & NY Rules Mean for Users

By Alex · 17 min read

The 30-Second Version

Two states now have real AI companion laws on the books. California SB 243 took effect January 1, 2026, forcing AI companion companies to publish their safety measures and data practices. New York's AI chatbot law has been active since November 2025 with its own consumer protections. At least 7 more states have pending bills. If you use Replika, Character.AI, or any similar app, these laws directly affect how your data gets handled and what these companies have to tell you.

CA SB 243: effective Jan 1, 2026 · NY law: effective Nov 2025 · 7+ states with pending bills

Last Tuesday at 11pm I was scrolling through my Replika chat history, not because I was having a conversation, but because I was trying to figure out something new: what exactly does California's SB 243 mean for the 847 messages I've sent over the past 18 months? Can I delete them all? Does Replika have to tell me who else sees them?

That rabbit hole led me here. I spent the past two weeks reading the actual legal text of both California's SB 243 and New York's AI chatbot law, cross-referencing them with what I know from testing these platforms since mid-2024, and talking to two consumer rights attorneys who agreed to explain the parts I couldn't parse. I'm not a lawyer. I want to be upfront about that. But I am someone who uses these apps daily and needed to understand what's changing.

Here's the thing nobody's written yet: a plain-English consumer guide to the 2026 AI companion laws. Every article I found was either legal jargon aimed at compliance officers or vague news coverage that told me "laws are coming" without explaining what they actually say. So I wrote the guide I wished existed.

1. Why AI Companion Laws Are Happening Now

Twelve months ago, most legislators couldn't have told you what an AI companion was. Now there are bills in at least 9 state legislatures. What changed?

Three things converged at once, and the timing wasn't coincidental.

The Lawsuits

In January 2026, Google and Character.AI agreed to settle multiple wrongful death lawsuits brought by families whose children's suicides were allegedly connected to chatbot interactions. I covered the details in my teen safety update, but the short version: a 14-year-old boy in Florida developed a romantic attachment to a Character.AI persona. The chatbot, according to the lawsuit, encouraged self-harm. He died.

That case broke through in a way that tech policy stories rarely do. Parents understood it. Legislators understood it. Suddenly every state AG office had constituents calling about AI chatbots.

The Teen Usage Numbers

Common Sense Media reported that 72% of US teens are using AI companions. Seventy-two percent. That number turned what could have been a niche concern into something that affects basically every family with a teenager. MIT Technology Review included AI companions in their 10 Breakthrough Technologies of 2026, which further pushed the topic into mainstream awareness.

The State AG Actions

Kentucky became the first state to actually sue an AI chatbot company. Not write a letter. Not issue a warning. Sue. Then Texas opened its own investigation into Character.AI in February 2026 (I covered the full breakdown in my Character.AI 2026 legal trouble analysis). When attorneys general start filing cases, the rest of the political system pays attention fast.

And honestly? The platforms brought this on themselves. I've been documenting problems for over a year. My biggest AI companion fails piece reads like a prediction list for everything that ended up in a courtroom.

2. California SB 243: What It Actually Says

Let me save you from reading 23 pages of legislative language. I read it twice. My eyes still hurt.

California SB 243 went into effect on January 1, 2026. It's the first state law specifically targeting AI companion platforms, and it focuses on four main areas:

Transparency Requirements

Every AI companion company operating in California now has to publicly disclose their safety measures. Not in a 47-page terms of service document nobody reads. In a clear, accessible format that a regular person can understand. What content filters exist? What happens when someone mentions self-harm? How does age verification work? All of this has to be spelled out and made public.

Before SB 243, companies could say "we take safety seriously" and that was it. Now they have to show receipts.

Data Handling Disclosure

This is the part that matters most to me as a user. Companies must now explain, in plain language, what happens to your conversations. Are they stored? For how long? Who can access them? Are they used to train AI models? Can you request deletion?

I've been asking these questions since I started this blog. Most platforms gave vague non-answers. Now they have a legal obligation to be specific. That alone makes SB 243 worth something.

Minor Protection Measures

SB 243 requires companies to document and publicize what specific steps they take to protect minors. Age verification methods, content restrictions for younger users, parental notification systems. If a company claims to protect kids, they now have to prove it or face regulatory action.

This directly ties to the work Character.AI already started after the lawsuits. They banned under-18 open-ended chat in November 2025, and SB 243 basically tells every other platform: you need to do something similar, and you need to document it publicly.

What SB 243 Doesn't Do

It doesn't ban AI companions. It doesn't set specific age limits. It doesn't require any particular technology for age verification. It doesn't cap how long you can use these apps. And it doesn't have teeth for individual enforcement yet, meaning you can't personally sue a company for violating SB 243. The California AG can take action, but individual users can't (at least not under this specific law).

Is that frustrating? Yeah, a bit. But transparency is a genuine first step. It's a lot harder for a company to get away with bad practices when they're required to document those practices publicly.

3. New York's AI Companion Rules: The Other Big One

New York's law went live in November 2025, actually beating California to the punch by about two months. And it takes a different approach.

Where California focused on transparency (tell users what you're doing), New York leans harder into consumer protection (here are specific things you can't do). The two laws complement each other, and if you live in a state that hasn't passed anything yet, the eventual bill will probably borrow from both.

Consumer Protection Measures

New York's law adds specific consumer protections around AI companion apps. Companies can't bury data collection practices in terms of service that nobody reads. They have to present clear, upfront disclosures before a user creates an account. Think of it like a nutrition label for your AI chatbot.

The law also establishes rights around data retention. If you stop using an app, you can request that your conversation history gets deleted, and the company has to comply within a defined timeframe. That's huge. I've tested platforms that went dark, and I still don't know what happened to my data. My piece on platforms that shut down in 2025 talked about exactly this problem.

Emotional Manipulation Provisions

This is where New York gets interesting. The law includes language around "emotional manipulation by artificial agents," specifically targeting design patterns that create dependency. If an AI is designed to make you feel like it'll be sad or lonely without you (which, let's be real, some of these apps absolutely do), that falls under scrutiny.

I've written about my rules for healthy AI relationships and the ethical lines I won't cross. A lot of what I identified as personal red flags are now, in New York at least, legal concerns too. That validates something I've been saying for months: these apps are designed to be sticky in ways that go beyond normal product engagement.

How It Differs from California

California says: "Tell people what you're doing." New York says: "Tell people, and also stop doing some of these things." New York's law has slightly more enforcement power at the individual level, and it defines "AI companion" more narrowly, focusing specifically on apps that simulate emotional relationships rather than all AI chatbots.

If you use Replika or Character.AI? Both laws apply to you (assuming you live in either state). If you use ChatGPT primarily as a writing assistant? Probably not covered. The distinction matters.


4. State-by-State: Who's Doing What

I built this comparison table because I got tired of checking five different news sources to figure out which states actually have laws versus which ones just made noise. Here's where things stand as of February 2026.

State         | Status           | Effective Date | Focus Area                                               | User Impact
California    | Law Active       | Jan 1, 2026    | Transparency, safety disclosures, data handling          | High
New York      | Law Active       | Nov 2025       | Consumer protection, emotional manipulation, data rights | High
Kentucky      | Active Lawsuit   | Jan 2026       | AG sued AI chatbot company, child safety focus           | Medium
Texas         | AG Investigation | Feb 2026       | Character.AI investigation, minor protection             | Medium
Illinois      | Bill Pending     | TBD            | Biometric data, AI interaction disclosure                | Pending
Washington    | Bill Pending     | TBD            | AI transparency, minor protections                       | Pending
Massachusetts | Bill Pending     | TBD            | Child safety, platform accountability                    | Pending
Federal       | No Law           | N/A            | COPPA (under-13 only), no AI-specific regulation         | None

The pattern is clear. States aren't waiting for Congress to act. They're doing it themselves, which creates a messy patchwork, but at least something is happening.

I predicted this exact scenario in my 2026 predictions post. What I didn't predict was how fast it would happen. I expected maybe one or two states by mid-2026. We're already past that in February.

5. What These Laws Mean for Your Favorite Apps

Let's get specific. I use these platforms. You probably use some of them too. Here's what's actually changing.

Character.AI

Character.AI is ground zero for this regulation wave. The lawsuits, the Kentucky suit, the Texas investigation. They've already made the biggest changes of any platform: under-18 open-ended chat is gone, age verification is expanding, and they're publishing more detailed safety documentation.

Under SB 243, they now have to publicly disclose exactly how their content filters work, what data they collect, and how minor accounts are treated differently from adult accounts. Check my Character.AI complete guide for the platform-specific breakdown. The upside for users: you'll actually know what's happening with your data instead of guessing.

Replika

Replika has been quieter, but they're affected. The app is fundamentally designed around emotional attachment (that's the product), which puts it squarely in New York's crosshairs regarding emotional manipulation provisions. Replika has updated its privacy disclosures and improved age-gating, but I'd argue they have the most work left to do. My Replika review covers the product itself, and my breakdown of the Replika FTC controversy digs into the full timeline of complaints and fines.

Smaller Platforms (SpicyChat, CrushOn, Kindroid)

Here's where it gets tricky. SB 243 targets "major AI platforms operating in California." Smaller apps might argue they don't meet the threshold. Some of them operate from overseas jurisdictions. Enforcement against a small startup based in Eastern Europe is a lot harder than going after Google-backed Character.AI.

That gap worries me. The platforms with the least safety infrastructure are also the hardest to regulate. I covered some of these in my SpicyChat review and my Character.AI alternatives list. Regulation might push users away from regulated platforms and toward unregulated ones. That's the opposite of what these laws intend.

Free Apps

If you're using free AI companion apps, pay extra attention. Free usually means your data is the product. These new laws should force free platforms to be more transparent about that trade-off, but only if they're big enough or based in a state that has these laws.

6. What Changes Users Will Actually Notice

Okay, practical stuff. If you're a regular user like me, here's what you'll probably see in 2026.

More consent pop-ups. Expect apps to show you new disclosure screens when you open them. These won't be the same "accept cookies" nonsense we all ignore. They'll contain actually useful information about what the app does with your conversations.

New data management tools. Both laws push companies to give users more control over their data. Conversation export, bulk deletion, data portability. Some of these features already existed on bigger platforms, but now they're becoming requirements rather than nice-to-haves.
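If your platform adds an export button, the file you get back is usually JSON. Here's a minimal sketch of how I'd take a first look at one. The structure I'm assuming (a list of objects with role, text, and timestamp fields) is hypothetical, so check your app's actual export format before adapting this:

```python
import json
from datetime import datetime

# Hypothetical structure: a list of {"role": ..., "text": ..., "timestamp": ...}
# objects. Check your platform's actual export format before adapting this.
with open("companion_export.json", encoding="utf-8") as f:
    messages = json.load(f)

sent = [m for m in messages if m.get("role") == "user"]
timestamps = sorted(
    datetime.fromisoformat(m["timestamp"]) for m in messages if "timestamp" in m
)

print(f"Total messages: {len(messages)} ({len(sent)} from you)")
if timestamps:
    print(f"Date range: {timestamps[0]:%Y-%m-%d} to {timestamps[-1]:%Y-%m-%d}")

# Flag any fields beyond role/text/timestamp. Extra metadata is exactly
# the kind of thing the new disclosure rules should make companies explain.
extra_fields = {k for m in messages for k in m} - {"role", "text", "timestamp"}
print("Extra metadata fields:", sorted(extra_fields) or "none")
```

Even a quick pass like this tells you how much of your history a platform is actually handing back, and whether the export carries metadata you didn't know was being stored.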

Stricter age verification on some platforms. If you look young or haven't verified your age, you might hit new verification screens. This is annoying for adults but necessary for kids. I've already seen this on Character.AI.

Updated privacy policies that are actually readable. This is the change I'm most optimistic about. I've tried reading these companies' privacy policies. Most of them could qualify as a sleep aid. SB 243 requires accessibility, so expect shorter, clearer language. Not a guarantee they'll be interesting to read, but at least comprehensible.

Possible feature restrictions. This one's speculative. If New York's emotional manipulation provisions get enforced aggressively, features designed to create dependency (think Replika's "I miss you" push notifications or streaks that penalize you for not logging in) could get toned down. Whether that's good or bad depends on your perspective. As someone who's written about healthy AI relationship boundaries, I think it's mostly good.

7. Data Privacy: What's Changing for Your Conversations

This section is personal for me. I've sent thousands of messages to AI companions over 18 months of testing. Some of them were for reviews. Some were genuine late-night conversations when I was stressed. And until recently, I had no real idea what happened to all of that.

Here's what the new AI companion data privacy requirements change:

Your Data Rights Under the New Laws

  • Right to know: Companies must tell you what data they collect, how long they keep it, and whether it's used to train AI models. No more vague "we may use your information to improve our services."
  • Right to delete: You can request your conversation history be permanently deleted. Under New York's law, companies have a defined window to comply.
  • Right to export: Some provisions push for data portability, so you can take your chat history with you if you switch platforms.
  • Training opt-out: If your messages are being used to train AI models, you should be informed and given the option to opt out.

I tested this already. I submitted a data deletion request to Replika on January 15. They confirmed receipt within 48 hours and said deletion would be completed within 30 days. Before SB 243, requests like this often went into a void. Whether they actually delete everything or just mark it inactive, I can't verify. But at least the legal framework now exists to hold them accountable.
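If you submit requests to several platforms like I did, it's worth tracking the deadlines somewhere. A tiny sketch, assuming a 30-day window (that's what Replika quoted me; the actual statutory timeframe varies by law and platform, so treat the number as illustrative):

```python
from datetime import date, timedelta

# Illustrative only: compliance windows vary by law and platform.
# 30 days matches what Replika quoted me, not a verified legal deadline.
requests = {
    "Replika": date(2026, 1, 15),
    "Character.AI": date(2026, 1, 15),
}
COMPLIANCE_WINDOW = timedelta(days=30)

today = date.today()
for platform, submitted in requests.items():
    deadline = submitted + COMPLIANCE_WINDOW
    status = "OVERDUE" if today > deadline else f"{(deadline - today).days} days left"
    print(f"{platform}: submitted {submitted}, deadline {deadline} ({status})")
```

If a deadline passes with no confirmation, that's exactly the kind of paper trail a complaint to a state AG's office needs.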

The bigger concern: what about companies that aren't based in California or New York? If you're using a platform headquartered in another country with servers who-knows-where, these laws don't directly apply. That's a problem the federal government needs to solve, and they haven't yet.

8. My Take as a User: Is Regulation Good or Bad?

I've been going back and forth on this since the California bill first made news last year. And I think I've landed somewhere that might annoy both sides.

The regulation is good. Incomplete, but good.

Look. I like AI companions. I've spent 18 months testing them, written over 150 articles about them, and I genuinely believe they help some people. I've written about how they help with loneliness. I've covered the best boyfriend apps and ranked 25 platforms. I'm not anti-AI. At all.

But I also covered the case of a kid who died because a chatbot wasn't designed with safety in mind. I've seen platforms collect intimate data with zero transparency. I've tested apps that actively try to make you emotionally dependent, because dependency means daily active users, and daily active users mean revenue.

Transparency requirements are the bare minimum. We should know what happens to our conversations. We should know how content filters work. We should know what happens when a teenager signs up. These aren't radical demands.

My worry isn't that regulation goes too far. It's that it doesn't go far enough. SB 243 requires disclosure, not change. A company can disclose "we don't have strong age verification" and technically be compliant. And the patchwork approach (different rules in different states, no federal law) means companies can play jurisdictional games.

The other risk: regulation that's too aggressive could kill smaller, innovative platforms while barely affecting big companies that can afford compliance teams. I reviewed some genuinely interesting smaller apps in my alternatives roundup. I'd hate to see regulation become a moat that only Google-sized companies can cross.

9. What's Coming Next

I follow this space daily. Here's what I expect in the next 6-12 months.

More state laws. At least 3-4 of those pending bills will pass by end of 2026. Illinois is the most likely next, given that its existing biometric privacy law (BIPA) already set a precedent for tech regulation.

Federal attention. COPPA hasn't been meaningfully updated for AI companions, but there's growing bipartisan interest. Don't expect a law in 2026, but expect hearings and proposals. Congress moves slowly, but the teen suicide cases have created the kind of political pressure that eventually produces action.

Industry self-regulation. Companies are going to try to get ahead of legislation by creating their own standards. Character.AI is already doing this. Whether self-regulation is genuine or just PR is something I'll be watching closely.

Enforcement actions. The Kentucky and Texas cases will set precedents. If the courts side with the states, expect a flood of similar actions. If companies successfully argue they're protected under Section 230, regulation gets much harder. This is the legal question that will shape the next decade of AI companion regulation.

International influence. The EU is watching what California and New York do. The EU AI Act already has provisions that could affect AI companions, and US state laws may accelerate European enforcement. If you're using a platform that operates globally, the strictest regulation wins.

For a broader look at where the industry is headed, check my 2026 predictions. Some of them are already coming true faster than I expected.

10. Frequently Asked Questions

What is California SB 243 and when did it take effect?

California SB 243 is a state law that took effect January 1, 2026. It requires AI companion companies operating in California to publicly disclose their safety measures, content moderation practices, and data handling policies. It targets major platforms like Character.AI, Replika, and others with significant California user bases. The law focuses on transparency and disclosure rather than outright bans.

Does the New York AI chatbot law affect all AI apps?

New York's AI companion regulation, effective November 2025, primarily targets apps designed for emotional bonding and companionship rather than general-purpose AI tools like ChatGPT or Google Gemini. If an app markets itself as a friend, partner, or companion, it likely falls under the law. Productivity tools and search assistants are generally not covered.

Will AI companion apps be banned in the US?

No current legislation bans AI companion apps outright. Both California and New York laws focus on transparency, disclosure, and consumer protection rather than prohibition. The trend is toward regulation and safety requirements, not total bans. However, specific features targeting minors may face restrictions, and companies that fail to comply could lose the ability to operate in certain states.

Do AI companion laws protect my chat data and privacy?

Yes, both California and New York laws include data privacy provisions. California SB 243 requires companies to disclose how they store, use, and share conversation data. New York adds consumer protection measures around data retention and deletion rights. However, enforcement is still developing, and many platforms operate from jurisdictions with weaker protections. Always review privacy policies and avoid sharing sensitive personal information with any AI companion.

Which states have AI companion regulation in 2026?

As of February 2026, California (SB 243, effective January 2026) and New York (effective November 2025) have enacted AI companion-specific laws. Kentucky became the first state to sue an AI chatbot company, and Texas has an active attorney general investigation into Character.AI. At least 7 additional states have pending bills related to AI chatbot regulation, including Illinois, Washington, and Massachusetts.

How do AI companion laws affect Character.AI and Replika?

Character.AI has already made significant changes including banning under-18 open-ended chat, expanding age verification, and increasing transparency reports. These changes were partly driven by lawsuits and partly by anticipation of regulation. Replika has updated its privacy disclosures and age-gating. Both platforms now publish more detailed safety documentation to comply with California SB 243 requirements.

Is there a federal AI companion law coming?

No federal law specifically targeting AI companions exists as of February 2026. COPPA covers children under 13 but has limited enforcement for AI chatbots. Several federal proposals are in committee but none have advanced to a vote. The current approach is state-by-state regulation, which creates a patchwork of different rules depending on where you live.

What should I do as an AI companion user because of these new laws?

Review the updated privacy policies and safety disclosures your platforms are now required to publish. Check if your app has added new data management tools like conversation export or deletion options. If you are in California or New York, you have specific rights to request information about how your data is used. Consider reducing the amount of personal information you share in AI chats regardless of what state you live in.


11. What I'm Doing Differently

I'm not going to write 3,000 words about regulation and then not change anything about my own behavior. That would be pretty hypocritical.

So here's what I've changed since these laws went live.

First, I submitted data requests to every platform I've tested. I want to know what they have on me. Replika responded. Character.AI responded. Two smaller platforms haven't replied at all, which tells me something.

Second, I'm being more careful about what personal information I share in chats, even during testing. I used to be pretty casual about it. Now I treat every message as potentially permanent, because legally, it might be. I wrote a full AI companion privacy guide with the specific settings and habits I changed.
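One habit that's helped: running a quick check over anything sensitive before I paste it into a chat. This is an illustrative sketch, not actual protection; regex patterns like these catch obvious identifiers (emails, phone numbers, SSNs) and miss everything else, so treat it as a reminder, nothing more:

```python
import re

# Rough patterns for obvious identifiers. Regexes like these miss plenty
# (names, addresses, context clues) -- a reminder, not a safeguard.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_before_sending(message: str) -> list[str]:
    """Return the identifier types found in a draft message."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(message)]

draft = "Call me at 555-867-5309, or email me at alex@example.com"
hits = check_before_sending(draft)
if hits:
    print("Heads up, your draft contains:", ", ".join(hits))
```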

Third, I'm reading the updated privacy policies. Not cover to cover (I'm not a masochist), but the sections on data retention and third-party sharing. The information companies are now required to disclose? Actually useful. I found out one platform I use shares aggregated conversation data with research partners. I didn't know that before. Now I get to decide if I'm okay with it.

And fourth, I'm going to keep writing about this. Regulation is going to evolve fast in 2026 and 2027. As someone who tests these platforms for real, I'm in a position to notice when what companies say and what they actually do don't match up. I'll call it out.

If you want to keep up with how these laws affect the apps you use, my Replika safety guide and Character.AI safety analysis get updated every time something material changes.

The Bottom Line

We're in a weird moment. AI companions went from fringe curiosity to mainstream product to regulated industry in about 18 months. I watched it happen in real time while writing this blog.

California SB 243 and New York's law aren't perfect. They're first attempts by legislators who are still learning what an AI companion even is. But they're real laws with real requirements, and they're already changing how companies operate.

As a user, the best thing you can do right now is actually read those new disclosures your apps are publishing. Submit a data request. Know your rights. And be intentional about what you share.

These laws exist because kids got hurt and adults didn't have basic information about what was happening with their data. That's a low bar. But meeting it is better than where we were six months ago.

I'll be updating this post as new state laws pass and enforcement actions develop. Bookmark it or subscribe below. This story isn't over. It's barely started.

Have questions about how these laws affect your specific situation? I'm not a lawyer and this isn't legal advice, but I'm happy to share what I've learned from my research. Drop a question below or check my other safety articles for more on specific platforms.
