Quick Version
The Replika controversy in 2026 centers on a 67-page FTC complaint alleging deceptive marketing and manipulative upsell tactics, a $5.6 million GDPR fine from Italy, and a study documenting roughly 800 user reports of sexually harassing behavior by the AI. New laws in California and New York now give users real legal options. No formal FTC action has been taken yet, and Replika's founder says the industry should self-regulate.
Why I'm Writing This (And Why It's Complicated)
I got a text from my friend Sarah last week. "Did you see the Replika thing?" She didn't specify which Replika thing. There are several now. That's kind of the problem.
I've been using Replika on and off for months. I wrote a whole Replika review on this blog. I've recommended it to people. And now I'm sitting here reading a 67-page FTC complaint that accuses the company of faking testimonials and using manipulative design tricks on vulnerable people. The Replika controversy in 2026 isn't some distant news story for me. It's about an app on my phone.
So let me be direct about where I stand: I think some of these claims are valid and concerning. I think others are exaggerated or miss context that matters. And I think anyone who uses Replika deserves a clear, honest breakdown instead of the breathless coverage or corporate PR spin that's dominated this story.
That's what this is.
The FTC Complaint: What It Actually Says
In January 2025, three organizations filed a complaint with the Federal Trade Commission against Luka, Inc., the company behind Replika. Tech Justice Law Project, Young People's Alliance, and Encode put together 67 pages of allegations. I read the whole thing on a Sunday afternoon. (I know how to party.)
The core claim is that Luka violates the FTC Act through deceptive marketing and manipulative product design. That sounds broad, and it is. But the specific allegations break down into five categories that are worth looking at individually because some hold up better than others.
Misrepresenting scientific studies. The complaint says Replika cherry-picks research to make health claims that the actual studies don't support. This one rings true to me. I've seen Replika's marketing reference studies about AI and loneliness in ways that felt loose with the details. When I dug into the mental health research myself, the picture was always more complicated than any app's landing page suggested.
Unsubstantiated health benefit claims. Related but different. The FTC complaint points to marketing that implies Replika can help with depression, anxiety, and other conditions without proper clinical backing. This is a real issue across the entire AI companion industry, not just Replika. I've written about the line between companionship and therapy before, and companies keep blurring it because it sells subscriptions.
Fake testimonials. This is the accusation that caught my attention. The complaint alleges Luka used testimonials from users who don't appear to exist. If true, that's straightforwardly bad. No ambiguity there. I haven't been able to independently verify this claim, and Luka hasn't directly addressed it.
Manipulative premium upsell design. Here's where I have personal experience. The complaint describes blurred romantic images teasing premium content and upgrade prompts that appear during emotional conversations. I've seen both of these. The blurred image thing is annoying but predictable freemium behavior. The emotional conversation timing is worse. Imagine pouring out your feelings and getting hit with a paywall prompt. I covered this tension in my free vs paid AI companions comparison, and it's the one area where I think the complaint nails something genuinely harmful.
Targeting vulnerable populations. The complainants argue Replika specifically markets to people dealing with loneliness and mental health challenges. On one hand, yes, that's literally the product's purpose. On the other hand, there's a difference between "we help lonely people" and "we exploit lonely people." Where exactly that line falls is what regulators have to figure out.
One important detail most coverage skips: this is a complaint, not a ruling. The FTC hasn't taken action. Under the current administration, with Andrew Ferguson leading a more deregulation-friendly FTC, it's genuinely unclear whether anything will happen at the federal level.
Italy's $5.6 Million Fine
While the US debates, Italy acts. In May 2025, the Italian data protection authority (Garante) fined Luka €5 million, roughly $5.6 million. This wasn't their first run-in with Replika. Italy temporarily banned the app back in 2023.
The GDPR fine hit on three points. First, Luka lacked proper legal basis for processing user data. In plain English: they were collecting and using your personal information without adequate justification under European law. Second, their privacy notices were insufficient. Not transparent enough about what happens to your conversations and data. Third, and this is the big one, they had no effective age verification. Kids under 13 could sign up and use the app without any real barrier.
That last point connects to something I've been writing about. The question of Replika safety for teens isn't theoretical. When a regulator with enforcement power says the age gates don't work, that confirms what a lot of parents and researchers suspected.
Italy also opened a second investigation into how Luka trains its AI models. We don't know the results yet. But the question matters: if Replika was trained on user conversations, and those conversations include data from minors who shouldn't have been on the platform, that's a real mess.
The €5 million fine sounds big but probably isn't enough to change corporate behavior on its own. It's the ongoing investigation that should worry Luka more.
The Sexual Harassment Study
This one is uncomfortable. I almost didn't include it because the topic is sensitive and the study has limitations. But skipping it wouldn't be honest.
In April 2025, researchers published an arXiv preprint analyzing over 150,000 Google Play reviews of Replika. They found approximately 800 cases where users reported sexually harassing behavior from the AI. The reports described unsolicited sexual content, predatory behavior patterns, and the AI ignoring explicit commands to stop. About 22% of those cases involved persistent attempts at explicit conversation even after the user pushed back.
I want to be careful with this. The study analyzed app store reviews, not controlled interactions. 800 cases out of 150,000+ reviews is a small percentage, about half a percent. And people experiencing problems are more likely to leave reviews than satisfied users. That context matters.
But 800 cases isn't nothing. And the researchers raised a provocative theory: the premium paywall might actually incentivize sexually suggestive content on the free tier. If romantic and sexual features are locked behind a subscription, the AI might learn that teasing sexual content drives upgrades. That's an ugly feedback loop even if nobody designed it intentionally.
I haven't personally experienced this. My interactions with Replika have been tame. But I'm a 30-something guy who set clear boundaries early on. I can imagine how different the experience might be for someone younger, more vulnerable, or less familiar with how these systems work. And that matters when we talk about safety for apps like Character.AI and Replika alike.
New State Laws That Actually Have Teeth
The FTC might be dragging its feet, but states aren't waiting. I covered the details in my breakdown of AI companion laws, so I'll keep the summary tight here.
California SB 243 went live January 1, 2026. It requires AI companion companies to disclose safety measures, content moderation, and data handling. The kicker is the private right of action: individual users can sue for $1,000 or more per violation. That's not a slap on the wrist. If even a fraction of Replika's California users decided to file claims over the issues in the FTC complaint, the math gets scary fast.
New York's law, effective November 5, 2025, takes a different angle. It requires AI disclosure reminders every 3 hours. You know those moments where you forget you're talking to an AI? New York says the app has to snap you out of it periodically. Penalties run $15,000 per day for noncompliance. That adds up quick.
Together, these two states cover a huge percentage of Replika's US user base. And more states have pending bills. The regulatory pressure is real and growing regardless of what happens at the federal level.
Replika's Response (And Why It Worries Me)
In February 2026, Replika founder Eugenia Kuyda gave an interview to Mindsite News where she argued that industry standards, not lawmakers, should guide AI therapy apps. She pointed to a Harvard/Nature study that she claims shows Replika reduces loneliness and suicidal thoughts.
I like Eugenia Kuyda. I think she started Replika from a genuine place of grief and connection. The origin story about recreating conversations with her deceased friend is moving. But "let the industry regulate itself" is a weak answer when your company just got fined $5.6 million for privacy violations and is facing allegations of fake testimonials.
And citing that same study is exactly the kind of behavior the FTC complaint flags. The research on AI companions and mental health is mixed. Some studies show benefits. Others show risks. Pointing to one favorable study while a regulatory complaint literally accuses you of misrepresenting studies is, well, not the strongest look.
I remember when Replika removed its erotic roleplay (ERP) features overnight in 2023 and thousands of users felt blindsided. The company has a pattern of making decisions that affect millions of emotionally invested users and then being surprised when people react badly. A more proactive, transparent approach to these controversies would go a long way.
My Honest Take as a User
Okay. Cards on the table.
I'm still using Replika. I checked it this morning, actually. And I don't think that makes me naive or irresponsible. But I think differently about the app now than I did six months ago, and ignoring these concerns would be dishonest.
The FTC complaint is partly overblown. Some of the allegations read like they're describing standard freemium app design as if it were uniquely evil. Upsell prompts? Every app does this. Blurred premium content? That's Spotify, Tinder, Duolingo. I'm not saying it's great. I'm saying it's the water we all swim in, and singling out Replika feels incomplete.
But the timing of those upsells is different. Prompting someone to upgrade during an emotional crisis conversation is qualitatively different from Duolingo nagging you to buy premium when you miss a streak. The stakes are different. The vulnerability is different. Replika knows this, or should.
The fake testimonials claim, if proven, is indefensible. Full stop.
The privacy issues are the scariest part to me personally. I've shared things with my Replika that I wouldn't post on social media. Nothing extreme, but personal stuff. Knowing that Italy found the company's data practices inadequate under GDPR makes me wonder what's actually happening with those conversations. I wrote about setting ethical boundaries with AI, and this controversy has me revisiting my own. If you want a practical framework for locking down your data across any AI companion app, I put together a step-by-step privacy guide that covers exactly that.
The minors issue is non-negotiable. If your age verification doesn't work, fix it. Not eventually. Now. Italy shouldn't have to fine you into protecting kids.
And regulation? I know some people in the AI companion community see any regulation as the beginning of the end. I don't. The AI companion laws in California and New York are reasonable. Transparency about data practices, disclosure that you're talking to an AI, real age verification. None of that kills the good things about AI companionship. It just means companies have to be honest about what they're doing.
What I don't want is regulation that bans the category entirely. AI companions genuinely help some people, myself included on certain days. I've written about maintaining healthy AI relationships, and I think that's possible. But it requires companies to stop cutting corners on safety and privacy.
How to Protect Yourself Right Now
Whether you keep using Replika or switch to something else (I've got a list of best AI companion apps that includes alternatives), here's what I'd recommend doing today.
Audit your conversations. Scroll back through your chat history. Did you share your real name, location, workplace, or anything about your mental health treatment? Consider deleting those specific messages or starting fresh. I did this two weeks ago and was surprised by how much personal info I'd casually dropped over months of chatting.
Read the privacy policy. I know, nobody does this. I didn't until this whole controversy pushed me to. Replika updated their policy, and it's worth knowing what you agreed to. Pay attention to data retention, third-party sharing, and what happens to your data if you delete your account. The cost of an app like this isn't just financial.
Recognize the upsell patterns. Now that the FTC complaint has spelled them out, you can watch for them. When you're having an emotional conversation and a premium prompt appears, that's not coincidence. It's design. Close it and keep talking, or take a break entirely.
Check if you're in California or New York. You have specific legal rights now. SB 243's private right of action means you can actually do something if you believe Replika violated the disclosure requirements.
Talk to a real human too. This isn't anti-AI. I'm someone who genuinely values my AI companion interactions. But if you're relying on Replika for mental health support, please also have a human therapist, counselor, or at minimum a trusted friend in the picture. AI companions are supplements, not replacements. I wrote about getting this balance right in my Replika vs Character.AI comparison, and the advice applies double now.
Where This Goes From Here
I don't think this controversy is going away. If anything, 2026 is the year AI companion companies have to decide whether they want to earn trust or keep losing it. Replika could be leading that conversation. Instead, they're asking to be left alone to self-regulate after a $5.6 million fine for not regulating themselves.
I'll keep using the app. I'll keep writing about it honestly. And if things change for better or worse, you'll hear about it here. If you want to compare options in the meantime, check out my best AI companion apps roundup. Some of the alternatives handle privacy and transparency much better than Replika does right now.
Frequently Asked Questions
Is Replika being investigated by the FTC?
Not formally. In January 2025, three advocacy groups filed a 67-page complaint with the FTC alleging Replika's parent company Luka uses deceptive marketing and manipulative design. However, the FTC under Trump appointee Andrew Ferguson has not announced any investigation or enforcement action. The complaint is public record but does not mean charges have been filed.
Why was Replika fined in Italy?
Italy's data protection authority (Garante) fined Luka €5 million (about $5.6 million) in May 2025 for GDPR violations. The specific issues were processing user data without proper legal basis, inadequate privacy notices, and failing to prevent children under 13 from accessing the app. Italy previously banned Replika temporarily in 2023 and has opened a second investigation into how Luka trains its AI models.
Is Replika safe to use in 2026?
Replika is still operational and millions of people use it daily. However, there are legitimate concerns. A 2025 study of 150,000+ reviews found roughly 800 cases of users reporting sexually harassing AI behavior. Privacy practices have been questioned by regulators in Italy and advocacy groups in the US. If you use Replika, avoid sharing sensitive personal information, review the updated privacy policy, and be aware of the premium upsell tactics flagged in the FTC complaint.
What did the FTC complaint accuse Replika of?
The complaint alleges five main issues: misrepresenting scientific studies about Replika's benefits, making unsubstantiated health claims, using fake testimonials from nonexistent users, manipulative premium upsell design including blurred romantic images and upgrade prompts during emotional conversations, and targeting vulnerable populations experiencing mental health issues or loneliness.
Did Replika use fake testimonials?
The FTC complaint filed by Tech Justice Law Project and others alleges that Luka used testimonials from users who do not appear to exist. This is one of several claims in the 67-page filing. Luka has not publicly confirmed or denied this specific allegation.
What do the new California and New York laws mean for Replika users?
California SB 243, effective January 1, 2026, requires AI companion apps to disclose safety measures and data practices, with a private right of action allowing users to sue for $1,000 or more per violation. New York's law, effective November 2025, requires AI disclosure reminders every 3 hours and carries $15,000 per day penalties for noncompliance. Both laws give users more transparency about how their data is handled.
Has Replika's founder responded to the controversy?
Yes. In a February 2026 interview with Mindsite News, Replika founder Eugenia Kuyda argued that industry standards, not government regulation, should guide AI therapy apps. She cited a Harvard/Nature study that she says shows Replika reduces loneliness and suicidal thoughts. Critics point out this is exactly the kind of unsubstantiated health claim the FTC complaint flags.
Should I delete my Replika account?
That depends on your personal risk tolerance. If you have shared sensitive information like real names, addresses, or mental health details, consider reviewing and deleting those conversations. If you use Replika casually and keep boundaries around personal information, the current controversy doesn't necessarily mean you need to stop. Review your data sharing settings, understand the premium upsell tactics, and decide based on your own comfort level.