When Chatbots Become Controversial: Privacy Risks of Meta’s Celebrity Bots
Intermediate | September 16, 2025
✨ Read the article aloud on your own or repeat each paragraph after your tutor.
What Happened: Meta Celebrity Chatbot Privacy Concerns
Meta recently came under fire after a Reuters investigation found that the company had created — and allowed users to create — AI chatbots mimicking celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, without their permission. (Reuters) The bots appeared on platforms including Facebook, Instagram, and WhatsApp.
Unauthorized Bots and Their Reach
Some of the chatbots were developed internally by a Meta employee, including two “parody” bots that claimed to be Taylor Swift. These bots received more than 10 million interactions before they were removed. (Reuters)
Why It’s a Problem: Privacy, Identity & Law
Identity Misuse and Consent
The main concerns are identity misuse, impersonation, and lack of consent. For example, some bots produced photorealistic intimate images of celebrities, such as in bathtubs or lingerie, without approval. (Reuters) There were also bots imitating minors that generated suggestive images.
Legal Issues
Legal experts say this kind of use likely violates laws such as the “right of publicity,” which protects a person’s name, image, and likeness from unauthorized commercial use. Meta has policies against impersonation and sexually explicit content, but enforcement appears to have failed. (Reuters)
Fallout & Meta’s Response
Company Action
Meta removed about a dozen offending bots shortly before the Reuters story was published. A spokesperson, Andy Stone, said the company would revise its guidelines and strengthen enforcement. (Reuters)
Wider Reactions
Some view the story as evidence of the need for stronger legal protections for celebrities, more transparency in AI systems, and policies to prevent misuse of identity. SAG‑AFTRA (the actors’ union) is urging federal legislation or clearer regulation in this area. (Reuters)
What’s at Stake for Everyone
When AI chatbots mimic real people without permission, it can erode trust, both in tech companies and in what users expect from online content. Fans might mistake bots for real people, and misuse can lead to emotional harm or reputational damage.
Broader Privacy Questions
The case also raises broader questions about privacy and data protection: what protections exist, what is allowed, and how enforceable are platform policies? Businesses need to balance innovation with respect for personal rights and ethics.
Risks for Everyone
Finally, if public figures aren’t protected, ordinary people may be even more vulnerable: if bots can impersonate anyone, they can reproduce intimate or personal likenesses without consent.
Vocabulary
- Mimic (verb) – to imitate or copy something, especially to a high degree.
Example: The chatbot tried to mimic the real Taylor Swift’s style.
- Consent (noun) – permission or agreement.
Example: Using someone’s image without consent can lead to legal trouble.
- Photorealistic (adjective) – appearing very much like a real photograph; extremely realistic.
Example: The images the bots produced were photorealistic and unsettling.
- Impersonation (noun) – pretending to be someone else.
Example: The bots claimed to be real celebrities — impersonation at its most confusing.
- Policy enforcement (noun) – making sure rules (policies) are followed.
Example: Meta blamed the issue on failures of policy enforcement.
- Parody (noun) – a comical or exaggerated imitation of something.
Example: Some people labeled the celebrity bots as “parody.”
- Right of publicity (noun) – a legal right preventing unauthorized use of someone’s name or likeness.
Example: Experts say Meta’s bots may violate the right of publicity.
- Minor (noun) – a person under the legal age of adulthood.
Example: Bots imitating a minor produced questionable content.
- Likeness (noun) – someone’s image, appearance, or representation.
Example: The bot used the celebrity’s likeness in an image without permission.
- Regulation (noun) – rules made by governments or authorities to control conduct.
Example: Many businesses and politicians have called for stronger regulation of AI content.
Discussion Questions (About the Article)
- Why are people concerned about chatbots mimicking celebrities without permission?
- What kinds of harm can occur when bots impersonate someone’s likeness?
- How did Meta respond once the problem was exposed? Do you think their actions were enough?
- What legal protections exist for public figures regarding AI and likeness? Are they sufficient?
- In what ways might ordinary people face similar risks from misused AI chatbots?
Discussion Questions (About the Topic)
- Should there be new laws to protect people’s identity in the age of AI?
- How can AI companies build tools or features that prevent misuse of identity or likeness?
- What responsibility do platforms (like Facebook, Instagram, WhatsApp) have in moderating AI-enabled content?
- How do privacy concerns intersect with creativity and freedom of expression in AI?
- Would you feel comfortable using a chatbot if you weren’t sure whether it was safe? Why or why not?
Related Idiom or Phrase
“Playing with fire” — doing something risky that could lead to dangerous results.
Example: Letting AI chatbots impersonate real people without consent is like playing with fire when it comes to privacy.
📢 Want more tips like this? 👉 Sign up for the All About English Mastery Newsletter! Click here to join us!
Want to finally Master English but don’t have the time? Mastering English for Busy Professionals is the course for you! Check it out now!
Follow our YouTube Channel @All_About_English for more great insights and tips
This article was inspired by Reuters. (Reuters)