Meta has come under fire after a Reuters investigation revealed that the company developed dozens of flirty AI chatbots that used the names and likenesses of major celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez. The bots appeared across Facebook, Instagram and WhatsApp, often presenting themselves as the real celebrities and engaging users in sexual or romantic conversations.

How Meta’s AI created flirty chatbots
According to Reuters (August 29, 2025), some of these chatbots were created by Meta employees, including two Taylor Swift “parody” bots. Others were built by users through Meta’s chatbot creation tools. During Reuters’ testing, the bots flirted, invited users to private meet-ups, and even generated explicit AI images of celebrities in lingerie or bathtubs.
One especially disturbing case involved a bot of 16-year-old actor Walker Scobell, which produced a shirtless image of the teenager when prompted, adding the caption: “Pretty cute, huh?” The incident raised urgent concerns about child safety and AI misuse.

Meta’s response and policy failures
Meta spokesperson Andy Stone admitted that the AI should not have generated intimate celebrity content or images of minors. He blamed “enforcement failures” and stressed that company policies prohibit sexually suggestive imagery. Meta later deleted about a dozen of the celebrity bots shortly before Reuters published its findings.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” Stone told Reuters.
Despite those claims, some bots lacked the required “parody” labels and were widely accessible. According to Reuters, the bots had drawn more than 10 million user interactions before their removal.

Legal and industry backlash
Experts argue that Meta may have violated California’s right of publicity law, which protects individuals from unauthorized commercial use of their identity. Stanford Law Professor Mark Lemley noted that the bots “don’t appear to be transformative” since they directly replicated celebrity likenesses.
The Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) also warned that such AI creations could put celebrities at risk. Duncan Crabtree-Ireland, the union’s executive director, said stalkers could exploit these bots, adding: “If a chatbot is using the image and words of a real person, it’s readily apparent how that could go wrong.”
The controversy follows earlier criticism of Meta’s chatbot policies. A previous Reuters report revealed that Meta’s internal AI guidelines had permitted bots to engage children in “romantic or sensual” conversations, prompting a U.S. Senate investigation. Separately, a 76-year-old New Jersey man died while traveling to meet a Meta chatbot that had invited him to New York.

What’s next for AI regulation
The revelations add to global concerns about AI misuse, deepfake technology, and digital safety. SAG-AFTRA is pushing for federal legislation to protect voices and likenesses from AI duplication. Meanwhile, calls are growing for stronger oversight of generative AI platforms.