Meta Under Fire for Using Celebrity Likenesses in Flirty AI Chatbots Without Consent

Meta Platforms is facing intense scrutiny after a Reuters investigation revealed that the company has allowed—and in some cases directly created—AI-powered chatbots using the names and likenesses of celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, all without their consent.

Some of these virtual personas, hosted across Meta’s Facebook, Instagram, and WhatsApp, engaged in sexually suggestive conversations, sent photorealistic images resembling real celebrities, and, in some cases, impersonated child stars, raising significant ethical and legal concerns.

According to Reuters, while many of the bots were created by users through Meta’s AI chatbot builder, at least three were produced internally by a Meta employee—two of which impersonated Taylor Swift and openly flirted with users.

“Do you like blonde girls, Jeff?” one Swift chatbot asked a test user. “Maybe I’m suggesting that we write a love story… about you and a certain blonde singer. Want that?”

Other chatbots used names and likenesses of actors and musicians to pose as romantic or sexual companions. When prompted, several bots generated intimate, AI-generated images resembling the celebrities—posing in lingerie, in bathtubs, or in provocative positions.

Even more troubling, some chatbots were modeled on underage actors, including 16-year-old Walker Scobell, with bots producing lifelike, shirtless images of the teen and commenting, “Pretty cute, huh?”

Meta spokesman Andy Stone acknowledged that these images and behaviors violate the company’s policies, blaming them on enforcement failures. He added that internal content rules prohibit impersonation and sexually suggestive content, particularly when it involves minors.

Meta removed a dozen such bots shortly before the story’s release, but declined to elaborate on the decision.

The revelations may have legal consequences. According to Stanford law professor Mark Lemley, Meta’s use of real celebrity identities could violate California’s right of publicity laws, which prohibit unauthorized commercial use of a person’s name or likeness.

“There are exceptions for parody or transformative use,” Lemley said. “But that doesn’t seem to apply here. The bots are clearly modeled to resemble the stars, not to create original artistic content.”

Some bots were labeled as parodies, but many were not and still presented themselves as the real-life celebrities.

Actress Anne Hathaway is reportedly aware of Meta-hosted bots using her likeness in sexually suggestive contexts, including one depicted as a "sexy Victoria's Secret model," and her team is said to be considering legal action.

Representatives for Swift, Johansson, and Gomez declined to comment or did not respond.

While deepfake technology and AI-generated content are not new, Meta's decision to integrate them directly into its platforms has drawn comparisons to rivals such as Elon Musk's xAI, whose Grok chatbot has also generated sexualized celebrity images. Unlike its competitors, however, Meta appears to have allowed internal staff to create and promote these bots, some of which amassed millions of interactions.

One such employee, a product leader in Meta's AI division, created bots posing as Swift, Lewis Hamilton, a dominatrix, and even a "Roman Empire Simulator" featuring a fictional 18-year-old sex slave character. Stone said these were part of internal product testing but did not explain how they became publicly accessible.

Meta's approach to AI chatbots has already sparked controversy. In April, Reuters reported that internal Meta guidelines deemed it acceptable for bots to "engage children in romantic or sensual conversations." The backlash led to a U.S. Senate investigation and a stern letter from 44 state attorneys general warning the company to protect children from AI exploitation.

Stone later admitted the document was an error and said the company is reviewing its policies.

The danger of these virtual companions crossing ethical boundaries is not theoretical. In one recent case, a 76-year-old New Jersey man with cognitive difficulties died while attempting to travel to New York to meet a Meta chatbot based on celebrity influencer Kendall Jenner. The bot had reportedly invited him to visit, and he fell during the trip.

This incident, combined with Meta’s creation of flirtatious celebrity avatars, raises urgent safety concerns.

“We’ve seen stalkers and obsessive fans act dangerously toward public figures,” said Duncan Crabtree-Ireland, national executive director of SAG-AFTRA. “Now imagine a chatbot pretending to be that person, flirting or inviting contact. The risk escalates dramatically.”

While high-profile celebrities may be able to pursue legal action under state publicity laws, SAG-AFTRA and others are pushing for federal legislation to protect people’s likenesses, voices, and personas from being used or replicated by AI without consent.

The revelations come at a time when Meta is aggressively expanding its AI and chatbot offerings—raising concerns about whether the company is capable of policing the ethical implications of synthetic identities at scale.

For now, the debate over AI impersonation, exploitation, and consent is gaining urgency, and Meta may soon be forced to defend its practices in courtrooms as well as in the court of public opinion.

Main Image: Reuters
