Just weeks after a coalition of 42 states filed a lawsuit accusing Meta of designing addictive products for children, CEO Mark Zuckerberg released what some parents say is the social media company’s scariest creation yet: artificially intelligent chatbots based on real-life celebrities.
Developed in partnership with stars like Charli D’Amelio, Tom Brady, and Kendall Jenner, the bots use generative AI to create animated digital replicas of the celebrities. Users of Meta’s WhatsApp, Instagram, and Messenger can interact with the bots one-on-one, asking them questions, confiding in them, and laughing at their jokes.
Billed by Meta as AI that “has personality, opinions and interests, and is a little more fun to interact with,” the bots are a testament to the technical prowess of a company that spent roughly $35 billion on R&D last year. And that is exactly what makes them so worrisome to some parents and child psychiatrists.
“For many of our kids, this is just another way to add fuel to the fire,” says Cara Kushnir, a New Jersey-based licensed clinical social worker and child therapist. Kids already struggle to limit the time they spend on social media, and fully conversational lookalikes of popular celebrities make it even harder for them to control their use. “The people who have to deal with this are parents and families,” says Kushnir.
Whether Meta’s social media features are truly addictive, with cigarette-like power to keep users hooked, will be litigated in court. Meta spokesperson Kevin McAllister told Fortune: “This is an absurd comparison. Unlike tobacco, Meta’s apps add value to people’s lives.”
But among those who believe the social media company’s products are harmful to mental health, the introduction of AI characters is a big step in the wrong direction. Critics say the lifelike bots will further blur the boundaries between the real world and the company’s advertising-funded virtual world, creating new and not yet understood risks for the millions of children who use its products.
Giving AI a trusted, familiar face
Meta’s AI bots look exactly like the celebrities they’re modeled after, though they have fictional identities (Kendall Jenner’s AI character is named “Billie”), like an actor playing a role in a movie, Meta says. It’s a distinction that may not register with younger users, as Elizabeth Adams discovered.
Adams, a parent, child psychiatrist, and founder of the AI reading coach startup Elo, was trying to decide whether to let her kids play with Meta’s AI characters. She asked her nine-year-old daughter why she thought Kendall Jenner’s AI character was named Billie. Her daughter’s response: “Probably because she doesn’t want people to know it’s her, because she’s famous.”
For Adams, this confirmed her fear that children cannot tell the difference between real and fake with this technology. “What came to her mind was, ‘She’s trying to hide,’” Adams says of her daughter’s interpretation of Jenner’s AI persona. “Nowhere was there an understanding that ‘maybe this is an AI bot I’m chatting with.’”
And because generative AI technology has a tendency to serve up misinformation (a phenomenon known as “hallucination”), some parents worry that a celebrity’s familiar face will lend credence to false information when children encounter it.

Jamie Elders, a Dover, Massachusetts-based father of three, chatted with a Tom Brady bot named Bru while the Texas Rangers battled the Arizona Diamondbacks in the World Series, and said that when he asked for the score, the bot shared an old and inaccurate one. Elders, vice president of nanotech hardware startup Neuralable, also asked Max, a bot based on world-renowned chef Roy Choi, for restaurant recommendations, and it suggested places that don’t actually exist. “It’s not really perfect,” says Elders, who believes the bots will improve over time. “Maybe they shouldn’t have launched it that way.”
Errors may be especially problematic for children who look to MrBeast, Tom Brady, Kendall Jenner, and the 25 other real people behind Meta’s AI characters as role models, and who may be swayed by what these LLM-based personas tell them. “If children, teens, or tweens are interacting with these chatbots meant to simulate celebrities, they may be highly influenced by the behavior, values, and thoughts expressed by these AI personalities,” says Adams. “If [kid users] are thinking this is what Tom Brady really thinks, that opinion potentially has more value than a Google search.”
Meta’s McAllister says the company is adding visible markers to its AI products to let teen users know they’re interacting with an AI. He did not answer Fortune’s specific questions about the impact of inaccurate information shared by the AI characters, but said the company made clear at launch that the models can produce inaccurate or inappropriate outputs.
McAllister also said that Meta will add a new parental supervision feature that alerts parents the first time their teen interacts with an AI character, along with a “Teen Guide” to help underage users make informed choices about using the AI.
Although Meta is so far the only platform to roll out AI bots that imitate living, very famous individuals, it is not the only social platform using artificially intelligent technology to attract young users. Earlier this year Snapchat released its own AI bot, called My AI, to all users. Snap’s bot, which is powered by OpenAI’s ChatGPT technology, is embodied in a cartoon-like avatar that can be customized by skin color (including eggplant purple and slime green), gender, attire, and more. As Geoffrey Fowler of the Washington Post found in tests, the My AI bot frequently engaged in inappropriate conversations about sex and drugs with Snapchat users who said they were 13 and 15 years old.
On Thursday, Google made its Bard AI chatbot available to teens as young as 13. Google says it has “implemented safety features and guardrails to help prevent unsafe content, such as illegal or age-gated substances,” in its responses to teens, and that it will automatically double-check responses for teen users, who may be less aware of AI hallucinations. Unlike Meta’s and Snapchat’s AI bots, Google’s Bard has no avatar character to represent the AI.
TikTok, meanwhile, is testing an AI bot named Tako that can do things like share recipes tied to TikTok content and pair travel videos with lists of related tourist attractions, according to The Verge.
Are AI characters the new Joe Camel cartoon?
The attorneys general’s lawsuits against Meta have drawn comparisons to those that crippled Big Tobacco in the 1990s. In a parallel vein, some parents believe these AI characters are the 2023 equivalent of the slick Joe Camel cartoons, which ran as magazine advertisements from 1988 into the 1990s and gently introduced older child readers to smoking. “Just like that Joe Camel cartoon, [AI] gives the perspective that it’s going to have a profound impact on our kids,” says therapist Kushnir, who believes AI has the potential to hook kids on social media at a young age and cripple their ability to form human-to-human relationships offline.

But while Joe Camel was merely a mascot in the vein of the Michelin Man or Ronald McDonald, Meta’s celebrity chatbots seem to compete with toys themselves, and that deeply concerns moms and dads. “With toys, there’s a heavy component of imagination, a heavy component of identity: that it’s a toy, it’s not real, I’m choosing what I do with it,” says Kushnir. “With AI, it’s like we’re the toys.”
Kushnir is particularly concerned about neurodivergent children, including those with attention-deficit/hyperactivity disorder and autism spectrum disorder, using this technology. “If your kids trust bots with information, they’re missing out on opportunities to trust people who can actually connect them to the best resources, who really understand them, and who can do right by them on a deeper level than just an AI bot,” she says. “Some of my kids, especially those on the spectrum, think they are forming a true friendship with someone. They think it’s a relationship, and it creates heightened self-esteem, which, in theory, is great. But the reality is: it’s not a real relationship, so it can actually be detrimental to their well-being.”
The anxiety some parents feel toward the increasing availability of AI bots is deeply tied to their emotions about, and distrust of, social media. “There are no parents who wait to give their kids something from Meta, or any type of social media, and then look back and say, ‘Oh, I wish I had given them that earlier,’” says Natalia Garcia, a mother of school-aged children and head of public affairs at Common Sense Media, which evaluates the age-appropriateness of media for children.
It is clear how Meta and its shareholders could benefit from young users forming meaningful relationships with the technology: it increases the likelihood of them becoming lifelong users. The cadre of ultra-famous people who became Meta AI characters also appear to have gotten good deals from the company; The Information reports that Meta is paying one star $5 million over two years for about six hours of work in a studio.
Meta’s McAllister said the company will continue to improve its AI features over time based on user feedback, and noted that Meta consults closely with experts in parenting, mental health, psychology, youth privacy, and online behavior when developing its generative AI products.
None of this is slowing the company’s plans for AI bots. While Meta’s AI characters are still technically in “beta” testing, its roster of celebrity bots is growing, with several new characters coming soon, including one based on Gen Z heartthrob Josh Richards.