East Africa News Post

Snapchat’s new chatbot, powered by artificial intelligence, is sounding alarm bells for teens and parents

(CNN) – Just hours after Snapchat released its My AI chatbot to all users last week, Lyndsi Lee, a mom from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

“It’s a temporary solution until I know more about it and can set some healthy boundaries and guidelines,” said Lee, who works for a software company. She worries about the way My AI is presented to young users like her daughter on Snapchat.

The feature is based on the viral AI chatbot tool ChatGPT, and, like it, can make recommendations, answer questions, and chat with users. But the Snapchat version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji for it, and include it in conversations with friends.

The result is that chatting with Snapchat’s chatbot can feel like less of a big deal than visiting ChatGPT. It may also be less obvious that you are talking to a computer.

“I don’t think I’m ready to teach my daughter to emotionally separate humans from machines when, from her point of view, they look the same,” she said. “I think [Snapchat] crossed a very clear line.”

The new tool is facing backlash not only from parents, but also from some Snapchat users, who are bombarding the app with negative reviews in the App Store and criticism on social media over privacy concerns, “creepy” exchanges, and the inability to remove the feature from their chat feed unless they pay for a premium subscription.

While some may find value in the tool, the mixed reactions point to the risks companies face when deploying new generative AI technology in their products, especially in products like Snapchat, whose user base skews younger.

Snapchat was an early launch partner when OpenAI opened up access to ChatGPT to third-party companies, and many more are expected to follow. Almost overnight, Snapchat forced some families and lawmakers to ask questions that seemed theoretical just a few months ago.

The new Snapchat chatbot. Credit: Snapchat/My AI

In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was made available to Snap’s subscribers, Democratic Senator Michael Bennet expressed concern about the chatbot’s interactions with younger users. In particular, he cited reports that it could advise children on how to lie to their parents.

“These examples would be disturbing for any social media platform, but they are especially concerning for Snapchat, which is used by nearly 60% of American teens,” Bennet wrote. “Although Snap acknowledges that My AI is ‘experimental,’ it has been quick to enroll American children and teens in its social experiment.”

In a blog post last week, the company said, “My AI is far from perfect, but we’ve made a lot of progress.”

User reaction

In the days since it was officially launched, Snapchat users have raised concerns.

One user described an interaction as “creepy,” saying the chatbot lied when it claimed not to know the user’s location. Later in the conversation, the user said, the chatbot accurately revealed that they lived in Colorado.

In another TikTok video with over 1.5 million views, a user named Ariel recorded a song, with an intro, chorus, and piano chords, that My AI wrote about what it’s like to be a chatbot. When Ariel sent the recorded song back, the chatbot denied any involvement, replying: “Sorry, but as an AI model, I don’t write songs.” Ariel described the exchange as “creepy.”

Other users shared concerns about how the tool understands and gathers information from users’ images. “I took a picture… and it said ‘nice shoes’ and asked who the people [in the photo] were,” one Snapchat user wrote on Facebook.

Snapchat told CNN that it continues to improve My AI based on community feedback and is working on more restrictions to keep its users safe. The company also said that, as with its other tools, users don’t have to interact with My AI if they don’t want to.

However, My AI cannot be removed from the chat feed unless the user subscribes to Snapchat+, the premium monthly service. Some teens say they paid the $3.99 Snapchat+ fee just to disable the tool, then canceled the service.

But not all users hate this feature.

One user wrote on Facebook that she asked My AI to help her with her homework, and it got all the questions right. Another said she had relied on it for comfort and advice. “I love having my own little best friend in my pocket,” she wrote. “You can change its Bitmoji [avatar] and surprisingly it gives some really great advice for some real life situations… I love the support it gives.”

An early test of how teens use chatbots

ChatGPT, which is trained on massive amounts of online data, has already been criticized for spreading inaccurate information, responding to users inappropriately, and allowing students to cheat. But integrating this tool into Snapchat could exacerbate some of these problems and add new ones.

Alexandra Hamlet, a clinical psychologist in New York City, said parents of some of her patients have raised concerns about how teens interact with the Snapchat tool. There are also concerns about teens seeking mental health advice from chatbots, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

“If a teen is in a negative mood and has no conscious desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions of this kind can erode a teen’s sense of worth, even though they know they’re actually talking to a bot. In an emotional frame of mind, it’s less likely that an individual will consider this kind of logic.”

For now, it’s up to parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools start to appear in more popular apps and services.

Sinead Bovell, founder of WAYE, a startup that helps prepare young people for a future shaped by advanced technologies, said parents need to make it clear that “chatbots are not your friend.”

“They are not your trusted therapist or counselor, and anyone dealing with them needs to be very careful, especially teens who may be more likely to believe what they say,” she said.

“Parents should be talking to their kids now about how not to share anything personal with a chatbot that they wouldn’t share with a friend, even though, from a user design perspective, the chatbot sits in the same place as their friends on Snapchat.”

She added that federal regulations forcing companies to comply with specific protocols are also needed to keep up with the rapid pace of advances in artificial intelligence.