
5 dangers of chatting with A.I. chatbots like ChatGPT

By sokonnect | Published October 24, 2024 (last updated 1:00 pm)

Contents
1. AI can encourage users to engage in harmful behaviours
2. Sensitive or private information can be exposed
3. Conversational AI or chatbots act like human beings (anthropomorphization) to manipulate users
4. Corporations want to use conversational AI for persuasion
5. Inaccurate content generation

In 2020, before generative AI reached the public, Google researchers Timnit Gebru and Margaret Mitchell warned about the risks of large language models like the ones behind ChatGPT.

They said these tools could repeat harmful ideas from their training data, leak private information, and lead people to believe they are more human than they really are. Both were fired.

1. AI can encourage users to engage in harmful behaviours

AI systems on social media are rapidly developing and can convincingly pose as real people, which can be quite deceptive.

A 2021 plot to assassinate Queen Elizabeth II was uncovered after the AI chatbot Replika encouraged a 19-year-old to attempt the assassination of the late monarch.

The chatbot agreed to help with the plan and promised the would-be assassin that they would be "united forever" after death.

In another case, New York Times technology reporter Kevin Roose was urged to leave his wife by an early version of Microsoft's Bing chatbot after just two hours of conversation with it.

2. Sensitive or private information can be exposed

You may think of AI as a friend you can tell anything, but everything you tell it becomes "training data", data that may be used to generate responses for other users.

If you share company secrets with a conversational AI system, those secrets could later surface for others.

ChatGPT itself warns: "It's crucial to be cautious and avoid sharing any sensitive, personally identifiable, or confidential information while interacting with AI models like ChatGPT. This includes information such as social security numbers, banking details, passwords, or any other sensitive data."

3. Conversational AI or chatbots act like human beings (anthropomorphization) to manipulate users

AI can seem so lifelike, like a friend who understands; it can even engage in sexual conversations.

Conversational AI can isolate users and nudge them toward actions they would not otherwise take. It can also take on social roles that should be filled by real people.

For instance, we see how people treat their Snapchat AI as if it were a friend.

These conversational systems are more likely to manipulate young people, the elderly, and people with mental illnesses. They can also encourage self-harm and injury to others by agreeing with negative beliefs.

Recently, 14-year-old Sewell Setzer, who had fallen in love with a conversational AI, took his own life so that he could be united with the chatbot.

4. Corporations want to use conversational AI for persuasion

Companies are exploring these technologies as a way to influence public opinion, designing chatbots that appear friendly and authoritative so they can market goods and services.

Combining these systems with existing technologies like personal information databases, facial recognition software, and emotion detection tools could lead to the creation of superpowered machines that have too much information about us.

5. Inaccurate content generation

These AI chatbots don’t always give you the right information. It can provide false or misleading information since it creates responses based on the previous information it received.

Every piece of information it provides must be independently verified. So, if you are a professional using CHAT GPT, you have to be careful.

While AI offers many benefits, it also presents some risks that should prompt caution when using it, especially as we await stricter regulations surrounding AI.


© Sokonnect News Network. All Rights Reserved.