Humans must adopt a ‘new way of life’ to defeat AI’s sinister edge, expert says

PEOPLE will have to change their behavior to avoid falling prey to fraudulent chatbots.

A cyber expert told The US Sun how you should avoid trusting artificial intelligence too much.

AI chatbots are an amazing tool – but be very careful when using them. Credit: Getty

AI chatbots are already gaining popularity, with tens of millions of people flocking to apps like OpenAI’s ChatGPT and Google’s Gemini.

These chatbots use large language models that allow them to talk to you like a person.

In fact, a study recently claimed that OpenAI’s GPT-4 model passed the Turing test – meaning that humans cannot reliably distinguish it from a real person.

We spoke to cyber expert Adam Pilton, who warned that the human way chatbots speak makes them much more capable of tricking us.

“We feel it would be easier to be attracted to the conversational nature of a chatbot, compared to perhaps a fraudulent website or search result,” said Adam, a cybersecurity consultant at CyberSmart and a former detective sergeant investigating cybercrimes.

He continued: “As humans, we build trust where we potentially see a connection, and it’s much easier and more understandable to be able to build a connection with a chatbot than a website.

“The website doesn’t respond to our specific requests, whereas with the chatbot we feel like we’re building a relationship because we can ask it specific questions.

“And the answer it gives us is specifically tailored to answer that question.

“In this modern digital world we live in, a key skill will now be verifying information. We can’t just trust what we are told first.”

VILE TALKERS

Earlier this year, scientists revealed how AI had mastered the art of “deception” – and taught it to itself.

And chatbots are even capable of deceiving and manipulating people.

Spotting the signs that a chatbot is trying to scam you is important.

But Adam warned that we now need to adopt a “new way of life” where we don’t trust AI chatbots – and instead check what we’re told elsewhere.

What is ChatGPT?

ChatGPT is a new AI tool

ChatGPT, which launched in November 2022, was created by San Francisco-based startup OpenAI, an AI research firm.

It is part of a new generation of AI systems.

ChatGPT is a language model that can generate text.

Systems of this generation can converse, generate readable text on demand, and produce images and video based on what they have learned from a vast database of digital books, online writings, and other media.

ChatGPT essentially works as a written dialogue between the AI system and the person asking it questions.

GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content.

If you prompt it, for example asking it to “write a short poem about flowers”, it will create a piece of text based on that request.

ChatGPT can also maintain conversations and even learn from the things you’ve said.

It can handle very complex prompts and is even used by businesses to assist with work tasks.

But be aware that it may not always be telling you the truth.

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.

“Misinformation and simply false information will be a growing problem for society and democracies around the world as we continue to evolve in this digital world,” Adam told The US Sun.

“As such, verifying information will be a common requirement, and the use of chatbots is no different.

“We can no longer depend on one source of information, checking from multiple reliable sources is now a way of life.”

SHARE THE CARE

AI ROMANCE SCAMS – BEWARE!

Beware of criminals using AI chatbots to scam you…

The US Sun recently exposed the dangers of scam AI romance bots – here’s what you need to know:

AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversation and can be difficult to spot.

However, there are some warning signs that can help you identify them.

For example, if the chatbot responds too quickly and with generic answers, it’s probably not a real person.

Another clue is if the chatbot tries to move the conversation off the dating platform to another app or website.

Also, if the chatbot asks for personal information or money, it’s definitely a scam.

It is important to remain alert and cautious when communicating with strangers online, especially when it comes to matters of the heart.

If something seems too good to be true, it probably is.

Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.

By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.

Chatbots will become more popular over time as their capabilities grow.

But there are many risks, including giving them too much of your own information.

Experts recently warned The US Sun about the importance of not telling AI too much about yourself.

They have even been described as a “treasure trove” for criminals looking to find information about victims.

Used safely, chatbots can be extremely helpful – but be careful not to tell them too much and don’t trust everything they say.
