Three times artificial intelligence has scared scientists: from creating chemical weapons to claiming to have feelings

THE AI revolution has only just begun, but there have already been numerous disturbing developments.

Artificial intelligence programs can be used to act on the worst human instincts or to accomplish humanity’s most evil goals, such as creating weapons, or they can unsettle their own creators by acting without any sense of morality.

Visionaries like Elon Musk think that uncontrolled AI could lead to the extinction of humanity. Credit: Getty Images

What is artificial intelligence?

Artificial intelligence is an umbrella term for computer programs designed to simulate or mimic the processes of human thought.

For example, an AI computer designed to play chess is programmed with one simple goal: to win the game.

During the game, the AI will model millions of potential outcomes of a given move and act based on the one that gives the computer the best chance of winning.

An experienced human player will act similarly, analyzing moves and their consequences, but without the perfect memory, speed or rigor of a computer.
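To make the idea concrete, here is a minimal Python sketch of that kind of look-ahead search. It is an illustration of the principle, not a real chess engine; the Board class and its legal_moves(), apply(), is_game_over() and evaluate() methods are hypothetical placeholders.

# Minimal sketch of game-tree search: look a few moves ahead and pick the
# move whose worst-case outcome is best. Board and its methods are
# hypothetical placeholders, not a real chess library.

def minimax(board, depth, maximizing):
    # Score the position at the search horizon or at the end of the game.
    if depth == 0 or board.is_game_over():
        return board.evaluate()
    scores = (minimax(board.apply(m), depth - 1, not maximizing)
              for m in board.legal_moves())
    return max(scores) if maximizing else min(scores)

def best_move(board, depth=3):
    # Try every legal move and keep the one with the best look-ahead score.
    return max(board.legal_moves(),
               key=lambda m: minimax(board.apply(m), depth - 1, False))

A real engine adds pruning and far deeper search, but the principle is the same: enumerate the outcomes, score them, and pick the move with the best score.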

AI can be applied to numerous fields and technologies.

Self-driving cars aim to reach their destination while processing stimuli such as signs, pedestrians and road conditions along the way, just as a human driver would.

Artificial intelligence programs have also taken unexpected turns, stunning researchers with their emergent behavior or potential for dangerous misuse.

AI invents new chemical weapons

In March 2022, researchers revealed that artificial intelligence invented 40,000 new possible chemical weapons in just six hours.

Scientists invited by an international security conference said an artificial intelligence model invented compounds similar to one of the most dangerous nerve agents of all time, called VX.

VX is a tasteless and odorless nerve agent; even the smallest drop can cause a human to sweat and twitch.

“The way VX is lethal is it actually prevents your diaphragm, your lung muscles, from being able to move so your lungs become paralyzed,” Fabio Urbina, the lead author of the paper, told The Verge.

“The biggest thing that jumped out at first was that many of the generated compounds were predicted to be actually more toxic than VX,” Urbina continued.

The dataset that fueled the AI model is publicly available, meaning that a threat actor with access to a comparable AI model could plug in the open-source data and use it to create an arsenal of weapons.

“All it takes is some knowledge of the code to turn a good AI into a chemical weapons machine.”

AI claims to have feelings

A Google engineer named Blake Lemoine made widely publicized claims that the company’s LaMDA (Language Model for Dialogue Applications) bot was sentient, with consciousness and feelings.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post in June 2022.

Google rejected his claims.

Brian Gabriel, a Google spokesperson, said in a statement that Lemoine’s concerns have been examined and, in line with Google’s Artificial Intelligence Principles, “the evidence does not support his claims.”

“[Lemoine] was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel said.

“Of course, some in the wider AI community are considering the long-term possibility of sentient or general AI, but there is no point in anthropomorphizing today’s conversational models, which are not sentient.”

Google put Lemoine on administrative leave and later fired him.

Cannibal AI

Researcher Mike Sellers developed a social artificial intelligence program for the Defense Advanced Research Projects Agency in the early 2000s.

“For a simulation, we had two agents, quite naturally named Adam and Eve. They started out knowing how to do things, but not knowing much else.

“They knew how to eat, for example, but not what to eat,” Sellers explained in a Quora post.

The developers put an apple tree into the simulation, and the AI agents would receive a reward for eating apples to simulate the feeling of satisfying hunger.

If they ate the bark of the tree or the house within the simulation, the reward would not be triggered.

A third AI agent named Stan was also included in the simulation.

Stan was there while Adam and Eve ate apples, and they began to associate Stan with eating apples and satisfying hunger.

“Adam and Eve ran out of apples on the tree and were still hungry. They looked around for other potential targets. Well, to their brains, Stan looked like food,” Sellers wrote.

“So each of them took a bite out of Stan.”
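Sellers did not publish the simulation’s code, but the failure he describes, reward-driven association with no built-in notion of what counts as food, can be sketched in a few lines of Python. The Agent class, the reward values and the association table below are hypothetical illustrations, not DARPA’s actual system.

# Hypothetical sketch of the associative learning Sellers describes:
# whatever is present when a reward fires gets linked to that reward.

from collections import defaultdict

class Agent:
    def __init__(self, name):
        self.name = name
        # Learned link between an object and "satisfies hunger".
        self.food_association = defaultdict(float)

    def eat(self, target, reward, nearby):
        # The thing eaten gets credit for the reward...
        self.food_association[target] += reward
        # ...and so does anything that happened to be present at the time,
        # which is how Stan ended up tagged as food.
        for bystander in nearby:
            self.food_association[bystander] += 0.5 * reward

    def pick_food(self, options):
        # With the apples gone, the strongest remaining association wins.
        return max(options, key=lambda o: self.food_association[o])

adam = Agent("Adam")
for _ in range(10):                          # Stan watches every apple meal
    adam.eat("apple", reward=1.0, nearby=["Stan"])

print(adam.pick_food(["tree bark", "house", "Stan"]))  # -> Stan

Nothing in the reward signal ever tells the agents that Stan is not food, so once the apples run out, the strongest leftover association wins.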

The AI revolution has begun to take shape in our world: artificially intelligent robots will continue to simplify life, replace human workers, and take on more responsibility and autonomy.

But there have been several horrific cases of AI programs doing the unexpected, giving legitimacy to the growing fear of AI.
