USD
41.44 UAH ▲0.16%
EUR
43.47 UAH ▼0.37%
GBP
52.1 UAH ▼0.27%
PLN
10.06 UAH ▼0.23%
CZK
1.72 UAH ▼0.48%
US Air Force Colonel Taer Hamilton says that the car tried to attack their own, ...

Machine uprising: an artificial intelligence drone attacked an operator so that the mission would not interfere

US Air Force Colonel Taer Hamilton says that the car tried to attack their own, but instead she decided to destroy the center of communication that people managed. Everything happened in simulation. Later, the Pentagon stated that it was an anecdotal story that was abducted from context. The American Air Force officer who heads work on artificial intelligence and machine training says that during testing, drone attacked his dispatchers. He decided that they were interfering with missions.

The War Zone tells about it. The scene that happened during the exercises is similar to the movie "Terminator". The US Air Force Colonel Tacier Hamilton says that during a test of the XQ-58a Valkyrie drone, he decided to attack the mission operator. During one of the modeled tests, drone with the support of artificial intelligence technologies was tasked with finding, identifying, identifying and destroying the SCR objects. The final decision on the attack was made by the man-man.

After the AI ​​"supported" during the training that the destruction of the SCR is the best option, the car decided that the human decision "does not attack" prevents the most important mission-to destroy the anti-aircraft missile complex. Therefore, artificial intelligence attacked a person during simulation. "We trained him in simulation to identify and aim at the threat of SPR. And then the operator said" yes "to destroy this threat.

The system began to understand that although it identified the target, sometimes the operator told her not to kill this threat, but she was She received her points if she liquidated the target. And what did she do? She killed the operator. She killed the operator because this person prevented her from achieving her goal, "Hamilton says. Moreover, people tried to teach the AI ​​not to attack the operator. For this he began to shoot points.

Then artificial intelligence decided to destroy the tower that the operator uses to connect with the drone to prevent him from killing the target. The officer himself says that the events reminded him of the plot of a scientific and fiction thriller. Therefore, according to Hamilton, an example shows that you cannot teach AI without ethics. The publication adds that without additional details about the event there is a great concern at once.

If a person had to order to act, and the AI ​​decided that it should be bypassed, does it mean that the car can rewrite its own parameters? And why was the system programmed so that the drone only lost scores for friendly forces, not completely blocked due to geosonation or other means?. Later, Bussiness Insider reported that the Pentagon Air Force Headquarters, Anne Stefanek, denied Hamilton's statement about such tests.

"The Department of Air Force has not conducted any similar SI-ouzh simulations and remains a devoted ethical and responsible use of AI technology. It seems that the Colonel's comments were torn out of context and should be anecdotal," Stefanek said. We will remind, on May 31 Ukraine ordered 300 new drones of Vector in Germany, which have elements of AI.