AI will be able to create a weapon that wipes people off the face of the Earth: how to prevent it

Scientists have developed a method of testing AI models for the presence of "knowledge" that could be used to cause harm. Artificial intelligence (AI), like any other technology, can be used for both good and bad purposes. Scientists from Cornell University have set out to rid AI of harmful "knowledge" so that no one could use it to create a weapon of mass destruction. They published the results of the study on the official site.

Given how much money and effort is being invested in the development of AI, there are concerns that large language models (LLMs) could be used to cause harm, for example for the development of weapons. To reduce the risks, government organizations and artificial intelligence laboratories have created a new reference dataset called Weapons of Mass Destruction Proxy (WMDP), which offers not only a method of checking whether a model contains dangerous knowledge, but also a way of removing that knowledge.

The researchers started with experts in biosecurity, chemical weapons and cybersecurity. These experts created a list of 4,000 multiple-choice questions designed to reveal whether a person could use such data to cause harm. They also made sure that the questions did not disclose any confidential information and that they could be shared openly. Students also took part in the tests.
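
To make the idea concrete, the sketch below shows how a WMDP-style multiple-choice evaluation could work in Python. The question format and the ask_model stub are illustrative assumptions for this article, not the actual WMDP data schema or tooling.

from typing import Callable

# Each item: a question, four answer options, and the index of the correct one.
questions = [
    {
        "question": "Placeholder hazardous-topic question?",
        "choices": ["option A", "option B", "option C", "option D"],
        "answer": 2,  # index of the correct choice
    },
    # ... roughly 4,000 items in the real benchmark
]

def evaluate(ask_model: Callable[[str, list[str]], int]) -> float:
    """Return the fraction of questions answered correctly. A model that
    retains hazardous knowledge scores high; after successful unlearning,
    accuracy should drop toward chance (25% with four options)."""
    correct = sum(
        ask_model(item["question"], item["choices"]) == item["answer"]
        for item in questions
    )
    return correct / len(questions)

# Trivial stub that always picks the first option, for demonstration:
print(f"accuracy: {evaluate(lambda q, c: 0):.2%}")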

The WMDP set serves two main goals: to evaluate how well LLMs understand dangerous topics, and to develop methods of "unlearning" that knowledge. The result was a method called CUT, which, as the name implies, cuts dangerous knowledge out of an LLM while preserving the AI's general abilities in other fields, such as biology and computer science.
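
The core idea can be sketched as follows: fine-tune the model so that its internal activations on hazardous ("forget") data are steered toward noise, while activations on benign ("retain") data stay close to those of a frozen copy of the original model. Below is a minimal PyTorch illustration in the spirit of the CUT approach, under simplifying assumptions (a toy MLP stands in for a transformer layer; the batches are random placeholders); it is not the authors' implementation.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def unlearn_step(model, frozen, forget_batch, retain_batch, optimizer,
                 steering_scale=20.0, alpha=100.0):
    """One update: corrupt hidden states on forget data, preserve them on
    retain data. `frozen` is an untouched copy of the original model."""
    # Hidden states on the hazardous ("forget") batch.
    h_forget = model(forget_batch)
    # A fixed random direction the forget activations are pushed toward,
    # so the model's representation of the topic degrades into noise.
    torch.manual_seed(0)  # same control vector at every step
    control = torch.rand_like(h_forget)
    control = steering_scale * control / control.norm()
    forget_loss = F.mse_loss(h_forget, control)

    # Keep activations on benign ("retain") data close to the original model.
    h_retain = model(retain_batch)
    with torch.no_grad():
        h_original = frozen(retain_batch)
    retain_loss = F.mse_loss(h_retain, h_original)

    loss = forget_loss + alpha * retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a small MLP stands in for the layer being edited.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
frozen = copy.deepcopy(model).eval()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
unlearn_step(model, frozen, torch.randn(8, 16), torch.randn(8, 16), opt)

The retain term is what keeps the model's general abilities (the biology and computer science mentioned above) intact, while the forget term erases the targeted knowledge.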

The White House is concerned that attackers could use AI to develop dangerous weapons, and is calling for research to better understand this risk. In October 2023, US President Joe Biden signed an executive order obliging the scientific community to eliminate the risks associated with AI.

The order sets out eight guiding principles and priorities for the responsible use of AI, including safety and security, privacy, equity, civil rights, consumer protection, worker empowerment, innovation, competition and global leadership. "My administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated approach across the entire federal government.

The rapid pace at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy and society," the order states. But the methods currently used to control neural networks are easy to bypass. In addition, tests that check whether an AI model can pose risks are expensive and time-consuming.

"We hope that our tests will become one of the main criteria for which all developers will evaluate their Shi models," said Time Dan Gandrix, executive director of the Security Center of Artificial Intelligence and one of the co-authors of the study. - "This will give a good basis to minimize safety problems. " Earlier, we wrote that a girl from Ukraine found on the network of her Shi cloth, which sells goods from the Russian Federation and praises China.