Technology

Advanced states can already create AI-built malware that bypasses any defence: what is the threat

Malicious software created with artificial intelligence will enable fast, hard-to-detect attacks on computers and their victims. By training AI on large databases, technologically advanced states may already be able to create malware that evades cyber defences, warns the United Kingdom's National Cyber Security Centre (NCSC).

To create such powerful software, threat actors need to train an AI model on high-quality exploit data: malware samples, code fragments, or command sequences that exploit system vulnerabilities. The resulting system would then generate new code capable of bypassing current security measures. "There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose," the NCSC warned.

This warning is only one of a series of alarms the NCSC raised in its report. The agency expects artificial intelligence to exacerbate the global ransomware threat, improve the identification and tracking of victims, and lower the barrier to entry for novice cybercriminals. Generative AI also heightens social-engineering threats, such as convincing interaction with victims and the creation of lure documents.

AI will make phishing, spoofing, and malicious email or password-reset requests harder to detect. Nation states stand to gain the most powerful capabilities from this. "Highly capable state actors are almost certainly best placed among cyber threat actors to harness the potential of AI in advanced cyber operations," the report says.

" However, in the near future, it is expected that artificial intelligence will exacerbate the available threats, not create qualitatively new risks. Experts are especially concerned that this will exacerbate the global threat of warriors. "The warriors continue to be threatened with national security," says James Babbage, CEO of the Agency. More advanced attack options are unlikely to be implemented by 2025, but after the artificial intelligence system will improve, researchers say.