Technology

"Can break out suddenly": Russia has used AI from OpenAI against Ukraine - FT

During the "Doppelganger" and "Bad Grammar" operations, Russian actors tried to deprive Ukraine of partner support by creating content with neural networks. Groups linked to Russia, China, Iran, and Israel have used artificial-intelligence tools from OpenAI, the company behind the ChatGPT chatbot, to create and spread disinformation, in particular about the war in Ukraine. The Financial Times writes about this.

According to the company, OpenAI's artificial-intelligence models for generating text and images were used at scale in five covert influence operations, including to create comments. The content in question concerned the war in Ukraine, the conflict in the Gaza Strip, elections in India, and politics in Europe, the USA, and China. AI was also used to increase the operators' productivity, including adjusting code and working with social networks.

According to the OpenAI report, the company's tools were used, in particular, in the Russian operation "Doppelganger", aimed at undermining support for Ukraine. The Russian operation "Bad Grammar" is also mentioned. According to the company, the attackers used OpenAI models to debug code used to run a Telegram bot, as well as to create short comments on political topics in Russian and English for distribution on Telegram.

In addition, according to the publication, AI tools from OpenAI were used by the Chinese "Spamouflage" network, which promotes Beijing's interests abroad. They were used to create text and comments in several languages for publication on the social network X. OpenAI also reported disrupting an Israeli campaign, likely organized by the company STOIC, which used AI to create articles and comments on Instagram, Facebook, and X.

OpenAI's policy prohibits using its models to deceive or mislead others. The company said it is working on detecting disinformation campaigns and is also building AI-based tools to do so more efficiently. OpenAI representatives assert that they have already made the attackers' work harder, and that their models have repeatedly refused to generate the requested content.

Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, told journalists that the use of OpenAI models in such campaigns has increased slightly. "Now is not the time for complacency. History shows that influence operations that have produced no results for years can suddenly break out if no one is looking for them," Nimmo emphasized.

Recall that OpenAI tested its GPT-4 artificial-intelligence system to determine whether it poses any threats to humanity. Experts found a "slight risk" that it could help people create a deadly biological weapon. OpenAI has also announced a partnership with the media group Axel Springer. According to experts, OpenAI's partnerships with a number of large media outlets could limit users' access to reliable information if ChatGPT becomes a monopolist in the information-chatbot market.