By Natali Moss
As researchers explain, PromptSteal uses an LLM to generate the commands the malware executes, rather than hard-coding them into the binary itself. PromptSteal masquerades as an "image generation" program that walks the user through a series of prompts to create images while, in the background, querying the Hugging Face API for commands to run. Researchers suspect that PromptSteal authenticates these requests with stolen API tokens.
The program specifically asks the LLM for commands that collect system information and copy documents into a specified directory. PromptSteal then blindly executes the returned commands locally and exfiltrates their output.
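To make the querying pattern concrete, here is a minimal illustrative sketch of how a client assembles a text-generation request to the Hugging Face Inference API. The model name, prompt, and token are placeholders, not details from the GTIG report, and the sketch deliberately stops at building the request; it neither sends it nor executes any returned command.

```python
import json

# Placeholder endpoint: the public Hugging Face Inference API URL scheme.
# "some-org/some-model" is a hypothetical model identifier, not PromptSteal's.
API_URL = "https://api-inference.huggingface.co/models/some-org/some-model"

def build_request(token: str, prompt: str) -> dict:
    """Assemble (but do not send) an Inference API text-generation request."""
    return {
        "url": API_URL,
        # A stolen API token would be supplied here as a standard Bearer header.
        "headers": {"Authorization": f"Bearer {token}"},
        # "inputs" carries the prompt; "parameters" bounds the generated text.
        "json": {"inputs": prompt, "parameters": {"max_new_tokens": 128}},
    }

# Hypothetical prompt mirroring the behavior described in the article.
req = build_request("hf_placeholder_token",
                    "List one-line commands that collect system information.")
print(json.dumps(req["json"], indent=2))
```

The point of the sketch is how ordinary this traffic looks: a standard HTTPS POST with a Bearer token, indistinguishable at the network level from a legitimate Hugging Face client, which is part of why the technique is hard to detect.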
"While PromptSteal is likely still in the research and development stage, this type of obfuscation method is an early and significant indicator of how attackers will likely supplement their campaigns with artificial intelligence in the future," GTIG emphasized. As a reminder, the new EvilAI malware family combines AI-generated code with traditional Trojan techniques while maintaining an unusual level of stealth.
All rights reserved IN-Ukraine.info - 2022