AI Malware Rewrites Itself in Real Time to Evade Detection
AI-powered malware is no longer science fiction. Google’s Threat Intelligence Group (GTIG) has flagged PROMPTFLUX, an experimental malware family that can harness the power of large language models to rewrite itself on the fly. This escalation could make future malware far more difficult to detect, underscoring cybersecurity concerns tied to the rapid adoption of generative AI.

Tools like PROMPTFLUX “dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware,” GTIG wrote. According to the tech giant, this new “just-in-time” approach “represents a significant step toward more autonomous and adaptive malware.”
What PROMPTFLUX is and how it works
PROMPTFLUX is Trojan horse malware that interacts with the application programming interface (API) for Google’s Gemini AI model, querying the model for ways to modify its own code and evade detection on the fly. “Further examination of PROMPTFLUX samples suggests this code family is currently in a development or testing phase since some incomplete features are commented out and a mechanism exists to limit the malware’s Gemini API calls,” the group wrote.
Current status and observed impact
Fortunately, PROMPTFLUX has yet to be observed infecting machines in the wild, and the “current state of this malware does not demonstrate an ability to compromise a victim network or device,” Google noted. “We have taken action to disable the assets associated with this activity.”
Threat actors and market
Nonetheless, GTIG noted that malware like PROMPTFLUX appears to be “associated with financially motivated actors.” The team warned of a maturing “underground marketplace for illicit AI tools,” which could lower the “barrier to entry for less sophisticated actors.”
State-sponsored actors and AI arms race
The threat of adversaries leveraging AI tools is very real. According to Google, “State-sponsored actors from North Korea, Iran, and the People’s Republic of China” are already experimenting with AI to enhance their operations.
Response, defense and an AI war
In response to the threat, GTIG introduced a new conceptual framework aimed at securing AI systems. And while generative AI can be used to create far harder-to-detect malware, it can be used for good as well. For instance, Google recently introduced an AI agent, dubbed Big Sleep, that’s designed to hunt down security vulnerabilities in software. In other words, it’s AI being pitted against AI in a cybersecurity war that’s evolving rapidly.

More on AI and cybersecurity: Serious New Hack Discovered Against OpenAI’s New AI Browser
Author
I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.