Hackers Try to Clone Google’s Gemini With 100,000+ AI Probes
Google built Gemini to answer questions. Now attackers are using questions as lockpicks.
In a surge of more than 100,000 carefully engineered prompts, threat actors have been hammering Google’s Gemini chatbot in what the company calls “model extraction” or “distillation” attacks. By systematically probing the system, adversaries attempt to reverse-engineer the model’s underlying logic, reasoning patterns, and chain of thought to build rival AI systems without paying the steep cost of training one from scratch.
Google says the activity appears to be tied to actors in countries including North Korea, Russia, and China. The company classifies the effort as intellectual property theft and a clear violation of its terms of service.
Other companies may see similar attacks
But Gemini may just be the opening act.
John Hultquist, chief analyst at Google’s Threat Intelligence Group, told NBC News that while Gemini may be one of the first targets, other companies’ custom AI tools are likely to see these types of attacks as well.
“We’re going to be the canary in the coal mine for far more incidents,” Hultquist said.
Experts warn this trend will accelerate. “Given the cost of training new models, it’s not surprising to see model extraction attacks as an illegal way of trying to gain ground on developing a new model,” Melissa Ruzzi, director of AI at AppOmni, told TechRepublic in a statement. “We can expect more and more AI to be used in attacks.”
The proprietary logic and specialized training found in major LLMs have made them high-value targets, Google said. Whereas adversaries once relied on conventional intrusion operations to steal trade secrets, actors can now use legitimate API access to attempt to “clone” select AI model capabilities.
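In simplified terms, a distillation attack collects input/output pairs from a target model’s API and uses them to train a cheap “student” that mimics the original. The toy sketch below illustrates the mechanics only: the `teacher_api` stand-in, the probe prompts, and the word-count student are all hypothetical, and bear no resemblance to Gemini’s actual interface or the scale of a real attack.

```python
from collections import defaultdict

# Toy illustration of model distillation: query a black-box "teacher",
# record its answers, and fit a cheap "student" that mimics it.

def teacher_api(prompt: str) -> str:
    """Stand-in for the target model's API (a black box to the attacker)."""
    return "positive" if "good" in prompt or "great" in prompt else "negative"

# Step 1: systematically probe the teacher with engineered inputs.
probes = ["a good day", "a great result", "a bad outcome", "a terrible idea"]
dataset = [(p, teacher_api(p)) for p in probes]

# Step 2: "train" a student on the teacher's outputs -- here, just
# tallying which words co-occur with each label.
word_scores: dict = defaultdict(lambda: defaultdict(int))
for prompt, label in dataset:
    for word in prompt.split():
        word_scores[word][label] += 1

def student(prompt: str) -> str:
    """Mimics the teacher using only the probed input/output pairs."""
    tally: dict = defaultdict(int)
    for word in prompt.split():
        for label, count in word_scores[word].items():
            tally[label] += count
    return max(tally, key=tally.get) if tally else "negative"

print(student("a great day"))  # the student reproduces the teacher's behavior
```

Real attacks follow the same loop at a vastly larger scale, with engineered prompts designed to surface the model’s reasoning patterns rather than simple labels.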
Agentic AI introduces internal data risks
Law firm Shumaker, Loop & Kendrick adds that agentic AI systems introduce additional risk when organizations grant AI agents broad access to sensitive systems and data.
“By leaking data, agentic AI can quietly erode IP rights unless you change the defaults,” the firm wrote in a recent blog. “These leaks can negatively impact trade secrets, patents, trademarks, and copyrights.”
The firm advises organizations to grant agents credentials only for the tasks they perform.
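One way to picture “credentials only for the tasks they perform” is a scope check in the agent’s tool layer, so every tool call is gated by what the agent’s credential actually grants. This is a minimal sketch under assumed names: the scope strings, the `support-agent` identity, and the `call_tool` helper are illustrative, not any specific product’s API.

```python
# Illustrative least-privilege gate for an AI agent's tool calls.
# Scope names ("crm:read", "tickets:write", ...) are hypothetical.

ALLOWED_SCOPES = {"support-agent": {"crm:read", "tickets:write"}}

class ScopeError(PermissionError):
    """Raised when an agent requests a tool outside its granted scopes."""

def call_tool(agent: str, required_scope: str, action):
    """Run a tool action only if the agent's credential grants the scope."""
    if required_scope not in ALLOWED_SCOPES.get(agent, set()):
        raise ScopeError(f"{agent} lacks scope {required_scope}")
    return action()

# The support agent can read CRM records...
print(call_tool("support-agent", "crm:read", lambda: "record #123"))

# ...but a request to export the whole customer database is refused,
# limiting how much a compromised or over-eager agent can leak.
try:
    call_tool("support-agent", "db:export", lambda: "full dump")
except ScopeError as err:
    print("blocked:", err)
```

The design choice is deny-by-default: an agent starts with no scopes, and each grant maps to a task it was actually deployed to perform.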
Related reading: Google is also testing AI defenses in Chrome, offering up to $20,000 to researchers who can expose security flaws in its AI features.
The post Hackers Try to Clone Google’s Gemini With 100,000+ AI Probes appeared first on eWEEK.