Google released a report on Thursday warning of a rise in "distillation attacks" targeting its Gemini AI model. The company identified more than 100,000 prompts, many from private-sector entities, that it suspects were intended to trick Gemini into revealing its full reasoning processes so attackers could reverse engineer its logic and training data and effectively clone the model by prompting it at scale, activity Google describes as a form of intellectual property theft. The same report finds nation-state hackers abusing Gemini for target profiling, phishing kits, malware staging, and model extraction attacks.
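For context, a distillation (or model extraction) attack borrows the standard machine-learning idea of knowledge distillation: query a "teacher" model at scale, record its responses, and use those pairs to train a smaller "student" model that imitates it. The sketch below is a minimal illustration of that general pattern under stated assumptions, not anything drawn from Google's report; query_teacher, the prompt set, and the output file are hypothetical placeholders and no real API is called.

```python
# Minimal sketch of the distillation pattern: query a "teacher" model at scale,
# collect prompt/response pairs, and save them as training data for a "student".
# query_teacher() is a hypothetical stand-in; it does not call any real endpoint.
import json
from typing import Dict, List


def query_teacher(prompt: str) -> str:
    """Hypothetical call to the target ("teacher") model's API.
    Extraction attempts typically use prompts crafted to elicit
    step-by-step reasoning rather than just final answers."""
    raise NotImplementedError("placeholder: no real model is queried in this sketch")


def build_distillation_dataset(prompts: List[str]) -> List[Dict[str, str]]:
    """Collect (prompt, response) pairs that could later be used to
    fine-tune a smaller student model to imitate the teacher."""
    dataset = []
    for prompt in prompts:
        try:
            response = query_teacher(prompt)
        except NotImplementedError:
            continue  # skipped here; a real pipeline would record the reply
        dataset.append({"prompt": prompt, "response": response})
    return dataset


if __name__ == "__main__":
    # At the scale described in the report, this loop would run over
    # hundreds of thousands of prompts rather than a single example.
    sample_prompts = ["Explain your reasoning step by step for the following task."]
    with open("distillation_pairs.jsonl", "w") as f:
        for pair in build_distillation_dataset(sample_prompts):
            f.write(json.dumps(pair) + "\n")
```

The point of the sketch is simply that the "attack" requires nothing more exotic than high-volume prompting and logging, which is why providers monitor for unusually large or systematic query patterns.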