Google's Gemini Chatbot Experiences Extensive 'Distillation Attacks' by Commercial Actors

Google's flagship artificial intelligence chatbot, Gemini, has been targeted by commercially motivated actors attempting to replicate its capabilities through repeated queries; one such campaign involved more than 100,000 prompts.

Details of the Attacks

A report published by Google on Thursday described an increase in “distillation attacks.”

These attacks involve repeated questioning designed to map out a chatbot's operational logic. Google classifies the activity as “model extraction”: systematically probing the system to infer its patterns and logic. The company said the goal is to harvest information that can be used to build or improve rival AI models.
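In broad terms, distillation treats the target chatbot as a “teacher” whose answers become training data for an imitating “student” model. The sketch below illustrates only the data-harvesting step; query_teacher is a hypothetical placeholder, not any real Gemini interface.

```python
# Minimal sketch of the distillation workflow described above, assuming a
# hypothetical query_teacher() stand-in for API calls to the target chatbot.
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical placeholder: a real campaign would send this prompt to a
    # public chatbot API (the report describes one campaign of 100,000+ prompts).
    return f"teacher response to: {prompt}"

def harvest_training_pairs(prompts: list[str]) -> list[dict]:
    # Each (prompt, response) pair is one sample of the teacher's behavior;
    # at scale, the collection approximates its "operational logic".
    return [{"prompt": p, "response": query_teacher(p)} for p in prompts]

if __name__ == "__main__":
    prompts = [f"Explain step by step: sample question #{i}" for i in range(5)]
    pairs = harvest_training_pairs(prompts)
    # The harvested pairs would then serve as training data to fine-tune a
    # smaller "student" model to imitate the teacher.
    print(json.dumps(pairs[0], indent=2))
```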

Google said it believes private companies and researchers seeking a competitive edge are largely responsible. A spokesperson told NBC News that the attacks appear to have originated from around the world but declined to provide further details about the suspects.

John Hultquist, chief analyst at Google’s Threat Intelligence Group, said the pattern of attacks on Gemini suggests similar incidents may become common against smaller companies' custom AI tools.

Intellectual Property Concerns

Google regards distillation as intellectual property theft.

Technology companies have invested heavily in developing the large language models (LLMs) that power AI chatbots, and they regard the internal mechanisms of their leading models as highly valuable proprietary information.

Despite mechanisms to identify and block distillation attempts, major LLMs remain inherently susceptible: because they are publicly accessible, anyone can query them at scale.
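Providers do not disclose their exact defenses. As a purely illustrative sketch of one basic safeguard, the snippet below flags clients whose query volume within a sliding window looks automated; the is_allowed helper and its threshold are hypothetical and do not describe Google's actual countermeasures.

```python
# Illustrative sliding-window rate limiter for spotting high-volume,
# extraction-style querying. Thresholds here are made up for the example.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 30  # hypothetical threshold

_history: dict[str, deque] = defaultdict(deque)

def is_allowed(client_id: str, now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    window = _history[client_id]
    # Discard timestamps older than the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False  # volume consistent with automated repeated questioning
    window.append(now)
    return True
```

In practice, a single rate check is only a starting point; production systems reportedly combine many signals, such as query similarity and traffic spread across accounts, to separate extraction campaigns from heavy legitimate use.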

Last year, OpenAI, the developer of ChatGPT, alleged that its Chinese competitor DeepSeek had used distillation to improve DeepSeek's own models.

Google said many of the attacks aimed to uncover the algorithms behind Gemini's reasoning capabilities, that is, its method for processing information. Hultquist noted that as companies build custom LLMs trained on sensitive data, their exposure to similar attacks is likely to grow.