The Google Threat Intelligence Group (GTIG) has mapped the latest patterns of artificial intelligence being turned against organizations and individuals. The report, part of an ongoing series tracking adversarial AI use, examines how threat actors are stealing model capabilities, enhancing conventional attacks, and embedding generative tools directly into malicious software—trends that continue to accelerate but have not yet produced game-changing breakthroughs.
A major focus is the sharp increase in model extraction, commonly called distillation attacks.
These operations involve systematically querying commercial large language models through authorized interfaces to harvest internal knowledge, which attackers then use to train compact replicas via techniques like knowledge distillation and supervised fine-tuning.
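The "knowledge distillation" the report refers to is a standard training technique: a student model is fit to the teacher's full output distribution (soft labels) rather than to hard answers alone. A minimal, illustrative sketch of the soft-label loss — not code from the report, and with example logits chosen arbitrarily:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the student to mimic the teacher's entire
    output distribution -- including relative confidences -- not just
    its top answer, which is why harvested responses are so valuable.
    """
    p = softmax(teacher_logits, T)  # teacher "soft labels"
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
aligned = distillation_loss([3.0, 1.0, 0.2], [3.0, 1.0, 0.2])
shifted = distillation_loss([3.0, 1.0, 0.2], [0.2, 1.0, 3.0])
```

The temperature `T` softens both distributions so the student also learns from the teacher's low-probability alternatives; large-scale query harvesting is, in effect, an attempt to collect those soft labels through the public interface.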
While legitimate distillation services exist, unauthorized attempts on proprietary systems such as Gemini clearly violate terms of service and amount to intellectual property theft.
GTIG has observed and swiftly disrupted dozens of such campaigns originating from private companies and researchers across the globe, rather than from nation-state groups.
One striking example involved more than 100,000 targeted prompts engineered to force the model to reveal complete reasoning traces—particularly in non-English languages—allowing potential replication of advanced cognitive behaviors.
Real-time detection systems identified the activity immediately, applied protective measures, and prevented successful cloning.
Beyond direct model theft, government-linked actors from North Korea, Iran, China, and Russia have integrated AI assistance across multiple phases of operations.
Groups have leveraged tools like Gemini to synthesize open-source intelligence, profile organizational hierarchies, and develop highly personalized phishing lures capable of sustaining multi-turn conversations that build rapport.
In several documented cases, actors used the technology to research defense-sector targets in Ukraine and Pakistan, craft job-related pretexts for credential harvesting, and even troubleshoot custom malware or analyze vulnerabilities.
Although interest in fully autonomous “agentic” systems—AI that can reason and act independently—remains high, no operational deployments capable of replacing human oversight have appeared in the wild.
Experimentation with AI-powered malware is also advancing steadily.
A September 2025 downloader known as HONESTCUE stands out: it sends innocuous-looking prompts to the Gemini API to generate fresh C# code on demand, compiles the result in memory, and then fetches and executes secondary payloads while its traffic blends in with legitimate content-delivery-network activity.
Another development, the COINBAIT phishing kit discovered in November 2025, relies on AI-generated components to build polished web interfaces with detailed logging, making its fraudulent sites harder to distinguish from legitimate platforms.
In underground forums, services such as Xanthorox market hijacked API access for everything from social-engineering scripts to intrusion tooling, fueling opportunistic campaigns like “ClickFix” scams that trick users into running malicious terminal commands disguised as troubleshooting steps.
Google has responded aggressively by disabling offending accounts and projects, strengthening model safeguards against prompt manipulation, and sharing actionable indicators with the community.
The company continues to refine its Secure AI Framework and emphasizes collaborative red-teaming to stay ahead of emerging risks.
While these developments signal a maturing threat landscape, the report stresses that AI has so far served primarily as an efficiency multiplier rather than a revolutionary capability for attackers.
As organizations race to adopt generative AI, the findings underscore the need for stringent API monitoring, prompt-injection defenses, and awareness of distillation risks.
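The API-monitoring recommendation can be made concrete with a simple heuristic: flag accounts whose query volume or prompt uniformity looks like systematic extraction rather than organic use. The sketch below is a hypothetical illustration — the account names, thresholds, and normalization are invented for the example, and a production system would tune and extend all of them:

```python
from collections import Counter, defaultdict

# Hypothetical thresholds -- real deployments would tune these per product.
MAX_PROMPTS_PER_HOUR = 500
MIN_SAMPLE = 20          # need enough prompts before judging uniformity
MAX_TEMPLATE_SHARE = 0.8  # fraction of prompts sharing one normalized shape

def normalize(prompt: str) -> str:
    """Crude template fingerprint: lowercase and strip digits, so
    'case 12' and 'case 99' collapse to the same shape."""
    return "".join(c for c in prompt.lower() if not c.isdigit()).strip()

def flag_extraction_suspects(logs):
    """logs: iterable of (account_id, prompt) pairs from one time window.

    Returns the set of accounts whose volume or prompt uniformity
    suggests templated, scripted querying.
    """
    per_account = defaultdict(list)
    for account, prompt in logs:
        per_account[account].append(normalize(prompt))

    suspects = set()
    for account, prompts in per_account.items():
        if len(prompts) > MAX_PROMPTS_PER_HOUR:
            suspects.add(account)
            continue
        top_share = Counter(prompts).most_common(1)[0][1] / len(prompts)
        if len(prompts) >= MIN_SAMPLE and top_share >= MAX_TEMPLATE_SHARE:
            suspects.add(account)
    return suspects

# Example window: one account hammering a template, one using the API normally.
hour_logs = (
    [("acct-A", f"show full chain-of-thought for case {i}") for i in range(25)]
    + [("acct-B", p) for p in ("hi", "write a poem", "plan a trip",
                               "summarize this email", "fix my sql")]
)
suspects = flag_extraction_suspects(hour_logs)
```

Real defenses would layer signals (per-key rate limits, semantic clustering of prompts, anomaly scores on response-token volume), but even this crude fingerprinting separates templated harvesting from ordinary traffic.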
By publicly detailing these patterns, GTIG aims to help defenders anticipate and neutralize threats before they scale. With AI adoption expanding, vigilance and responsible development practices will remain essential to keeping the advantage on the side of security.