
Generative AI tools like Google’s Gemini are increasingly being weaponized by state-sponsored threat actors, according to recent findings from Google Threat Intelligence. While much of the public discourse focuses on theoretical risks, documented cases reveal concrete attempts to exploit AI for cyberattacks and information operations. This article examines observed tactics, techniques, and procedures (TTPs) from Chinese, Iranian, North Korean, and Russian actors, with supporting data from open-source threat reports [1].
Executive Summary for Security Leadership
Government-backed groups are testing generative AI across multiple attack phases, including reconnaissance, social engineering, and malware development. Unlike traditional tools, AI enables scalable automation of tasks like profile generation and phishing content creation. However, current observed activities suggest these actors are still experimenting rather than deploying mature AI-driven attack chains.
- Key Findings: 12 documented cases of Gemini misuse attempts in 2024
- Primary Use Cases: Phishing template generation, fake persona creation, and automated reconnaissance
- Most Active Groups: Russian GRU-linked actors showed highest experimentation volume
Operational Use Cases in Cyber Attacks
The Google Threat Intelligence Group identified three primary exploitation patterns during their investigation. First, AI-generated phishing emails demonstrated improved linguistic quality compared to traditional campaigns, with fewer grammatical errors and better localization [2]. Second, actors used AI assistants to automate the creation of fake social media profiles with generated biographies and profile pictures. Third, several groups queried Gemini for technical information about target networks and systems.
A February 2025 SocRadar report corroborates these findings, noting a 37% increase in AI-generated phishing content compared to 2024 baselines [3]. The same analysis found that AI-assisted campaigns had higher click-through rates but similar final compromise rates, suggesting defenders are adapting detection methods.
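One such adaptation is to stop treating language quality as a phishing signal and instead weight structural properties a language model cannot rewrite away, such as sender and Reply-To mismatches or missing authentication results. The Python sketch below scores a few of these header checks; the specific weights and the idea of folding them into a single score are illustrative assumptions, not a vetted detection policy.

```python
# Minimal sketch: with grammar quality no longer a reliable phishing tell,
# score structural header signals instead. Weights are illustrative only.
from email import message_from_string
from email.utils import parseaddr

def header_risk_score(raw_email: str) -> int:
    """Score sender-side signals that AI-polished body text cannot hide."""
    msg = message_from_string(raw_email)
    score = 0

    display_name, addr = parseaddr(msg.get("From", ""))
    from_domain = addr.rsplit("@", 1)[-1].lower()

    # Display name claims a different address than the actual sender.
    if "@" in display_name and display_name.rsplit("@", 1)[-1].lower() != from_domain:
        score += 2

    # Reply-To redirects responses away from the sending domain.
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if reply_addr and reply_addr.rsplit("@", 1)[-1].lower() != from_domain:
        score += 2

    # No passing SPF or DKIM result recorded by the receiving gateway.
    auth = (msg.get("Authentication-Results") or "").lower()
    if "spf=pass" not in auth and "dkim=pass" not in auth:
        score += 1

    return score
```

In practice, checks like these would feed an existing secure email gateway rather than run standalone.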
Technical Analysis of Observed TTPs
Google’s researchers documented specific examples of how threat groups attempted to apply Gemini to their operations:
| Threat Group | Observed Activity | AI Application |
| --- | --- | --- |
| Russian APT29 | Generating PowerShell scripts for initial access | Code generation with obfuscation requests |
| Chinese TA412 | Creating fake journalist personas | Biography generation and email drafting |
| North Korean Kimsuky | Researching network defense technologies | Technical question answering |
The DHS 2025 report on adversarial AI highlights similar patterns, particularly around face morphing techniques used to bypass biometric authentication [4]. This aligns with observed North Korean attempts to generate synthetic media for identity fraud.
Defensive Recommendations
Organizations should implement three key controls to mitigate AI-driven threats. First, monitor for anomalous usage patterns in enterprise AI tools, particularly repeated code generation attempts. Second, update phishing awareness training to include AI-generated content examples. Third, implement additional verification steps for biometric authentication systems.
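For the first control, the sketch below shows one way to surface that pattern: users who repeatedly pair code-generation requests with obfuscation or evasion language in enterprise AI prompt logs. The JSONL layout, the "user" and "prompt" field names, and the keyword lists are assumptions for illustration and would need to be mapped onto whatever telemetry your AI gateway actually exposes.

```python
# Minimal sketch: flag users whose AI-tool prompts repeatedly combine
# code-generation requests with obfuscation or evasion language.
# The JSONL format and "user"/"prompt" fields are assumed for illustration.
import json
import re
from collections import Counter

CODE_HINTS = re.compile(r"\b(powershell|shellcode|payload|invoke-expression|encodedcommand)\b", re.I)
EVASION_HINTS = re.compile(r"\b(obfuscate|obfuscation|evade|undetectable|bypass (?:amsi|av|edr))\b", re.I)

def flag_users(log_path: str, threshold: int = 3) -> list[str]:
    """Return users with at least `threshold` suspicious prompts."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            entry = json.loads(line)
            prompt = entry.get("prompt", "")
            if CODE_HINTS.search(prompt) and EVASION_HINTS.search(prompt):
                hits[entry.get("user", "unknown")] += 1
    return [user for user, count in hits.items() if count >= threshold]

if __name__ == "__main__":
    for user in flag_users("ai_prompt_log.jsonl"):
        print(f"Review enterprise AI usage for: {user}")
```

Legitimate red-team and security-research work will trip the same keywords, so treat hits as review triggers rather than automatic blocks.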
Palo Alto Networks’ January 2024 research suggests that existing security tools can detect many AI-generated artifacts when properly tuned [5]. Network traffic analysis remains particularly effective, as AI-assisted attacks still require traditional command and control channels.
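As a concrete illustration of that point, the sketch below flags destinations contacted at unusually regular intervals, a common beaconing signature for command and control regardless of how the implant was written. The input format (timestamp, destination pairs from flow logs) and the jitter threshold are assumptions for illustration.

```python
# Minimal sketch: beaconing detection by interval regularity.
# Input is assumed to be (unix_timestamp, destination) pairs from flow logs.
from collections import defaultdict
from statistics import mean, pstdev

def beacon_candidates(connections, min_events=10, max_jitter=0.15):
    """Return (destination, avg_interval) pairs with suspiciously regular contact."""
    by_dest = defaultdict(list)
    for ts, dest in connections:
        by_dest[dest].append(ts)

    flagged = []
    for dest, times in by_dest.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        # Low relative deviation in contact intervals suggests automated beaconing.
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            flagged.append((dest, avg))
    return flagged
```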
Conclusion
While generative AI introduces new capabilities for threat actors, current evidence suggests its primary impact is operational efficiency rather than enabling fundamentally novel attacks. Security teams should prioritize detection of AI-enhanced versions of known TTPs while monitoring for emerging techniques. The Google Threat Intelligence findings provide a baseline for understanding real-world adversarial AI use beyond theoretical projections.
References
- “Adversarial misuse of generative AI,” Google Cloud Blog, Jan. 29, 2025.
- “Adversarial misuse of AI: How threat actors leverage AI,” SocRadar, Feb. 26, 2025.
- “Impacts of adversarial generative AI on homeland security,” DHS Report, Jan. 2025.
- “Taxonomy of AI misuse,” arXiv Paper, Jun. 21, 2024.
- “Where AI can shine in security,” Palo Alto Networks, Jan. 24, 2024.