
A developer at Elon Musk’s artificial intelligence company xAI accidentally published a private API key on GitHub, exposing access to internal large language models (LLMs) custom-made for working with data from SpaceX, Tesla, and Twitter/X. The key remained publicly accessible for roughly two months before being discovered and removed, according to a report by KrebsOnSecurity [1]. Security researchers warn the exposure could have enabled prompt injection, supply-chain attacks, or unauthorized data extraction.
Security Breakdown and Exposed Models
The leaked API key provided access to several proprietary LLMs, including unreleased versions of Grok (such as grok-2.5V and research-grok-2p5v-1018) and specialized models like tweet-rejector and grok-spacex-2024-11-04 [1]. GitGuardian detected the exposure, yet the key remained active from March 2 to April 30, 2025, raising questions about xAI’s internal monitoring processes. Philippe Caturegli of Seralys first reported the leak via LinkedIn after discovering the key in a public repository [2].
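Exposures of this kind are typically caught by pattern-based secret scanning, the technique behind tools like GitGuardian. The sketch below shows the core idea in Python: walk the files tracked by git and flag strings matching a vendor key shape. The `xai-` prefix regex is an assumption for illustration only and does not reflect xAI’s actual key format; production scanners ship curated vendor detectors plus entropy analysis.

```python
# secret_scan.py -- minimal pre-commit-style secret scanner (illustrative sketch).
# The key pattern below is an ASSUMPTION for illustration; production scanners
# such as GitGuardian or gitleaks use curated vendor detectors plus entropy checks.
import re
import subprocess
import sys

# Hypothetical xAI-style key shape: "xai-" prefix followed by a long token.
KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{32,}\b")

def tracked_files() -> list[str]:
    """Return the paths of all files tracked by git in the current repo."""
    out = subprocess.run(["git", "ls-files"], capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def scan(paths: list[str]) -> int:
    """Print every line that matches the key pattern; return the hit count."""
    hits = 0
    for path in paths:
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, start=1):
                    if KEY_PATTERN.search(line):
                        print(f"possible secret: {path}:{lineno}")
                        hits += 1
        except OSError:
            continue  # skip unreadable entries (deleted files, sockets, etc.)
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan(tracked_files()) else 0)  # nonzero exit blocks the commit
```

Wired into a pre-commit hook, the nonzero exit stops a commit before a key ever reaches a public remote, which is the kind of gate that would have shortened the exposure window described above.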
Broader Implications for AI Security
This incident mirrors past API key leaks, such as Tesco’s exposure through GitHub Copilot in 2023 [3]. The xAI models were reportedly fine-tuned for internal use cases at Musk’s companies and could therefore contain sensitive operational data. While no evidence of exploitation has surfaced, the two-month exposure window created significant risk. xAI has not commented publicly, but the repository was taken down after direct notification [1].
Government AI Surveillance Concerns
Separate reports indicate growing scrutiny of AI systems in government contexts. The Department of Government Efficiency (DOGE) allegedly used Grok-based tools to analyze Education Department data and monitor federal communications [4][5]. These developments compound ethical questions about AI deployment in sensitive environments.
Remediation and Best Practices
For organizations using LLM APIs, we recommend:
- Implementing automated key rotation (maximum 30-day validity; see the sketch after this list)
- Enforcing repository scanning with tools like GitGuardian
- Restricting model access with IP whitelisting
- Monitoring API usage patterns for anomalies (also sketched below)
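To make the first and last recommendations concrete, here is a minimal sketch that audits key age against the 30-day policy and flags unusual request volume. The record shapes are assumptions for illustration; in practice the inputs would come from a secrets manager’s metadata and the API provider’s usage logs, not hard-coded values.

```python
# key_hygiene.py -- sketch of the rotation and anomaly checks above (illustrative).
# The record shapes are ASSUMPTIONS; real inputs would come from a secrets
# manager's metadata and the API provider's usage logs, not hard-coded values.
from datetime import datetime, timedelta, timezone
from statistics import mean, pstdev

MAX_KEY_AGE = timedelta(days=30)  # the 30-day validity ceiling recommended above

def keys_due_for_rotation(created_at: dict[str, datetime]) -> list[str]:
    """Return the IDs of keys older than the 30-day maximum."""
    now = datetime.now(timezone.utc)
    return [kid for kid, ts in created_at.items() if now - ts > MAX_KEY_AGE]

def is_anomalous(daily_requests: list[int], today: int, sigmas: float = 3.0) -> bool:
    """Flag today's volume if it exceeds the baseline by `sigmas` std devs."""
    if len(daily_requests) < 7:  # wait for a week of history before judging
        return False
    baseline, spread = mean(daily_requests), pstdev(daily_requests)
    # The 1.0 floor keeps a perfectly flat baseline from flagging trivial upticks.
    return today > baseline + sigmas * max(spread, 1.0)

if __name__ == "__main__":
    # Hypothetical data standing in for secrets-manager metadata and usage logs.
    keys = {"ci-deploy": datetime(2025, 3, 2, tzinfo=timezone.utc)}
    print("rotate:", keys_due_for_rotation(keys))  # stale under the 30-day policy
    print("anomaly:", is_anomalous([120, 130, 110, 125, 140, 118, 122], today=900))
```

Under such a policy, a key created on March 2 would be flagged for rotation by early April, capping the exposure window even when detection fails.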
The xAI incident underscores the persistent challenge of credential management in AI ecosystems. As companies race to deploy internal LLMs, security teams must balance accessibility with robust controls to prevent similar exposures.
References
1. “xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs,” KrebsOnSecurity, May 2025.
2. Philippe Caturegli, “Yo xAI, your devs are leaking API keys on GitHub again,” LinkedIn, May 2025.
3. “Copilot Leaks Code I Should Not Have Seen,” Medium, 2023.
4. “DOGE fed Education Dept. data into AI tools for spending audits,” Washington Post, February 2025.
5. “Musk’s DOGE using AI to snoop on US federal workers,” Reuters, April 2025.