As artificial intelligence companies prepare to pour unprecedented amounts of money into the 2026 midterm elections, a significant counter-movement is emerging within the AI industry itself. According to recent reports, some AI professionals and companies are discussing the creation of super PACs specifically designed to curb the industry’s political influence, setting the stage for a high-stakes regulatory battle that will determine how AI is governed for years to come.[1]
The conflict centers on a fundamental question: should AI development be governed by a single federal standard that prioritizes innovation, or should states be permitted to enact their own safety regulations? This debate has escalated from theoretical discussions to a full-scale political war, with over $100 million already committed to influence the outcome. The battle lines are drawn between a well-funded industry coalition backed by the Trump administration and a growing movement of state legislators, safety advocates, and even some AI companies that favor regulatory guardrails.
The Pro-AI Coalition: “Leading the Future” Super PAC
The “Leading the Future” super PAC represents the industry’s aggressive push against state-level AI regulation. Launched with over $100 million from major Silicon Valley figures and firms, this political action committee aims to promote what it describes as a “bold, forward-looking approach to AI.”[2] The group’s primary strategy involves advocating for a single, innovation-friendly federal standard that would preempt state laws, arguing that a “patchwork” of regulations would stifle American innovation and cede technological leadership to China.[6, 7, 8]
Major backers include venture capital firm Andreessen Horowitz (a16z), OpenAI co-founder Greg Brockman, Palantir co-founder Joe Lonsdale, AI startup Perplexity AI, and angel investor Ron Conway.[1, 2, 3] The PAC is run by political strategists Zac Moffatt and Josh Vlasto, both veterans of the successful crypto-focused Fairshake PAC, a sign of the financial and political sophistication behind the effort.[2, 6] Their first political target has been New York Democratic Assemblyman Alex Bores, sponsor of the RAISE Act, against whom they have launched a multi-million-dollar campaign.[2, 3]
The Counter-Movement: Emerging Push for Regulation
In response to the industry’s political spending, talks are underway to create a new network of super PACs that would aim to raise approximately $50 million to support midterm candidates from both parties who prioritize AI regulation.[1] The effort is being spearheaded by Brad Carson, a Democratic former congressman from Oklahoma, and discussions have accelerated among employees at Anthropic, an AI company that has publicly favored safety guardrails, and among donors linked to the “effective altruism” movement.[1]
What makes this regulatory push particularly noteworthy is the unusual political alignment it has created. Influencers and commentators from across the political spectrum, including conservative Matt Walsh and progressive Ryan Grim, have voiced concerns about AI’s societal dangers, creating bipartisan momentum for regulation that transcends traditional political divisions.[5] Alex Bores, the candidate at the center of this storm, has framed the super PAC’s attack as a fundraising and organizing opportunity, stating on Instagram: “The part that scares Trump’s megadonors the most is that I actually understand AI.”[3, 4]
The Flashpoint: New York’s RAISE Act
The Responsible AI Safety and Education (RAISE) Act has become the primary battleground in the AI regulatory war. The state-level bill passed the New York legislature in June 2025 and awaits Governor Kathy Hochul’s signature.[2, 3] The legislation applies specifically to large AI companies that have spent more than $100 million on model training, and it contains several key provisions that have drawn industry opposition.
Under the RAISE Act, covered companies would be required to publish and follow safety and security protocols, implement safeguards against “critical harm” such as aiding the creation of chemical weapons or enabling cyberattacks that cause over $1 billion in damage, disclose serious safety incidents within 72 hours, and face civil penalties of up to $30 million for violations.[2, 3] Assemblyman Bores argues that the bill essentially enforces voluntary commitments the companies have already made, preventing what he describes as a “tobacco company” scenario in which known dangers are hidden from the public.[2, 3]
Federal Government’s Position and Strategy
The Trump administration has aligned itself squarely with the industry’s goals, strongly advocating for a single federal AI standard to override what it characterizes as a “patchwork” of state regulations. President Trump has expressed this position publicly on Truth Social, making it a visible administration priority.[2, 7] The White House has drafted an executive order that would create an “AI Litigation Task Force” within the Justice Department to challenge state AI laws, and that could withhold federal funding from states whose AI laws are deemed non-compliant.[3, 7]
While that effort was reportedly paused briefly, the administration continues to pursue the goal through multiple channels. In addition to executive action, it is working with congressional Republicans to insert a moratorium on state AI laws into must-pass spending bills, creating multiple pressure points for achieving its regulatory objectives.[2, 7] This multi-pronged approach demonstrates the administration’s commitment to establishing federal preemption as the governing principle for AI regulation.
Broader Implications for AI Governance
This conflict represents more than a typical lobbying campaign: it is a fundamental struggle over whether AI development will be governed by a single federal framework or a fragmented system of state jurisdictions. The outcome will likely set the regulatory paradigm for a generation of AI development, making the 2026 midterm elections a de facto referendum on AI governance.[1, 5, 7]
The industry’s aggressive targeting of a technically knowledgeable legislator like Alex Bores, who holds a master’s degree in computer science and previously worked as a Palantir engineer, suggests a specific fear of lawmakers with the expertise to craft and defend sophisticated regulations.[9] That technical fluency moves the debate beyond broad political rhetoric into specific safety considerations, potentially producing more effective, targeted rules that address actual risks rather than perceived threats.
As AI continues to evolve at a rapid pace, the regulatory framework established in the coming years will have profound implications for security professionals across multiple domains. The balance struck between innovation and safety, between federal preemption and state experimentation, will determine how AI systems are developed, deployed, and secured in critical infrastructure, business operations, and daily life. The massive financial commitments on both sides indicate that all parties recognize the stakes involved in this regulatory battle.
References
1. “Fears About A.I. Prompt Talks of Super PACs to Rein In the Industry,” The New York Times, Nov. 25, 2025.
2. “Here’s what’s in the RAISE Act and why it sparked a $100 million political fight,” CNBC, Nov. 24, 2025.
3. “A $100 Million AI Super PAC Targeted New York Democrat Alex Bores. He Thinks It Backfired,” WIRED, Nov. 21, 2025.
4. “CNBC Television interview with Alex Bores,” YouTube, Nov. 24, 2025.
5. “Fears of AI’s Impact Create New Political Alliances and Tensions,” Marketing AI Institute, Nov. 19, 2025.
6. “Exclusive: $100M pro-AI super PAC takes aim at NY Democrat,” POLITICO Pro, Nov. 17, 2025.
7. “AI super PAC drops $10M to kill state regulations nationwide,” The Tech Buzz, Nov. 24, 2025.
8. “Big Tech Launches $100 Million pro-AI Super PAC,” AI Safety Newsletter #62, Center for AI Safety, Aug. 27, 2025.
9. “Alex Bores Instagram Post,” Instagram, Nov. 22, 2025.