
The AI industry’s position on government regulation has undergone a dramatic reversal between 2023 and 2025. Where major tech firms once actively lobbied for federal oversight of artificial intelligence, they now oppose regulatory measures under the current administration, framing them as barriers to competing with China.1
From Biden-Era Advocacy to Trump-Era Resistance
In 2023, OpenAI, Google, and other AI leaders publicly supported the Biden administration’s push for AI regulation, emphasizing existential risks and the need for transparency.2 The administration formalized this approach in Executive Order 14110, which mandated disclosure requirements for foundation models and created new privacy safeguards.3 However, the 2025 White House order “Removing Barriers to American Leadership in Artificial Intelligence” repealed these provisions, marking a policy reversal aligned with industry’s new position.1
Recent petitions from OpenAI and Meta illustrate this shift. A 15-page document from OpenAI argues against state-level AI rules, while Meta seeks exemptions for using copyrighted data in training sets under “fair use” provisions.1 Industry coalitions now claim that compliance costs disproportionately burden startups and hinder innovation.4
Generational Divide in Regulatory Attitudes
While corporations resist oversight, younger demographics demand stronger safeguards. Gen Z activists have mobilized over 100,000 signatures for the Federal AI Transparency Act through groups like Encode Justice.5 Their priorities include:
- Bans on high-risk applications like facial recognition
- Mandatory disclosure of training data sources
- Third-party bias audits for hiring algorithms
This contrast reflects broader societal tensions between innovation priorities and accountability measures. California’s SB 942, which takes effect in 2026, represents a compromise, requiring transparency disclosures for AI systems while avoiding outright bans.4
Global Regulatory Fragmentation
The U.S. policy shift creates stark contrasts with other jurisdictions. The European Union’s AI Act imposes strict categorization of AI systems by risk level, with non-compliance penalties reaching 7% of global revenue.6 China maintains government approval requirements for AI deployments under its 2023 Interim Measures.1
| Region | Key Legislation | Enforcement Mechanism |
|---|---|---|
| EU | GDPR (2018), AI Act (2024) | Revenue-based fines |
| US | CCPA (2020), Colorado AI Act (2024) | Civil penalties ($500K max) |
| China | Interim AI Measures (2023) | Government approval system |
This patchwork creates compliance challenges for multinational firms, particularly around data governance. GDPR’s right to erasure and CCPA’s opt-out provisions directly conflict with U.S. companies’ push for unrestricted data access.7
Security Implications of Deregulation
The removal of transparency requirements raises concerns about:
- Undocumented training data sources creating supply chain risks
- Reduced visibility into model behavior for threat detection
- Weakened accountability for AI-assisted cyber operations
Corporate enforcement struggles compound these issues. Internal bans on consumer AI tools like ChatGPT frequently fail due to shadow IT usage, leaving organizations vulnerable to data leaks.8 Recommended countermeasures include mandatory training on approved enterprise alternatives and SSO-based access controls.
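The shadow-IT detection problem above can be made concrete. One common first step is scanning egress or proxy logs for traffic to consumer AI endpoints. The sketch below is a minimal illustration, assuming a hypothetical space-delimited log format and an illustrative domain blocklist; it does not reflect any specific product’s log schema:

```python
# Sketch: flag proxy-log entries that reach consumer AI endpoints.
# Log format and domain list are illustrative assumptions only.

BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests to blocked AI domains.

    Assumes each line looks like: "<timestamp> <user> <domain> <path>".
    Malformed lines are skipped rather than raising errors.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain in BLOCKED_DOMAINS:
            hits.append((user, domain))
    return hits

if __name__ == "__main__":
    sample = [
        "2025-03-01T09:14:02 alice chatgpt.com /chat",
        "2025-03-01T09:15:10 bob intranet.corp /wiki",
    ]
    print(flag_shadow_ai(sample))  # [('alice', 'chatgpt.com')]
```

In practice such log scans are a detection backstop, not a control: SSO-gated enterprise alternatives address the incentive that drives shadow usage in the first place.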
Looking Ahead
The regulatory pendulum may swing again with future administrations. Existing frameworks such as UNESCO’s AI Ethics Recommendation (2021) and the G7 Hiroshima Process (2023) offer potential middle grounds through voluntary standards.9 However, the current U.S. stance prioritizes speed over safeguards, with industry arguing this is necessary to maintain technological leadership.
As AI capabilities advance, this deregulated environment may force organizations to develop internal governance structures absent federal requirements. The coming years will test whether market forces alone can address AI’s societal risks.
References
- “Tech giants shift stance on AI regulation”. Calcalistech. 2025.
- “Removing Barriers to American Leadership in Artificial Intelligence”. White House Executive Order. 2025.
- “AI Rules and Regulations”. Britannica Money. 2024.
- “Gen Zers demand stronger AI regulation”. Fortune. 2024.
- Encode Justice advocacy platform. 2024.
- “EU AI Act”. European Union. 2024.
- “Global AI Regulatory Tracker”. White & Case. 2025.
- “Corporate AI enforcement challenges”. Reddit/r/ITManagers. 2025.
- “UNESCO AI Ethics Recommendation”. 2021.