U.S. Elected Donald Trump as President: What Can We Expect in Tech Regulations?
The recent U.S. election results, with Donald Trump becoming the 47th president and Republicans securing control of the Senate—and possibly the House—signal major changes in digital and AI policy. Trump’s administration is expected to adopt a deregulatory, innovation-focused approach, contrasting sharply with President Biden’s framework.
Shifts in AI Policy
President-elect Trump has pledged to dismantle Biden’s 2023 AI Executive Order (EO) [1] on his first day in office. Biden’s order, created in response to congressional inaction, includes voluntary provisions that focus on AI’s safety risks, bias mitigation, and the responsible development of AI models. It led to the creation of the U.S. AI Safety Institute (AISI), guidance on model vulnerabilities, and partnerships with tech companies like OpenAI.
However, critics and Trump allies argue that these measures, including mandatory reporting on training and security protocols, are burdensome and risk disclosing trade secrets. Concerns over executive overreach were raised due to the EO’s reliance on the Defense Production Act [2], a law originally designed for national defense.
Trump’s running mate, JD Vance, and other Republicans have expressed concerns about these regulations stifling innovation and reinforcing the power of tech incumbents. Meanwhile, some Republicans criticize NIST’s anti-bias work as politically motivated, arguing it amounts to censorship [3].
What Will Replace the AI Executive Order?
Trump’s previous executive orders supported AI research and skill development, emphasizing “trustworthy” technologies while protecting civil liberties and promoting American values. Yet, specifics on how a new AI policy under Trump might look remain unclear.
Some within Trump’s circle advocate for focusing on AI’s physical risks, such as its potential misuse in bioweapons development. However, new regulations restricting AI may not be a priority, raising questions about the future of institutions like the AISI.
On the one hand, experts state that Trump’s administration may prefer applying existing laws over creating new ones, potentially leaving room for state-led initiatives. States like California and Colorado have already enacted AI-related laws, with California’s recent legislation requiring transparency on AI training [4].
On the other hand, others anticipate that Trump’s protectionist policies will include tighter export controls on AI technologies, especially targeting China. This would build on the Biden administration’s existing restrictions on AI chip exports. However, critics warn that such moves could stifle global cooperation and lead to more authoritarian uses of AI abroad.
Tech Regulation at a Global Scale
As Trump’s election sets the stage for potential deregulation in the U.S., contrasts remain with the approaches taken by other global powers, such as the European Union and China, which have distinct philosophies and strategies for managing technological growth and oversight.
The United States takes a market-driven approach to technology regulation, focusing on innovation and implementing sector-specific rules. This strategy reflects a preference for allowing industries to self-regulate, supported by targeted oversight rather than comprehensive federal mandates. Under President Trump’s administration, this approach is expected to lean further toward deregulation, potentially reducing restrictions on AI development to foster growth and maintain a competitive edge.
In contrast, the European Union (EU) follows a proactive and precautionary path in its technology regulations. The EU places a strong emphasis on protecting individual rights and ensuring fair competition within the market. This commitment is exemplified by its AI Act, which categorizes AI applications based on risk levels and imposes stringent requirements on transparency and accountability for high-risk AI systems. Overall, the EU maintains a high-regulation environment, positioning itself as a leader in promoting ethical AI use and protecting consumer interests.
China, meanwhile, adopts a state-driven model that prioritizes social stability and political control while simultaneously pushing for technological advancement. The country enforces comprehensive regulations that mandate strict safety assessments and labeling of AI-generated content to combat misinformation and align with state objectives. This approach results in tight oversight that allows China to maintain control over the narrative and pace of innovation, ensuring that technological growth aligns with government priorities.
In our monthly webinar on AI developments, we highlight:
1. What’s happening on a global scale that you should know (from an authoritative source, Anove).
2. What you must know and do to govern AI risk while leveraging the AI opportunity.
3. How you can (continuously) advise your stakeholders.
Data protection specialist and Anove CTO Jean-Hugues Migeon will facilitate this session. Migeon is an expert in international law and specialises in technology legislation.
References:
[1] President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/)
[2] The Defense Production Act of 1950: History, Authorities, and Considerations for Congress (https://crsreports.congress.gov/product/pdf/R/R43767)
[3] Study finds that AI models hold opposing views on controversial topics (https://techcrunch.com/2024/06/06/study-finds-ai-models-hold-opposing-views-on-controversial-topics/)
[4] AI Safety Under Republican Leadership, Dean W. Ball (https://www.hyperdimensional.co/p/ai-safety-under-republican-leadership)