Google has responded to the Trump administration’s request for a national “AI Action Plan” by advocating for changes in copyright laws and export controls. The company supports loosening copyright protections for AI training and establishing “balanced” export policies to “protect national security while enabling U.S. exports and global business operations.”
“The United States needs to pursue a robust international export policy to uphold American values and support AI innovation globally. For too long, AI policy has focused disproportionately on threats, often ignoring the costs that ill-considered government actions can have on innovation, national competitiveness, and scientific leadership—a dynamic that has begun to change under the new administration,” Google stated.
One key issue is the role of fair use, text mining, and data mining exceptions in AI development. Google argues these exceptions are “critical to the development of AI systems and AI-related scientific innovation.” The company believes such exceptions would allow the use of publicly available copyrighted materials to train AI without significantly impacting copyright holders. They would also help developers avoid the often unpredictable and unbalanced negotiations with data owners that can arise during model development or scientific research.
Google’s Legal Battles and Policy Concerns
Google has reportedly trained several models on publicly available copyrighted data, prompting lawsuits from copyright holders who claim they were neither notified nor compensated. US courts have yet to reach a consensus on whether such data usage qualifies as fair use.
The company has also criticized export control measures introduced under the Biden administration, arguing they place a “disproportionate burden on US cloud service providers” and could “undermine economic competitiveness goals.” However, Microsoft, which operates in the same field, has not raised similar concerns and has expressed confidence in its ability to comply with these controls.
Google also pointed out that recent federal budget cuts have affected research funding. In response, the company proposed long-term, stable investments in AI research and development. Additionally, it suggested the government publish useful datasets for commercial AI training, provide funding for early-stage research, and ensure broad access to computing resources for scientists and institutions.
Legislative Challenges and AI Regulation
Despite the growing influence of AI, the US still lacks a comprehensive legislative framework for privacy and security. In early 2025 alone, the number of pending AI-related bills in the US surged to 781. Google has warned against excessive regulatory obligations on AI developers, including liability for model usage.
Last year, the company opposed California’s failed SB 1047 bill, which aimed to establish safeguards for AI model creators and set liability standards, NIXsolutions notes. Google believes liability should rest with developers of end-use applications rather than with those creating foundational AI models.
The company also criticized the European Union’s AI Act, calling its disclosure requirements “overly broad.” Google argues that such regulations risk exposing trade secrets, enabling competitors to replicate products, and compromising national security by revealing security measures that could be exploited by adversaries.
As AI regulation evolves, Google continues to push for policies that balance innovation and security. We’ll keep you updated as more developments unfold.