Innovation, China, and the Fair Use Dilemma
Can the West protect creators and still lead the global AI race?
Years ago, I lived and worked in Qatar at Texas A&M’s branch campus, launched in partnership with the Qatar Foundation to support the country’s effort to diversify its economy through education and research. In one memorable meeting, Dean Mark Weichold was asked by Qatari industry leaders, “What would it take to create the next Google here?” His response was clear: “Without strong intellectual property protection, no innovator will bring their best ideas here—regardless of the paycheck.”
That insight has never felt more relevant. As the U.S. now debates how copyright law should apply to GenAI training, we face the inverse challenge. If we weaken the very protections that made America the home of innovation, we risk ceding our leadership in the GenAI era to countries less concerned with the rule of law. In today’s Dispatch, I explore what’s at stake—and why it matters far beyond the courtroom.
The big picture
America’s economic power has long depended on two foundations: a culture of innovation and strong legal protections for intellectual property. A key reason so many technological breakthroughs—from semiconductors to the smartphone—have happened in the United States and the broader West is that our legal traditions protect creators. These protections have made the U.S. a magnet for investment, creativity, and risk-taking.
But in the age of AI, those pillars may be pulling in opposite directions. Training large language models requires vast amounts of copyrighted material. While U.S. courts are still weighing whether this qualifies as fair use, China is moving full speed ahead—often without similar legal constraints. If U.S. companies face costly legal hurdles while Chinese firms do not, we risk falling behind—not on innovation, but on implementation.
Why it matters
If U.S. courts decide that training GenAI models on copyrighted content without permission is not protected by fair use, the result could be:
Massive new costs for model training and data access.
Fewer startups and open-source projects.
Slower innovation—at the very moment China is accelerating.
But if the courts swing too far the other way and declare everything fair game, we risk undermining the creative economy that has powered American influence for decades—from Hollywood to GitHub. Weakening IP protections doesn’t just hurt creators; it erodes the trust and investment that fuel innovation and give the U.S. its economic and cultural edge on the global stage.
This isn’t just a copyright question—it’s a strategic one.
The legal front line
Major lawsuits are testing the limits of fair use and AI development:
The New York Times v. OpenAI & Microsoft: The Times alleges its reporting was scraped and used without permission to train the models behind ChatGPT—threatening its business model and journalistic integrity.
Authors Guild v. OpenAI: Backed by major writers like George R.R. Martin, the Guild argues that unauthorized use of books for model training violates copyright and threatens the viability of authorship as a career.
Getty Images v. Stability AI: Getty alleges that Stability AI trained on its images without permission and that the resulting AI-generated images infringe both its copyrights and its trademarks.
Early rulings are beginning to arrive, and some favor content owners. In Thomson Reuters v. Ross Intelligence, a federal court found that using Westlaw content to train a competing AI legal research tool was not fair use. Outcomes like these will shape the future of AI development in the U.S.—and both the creative and tech sectors are watching closely to see which vision of fair use prevails.
Ultimately, these questions may well end up before the United States Supreme Court.
The real risk: innovation vs. protection
This debate pits two American strengths against each other: our longstanding commitment to intellectual property rights and our drive to lead the next wave of technological innovation. But while the U.S. weighs lawsuits and legal precedent, China is moving quickly and strategically—training models on massive datasets with fewer legal barriers, minimal IP enforcement, and strong government backing.
If American developers are bogged down by litigation while Chinese competitors scale freely, we risk a repeat of what happened in 5G—ceding a generational technology edge to an authoritarian rival. The stakes aren’t just about who wins the AI race—they’re about what kind of digital economy we want to lead. Get it wrong, and we weaken our creative industries, erode investor confidence, and open the door for others to define the future.
The bottom line
We need a third way—one that:
Compensates creators through new licensing models, collectives, or AI-specific rights regimes.
Supports open innovation through fair use carve-outs for research, education, and nonprofit use.
Maintains competitiveness by providing clarity and guardrails, not a full stop on AI development.
Only one institution can chart this path: the U.S. Congress. The courts can interpret the law, but only lawmakers can modernize it for the AI era.
Congress must act. Without clear, balanced legislation, we leave our creative industries exposed, our AI sector constrained, and the door wide open for less democratic nations to shape the rules of the road. The longer we wait, the more we risk falling behind—not because we lack talent or ambition, but because we cannot agree on how to move forward.