US law seems to be converging: no one owns AI-generated output
by Eddie Garmat

Thaler
On March 2, 2026, the United States Supreme Court declined to hear a case about whether art generated by an AI system is copyrightable, Reuters reports. The plaintiff, Stephen Thaler, is a computer scientist who built his own AI image generator, which he calls the “Creativity Machine” (DABUS is the AI at the center of his separate patent cases). He appealed to the high court after the lower courts upheld the U.S. Copyright Office’s 2022 decision rejecting the copyright registration Thaler had requested in 2018 for an image the system generated. The case was first heard by a federal judge in Washington, D.C., in 2023, who ruled in favor of the office. Thaler then appealed to the U.S. Court of Appeals for the District of Columbia Circuit, which affirmed the lower court’s ruling in 2025.
While the denial itself says relatively little about the direction of AI copyright, it forecloses, for now, the possibility of new case law being established on the subject. Case law refers to judicial decisions that set binding legal precedent, and in the realm of intellectual property (the legal ownership of creative and intangible works) such precedent determines who gets the rights to AI-generated content. In the absence of new precedent, the existing case law remains controlling on the question, and that body of law is remarkably consistent: works generated by AI cannot be protected by copyright.
Allen
The U.S. Copyright Office has rejected several applications for copyright protection of works generated by AI models. The Office itself states that “Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression” [emphasis added]. As I discussed in my previous piece on AI copyright, a key distinction in the modern era is that authorship implies a human created the piece. This was the deciding fact in the Copyright Office’s rejection of Jason M. Allen’s request for copyright on an image he generated with the AI image generator Midjourney.
In 2023, the Copyright Office issued a letter stating that “Théâtre D’opéra Spatial,” the image generated with Midjourney, was ineligible for copyright because the work lacked human authorship. This is a recurring theme throughout this piece: work generated by AI cannot be protected because it is not authored by a human.
Kashtanova
Another case covered in my previous piece on AI copyright is Kashtanova and “Zarya of the Dawn,” a comic book created by Kris Kashtanova using Midjourney. Kashtanova initially held a copyright registration for the work before the Copyright Office rescinded it, finding that the images were not authored by a human, Reuters and the Wall Street Journal both report.
Though Kashtanova laid out the panels, wrote the text, and iterated through hundreds of versions of the images, they hold rights only to the text and to the selection and arrangement of the work’s elements; the individual images, not being created by a human, remain unprotected.
NYT v. OpenAI
While current case law is nearly definitive, there has yet to be a final word from the Supreme Court. The only pending case that could provide one is New York Times v. OpenAI.
On December 27, 2023, the New York Times (NYT) filed suit against OpenAI alleging, among other things, that ChatGPT allows users to bypass the Times’s paywall, that it can generate false stories attributed to the paper (damaging its reputation), and that OpenAI illegally used the Times’s articles to train its models, a Columbia Undergraduate Law Review article lists. OpenAI counters that it did not illegally use the Times’s articles, that ChatGPT does not directly copy articles in normal use (so it could not bypass a paywall), and that training a model on publicly available articles falls under fair use. Fair use is a doctrine in copyright law that defines when it is legal to use a copyrighted work without permission from the copyright holder. Examples include (but are not limited to) educational uses, such as a video or a class, and transformative works (works that are inspired by or make reference to the original without simply reproducing it), such as a song remix or news reporting.
The outcome of this case has two major implications, among others: the scope of fair use in training AI, and the question of whether a person whose copyrighted work is used to train an AI has ownership, partial or whole, over the works the model generates.
Though the fair use question is the larger of the two, it falls outside the scope of this piece. My 2025 piece and Audrey Pope’s “NYT v. OpenAI: The Times’s About-Face” in the Harvard Law Review both focus on this issue.
For the question of a person whose work is used to train an AI, I like to use the heavily-inspired-artwork analogy. If someone spends hundreds of hours studying the art of Jackson Pollock and then paints in Pollock’s style, does Pollock own the rights to that painting? Of course not. Likewise, ChatGPT may be able to create articles largely similar in style to the Times’s, but not by verbatim copying. For this reason, the Times does not own the outputs of ChatGPT. And because ChatGPT is not a human, it cannot own its outputs either, so those outputs fall into the public domain.
If this case reaches the Supreme Court, we would get a definitive answer to the question of who owns the output of an AI. The legal community seems hesitant to predict how the case will be decided, although I believe it is likely to be ruled in OpenAI’s favor, which would settle once and for all that no one owns the output of an AI. If the case is instead decided in the Times’s favor, the question of who owns AI output becomes far more complex: modern AI models are trained on trillions of pages of work, making it extraordinarily difficult to trace which sources influenced any given output. A ruling in that direction could force the development of tools to hold AI models, and the companies that own them, accountable for tracking the sources that influence their outputs, or possibly a new way of building AI that does not involve scouring the internet for training data.
In the meantime, case law shows that the outputs of an AI are in the public domain.
Where is AI Copyright Law Heading?
Current case law and decisions by the U.S. Copyright Office point to AI outputs not being copyrightable, and therefore falling into the public domain. From Thaler to Allen to Kashtanova, the Copyright Office has repeatedly determined that AI outputs cannot be copyrighted, and the courts have upheld those determinations. If NYT v. OpenAI reaches the Supreme Court, the court could deliver a final answer on who owns AI outputs, and it would likely find that no one does. It is increasingly clear that no one will ever own the outputs of any AI.
[In this post, we used AI for polish, not purpose]