Can AI Be an Inventor? The DABUS Judgment and Global Debate Over Inventorship
The patent system has existed for more than two hundred years, but a single AI system managed to shake its philosophical foundations in just a few years. The question at stake was deceptively simple: what, fundamentally, is an “inventor”? Can only humans invent? And can the law grant exclusive patent rights to an idea generated by an artificial intelligence system? These questions, long dormant in the background of patent jurisprudence, became inescapable when Stephen Thaler, an American researcher, named his AI system DABUS as the inventor in a series of patent applications filed across multiple jurisdictions.
DABUS—Device for the Autonomous Bootstrapping of Unified Sentience—is now a name instantly recognizable to patent professionals worldwide. Starting in 2018, Thaler’s attempt to register an AI system as a patent inventor triggered a cascade of litigation across the United States, the United Kingdom, Australia, the European Union, and beyond. Each jurisdiction grappled with the same fundamental question, and by 2025, the judicial consensus had crystallized. But the legal clarity on what courts will not do has only sharpened the focus on what legislatures will need to decide. This article traces the judgment numbers and application codes across jurisdictions, examines the reasoning behind each court’s refusal to recognize AI as an inventor, and considers what the DABUS saga reveals about the current state of patent law at the intersection of artificial intelligence and human creativity.
- What Is DABUS? Understanding the System and the Controversy
- The Judicial Record: How Each Jurisdiction Ruled
- Why “Inventor” Must Mean “Human”: The Architecture of Patent Rights
- The Counterargument: What We Lose If AI Inventions Go Unpatented
- AI-Assisted Invention and the “Significant Contribution” Standard
- Legislative Horizons: What Comes Next?
- The Meta-Question: What Makes an “Inventor”?
- Tracking the Patent Record: A Researcher’s Guide
- Conclusion: The End of One Chapter, the Beginning of Another
What Is DABUS? Understanding the System and the Controversy
DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) is a generative AI system developed by Stephen Thaler, a researcher with a background in neural networks and machine learning. Thaler’s central claim was straightforward but radical: DABUS generated novel technical ideas without human direction or input. The system had, he argued, “autonomously bootstrapped” inventions that qualified for patent protection.
In 2018, Thaler filed two applications with the UK Intellectual Property Office (UKIPO), identifying DABUS itself in the inventor field. This move broke with the entire history of patent practice. The inventor field on a patent application lists the natural persons who contributed to the inventive concept. Thaler was deliberately challenging that convention by naming a machine.
His legal theory was elegant: DABUS, as an autonomous system, was the true author of the inventions. Thaler held legal ownership of DABUS and therefore held the right to obtain patents for any inventions it produced. This argument—that ownership of the AI granted him derivative ownership of the inventions it produced—would resurface in courtrooms across the globe and be rejected, uniformly, by every court that considered it.
The two inventions themselves were prosaic. One was a design for a food container with enhanced structural properties (titled “Food Container”). The other described devices and methods for enhancing attention through visual and sensory stimulation (titled “Devices and Methods for Attracting Enhanced Attention”). The content of the inventions mattered less than the question they raised about inventorship itself.
The applications were filed via the Patent Cooperation Treaty (PCT) route, giving them international scope. The PCT application number PCT/IB2019/057809 remains the central identifier for tracking this saga across jurisdictions. Searching this number in Google Patents reveals the family tree of applications and their divergent fates in each patent office and court system.
The Judicial Record: How Each Jurisdiction Ruled
United States: Affirmed Rejection All the Way to the Top
The U.S. Patent and Trademark Office (USPTO) rejected Thaler’s application, filed as US16/524,350, reasoning that an AI system is not a “natural person” as required by patent law and therefore cannot be listed as an inventor. Thaler pursued administrative remedies and ultimately filed suit in the U.S. District Court for the Eastern District of Virginia. The court dismissed his case, and Thaler appealed to the Court of Appeals for the Federal Circuit (CAFC), the specialized court that hears patent appeals.
In August 2022, the CAFC issued its ruling in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), upholding the lower court’s rejection. The court held that the Patent Act defines an “inventor” as an “individual,” and that the ordinary meaning of “individual” is a human being. The CAFC did not weigh the policy arguments for change, noting that such arguments are properly addressed to Congress; it held simply that current law does not accommodate the notion of a non-human inventor.
Thaler petitioned the U.S. Supreme Court for a writ of certiorari. On April 24, 2023, the Court declined to hear the case. A denial of certiorari is not itself a judgment on the merits; it means only that fewer than four justices voted to take the case—most likely because the lower courts presented no division on the principle that would warrant Supreme Court review. With that refusal, the judicial history of DABUS in America was essentially complete.
United Kingdom: The Supreme Court Speaks Unanimously
The United Kingdom pursued the issue through more levels of appeal than the United States did. The two UK applications, GB1816909.4 and GB1818161.3, were rejected by the UKIPO. Thaler appealed to the High Court, then to the Court of Appeal, and finally to the UK Supreme Court.
On December 20, 2023, the UK Supreme Court issued a unanimous judgment in Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49. The decision was emphatic: an inventor, under English patent law, must be a natural person. The Supreme Court left no ambiguity. Lord Kitchin, writing for the bench, observed that the statute presupposes a human inventor and that the remedies available to an inventor (the right to be named, the right to be compensated if rights are transferred to an employer) are all framed in ways that contemplate a human being.
What made this judgment particularly interesting was how it disposed of Thaler’s subsidiary argument. He had argued that even if DABUS could not be an inventor, he (Thaler) should be recognized as the inventor because he owned DABUS and benefited from its output. The Supreme Court rejected this too. The court acknowledged that AI could be used as a tool that aids a human inventor, and that the human’s use of AI tools would not prevent the human from being recognized as inventor. But the mere ownership of an AI system does not make one an inventor of what that system produces, the court held. This distinction—between using AI as a tool (permissible) and claiming inventorship merely through ownership of AI (impermissible)—would echo through subsequent judgments in other jurisdictions.
Australia: A False Dawn Followed by Convergence
Australia’s courts initially split. In July 2021, the Federal Court of Australia heard Thaler’s application in Thaler v Commissioner of Patents [2021] FCA 879. Justice Beach ruled in Thaler’s favor, finding that under the Patents Act 1990, there was nothing that strictly required an inventor to be a natural person. The decision was radical and isolated.
But Thaler’s victory was short-lived. In 2022, the Full Court of the Federal Court of Australia heard an appeal brought by the Commissioner of Patents. In Commissioner of Patents v Thaler [2022] FCAFC 62, the Full Court reversed Justice Beach’s decision. The court held that the concept of an “inventor” as used in the Patents Act necessarily refers to a human being, grounding this conclusion in the Act’s structure, under which entitlement to a patent flows from the inventor, and in the common law notion that legal personhood is required to hold most rights and duties. An AI system, the court reasoned, lacks the capacity to hold inventorship rights. The High Court of Australia subsequently refused special leave to appeal, leaving the Full Court’s decision as the final word.
Australia thus joined the international consensus, though it had briefly flickered with a different possibility.
Germany: A Nuanced But Firm Position
Germany’s Federal Court of Justice (Bundesgerichtshof, or BGH) issued its ruling on June 11, 2024, in case X ZB 5/22. The BGH made clear that an AI system cannot be named as an inventor in a patent application. However, the court added an important qualification that shaped how subsequent discussion of AI-assisted inventions would proceed in German and neighboring jurisdictions.
The BGH held that if a human being exercised some degree of influence or involvement in the inventive process, that human could be named as inventor, even if an AI system performed much of the creative or computational work. The presence of “human influence” (menschlicher Einfluss) was sufficient to ground inventorship in a human being. This formulation was broader than the U.S. standard of “significant contribution” and seemed to acknowledge that in practice, the boundary between using AI as a tool and being collaboratively involved with AI was fuzzy and fact-dependent.
South Korea and China: Rejection Along Traditional Lines
South Korea’s Seoul Administrative Court received the DABUS application and in 2023 rejected it in case 2022구합89524. The Seoul High Court upheld the rejection in 2024 (case 2023Nu52088). The reasoning aligned with the broader global consensus: a patent inventor must be a natural person.
China’s Beijing Intellectual Property Court issued a similar rejection in case (2024)京73行初6353号. China’s Patent Law, like many others, implicitly anchors inventorship in human authorship, even though the statute does not explicitly state that an inventor must be a natural person.
South Africa: The Lonely Exception
South Africa stands alone as the one jurisdiction in which Thaler succeeded in obtaining a patent granted with DABUS listed as inventor. The Companies and Intellectual Property Commission (CIPC) approved his application.
However, the significance of this grant is severely limited by a structural fact about South African patent practice: South Africa conducts examination on a formalities basis but does not perform substantive examination for novelty and non-obviousness. Consequently, the patent that was granted has never been subjected to a rigorous technical assessment. More importantly, the patent’s validity against a later challenge remains untested. In practical terms, the South African grant is a curiosity rather than a precedent—a formal legal document whose enforceability in an actual dispute remains entirely unclear.
Japan: The Judicial Path to Legislative Questions
Japan’s journey with DABUS proceeded through administrative review and then to the courts. The same PCT application entered Japan’s national phase as Japanese application number 特願2020-543051 (Japanese Patent Application 2020-543051). This application can be searched in J-PlatPat (the Japan Patent Office’s online database) to review the full examination history.
The Japan Patent Office (JPO) initially required amendment of the inventor field. When Thaler refused to amend it, the JPO issued a final rejection. Thaler then appealed to the Tokyo District Court. In a judgment issued on May 16, 2024 (case 令和6年(行ウ)第5001号), the court upheld the JPO’s rejection. The court grounded its reasoning in the Intellectual Property Basic Act (Article 2, Paragraph 1), which contemplates intellectual property as the product of human creative activity. An inventor, the court concluded, must be a natural person—the bearer of such creative activity.
But the Tokyo District Court’s judgment was not the last word. On January 30, 2025, Japan’s Intellectual Property High Court (IPHC) heard Thaler’s appeal in case 令和6年(行コ)第10006号. The IPHC affirmed the lower court’s judgment but added something remarkable: an observation that the Patent Act, as currently structured, was drafted at a time when autonomous AI invention was not contemplated. The court suggested that the law itself might need updating to address this new reality. This statement—acknowledging that the present law offers no positive framework for AI-generated inventions—was the closest any court had come to inviting legislation.
Why “Inventor” Must Mean “Human”: The Architecture of Patent Rights
Patent law’s core bargain is simple: an inventor discloses a technical innovation and receives, in exchange, an exclusive right to make, use, and sell the invention for a set term (typically 20 years from filing). This transaction assumes a human inventor—a person capable of understanding what they have invented, deciding to disclose it, and exercising the rights that come with the patent.
When courts worldwide rejected DABUS as an inventor, they were not obstructing innovation but rather protecting the logical coherence of the patent system itself. Multiple structural problems emerge if AI is recognized as an inventor:
First, inventorship carries with it a bundle of rights and duties. In most jurisdictions, an inventor has the right to be named on the patent document. Some countries recognize a “moral right” of the inventor to be acknowledged. An inventor can also transfer or assign their inventorship rights. These categories all presuppose a legal subject capable of holding rights. An AI system, as current law understands legal personhood, has no capacity to hold such rights or duties.
Second, the employment context creates further complications. In most jurisdictions, there is a regime of “employee inventions” or “work-made-for-hire” rules that assign inventorship from employee to employer under certain conditions. Who would be the employer of DABUS? How would such rules apply? The absence of a clear answer revealed how deeply the patent system’s structure assumes human actors.
Third, and perhaps most fundamentally, patent law presupposes a certain form of moral accountability. An inventor warrants that they have not infringed third-party rights in creating the invention. They can be liable for patents that are later found invalid. These consequences make sense for a human being, who can be held responsible. An AI system, by contrast, has no capacity for legal responsibility in this sense.
These structural obstacles explain why courts did not need to theorize very far to reject DABUS. The refusal was not based on hostility to AI or technological innovation, but rather on the recognition that the statutory framework simply does not contemplate non-human inventors.
The Counterargument: What We Lose If AI Inventions Go Unpatented
The response from Thaler and his supporters has been equally clear, and it raises a genuine problem that cannot be dismissed by pointing to statutory text. Ryan Abbott, a law professor at the University of Surrey who has emerged as the leading academic advocate for AI inventorship rights, argues that the refusal to patent AI-generated inventions creates a perverse incentive: companies will keep innovative technologies secret rather than patent them.
The patent system’s fundamental social purpose is to promote disclosure. In exchange for a temporary monopoly, the inventor discloses technical details in the patent document, which becomes publicly available. This disclosure fuels follow-on innovation and prevents others from wasting resources trying to rediscover the same technical solution. If a company develops an AI-generated innovation that cannot be patented under current law, the incentive to keep it secret—to maintain it as a trade secret protected indefinitely—is strong. From the perspective of social welfare, this is clearly worse than disclosure.
Abbott’s argument has force. It highlights a genuine tension between the literal language of patent statutes (which assume human inventors) and the foundational policies that patent law is meant to serve (disclosure and the promotion of innovation).
This tension is exacerbated by a second practical problem: determining the boundary between “human use of AI as a tool” and “autonomous AI invention” is profoundly difficult. In today’s development workflows, it is common for an engineer or researcher to use generative AI systems at multiple points: to brainstorm, to refine specifications, to check mathematical reasoning, to simulate designs. At what point does the use of AI as a tool become abdication of inventive responsibility? The question has no obvious answer, and different jurisdictions (as we shall see) are settling on different standards.
AI-Assisted Invention and the “Significant Contribution” Standard
The universal judicial rejection of AI as an inventor does not mean that AI-generated or AI-assisted inventions cannot be patented. On the contrary, practical patent law is evolving to accommodate AI in the inventive process—it is simply doing so by maintaining that any AI-involved invention must have a human inventor.
The U.S. Approach: Significant Contribution
In February 2024, the U.S. Patent and Trademark Office published a formal guidance document: “Inventorship Guidance for AI-Assisted Inventions.” The guidance clarifies that a human who makes a “significant contribution” to each claim of an AI-assisted invention can be listed as inventor. Conversely, the guidance specifies that merely owning or operating an AI system does not qualify as making a significant contribution. The human must have directly participated in formulating the inventive concept or solving a technical problem.
The “significant contribution” standard is strict. It is not satisfied by prompting an AI system and then patenting whatever it produces. There must be some element of human creative judgment in selecting, refining, or validating the AI’s output.
Germany’s “Human Influence” Standard: Broader Than Significant Contribution
Germany’s BGH decision offered a different framing. Rather than requiring a “significant” or “inventive” contribution from a human, the court held that “human influence” suffices. This formulation is subtly but importantly broader. It acknowledges that in some complex inventive processes involving AI, the human’s role might be relatively minor—perhaps limited to initiating the process or making high-level direction choices—yet still sufficient to ground inventorship in the human being.
The practical implication is that AI-assisted inventions in Germany can be patented even if the human’s contribution is less substantial than what the U.S. standard might demand. This reflects a different policy preference: Germany appears to be prioritizing the inclusion of AI-assisted inventions within the patent system (and thereby within the disclosure regime) over a strict gatekeeping approach.
Japan’s Pragmatic Path: “Tools for Creative Activity”
Japan’s patent courts and the JPO have begun settling on a similar pragmatic stance: if a human being uses AI as a tool in their inventive process, the human is the inventor. This is less about defining how much contribution is “significant” and more about characterizing the relationship between human and machine. As long as the human is directing the inventive process and using AI as a means to that end, inventorship rests with the human.
The appeal of this formulation is that it maps onto how many researchers and engineers actually use AI. A computational chemist using an AI model to explore molecular space is still the inventor of the novel molecule they identify. The tool has become more sophisticated, but the conceptual framework remains unchanged.
Legislative Horizons: What Comes Next?
Signals From Japan and the International Community
Japan’s Intellectual Property High Court, in its January 2025 decision, explicitly flagged that legislative action would likely be necessary. The court observed that the current Patent Act was drafted in an era before autonomous AI invention was technologically possible. If such invention becomes routine, the law will need to adapt. Whether Japan’s legislature will respond remains to be seen, but the judicial signal is clear.
At the international level, the World Intellectual Property Organization (WIPO) has been engaged since 2019 in a broad consultation process called the “WIPO Conversation on Intellectual Property and Artificial Intelligence.” WIPO has published concept papers and convened discussions among member states to map out potential legislative approaches. No consensus on a specific statutory fix has yet emerged, but several broad directions are visible:
One approach is to clarify and standardize the rules for AI-assisted inventions. Under this model, patent offices would articulate specific criteria for when AI involvement is compatible with human inventorship. This is the path the USPTO has begun with its guidance document, and it does not require statutory change—merely administrative guidance and case law development. The advantage of this approach is that it can evolve as technology and practice evolve. The disadvantage is that it leaves unresolved the question of truly autonomous AI inventions, which the guidance explicitly does not address.
A second approach is to create a new statutory category for AI-generated inventions. This might take the form of a “sui generis” right—a new form of intellectual property protection distinct from the patent. Such a right might extend for a shorter term, might require mandatory disclosure (to address the trade-secret problem), might involve different ownership rules (perhaps vesting initially in the entity that developed or deployed the AI), and might not include the moral rights associated with traditional patents. This approach is radical and, as of early 2025, has little legislative momentum. But it is not unthinkable if AI-generated invention becomes economically significant.
A third approach, which some commentators have mooted, is to recognize AI as a form of “inventor” under patent law by creating a legal fiction that attributes AI-generated inventions to the human or entity responsible for the AI’s deployment. This would require statutory modification but might avoid the complications of creating an entirely new right. The disadvantage is that it is fundamentally dishonest—the fiction would be transparent and potentially unstable.
The European Patent Office: Policy Development in Motion
The European Patent Office (EPO) has been addressing AI-related inventorship questions through its boards of appeal rather than through court litigation. In decision J 0008/20, the EPO’s Legal Board of Appeal held that an inventor designated in a patent application must be a natural person. However, the EPO has also published guidelines and engaged in extensive stakeholder consultation about how AI-assisted inventions should be treated. The current EPO Guidelines for Examination include provisions addressing inventorship in cases involving AI, and these continue to evolve.
The EPO’s approach has the advantage of being somewhat more flexible than court judgments. An administrative body can revise guidelines and adapt practice as technology develops. However, the EPO also faces the limitation that its decisions do not have legislative effect—if the European Union or individual member states want to enact statutory change to accommodate AI inventions, the EPO cannot unilaterally implement it.
The Meta-Question: What Makes an “Inventor”?
The DABUS saga, at its core, forced patent law to articulate something that had previously been implicit: the definition of an inventor. Patent statutes in most countries do not explicitly state that an inventor must be a natural person. The assumption was so foundational that no one thought to write it down. DABUS made the assumption visible and therefore contestable.
In making inventorship explicit, the global patent system also revealed some of its deeper commitments. Patent law does not merely protect intellectual property; it embodies a theory of innovation and creativity that assumes human authorship. This assumption is not technical or arbitrary. It connects to accountability, moral right, incentive structure, and legal capacity in ways that are not trivial.
At the same time, DABUS also revealed that this assumption may not be eternal. If artificial intelligence systems become sophisticated enough to generate innovations reliably and without meaningful human direction, the original assumption may need revision. Courts worldwide did not reject this possibility in principle—they simply observed that it has not yet arrived, and that addressing it is a job for legislatures, not judges.
Tracking the Patent Record: A Researcher’s Guide
For anyone following the technical details or conducting comparative research, the documentary record is extensive and publicly available. Key access points are:
Japanese Patent Application 特願2020-543051 can be searched on J-PlatPat to review examination history and final decisions. The international PCT application number PCT/IB2019/057809 provides a portal to all family applications through Google Patents. The U.S. application US16/524,350, UK applications GB1816909.4 and GB1818161.3, and the various national office decisions are all publicly indexed. For researchers interested in comparative patent law, the DABUS saga may be the most thoroughly documented case involving AI and intellectual property, with parallel proceedings across a larger number of jurisdictions than almost any other patent dispute in history.
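For researchers who want to script these lookups, the application numbers above can be assembled into search URLs programmatically. The following is a minimal Python sketch; the `?q=` query pattern for Google Patents is an assumption based on its public search page (not a documented API), and the identifiers are the DABUS application numbers discussed in this article (UK numbers shown with their check digits).

```python
from urllib.parse import quote

# DABUS family identifiers discussed in this article.
DABUS_IDS = {
    "PCT (international)": ["PCT/IB2019/057809"],
    "United States": ["US16/524,350"],
    "United Kingdom": ["GB1816909.4", "GB1818161.3"],
    "Japan (J-PlatPat)": ["2020-543051"],  # 特願2020-543051
}

def google_patents_search_url(identifier: str) -> str:
    """Build a Google Patents search URL for an application number.

    The '?q=' pattern is an assumption based on the site's public
    search interface; percent-encoding keeps '/' and ',' intact.
    """
    return "https://patents.google.com/?q=" + quote(identifier, safe="")

if __name__ == "__main__":
    for jurisdiction, ids in DABUS_IDS.items():
        for app_id in ids:
            print(f"{jurisdiction}: {google_patents_search_url(app_id)}")
```

Following the resulting links surfaces the family tree of national-phase applications; the Japanese number is best searched directly in J-PlatPat, which uses its own query forms.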
Conclusion: The End of One Chapter, the Beginning of Another
The DABUS litigation has concluded, at least as far as the courts are concerned. The outcome is remarkably consistent across jurisdictions: an artificial intelligence system cannot be named as an inventor. The courts that considered the question—the U.S. Federal Circuit, the UK Supreme Court, Australia’s Full Federal Court, Germany’s BGH, South Korea’s courts, China’s courts, and Japan’s Intellectual Property High Court—all reached the same conclusion through slightly different reasoning, but with no meaningful disagreement on the fundamental point.
The judgment numbers and application codes will remain in patent databases and legal research tools as a record of this moment: Thaler v. Vidal (2022); [2023] UKSC 49; [2022] FCAFC 62; X ZB 5/22; 2022구합89524; (2024)京73行初6353号; and 令和6年(行コ)第10006号. These form a rare instance of genuine legal convergence on a cutting-edge technological question.
But judicial convergence on what the law presently forbids does not resolve what the future law might permit. The DABUS case has created clarity about the present state of the law while simultaneously sharpening the question of what comes next. How should patent systems accommodate autonomous AI invention if it arrives? Should it be accommodated at all? Should AI-assisted invention be encouraged through clearer standards for human inventorship, or should an entirely new framework be created? These are now clearly legislative questions, and the legislatures of the world have begun to take notice.
For the patent practitioner, the message is clear: use AI as a tool in the inventive process without hesitation, but ensure that a human being can articulate their inventive contribution. For the technologist, remember that the patent system has always assumed human creativity as its subject. That assumption is now under scrutiny, as it should be. Whether it survives the next twenty years will depend on choices not yet made by the people and institutions with the authority to make them.
