http://ipkitten.blogspot.com/2024/02/guest-post-can-ai-be-considered-phosita.html

The IPKat has received and is pleased to host the following guest contribution by Katfriend Anna Pokrovskaya (RUDN University, Intellectual Property Center “Skolkovo”) reviewing current debates in the US and the EU on the role that Artificial Intelligence (AI) may play in patentability considerations. Here’s what Anna writes:

Can AI be considered a PHOSITA? Policy debates in the US and the EU

by Anna Pokrovskaya
“Reviewing” the prior art

The question of whether AI can be considered a person having ordinary skill in the art (PHOSITA) in a given field is a topic of significant debate in the policy realms of both the US and the EU. As AI continues to advance and permeate various aspects of our lives, including healthcare, finance, and technology, understanding its role and level of expertise in these fields becomes crucial for developing appropriate policies and legal frameworks.

US

In the US, the debate centres on the legal implications of AI’s capabilities and its impact on intellectual property law, including in relation to patentability. The U.S. Patent and Trademark Office (USPTO) and the courts traditionally assess non-obviousness by reference to the knowledge of the PHOSITA (35 U.S.C. § 103).
While AI has undeniably contributed to significant technological advances, the current legal framework does not explicitly address whether AI can be considered a PHOSITA (Thales Visionix Inc. v. United States, 850 F.3d 1343 (Fed. Cir. 2017)). Nevertheless, the USPTO has issued guidance indicating that AI-related inventions are eligible for patent protection, implying a recognition of AI’s role in innovation (Public Views on Artificial Intelligence and Intellectual Property Policy, 2020).
One of the key considerations in evaluating AI’s expertise is the ability of AI systems to replicate, and even exceed, human knowledge and understanding in a specific field. AI systems can analyze vast amounts of data, learn patterns, and generate valuable insights, often at a level beyond human capacity. However, debate continues over whether AI can possess the same degree of experience, judgement, and intuition as a human specialist. The multidimensional nature of expertise in any given field adds further complexity, requiring a nuanced understanding of AI’s capabilities and limitations.

EU

Turning to the EU, the policy debate on whether AI can stand in for the skilled person is shaped by broader considerations such as ethics, data protection, and liability (Ethics Guidelines for Trustworthy AI).
The European Patent Convention does not define the skilled person as such: Article 56 EPC simply requires that, to involve an inventive step, an invention must not be obvious to “a person skilled in the art”, a figure that EPO case law treats as a hypothetical practitioner with average knowledge and skill in the relevant technical field at the relevant date. Some argue that AI’s ability to process vast amounts of data, learn from it, and generate novel solutions gives it a level of expertise that should be recognized in the assessment of inventive step (G 1/19).
The European Commission has been actively working on an AI regulatory framework that promotes the responsible and trustworthy use of AI while upholding fundamental rights and values (see IPKat). Discussions revolve around categorizing AI systems by level of risk, distinguishing high-risk AI systems, such as those used in critical sectors or with potential for significant impact, from low-risk ones. The EU emphasizes the importance of human oversight and accountability in the deployment of AI systems (see IPKat). Under the proposed regulation, high-risk AI systems would be subject to strict requirements, including conformity assessments, risk management, and transparency obligations (EU AI Act). This approach aims to strike a balance between fostering innovation and protecting individuals and society from the potential risks associated with AI technologies.

Some reflections

Overall, the question of whether AI can be considered a PHOSITA remains the subject of a complex and ever-evolving policy debate. It requires careful consideration of AI’s capabilities, limitations, and ethical implications, while also addressing legal and regulatory challenges. As AI continues to advance, policymakers in the US, EU, and around the world will need to collaborate and adapt their policies to ensure that AI is integrated responsibly and effectively in various domains, benefiting society as a whole.
Policy recommendations in this area should aim at a balanced approach: one that takes account of the capabilities and limitations of AI systems while safeguarding the rights and interests of inventors and of society at large. Potential recommendations include:
  1. Establishing a framework for evaluating AI expertise: Policymakers could develop guidelines or criteria for assessing the capabilities, reliability, and robustness of AI systems in a given field. Such a framework could consider factors such as the accuracy, interpretability, and generalizability of AI models, as well as the availability of relevant training data and the transparency of the underlying algorithms (a purely illustrative sketch of such a rubric follows this list).
  2. Encouraging collaboration and interdisciplinary research: To bridge the gap between AI capabilities and human expertise, policymakers could promote collaboration between AI researchers and domain experts in various fields. Interdisciplinary research efforts can help ensure that AI systems understand the nuances and complexities of a particular domain, leading to more accurate assessments of their expertise.
  3. Continuous monitoring and evaluation: As the field of AI continues to evolve rapidly, policymakers should establish mechanisms for ongoing monitoring and evaluation of AI systems’ capabilities. Regular assessments can help determine the extent to which AI systems can be considered specialists and inform any necessary updates to policies and regulations.
  4. Ethical considerations: Policymakers should also prioritize ethical considerations when addressing the question of AI expertise. This includes transparency in AI decision-making, avoiding biases, promoting fairness, and ensuring accountability.
  5. International collaboration and harmonization: Given the global nature of AI development and patenting, policymakers in the US and the EU should collaborate and harmonize their policies to the extent possible. This would promote consistency in assessing AI expertise and avoid discrepancies in patentability criteria across jurisdictions.
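
To make the first recommendation more concrete, the following minimal sketch (in Python) shows one hypothetical way the factors it lists (accuracy, interpretability, generalizability, availability of training data, and transparency) could be combined into a simple scoring rubric. The class name, criteria, weights, and example values are illustrative assumptions made here; no patent office framework currently prescribes such a score.

    from dataclasses import dataclass

    # Hypothetical, illustrative rubric only: the criteria, weights, and example
    # values below are assumptions made for this sketch, not part of any USPTO
    # or EPO framework.

    @dataclass
    class AIExpertiseAssessment:
        accuracy: float           # performance on domain benchmarks, 0..1
        interpretability: float   # how well outputs can be explained, 0..1
        generalizability: float   # robustness beyond the training data, 0..1
        data_availability: float  # adequacy of relevant training data, 0..1
        transparency: float       # openness of the algorithm and its documentation, 0..1

        # Assumed weights; a real framework would set these through consultation.
        WEIGHTS = {
            "accuracy": 0.30,
            "interpretability": 0.20,
            "generalizability": 0.20,
            "data_availability": 0.15,
            "transparency": 0.15,
        }

        def score(self) -> float:
            """Return a weighted aggregate between 0 and 1."""
            return sum(getattr(self, name) * w for name, w in self.WEIGHTS.items())


    if __name__ == "__main__":
        candidate = AIExpertiseAssessment(
            accuracy=0.9,
            interpretability=0.5,
            generalizability=0.6,
            data_availability=0.8,
            transparency=0.4,
        )
        print(f"Illustrative expertise score: {candidate.score():.2f}")

Any real evaluation would of course involve qualitative and legal judgement that a numeric score cannot capture; the point of the sketch is only to show how the factors in recommendation 1 might be operationalized.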
The recommendations above are just some of those that could inform the ongoing debates surrounding AI expertise in the US and the EU. Overall, it is essential for policymakers to take a multidisciplinary approach, consulting experts in AI research, intellectual property law, ethics, and other relevant fields, in order to strike the right balance between encouraging innovation and protecting inventors’ rights.

Content reproduced from The IPKat as permitted under the Creative Commons Licence (UK).