In recent years, artificial intelligence (AI) has become deeply embedded in the health sector, driving strides in patient care, diagnostics, and pharmaceuticals. While AI is constantly opening up new opportunities in the healthtech sector, it also poses the challenge of understanding and managing the risks associated with its use. Although the benefits of using AI in healthcare are substantial, the risks range from privacy concerns over the collection and use of data to bias in algorithms. Beyond these key risks, as AI applications in healthtech advance, so does the legal labyrinth surrounding them, particularly concerning intellectual property (IP) rights.
To foster innovation and address the risks associated with AI, calls for AI-specific regulation have grown globally. The European Union (EU) has been proactive in creating a conducive cross-sector legal framework for AI by adopting the long-debated proposal for an EU Artificial Intelligence Act (AI Act)1, the first harmonised regulatory framework for AI systems within the EU. The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal. Medical devices, such as AI-enabled diagnostic tools, therapeutic devices or implantable devices like pacemakers, will be deemed high-risk AI systems. A high-risk categorisation means that developers and users must adhere to requirements covering testing, proper documentation of data quality and an accountability framework that details human oversight.
Another area with legal implications for AI in healthtech is IP rights. Questions of inventorship, ownership and patentability of AI-made inventions remain, for now, at least somewhat unsettled. However, the European Patent Office (EPO) has issued guidelines for patenting AI-based inventions, which can be extended to AI in healthtech. The EPO identifies three possible categories of AI inventions: 1) human-made inventions that use AI to verify the outcome, 2) inventions in which a human identifies a problem and uses AI to find a solution and 3) AI-made inventions, in which AI identifies a problem and proposes a solution without human intervention. The EPO treats inventions involving AI as "computer-implemented inventions" (CII). An AI-related invention may be patentable when "AI leaves the abstract realm by applying it to solve a technical problem in a field of technology". Generally, the approach to CIIs varies between the jurisdictions of EPO member states.
When AI systems are used in essential business operations, or when introducing them involves considerable investment, companies in the healthtech sector should prepare now for the impact of the AI Act and consider other AI-related questions as part of their operations. Should it emerge at a later stage that an AI system falls under the classification of a high-risk or even prohibited AI system, its use may be permitted only to a limited degree, or be disallowed entirely, from the AI Act's date of application. Moreover, complying with the AI Act's requirements could incur additional expenses.
HealthTech Connect is an evening of healthtech professionals, a pitching competition and new visions. It is an occasion for healthcare technology companies to tap into the legal and investment expertise required for a successful journey. Read more and join HealthTech Connect.
1 The initial proposal for the Artificial Intelligence Act came from the European Commission in April 2021. In late 2022, the Council of the European Union adopted its so-called general approach position on the legislation. On 14 June 2023, the European Parliament adopted its amendments, and the draft text of the legislation currently serves as the basis for negotiations between the member states and the European Commission, a process that can extend over a significant duration.