The EU as a pioneer of responsible innovation

The Commission proposal for a new AI regulation

Artificial intelligence research dates back to the mid-1900s. However, owing to the unprecedented capacity of modern computing and the availability of data, we are now experiencing a golden age of artificial intelligence. For all its benefits, this development has also introduced new technology-related risks as well as uncertainties in determining legal responsibility. The EU is seeking to pave the way in the field of AI and, in April, the European Commission published the world’s first proposal for a comprehensive regulation of artificial intelligence. The proposal aims to guarantee the EU’s competitiveness in developing new technologies and to ensure that AI systems are trustworthy.

Artificial intelligence – advanced analytics or self-aware super systems?

AI is currently a hot topic, and the term may be used to refer to a large variety of different technologies. A key challenge in discussing artificial intelligence is pinpointing what it means precisely. For those fixated on dystopias, the daunting prospect of self-aware robots from sci-fi films may spring to mind. In practice, however, the concept of artificial intelligence often refers to nearly any advanced analytics solution that is based on machine learning and can be used to automate data processing and decision-making.

From a technical perspective, there is no precise definition of artificial intelligence. Generally, artificial intelligence can be used to refer to a machine’s ability to exhibit skills that are traditionally linked to human intelligence (such as learning, planning, deduction and creation) without continuous control by a human user. For machine learning technologies, a machine’s ability to independently improve its performance through further experience and data is also essential. Despite these typical criteria, the concept of artificial intelligence continues to evolve as technology develops.

Practical applications of artificial intelligence are already part and parcel of our everyday life, since many of the tools we use daily contain some form of artificial intelligence. Such tools include voice-controlled virtual assistants, automated translation solutions, tools for recommending marketing and other content, and many smart home applications.

 

 

Ethical concerns and the acceptable use of artificial intelligence

The development of artificial intelligence has raised serious concerns about the risks surrounding related technologies and their acceptable use, thereby sparking widespread debate on the appropriate guidance and ethics for AI. China and the United States have been forerunners in the development of artificial intelligence. At the same time, the social credit system and other forms of citizen surveillance in China, as well as the extensive use of customer data by American technology giants, have raised concerns about the ethical use of AI systems.

With the rising use of artificial intelligence, the resulting technologies have also become increasingly complex. Some AI systems have already become so complicated that their users are unable to understand their underlying logic or how the system produces a particular result. Such ‘black box’ issues jeopardise the requirements of transparency for decision-making concerning individuals and may, in practice, generate discriminating or incorrect results.

Moreover, the lack of understanding of how AI systems work makes it difficult to ensure that the systems operate in accordance with the law. Appropriate regulatory supervision and the individual’s right to impact decision-making in the context of artificial intelligence may prove all but impossible if an artificial intelligence application has the intellectual advantage over its operator.

The responsible and reliable nature of an AI system is, in many respects, determined in its development phase. However, the data used during that development stage can contain intentional or unintentional distortions. Discrimination and bias may also be reflected in artificial intelligence and its results since AI systems trained with mass data reproduce (often unacknowledged) prejudices. All of the preconceived attitudes that culture and the media instill in humans may, therefore, manifest themselves in the operation of artificial intelligence. A system may not originally employ such preconceived attitudes, but is prone to learning them over time.

A key challenge for building trust in AI systems naturally lies in the appropriate allocation of liabilities for the damages caused by an artificial intelligence-based system. On the EU level, there are both general and sector-specific initiatives aiming at resolving questions concerning AI liability. As there is currently no uniform liability regime, the allocation of liabilities by contractual means remains a central tool to this end.

EU leading the way in the regulation of artificial intelligence

Artificial intelligence constitutes a key element of society’s digital transformation and is currently one of the EU’s legislative priorities. Although current legislation provides certain safeguards and rules, it is not enough to address the specific and novel issues raised by AI systems.

In April, the European Commission published its proposal for the world’s first comprehensive regulation of artificial intelligence.

“The Commission’s proposal seeks to introduce a clear set of rules for the use and development of artificial intelligence. The new regime would ensure the safety, transparency, ethics and neutrality of AI systems used in the EU.”

The Commission’s proposal follows a risk-based approach, under which AI systems are categorised according to the risks they pose to people’s health, safety and fundamental rights:

1. Prohibited AI systems

Certain applications of artificial intelligence, which are deemed contrary to the values of the EU, would be subject to an outright ban. These include AI technologies that manipulate people politically by influencing the subconscious, social credit systems operated by authorities, and the use of real-time facial recognition in public spaces for law enforcement purposes.

2. High-risk AI systems

The majority of the requirements under the proposal target so-called high-risk AI systems that pose a significant risk to people’s health, safety or fundamental rights. The proposal would impose strict requirements on using such AI systems and on placing them on the market.

High-risk AI systems would include technology that is used in the context of, for example, critical infrastructure, education, employment, public services and law enforcement. For example, AI systems related to robotic surgery, recruitment or credit scoring could be categorised as high-risk.

3. Limited-risk AI systems

The proposal sets lighter, transparency-related requirements for limited-risk AI systems. For example, when applications such as chatbots or tools that recognise human features or emotions are used, people must be notified that they are interacting with artificial intelligence.

4. Minimal-risk AI systems

The proposal would not impose any new restrictions on the development or use of so-called minimal-risk AI systems. Such applications include, for example, video games and spam filters that use artificial intelligence. In practice, the majority of AI applications would fall under this category.
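The four tiers above amount to a simple classification scheme. As a purely illustrative sketch (the tier names and example systems are drawn from the proposal; the code structure, names and mapping are our own, and actual classification would depend on the regulation’s annexes and the context of use):

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative summary of the proposal's four risk categories."""
    PROHIBITED = "banned outright"
    HIGH = "strict requirements before market placement"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical mapping of example systems mentioned in the proposal to tiers.
EXAMPLES = {
    "social credit system operated by authorities": RiskTier.PROHIBITED,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLES["chatbot"].value)  # → transparency obligations
```

The point of the tiered design is that regulatory burden scales with risk: most systems fall in the minimal tier and face no new obligations at all.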

 

Similarly to the EU’s General Data Protection Regulation (GDPR), the requirements under the proposed regulation would be backed by considerable sanctions. According to the proposal, breaching the regulation could lead to an administrative fine of up to EUR 30 million or 6% of the company’s total worldwide annual turnover, whichever is higher.
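The fine mechanic follows the GDPR model: a flat ceiling or a turnover percentage, whichever yields the larger amount. A minimal sketch of the arithmetic, assuming the “whichever is higher” reading of the proposal (the function name and the sample turnover figure are our own illustrations):

```python
def max_fine(annual_turnover_eur: float) -> float:
    """Illustrative ceiling for the most serious breaches under the
    proposed regulation: EUR 30 million or 6% of total worldwide
    annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# For a company with EUR 1 billion in turnover, the 6% prong exceeds
# the flat EUR 30 million cap:
print(max_fine(1_000_000_000))  # → 60000000.0 (EUR 60 million)
```

For smaller companies the flat EUR 30 million cap dominates, so the turnover prong only bites once annual turnover exceeds EUR 500 million.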

Since its publication, the proposal has sparked lively debate. Although the EU’s efforts to promote the reliable and ethical use of artificial intelligence have been commended, the scope of the regulation, the list of prohibited AI systems and the boundaries between the risk categories in particular have raised questions among practitioners. With the concept of artificial intelligence itself being a complex matter, the EU faces obvious challenges in formulating the regulation in a manner that gives its provisions a clearly defined scope while accommodating future technological development.

 

Stay tuned!

In our future Quarterly issues, we will be looking into contractual arrangements concerning the development and implementation of AI systems, the development of artificial intelligence in general, and the legal issues and practical tips related to the procurement of AI-based applications.

