The EU as a pioneer of responsible innovation

The Commission proposal for a new AI regulation

The earliest artificial intelligence research dates back to the mid-20th century. However, owing to the unprecedented capacity of modern computing and the availability of data, we are now experiencing a golden age of artificial intelligence. For all its benefits, this development has also introduced new technology-related risks as well as uncertainties in determining legal responsibilities. The EU is seeking to pave the way in the field of AI and, in April, the European Commission published the world's first proposal for a comprehensive regulation of artificial intelligence. The proposal aims to guarantee the EU's competitiveness in developing new technologies and to ensure that AI systems are reliable.

Artificial intelligence – advanced analytics or self-aware super systems?

AI is currently a hot topic, and the term is used to refer to a wide variety of technologies. A key challenge in discussing artificial intelligence is pinpointing what artificial intelligence actually means. For those fixed on dystopias, the daunting prospect of self-aware robots from sci-fi films may spring to mind. In practice, however, the concept of artificial intelligence often refers to nearly any advanced analytics solution that is based on machine learning and that can be used to automate data processing and decision-making.

From a technical perspective, there is no precise definition of artificial intelligence. Generally, artificial intelligence refers to a machine's ability to exhibit skills that are traditionally linked to human intelligence (such as learning, planning, deduction and creation) without continuous control by a human user. For machine learning technologies, a machine's ability to independently improve its performance through further experience and data is also essential. Despite these typical criteria, the concept of artificial intelligence continues to evolve as technology develops.

Practical applications of artificial intelligence are already part and parcel of our everyday life, since many of the tools we use daily contain some form of artificial intelligence. Such tools include voice-controlled virtual assistants, automated translation solutions, tools for recommending marketing and other content, and many smart home applications.


Ethical concerns and the acceptable use of artificial intelligence

The development of artificial intelligence has raised serious concerns about the risks surrounding related technologies and their acceptable use, thereby sparking widespread debate on the appropriate guidance and ethics for AI. China and the United States have been forerunners in the development of artificial intelligence. At the same time, the social credit system and other forms of citizen surveillance in China, as well as the extensive utilisation of customer data by American technology giants, have raised concerns about the ethical use of AI systems.

With the rising use of artificial intelligence, the resulting technologies have also become increasingly complex. Some AI systems have already become so complicated that their users are unable to understand their underlying logic or how the system produces a particular result. Such ‘black box’ issues jeopardise the requirements of transparency for decision-making concerning individuals and may, in practice, generate discriminating or incorrect results.

Moreover, the lack of understanding of how AI systems work makes it difficult to ensure that the systems operate in accordance with the law. Appropriate regulatory supervision and the individual’s right to impact decision-making in the context of artificial intelligence may prove all but impossible if an artificial intelligence application has the intellectual advantage over its operator.


The responsible and reliable nature of an AI system is, in many respects, determined during its development phase. However, the data used at that stage can contain intentional or unintentional distortions. Discrimination and bias may also be reflected in artificial intelligence and its results, since AI systems trained on mass data reproduce (often unacknowledged) prejudices. All of the preconceived attitudes that culture and the media instil in humans may therefore manifest themselves in the operation of artificial intelligence. A system may not originally exhibit such preconceived attitudes, but is prone to learning them over time.

A key challenge for building trust in AI systems naturally lies in the appropriate allocation of liabilities for the damages caused by an artificial intelligence-based system. On the EU level, there are both general and sector-specific initiatives aiming at resolving questions concerning AI liability. As there is currently no uniform liability regime, the allocation of liabilities by contractual means remains a central tool to this end.

EU leading the way in the regulation of artificial intelligence

Artificial intelligence constitutes a key element of society's digital transformation and is currently one of the EU's legislative priorities. Although current legislation provides certain safeguards and rules, it is not sufficient to address the specific and novel issues brought on by AI systems.

In April, the European Commission published its proposal for the world's first comprehensive regulation of artificial intelligence.

“The Commission’s proposal seeks to introduce a clear set of rules for the use and development of artificial intelligence. The new regime would ensure the safety, transparency, ethics and neutrality of AI systems used in the EU.”

The Commission's proposal takes a risk-based approach, in which AI systems are categorised based on the risks they pose to people's health, safety and fundamental rights:

1. Prohibited AI systems

Certain applications of artificial intelligence, which are deemed against the values of the EU, would be subject to an outright ban. These include the political manipulation of people with AI technologies that affect the subconscious, authorities’ social credit systems, and the use of real-time face recognition technologies in public spaces for law enforcement purposes.

2. High-risk AI systems

The majority of the requirements under the proposal target so-called high-risk AI systems that pose a significant risk to people's health, safety or fundamental rights. The proposal would impose strict requirements for using such AI systems and for placing them on the market.

High-risk AI systems would include technology that is used in the context of, for example, critical infrastructure, education, employment, public services and law enforcement. For example, AI systems related to robotic surgery, recruitment or credit scoring could be categorised as high-risk.

3. Limited-risk AI systems

The proposal sets lighter, transparency-related requirements for limited-risk AI systems. For example, if AI applications such as chatbots or tools that recognise human features or emotions are used, people must be notified that they are interacting with artificial intelligence.

4. Minimal-risk AI systems

The proposal would not impose any new restrictions on the development or use of so-called minimal-risk AI systems. Such applications would include video games or spam filters that use artificial intelligence. In practice, the majority of artificial intelligence applications would fall under this category.

 

Similarly to the EU's General Data Protection Regulation (GDPR), the requirements under the proposed regulation would be enforceable by considerable sanctions. According to the proposal, breaching the regulation could lead to a maximum administrative fine of EUR 30 million or 6% of the relevant company's total worldwide annual turnover, whichever is higher.
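The sanction ceiling works as simple arithmetic: under the proposal, the applicable maximum for the most serious breaches is the higher of the fixed amount and the turnover-based amount. A minimal sketch in Python, with illustrative function and variable names (not terms from the proposal):

```python
# Sketch of the proposed fine ceiling: the higher of EUR 30 million
# or 6% of total worldwide annual turnover. Names are illustrative.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for the most serious breaches."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, the 6% limb applies (EUR 60m);
# for a company with EUR 100 million in turnover, the fixed EUR 30m limb applies.
print(max_fine_eur(1_000_000_000))
print(max_fine_eur(100_000_000))
```

In practice, this means the fixed EUR 30 million amount operates as a floor on the ceiling for smaller companies, while the turnover-based limb scales the exposure for large multinationals.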

Since its publication, the proposal has sparked lively debate. Although the EU's efforts to promote the reliable and ethical use of artificial intelligence have been commended, the scope of the regulation, the list of prohibited AI systems and the boundaries between the different risk categories in particular have raised questions among practitioners. With the concept of artificial intelligence itself being difficult to define, the EU faces obvious challenges in formulating the regulation in a manner that gives its articles a clearly defined scope while accommodating future technological development.

 

Stay tuned!

In our future Quarterly issues, we will be looking into contractual arrangements concerning the development and implementation of AI systems, the development of artificial intelligence in general, and the legal issues and practical tips related to the procurement of AI-based applications.
