Ethical concerns and the acceptable use of artificial intelligence
The development of artificial intelligence has raised serious concerns about the risks surrounding related technologies and their acceptable use, sparking widespread debate on appropriate guidance and ethics for AI. China and the United States have been forerunners in the development of artificial intelligence. At the same time, the social credit system and other forms of surveillance of citizens in China, as well as the extensive utilisation of customer data by American technology giants, have raised concerns about the ethical use of AI systems.
With the rising use of artificial intelligence, the resulting technologies have also become increasingly complex. Some AI systems have already become so complicated that their users are unable to understand their underlying logic or how the system produces a particular result. Such ‘black box’ issues jeopardise the transparency requirements for decision-making concerning individuals and may, in practice, generate discriminatory or incorrect results.
Moreover, the lack of understanding of how AI systems work makes it difficult to ensure that the systems operate in accordance with the law. Appropriate regulatory supervision and the individual’s right to influence decision-making in the context of artificial intelligence may prove all but impossible if an artificial intelligence application has an intellectual advantage over its operator.
The responsible and reliable nature of an AI system is, in many respects, determined in its development phase. However, the data used during that development stage can contain intentional or unintentional distortions. Discrimination and bias may also be reflected in artificial intelligence and its results, since AI systems trained on mass data reproduce (often unacknowledged) prejudices. All of the preconceived attitudes that culture and the media instil in humans may, therefore, manifest themselves in the operation of artificial intelligence. A system may not originally employ such preconceived attitudes but is prone to learning them over time, as the sketch below illustrates.
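To make this mechanism concrete, the following minimal sketch (not drawn from any real case) shows how a simple classifier trained on synthetic, historically skewed records can reproduce that skew. It assumes the scikit-learn library is available; all data, figures and variable names are hypothetical.

```python
# A minimal, illustrative sketch of bias reproduction, assuming scikit-learn.
# All records below are synthetic and hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring records: [years_of_experience, group],
# where 'group' stands in for a protected attribute. Past decisions were
# skewed: group 1 candidates were mostly rejected despite equal experience.
X_train = [
    [5, 0], [6, 0], [7, 0], [8, 0],  # group 0 candidates
    [5, 1], [6, 1], [7, 1], [8, 1],  # group 1 candidates
]
y_train = [1, 1, 1, 1, 0, 0, 0, 1]   # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two equally qualified candidates who differ only in group membership:
# the model, never instructed to discriminate, is nonetheless likely to
# favour group 0, because that pattern is embedded in its training data.
print(model.predict([[6, 0], [6, 1]]))
```

Nothing in the code ‘decides’ to discriminate; the bias enters solely through the training data, which is precisely why distortions introduced in the development phase are so difficult to detect afterwards.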
A key challenge for building trust in AI systems naturally lies in the appropriate allocation of liability for damage caused by an artificial intelligence-based system. At the EU level, there are both general and sector-specific initiatives aimed at resolving questions concerning AI liability. As there is currently no uniform liability regime, the allocation of liabilities by contractual means remains a central tool to this end.
EU leading the way in the regulation of artificial intelligence
Artificial intelligence constitutes a key element of society’s digital transformation and is currently one of the EU’s legislative priorities. Although current legislation provides certain safeguards and rules, it is not enough to address the specific and novel issues brought on by AI systems.
In April, the European Commission published its proposal for the world’s first comprehensive regulation of artificial intelligence.
“The Commission’s proposal seeks to introduce a clear set of rules for the use and development of artificial intelligence. The new regime would ensure the safety, transparency, ethics and neutrality of AI systems used in the EU.”
The Commission’s proposal is based on a risk-based approach, in which AI systems are categorised according to the risks they pose to people’s health, safety and fundamental rights:
1. Prohibited AI systems
Certain applications of artificial intelligence that are deemed contrary to EU values would be subject to an outright ban. These include the political manipulation of people with AI technologies that affect the subconscious, social scoring systems operated by public authorities, and the use of real-time facial recognition technologies in public spaces for law enforcement purposes.
2. High-risk AI systems
The majority of the requirements under the proposal target so-called high-risk AI systems, which pose a significant risk to people’s health, safety or fundamental rights. The proposal would impose strict requirements on using such AI systems and on placing them on the market.
High-risk AI systems would include technology used in the context of, for example, critical infrastructure, education, employment, public services and law enforcement. AI systems related to robotic surgery, recruitment or credit scoring, for instance, could be categorised as high-risk.
3. Limited-risk AI systems
The proposal sets lighter, transparency-related requirements for limited-risk AI systems. For example, if AI applications such as chatbots, or tools that aim to recognise human features or emotions, are used, people must be notified that they are interacting with artificial intelligence.
4. Minimal-risk AI systems
The proposal would not impose any new restrictions on the development or use of so-called minimal-risk AI systems. Such applications would include video games or spam filters that use artificial intelligence. In practice, the majority of artificial intelligence applications would fall under this category.
Similarly to the EU’s General Data Protection Regulation (GDPR), the requirements under the proposed regulation would be backed by considerable sanctions. According to the proposal, breaching the regulation could lead to a maximum administrative fine of EUR 30 million or 6% of the relevant company’s total worldwide annual turnover, whichever is higher.
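As a rough illustration of how that cap operates (the function name and the example turnover figure below are ours, not the proposal’s):

```python
def maximum_fine(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative upper bound under the proposal: the higher of
    EUR 30 million or 6% of total worldwide annual turnover."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# For a hypothetical company with EUR 2 billion in annual turnover, the
# ceiling would be EUR 120 million, as 6% of turnover exceeds the
# EUR 30 million floor.
print(maximum_fine(2_000_000_000))  # 120000000.0
```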
Since its publication, the proposal has sparked lively debate. Although the EU’s efforts to promote the reliable and ethical use of artificial intelligence have been commended, the scope of the regulation, the list of prohibited AI systems and the boundaries between the different risk categories in particular have raised questions among practitioners. With the concept of artificial intelligence itself being complex, the EU faces obvious challenges in formulating the regulation in a manner that gives its articles a clearly defined scope while accommodating future technological development.
Stay tuned!
In our future Quarterly issues, we will be looking into contractual arrangements concerning the development and implementation of AI systems, the development of artificial intelligence in general, and the legal issues and practical tips related to the procurement of AI-based applications.