Mapping and Analyzing Use Cases for AI

D&I’s AI Prep List for Lawyers – Vol 2.

Posted on 18 Dec 2024

As with all work, prioritising AI-related tasks is essential to ensure you focus your time where it is most needed. Even knowing this, the constant overflow of to-dos can be overwhelming – especially when every task is accompanied by a request to complete it yesterday. In this D&I Quarterly series, Iiris Kivikari, our Head of IP, Media and AI, gives you a peek into what is taking up space on our team members’ (virtual) desks in order to help you figure out what AI work you might want to be focusing on right now.

Identifying Beneficial Use Cases

One of the typical first steps in responsibly integrating AI into a business, particularly in regulated fields such as finance and healthcare, is to map out potential use cases where AI could provide significant benefits.

Our clients often start by conducting a comprehensive review of their operations to identify areas where AI can enhance efficiency, accuracy, and innovation. This process involves collaboration between various departments, including business units and administrative functions, to ensure a holistic view of the potential applications of AI.

For instance, in the healthcare sector, AI can be used to improve diagnostic accuracy, streamline administrative processes, and personalize patient care. In finance, AI can enhance fraud detection and improve customer service through chatbots. Not to mention that most office workers, regardless of their area of business, could use a helping hand in sifting through and responding to emails.

“After mapping out these use cases, companies can then prioritize which AI projects to pursue based on their potential impact and feasibility.”
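To make this prioritisation step concrete, below is a minimal sketch of how an impact-and-feasibility scoring could be recorded. The use case names, the 1–5 scales and the scoring heuristic are illustrative assumptions of our own, not a standard methodology:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int       # estimated business benefit, 1 (low) to 5 (high)
    feasibility: int  # technical and organisational feasibility, 1 (low) to 5 (high)

    @property
    def priority(self) -> int:
        # Illustrative heuristic: multiply the two scores so that a
        # high-impact but currently infeasible idea does not top the list.
        return self.impact * self.feasibility

candidates = [
    UseCase("Diagnostic decision support", impact=5, feasibility=2),
    UseCase("Email triage assistant", impact=3, feasibility=5),
    UseCase("Fraud detection enhancement", impact=4, feasibility=4),
]

# Highest combined score first
for uc in sorted(candidates, key=lambda u: u.priority, reverse=True):
    print(f"{uc.name}: priority score {uc.priority}")
```

The value of an exercise like this lies less in the arithmetic and more in forcing the business units and administrative functions involved to make their assumptions about impact and feasibility explicit.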

Conducting an In-Depth Legal Analysis

Once you have identified which use cases you want to try, each one should undergo a thorough legal analysis to ensure compliance with relevant regulations and standards. This step is especially crucial in regulated industries, where the misuse of AI can lead to significant legal and reputational risks. Nevertheless, if you read part 1 of our AI Prep List series, you know that even situations where AI is harnessed for the benefit of employees themselves can have their own pitfalls – which you can avoid if you do your due diligence and know where to look.

Our team works closely with clients to analyse the legal implications of each use case, considering factors such as data protection, privacy, intellectual property, communications laws, liability, (business) area-specific regulations, cyber security, ethical considerations and, of course, the AI Act. In healthcare, AI systems must adhere to, for example, the AI Act, general and patient data protection laws, data security laws, medical device regulations, and even provisions on the quality of the output text – just to name a few. By conducting a use case-specific legal analysis, organisations can establish clear guidelines and prerequisites for the compliant use of AI in each identified use case. This is often imperative, because – as we all know – the fact that AI is technically capable of something does not necessarily mean we are allowed to use it for that purpose.
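To illustrate what the outcome of a use case-specific legal analysis might look like in structured form, here is a minimal sketch. The list of legal dimensions simply mirrors the factors mentioned above, and the class and method names are our own illustrative choices:

```python
from dataclasses import dataclass, field

# Legal dimensions drawn from the factors discussed above; the exact
# list will naturally vary per use case, sector and jurisdiction.
LEGAL_DIMENSIONS = [
    "AI Act",
    "Data protection and privacy",
    "Intellectual property",
    "Sector-specific regulation",
    "Cyber security",
    "Liability",
    "Ethical considerations",
]

@dataclass
class LegalAssessment:
    use_case: str
    findings: dict = field(default_factory=dict)

    def record(self, dimension: str, conclusion: str) -> None:
        # Store the conclusion for a single legal dimension
        if dimension not in LEGAL_DIMENSIONS:
            raise ValueError(f"Unknown dimension: {dimension}")
        self.findings[dimension] = conclusion

    def open_items(self) -> list:
        # Dimensions that have not yet been assessed for this use case
        return [d for d in LEGAL_DIMENSIONS if d not in self.findings]

assessment = LegalAssessment("Email triage assistant")
assessment.record("Data protection and privacy", "Impact assessment needed before deployment")
print(assessment.open_items())
```

The point of the open_items check is simply that an analysis is not complete until every relevant dimension has a documented conclusion.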

Establishing a Framework for Compliance

After mapping and analyzing use cases, the next step is to establish a framework for ongoing compliance. This involves creating the necessary policies and procedures that govern the development, procurement, deployment and monitoring of AI systems, and the use of the AI outputs. For example, many of our clients are implementing cross-functional AI governance committees to oversee the implementation of new use cases and ensure that the use of AI is duly considered in internal processes and policies.

Key components of this framework include the continuous monitoring of AI systems and training programmes for employees on AI-related issues (i.e. working towards AI literacy). As always, change is also easier to bring about if the top management is committed and involved in getting things done.

By establishing a comprehensive compliance framework, companies can mitigate risks and ensure that their use of AI remains responsible and aligned with regulatory requirements. This is not to say that (all) AI processes and policies need to be separate from pre-existing ones; on the contrary, they can and often even should be integrated. In particular, your existing structures related to data protection, intellectual property, data security and IT may form a brilliant foundation for AI work too. If so, all that is needed is to tweak these existing organisational structures to encompass the quirks that are inherent to AI.
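For those who prefer structure over prose, the sketch below shows what a single entry in such an AI use case register could look like. All field names and the simplified risk labels are illustrative assumptions on our part rather than terminology prescribed by the AI Act or any particular governance standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    use_case: str
    business_owner: str
    approved_by_ai_committee: bool
    risk_label: str          # simplified internal label, e.g. "low", "elevated", "high"
    linked_policies: list    # existing policies this use case relies on
    next_review: date

    def review_due(self, today=None) -> bool:
        # Flag entries whose periodic review date has passed
        return (today or date.today()) >= self.next_review

entry = RegisterEntry(
    use_case="Customer service chatbot",
    business_owner="Head of Customer Operations",
    approved_by_ai_committee=True,
    risk_label="elevated",
    linked_policies=["Data protection policy", "Information security policy"],
    next_review=date(2025, 6, 30),
)
print(entry.review_due(date(2025, 7, 1)))  # True – the periodic review is overdue
```

In practice, a register like this will usually live in your existing data protection or information security tooling rather than in standalone code, which is exactly the kind of integration described above.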

Read also D&I’s AI Prep List for Lawyers – Vol 1. AI and Communications
