Arash Heidarian

Decoding the EU AI Act: A Practical Guide for Seamless Integration into MLOps

Updated: Nov 21, 2023



Introduction

AI's evolution from a mere buzzword to a transformative force across industries has made its broader societal implications a pivotal concern. In response, the European Union has introduced the EU AI Act to establish regulations around artificial intelligence technologies. Discussions among European Union legislators aiming to find a middle ground on a risk-centric structure for overseeing artificial intelligence applications appear delicately poised. Brando Benifei, a Member of the European Parliament and one of the legislative rapporteurs for AI regulation, characterized the negotiations on the AI Act as being in a complex and challenging phase during a recent roundtable organized by the European Center for Not-For-Profit Law and the civil society association EDRi.

The proposed EU AI Act encompasses a legislative framework for governing various AI systems, spanning both high- and low-risk AI applications. This framework is designed to emphasize transparency, accountability, and responsible use of AI, while fostering innovation and competition within the EU's digital market. Integrating compliance with the EU AI Act into business operations, team structure, and decision-making processes is now critical for any data and AI-oriented company.

Although there are many challenges in getting things legally in place, it is inevitable that sooner or later all teams and processes will have to adopt a legal framework into their agenda before taking any ambitious or risky decisions. What we know of the EU AI Act so far is that it focuses heavily on transparency, documentation, and governance. What is happening in Europe should be eye-opening for governments across the globe, and it will eventually push more legislation and enforcement in the near future.

To ensure adherence to the Act, several key questions need to be addressed:

  1. How does the Act measure compliance?

  2. What procedures are necessary to establish these metrics?

  3. How can ongoing compliance be ensured across teams to avoid breaching defined thresholds?

Art. 3 No. 1 of the AI Act defines an "artificial intelligence system (AI system)" as:

"machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments"


Risk Categories 

The EU AI Act classifies artificial intelligence (AI) systems into four distinct risk categories based on their potential impact and the level of risk they pose to individuals and society:


  • Minimal Risk: AI systems falling under this category are considered to pose minimal or no risk to individuals or society. These systems are typically low-impact and do not require specific regulatory measures. Examples may include basic rule-based decision-support systems with minimal consequences, computer games, and AI-based spam filters.


  • Low Risk: This category encompasses AI systems with a low potential for harm. Applications like simple chatbots, basic machine learning models, and certain types of expert systems may fall into this group. While these systems have a higher impact than those in the minimal risk category, they still entail minimal regulatory requirements.


  • High Risk: AI systems categorized as high risk are those with the potential to cause significant harm or critical consequences if they fail. This category covers AI applications used in the critical areas listed in Annex III, such as healthcare, transportation, law enforcement, recruitment, and human resources, as well as products subject to EU product safety legislation. High-risk systems are subject to more stringent regulatory requirements to ensure safety, transparency, and accountability, with heightened regulatory scrutiny and compliance with predefined standards.


  • Unacceptable Risk: The highest-risk category is labeled unacceptable risk. AI systems associated with risks deemed unacceptable for use within the European Union fall into this category. These may include applications with severe and irreversible consequences, posing significant threats to fundamental rights and safety. Such systems are prohibited within the EU to safeguard individuals and prevent adverse societal impacts. Social scoring systems and real-time biometric identification in public spaces are a few examples in this category.


The classification framework shown in Figure 1 enables tailored regulatory approaches, ensuring that AI systems are subject to appropriate levels of scrutiny and safeguards based on the risks they pose.


Figure 1. EU AI Risk Categories
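
As a first step toward operationalizing this classification, the sketch below (Python) shows how a team might record a rough, first-pass risk tier for each system before legal review. The keyword lists, the function name triage_risk, and the default to the low-risk tier are my own illustrative assumptions, not rules taken from the Act.

from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers described by the EU AI Act."""
    MINIMAL = "minimal"
    LOW = "low"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative keyword lists only; a real assessment must follow the Act's
# annexes and involve legal review.
PROHIBITED_USE_CASES = {"social scoring", "real-time biometric identification"}
HIGH_RISK_DOMAINS = {"healthcare", "transportation", "law enforcement",
                     "recruitment", "human resources"}

def triage_risk(use_case, domain):
    """Very rough first-pass triage of an AI system into a risk tier."""
    if use_case.lower() in PROHIBITED_USE_CASES:
        return RiskCategory.UNACCEPTABLE
    if domain.lower() in HIGH_RISK_DOMAINS:
        return RiskCategory.HIGH
    # Everything below high risk needs a human call between low and minimal;
    # default to LOW so that a reviewer has to explicitly downgrade it.
    return RiskCategory.LOW

print(triage_risk("candidate ranking", "recruitment"))  # RiskCategory.HIGH

The point of such a helper is not to automate the legal decision, but to force every new system through the same documented checkpoint before it reaches production.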

EU AI Act Measurements

According to a summary by Stanford University's Center for Research on Foundation Models (CRFM), the criteria measured under the EU AI Act can be grouped into four main categories: Data, Compute, Model, and Deployment (Figure 2). The table shown in Figure 2 is self-explanatory and gives a good overview of what needs to be measured and disclosed.



Figure 2. EU AI Act Measurement Categories
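
As an illustration of how these four measurement categories could be captured in practice, here is a minimal sketch of a per-model disclosure record. The class name ActDisclosure and all field names are assumptions chosen for readability, not fields prescribed by the Act or by the CRFM summary.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ActDisclosure:
    """Per-model disclosure record grouped by the four measurement
    categories (Data, Compute, Model, Deployment).
    Field names are illustrative, not prescribed by the Act."""
    # Data
    data_sources: list = field(default_factory=list)
    data_governance_notes: str = ""
    # Compute
    training_compute_flops: float = 0.0
    energy_consumption_kwh: float = 0.0
    # Model
    model_card_uri: str = ""
    evaluation_results: dict = field(default_factory=dict)
    # Deployment
    deployment_context: str = ""
    incident_reporting_contact: str = ""

    def to_json(self):
        return json.dumps(asdict(self), indent=2)

record = ActDisclosure(data_sources=["internal CRM exports"],
                       deployment_context="customer-support chatbot")
print(record.to_json())

Keeping one such record per model, versioned alongside the code, makes the disclosure obligations a routine output of the pipeline rather than a one-off reporting exercise.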

To summarize all the requirements, I cannot overemphasize four main pillars: transparency, documentation, logging, and monitoring, for every single operation in the MLOps life-cycle (Figure 3). To achieve transparency, we need a proper mechanism in place to log and monitor every resource involved in the model, the data, and the compute power used in the process. With everything monitored and logged, we can then document every subprocess and main process.



Figure 3. AI Act Pillars
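
To make the logging and monitoring pillars concrete, the following is a minimal sketch of an audit decorator that records the start, outcome, duration, and declared data/compute resources of each MLOps pipeline step using Python's standard logging module. The decorator name audited_step, the resources parameter, and the log fields are illustrative assumptions rather than a prescribed format.

import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mlops.audit")

def audited_step(step_name):
    """Log start, outcome, duration, and declared resources for a pipeline
    step, so every subprocess leaves a documented trace."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, resources=None, **kwargs):
            start = time.time()
            logger.info("step=%s status=started resources=%s",
                        step_name, json.dumps(resources or {}))
            try:
                result = func(*args, **kwargs)
                logger.info("step=%s status=succeeded duration_s=%.2f",
                            step_name, time.time() - start)
                return result
            except Exception:
                logger.exception("step=%s status=failed duration_s=%.2f",
                                 step_name, time.time() - start)
                raise
        return wrapper
    return decorator

@audited_step("train_model")
def train_model(epochs=3):
    # Placeholder for the actual training logic.
    return {"epochs": epochs}

train_model(epochs=5,
            resources={"dataset": "s3://bucket/train.parquet", "compute": "1x A100"})

In a real setup the same pattern would ship these records to a central log store, so that the documentation pillar can be generated from the logs instead of being written by hand.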

Embedding the AI Act into MLOps

According to the most recent updates on the EU AI Act reported on the EY website, the penalties for non-compliance with the AI Act are significant and can have a severe impact on a provider's or deployer's business. They range from €10 million to €40 million or 2% to 7% of global annual turnover, depending on the severity of the infringement. This is a clear indication that a breach of the Act could deal disastrous blows to businesses, not only financially but also reputationally. Hence, it is essential to equip the disaster recovery (DR) process with the AI Act's regulations. To do so, a team of MLOps engineers, cyber security engineers, legal experts, and solution architects should prepare service level objectives (SLOs), indicators (SLIs), and agreements (SLAs) according to the thresholds and deadlines identified in the AI Act.

In the SLA, all concerns, thresholds, deadlines, and risky actions should be identified. The SLOs should then define boundaries for the SLIs and drive a proper warning/alert system to prevent potential harm before it escalates or affects users. I have tried to capture this process in the diagram shown in Figure 4. With the sophisticated monitoring and logging tools available on cloud services, implementing such a mechanism is not a difficult task; preparing for and identifying areas of risk and a mitigation plan is where businesses need to invest more.



Figure 4. Embedding the AI Act in the DR and MLOps process
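
To illustrate how such a warning/alert mechanism could sit on top of the SLIs, here is a minimal sketch that checks compliance SLIs against SLO warning and breach thresholds. The metric names and threshold values are placeholders I made up for the example, not figures taken from the AI Act or from any real SLA.

from dataclasses import dataclass

@dataclass
class ComplianceSLO:
    """Ties a compliance SLI to a warning and a breach threshold.
    Metric names and numbers are placeholders, not values from the Act."""
    sli_name: str
    warning_threshold: float
    breach_threshold: float

SLOS = [
    ComplianceSLO("undocumented_model_releases", warning_threshold=1, breach_threshold=3),
    ComplianceSLO("days_since_risk_review", warning_threshold=60, breach_threshold=90),
]

def evaluate(slis):
    """Return (severity, sli_name) pairs for every SLO that is at risk."""
    alerts = []
    for slo in SLOS:
        value = slis.get(slo.sli_name)
        if value is None:
            alerts.append(("unknown", slo.sli_name))  # missing data is itself a finding
        elif value >= slo.breach_threshold:
            alerts.append(("breach", slo.sli_name))
        elif value >= slo.warning_threshold:
            alerts.append(("warning", slo.sli_name))
    return alerts

print(evaluate({"undocumented_model_releases": 0, "days_since_risk_review": 75}))
# -> [('warning', 'days_since_risk_review')]

Wiring such checks into the existing cloud alerting stack is the easy part; the investment goes into agreeing, with legal and security, which SLIs and thresholds actually reflect the Act's obligations.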

Summary

The recently proposed EU AI Act outlines stringent regulations, introducing significant penalties for non-compliance that can profoundly affect businesses financially and reputationally. This blog post emphasizes four pivotal pillars—transparency, documentation, logging, and monitoring—in the MLOps life-cycle to meet the Act's requirements. Transparency is achieved through robust mechanisms that log and monitor the resources behind models, data, and compute power. Comprehensive monitoring and logging enable thorough documentation of every subprocess and main process. Considering the severe consequences of non-compliance, it is imperative to integrate AI Act regulations into disaster recovery processes. A collaborative effort involving MLOps engineers, cybersecurity experts, legal professionals, and solution architects is essential to formulate Service Level Objectives (SLOs), Indicators (SLIs), and Agreements (SLAs) aligned with the thresholds and deadlines identified in the AI Act. These proactive measures not only mitigate financial risks but also safeguard the reputation of businesses in the evolving landscape of AI governance.

