
Why You Should Care About the EU AI Act Today

Data Engine Aug 09, 2023

Most of us working in the ML and AI industry will see a headline about a new regulation and gloss over it. New regulations fall into the category of 'legalese', a term that treats any regulation, contract, or other legal document as if it were written in a foreign language. Data scientists don't usually spend time deciphering legalese, just as most non-data scientists have no idea what's involved in fine-tuning a large language model (LLM). So when a new rule comes out, we prefer to let the legal team handle it and assume they will let us know if anything is relevant.

That's partially true: your legal team will probably get involved at some point, but by then you may have accumulated heaps of technical debt that could take months to untangle. A general understanding of what's coming from the regulators goes a long way toward reducing that debt, letting you work in a way that makes compliance quick once the rules come into effect, or avoid building problematic capabilities to begin with. That is exactly why I wrote this post!


The European Union Artificial Intelligence Act

The EU AI Act is a proposal that was adopted by the European Commission in 2021. In June 2023 it was passed by the European Parliament, and the final version is expected to be agreed later in 2023.

The rule will apply to anyone who places AI systems on the market or puts them into service in any of the 27 member states of the European Union, regardless of where they are based.

Reasons and goals of the proposal

The European Commission rightly points out that AI can bring massive benefits to bear on many of the difficult challenges the European Union and the world are facing, such as climate, health, and mobility. However, the technology also comes with certain risks. Consider a not-so-distant future where deep fakes are so prevalent that the public is unable to differentiate between what is real and what is fake. The implications of this scenario are far-reaching, impacting not only individuals' social and privacy concerns but also posing a threat to the very fabric of democratic systems.

The commission is striving for a balanced approach that will effectively manage the risks while maintaining and advancing the EU as a leader in developing positive AI technologies.

Objectives of the EU AI Act:

  1. Ensure safety and respect for existing laws, such as the GDPR, copyright law, and other safety legislation
  2. Ensure legal certainty — uncertainty about future regulations can scare away investors and entrepreneurs
  3. Strengthen governance and enforcement of laws on fundamental rights and safety for AI systems
  4. Create a single market for AI throughout the EU — prevent a scenario where each of the 27 member states has different rules and regulations for AI

The proposal attempts to present a comprehensive and future-proof framework. It includes flexible mechanisms that enable it to be dynamically adapted as the technology evolves.

AI Risk Categories

The rule defines four risk-based categories. The risk-based approach means that different models are treated differently, as opposed to a blanket or tool-specific approach. This is considered a relatively forward-looking form of rule-making: it does not prescribe specific tools, and it generally leaves room for innovation.

Understanding which category your model or product fits into is key to understanding the requirements you will need to comply with; the short sketch after the four category descriptions below shows one way to encode this triage as a first-pass checklist.

Category 1: Unacceptable risk

These are AI systems that:

  1. Deploy subliminal techniques in order to distort a person’s behavior in a manner that is likely to cause physical or psychological harm;
  2. Exploit vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behavior of a person in a way that is likely to cause physical or psychological harm;
  3. Perform social scoring that can lead to unfavorable treatment of individuals or groups — this could mean an AI model that influences credit scores or hiring processes based on potential profiling of race, gender, religion, etc.
  4. Deploy ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, except in specific extreme situations.

Restrictions: These systems pose an unacceptable risk and are prohibited.

Category 2: High Risk

AI systems are defined as high-risk if they fall into either of the following categories:

  1. Safety components of regulated products. For example, products that require a CE mark (machinery, toys, radio equipment), aircraft, motor vehicles, medical devices, PPE, and more.
  2. Certain AI systems for bio-identification, management of critical infrastructure, education and training, law enforcement, migration, asylum and border control, justice and democratic processes, and more.

Restrictions and requirements:

  • Requires a risk management system with specific testing to identify risk management measures
  • Data and data governance — data sets shall be subject to appropriate data governance and management practices concerning design choices, data collection, data annotation, processing, statistical properties and biases and more.
  • Detailed and up-to-date technical documentation
  • Must include automatic logging capabilities while the system is in use (a minimal sketch follows this list)
  • Transparency requirements
  • Appropriate and declared levels of accuracy
  • Registration of the AI system in an EU database for high-risk AI systems
  • Third-party conformity assessment and CE marking
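
The logging requirement in particular translates directly into engineering work. Below is a minimal sketch, in Python with the standard logging module, of what per-inference audit logging might look like; the log fields and file name are my own illustrative choices, not a format prescribed by the Act.

    import json
    import logging
    from datetime import datetime, timezone

    # Append-only audit trail of every inference, so that system behavior
    # can be reconstructed after the fact. Field names are illustrative.
    audit = logging.getLogger("ai_audit")
    audit.setLevel(logging.INFO)
    audit.addHandler(logging.FileHandler("inference_audit.log"))

    def log_inference(model_version: str, input_id: str, prediction: str, confidence: float) -> None:
        audit.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_id": input_id,
            "prediction": prediction,
            "confidence": confidence,
        }))

    log_inference("v1.4.2", "frame-0012", "track_clear", 0.98)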

Category 3: AI Systems with transparency obligations

These are AI systems that interact with humans, detect emotion, determine association with (social) categories based on biometric data, or generate or manipulate content (deep fakes).

Restrictions: Information/transparency obligations, such as disclosures to end users.

Category 4: Low or minimal risk

These are all other AI systems.

Restrictions: None.
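
To make the triage concrete, here is a minimal sketch in Python of how a team might encode the four tiers as an internal checklist. The questions are my own paraphrase of the category descriptions above, not an official mapping, and a real assessment still needs legal review.

    from dataclasses import dataclass
    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "high risk"
        TRANSPARENCY = "transparency obligations"
        MINIMAL = "low or minimal risk"

    @dataclass
    class SystemProfile:
        # Flags paraphrasing the category descriptions above.
        subliminal_manipulation: bool = False
        exploits_vulnerable_groups: bool = False
        social_scoring: bool = False
        realtime_public_biometric_id: bool = False
        regulated_product_safety_component: bool = False
        high_risk_domain: bool = False  # e.g. education, law enforcement, migration
        interacts_with_humans: bool = False
        detects_emotion: bool = False
        generates_deepfakes: bool = False

    def classify(p: SystemProfile) -> RiskCategory:
        # Check the tiers in order of severity; the first match wins.
        if (p.subliminal_manipulation or p.exploits_vulnerable_groups
                or p.social_scoring or p.realtime_public_biometric_id):
            return RiskCategory.UNACCEPTABLE
        if p.regulated_product_safety_component or p.high_risk_domain:
            return RiskCategory.HIGH
        if p.interacts_with_humans or p.detects_emotion or p.generates_deepfakes:
            return RiskCategory.TRANSPARENCY
        return RiskCategory.MINIMAL

    # A safety component of a regulated product (see Example 2 below):
    print(classify(SystemProfile(regulated_product_safety_component=True)))  # RiskCategory.HIGH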

Example AI Systems

Example 1:

An agricultural company is using a computer vision model to segment and identify the condition of recently picked tomatoes before shipping them out to distribution.

This AI model performs a quality assurance task and does not interact with humans. Therefore the model falls into the low or minimal risk category and does not have any regulatory requirements.

Example 2:

A national railway operator has decided to use AI to monitor and verify that tracks are free of debris or obstructions. Because it is an important safety component of the railway's operations, and because rail products are already regulated in the EU, this AI system would fall under the high-risk category and would need to comply with the requirements above.

Example 3:

In order to generate more viral traffic on a social network, a large language model (LLM) is used as a chatbot that encourages minors to engage in dangerous behavior in the guise of a fun or cool challenge or game. This type of use of an AI system goes against EU values and is strictly prohibited.

Example 4:

A satirical television show uses deep fakes of politicians and celebrities as part of its humorous content. This type of content would need to include a disclosure that the content is generated through automated means and is not real.

Supporting innovation

The European Commission understands that compliance with regulation comes with a cost, which can sometimes hinder innovation. To address that, the rule encourages EU member states to set up testing environments, or sandboxes, in which developers of AI systems can test their systems in a controlled manner. The full details of what this will look like haven't been published yet, but these are likely to be regulatory sandboxes (as opposed to infrastructure ones). A regulatory sandbox would grant certain companies exemptions from part of the regulation while under the supervision of regulators for a limited period of time.

The European Commission will also consider the needs and limitations of SMEs and startups when setting fees relating to conformity assessments.

What you can do today

Start by determining which category your AI system falls under so that you understand the level of scrutiny you can expect. There are some intricacies and details that need to be considered, so get your legal team involved (and don't assume that they are already following the changes). Your legal team can help you better understand the requirements that apply to your specific AI system.

Make sure you are employing ML development best practices and standards. These include versioning of data, code, annotations, and models, experiment tracking, and more. Best practices will keep your projects organized, reduce technical debt, and make you more efficient so you can focus on experimenting and deploying better-performing models.
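
As one concrete illustration, here is a minimal experiment-tracking sketch using MLflow; the experiment name, parameters, and metric are placeholders, and any comparable tracking tool serves the same purpose.

    import mlflow

    # Record what was trained, on which data version, and with what result,
    # so the run can be reproduced and documented later. Values are placeholders.
    mlflow.set_experiment("tomato-quality-segmentation")
    with mlflow.start_run():
        mlflow.log_param("dataset_version", "v2.3")
        mlflow.log_param("learning_rate", 1e-4)
        mlflow.log_metric("val_iou", 0.87)

Each run then shows up in the tracking UI with its parameters and metrics, which doubles as lightweight, always-current technical documentation.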

Data governance and management: the new rules require that datasets have appropriate statistical properties so that they properly represent the population of intended users and neither embed bias nor discriminate against a particular population. For each model or product, ML practitioners need a good understanding of the data that was used to train, test, and validate the model. Keeping track of correct data lineage is critical, and it is almost impossible to do retroactively.
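
To make this tangible, here is a minimal sketch using pandas of the kind of representation check you could run on a training manifest before training; the file name, column names, and the 5% threshold are all assumptions for illustration.

    import pandas as pd

    # Hypothetical training manifest; the columns (sample_id, label, region)
    # are assumptions for illustration.
    df = pd.read_csv("train_manifest.csv")

    # Inspect class balance overall and per subgroup, so that skew is caught
    # before training rather than after deployment.
    print(df["label"].value_counts(normalize=True))
    print(df.groupby("region")["label"].value_counts(normalize=True))

    # Flag subgroups below a chosen representation threshold (assumed 5% here).
    shares = df["region"].value_counts(normalize=True)
    underrepresented = shares[shares < 0.05]
    if not underrepresented.empty:
        print("Underrepresented subgroups:", underrepresented.to_dict())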

Proper data management and governance are important to remain compliant with existing EU regulations (the AI Act reiterates the importance of compliance with existing rules) like GDPR and copyright rules. Tools like DagsHub’s Data Engine would allow you to include metadata with information about copyright licenses or whether certain data includes personal information.
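
As a tool-agnostic sketch (not the Data Engine API itself), the idea is simply to keep compliance metadata attached to each data point so it can be filtered and audited later; the file names and fields below are invented for illustration.

    # Hypothetical compliance metadata attached to individual data points.
    metadata = {
        "image_0042.jpg": {"license": "CC-BY-4.0", "contains_personal_data": False},
        "image_0043.jpg": {"license": "proprietary", "contains_personal_data": True},
    }

    # Surface items that need review before the dataset can be used for training.
    flagged = [name for name, m in metadata.items() if m["contains_personal_data"]]
    print("Needs review:", flagged)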

Conclusion

The EU AI Act will affect every individual or company that plans to offer AI solutions and products in the European Union. The overarching purpose of the regulation is positive: to keep people safe. While compliance imposes some limitations and costs, the rule itself is designed to be flexible and to allow innovation.

Understanding the regulation, involving your legal team and properly managing data and projects with industry best practices and tools is key to being prepared and reducing technical debt.

Since the EU is the first to publish major AI regulation, similar frameworks can be expected in other countries as well.

Disclaimer: The information provided in this post is not intended to be legal advice. It is intended for informational purposes only and should not be relied upon for legal advice. If you need legal advice, please consult a qualified legal professional.


Avi Lozowick

Go-to-market @ DagsHub
