By Rosie Burbidge

What does the EU's AI Act mean for you?


[Image: a photograph of the first computer created at Bletchley Park]

The EU's AI Act is the first comprehensive law to regulate artificial intelligence (AI). It entered into force on 1 August 2024. Because of its mid-summer launch date, the new Act has somewhat flown under the radar. However, there is plenty of time to plan for the changes it contemplates: the obligations in the Act come into effect on various dates between 2025 and 2030.


The rules primarily concern businesses that operate AI platforms, but there are also implications for businesses that use AI. For example, from 2 February 2025, certain AI systems that pose unacceptable risks will be banned. Further, the obligations on general-purpose AI models and the enforcement provisions come into effect on 2 August 2025. From that date, providers of general-purpose AI models must (among other things) make available a summary of the content used for training and ensure they comply with EU law on copyright and related rights.


High-risk systems under the AI Act

The Act is likely to be of most interest to developers of AI systems that are categorised as high-risk. A number of obligations relating to these systems apply from August 2026 and August 2027.


According to Article 6 of the Act, an AI system is considered to be “high-risk” where (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by certain EU harmonisation legislation, and (b) the product must undergo a third-party conformity assessment with a view to placing it on the market or putting it into service.


In addition, AI systems used in certain areas are specifically designated as high-risk, namely: biometrics; critical infrastructure; education and vocational training; employment, workers’ management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes.


Risk management and quality management

Risk management systems and quality management systems are mandatory for AI systems that are categorised as high-risk.


According to Article 9 of the Act, the risk management system must run throughout the lifecycle of the high-risk AI system, with regular systematic review and updating. The review must include all of the following steps (an illustrative sketch follows the list):


  1. Identification and analysis of the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights.

  2. Estimation and evaluation of the risks that may emerge when the system is used as intended, and under conditions of reasonably foreseeable misuse.

  3. Evaluation of other risks that may arise, based on analysis of data gathered from post-market monitoring.

  4. Adoption of appropriate and targeted risk management measures.
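
For developers, these steps map naturally onto an internal compliance artefact. Below is a minimal, illustrative sketch of how a team might record the four steps in a risk register. The Act does not prescribe any particular format, and every name here (RiskEntry, RiskRegister and so on) is a hypothetical example, not something taken from the Act.

```python
# Illustrative only: the AI Act does not prescribe a format for the risk
# management system, and all names here are hypothetical examples.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskSource(Enum):
    INITIAL_ANALYSIS = "identification and analysis"  # step 1
    ESTIMATION = "estimation and evaluation"          # step 2
    POST_MARKET = "post-market monitoring data"       # step 3


@dataclass
class RiskEntry:
    description: str      # e.g. a risk to health, safety or fundamental rights
    source: RiskSource
    severity: int         # internal scale, e.g. 1 (low) to 5 (high)
    likelihood: float     # estimated probability of the risk materialising
    mitigations: list[str] = field(default_factory=list)  # step 4 measures


@dataclass
class RiskRegister:
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)
    last_reviewed: date | None = None

    def review(self, today: date) -> list[RiskEntry]:
        """Regular systematic review: record the review date and return
        entries that still lack an adopted risk management measure."""
        self.last_reviewed = today
        return [e for e in self.entries if not e.mitigations]
```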


In identifying the most appropriate risk management measures, AI developers should eliminate or reduce risks through design, adopt protective measures where appropriate, and provide safety information (including, where appropriate, training for deployers).


Testing must be performed during development and before the system is placed on the market, and should be carried out against previously defined metrics and appropriate probabilistic thresholds.
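
As a rough illustration of what testing against pre-defined metrics and probabilistic thresholds could look like in practice, here is a hypothetical pre-release gate. The metric names and threshold values are invented for the example; the Act does not specify any particular metrics.

```python
# Hypothetical pre-release gate: the metric names and threshold values
# are invented for illustration; the Act only requires testing against
# previously defined metrics and appropriate probabilistic thresholds.
THRESHOLDS = {
    "accuracy": 0.95,             # minimum acceptable accuracy
    "false_positive_rate": 0.02,  # maximum acceptable false positive rate
}


def passes_release_gate(measured: dict[str, float]) -> bool:
    """Check measured test metrics against the pre-defined thresholds."""
    return (
        measured["accuracy"] >= THRESHOLDS["accuracy"]
        and measured["false_positive_rate"] <= THRESHOLDS["false_positive_rate"]
    )
```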


Under Article 17 of the AI Act, high-risk AI systems must also have a quality management system. Quality in this context is assessed by reference to safety and other public policy objectives.


The quality management system must ensure compliance with the Act and should be documented in the form of written policies, procedures and instructions. It must cover the lifecycle of the AI system.


The Act provides a list of 13 aspects that the quality management system must cover (a simple tracking sketch follows the list). These include:

  • the strategy for regulatory compliance;

  • design control and verification;

  • examination, test and validation of the AI system;

  • reporting of serious incidents;

  • data management systems and procedures;

  • document and record keeping; and

  • accountability.
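
To illustrate how a provider might keep track of this documentation, the sketch below flags required aspects that have no written policy, procedure or instruction recorded against them. The structure is hypothetical, and the aspect names are paraphrased from Article 17.

```python
# Hypothetical tracking structure: the Act requires written policies,
# procedures and instructions but does not mandate any data format.
REQUIRED_ASPECTS = [
    "strategy for regulatory compliance",
    "design control and verification",
    "examination, test and validation",
    "serious incident reporting",
    "data management systems and procedures",
    "document and record keeping",
    "accountability",
    # ...plus the remaining aspects listed in Article 17
]


def find_documentation_gaps(qms_documents: dict[str, list[str]]) -> list[str]:
    """Return the required aspects with no written document recorded."""
    return [a for a in REQUIRED_ASPECTS if not qms_documents.get(a)]
```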


AI Office

The Act also creates an AI Office within the European Commission.


The Act gives the AI Office powers including the ability to conduct evaluations of general-purpose AI models and to request information and measures from model providers. The AI Office can also impose fines. Its specific tasks include: supporting the implementation of the AI Act and enforcing the rules on general-purpose AI models; strengthening the development and use of trustworthy AI; and fostering international cooperation.


The Office will have around 140 staff in total. In May 2024 it held a webinar focusing on high-risk AI systems, which can be viewed here.


What does this mean?

Further information is likely to be published throughout 2024 and onwards. The AI Office says it will collaborate with stakeholders during the Act's implementation.


In addition, the AI Pact has been set up to enable businesses to share best practices and join collective activities that promote voluntary action ahead of the Act's deadlines. Participation includes the making and sharing of pledges.


Given the growing use of AI and the wide-ranging scope of the AI Act, this new legislation is likely to affect many, if not all, businesses, whether through direct changes within the business or through its impact on the market and on competitors' business practices. It will therefore be necessary to prepare well for the deadlines and to take advantage of the resources available.


To find out more about the issues raised in this blog contact Rosie Burbidge, Intellectual Property Partner at Gunnercooke LLP in London - rosie.burbidge@gunnercooke.com


#AI #software #IPlawyer #EU #AIoffice
