The European Artificial Intelligence Act (EU AI Act) enters into force today. The EU legislation aims to regulate services that leverage AI, with the goal of enabling the development of the technology while protecting consumers and businesses.
The legal framework applies to both public and private actors inside and outside the EU, as long as the AI system has an impact on people located in the EU.
The EU has established “dissuasive penalties” to stop scofflaws.
In a post announcing the milestone, the European Commission highlighted the Act's four levels of risk:
- Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act due to their minimal risk to citizens’ rights and safety. Companies can voluntarily adopt additional codes of conduct.
- Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text, and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
- High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. High-risk AI systems include, for example, systems used for recruitment, to assess whether somebody is entitled to a loan, or to run autonomous robots.
- Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance to encourage dangerous behaviour in minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
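The machine-readable marking requirement in the transparency tier leaves the format open; the Act does not prescribe a specific schema, and industry efforts such as the C2PA content-provenance standard are one way providers might comply. As a purely illustrative sketch (all field names below are made up, not taken from the Act), a provider could attach a small JSON provenance label to generated content:

```python
import json

def make_provenance_label(generator: str, content_type: str) -> str:
    """Build a minimal machine-readable label declaring content as
    AI-generated. Field names are illustrative only; the AI Act does
    not mandate this schema."""
    label = {
        "synthetic": True,             # content is artificially generated
        "generator": generator,        # tool that produced the content
        "content_type": content_type,  # e.g. "image", "audio", "text"
    }
    return json.dumps(label)

def is_marked_synthetic(label_json: str) -> bool:
    """Check whether a label declares the content as AI-generated,
    treating missing or malformed labels as unmarked."""
    try:
        return bool(json.loads(label_json).get("synthetic", False))
    except (json.JSONDecodeError, AttributeError):
        return False
```

In practice a label like this would need to travel with the content itself (e.g. embedded in file metadata or a cryptographically signed manifest) to remain detectable downstream, which is the problem provenance standards aim to solve.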
EU member states have until next year (2 August 2025) to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance activities.
The European Artificial Intelligence Board has been created to promote uniform application of the law across member states.
The EU said that prohibitions on AI systems deemed to present an unacceptable risk will apply after six months, while the rules for general-purpose AI models will apply after 12 months.