The “first-ever” legal framework is designed to ensure trust and safety while fostering AI innovation.
In a speech, EC Executive Vice President Margrethe Vestager outlined five obligations for AI in Europe:
- AI providers must feed their systems high-quality data to ensure the results are not biased or discriminatory;
- They must provide detailed documentation about how their AI systems work, so that authorities can assess compliance;
- Providers must share substantial information with users to help them understand and properly use AI systems;
- They must ensure an appropriate level of human oversight in both the design and implementation of Artificial Intelligence;
- And finally, they must respect the highest standards of cybersecurity and accuracy.
Vestager said that under the strategy for Europe’s digital future, “an ecosystem of trust goes together with an ecosystem of excellence.”
“For Europe to become a global leader in trustworthy AI, we need to give businesses access to the best conditions to build advanced AI systems,” said Vestager. “This is the idea behind our revised coordinated plan on Artificial Intelligence. It coordinates the investments across Member States to ensure that money from Digital Europe and Horizon Europe programs is spent where we need it the most. For instance in high-performance computing or to create facilities to test and improve AI systems.”
In a statement, the Commission outlined what counts as a high-risk AI system:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.
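The “logging of activity” obligation above can be illustrated with a minimal sketch. This is a hypothetical example, not anything mandated by the Regulation: the decorator name, the log format, and the toy credit-scoring function are all invented for illustration.

```python
from datetime import datetime, timezone

# In-memory audit trail; a real system would persist this to tamper-evident storage.
audit_log = []

def traceable(model_fn):
    """Wrap a prediction function so every call is logged with a timestamp,
    its input, and its output -- enough to reconstruct how a result was produced."""
    def wrapper(features):
        result = model_fn(features)
        audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input": features,
            "output": result,
        })
        return result
    return wrapper

@traceable
def credit_score(features):
    # Toy stand-in for a real model: approve only if income exceeds debt.
    return "approved" if features["income"] > features["debt"] else "denied"

decision = credit_score({"income": 50_000, "debt": 20_000})
print(decision)         # approved
print(len(audit_log))   # 1
```

Wrapping the model rather than modifying it keeps the audit trail independent of model internals, which is one way a provider could make results traceable for an authority reviewing compliance.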
Overall, the goal is to create “enabling conditions for AI to grow and develop.” Next steps in the policy initiative include the European Parliament and the Member States adopting the Commission’s proposals on the approach for AI, as well as on Machinery Products, in the legislative procedure. Once adopted, the Regulations will be directly applicable across the EU.
Of course, AI is already fairly prolific across various industries – financial services being a key beneficiary of the technology.
The EU plans to invest €1 billion per year in AI while attracting over €20 billion in total AI investment each year.
Only time will tell if this structured approach will drive greater innovation and adoption or if a more laissez-faire policy may be superior.
The new rules, as well as Q&As, are available here.