Yesterday, 14 June 2023, members of the European Parliament (MEPs), the European Union’s main legislative body, voted on a bill to regulate the use of artificial intelligence (AI) in EU member states.
MEPs voted 499 in favour, 28 against, with 93 abstentions, ahead of talks with EU member states on the final draft of the legislation; these talks began the same day. If ratified by member states, the act could become law by 2025.
One key focus of the legislation was the banning of harmful uses of AI systems.
The legislation was developed to promote the uptake of human-centric and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects. This includes reforming the role of the EU’s AI office to give it more authority in implementing the regulation, and making it easier for the general public to voice concerns and lodge complaints about AI systems.
Dragos Tudorache, MEP and co-rapporteur, said: “The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law.”
The legislation was designed on a risk-based approach to establish obligations for providers and those deploying AI systems; as a result, some of its rules are dependent on the level of risk a particular AI system can generate.
AI systems deemed to represent an ‘unacceptable’ level of risk to people’s safety would thus be prohibited, such as those used for social scoring (categorising people based on their social behaviour or personal characteristics).
The list of banned uses includes:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial approval;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions;
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (a violation of human rights and the right to privacy).
MEPs also approved measures to ensure the classification of high-risk applications includes AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment.
This category would include AI systems used to influence voters and the outcome of elections, as well as the recommender systems used by social media platforms.
In a press conference, Roberta Metsola, president of the European Parliament, said: “It took how many hours, days and nights… [we have] found a balanced and human-centred approach to the world’s first AI act legislation that will no doubt be setting the global standard for years to come.”
Providers of foundation models (machine learning models trained on large quantities of data, either semi- or self-supervised) will have to assess and mitigate possible risks to health, safety, fundamental rights, the environment, democracy and the rule of law, and register their models in the EU database before releasing them on the EU market.
Such models are used to develop generative AI systems like ChatGPT. Providers of these systems would likewise have to comply with transparency requirements, such as disclosing which content is AI-generated, supporting the identification of deep-fake images and introducing safeguards against generating illegal content.
The European Parliament has said that detailed summaries of the copyrighted data used to train generative AI systems would also have to be made publicly available.
However, to boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components supplied under open-source licences. The new law would also promote ‘regulatory sandboxes’: controlled environments, established by public authorities, for testing AI before its wider deployment.