European Union lawmakers have finalised approval for a world-first AI law, known as the Artificial Intelligence Act. This marks a key milestone in global legislation focused on AI regulation.
The legislation received resounding support from members of the European Parliament yesterday [14 March], setting the stage for its implementation later this year.
The move comes five years after the rules were first proposed, reflecting a concerted effort to address the challenges posed by rapidly advancing AI technology.
The legislation takes a ‘human-centric’ approach to AI, aiming to keep humans in control of the technology while harnessing its potential for innovation, economic growth and societal progress.
The act adopts a risk-based approach, categorising AI applications according to the level of risk they pose.
Dragos Tudorache, the Romanian lawmaker who co-led the European Parliament’s negotiations on the act, said: “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential.”
The new rules focus in particular on uses that threaten citizens’ rights, including biometric categorisation systems based on sensitive or protected characteristics, as well as the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
High-risk applications, such as AI used in medical devices or critical infrastructure, will be subject to stringent requirements, including the use of high-quality data and clear information for users.
Certain AI applications that are deemed to pose unacceptable risks, such as social scoring systems or predictive policing, are outright banned under the legislation.
By contrast, low-risk systems such as content recommendation tools may face lighter scrutiny.
Of particular significance are the provisions addressing generative AI models, such as those underpinning OpenAI’s ChatGPT.
In response to the rapid evolution of AI capabilities, the law requires developers of such models to provide detailed disclosures about their data sources and to comply with EU copyright rules. It also introduces measures to scrutinise and mitigate the risks posed by the most advanced AI systems, particularly those with systemic implications.
While the United States has also taken steps toward AI legislation, including President Joe Biden’s executive order, the EU’s comprehensive rules are expected to influence regulatory frameworks worldwide. Amid concerns over the potential risks and ethical implications of AI, governments are increasingly recognising the need for robust regulatory measures to ensure the responsible development and deployment of this transformative technology.