Around the world, governments are grappling with how best to manage the increasingly unruly beast that is artificial intelligence (AI).
This fast-growing technology promises to boost national economies and make menial tasks easier to complete. But it also poses serious risks, such as AI-enabled crime and fraud, the spread of misinformation and disinformation, expanded public surveillance and further discrimination against already disadvantaged groups.
The European Union (EU) has taken a world-leading role in addressing these risks. In recent weeks, its Artificial Intelligence Act came into force.
It is the first law in the world designed to comprehensively manage AI risks – and Australia and other countries can learn much from it as they, too, try to ensure AI is safe and beneficial for everyone.
AI: a double-edged sword
AI is already widespread in human society. It is the basis of the algorithms that recommend music, films and television shows on applications such as Spotify or Netflix. It is in cameras that identify people in airports and shopping malls. And it is increasingly used in hiring, education and healthcare services.
But AI is also being used for more troubling purposes. It can create deepfake images and videos, facilitate online scams, fuel massive surveillance and violate our privacy and human rights.
For example, in November 2021 the Australian Information and Privacy Commissioner, Angelene Falk, ruled that the facial recognition tool Clearview AI breached privacy laws by scraping people's photographs from social media sites for training purposes. However, a Crikey investigation earlier this year found the company is still collecting photos of Australians for its AI database.
Cases such as this underscore the urgent need for better regulation of AI technologies. Indeed, AI developers have even called for laws to help manage AI risks.
The EU Artificial Intelligence Act
The EU’s new AI law came into force on August 1.
Crucially, it sets requirements for different AI systems based on the level of risk they pose. The greater the risk an AI system poses to people's health, safety or human rights, the stricter the requirements it must meet.
The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China.
Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements.
For example, these systems must have their own risk management plan, be trained on quality data, meet accuracy, robustness and cybersecurity requirements and ensure a certain level of human oversight.
Lower-risk AI systems, such as various chatbots, need to comply only with certain transparency requirements. For example, individuals must be told they are interacting with an AI bot and not an actual person. AI-generated images and text must also carry a disclosure that they were generated by AI, not by a human.
Designated EU and national authorities will monitor whether AI systems used in the EU market comply with these requirements and will issue fines for non-compliance.
Other countries are following suit
The EU is not alone in taking action to tame the AI revolution.
Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law.
Canada is also debating its proposed Artificial Intelligence and Data Act. Like the EU law, it would set rules for various AI systems, depending on their risks.
Instead of a single law, the United States government recently proposed a number of different laws addressing different AI systems in various sectors.
Australia can learn – and lead
In Australia, people are deeply concerned about AI, and steps are being taken to put necessary guardrails on the new technology.
Last year, the federal government ran a public consultation on safe and responsible AI in Australia. It then established an AI expert group which is currently working on the first proposed legislation on AI.
The government also plans to reform laws to address AI challenges in healthcare, consumer protection and creative industries.
The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.
However, a single law on AI will never be able to address the complexities of the technology in specific industries. For example, AI use in healthcare will raise complex ethical and legal issues that will need to be addressed in specialised healthcare laws. A generic AI Act will not suffice.
Regulating diverse AI applications in various sectors is not an easy task, and there is still a long way to go before all countries have comprehensive and enforceable laws in place. Policymakers will have to join forces with industry and communities around Australia to ensure AI brings the promised benefits to Australian society – without the harms.
This article is authored by Rita Matulionyte, associate professor in law, Macquarie University. It is republished from The Conversation under a Creative Commons license. Read the original article.