The UK government has published a new whitepaper on the use and governance of artificial intelligence (AI), part of a national blueprint to drive responsible innovation.
By setting out regulations for the use of AI, the government hopes to help the technology be used to deliver social and economic benefits across sectors ranging from agriculture to healthcare.
The government also hopes the paper will mitigate future risks associated with the rapid development of AI, including concerns about data privacy and human safety.
In the Spring Budget, the government committed to investing a share of £900m in developing a new AI Research Resource. The proposals outlined in the whitepaper are intended to create the right environment for AI to thrive in the UK, as part of its wider digital technologies agenda.
Existing UK legislation on AI is not cohesive or universal, with the government describing it as a ‘patchwork of legal regimes’ that can create bureaucratic and financial barriers for UK firms hoping to make use of the technology.
Through its AI programme, the government hopes to develop legislation that is adaptable enough to allow for innovation, while avoiding positioning itself as a single governing body by instead empowering existing regulators. The idea is to enable sector-specific regulators to develop tailored, context-specific approaches that suit AI use cases across a variety of sectors.
The whitepaper outlines five clear principles that these regulators should consider to ensure safe and resilient innovation – the principles it names are:
- safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
- transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process in a level of detail appropriate to the risks posed by the use of AI
- fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
- accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
- contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI.
During the next 12 months, regulators are expected to issue practical guidance to organisations, as well as other tools and resources to delineate how these principles will be implemented.
Michelle Donelan, science, innovation and technology secretary, said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
As part of the whitepaper, the government is also consulting on how to improve coordination between regulators and how to monitor and evaluate the AI framework, with changes made if needed based on the feedback collected. The consultation is open to individuals and organisations working in or with AI.
What’s more, £2m has been allocated to fund a testbed for businesses to trial how regulation could be applied to AI products and services.