The proliferation of AI-powered tools used in the justice system without proper oversight, particularly by the police, has serious implications for human rights and civil liberties, according to the House of Lords Justice and Home Affairs Committee.
In its report, Technology rules? The advent of new technology in the justice system, published today [30 March], the committee found that the pace of development of these technologies largely goes unseen by the public. It added that, without sufficient safeguards, supervision, and caution, advanced technologies used in the justice system in England and Wales could undermine a range of human rights, risk the fairness of trials, and damage the rule of law.
Facial recognition is the best known, but other technologies are in use, and more are being introduced. Development is moving fast, and controls have not kept up, the report stressed.
However, the committee did acknowledge the benefits: preventing crime, increasing efficiency, and generating new insights that feed into the criminal justice system. But it called for mandatory training for users of AI technologies, such as facial recognition, particularly given their potential impact on people’s lives.
The report also stated that users can be deferential (“the computer must be right”) rather than critical. The committee is clear that ultimately decisions should always be made by humans.
There are also risks of exacerbating discrimination. The report highlights serious concerns about the danger of human bias contained in original data being reflected, and further embedded, in algorithmic outcomes. The committee heard about dubious selling practices and claims about products’ effectiveness that are often untested and unproven.
Furthermore, the committee has called for the establishment of a mandatory register of algorithms used in relevant tools. Without a register it is virtually impossible to find out where and how specific algorithms are used, or for Parliament, the media, academia, and, importantly, those subject to their use, to scrutinise and challenge them.
It recommends that a national body should be established to set strict scientific, validity, and quality standards and to certify new technological solutions against those standards. No tool should be introduced without receiving certification first, allowing police forces to procure the technological solutions of their choice among those ‘kitemarked’, the committee said.
The report also calls for a duty of candour on the police to ensure full transparency. AI can have huge impacts on people’s lives, particularly the lives of those in marginalised communities.
Baroness Hamwee, chair of the Justice and Home Affairs Committee, said: “What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?
“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.
“Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation.
“We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision makers, knowing how to question the tools they are using and how to challenge their outcome.”