Would it ever be possible to create an actually impartial AI judge that would understand the spirit of the law?

Just to be clear, I'm not talking about those sentencing algorithms. Those things are horrible, essentially racist. If you use past sentencing behavior to guide future sentencing, and that past behavior was racist at the time, then it's more than likely the algorithm will inherit that bias from the data.
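To make that bias-inheritance point concrete, here's a minimal sketch with made-up data. The records, groups, and sentences are all hypothetical; the "model" is just averaging over matching past cases, which is the simplest possible form of learning from history:

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical sentences reproduces that bias, even though it is
# "just learning from the data".

# Historical records: (offense_severity, group, sentence_months).
# The same severity got longer sentences for group "B" -- a bias
# baked into the data, not into the algorithm itself.
history = [
    (1, "A", 6),  (1, "B", 12),
    (2, "A", 12), (2, "B", 24),
    (3, "A", 24), (3, "B", 48),
]

def predict_sentence(severity, group):
    """Average the sentences of past cases matching both features."""
    matches = [s for sev, g, s in history if sev == severity and g == group]
    return sum(matches) / len(matches)

# Identical offense, different predicted sentences:
print(predict_sentence(2, "A"))  # 12.0
print(predict_sentence(2, "B"))  # 24.0
```

No matter how sophisticated the learner, if the training labels encode past discrimination, faithfully fitting them means faithfully reproducing it.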

I'm talking about something that can take everything that's been documented about a law (the deliberations and public discussions surrounding it, the text of the law itself, and all other laws) and then decide what would be truly just in each individual situation. I know this is all a tall order, and it approaches the point of requiring Artificial General Intelligence, but the more interesting question is: can you do it without AGI? Could you, in theory, apply the law without, say, a sense of self-awareness?

For example, China has a civil law system that uses case law to determine the outcome of trials. With just 120,000 judges to deal with 19 million cases a year, it is little wonder the legal system is turning to AI, law firm Norton Rose Fulbright says.

The Supreme People’s Court has asked local courts to take advantage of big data, cloud computing, neural networks and machine learning. It wants to build technology-friendly judicial systems and explore the use of big data and AI to help judges and litigants resolve cases.

An application named Intelligent Trial 1.0 is already reducing judges’ workloads by helping sift through material and producing electronic court files and case material.

But the emphasis is still on helping – rather than replacing – judges, barristers and lawyers.

“The application of artificial intelligence in the judicial realm can provide judges with splendid resources, but it can’t take the place of the judges’ expertise,” said Zhou Qiang, the head of the Supreme People’s Court, who advocates smart systems.

Eliminating bias?

But recent advances in AI mean the technology can do far more than sifting through vast quantities of data. It is developing cognitive skills and learning from past events and cases.

This inevitably leads to questions as to whether AI will one day make better decisions than humans.

All human decisions are susceptible to prejudice and all judicial systems suffer from unconscious bias, despite the best of intentions. 

Algorithms that can ignore factors that do not legally bear on individual cases, such as gender and race, could remove some of those failings.

One of the most important considerations for judges is whether to grant bail and how long prison sentences should be. These decisions are usually dictated by the likelihood of reoffending.

Algorithms are now able to make such decisions by giving an evidence-based analysis of the risks, rather than relying on the subjective decision-making of individual judges.

Despite these obvious advantages, it is far from clear who would provide oversight of the AI and check that its decisions are not flawed. And more cautious observers warn that AIs may learn and mimic bias from their human inventors or from the data they were trained on.
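That warning holds even when the protected attributes are removed, because other features can act as proxies for them. A minimal sketch, again with hypothetical data: the model below never sees the group column, yet a correlated feature (postcode) lets the biased historical labels leak through anyway:

```python
# Sketch of the "proxy variable" problem (hypothetical data): even
# after excluding the protected attribute, a correlated feature lets
# the model reconstruct it.

# Records: (postcode, group, high_risk_label). In this toy data,
# postcode "P1" is mostly group "B", and the historical risk labels
# track group membership -- so postcode becomes a stand-in for group.
records = [
    ("P1", "B", 1), ("P1", "B", 1), ("P1", "B", 1),
    ("P1", "B", 1), ("P1", "A", 0),
    ("P2", "A", 0), ("P2", "A", 0), ("P2", "A", 0),
    ("P2", "A", 0), ("P2", "B", 1),
]

def risk_by_postcode(postcode):
    """Predict risk from postcode alone -- group is never consulted."""
    labels = [y for pc, _, y in records if pc == postcode]
    return sum(labels) / len(labels)

# The "group-blind" model still scores the mostly-B postcode as risky:
print(risk_by_postcode("P1"))  # 0.8
print(risk_by_postcode("P2"))  # 0.2
```

This is why "just ignore race and gender" is not, on its own, a fix: the bias lives in the correlations between the remaining features and the historical labels, not in any single column.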

Making connections

But AI could also help solve crimes long before a judge is involved. VALCRI, for example, carries out the labour-intensive aspects of a crime analyst’s job by wading through texts, lab reports and police documents to highlight areas that warrant further investigation and possible connections that humans might miss.

AIs could also help to detect crimes before they happen. Meng Jianzhu, former head of legal and political affairs at the Chinese Communist Party, said the Chinese government would start to use machine learning and data modelling to predict where crime and disorder may occur.

