Traditionally, litigating parties are confronted with a human judge – an individual who can empathise with the parties’ problems while interpreting legal rules to resolve the matter. When the adjudicator is an AI entity, there is no shared “humanness”. The absence of humanness could affect the authority and the constitutional standing that courts hold in society.
Depending on the context, an algorithm might be seen as more or less authoritative or empathetic than a human judge. One might imagine that judicial systems burdened by a backlog of minor cases would benefit from speedier resolution of disputes through algorithms. But where algorithms are used to decide cases that directly affect fundamental rights, their use raises greater concern.
The reality is that AI relies on data produced by humans, data imbued with human preferences and inclinations that reflect pre-existing biases, so the alleged remedy for potential biases in the mind of a judge may not be effective after all. Because AI applies these patterns at scale, the data could exacerbate biases and deepen inequalities in the interpretation of the law. Using AI to make judicial decision-making more neutral may therefore not be the right way forward.
We need to regulate as soon as possible, before AI reaches a level of advancement that could seriously undermine not only the job market but also essential aspects of our society, such as a free press, access to the internet and the right to be informed. If AI is used to manipulate public information and create fake news, reasonable governance may become virtually impossible.
Additionally, we should reflect on which values should guide the use of AI, and which uses of this technology should be permitted because they benefit society at large.
Source: LSE