Uncertainty about AI in the judicial system

The rise of AI in the legal system is already here. GPT-4 passes the bar exam, even though the bot does not do well in some sections. In some parts of the world, AI is being considered for use in the judicial system, with the aim of improving legal services while also expanding the role of AI in judicial work, which seems like a good idea. AI can assist in legal research and decisions, reduce legal staff workload, provide online legal assistance, and reduce human errors. However, it is impossible to avoid the unease associated with deploying artificial intelligence.

Lack of Transparency

In a black-box AI, data is transformed across neural networks with multiple layers. The system learns through trial and error from the huge amount of data fed into the algorithm, and it keeps updating itself as additional data comes in so that it can forecast results more accurately. It does not explain which factors influenced its output. The internal procedure is opaque, which is dangerous and poses a challenge to society.
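To make that concrete, here is a minimal sketch in Python of such trial-and-error learning. It is a generic toy network, not any of the systems discussed in this article: the finished model is nothing but numeric weights, and nothing in those numbers says which factor drove a given prediction.

```python
import math
import random

# Toy two-layer neural network trained on the XOR problem with plain Python.
# "Learning" here is just nudging numeric weights to reduce prediction error.
random.seed(0)

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

HIDDEN = 3  # number of hidden units
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]  # 2 inputs + bias
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]                     # hidden units + bias

def forward(x):
    hidden = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    out = sigmoid(sum(w_out[j] * hidden[j] for j in range(HIDDEN)) + w_out[-1])
    return hidden, out

LEARNING_RATE = 0.5
for _ in range(20000):
    for x, target in DATA:
        hidden, out = forward(x)
        # Trial and error: measure the error, then push every weight a little
        # in the direction that would have reduced it (backpropagation).
        err_out = (out - target) * out * (1 - out)
        for j in range(HIDDEN):
            err_h = err_out * w_out[j] * hidden[j] * (1 - hidden[j])
            w_hidden[j][0] -= LEARNING_RATE * err_h * x[0]
            w_hidden[j][1] -= LEARNING_RATE * err_h * x[1]
            w_hidden[j][2] -= LEARNING_RATE * err_h
        for j in range(HIDDEN):
            w_out[j] -= LEARNING_RATE * err_out * hidden[j]
        w_out[-1] -= LEARNING_RATE * err_out

for x, target in DATA:
    _, out = forward(x)
    print(x, "->", round(out, 3), "(expected", target, ")")

# The trained "model" is only these opaque numbers; they carry no explanation.
print("hidden weights:", w_hidden)
print("output weights:", w_out)
```

Even in this tiny example, the only artifacts of learning are the printed weight values; a real system has millions or billions of them, which is why its decision process cannot simply be read off.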

The Forensic Statistical Tool (FST), a genotyping software program, and Low Copy Number (LCN) analysis, a technique for analyzing very small amounts of DNA, have both raised concerns about their accuracy. The FST's source code was not accessible to outside parties, so it could not be independently validated, and it is difficult to comprehend how its internal operation produces its results, which could lead to errors in criminal cases. LCN analysis carries a risk of allelic drop-out and drop-in, spurious or missing alleles that could hinder an investigation's ability to identify the real suspect at the crime scene.
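To illustrate why a closed internal model is a problem, here is a deliberately simplified, hypothetical single-locus sketch in Python. It is not FST's actual model, and the allele frequency and drop-out values are invented; it only shows that the same evidence yields a different likelihood ratio depending on the drop-out probability the software quietly assumes, which is exactly the kind of choice that cannot be scrutinized when the source code is closed.

```python
# Deliberately simplified, hypothetical illustration (NOT the FST's real model).
# One locus, no drop-in allowed: the suspect is heterozygous (alleles A and B),
# but only allele A is detected in the crime-scene sample. The likelihood ratio
# compares "the suspect is the source" with "an unknown person is the source";
# its value depends on the per-allele drop-out probability the software assumes.

def likelihood_ratio(dropout, freq_a):
    # Hp: the suspect (A, B) is the source -> A was detected, B dropped out.
    p_given_suspect = (1 - dropout) * dropout

    # Hd: an unknown person is the source. With drop-in excluded, only
    # genotypes containing allele A can explain the observation.
    p_aa = freq_a ** 2 * (1 - dropout ** 2)                     # A,A: at least one copy of A survives
    p_ax = 2 * freq_a * (1 - freq_a) * (1 - dropout) * dropout  # A,x: A survives, the other allele drops out
    p_given_unknown = p_aa + p_ax

    return p_given_suspect / p_given_unknown

# The same evidence, three different hidden assumptions, three different answers.
for d in (0.05, 0.15, 0.30):
    print(f"assumed drop-out {d:.2f} -> likelihood ratio {likelihood_ratio(d, freq_a=0.1):.2f}")
```

A defense expert who cannot inspect how such parameters are chosen has no way to challenge the reported strength of the evidence.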

Validity of AI Judgement

AI chatbots continue to face challenges as they are used in many industries, including questions about the validity of their moral judgment and of the evidence they provide. The new GPT-4 scores well on tests: it passes the Uniform Bar Exam and performs better than GPT-3, but it still has the same unresolved problems as other AI chatbots, including misinformation, political bias, hallucinations, deception, and vulnerability to cybercrime, all of which could lead to negative consequences.

The fake image of an explosion near the Pentagon is an example. It fuels bad decision-making and judgment, jeopardizes public confidence, and calls into question the motivations behind its dissemination. Is it to suppress the truth from getting out to the public, or to create a false sense of security that everything is alright when it is not? Is it to alter the perception of a law-abiding citizen for personal gain?

Content generated by AI can also be a challenge. It may contain false facts that misrepresent what actually happened, causing confusion about whether to believe it, and, left unchecked, it undermines public confidence in sources. That is what happened in the case of a lawyer who used ChatGPT to supplement legal research. The attorney did not confirm the veracity of the cases ChatGPT generated; he treated the chatbot's citations, legal sources, and opinions as reliable when they were not. Even though ChatGPT told the lawyer that the cases were legitimate and could be found in reputable legal databases, all six turned out to be phonies.

This brings to mind the death of a Belgian husband and father after chatting with an AI chatbot named Eliza about climate change. He took his own life without the chatbot dissuading him; instead, Eliza encouraged him and suggested that she would save the planet. It sounds creepy and alarming. Even though Eliza answered all his questions and he found solace in it, it cannot be relied on for a solution to global warming, and it cannot be relied on for answers. Eliza's solution is not a solution, and its judgement is not valid.

Data Privacy

At one point, Italy banned ChatGPT over a data breach and over the company's use of personal data for training purposes. Only after its privacy controls improved did Italy lift the ban. Still, there is lingering doubt about whether personal information will be leaked or stolen as AI technology advances. More important is the effect if threat actors are able to erase, manipulate, or gain access to data in unresolved or wrongful-conviction cases.

Cybercriminals have been experimenting with ChatGPT to create malware strains and techniques for getting past security checks, as shown in a report published in January of this year. Researchers examined three incidents in the report to determine how threat actors were using the chatbot to build malicious tools for harmful activities. One threat actor shared basic malware code on an underground hacking forum; another cybercriminal said that ChatGPT had helped him finish a Python script; and ChatGPT also answered questions about the ways cybercriminals could abuse OpenAI's tools for cybersecurity threats.

Even though threat actors use ChatGPT to do more harm than good, this does not mean the AI chatbot is defenseless. It has safeguards that refuse requests to write malicious code, much as GPT-3.5 refused to assist ChaosGPT with research into destructive weaponry because of how it was designed. According to one tech expert, those who are inexperienced in writing malware are unable to bypass these security checks, so the risk of ChatGPT producing high-quality virus code is low. The chatbot currently writes lousy virus code, which makes it less likely to attract skilled cybercriminals.

The long term is harder to predict; another IT expert held the opinion that the situation is not unchangeable. As AI technology continues to advance, it could become capable of producing a new level of malware threats. The technology is not going away, which gives threat actors opportunities to use it for their own purposes, while also giving security specialists opportunities to learn new ways to prevent cybercrime.

Conclusion

The thought of bringing more AI into the legal system should not come as a surprise; AI technologies are already used in this industry, and that is a concern. Even though AI can assist with research and other areas of the system, that does not mean it is safe and sound. ChatGPT, for example, still has unresolved problems. If its output is used without human verification of its authenticity, it could have a negative impact on individuals and society.

Sources:

AI is already being used in the legal system – we need to pay more attention to how we use it

Two Forensic DNA Analysis Techniques Are Under Fire for Serious Inaccuracies

OpwnAI: Cybercriminals Starting to Use ChatGPT

GPT-4 Will Make ChatGPT Smarter but Won’t Fix Its Flaws

Is ChatGPT creating a cybersecurity nightmare? We asked the experts

Lawyer cited 6 fake cases made up by ChatGPT; judge calls it “unprecedented”

Married father commits suicide after encouragement by AI chatbot: widow

Fake Image Of Explosion Near Pentagon Went Viral—Even Though It Never Happened

What is black box AI?

Author: maureen l