Should We Trust AI to Fix the Justice Gap? Maybe. But It’s Complicated.

When someone proposes a new way to tackle the access-to-justice crisis, the instinct should be to say yes. Legal support remains wildly inaccessible—while a handful of lawyers try to serve millions, low-income communities have long gone underserved, and now even middle-class families are finding legal help out of reach. Many don’t even realise they might need a lawyer—they carry on, quietly overwhelmed.

Given the current state of legal aid funding, which isn’t a priority on Congress’s agenda, the legal tech world has turned to generative AI as a potential lifeline. The optimistic hope is that, if developed thoughtfully, these tools could significantly enhance access to real legal assistance, rather than serving up a digital nightmare wrapped in pseudo-legal jargon.

But here’s the catch: what if the better the AI gets, the more dangerous it becomes?

That unsettling dilemma loomed large during an ABA TECHSHOW panel featuring Kara Peterson of justice tech startup descrybe.ai and Jessica Bednarz from the Institute for the Advancement of the American Legal System. They dove into the pros and cons of using AI to bridge the justice gap.

One big upside? Peterson highlighted that AI could help explain legal concepts in simpler terms. That matters because most people don’t recognise they have a legal issue until someone puts it in plain language. Bednarz cited 2021 data showing that people turn to search engines for legal help, though that habit is now rapidly shifting to tools like ChatGPT.

The problem? Most users can’t distinguish solid AI-generated legal guidance from absolute nonsense. And even lawyers regularly get this wrong. So when people turn to mass-market AI tools, the advice they get can range from comically off-base to outright harmful.

That’s where custom-built legal AI tools come in. Legal professionals designing these tools understand the real needs of users. Random developers feeding court data into a language model? Not so much. The real challenge is connecting people to the good stuff, because right now most folks are likelier to end up with ChatGPT hallucinations than with a well-designed legal tool buried in some vendor list.

A group of legal experts recently explored these regulatory and ethical challenges. According to Bednarz, they’re leaning toward a “soft power” strategy: encouraging innovation through guidance and test environments and spreading the message that using AI for legal help doesn’t automatically mean unauthorised practice of law. This strategy aims to foster responsible and ethical use of AI in the legal field, rather than stifling innovation. A full report on their findings and recommendations is coming soon.

But Damien Riehl from vLex raised a harder question: why do regulators let ChatGPT run wild with questionable legal advice, yet clamp down when smaller, purpose-built tools emerge? That inconsistency in enforcement feels more political than principled.

Another panellist noted a troubling paradox. People know to be sceptical of Google. But when AI starts sounding authoritative, especially when it’s more accurate, people might trust it too much. That misplaced trust can be more damaging than confusion.

Riehl described a spectrum: from no help at all, to Google, to consumer-grade AI, to legal-specific AI, to AI with a human lawyer. The further along you go, the better—in theory. But in practice, better tech might lull users into false confidence. A clunky tool might make someone think twice. A polished one might convince them they don’t need a lawyer, even when they do.

It’s not unlike the medical world, where a little knowledge can lead to big mistakes. Tell someone half-truths and they may come away thinking vitamin A is a substitute for the measles vaccine.

In the hands of non-experts, even good information can spiral into misinformation. Google offered possibilities. AI offers confident, definitive answers even when they’re wrong. And that’s the real danger. There’s a reason some call it “Mansplaining as a Service”: condescending, oversimplified explanations delivered with total confidence, in a field where nuance and context are everything.

So, where does that leave us? In a world where people will try DIY legal fixes no matter what, usually because professional help is too expensive or too hard to reach. And if that’s the case, we’d better ensure they have better tools. It’s like selling voltage testers at the hardware store: not because we want people messing with their wiring, but because many will anyway, and at least this way they’re safer.

Ultimately, the ethics of using AI in legal services are complex. But on balance, we’re better off encouraging careful, law-savvy development of legal AI tools than trying to stop the tide. Just don’t forget: the tools that help the most can also hurt the most if we’re not paying attention.

Source: Above the Law
