Task Force on Responsible Use of Generative AI for Law

In light of Mata v. Avianca, Inc., in which an attorney submitted to the court citations and cases fabricated by ChatGPT, the need for responsible-use guidelines has become more pointed than ever.

The Task Force on Responsible Use of Generative AI for Law will examine and report on principles and guidelines for applying due diligence and legal assurance to generative AI in law and legal processes. Its purpose is to develop principles and guidelines for ensuring factual accuracy, accurate sourcing, valid legal reasoning, alignment with professional ethics, due diligence, and responsible use of generative AI in law and legal processes.

The Task Force believes this technology provides powerful capabilities for law and law practice and, at the same time, requires informed caution in its use. At this point, we think it appropriate to encourage experimentation with and use of generative AI in law practice, but caution is clearly needed given the limits and flaws inherent in current widely deployed implementations.

Eventually, every lawyer will be aware of both the beneficial uses and the limitations of this technology. For now, however, it appears reasonable and proportionate to ensure that attorneys are explicitly and specifically aware of, and attest to, the best practice of human review and approval of content sourced from generative AI.

Source: law.mit.edu
