Generative AI Passes the Legal Ethics Exam

Nearly every jurisdiction in the United States requires prospective lawyers to pass the Multistate Professional Responsibility Exam (“MPRE”). Administered by the National Conference of Bar Examiners (NCBE), the MPRE is one of two tests an aspiring attorney must pass; it measures the examinee’s knowledge and understanding of the ethical rules and standards of conduct lawyers must adhere to in their practice of law.

LegalOn challenged OpenAI’s GPT-4 and GPT-3.5, Anthropic’s Claude 2, and Google’s PaLM 2 Bison with 100 simulated exams composed of questions crafted to model the MPRE.

GPT-4 performed best, answering 74% of questions correctly, an estimated 6 percentage points better than the average human test-taker.

GPT-4 and Claude 2 both scored above the approximate passing threshold for the MPRE, estimated to range between 56% and 64% depending on the jurisdiction.

LegalOn notes that model performance varies by subject area, and that there are opportunities for improvement through domain-specific knowledge and lawyer-led validation.

Source: LegalOn Technologies
