Tag: AI technology
-
Research finds Large Language Models are biased – but can still help analyse complex data
Researchers have found that Large Language Models (LLMs) like GPT-4 and Llama 2 can analyse controversial topics and exhibit biases similar to humans. When given specific instructions, LLMs can align their outputs with human evaluations. They have the potential to enhance human analytical capabilities and aid in identifying oversights in research. Dr. Awais Hameed Khan […]
-
Exclusive: Law Practice Management Software LEAP Introduces Three AI Features – with A Unique Human-In-the-Loop Twist
LEAP Legal Software has launched three new AI features integrated into its practice management platform. These features aim to enhance lawyers’ productivity and precision by supporting, not replacing, their expertise. Post: LawNext
-
Hélder Santos, Bird & Bird: ‘This Is A Journey’
Hélder Santos, the Global Head of Legal Tech & Innovation at Bird & Bird, talks about the firm’s involvement with Leya, a genAI-first startup that aims to utilise generative AI to support lawyers with drafting tasks. The company is also investigating other genAI tools like Microsoft Copilot, Relativity, Luminance, and DeepL for various legal purposes. […]
-
Bird & Bird To Run Major Proof of Concept With AI Startup Leya
Bird & Bird has partnered with Leya, a legal tech startup using AI to automate tasks and access data. The firm will evaluate Leya’s capabilities across its global network and implement the tool in the UK, Germany, Spain, and the Nordics. Bird & Bird has initiated various initiatives to prepare for the trial, including providing […]
-
New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI
The opinion on legal ethics underscores the obligation of attorneys to stay proficient in utilising generative AI and other relevant technological tools within their field. It emphasises the significance of validating the precision of AI-generated material, keeping information confidential, recognising conflicts of interest, and discussing AI usage with clients. Furthermore, it insists on guaranteeing impartial […]
-
ABA Task Force on Law and Artificial Intelligence releases survey on AI and legal education
The American Bar Association’s Task Force on Law and Artificial Intelligence surveyed the integration of artificial intelligence (AI) into legal education. The survey revealed that many law schools are incorporating AI into their curricula, with 55% offering dedicated classes for teaching students about AI. The findings suggest that AI significantly impacts legal […]
-
In Redo of Its Study, Stanford Finds Westlaw’s AI Hallucinates At Double the Rate of LexisNexis
The study conducted by Stanford University researchers found that generative AI legal research tools from LexisNexis and Thomson Reuters produce incorrect results more frequently than claimed in their marketing efforts. Thomson Reuters initially refused access to their AI-Assisted Research product but later agreed, and the updated study found it to perform unfavourably compared to the […]
-
Stanford Will Augment Its Study Finding that AI Legal Research Tools Hallucinate in 17% of Queries, As Some Raise Questions About the Results
Stanford University released a study on generative AI legal research tools from LexisNexis and Thomson Reuters, finding that they deliver hallucinated results more often than claimed. The study has faced criticism for comparing different products and not testing one of Thomson Reuters’ AI platforms. Thomson Reuters has now made the product available to the researchers, […]
-
Artificial intelligence used to replicate Brown v. Board of Education oral arguments
Brown v. Board of Education was decided 70 years ago. There are no existing audio recordings of the arguments in which Thurgood Marshall, the then-NAACP chief counsel, opposed school segregation, nor are there recordings of Chief Justice Earl Warren reading the opinion from the bench. Jerry Goldman, the founder of Oyez, utilised AI to replicate […]
-
I stumbled upon LLM Kryptonite – and no one wants to fix this model-breaking bug
The article examines the challenges and potential risks of integrating large language models (LLMs) into various applications. The author describes encountering a widespread issue in which different AI-powered chatbots produced incoherent and endless responses to a simple prompt. After contacting the respective providers, the author found the issue reproduced across multiple LLMs, indicating a […]