Tag: Legal ethics

  • Want to use ChatGPT to help prepare for court? This is what lawyers say you should and shouldn’t do

The increasing use of generative AI chatbots to assist non-lawyers in court proceedings has led Queensland, Victoria, New Zealand, and England to issue guidelines on the responsible use of AI chatbots in legal contexts. While AI can provide benefits such as improving access to justice for those who cannot afford a lawyer, there are concerns […]


    The guidelines for using AI in litigation in the Supreme Court of Victoria emphasise the importance of understanding AI limitations, ensuring privacy and confidentiality, avoiding misleading other participants, disclosing the use of AI programs, exercising professional judgment in reviewing AI-generated text, using specialised legal AI tools, and checking that AI-generated text is current, accurate, and […]

  • Reed Smith hires director of applied AI, Richard Robbins

    Reed Smith, one of the top 50 law firms in the US, has hired Richard Robbins as its first director of applied artificial intelligence. Robbins, who was formerly the managing director of applied AI at Epiq, will lead a team of AI engineers and data scientists and design generative AI, predictive AI, data science, and […]

  • Guest post: Microsoft Copilot – The challenges and considerations for law firms

    The use of advanced technologies, including AI, is becoming increasingly common in the legal industry. This is done to improve efficiency, enhance service delivery, and stay competitive. One of the leading tech companies in this field is Microsoft, which has invested heavily in AI development. Recently, Microsoft launched Copilot, a comprehensive AI-driven tool designed specifically […]

  • Exclusive: Launching Today Is The First Meeting Bot Specifically for Legal Professionals, for Use In Depositions, Hearings, and More

    DepoDirect has launched CoCounsel.ai, a meeting bot specifically designed for legal events such as arbitrations, hearings, and depositions. CoCounsel.ai produces real-time transcripts formatted to legal standards and provides bookmarking and archiving functionality that allows an entire litigation team to collaborate in real time. This tool enables attorneys to streamline their workflow and be more productive and […]

  • Generative AI in law: The good, the bad and the ugly

    The use of generative AI in the legal industry is gaining popularity, with tools like ChatGPT, Copilot, Gemini, and Claude AI being adopted by law firms. However, privacy and confidentiality issues are a concern, and users must read the terms and conditions before using these tools. Legal AI tools are built on existing legal databases […]

  • Where Generative AI Meets Human Rights

    The use of generative AI has become a prominent topic in every sphere of life, and its impact on human rights has been felt by millions of people. The UN High Commissioner for Human Rights, Volker Türk, believes that people must be at the centre of the technology to ensure that everyone benefits from AI. […]

  • RAG Systems Can Still Hallucinate

    The retrieval-augmented generation (RAG) technique has not entirely eliminated hallucinations: Lexis+ AI still produced errors when answering legal questions. It is recommended to ask vendors which sources are included in a generative AI tool, and to ask only questions that can be answered from that data. It is also advised to always read the cases […]

  • Will There Be A ‘Legal AI Arms Race’ In Litigation?

    The use of GenAI-driven tools in litigation is becoming more prevalent, with some law firms incorporating them to gain an advantage over their opponents. These tools can help with tasks such as verifying witness statements and detecting patterns in case documents. Stephen Dowling, the founder of TrialView, said at the Future Lawyer Week conference that […]

  • De-risking AI

    Generative AI has become increasingly popular in organisations, but it carries risks. Wisely AI has identified five key risks organisations should be aware of when using generative AI. The first risk is anthropomorphising AI chatbots: projecting human motivations onto their behaviour, which can compromise our judgment. As we interact with chatbots, we may start to assume […]