GPT-4 Turbo: what’s in it for the legal sector?

OpenAI’s recent Developer Day showcased GPT-4 Turbo, which boasts a knowledge cutoff extended to April 2023, faster processing and lower prices. The company also introduced custom GPT “Assistants”, which simplify chatbot creation, and developers will be able to sell their customized models on OpenAI’s shop.

The introduction of GPT Assistants offers an easier way for legal teams to create chatbots, but concerns persist about data confidentiality within OpenAI’s walled garden. 
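As a rough illustration of how little code this involves, the sketch below builds a document-grounded assistant with OpenAI’s Python SDK. The file, instructions and model identifier are invented for the example; the beta method names and the built-in “retrieval” tool are as announced at Developer Day and may change. Note that the uploaded document is stored on OpenAI’s servers, which is exactly the confidentiality concern just mentioned.

```python
# Minimal sketch of a legal Q&A assistant on OpenAI's Assistants API
# (beta at launch). The file name, instructions and model identifier
# are illustrative assumptions, not any vendor's actual setup.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a firm document for the built-in retrieval tool to search.
# Caveat: this document now lives on OpenAI's servers.
playbook = client.files.create(
    file=open("negotiation_playbook.pdf", "rb"),
    purpose="assistants",
)

assistant = client.beta.assistants.create(
    name="Contract Review Helper",
    instructions="Answer questions about our negotiation playbook "
                 "and cite the relevant section where possible.",
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview identifier
    tools=[{"type": "retrieval"}],
    file_ids=[playbook.id],
)

# Each conversation lives in a thread; a run executes the assistant on it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What liability cap do we usually accept?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# Poll until the run finishes, then print the assistant's latest reply.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```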

While initial impressions are positive, there are concerns that GPT-4 Turbo’s reasoning capabilities have decreased relative to GPT-4, and that the announcements will disrupt the legal tech startup ecosystem.

OpenAI’s move towards platformization with the GPTs shop also raises questions about market competition: in the evolving legal tech landscape, startups that build on the platform risk both vendor lock-in and ending up competing with OpenAI itself.

OpenAI is most likely following the classic lock-in strategy, in which a large vendor initially attracts customers with a good product, low prices and an attractive development environment. Once the product has become the de facto standard and customers find it difficult to move to another platform, prices are raised.

Similarly, OpenAI might analyse the sales data of its GPTs Shop to understand which products are popular. When a certain GPT is successful, OpenAI can easily create its own version, displacing the marketplace sellers who pioneered it. Despite the very low barrier to entry, this is something to think about for legal startups that want to offer their own custom GPT.

Despite its advances, GPT-4 Turbo does not fundamentally resolve the issues that matter most for legal applications: the model’s limited attention span over long documents and the cost of submitting large amounts of input text. The recommendation therefore remains to use retrieval-augmented generation, which is more efficient.

Retrieval-augmented generation (RAG) is an AI framework for improving the quality of Large Language Model (LLM) responses by grounding the model in external sources of knowledge that supplement the LLM’s internal representation of information.

Implementing RAG in an LLM-based question-answering system has two main benefits:

  • it ensures that the model has access to the most current and reliable facts, and
  • it gives users access to the model’s sources, allowing its claims to be checked for accuracy and ultimately trusted.
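To make the pattern concrete, here is a minimal RAG sketch. Everything in it is an illustrative assumption: the clause snippets, the naive keyword-overlap retriever (a stand-in for the embeddings-based vector search a real system would use) and the model identifier. The point is the shape of the pipeline: retrieve the passages relevant to the question, send only those to the model, and return them alongside the answer so the user can verify it.

```python
# Minimal RAG sketch. Illustrative assumptions throughout: the clause
# texts, the keyword-overlap retriever and the model identifier are all
# invented for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in knowledge base: in practice, a firm's contracts or know-how
# held in a vector index.
documents = [
    "Clause 12.3: Either party may terminate on 90 days' written notice.",
    "Clause 7.1: Liability is capped at the fees paid in the prior 12 months.",
    "Clause 4.2: The supplier must maintain ISO 27001 certification.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval, standing in for a vector search."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    sources = retrieve(query)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the GPT-4 Turbo preview identifier
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the numbered sources below and "
                        "cite them, e.g. [1]. If they are insufficient, "
                        "say so.\n\nSources:\n" + context},
            {"role": "user", "content": query},
        ],
    )
    # Returning the sources with the answer lets users verify each claim.
    return response.choices[0].message.content + "\n\nSources:\n" + context

print(answer("How much notice is required to terminate the agreement?"))
```

Because only the retrieved snippets are sent to the model, this also keeps input costs down compared with pasting entire documents into the prompt.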

Looking ahead, incremental improvements can be expected, but breakthroughs that overcome these fundamental limitations are unlikely in the short term.

Source: Clausebase.com
