Recent studies indicate that fine-tuning AI models, particularly when combined with retrieval-augmented methods, can benefit specific legal contexts, especially with smaller models. Fine-tuning involves retraining a general AI model on specialised legal datasets to improve its handling of legal terminology and concepts. However, despite initial enthusiasm from law firms and vendors for fine-tuning large language models (LLMs), results have been mixed, and interest has since declined.
Traditional fine-tuning has limitations: it does not necessarily add new knowledge, but primarily adjusts the output style to match the training data. Smaller language models (SLMs) may be more adaptable, yet they still struggle with complex legal texts. Recent findings suggest that fine-tuning SLMs can nonetheless enhance performance in legal applications, showing that the technique remains relevant when implemented correctly.
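To make the idea of fine-tuning an SLM on legal material concrete, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The article does not name a toolchain, model, or dataset; the model choice ("distilgpt2" as a stand-in for a small language model), the hypothetical file "legal_corpus.txt", and all hyperparameters are illustrative assumptions, not the approach described in the source.

```python
# Hedged sketch: continued pre-training / fine-tuning of a small causal LM
# on a (hypothetical) legal text corpus. Assumes transformers and datasets
# are installed; "legal_corpus.txt" is a placeholder, one document per line.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "distilgpt2"  # stand-in for a small language model (SLM)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the specialised legal dataset as plain text.
dataset = load_dataset("text", data_files={"train": "legal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Causal language-modelling objective: the model adapts to legal terminology
# and phrasing by predicting the next token in the legal corpus.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-legal-finetuned",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

As the article notes, a run like this mainly shapes style and terminology rather than injecting new legal knowledge, which is why such fine-tuning is often evaluated alongside retrieval-augmented setups.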
Source: Legaltech Hub