Knowledge Conflicts for LLMs: A Survey

This survey examines the challenges large language models (LLMs) face when combining contextual and parametric knowledge, providing an in-depth analysis of knowledge conflicts. The focus is on three categories of knowledge conflict: context-memory, inter-context, and intra-memory conflict. These conflicts can significantly degrade the reliability and performance of LLMs, particularly in real-world applications where noise and misinformation are common. By analysing the causes, categorising the conflicts, examining the behaviour of LLMs under such conflicts, and reviewing available solutions, the survey aims to offer insight into strategies for improving the robustness of LLMs, serving as a valuable resource for advancing research in this developing field.
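
To make the taxonomy above concrete, here is a minimal illustrative sketch in Python. The three conflict labels mirror the categories named in the survey, but the example prompts, entities, and class names are invented purely for exposition and are not taken from the paper.

```python
# Hypothetical illustration of the three knowledge-conflict types.
# All prompts, facts, and names below are invented for exposition only.

from dataclasses import dataclass
from enum import Enum, auto


class ConflictType(Enum):
    CONTEXT_MEMORY = auto()  # retrieved context contradicts the model's parametric memory
    INTER_CONTEXT = auto()   # two retrieved passages contradict each other
    INTRA_MEMORY = auto()    # the model answers inconsistently across paraphrases


@dataclass
class Example:
    conflict: ConflictType
    description: str
    sample: str


EXAMPLES = [
    Example(
        ConflictType.CONTEXT_MEMORY,
        "The prompt context asserts a fact the model memorised differently.",
        'Context: "Entity X was founded in 2019." '
        "Parametric memory: founded in 2015. Which one does the answer follow?",
    ),
    Example(
        ConflictType.INTER_CONTEXT,
        "Two retrieved documents disagree with each other.",
        'Doc A: "The drug was approved." Doc B: "Approval is still pending."',
    ),
    Example(
        ConflictType.INTRA_MEMORY,
        "The same model gives inconsistent answers to paraphrased questions.",
        'Q1: "Who wrote Y?" -> "Author A."  Q2: "Y was written by whom?" -> "Author B."',
    ),
]

if __name__ == "__main__":
    for ex in EXAMPLES:
        print(f"{ex.conflict.name}: {ex.description}\n  e.g. {ex.sample}\n")
```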

Source: arXiv.org
