Rapid advancements in large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere.
Despite this success, LLMs can learn, perpetuate, and amplify harmful social biases.
This paper presents a comprehensive survey of bias evaluation and mitigation techniques for LLMs, providing a clear guide to the existing literature that empowers researchers and practitioners to better understand and prevent bias propagation in LLMs.
Source: arxiv.org