Why Large Language Models Like ChatGPT Treat Black- and White-Sounding Names Differently

The co-author of a paper on auditing large language models for race and gender bias is Julian Nyarko, a Stanford Law School Professor and also an Associate Director of Stanford HAI. The paper discusses how popular large language models handle specific queries involving first and last names indicative of race or gender. According to the authors, the names associated with Black women are associated with the least favourable outcomes. The study employs an audit design to measure and identify the level of bias in different domains of society. The authors explain that large language models predict the most probable next word based on their training, and they reproduce the associations that they learn from the data. Most popular large language models are closed-source, making it challenging to investigate the code and technical perspective.

Source: Stanford HAI


Posted

in

, , ,

by

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *