Researchers have found that Large Language Models (LLMs) such as GPT-4 and Llama 2 can analyse controversial topics and exhibit biases similar to those of humans. When given specific instructions, LLMs can align their outputs with human evaluations, suggesting they could enhance human analytical capabilities and help identify oversights in research. Dr Awais Hameed Khan of the University of Queensland, who led the investigation, proposed that LLMs should complement human interpretation rather than replace it. The research also introduces the AI Sub Zero Bias cards, a tool for analysing and scrutinising bias in the outputs of generative AI tools.
Source: ADM+S