How Anthropic has doubled down on AI safety

Anthropic was founded by former OpenAI employees concerned with AI safety. It has since become one of the leading commercial AI model developers, raising roughly $7 billion from major backers including Amazon and Google. In July 2023, Anthropic released Claude 2, an upgraded version of its LLM, which many AI app developers use for its strengths in text generation and summarisation. On the safety front, Anthropic developed “Constitutional AI,” a method for steering Claude away from harmful or dangerous content: the model is given a set of general principles to follow, similar to a “constitution,” while a second AI continuously monitors the first, evaluating how well its outputs adhere to those principles.
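The critique-and-revise loop behind Constitutional AI can be sketched roughly as follows. This is a toy illustration only, not Anthropic's implementation: the three model calls are stand-in stubs (a real system would call an actual LLM), and the principle list and function names are invented for the example.

```python
# Toy sketch of a Constitutional AI-style loop: draft, critique against
# written principles, then revise if any principle is flagged.
# All three "model" functions below are hypothetical stubs.

CONSTITUTION = [
    "Do not provide instructions for dangerous or illegal activities.",
    "Avoid harmful, hateful, or deceptive content.",
]

def generate(prompt: str) -> str:
    """Stub for the primary model producing a draft response."""
    return f"Draft answer to: {prompt}"

def critique(response: str, principles: list[str]) -> list[str]:
    """Stub for the overseer model: returns principles the draft may violate.
    Here we crudely flag any response mentioning 'dangerous'."""
    if "dangerous" in response.lower():
        return list(principles)
    return []

def revise(response: str, violations: list[str]) -> str:
    """Stub for rewriting the draft to comply with flagged principles."""
    return response + " [revised to comply with the constitution]"

def constitutional_reply(prompt: str) -> str:
    draft = generate(prompt)
    violations = critique(draft, CONSTITUTION)
    return revise(draft, violations) if violations else draft

print(constitutional_reply("how do I bake bread?"))
```

The key idea the sketch captures is that the safety check is expressed as natural-language principles evaluated by a second model pass, rather than hand-written filtering rules.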

Source: Fast Company
