OpenAI Board Gets Veto Power Over AI Model Launches

OpenAI says its "primary fiduciary duty is to humanity" and that it is "committed to doing the research required to make AGI safe."

OpenAI has introduced a Preparedness Framework to ensure its models are safe and secure. The framework establishes a dedicated Preparedness team and calls for safety drills, independent third-party audits, and regularly updated risk evaluations of its frontier models.

OpenAI models will be evaluated for risks related to cybersecurity, persuasion, model autonomy, and the misuse of systems to create chemical, biological, radiological, or nuclear threats.

Models will be classified as low, medium, high, or critical based on their assessed risks, and additional safeguards apply at the upper tiers: under the framework, a model rated high after mitigations cannot be deployed, and one rated critical cannot be developed further. OpenAI's stated goal is to ensure AGI is safe and serves humanity.

Source: AI Business

