Google, Microsoft, OpenAI and Anthropic form body to regulate AI development

The Frontier Model Forum is an industry body focused on ensuring the safe and responsible development of frontier AI models, defined as:

“those that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”

The forum was created by Anthropic, Google, Microsoft and OpenAI, but it invites competitors and civil society organizations to partner; qualifying organizations should demonstrate a

“strong commitment to frontier model safety”

and be actively developing and deploying frontier AI models.

The forum plans to benefit the entire AI ecosystem by drawing on the technical and operational expertise of its member companies. Its founders' main objective is to promote research in AI safety by:

  • developing standards for evaluating models; 
  • encouraging the responsible deployment of advanced AI models; 
  • discussing trust and safety risks in AI with politicians and academics; and
  • helping develop positive uses for AI.

The announcement comes as moves to regulate the technology gather pace, with critics voicing concern that governments have already ceded leadership in AI to the private sector, probably irrecoverably.

Source: The Guardian

