OpenAI, Microsoft, Google and Anthropic have created the Frontier Model Forum, an industry body focused on the safe development of large-scale machine learning models. This vanguard of the AI industry announced on Wednesday that the Forum will work to advance AI safety research, identify best practices for deploying advanced AI models, and collaborate with policymakers, academics and businesses.
The Forum aims to ensure the safe and responsible development of so-called "frontier AI models," which exceed the capabilities of current, state-of-the-art models. These powerful frontier models may have dangerous capabilities that pose serious risks to public safety.
Generative AI models, such as ChatGPT, draw on vast amounts of training data to rapidly generate responses in the form of prose, poetry and images. The applications of such models are many, and government and industry agree that while AI has enormous potential to benefit the world, adequate safeguards are needed to mitigate its risks.
The Frontier Model Forum plans to establish an advisory board in the coming months, set up a working group and funding arrangements, and appoint an executive board to lead the effort. This initiative is an important step in bringing the technology sector together to advance AI responsibly and to address its challenges so that it benefits all of humanity.