Schurq

AI Industry Leaders Regulate Machine Learning: Safety and Responsibility First

Reading time: 1.5 minutes
By Guido Sombroek

"Leading AI companies such as OpenAI, Microsoft, Google and Anthropic are joining forces to regulate the development of large-scale machine learning models. This initiative, called the Frontier Model Forum, is a response to the growing need for accountability and security in the ever-evolving world of artificial intelligence."

OpenAI, Microsoft, Google and Anthropic have created the Frontier Model Forum, a body focused on the regulation of large-scale machine learning models. This vanguard of the AI industry announced on Wednesday that the Forum will work to advance AI safety research, identify best practices for deploying advanced AI models, and collaborate with policymakers, academics and businesses.

The Forum aims to ensure the safe and responsible development of so-called "frontier AI models," which exceed the capabilities of today's most advanced models. These powerful frontier models may have dangerous capabilities that pose serious risks to public safety.

Generative AI models such as ChatGPT draw on vast amounts of data to rapidly generate responses in the form of prose, poetry and images. The applications of such models are many, and government and industry agree that while AI has enormous potential to benefit the world, adequate safeguards are needed to mitigate its risks.

In the coming months, the Frontier Model Forum will establish an advisory board, set up a funded working group, and create an executive board to lead the effort. The initiative is an essential step in bringing the technology sector together to advance AI responsibly and address its challenges so that it benefits all of humanity.

Source: OpenAI, Microsoft
