
Former OpenAI scientist starts new AI company focused on safe superintelligence

By Eline Tol

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced the launch of a new artificial intelligence company: Safe Superintelligence, with offices in Palo Alto and Tel Aviv. In a post on X, Sutskever said the new venture is aimed at creating a safe AI environment.

Focus on safety

Safe Superintelligence seeks to distinguish itself by focusing entirely on safety, security, and progress, without distraction from management overhead or product cycles. According to Sutskever, this model insulates the company from short-term commercial pressures.

Collaboration of experts

Sutskever is not Safe Superintelligence's only founder. Former OpenAI researcher Daniel Levy and Daniel Gross, co-founder of Cue and former AI lead at Apple, have joined him as co-founders. This collaboration gives the new company a strong foundation.

Departure from OpenAI

Sutskever left OpenAI in May after playing a key role in the tumultuous departure and return of CEO Sam Altman. His new venture, with its focus on safety and ethics, marks an important step in the continuing evolution of AI development.

Impact on businesses and organizations

Companies and organizations can benefit from Safe Superintelligence's focus on safe AI applications, which offers a robust foundation for AI integration without the risks of short-term commercial interests. Its approach can serve as an example for others in the industry.

Safe Superintelligence aims to provide solutions by focusing on ethical AI development and safe deployment strategies, helping companies implement responsible AI without having to carry the burden of security and ethics concerns themselves. Working with experienced AI experts enhances its credibility and its potential for innovative applications.

Source: Safe Superintelligence Inc.
