Former OpenAI chief scientist Ilya Sutskever, along with former OpenAI engineer Daniel Levy and investor and former Y Combinator partner Daniel Gross, has launched Safe Superintelligence Inc. (SSI), a company focused on developing AI safety and capabilities in tandem.
SSI is based in Palo Alto and Tel Aviv and aims to advance artificial intelligence while prioritizing safety. The founders emphasized that this singular focus keeps the company's progress insulated from short-term commercial pressures.
Ilya Sutskever said, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever left OpenAI on May 14, after having played a role in the brief ouster of CEO Sam Altman and subsequently stepping down from the company's board. Daniel Levy departed OpenAI a few days later.
The pair were previously part of OpenAI’s Superalignment team, created in July 2023 to develop ways to steer and control AI systems smarter than humans, often referred to as artificial general intelligence (AGI). OpenAI dissolved the team after Sutskever and other researchers departed.
Ethereum co-founder Vitalik Buterin has called AGI “risky” but argues it poses a lesser threat than corporate greed or military misuse. Separately, more than 2,600 tech figures, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have called for a six-month pause on AI training to evaluate its “profound risk.”
SSI's singular focus on safety while developing AI capabilities sets a notable standard for addressing ethical concerns in the advancement of artificial intelligence.