In a recent podcast interview, William Saunders, a former safety employee at OpenAI, explained why he quit his position at the AI company.
He expressed concerns about the company's approach to developing Artificial Intelligence (AI), likening its trajectory to that of the Titanic and fearing it was heading into risky territory.
Saunders worked on OpenAI's superalignment team for three years and shared his views on the company's goals, particularly its pursuit of Artificial General Intelligence (AGI) and the introduction of paid products.
In the interview, he drew a comparison between OpenAI’s current path and historical endeavors such as the Apollo space program and the construction of the Titanic.
"I really didn't want to end up working for the Titanic of AI, and so that's why I resigned."
— William Saunders
Saunders criticized what he saw as OpenAI prioritizing market success and product launches over crucial safety considerations.
Furthermore, he highlighted potential dangers associated with AI development. According to Saunders, a tragic event in AI development could manifest as a model capable of launching large-scale cyberattacks, manipulating public opinion, or aiding in the creation of biological weapons.
Saunders urged postponing the rollout of new language models, pointing to insufficient knowledge about how these models behave and noting that the actions of today's most advanced AI systems cannot yet be predicted or regulated.