Ethereum co-founder Vitalik Buterin has expressed serious concerns about the pace at which superintelligent artificial intelligence (AI) is being pursued, calling it “risky” amid the ongoing changes in OpenAI’s leadership.
Buterin shared his views on X, addressing three core issues: the risks of superintelligent AI, the benefits of open AI models, and the need for balanced regulatory measures.
Jan Leike, the former head of alignment at OpenAI, recently announced his resignation, citing a “breaking point” with the company’s management regarding its primary aims. Leike said that the advancements in artificial general intelligence (AGI) at OpenAI have caused “safety culture and processes to take a backseat to shiny products.”
AGI, which is predicted to match or exceed human cognitive abilities, has already begun to alarm industry insiders, who argue that the world is not prepared to handle such superintelligent AI systems.
This sentiment echoes Buterin’s own position: he stressed that development of superintelligent AI should not be rushed, and that people should push back against those who attempt to rush it.
Buterin emphasized the need for open models that run on consumer hardware as a “hedge” against a future in which the majority of human thought could be read and mediated by a small group of corporations.
He added, “Such models are also much lower in terms of doom risk than both corporate megalomania and militaries.”
Buterin also advocates for a regulatory framework distinguishing between “small” and “large” AI models, supporting lighter regulations for smaller models and stricter controls for larger ones. However, he worries current proposals might eventually classify all AI models as “large,” stifling innovation.
He notes that models with around 70 billion parameters (the size he runs himself) should be considered small, while those with 405 billion parameters should face more oversight.