OpenAI has changed its governance, forming a board committee to evaluate the safety and security of its AI models, just weeks after the executive who led that work departed and the company effectively dismantled his internal team.
The move comes after two former board members, Helen Toner and Tasha McCauley, published a harsh critique of OpenAI’s governance in The Economist on Sunday.
The newly formed committee will spend ninety days evaluating the safeguards around OpenAI’s technology before delivering a report. “Following the full board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security,” the company stated in a blog post on Tuesday.
The private company’s recent, quick advancements in AI have sparked questions about how it handles the possible risks of the technology.
Concerns grew last fall when CEO Sam Altman was briefly ousted in a boardroom coup following disagreements with chief scientist and co-founder Ilya Sutskever over the pace of AI product development and safety precautions.
Concerns resurfaced this month with the departures of Sutskever and Jan Leike, who together led OpenAI’s superalignment team, the group focused on the risks of superhuman AI. Leike, who resigned, said the team had struggled to get the resources it needed, a complaint echoed by other departing employees.
After Sutskever’s exit, OpenAI absorbed his team into broader research initiatives instead of keeping it separate. Co-founder John Schulman now leads alignment research in an expanded role, titled Head of Alignment Science.
The startup has also struggled at times with employee departures. Last week, OpenAI eliminated a provision that would have stripped former employees of their shares if they spoke out against the company.