A group of Senate Democrats and one independent lawmaker have sent a letter to OpenAI CEO Sam Altman, raising concerns about the company’s safety measures.
The Washington Post first reported on this letter, which highlights 12 key issues that OpenAI needs to address.
The letter's first request asks whether OpenAI will allow U.S. government agencies to test and review its next major AI model before release. The lawmakers also want OpenAI to commit 20% of its computing power to safety research and to take steps to prevent bad actors or foreign adversaries from stealing its AI technology. These requests reflect growing concern about the risks posed by advanced AI systems.
This increased scrutiny follows whistleblower reports that safety testing for GPT-4 Omni was rushed to keep the model's release on schedule. There are also claims that employees who raised safety concerns faced retaliation and were pressured to sign potentially illegal non-disclosure agreements.
These issues led to a complaint filed with the U.S. Securities and Exchange Commission in June 2024.
Recently, Microsoft and Apple both declined to take seats on OpenAI's board, despite Microsoft's roughly $13 billion investment in the company. The decision, made in July, came shortly after the whistleblower complaints and reflects the growing complexity of AI oversight and increasing attention from regulators.
Adding to these concerns, William Saunders, a former OpenAI employee, recently said he left the company because he feared its research could pose a serious threat to humanity, comparing OpenAI's trajectory to the Titanic disaster.
While Saunders isn’t worried about the current version of ChatGPT, he fears future versions and the development of AI that could surpass human intelligence. He believes AI workers have a duty to warn the public about potentially dangerous AI developments.