On November 21 last year, Suchir Balaji celebrated his 26th birthday with a few of his friends in San Francisco, closing out a turbulent year in which he quit his job as a researcher at OpenAI and later became a whistleblower. Within the next two days, Balaji was supposed to catch a flight back home, where his parents were waiting for him.
Balaji never caught that flight.
On November 26, San Francisco police discovered his body in his apartment under mysterious circumstances. Initially ruled a suicide by local police, the death of Balaji, an OpenAI researcher who had become an outspoken critic of the Artificial Intelligence (AI) industry, has raised several uncomfortable questions.
His parents have disputed the police's suicide finding and demanded an investigation by the FBI, claiming there were signs of a struggle in his apartment and alleging injuries on his body. Echoing the parents' claims, X owner Elon Musk has demanded an impartial investigation, stating, "This doesn't seem like a suicide."
Balaji had openly argued against the "fair use" defense invoked by his former employer and was also named in a lawsuit filed by The New York Times against OpenAI. He claimed that OpenAI's training of its ChatGPT models under the banner of "fair use" infringed copyright law.
With stakes this high, Balaji was a person of interest to many, and his death has left numerous questions unanswered. In this article, we will try to demystify the life and work of this 26-year-old Indian-American tech prodigy, who tried his best to warn the world about the perils of unchecked AI.
Suchir Balaji: A prodigy disillusioned by AI
Suchir Balaji, an Indian-American, was born in 1998 and brought up in Cupertino, California. He attended Monta Vista High School and was a finalist in the 2015-16 USA Computing Olympiad. He graduated from the University of California, Berkeley in 2021 with a major in Computer Science.
In an interview with Business Insider, Balaji's mother, Poornima Ramarao, shared some insights from his childhood, describing him as a prodigy who started coding at the age of 11, built his own computer at 13, and was recruited by Quora at 17. He also won several national programming championships.
According to an interview Balaji gave to The New York Times, he became fascinated with the potential of AI as a teenager, believing the technology could one day cure incurable diseases and even halt human ageing. With that mindset, Balaji joined OpenAI as an intern in 2018 and returned as a full-time researcher in 2021 after graduating from UC Berkeley. He had not anticipated, however, how his faith in AI would be shattered within that same organization.
Why did Suchir Balaji quit OpenAI?
Balaji first joined OpenAI (founded as a non-profit, later restructured around a for-profit arm) back in 2018 as an intern. He believed he shared his employer's vision that AI should exist to benefit humanity. As a researcher, Balaji made significant contributions to the models behind ChatGPT, helping train them on data gathered from the internet.
After roughly four years with OpenAI, Balaji grew disillusioned when he came to believe that the content used to train its AI models infringed the copyrights of artists, journalists, and other creators. His convictions clashed with OpenAI's direction, and in 2024, after much deliberation, Balaji decided to quit.
"If you believe what I believe, you have to just leave," Balaji said in an interview with The New York Times.
But Balaji didn't stop there; he decided to go public with his ethical concerns about AI training. In his interview with The New York Times, he discussed OpenAI's data-gathering practices, explained how he believed they violated copyright law, and even argued that ChatGPT was harming the internet.
Balaji was reportedly preparing to assist in legal action against OpenAI and met with a copyright attorney at the end of October last year. According to The New York Times, Balaji held documents he believed proved the copyright violations, and his testimony could have had serious consequences for the AI industry.
What is OpenAI's "fair use" defense?
According to OpenAI, training its AI models on copyrighted content qualifies as "fair use": a legal doctrine that permits unlicensed use of copyrighted material when the use serves a new, "transformative" purpose. Under this reasoning, the models learn from creators' original, copyrighted content in order to produce similar but not identical output.
However, according to Balaji's own analysis, the models could readily produce unauthorized outputs that infringe on the copyrights of original content creators.
"While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data. If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use'. Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use," Balaji wrote in his final public statement.
Final Words
Suchir Balaji's death and its many ramifications have shocked the tech industry and raised questions that have yet to be addressed. A gifted and dedicated AI researcher, he became an activist after allegedly gaining insight into company profit plans that were fundamentally opposed to OpenAI's original mission.
Frustrated by the organization's shift from a non-profit entity to a profit-making venture, Balaji raised major concerns about OpenAI's copyright practices as well as the technology's overall impact on humanity. His life and work stand as an attempt to peel back the glossy exterior of AI and reveal the complex, problematic reality behind it.
Suchir's journey was tragically cut short last year, but his ideas will continue to inspire future generations and serve as a caution against unchecked AI.