OpenAI Researcher Quits, Saying Company Is Hiding the Truth
OpenAI has long published research on the potential safety and economic impact of its own technology. Now, Wired reports that the Sam Altman-led company is becoming more "guarded" about publishing research that points to an inconvenient truth: that AI could be bad for the economy. The perceived censorship has become such a point of frustration that at least two OpenAI employees on its economic research team have quit the company, according to four Wired sources.
In This Article:
- Two OpenAI Researchers Quit Amid Allegations of Censorship in Economic Research
- Kwon Memo: Build Solutions, Not Just Publish on Hard Subjects
- OpenAI’s Transformation from Open-Source Roots to For-Profit Juggernaut and Possible $1 Trillion IPO
- Billions in Investments and $250 Billion Azure Contract Shape OpenAI's Calculus on Findings
- Aaron Chatterji Oversees OpenAI’s Economic Research; Glowing September Report Faces Skepticism
- Other Former OpenAI Employees Criticize Direction
- Author's Note
Two OpenAI Researchers Quit Amid Allegations of Censorship in Economic Research
Two OpenAI employees on its economic research team have quit the company, according to four Wired sources. One of them was economics researcher Tom Cunningham. In a parting message shared internally, he wrote that the economic research team was veering away from doing real research and instead acting like its employer’s propaganda arm.
Kwon Memo: Build Solutions, Not Just Publish on Hard Subjects
Shortly after Cunningham’s departure, OpenAI’s chief strategy officer Jason Kwon sent a memo saying the company should "build solutions," not just publish research on "hard subjects." "My POV on hard subjects is not that we shouldn’t talk about them," Kwon wrote on Slack. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes."
OpenAI’s Transformation from Open-Source Roots to For-Profit Juggernaut and Possible $1 Trillion IPO
The reported censorship, or at least hostility toward pursuing work that paints AI in an unflattering light, is emblematic of OpenAI’s shift away from its non-profit and ostensibly altruistic roots as it transforms into a global economic juggernaut. When OpenAI was founded in 2015, it championed open-source AI and research. Today its models are closed-source, and the company has restructured itself into a for-profit public-benefit corporation. Reports also suggest that the private entity is planning to go public at a $1 trillion valuation, in what would be one of the largest initial public offerings of all time, though exactly when that might happen is unclear.
Billions in Investments and $250 Billion Azure Contract Shape OpenAI's Calculus on Findings
Though its non-profit arm remains nominally in control, OpenAI has garnered billions in investment, signed deals that could bring in hundreds of billions more, and entered contracts to spend dizzying sums. On one end, OpenAI has secured a commitment from chipmaker Nvidia to invest up to $100 billion in the company; on the other, it says it will pay Microsoft up to $250 billion for its Azure cloud services.
Aaron Chatterji Oversees OpenAI’s Economic Research; Glowing September Report Faces Skepticism
OpenAI’s economic research is currently overseen by Aaron Chatterji. Wired reports that Chatterji led a report released in September showing how people around the world use ChatGPT, framing it as proof that the chatbot creates economic value by boosting productivity. If that seems suspiciously glowing, an economist who previously worked with OpenAI, speaking anonymously, told Wired that the company is increasingly publishing work that glorifies its own tech.
Other Former OpenAI Employees Criticize Direction
William Saunders, a former member of OpenAI’s now-defunct “Superalignment” team, said he quit after realizing the company was "prioritizing getting out newer, shinier products" over user safety. Since departing last year, former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development, highlighting how ChatGPT appeared to be driving some of its users into mental health crises and delusional spirals. Wired noted that OpenAI’s former head of policy research, Miles Brundage, complained after leaving last year that it had become "hard" to publish research "on all the topics that are important to me."
Author's Note
I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.