OpenAI has created a board committee to evaluate the safety and security of its AI models. The governance change comes weeks after the resignation of the company's top safety executive and the disbandment of his internal team. The new committee will spend 90 days assessing the safeguards in OpenAI's technology and deliver a report, and OpenAI has committed to publicly sharing an update on adopted recommendations in a manner consistent with safety and security.
OpenAI has recently begun training its newest AI model. The company's rapid advances have raised concerns about how it manages the technology's potential dangers. Those worries intensified last fall, when CEO Sam Altman was briefly ousted in a boardroom coup after clashing with co-founder and chief scientist Ilya Sutskever over the pace of AI development and the steps taken to mitigate potential harms.
Concerns resurfaced this month when Sutskever and a key deputy, Jan Leike, left the company. The two led OpenAI's superalignment team, which focused on the long-term threats posed by superhuman AI. Leike said his division had struggled to get computing resources within OpenAI, a criticism echoed by other departing employees. After Sutskever's departure, OpenAI dissolved the team but said the work would continue within its research unit under co-founder John Schulman, now Head of Alignment Science.
OpenAI has also faced challenges in managing staff departures. It recently revoked a policy that canceled the equity of former staffers who spoke out against the company. A spokesperson acknowledged the criticism from ex-employees, anticipated more, and emphasized the company's efforts to address their concerns.
The new safety committee includes three board members: Chairman Bret Taylor, Quora CEO Adam D'Angelo, and former Sony Entertainment executive Nicole Seligman. It also includes six OpenAI employees, among them Schulman and Altman. The committee will consult outside experts such as Rob Joyce, a Homeland Security adviser under Donald Trump, and John Carlin, a former Justice Department official under President Joe Biden.