OpenAI on Tuesday said it created a safety and security committee led by senior executives, after disbanding its previous oversight board in mid-May.
The new committee will be responsible for recommending to OpenAI's board "critical safety and security decisions for OpenAI projects and operations," the company said.
News of the new committee comes as the developer of the ChatGPT virtual assistant announced that it has begun training its "next frontier model."
The firm said it anticipates the "resulting systems to bring us to the next level of capabilities on our path to AGI," or artificial general intelligence, a kind of AI that is as smart as or smarter than humans.
In addition to CEO Sam Altman, the safety committee will consist of Bret Taylor, Adam D'Angelo and Nicole Seligman, all members of OpenAI's board of directors, according to the blog post.
The formation of a new oversight team comes after OpenAI dissolved its previous team that focused on the long-term risks of AI. Before that, both of the team's leaders, OpenAI co-founder Ilya Sutskever and key researcher Jan Leike, announced their departures from the Microsoft-backed startup.
Leike earlier this month wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products." In response to his departure, Altman said on social media platform X that he was sad to see Leike leave, adding that OpenAI has "a lot more to do."
Over the next 90 days, the safety group will evaluate OpenAI's processes and safeguards and share its recommendations with the company's board, the blog post said. OpenAI will provide an update on the recommendations that it has adopted at a later date.
AI safety has been at the forefront of a larger debate, as the huge models that underpin applications like ChatGPT get more advanced. AI product developers also wonder when AGI will arrive and what risks will accompany it.
— CNBC's Hayden Field contributed to this report.