
OpenAI Dismantles Safety Team for Long-Term AI Risks

Safety Team Executive Publicly Criticizes Company Policy

ChatGPT developer OpenAI has dismantled the team that researched the long-term risks of artificial intelligence (AI), following the departure of the executive who led it. That executive has since publicly criticized the company’s policies.

According to foreign outlets including Bloomberg on the 18th (local time), OpenAI’s Superalignment team, co-led by chief scientist Ilya Sutskever, was merged into another team shortly after he left the company. OpenAI had established the team in July of last year to prepare for the risks of “superintelligent AI.”
However, with the departure of Sutskever, who had led the ouster of Sam Altman in November of last year, the team was disbanded, prompting interpretations that OpenAI may be neglecting AI safety. At the time, it was speculated that Altman had been pushed out for prioritizing the AI development race over safety.

In particular, Jan Leike, who co-led the team, also resigned from the company and publicly criticized it on X (formerly Twitter). Expressing his disappointment, Leike wrote, “I thought OpenAI would be the best place to do this research (superalignment), but I have disagreed with management about the company’s core priorities for a long time.” He warned that “creating machines smarter than humans is an inherently risky endeavor,” and voiced concern that OpenAI, which acts on behalf of humanity, “has largely neglected safety culture and processes over the past few years.”
Despite Leike’s public criticism, OpenAI responded that it places safety above all else. Replying to his posts under their own names, CEO Sam Altman and President Greg Brockman stated, “There is still no proven playbook for the path to AGI (artificial general intelligence),” suggesting that empirical understanding can help guide the way forward. They emphasized that OpenAI is working to mitigate serious risks while delivering the technology’s benefits, and explained that it is not always certain when safety standards will be met during development, so it is acceptable for a release date to slip. They added, “We continue to collaborate with governments and many stakeholders on safety.”
, ‘[Silicon Valley=Reporter Lee Deok-ju]’,
