
“International cooperation is essential for managing AI risks.”

2nd AI Trust and Safety Conference

Ministry of Science and ICT: “Preparing systematic national strategies, including the launch of the AI Safety Research Center”

“To effectively manage the risks of cutting-edge AI models, harmonization between domestic laws and international agreements is essential.”

The Ministry of Science and ICT held the AI Trust and Safety Conference on the 26th at the Peace and Park Convention (War Memorial of Korea, Seoul) to share the achievements of the private sector and the government in responsible AI.

Around 200 participants attended the event, including the Telecommunications Technology Association (TTA), the Korea Information Society Development Institute (KISDI), representatives from major domestic AI corporations such as Naver, Kakao, SKT, KT, Samsung Electronics, and LG AI Research Institute, as well as startups and researchers.

In a video keynote speech, Professor Yoshua Bengio from the Quebec AI Institute emphasized, “The government’s support expansion and the role of the AI Safety Research Center are necessary for securing risk management technology for AI models.”

In her keynote address, Hyeon Oh, Director of the AI Research Institute at KAIST, highlighted trends in the international competition for AI supremacy and the importance of AI as a strategic asset. She described the impact of AI technology on language and cultural inclusiveness, disparity issues, and global society, and discussed strategies for promoting AI safety policies to create new markets and enhance corporate competitiveness.

The implementation status of the AI Seoul Pledge, made at the AI Global Forum in May, was also presented. At the time, the domestic signatories were Samsung Electronics, Naver, Kakao, SKT, KT, and LG AI Research Institute, and the international signatories were Google, OpenAI, Anthropic, IBM, Salesforce, Cohere, Microsoft (MS), and Adobe. The conference reviewed each company’s implementation status, including risk management measures, technology research, and internal governance for safe AI development.

Additionally, the results of the “Generative AI Red Team Challenge” held in April were disclosed. The challenge drew over 1,000 participants, including AI researchers, university students, and ordinary citizens, who probed generative AI models (LLMs) from four domestic AI companies, including Naver, for potential risks and vulnerabilities such as harmful information, bias, and discrimination. At the conference, TTA presented its analysis of the participants’ attack attempts, introducing major risks identified across seven areas and various attack techniques (e.g., denial of service, confusion induction).

The “AI Trustworthiness Award” was presented for the second time this year at the event. The award went to Dabeeo’s “Dabeeo Earth Eye 2.0,” which established and applied its own data quality management process based on the “Trustworthy AI Development Guide.” The recipient received the Minister of Science and ICT Award and a prize of 10 million won.

Sang-Hoon Song, Director of the Information and Communications Policy Bureau at the Ministry of Science and ICT, stated, “In response to the government’s continued AI trust and safety policies, industry and academia have recently expanded their AI trust and safety organizations and investment on a voluntary basis. We will strengthen policy support to spread a culture of responsible AI development and use built on these voluntary efforts, and we will respond systematically at the national level to the potential risks of advanced AI by launching the AI Safety Research Center.”
