Visitors experience Kakao's high-performance AI, Kanana, at AI Festa 2025, held at COEX in Gangnam-gu, Seoul, on the 30th.
Kakao's AI model "Kanana" has undergone Korea's first AI safety evaluation. On the 29th, the Ministry of Science and ICT announced that it had evaluated Kakao's "Kanana Essence 1.5" in collaboration with the AI Safety Institute and the Korea Information and Communication Technology Association (TTA). The evaluation is a means of identifying and assessing AI risks to ensure the safety of AI systems. Ahead of the AI Fundamental Act taking effect next January, the ministry is providing safety assurance consulting for companies operating high-performance AI models.
The evaluation used the AssurAI dataset, released on November 18 by TTA and KAIST (Professor Choi Ho-jin's research team), along with the high-risk field evaluation dataset from the AI Safety Institute. It covered a wide range of scenarios, from general risk factors such as violent and discriminatory expressions to scenarios with high abuse potential, such as weapons and security, and found that Kanana was safer than similarly sized global models such as Llama 3.1 and Mistral 0.3. Notably, Kanana's overall score was 3.61, above Llama's 3.13 and Mistral's 3.04.
AssurAI is a Korean-language safety benchmark designed to evaluate 35 risk areas. To strengthen global alignment, it is planned to be contributed, together with the high-risk field datasets built by the AI Safety Institute, to the international AI safety research network and to international standardization efforts (e.g., ISO/IEC JTC 1/SC 42).
The Ministry of Science and ICT plans to apply the safety evaluation to the first phase of the independent AI foundation model project next year and will expand the evaluation targets in cooperation with domestic and international AI companies.
Kim Kyung-man, Director of the AI Policy Office at the Ministry of Science and ICT, said, "At a time when the global discussion on AI safety emphasizes verification and implementation over regulation, this evaluation demonstrates the safety competitiveness of domestic AI models," adding, "We will support domestic AI models in taking global leadership in AI safety."
