
Kakao Establishes ‘Kakao ASI’ for AI Risk Management

Kakao announced on the 23rd that it has established the “Kakao Artificial Intelligence Safety Initiative (Kakao ASI),” a risk management framework designed to identify and manage risks that may arise in the development and operation of artificial intelligence (AI) technology. The framework was unveiled at the company’s developer conference, “If Kakao AI 2024,” held on the 22nd.

Kakao ASI aims to minimize risks associated with AI technology development and operations and to build safe and ethical AI systems. The comprehensive guidelines cover the entire lifecycle of AI systems, from design and development through testing, deployment, monitoring, and updates, enabling proactive risk management. The framework addresses a wide range of potential risks arising from both AI and humans, including those caused by human error or negligence.

Kakao ASI comprises three key elements: the Kakao AI Ethics Principles, a Risk Management Cycle, and AI Risk Governance. The Kakao AI Ethics Principles build on the “Guidelines for Responsible AI in the Kakao Group,” announced in March last year, and cover social ethics, inclusivity, transparency, privacy, and user protection. The company plans to provide ethical guidelines for both developers and users.

The Risk Management Cycle offers a systematic approach to dealing with risks, consisting of three stages: identification, evaluation, and response. It is designed to minimize unethical or flawed behavior in AI technology and to secure safety and reliability, and it is applied repeatedly throughout the entire lifecycle of AI systems.

AI Risk Governance is a decision-making framework for managing and supervising the development, deployment, and use of AI systems. It encompasses the organization’s policies, procedures, and lines of responsibility, as well as alignment with external regulations, and reviews related risks from multiple perspectives. Kakao ASI governance is structured in three tiers: the AI Safety organization, the ERM Committee (the body dedicated to company-wide risk management), and the top decision-making body.

Kyunghun Kim, Kakao’s AI Safety Leader, said that what differentiates Kakao’s AI risk management framework from others is that it distinguishes between AI and humans as sources of risk, with evaluation and response plans tailored to the characteristics of risks from each source.

Kakao plans to continue quickly identifying and addressing risks in AI technology development and operations even after the establishment of Kakao ASI, and will continuously upgrade the framework. The company intends to refine the framework to keep pace with rapidly changing environments and technological demands and to strengthen the trustworthiness and safety of its AI systems.
