At the National Assembly, concerns have been raised that appropriate ethical regulations must accompany the government's policy of promoting artificial intelligence (AI). On October 13, during the National Assembly's Science, ICT, Broadcasting and Communications Committee audit, Representative Lee Sang-hui demonstrated the serious risks of video-synthesis AI (deepfake technology), criticizing the government's AI policy as overly focused on industry development.
Representative Lee conducted a live demonstration in which voice files, photos, and video clips of KBS President Park Jang-beom were used to create a manipulated video that appeared genuine. He warned that if AI technology is misused, it could be used to generate fake news, spread false information, and manipulate public opinion, threatening the democratic order.
Lee pointed out that, although the Ministry of Science and ICT has allocated more than 5 trillion won for AI-related science and technology innovation in 2026, spending on managing and responding to AI risks, such as fake videos and ethical issues, remains minimal and poorly specified.
He emphasized that while investment in AI is crucial, there must also be preventative and defensive investment to protect citizens from fake videos, manipulated content, misinformation, and election interference, and he urged the government to draw up comprehensive countermeasures.
In response, Vice Minister of Science and ICT Lee Kea-hoon agreed with the concerns, saying that institutional safeguards against the misuse of technology must accompany efforts to advance AI. He also cited ongoing work to embed AI safety and trust provisions in the AI Basic Law, as well as R&D at the AI Safety Institute on technologies including deepfake prevention.