KAIST Research Team Demonstrates Unauthorized Personal Information Extraction by LLM Agents
The Korea Advanced Institute of Science and Technology (KAIST) has announced that its researchers have demonstrated that large language model (LLM) agents can be used to extract personal information without authorization.
With recent advances in AI technology, large language models such as ChatGPT have evolved into autonomous AI agents capable of making decisions and solving problems without human intervention.
Controversy arose after Google recently removed a clause from its AI ethics guidelines that prohibited the use of AI technology for weapons or surveillance, raising concerns about the misuse of LLM agents.
When combined with web-based tools, LLM agents can search for and use real-time information, and these powerful capabilities increase the risk of cyberattacks.
The joint research team led by Professor Seung-Won Shin of the Department of Electrical and Electronic Engineering at KAIST and Professor Ki-Min Lee of the Kim Jaechul Graduate School of AI demonstrated that the built-in safeguards of commercial LLM services can be bypassed to carry out cyberattacks.
Using commercial LLM models such as ChatGPT, Claude, and Gemini, the research team ran a test in which the agents automatically collected personally identifiable information on computer science professors at major universities. They found that personal information could be collected with up to 95.9% accuracy, in an average of 5 to 20 seconds, at a cost of only 30 to 60 KRW (a few US cents).
This approach enables far faster attacks with less effort than a traditional attacker could manage.
Furthermore, in an experiment in which an LLM agent impersonated a well-known professor to generate fake posts, up to 93.9% of the posts were judged to be genuine.
Simply entering a target's email address was enough to generate customized phishing emails, and the click-through rate on links in these emails reached 46.7%, far higher than that of typical phishing attacks.
The first author of the study, researcher Hannah Kim, stated, “As LLMs are endowed with more capabilities, the threat of cyberattacks increases exponentially,” and emphasized the need for security measures that consider the capabilities of LLM agents.
Professor Seung-Won Shin added, “We plan to discuss security measures in collaboration with LLM service providers and research institutions.”
(Photo courtesy of KAIST, Yonhap News)