
On the 1st (local time), the European Union (EU) brought into force the world's first comprehensive artificial intelligence (AI) regulation.
According to the European Commission, the EU's executive body, the AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect fundamental rights. The law classifies AI systems into four levels based on the risks that may arise when the technology is used in specific products or fields: the greater the potential for harm, the stricter the rules that apply.
AI used in areas such as healthcare, education, elections, and critical infrastructure is classified as high-risk and requires human oversight and risk-management systems. AI applications that may infringe on fundamental rights are prohibited outright. These include social scoring, in which AI systems rate individuals based on personal characteristics and behavior, and the indiscriminate scraping of facial images from the internet or closed-circuit television (CCTV) footage to build databases.
The use of real-time remote biometric identification systems by law enforcement agencies is restricted except in narrowly defined cases. Transparency obligations, such as disclosing the content used to train models, are imposed on general-purpose AI (GPAI).
The bans on prohibited practices take effect six months after the law's entry into force, obligations for general-purpose AI apply 12 months after that date, and full implementation begins in August 2026, two years after entry into force.
According to the Commission, supplying incorrect information related to AI technology can draw a fine of 1.5% of annual global revenue, and violating the law's obligations a fine of 3%. Where the law is violated through the use of prohibited AI applications, fines can reach a maximum of 7%.