Korea’s New AI Law: Not a Progeny of Brussels
By: Hosuk Lee-Makiyama, Jimmyn Parc, Claudia Lozano
Subjects: Digital Economy, Far-East

Background to the Korean law
On 26 December 2024, Korea’s National Assembly passed the ‘Act on the Development of AI and Establishment of Trust’ (AI Basic Act)[1], consolidating 19 different AI regulatory proposals. Striving to keep pace with industry leaders in AI, the AI Basic Act seeks to create opportunities for businesses using AI, providing responsible oversight through a clear set of ethics guidelines and reducing uncertainty over how to proceed with ‘high-impact’ AI systems.
Similar to the EU, Korea has some history of “regulate fast and loose”: it was the first country in the world to enact a Cloud Computing Act, in 2015. But unlike the EU, Korea has a nimble and executive-driven legislative culture that can easily amend failed experiments, and the new AI Basic Act mandates regular review of the legislation and continuous benchmarking against international standards. Every three years, the Ministry of Science and ICT must establish a plan covering AI technologies, industries, usage and standardisation initiatives to enhance national competitiveness, under its regulatory oversight mandate.
And (arguably) unlike the EU AI Act, the Act has been broadly well received, with considerable support from businesses. Beyond the demand for legal certainty, Korean businesses had anticipated greater financial support through large-scale public-private investment that would strengthen their competitive position in the global AI race.
Besides the funding and industrial policy elements, the new law bears many structural similarities to the EU AI Act. Both advocate a risk-based approach (albeit in very different ways) and classify certain use-cases as “high-impact AI.” Both contain transparency requirements, guidelines and civil penalties. Such similarities may lead observers to conclude that Korea is following the path of Brussels. However, the resemblance is merely superficial.
Different approach and scope
At the outset, the two laws are fundamentally different in their approach: the AI Basic Act integrates regulatory governance with industrial growth and leans toward post-market oversight, while the EU prioritises risk-based regulation through ex-ante obligations that focus almost exclusively on risk mitigation through regulatory safeguards. Where the EU has conflated detailed product market regulations, civil liabilities, and fundamental rights into an omnibus act, Korea adopts a more integrated approach, balancing regulation and the promotion of domestic innovation within the same Act.
Furthermore, Korea’s AI Basic Act merely sets broad principles for AI governance, emphasising ethical AI and safe practices. In contrast, the EU AI Act divides AI systems into four risk categories according to sectoral use cases, rather than through a genuine risk-based, case-by-case assessment.
Like the EU AI Act, the new Korean law encompasses all AI technologies and applications. It also contains specific obligations for generative AI and “high-impact” systems (more on that below). It applies to foreign entities above a certain number of users in Korea (who must appoint a local representative) – but unlike the EU AI Act, the AI Basic Act applies only to developers and to entities offering products and services that utilise AI, not to the users of AI. Nor does the Korean law single out general-purpose AI, as some of the more controversial provisions of the EU law do.
There are other minor differences in scope: while the Korean law contains the usual national defence and security exceptions, it does not exempt scientific research or pre-market testing. Nor does the AI Basic Act contain any direct prohibitions, although the Ministry of Science and ICT has the authority to issue suspensions for non-compliance, and prohibitions may yet emerge through enforcement decrees.
Emphasis on innovation and competitiveness
Korea integrates innovation and competitiveness measures directly into the legal framework, blending regulatory oversight with industrial development strategies (article 8). The National AI Committee carries the function of identifying and improving the competitiveness of the AI industry and promoting the use of AI in industrial sectors such as manufacturing and services, the public sector, international cooperation on AI and establishing international norms.
This is a distinctly different approach from the EU AI Act, which separates market regulation from technological adaptation and innovation promotion, the latter addressed through funding programmes such as Horizon Europe. The EU AI Act does, however, introduce regulatory sandboxes (article 53), which are not explicitly enabled by the Korean law.
Obligations and requirements
The AI Basic Act is built on a framework of promoting trust, rather than Europe’s detailed product regulation designed to ban risks (article 13). Under the new Korean law, developers may be required to align with ethical principles but otherwise face few explicit legal obligations (article 12), with no specific duties for transparency or record-keeping. Nor does it explicitly single out general-purpose AI with distinct obligations. Instead, the AI Basic Act reiterates a ‘trust foundation’ for ‘human rights’ across its provisions (articles 2, 27, 29, 30, 31).
The Act also empowers the government to establish ethical principles (article 27) to promote ethical AI for human life and health, the accessibility of products, and contribution to human life and prosperity. Unlike the current direction of travel in Europe, where a ‘code of ethics’ drafted by unaccountable academics and businesses becomes semi-binding law, these principles remain non-binding guidance.
AI businesses have an obligation to secure artificial intelligence safety by identifying, assessing and mitigating risks across the AI lifecycle, and by establishing a risk management system to monitor and respond to AI-related safety incidents (article 32). AI business operators must then analyse the results of implementation and submit them to the Minister of Science and ICT, who will establish and notify the specific implementation methods and the matters necessary for the submission of results.
“High-impact” vs “high-risk”
According to the AI Basic Act, “High-Impact AI” (article 2) is an AI system that may impact or cause danger to human life, physical safety, and fundamental rights. Areas deemed high-impact include:
- Supply of energy.
- Production processes of drinking water.
- Establishment and operation of health care provision and utilisation systems.
- Safe management and operation of nuclear substances and facilities.
- Analysis and use of biometric personal information.
- Key operations and management of transportation means, facilities, and systems.
- Judgments or evaluations that significantly affect the rights and obligations of individuals, such as hiring, loan screening, transport systems and decision-making by the state, local governments, and public institutions.
- Student evaluation in early childhood, primary, and secondary education.
- Any other areas that may have a significant impact on the protection of human life, physical safety, and fundamental rights.
Providers of high-impact AI should “endeavour to obtain inspection and certification in advance” (article 31). If an AI provider intends to provide a product or service using high-impact AI or generative AI, it shall disclose that the product or service is operated based on such AI (article 31). While Korean users “should prioritise” systems that have been tested and certified (article 30) for high-impact AI use-cases, the Act does not explicitly require the use of such systems. This is a significant divergence from the EU AI Act.
Otherwise, both the Korean and EU laws introduce transparency requirements for their higher categories (high-impact and high-risk respectively), requiring providers to develop and implement a plan explaining the final results of the AI, the main criteria used, and an overview of the training data used to develop and operate the AI.
Both Korean and EU laws require ex-ante assessments of the higher categories of high-impact or high-risk. In the AI Basic Act, high-impact AI systems must undergo an ex-ante review submitted to the Ministry of Science and ICT, and an expert committee can be established to advise if necessary. High-impact systems may be required to implement a risk management plan and human oversight of the high-impact AI (article 34). The Ministry of Science and ICT may further develop and disseminate guidelines on standards and examples of high-impact AI (article 33).
However, unlike in the EU, the AI Basic Act does not require third-party conformity assessment of high-risk systems, and the technical documentation requirements are less rigid under Korean law. Overall, the EU AI Act outlines a structured pre-market conformity assessment (articles 43-51) that is de facto a licensing regime, whereas the Korean law emphasises post-hoc oversight, supported by new agencies (e.g. the AI Safety Research Institute, article 12), in a manner similar to antitrust enforcement.
Enforcement and civil liabilities
In addition to the general and specific obligations, the AI Basic Act establishes several new public bodies, including the National AI Committee (article 7), AI Policy Center (article 11), the aforementioned AI Safety Research Institute, and the Korea AI Promotion Association. These institutions will support the implementation and enforcement of the AI Basic Law in all its aspects.
The Minister of Science and ICT is also granted investigative powers, while the National AI Committee (articles 7, 8) oversees and guides national AI policies, including the establishment of R&D and investment strategies and AI data centres. The Committee is also responsible for monitoring societal shifts and policy responses related to high-impact AI, and it can make recommendations or express opinions to the heads of national organisations and AI companies.
As for sanctions (articles 42, 43), fines of not more than 30 million won (equivalent to just €21,500) may be issued for failure to fulfil the disclosure obligation (article 31(1)), the absence of a local representative (article 36(1)), or non-compliance with a cease-and-desist or corrective order (article 40(3)).
These fines are just a fraction of those in the EU, which can amount to €35 million or 7 per cent of total worldwide annual turnover. Furthermore, systems that comply with the AI Basic Act cannot be held accountable under civil liability, whereas the EU opens the door to civil liability under the AI Liability Directive with a reversed burden of proof – i.e. developers are presumed liable unless they prove otherwise.
Next Steps
It goes without saying that the Korean AI law is far more innovation-friendly than the EU AI Act and draws relatively little from EU legislative techniques. On the one hand, the AI Basic Act adopts elements of the EU’s sectoral approach by singling out certain activities as “high-impact”. On the other hand, Korea avoids the most restrictive and binding ex-ante interventionist approaches for these “high-impact” activities and does not impose strict product liability on AI developers.
On 21 January 2025, the AI Basic Act was signed into law and will take effect one year after its promulgation. Key details of the Act – the definitions of high-impact AI, safety measures, and computational thresholds – will need further clarification through Presidential Decrees or notifications from the Ministry of Science and ICT.
Nevertheless, it is clear that the Korean Act does not adhere to the EU’s prohibitive mindset, but aims to establish a safe and reliable AI framework while preserving room for innovation by integrating soft law standards, liability limitations and R&D promotion into a single framework.
Korea’s AI Basic Act may dress like a Brussels daydream, but in reality it is more akin to US Executive Orders that promote AI uptake; as such, Korea aligns with the vast majority of the world, which has taken a “wait and see” stance while the technology is still in its infancy. Indeed, it is the lack of clear financial support that has raised industry concerns – support which Korean platforms and device manufacturers view as crucial to Korea’s advancement in the global AI race against the likes of DeepSeek or OpenAI.
[1] https://likms.assembly.go.kr/bill/billDetail.do?billId=PRC_R2V4H1W1T2K5M1O6E4Q9T0V7Q9S0U0