Origins and Evolution of Korea's AI Ethics Framework

South Korea's approach to AI ethics governance reflects a deliberate effort to balance innovation promotion with responsible deployment. The country's AI ethics framework has evolved through several stages, from initial voluntary guidelines to increasingly structured governance mechanisms that intersect directly with K-Moonshot's technology development agenda.

The foundation was laid with the National AI Ethics Guidelines, published by the Ministry of Science and ICT (MSIT) in December 2020. These guidelines established three core principles: human dignity, the public good of society, and the purposefulness of technology. Under these principles, ten detailed requirements were articulated, covering areas including human autonomy, privacy protection, diversity and inclusion, accountability, transparency, safety, and environmental sustainability. The guidelines were developed through extensive consultation with industry stakeholders, academic researchers, civil society organisations, and international governance bodies.

While the 2020 guidelines were voluntary, they established the normative foundation upon which Korea's more binding AI governance mechanisms have been constructed. The guidelines drew explicitly on the OECD AI Principles, to which Korea was an original adherent, ensuring alignment with the international governance mainstream while preserving space for Korean-specific adaptations.

The AI Ethics Self-Checklist

The AI Ethics Self-Checklist, introduced by MSIT as a practical implementation tool, represents Korea's primary mechanism for operationalising AI ethics principles within government and public-sector AI deployments. The checklist requires developers and deployers of AI systems used in government services to conduct self-assessments across multiple dimensions: fairness of training data, algorithmic transparency, bias detection and mitigation, privacy protection, human oversight provisions, and accountability mechanisms.

The self-checklist is mandatory for all AI systems deployed by central government agencies and recommended for local government and public institutions. While compliance is monitored through periodic reviews, enforcement has relied primarily on institutional peer pressure and reputational incentives rather than punitive sanctions. This soft enforcement approach reflects Korea's broader regulatory philosophy of guiding technology development through norms and incentives rather than rigid prescriptive rules.

For K-Moonshot, the self-checklist is relevant to AI systems developed under government-funded research programmes. AI models and applications emerging from K-Moonshot missions, including Mission 7 (Physical AI Models), Mission 1 (Drug Development), and Mission 10 (AI Scientists), are expected to comply with the self-checklist when deployed in government-supported contexts, even as the checklist requirements may be adapted over time to accommodate the novel capabilities and risks of K-Moonshot technologies.

AI ETHICS GOVERNANCE APPROACH
RISK-BASED, INNOVATION-ORIENTED

Korea's AI ethics governance combines risk-based classification with innovation-friendly enforcement, seeking to protect public interests without constraining the rapid AI development targeted by K-Moonshot.

AI Impact Assessment Framework

Korea has developed an AI Impact Assessment (AIA) framework that requires structured evaluation of AI systems' potential effects on individuals, communities, and society before deployment. The AIA framework draws inspiration from privacy impact assessments under PIPA and environmental impact assessments in industrial regulation, applying similar analytical methodologies to the domain of artificial intelligence.

The AIA framework categorises AI systems into risk tiers based on their deployment context, decision-making authority, and potential for harm. High-risk applications, including AI systems used in criminal justice, healthcare diagnostics, employment decisions, and financial credit scoring, require comprehensive impact assessments that evaluate algorithmic fairness, potential for discrimination, transparency of decision-making processes, and mechanisms for human appeal and oversight.

For K-Moonshot, the AIA framework is particularly relevant to Mission 2 (Brain Implant Commercialization), where AI-driven neural interfaces raise profound ethical questions about cognitive autonomy, consent, and privacy; Mission 1 (Drug Development), where AI-assisted drug discovery algorithms must be validated for safety and efficacy; and Mission 6 (Humanoid Robots), where AI systems controlling physical robots in proximity to humans must meet safety standards that balance innovation with public protection.

The AI Basic Act: Legislative Development

Korea's legislative landscape for AI governance has evolved toward a comprehensive AI Basic Act that would establish a binding legal framework for AI development, deployment, and oversight. The proposed legislation has been under development through multiple National Assembly sessions, with debates centring on several key questions: the scope of AI systems subject to regulation; the institutional structure of AI governance (whether to establish a dedicated AI regulatory body or distribute authority across existing ministries); the balance between ex-ante regulation (pre-deployment requirements) and ex-post accountability (post-deployment monitoring and remediation); and the treatment of foundation models and general-purpose AI systems.

The AI Basic Act's development timeline has been complicated by the tension between Korea's desire to maintain a pro-innovation regulatory environment and the growing international pressure for more prescriptive AI governance, exemplified by the EU AI Act. K-Moonshot's aggressive technology development timelines create an additional dimension of this tension: overly prescriptive regulation could delay mission-critical research and development, while insufficient regulation could expose K-Moonshot technologies to liability risks and international market access barriers.

Korea's approach has been to pursue a risk-based regulatory framework that imposes binding requirements on high-risk AI applications while maintaining light-touch oversight for lower-risk uses and research contexts. This approach is consistent with the OECD AI Principles and broadly aligned with the EU AI Act's risk classification methodology, though Korea's implementation emphasises flexibility and regulatory sandbox mechanisms that allow innovative AI applications to be tested under controlled conditions before full regulatory requirements are imposed.

Sector-Specific AI Ethics Governance

Beyond the horizontal AI ethics framework, Korea has developed sector-specific governance mechanisms for AI deployment in particularly sensitive domains.

Healthcare AI

The Ministry of Food and Drug Safety (MFDS) has established regulatory pathways for AI-based medical devices, including diagnostic algorithms, treatment recommendation systems, and AI-assisted surgical tools. These pathways require clinical validation, algorithmic transparency documentation, and post-market surveillance; these requirements directly affect K-Moonshot's drug development acceleration mission and its brain implant commercialisation work. The MFDS has demonstrated a willingness to adapt regulatory frameworks for AI medical devices, approving several AI diagnostic tools and establishing a dedicated AI medical device review division.

Autonomous Systems

AI systems controlling physical machines, from autonomous vehicles to industrial robots to humanoid robots, are subject to overlapping safety regulations from multiple agencies. The Ministry of Land, Infrastructure and Transport oversees autonomous vehicle regulation, while the Ministry of Employment and Labor governs industrial robot safety. K-Moonshot's physical AI missions must navigate this multi-agency regulatory landscape, a challenge that the regulatory sandbox programme is specifically designed to address.

Financial AI

The Financial Services Commission (FSC) has established guidelines for AI use in financial services, covering algorithmic trading, credit scoring, fraud detection, and customer service automation. While financial AI is not a primary K-Moonshot mission area, the FSC's regulatory approach provides a model for other sector-specific AI governance frameworks and influences the broader normative environment in which K-Moonshot AI technologies are developed and deployed.

International AI Ethics Engagement

Korea's AI ethics framework development is informed by, and contributes to, international AI governance dialogue. Korea's participation in multiple international AI ethics initiatives strengthens the credibility and interoperability of its domestic framework.

OECD AI Policy

Korea was among the original adherents to the OECD AI Principles when they were adopted in 2019 and continues to participate actively in the OECD's AI Policy Observatory and working groups. The OECD framework's emphasis on human-centred, trustworthy AI aligns with Korea's national guidelines and provides international benchmarking for Korean AI governance practices.

Global Partnership on AI (GPAI)

Korea is a member of GPAI, participating in working groups on responsible AI, data governance, and the future of work. GPAI membership provides Korean policymakers with access to international best practices and facilitates bilateral AI governance dialogues with partner nations.

AI Safety Summits

Korea's participation in the AI Safety Summit process, including the AI Seoul Summit in May 2024, which it co-hosted with the United Kingdom, has positioned the country as an active contributor to international discussions on frontier AI safety. Korea's commitments at these summits include support for pre-deployment safety testing of frontier AI models, transparency in AI capability reporting, and international cooperation on AI incident response. These commitments, while voluntary, create normative expectations that influence the governance environment for K-Moonshot's most advanced AI systems.

Ethical Challenges Specific to K-Moonshot

Several K-Moonshot missions raise ethical challenges that extend beyond the current AI ethics framework's coverage, requiring governance innovation that keeps pace with technological capability.

Brain-Computer Interface Ethics

Mission 2's brain implant commercialisation raises questions about cognitive liberty, mental privacy, and neural data governance that existing AI ethics frameworks were not designed to address. The prospect of AI systems that can read, interpret, or influence neural activity creates unprecedented ethical territory that requires new governance principles addressing neural data ownership, informed consent for brain-computer interface use, and the boundaries between therapeutic and enhancement applications.

Autonomous Decision-Making in Physical AI

Mission 7 and Mission 6 involve AI systems that make autonomous decisions in the physical world, controlling robots, vehicles, and industrial processes with limited human oversight. The ethical frameworks for such systems must address liability allocation (when an AI-controlled robot causes harm, who is responsible?), the appropriate level of human oversight for different deployment contexts, and the transparency requirements for AI decision-making in safety-critical applications.

AI in National Security Contexts

Several K-Moonshot technologies, including advanced AI chips, quantum computing, and space data centres, have dual-use potential with national security implications. The ethical framework must address the boundaries between civilian and military AI applications, the governance of dual-use AI research, and the transparency obligations that apply when government-funded AI research has potential military applications.

Industry Self-Regulation and Corporate Ethics

Korea's major technology companies have developed their own AI ethics frameworks and governance structures that complement the government's national guidelines. Samsung Electronics' AI ethics principles, Naver's AI ethics guidelines, and Kakao's AI ethics policy establish corporate-level commitments to responsible AI development that often exceed minimum regulatory requirements. These corporate frameworks provide an additional governance layer for K-Moonshot, as participating companies are expected to apply their internal ethics standards to mission-related AI development.

The effectiveness of industry self-regulation, however, depends on consistent implementation and genuine corporate commitment. Critics have noted that corporate AI ethics statements can function as public relations instruments rather than binding operational constraints. The challenge for K-Moonshot's governance framework is to leverage corporate ethics commitments as complements to, rather than substitutes for, binding regulatory requirements.

Outlook: Adaptive Governance for Transformative Technology

Korea's AI ethics framework faces the fundamental challenge of governing technologies that are evolving faster than governance mechanisms can adapt. K-Moonshot's ambitious technology development timelines, targeting completion of all 12 national missions by 2035, create pressure for governance frameworks that enable rapid innovation while protecting public interests. The tension between these objectives is not unique to Korea but is particularly acute given the scale and ambition of the K-Moonshot programme.

The most promising approach, and the one Korea appears to be pursuing, is adaptive governance: establishing clear ethical principles and risk-based classification frameworks while maintaining flexibility in implementation through regulatory sandboxes, iterative guideline updates, and multi-stakeholder governance bodies that can respond to emerging ethical challenges in near-real-time. This approach recognises that the AI ethics challenges of 2030, when K-Moonshot missions are approaching their critical milestones, will differ substantially from those of 2026, and that governance frameworks must be designed for evolution rather than permanence.

For K-Moonshot stakeholders, the AI ethics framework represents both a constraint and an enabler. It constrains the programme by requiring additional compliance steps and governance processes that slow development timelines. It enables the programme by building public trust, international credibility, and market access that would be jeopardised by ethical failures. The challenge is to maintain the right balance, ensuring that Korea's AI ethics framework serves as a foundation for responsible innovation rather than a barrier to the transformative ambitions that define K-Moonshot.