Introduction: Memory as the Bottleneck of the AI Era
In the architecture of modern artificial intelligence, memory has emerged as the critical bottleneck. Training and running large language models, computer vision systems, and generative AI applications requires moving data between memory and processors at rates that far exceed what conventional memory technologies can deliver. High Bandwidth Memory (HBM) -- a three-dimensional stacking technology that bonds multiple DRAM dies vertically using through-silicon vias (TSVs), delivering memory bandwidth measured in terabytes per second -- has become the indispensable component enabling the AI revolution. And Korea dominates its production.
SK Hynix and Samsung Electronics, Korea's two memory semiconductor giants, collectively control approximately 85% of the global HBM market as of 2025. SK Hynix alone holds roughly 50% market share and is the primary HBM supplier to NVIDIA, whose GPU accelerators power the vast majority of AI training and inference workloads worldwide. This dominance is not a legacy advantage gradually eroding under competitive pressure; it is an actively strengthening position driven by relentless technology advancement, massive capital investment, and deep integration with the world's leading AI chip designers.
K-Moonshot's eleventh mission -- developing ultra-high-performance, low-power AI accelerator chips -- is intrinsically linked to Korea's HBM leadership. AI accelerator performance is increasingly memory-bound; the most capable AI chips are those that can access the most data the fastest, and HBM is the technology that makes this possible. Korea's ability to supply both the memory and, through K-Moonshot, develop competitive AI accelerator logic chips would create a vertically integrated AI hardware capability matched only by the most advanced semiconductor ecosystems in the world.
HBM Technology: Architecture and Evolution
High Bandwidth Memory achieves its extraordinary performance through vertical stacking: multiple DRAM dies are manufactured individually, thinned to approximately 30-40 micrometres, aligned with sub-micrometre precision, and bonded using through-silicon vias -- thousands of tiny electrical connections that run vertically through each die. The resulting stack, built on a base buffer die that manages data routing and thermal regulation, is then connected to a processor (GPU, TPU, or AI accelerator) through a silicon interposer using advanced packaging technology.
HBM Technology Evolution
| Generation | Stack Height | Bandwidth | Capacity (per stack) | Launch Year | Key Adopter |
|---|---|---|---|---|---|
| HBM | 4-Hi | 128 GB/s | 1 GB | 2015 | AMD |
| HBM2 | 8-Hi | 256 GB/s | 8 GB | 2016 | NVIDIA V100 |
| HBM2E | 8-Hi | 460 GB/s | 16 GB | 2020 | NVIDIA A100 |
| HBM3 | 8-Hi | 819 GB/s | 24 GB | 2022 | NVIDIA H100 |
| HBM3E | 8/12-Hi | 1.18 TB/s | 36 GB | 2024 | NVIDIA H200/B200 |
| HBM4 | 12/16-Hi | 1.6+ TB/s | 48 GB+ | 2025-2026 | NVIDIA B300+ |
| HBM4E | 16-Hi | 2+ TB/s | 64 GB+ | 2027 (est.) | Next-gen GPUs |
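The bandwidth column above follows from simple interface arithmetic: every HBM generation through HBM3E exposes a 1024-bit bus per stack, so per-stack bandwidth is bus width times per-pin data rate divided by 8 bits per byte. A minimal sketch of that relationship (the per-pin rates below are approximate headline figures by generation, not taken from this document):

```python
# Per-stack bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8 bits-per-byte.
BUS_WIDTH_BITS = 1024  # HBM through HBM3E uses a 1024-bit interface per stack

def stack_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """Return per-stack bandwidth in GB/s for a given per-pin data rate."""
    return bus_width_bits * pin_rate_gbps / 8

# Approximate per-pin data rates (Gb/s) by generation -- illustrative assumptions.
pin_rates = {"HBM": 1.0, "HBM2": 2.0, "HBM2E": 3.6, "HBM3": 6.4, "HBM3E": 9.2}

for gen, rate in pin_rates.items():
    print(f"{gen}: {stack_bandwidth_gbps(rate):.0f} GB/s per stack")
```

HBM4 widens the interface to 2048 bits per stack, which is why its bandwidth climbs past 1.6 TB/s even at comparable per-pin speeds.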
Manufacturing Complexity
The manufacturing complexity of HBM represents a formidable barrier to entry. Each HBM stack requires precision die thinning (from standard ~750 micrometres to ~30-40 micrometres), TSV formation (etching and filling thousands of vertical channels through silicon), micro-bump deposition (creating the electrical connections between stacked dies), thermocompression bonding (fusing dies under precisely controlled heat and pressure), and extensive testing (each die must function correctly both individually and within the stack). The yield challenges multiply with each additional layer in the stack: an 8-Hi stack with 95% per-die yield produces only 66% good stacks; a 12-Hi stack with the same per-die yield drops to 54%.
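The compounding yield loss described above can be reproduced directly: if each die in a stack is good with independent probability p, the whole n-high stack is good with probability p^n. A short sketch (the independence of per-die yield is a simplifying assumption; real HBM yield also depends on TSV and bonding defects):

```python
def stack_yield(per_die_yield: float, stack_height: int) -> float:
    """Probability that every die in an n-high stack is good,
    assuming each die yields independently with the same probability."""
    return per_die_yield ** stack_height

# Reproduce the figures in the text, plus the 16-Hi case HBM4E will require.
for n in (8, 12, 16):
    print(f"{n}-Hi at 95% per-die yield: {stack_yield(0.95, n):.0%} good stacks")
```

The same arithmetic shows why each added layer raises the stakes: at 95% per-die yield, a 16-Hi stack falls to roughly 44% before any bonding losses are counted.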
This manufacturing difficulty explains why only three companies in the world produce HBM at scale: SK Hynix, Samsung Electronics, and Micron Technology (US). SK Hynix's lead is attributed to its early commitment to HBM development (beginning R&D in 2013), its close engineering collaboration with NVIDIA, and its advanced TSV and bonding process technology developed at its Icheon and Cheongju fabrication facilities in Korea.
SK Hynix: The HBM Market Leader
SK Hynix's position as the world's leading HBM manufacturer represents one of the most consequential competitive advantages in the global semiconductor industry. The company's approximately 50% share of the HBM market translates into dominant supplier status for NVIDIA, the company whose GPU accelerators are used in an estimated 80-90% of AI training workloads globally.
NVIDIA Partnership
The SK Hynix-NVIDIA relationship is among the most strategically important supplier-customer partnerships in the technology industry. SK Hynix was the first to mass-produce HBM3 for NVIDIA's H100 GPU, which became the defining hardware product of the AI boom beginning in 2023. The company similarly led with HBM3E for NVIDIA's H200 and B200 accelerators, maintaining its position as NVIDIA's primary HBM supplier across successive GPU generations. Industry estimates suggest that approximately 95% of NVIDIA's HBM is sourced from Korean manufacturers (SK Hynix and Samsung), with SK Hynix supplying the majority.
This partnership extends beyond simple component supply. SK Hynix and NVIDIA engage in joint specification development, where HBM performance characteristics are co-designed with GPU memory controller architectures to maximise system-level performance. This co-development relationship creates switching costs that reinforce SK Hynix's market position: competing HBM suppliers must not only match SK Hynix's manufacturing capability but also replicate its integration depth with NVIDIA's GPU design teams.
Financial Impact
SK Hynix HBM Revenue and Profitability
| Metric | 2023 | 2024 | 2025E | 2026E |
|---|---|---|---|---|
| HBM Revenue (T KRW) | ~4.5 | ~12 | ~22 | ~30+ |
| HBM % of Total Revenue | 12% | 22% | 35% | 40%+ |
| HBM Gross Margin (est.) | ~50% | ~55% | ~60% | ~55-60% |
| Total Revenue (T KRW) | 32.8 | 55+ | 65+ | 75+ |
HBM has transformed SK Hynix from a cyclical DRAM and NAND manufacturer into a structural growth company. HBM revenue has grown from approximately 4.5 trillion KRW in 2023 to an estimated 22 trillion KRW in 2025, representing the fastest-growing segment of the global semiconductor market. HBM's gross margins, estimated at 55-60%, substantially exceed conventional DRAM margins (typically 30-45%), reflecting the product's technical differentiation and the supply-demand imbalance driven by insatiable AI infrastructure demand.
Capacity Expansion
SK Hynix is investing heavily to expand HBM production capacity. The company's new M15X fabrication facility in Cheongju, South Korea, with an investment exceeding 20 trillion KRW over multiple phases, is primarily dedicated to HBM and advanced DRAM production. Additionally, SK Hynix's planned facility in Indiana, United States -- announced as part of the broader US-Korea semiconductor cooperation framework -- will include HBM production capability, diversifying the company's manufacturing geography while maintaining proximity to its largest customer, NVIDIA (headquartered in Santa Clara, California).
Samsung Electronics: Closing the HBM Gap
Samsung Electronics, the world's largest memory semiconductor manufacturer by total revenue, has found itself in the unusual position of challenger rather than leader in the HBM segment. Despite Samsung's dominant position in conventional DRAM (approximately 42% global market share) and NAND flash, the company has trailed SK Hynix in HBM technology and market share, a gap that Samsung's management has publicly acknowledged and committed to closing.
Samsung's HBM Challenges
Samsung's HBM challenges have been primarily related to manufacturing yield and thermal performance. The company's HBM3E products experienced qualification delays with NVIDIA in 2024, reportedly due to heat dissipation issues and yield shortfalls that prevented Samsung from meeting NVIDIA's specifications at the volumes and timeline required. These delays allowed SK Hynix to consolidate its market share lead during a period of explosive HBM demand growth.
Samsung has responded with a comprehensive remediation programme. The company reorganised its memory business division in late 2024, creating a dedicated HBM task force reporting directly to the CEO. Capital expenditure for advanced packaging (the manufacturing step most critical for HBM quality) was increased significantly, with industry estimates suggesting Samsung allocated over 5 trillion KRW to packaging-related investments in 2025. Process improvements in TSV formation and thermocompression bonding have reportedly improved Samsung's HBM3E yields to levels competitive with SK Hynix, and the company has begun shipping qualified HBM3E to NVIDIA and other customers in volume.
HBM4: Samsung's Opportunity to Leapfrog
Samsung views HBM4 as an architectural inflection point that could reset competitive dynamics. HBM4 introduces a fundamentally redesigned interface between the logic buffer die and the DRAM stack, moving from the current peripheral architecture to a more integrated design that co-locates processing logic with memory -- a concept Samsung has branded "Processing-In-Memory" (PIM) or "Computation-Near-Memory" (CNM). Samsung's vertically integrated manufacturing capability -- combining DRAM fabrication, logic foundry (for the buffer die), and advanced packaging in a single company -- provides a structural advantage for HBM4's more complex architecture that pure-play memory companies cannot easily replicate.
Samsung vs SK Hynix HBM Competitive Comparison
| Dimension | SK Hynix | Samsung |
|---|---|---|
| HBM Market Share (2025) | ~50% | ~35% |
| Primary Customer | NVIDIA (#1 supplier) | NVIDIA, AMD, others |
| HBM3E Status | Volume production | Volume production (H2 2025) |
| HBM4 Timeline | H2 2025 (sampling) | H1 2025 (sampling) |
| 12-Hi Stack Capability | Production (HBM3E) | Production (HBM3E) |
| Logic Integration | Buffer die (external) | Buffer die (internal foundry) |
| Advanced Packaging | TCB, hybrid bonding (dev) | TCB, hybrid bonding (dev) |
| Fab Investment (2025-2026) | ~25T KRW | ~30T KRW |
AI Data Centre Demand: The Engine of HBM Growth
HBM demand is driven almost entirely by the explosive buildout of AI training and inference infrastructure in hyperscale data centres. The world's largest cloud computing companies -- Microsoft (Azure), Google (Google Cloud), Amazon (AWS), Meta, Oracle, and an expanding roster of AI-focused infrastructure providers -- are deploying NVIDIA GPU clusters at an unprecedented rate, each GPU requiring multiple HBM stacks.
Demand Arithmetic
The demand mathematics illustrate why HBM supply remains tight. NVIDIA's B200 GPU, the flagship AI training accelerator as of early 2026, uses eight stacks of HBM3E, each containing 24 GB of memory for a total of 192 GB per GPU. A single DGX B200 server contains eight B200 GPUs, requiring 64 HBM stacks per server. A moderately sized GPU cluster for AI training might contain 1,000-10,000 such servers. The cumulative HBM requirement for the global AI data centre buildout is staggering.
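The arithmetic above scales straightforwardly to the cluster level. A sketch using the figures from this paragraph (the 1,000-server cluster size is the low end of the stated range):

```python
STACKS_PER_GPU = 8    # B200: eight HBM3E stacks per GPU
GB_PER_STACK = 24     # 24 GB per stack -> 192 GB per GPU
GPUS_PER_SERVER = 8   # DGX B200: eight GPUs per server

def cluster_hbm(servers: int) -> tuple[int, int]:
    """Return (total HBM stacks, total HBM gigabytes) for a GPU cluster."""
    stacks = servers * GPUS_PER_SERVER * STACKS_PER_GPU
    return stacks, stacks * GB_PER_STACK

stacks, gigabytes = cluster_hbm(1_000)
print(f"{stacks:,} HBM stacks, {gigabytes / 1e6:.1f} PB of HBM (decimal petabytes)")
```

Even at the low end, a single 1,000-server training cluster absorbs 64,000 HBM stacks, and hyperscalers are deploying many such clusters in parallel.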
Global HBM Demand Projections
| Year | HBM Demand (GB equiv.) | HBM Revenue ($B) | Growth (YoY) |
|---|---|---|---|
| 2023 | ~2.9B | $4.6 | -- |
| 2024 | ~8.4B | $16 | +248% |
| 2025E | ~14B | $28 | +75% |
| 2026E | ~22B | $40+ | +43% |
| 2027E | ~32B | $55+ | +38% |
| 2028E | ~42B | $68+ | +24% |
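The growth column in the table follows from the revenue column. A quick check (revenue figures as listed, taking the lower bound wherever the table shows a "+"):

```python
# HBM revenue by year in $B, as listed in the table (lower bounds for "+" entries).
revenue = {2023: 4.6, 2024: 16, 2025: 28, 2026: 40, 2027: 55, 2028: 68}

def yoy_growth(series: dict[int, float]) -> dict[int, float]:
    """Year-over-year growth in percent for each consecutive pair of years."""
    years = sorted(series)
    return {y: (series[y] / series[prev] - 1) * 100
            for prev, y in zip(years, years[1:])}

for year, pct in yoy_growth(revenue).items():
    print(f"{year}: {pct:+.0f}% YoY")
```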
Beyond NVIDIA: Expanding Customer Base
While NVIDIA dominates AI accelerator shipments, the HBM customer base is broadening. AMD's Instinct MI300X accelerator uses HBM3, positioning AMD as a significant HBM consumer. Google's custom TPU (Tensor Processing Unit) chips incorporate HBM for AI training and inference workloads within Google's data centres. Amazon's Trainium chips and Microsoft's Maia AI accelerator also use HBM. This diversification of HBM demand across multiple chip designers reduces Korean manufacturers' customer concentration risk while expanding the total addressable market.
Additionally, AI inference workloads -- which run trained models to generate outputs in real time -- are growing even faster than training workloads, and increasingly require HBM for latency-sensitive applications. As AI inference scales from data centres to edge computing and on-device applications, HBM variants optimised for lower power consumption and cost may open additional market segments.
Pricing Trends and Market Dynamics
HBM pricing has followed a trajectory distinct from conventional DRAM, reflecting the product's structural supply-demand imbalance and its critical role in AI infrastructure.
Price Premium Over Conventional DRAM
HBM commands a substantial price premium over conventional DRAM, reflecting its manufacturing complexity, lower yields, and constrained supply. As of early 2026, HBM3E pricing is estimated at approximately $12-15 per GB, compared to $2-3 per GB for conventional DDR5 DRAM -- a premium of approximately 5-6x. This premium has remained stable or increased even as conventional DRAM pricing has fluctuated, indicating that HBM pricing is driven by AI demand fundamentals rather than the cyclical supply-demand dynamics that characterise the broader DRAM market.
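The quoted premium can be sanity-checked at the midpoints of the two price ranges (a rough illustration of the arithmetic, not market data):

```python
# Midpoints of the quoted per-GB price ranges from the paragraph above.
hbm3e_price = (12 + 15) / 2  # $/GB for HBM3E
ddr5_price = (2 + 3) / 2     # $/GB for conventional DDR5

premium = hbm3e_price / ddr5_price
print(f"~{premium:.1f}x premium at the midpoints")
# The range endpoints span roughly 4x ($12 / $3) to 7.5x ($15 / $2).
```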
Industry analysts expect HBM pricing to remain firm through 2026-2027, supported by demand growth that continues to outpace capacity expansion. Longer-term, as SK Hynix, Samsung, and Micron all expand HBM capacity, pricing may moderate toward a 3-4x premium over conventional DRAM by 2028-2029, though this assumes no further acceleration in AI infrastructure demand beyond current projections.
Implications for Korean Semiconductor Revenue
HBM's revenue contribution to the Korean semiconductor industry has become strategically significant. Combined HBM revenue for SK Hynix and Samsung is projected to exceed $35 billion in 2025 and approach $50 billion by 2027. These figures represent a substantial portion of Korea's total semiconductor exports, which totalled approximately $140 billion in 2025. HBM has effectively become Korea's most valuable single semiconductor product category, surpassing conventional DRAM, NAND flash, and display driver ICs.
K-Moonshot Alignment: From Memory to AI Compute
Korea's HBM dominance intersects with K-Moonshot's eleventh mission in a strategically significant way. The mission's objective -- developing ultra-high-performance, low-power AI accelerator chips -- would extend Korea's AI hardware capability from memory (where it leads) to logic processing (where it currently trails the US). Success would create a vertically integrated Korean AI chip capability combining world-leading memory with competitive AI processing logic.
The Memory-Centric AI Architecture Opportunity
An emerging paradigm in AI chip design -- memory-centric or processing-in-memory (PIM) architectures -- plays directly to Korea's strengths. Rather than treating memory and compute as separate subsystems connected by bandwidth-limited interconnects, PIM architectures integrate processing capabilities within or adjacent to memory arrays. Samsung's HBM-PIM products, which embed simple processing units within HBM stacks, represent early commercial implementations of this concept. SK Hynix's AiM (Accelerator-in-Memory) products pursue a similar approach.
If memory-centric architectures prove to be the optimal paradigm for specific AI workloads (particularly inference), Korea's memory manufacturers would be uniquely positioned to capture value in both the memory and compute layers of the AI stack -- a vertically integrated position that no other national semiconductor ecosystem could replicate. K-Moonshot's AI accelerator mission explicitly funds research into these memory-centric architectures, recognising that Korea's path to AI compute leadership runs through its memory manufacturing expertise rather than attempting to compete with NVIDIA and AMD in conventional GPU architecture.
Risks and Challenges
Concentration risk: Korea's ~85% share of HBM production creates a concentration risk for the global AI industry that is increasingly recognised by US and European policymakers. Efforts to diversify HBM supply -- including Micron's expanded HBM production in Hiroshima, Japan, and potential future HBM manufacturing in the US and EU -- could gradually erode Korean market share. However, the technology and manufacturing complexity of HBM means that meaningful supply diversification is a multi-year process, and Korean manufacturers' lead in advanced HBM generations (HBM4, HBM4E) provides a moving-target advantage.
Export control exposure: Korean HBM manufacturers are navigating an increasingly complex export control environment. US restrictions on advanced semiconductor exports to China affect Korean HBM shipments, as HBM is a controlled technology when integrated into high-performance AI accelerators. SK Hynix and Samsung must balance compliance with US export controls against the commercial opportunity in China's large and growing AI market, a balancing act with significant revenue and geopolitical implications.
Technology disruption: Alternative memory technologies -- including GDDR7 (which offers higher bandwidth than conventional DRAM at lower cost than HBM), CXL-attached memory pools, and novel architectures such as MRAM or ReRAM for AI inference -- could potentially reduce HBM's market dominance in certain application segments. Korean manufacturers are investing in these alternative technologies as hedges, but HBM's performance advantage for the most demanding AI workloads is expected to persist through the current decade.
Outlook: HBM as Korea's AI Crown Jewel
Korea's dominance in High Bandwidth Memory represents the nation's single most strategically significant position in the global AI technology stack. In an era where AI is reshaping every industry and AI compute infrastructure is the subject of intense geopolitical competition, Korea's ability to supply 85% of the world's most critical AI memory technology is an asset of extraordinary value -- for the Korean economy, for the global AI industry, and for the geopolitical alignments that shape technology access and competition.
The K-Moonshot framework recognises this asset and seeks to leverage it: using HBM dominance as the foundation for a broader Korean AI hardware ecosystem that encompasses not only memory but AI accelerator logic, advanced packaging, and memory-centric computing architectures. If this strategy succeeds, Korea's semiconductor industry will evolve from a memory superpower to a comprehensive AI hardware superpower -- a transformation with implications that extend far beyond the semiconductor industry to reshape Korea's position in the global technology landscape for decades to come.