The launch marks a pivotal moment for Samsung, which had lagged rivals in the previous HBM cycle, raising concerns over its competitiveness in AI-driven semiconductors. With HBM4, the company aims to regain momentum and secure an early lead in the next wave of AI memory demand.
Samsung brought forward its shipment schedule by about a week after consultations with customers, industry sources said. The chip is understood to have passed Nvidia’s quality tests earlier than expected, reflecting what sources described as top-tier performance.
From the outset, Samsung set out to exceed benchmarks established by the Joint Electron Device Engineering Council, the global semiconductor standards body.
To achieve this, it combined its latest 1c DRAM with a 4-nanometer foundry process, an approach industry sources said had not previously been attempted.
HBM4 delivers data processing speeds of up to 11.7 gigabits per second, 46 per cent greater than JEDEC’s 8Gbps benchmark and 22 per cent faster than its predecessor HBM3E, which reaches 9.6Gbps.
Memory bandwidth per stack reaches up to 3.3 terabytes per second, roughly 2.7 times higher than the previous generation.
Using 12-layer stacking technology, the chip supports up to 36 gigabytes of capacity, with future 16-layer versions expected to expand that to 48 gigabytes.
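As a quick sanity check, the figures quoted above are internally consistent. The sketch below verifies the arithmetic; note that the HBM3E per-stack bandwidth used for the "roughly 2.7 times" comparison (about 1.2 TB/s) is an assumption added here, not a number stated in the article.

```python
# Sanity check of the HBM4 figures quoted above.
# Speeds and capacities come from the article; the ~1.2 TB/s HBM3E
# per-stack bandwidth is an assumption introduced for the comparison.

hbm4_pin_speed = 11.7   # Gbps per pin (article)
jedec_baseline = 8.0    # JEDEC HBM4 benchmark, Gbps (article)
hbm3e_pin_speed = 9.6   # Gbps (article)

# 11.7 Gbps is about 46 per cent above the 8 Gbps JEDEC benchmark...
assert round((hbm4_pin_speed / jedec_baseline - 1) * 100) == 46
# ...and about 22 per cent above HBM3E's 9.6 Gbps.
assert round((hbm4_pin_speed / hbm3e_pin_speed - 1) * 100) == 22

# 3.3 TB/s per stack vs. an assumed ~1.2 TB/s for HBM3E gives the
# "roughly 2.7 times" figure (2.75x before rounding).
hbm4_bw_tbs, hbm3e_bw_tbs = 3.3, 1.2
assert abs(hbm4_bw_tbs / hbm3e_bw_tbs - 2.75) < 1e-9

# 36 GB across a 12-layer stack implies 3 GB per DRAM die, so a
# 16-layer stack of the same dies reaches the 48 GB the article cites.
assert 36 // 12 * 16 == 48
```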
Despite the performance gains, Samsung said HBM4 incorporates low-power design features aimed at reducing electricity use and cooling costs in AI data centres and servers.
Samsung’s Thursday announcement ended the global race among the three major memory makers to become the first to mass-produce HBM4 chips.
The competition, which had been simmering for months, heated up after last month’s earnings calls.
Its crosstown rival SK hynix sparked the standoff by disclosing during its earnings briefing that it had already begun producing HBM4 at scale.
Samsung quickly followed, saying its own mass production would start soon.
US-based Micron Technology also reaffirmed its plan to proceed on schedule, solidifying a three-way contest for technological leadership.
Conflicting reports soon surfaced.
Some said SK hynix was already shipping HBM4, while others claimed Samsung would begin deliveries just after the Lunar New Year.
Disputes emerged over which firm could claim the “world’s first” title, while market speculation intensified with rumours that Micron had fallen out of the supply race.
There was even talk over how major client orders were being split.
Micron moved to end the speculation on Wednesday, announcing that it had begun volume production and shipments of HBM4.
At the Wolfe Research semiconductor conference, Micron CFO Mark Murphy added that the company’s full HBM4 allocation for the year had already sold out.
Within the industry, Samsung is seen as having taken a more aggressive stance, applying its most advanced process technologies throughout.
While its rivals chose more conservative configurations, Samsung pushed ahead with what many see as a calculated, high-stakes bet.
SK hynix is reportedly using 1b DRAM with a 14-nanometer base die supplied by TSMC.
Micron is said to have adopted a DRAM-based base die design of its own.
In contrast, Samsung’s use of cutting-edge processes for both DRAM and base die reflects its push to regain ground in the intensifying HBM competition.
“It’s quite unusual for Samsung to highlight the phrase ‘world’s first’ so boldly,” said Kim Yong-seok, a distinguished professor at Gachon University’s College of Semiconductor.
He added that the move reflects the company’s determination not to fall behind again, whether in HBM4 or in the broader AI memory space.
Now that HBM4 production is underway, Samsung is accelerating its roadmap.
The company plans to sample a performance-enhanced version, HBM4E, in the second half of the year.
Starting in 2027, it will also begin offering custom HBM samples tailored to individual client needs.
According to the company, demand is already strong, particularly from global hyperscalers designing their own AI chips, including Nvidia.
Korea Herald