ASML, the Dutch company that makes the photolithography machines every advanced chip is printed on, reported first-quarter 2026 net sales of €8.8 billion and net income of €2.8 billion on Wednesday, beating analyst expectations of €8.5 billion and €2.5 billion. The bigger news was buried in the outlook. ASML raised its full-year 2026 sales forecast from a previous range around €34 billion to a new range of €36 billion to €40 billion, citing sustained demand for AI-related chips and, more specifically, a surprising surge in orders from memory makers.
Roger Dassen, ASML's chief financial officer, put the shift plainly on the earnings call. "Fifty-one percent of new system revenue this quarter came from memory customers," he said, a level the company has never seen in a single quarter. A year ago, memory accounted for roughly 30 percent of ASML's new-tool orders, with logic chipmakers like TSMC and Samsung's foundry division taking the rest. In three quarters, that mix has essentially flipped.
That is the part worth pausing on. The ASML machines that memory companies like SK Hynix, Micron, and Samsung Memory are buying are the same extreme ultraviolet scanners that print the transistors for Nvidia's GPUs. When half of ASML's output suddenly has to go toward memory instead of logic, the entire AI supply chain has quietly rerouted.
What ASML Actually Sells
ASML is the only company in the world that makes the most advanced chip-printing tools, known as extreme ultraviolet (EUV) lithography machines, and the even more advanced High-NA EUV scanners now entering production fabs. A single High-NA system costs roughly $380 million, weighs as much as two Boeing 737s, and gets shipped in pieces on specialized freight aircraft. Without these tools, you cannot make the logic chips inside Nvidia's Blackwell GPUs, Apple's M-series processors, or AMD's MI-series accelerators at commercially viable yields.
For years, the assumption in the semiconductor industry was that EUV capacity would be mostly consumed by logic. That is the chip category where the marketing lives: Nvidia, TSMC's 2-nanometer node, Apple silicon. Memory, the less glamorous category that stores what logic computes, was supposed to lag behind. That assumption just collapsed.
The driver is the shift from traditional DRAM to high-bandwidth memory, or HBM, the stacked memory that sits next to every AI accelerator. HBM requires far more advanced manufacturing than conventional memory, and each generation demands more EUV exposures per wafer. HBM4, which started ramping into volume production earlier this year, is the first generation where EUV is mandatory for key layers. Every Nvidia Rubin accelerator, every AMD MI400, every custom AI chip from Google and Amazon now ships with HBM stacks that ASML machines had to print.
Why the AI Memory Shortage Became an ASML Story
The memory squeeze has been coming for months. In March, we covered the AI memory chip shortage that forced Sony to raise PlayStation 5 prices, and the same week, Google's TurboQuant announcement rattled memory stocks. Both stories pointed at the same underlying problem: hyperscalers buying AI accelerators need enormous quantities of HBM, and HBM is displacing the DRAM that powers consumer electronics.

ASML is the supply-side answer to those symptoms. When SK Hynix, the largest supplier of HBM to Nvidia, decides it needs to double HBM capacity inside 18 months, it orders more EUV scanners from Veldhoven. When Samsung decides it cannot fall further behind in HBM, it does the same. Micron, which just landed supply commitments with both Nvidia and AMD, is ramping aggressively. All three companies are, right now, competing for the same ASML shipping slots.
"We are seeing order momentum from memory customers that exceeds our own internal forecasts from six months ago," CEO Christophe Fouquet told analysts on the call. "The urgency is coming from the AI compute roadmap. Each new generation of accelerator requires more advanced memory, and that is a direct translation into EUV demand."
The China Cloud That Almost Spoiled the Day
There is a reason ASML shares sank despite strong earnings, and it rhymes with the news cycle of the last year. The Biden-era export controls on advanced semiconductor equipment to China were tightened again in late March, and ASML confirmed on the call that it is now effectively unable to ship its High-NA EUV systems, its standard EUV systems, and certain advanced DUV immersion scanners to Chinese customers.
Fouquet estimated that the tightened restrictions will shave roughly €1.5 billion from what would otherwise have been 2026 China revenue. "Our updated outlook absorbs that impact," he said, meaning the €36-40 billion range already reflects the Chinese order book being held flat or shrinking. The memory-driven upside elsewhere is large enough to more than offset the lost China sales, but investors noticed anyway. ASML shares fell about 4 percent in European trading after the release.
Dan Ives, a tech analyst at Wedbush Securities, called the reaction overdone. "ASML is telling you the AI supercycle is stronger than the consensus model," Ives wrote in a note to clients. "The memory line item alone is a historic shift. The China drag has been priced in for two years. This is a buying opportunity disguised as a miss."
Who Wins and Who Gets Squeezed
For semiconductor manufacturers, the ASML raise is a mixed signal. On the positive side, it confirms that end demand from AI hyperscalers is not slowing and that the memory-intensive architecture of current accelerators is durable. Nvidia, AMD, Broadcom, and the custom-silicon teams at Google, Amazon, and Meta all benefit from a memory ecosystem that can actually keep up with their compute roadmaps.
On the negative side, every EUV tool slotted to a memory fab is one that a logic fab did not get. TSMC has previously suggested that its 2026 capital expenditure plans assumed a certain number of High-NA tool deliveries; if those are now split more heavily toward Samsung's memory division or Hynix, the logic ramp to 2-nanometer and below could feel tighter than expected.
Equipment rivals are the other losers. Applied Materials, Lam Research, KLA, and Tokyo Electron all make tools that complement EUV scanners, but none of them replaces ASML. The Dutch monopoly on advanced lithography is arguably the single most concentrated bottleneck in the global economy, and today's earnings reinforced that.

For consumers, the second-order consequences keep unfolding. Every memory wafer committed to HBM is a wafer not producing the DDR5 that sits in consumer laptops or the LPDDR that sits in phones. That is the mechanism behind Sony's PS5 price hike and the reason enterprise buyers are seeing DRAM spot prices at levels last reached during the 2021 shortage. ASML's confirmation that memory capacity is being added aggressively is good news for the medium term. In the short term, the shortage gets worse before it gets better.
What to Watch Next
Three data points will tell us whether the memory-driven AI supercycle is real or a one-quarter anomaly. The first is the second-quarter order intake figure ASML will report in July; a print above €6 billion would confirm the trend. The second is capital-expenditure guidance from the big three memory players over the next six weeks. SK Hynix reports on April 24, Micron on April 28, and Samsung's memory division is embedded in the company's May results. If all three raise 2026 capex, the EUV order book has structural support.
The third signal is logic. If TSMC, which reports Thursday, trims its 2-nanometer ramp pace even slightly, that will be the clearest evidence that memory is winning internal ASML allocation battles. TSMC has historically been ASML's largest customer by revenue. Watching it give up share of ASML's output to memory would be a genuinely unusual configuration, and one the industry is not yet modeling correctly.
The Bigger Story
ASML is, in effect, the leading indicator for the physical shape of the AI buildout. The company sees orders from memory and logic chipmakers 12 to 18 months before those chips appear in products. Today's guidance raise says the AI infrastructure phase that has defined the last two years is accelerating, not maturing, and that the memory side of that buildout is now a larger driver than the logic side.
That reframes several debates at once. The argument that AI capex is peaking is harder to sustain when the only company with a view of all simultaneous fab investment is telling you it underestimated 2026 demand by several billion euros. The question of whether memory is a commodity cycle or a strategic AI asset looks settled. And the question of whether any single vendor is structurally more important to AI than Nvidia itself deserves a second look. Nvidia designs the accelerators. TSMC prints the logic. Hynix stacks the memory. ASML sold every one of them the machine that made it possible.
On the factory floor in Veldhoven, there are still customers waiting in line.
