Chip Observer – March 2026
$650 billion spend on AI infrastructure, Apple's new M5 chip packaging, and Japan's memory investments
7 Min Read April 1, 2026

This edition of the Chip Observer takes a close look at the latest artificial intelligence booms and busts, with capital expenditure on AI infrastructure reportedly set to reach $650 billion in 2026. Yet that spending is coming under scrutiny, as seen with OpenAI's $110 billion funding round, which raised eyebrows as investors like NVIDIA reduced their spending commitments.
On the consumer electronics front, Apple has made a series of notable launches across both the high-end and value-driven PC markets. The MacBook Neo in particular sets a new standard for Original Equipment Manufacturers (OEMs), who are strained by a challenging components market and increasingly negative public perception of Windows 11. The device has sparked discussion of both its software capabilities running full macOS and the role of Apple's vertical integration, from Intel CPUs to in-house Apple Silicon.
New Challenges Across Consumer Electronics, Hyperscalers, and Foundries
Hyperscalers are estimated to spend $650 billion on AI infrastructure in 2026
Capital expenditures on AI infrastructure by the hyperscale computing companies Alphabet (Google), Amazon, Meta, and Microsoft are projected to reach about $650 billion in 2026. The figure includes real estate purchases, datacenter leases, and hundreds of billions of dollars in bonds sold to cover these costs. Projects on AI ASICs, such as Google's TPUs and Meta's “Olympus,” are facing a reevaluation of their paths to market.
Apple Introduces M5 Dual-Chip Packaging and a Value-Oriented MacBook
The 16-inch MacBook Pro features a notable advance in the M5 chip's packaging. Apple's fusion architecture provides an interconnect across two identical CPU tiles, leaving the GPU tile as the primary difference between them. In the low-end PC market, the MacBook Neo boasts an impressively low price point, setting a standard that other OEMs will find hard to match.
Samsung starts shipments of HBM4 DRAM, produced on the 1c DRAM and 4nm logic node
Manufactured on Samsung’s 6th-generation 10-nanometer (nm)-class DRAM process (1c), Samsung's initial HBM4 product delivers, the company says, a consistent 11.7 Gbps per-pin data rate. Other notable foundry moves came in Japan, with TSMC planning to build a 3nm manufacturing facility in Kumamoto and Rapidus raising a further ¥267.6 billion ($1.7 billion) from the Japanese government.
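To put that per-pin figure in context, here is a minimal back-of-the-envelope sketch of aggregate per-stack bandwidth, assuming the 2048-bit per-stack interface width defined in the JEDEC HBM4 specification (the data rate is the figure reported above; the function name is ours):

```python
# Rough HBM4 per-stack bandwidth estimate.
# Assumption: 2048-bit interface width per stack (JEDEC HBM4 spec).

DATA_RATE_GBPS = 11.7   # reported per-pin data rate, in Gbps
INTERFACE_BITS = 2048   # assumed interface width per stack, in bits

def stack_bandwidth_tbps(rate_gbps: float, width_bits: int) -> float:
    """Aggregate stack bandwidth in terabytes per second."""
    # Gbps per pin * pins -> Gbps total; /8 -> GB/s; /1000 -> TB/s
    return rate_gbps * width_bits / 8 / 1000

print(f"{stack_bandwidth_tbps(DATA_RATE_GBPS, INTERFACE_BITS):.2f} TB/s")
# prints "3.00 TB/s"
```

Under that assumption, each stack would supply roughly 3 TB/s of bandwidth, which is the kind of headroom HBM4 is expected to bring to AI accelerators.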





