Editorial: Where’s the AI Competition?
Nvidia Raises the Bar as Competitors Keep Falling Short
Nvidia, the dominant AI-chip vendor, has raised the bar with its new Hopper and Orin accelerators. Startups, hyperscalers, and large chip vendors are trying to compete with the company but keep falling short.
The sign says “you must be THIS TALL to play in the AI market,” but most chip vendors keep coming up short. Nvidia towers over its competitors, holding nearly 90% of the data-center AI market along with the most design wins in autonomous vehicles (AVs). Well-funded startups, including several unicorns, have been unable to demonstrate any advantage over Nvidia’s Ampere products, much less the upcoming Hopper generation. Among the big-name competitors, only Qualcomm delivers better efficiency on some benchmarks, and Hopper’s 3x performance jump gobsmacked ASIC designers such as Google.
Cerebras, Groq, and SambaNova, along with leading Chinese startups Enflame and Iluvatar, are in production but have published few or no benchmarks, probably owing to some combination of deficient hardware and unoptimized software. Of the best-funded AI-chip startups, only Graphcore has provided official MLPerf results, falling well short of Ampere in per-chip performance and power efficiency.
Qualcomm is the only credible competitor that has delivered a more efficient architecture than Nvidia’s. But it has been slow to upgrade its hardware to increase per-card performance and its software to support a broad range of data-center models. The company also lacks a training solution.
Several hyperscalers have invested in their own AI accelerators. But when the only target is in-house use, developing both a chip and a broad software stack is expensive, and pointless if Nvidia offers something better. The startups are running out of time, and the bigger companies must execute better to break the leader’s stranglehold.