Moffett Chip Targets AI Sparsity
Moffett, a startup headquartered in Shenzhen, China, has released AI accelerators that exploit sparsity to boost performance by up to 32x. The company offers three cards spanning a 70–250 W TDP range.
Hoping that sparsity will flourish in the data-center market, Chinese startup Moffett has introduced accelerator cards targeting AI inferencing. The company has designed software and hardware to accelerate sparse neural networks and is shipping its first products.
Founded in 2018, Moffett is headquartered in Shenzhen, China, with offices in Silicon Valley. Consisting of AI research scientists from Carnegie Mellon, Intel, and Marvell, the founding team has published academic papers on sparsity. The company has raised a sparse $25 million in two funding rounds.
Moffett’s PCIe accelerator cards come in three power configurations: 70 W, 165 W, and 250 W. Employing the common marketing trick of counting computations avoided by detecting sparsity, the company rates the high-end S30 card at 2.8 petaops/s, about 32x its actual dense throughput of 88.5 TOPS.
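The relationship between the marketed and actual figures reduces to simple arithmetic; a minimal sketch in Python (the 88.5 TOPS and 32x figures come from the article, the function and constant names are illustrative):

```python
# Illustrative only: how a sparsity-adjusted "effective" rating relates to
# dense throughput. The S30 figures are from the article; the helper name
# is hypothetical, not part of any vendor tool.

def effective_rating(dense_tops: float, sparsity_speedup: float) -> float:
    """Dense throughput scaled by the claimed sparsity speedup."""
    return dense_tops * sparsity_speedup

S30_DENSE_TOPS = 88.5      # actual dense compute, per the article
SPARSITY_SPEEDUP = 32      # claimed maximum gain from skipping zeros

petaops = effective_rating(S30_DENSE_TOPS, SPARSITY_SPEEDUP) / 1000
print(f"{petaops:.1f} petaops/s")  # ≈ 2.8, matching the marketed rating
```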
Detecting sparsity is one of the key trends in the AI world. Although it offers clear performance benefits, it has yet to become commonplace. Moffett faces Nvidia, which has supported sparsity across its past two product generations. To compete, the company must build a comprehensive software stack that supports a wide range of neural networks in production applications.
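The benefit of detecting sparsity comes from skipping work on zero-valued weights; a minimal sketch in plain Python (purely illustrative, not a description of Moffett's or Nvidia's actual hardware):

```python
# Illustrative sketch of why sparse networks can run faster: multiplying
# by a zero weight contributes nothing, so hardware that detects zeros
# can skip those multiply-accumulates entirely.

def dense_dot(weights, activations):
    """Baseline: every weight costs one multiply-accumulate (MAC)."""
    return sum(w * a for w, a in zip(weights, activations))

def sparse_dot(weights, activations):
    """Skip zero weights; the result is identical, but the work done
    is proportional to the number of nonzero weights."""
    total, macs = 0.0, 0
    for w, a in zip(weights, activations):
        if w != 0:
            total += w * a
            macs += 1
    return total, macs

weights = [0, 0, 0.5, 0, -1.0, 0, 0, 0.25]   # 3 of 8 weights nonzero
acts = [1.0] * 8
result, macs = sparse_dot(weights, acts)
print(result, macs)  # -0.25 from 3 MACs instead of 8
```

The same idea underlies hardware sparsity support: the arithmetic answer is unchanged, but cycles are spent only on nonzero operands.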
Subscribers can view the full article in the TechInsights Platform.