Grai Matter Pivots to Floating Point
The startup has made progress on its Grai VIP deep-learning accelerator, changing native processing from INT8 to FP16 and adding audio workloads to its target applications.
Grai Matter Labs (GML) has adapted its deep-learning-accelerator (DLA) technology to audio as well as vision in its first formal product, dubbed Grai VIP. By moving from native INT8 processing to FP16, it has achieved sufficient accuracy and dynamic range to process audio at quality suitable for human listening.
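A back-of-envelope calculation illustrates why the FP16 move matters for audio: an integer format's dynamic range is fixed by its bit width, while FP16's exponent gives it far more headroom. The sketch below is our own illustration (the helper names are not from GML); it compares the approximate dynamic range of INT8, INT16, and IEEE 754 half precision.

```python
import math

def int_dynamic_range_db(bits: int) -> float:
    # Signed integer dynamic range: ratio of full scale (2^(bits-1))
    # to the smallest step (1), expressed in decibels.
    return 20 * math.log10(2 ** (bits - 1))

def fp16_dynamic_range_db() -> float:
    # IEEE 754 binary16: largest normal value is 65504,
    # smallest subnormal is 2^-24.
    return 20 * math.log10(65504 / 2 ** -24)

print(f"INT8 : {int_dynamic_range_db(8):5.1f} dB")   # ~42 dB
print(f"INT16: {int_dynamic_range_db(16):5.1f} dB")  # ~90 dB
print(f"FP16 : {fp16_dynamic_range_db():5.1f} dB")   # ~241 dB
```

INT8's roughly 42 dB falls well short of the ~90 dB range of 16-bit audio, whereas FP16 covers it with ample margin, which is consistent with the article's point that the format change enables audio of listenable quality.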
To accomplish this feat, the company modified its primary DLA core so that, in addition to handling much larger models than its predecessor, the chip can do so with fewer cores. GML also added CPUs and I/Os to flesh out an SoC that can run independently or under the direction of an external host, enabling local processing that reduces cloud traffic, power, and latency while boosting privacy. Maximum neural-network performance is 1.5 trillion operations per second (TOPS) when using FP8, which should deliver throughput similar to INT8, at a 2.5W TDP.
GML revealed its event-based architecture three years ago with its Grai One proof-of-concept (PoC) chip, which focused primarily on vision applications. By computing only when pixels change, the architecture promises to do less work, saving energy. Grai One allowed the company to demonstrate its technology's viability, enabling a new $14 million funding round in late 2020 for a total of $29 million. In the production design, GML has applied those ideas to audio as well.
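The compute-on-change idea can be sketched in a few lines. This is not GML's implementation, just a minimal illustration of delta-based (event-driven) inference for one linear layer: when only a few inputs change between frames, the output can be updated by touching only the weight columns for those inputs, rather than recomputing the whole layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 32
W = rng.standard_normal((n_out, n_in)).astype(np.float32)

# Frame 1: full dense computation, done once.
x_prev = rng.standard_normal(n_in).astype(np.float32)
y = W @ x_prev

# Frame 2: only a handful of "pixels" change.
x_new = x_prev.copy()
changed = rng.choice(n_in, size=4, replace=False)
x_new[changed] += rng.standard_normal(4).astype(np.float32)

# Event-based update: multiply-accumulate only over changed inputs.
delta = x_new[changed] - x_prev[changed]
y_event = y + W[:, changed] @ delta

# Matches a full recompute while using 4/64 of the MACs.
assert np.allclose(y_event, W @ x_new, atol=1e-5)
```

The energy saving scales with input sparsity: for a mostly static scene, few pixels change per frame, so most multiply-accumulates are skipped entirely.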
GML has an initial Grai VIP die under evaluation with customers now; a production version, which may make small changes in I/O and other areas, is scheduled for 2H23. The company withheld pricing.