Tiny AI Benchmark Matches Up MCUs

Microcontrollers from STMicroelectronics feature prominently in the MLPerf Tiny v1.1 results, and scores for Plumerai’s inference engine demonstrate how much it speeds up model execution.
Joseph Byrne

Long touted for their shape-shifting abilities, FPGAs now have proof of how well they can execute AI models. The recent MLPerf Tiny v1.1 results include scores for AMD (Xilinx) Zynq 7000 devices for the first time. More complex than the MCUs that Tiny typically evaluates, the Zynq scores much better, like an NBA star playing one-on-one basketball against a child.

Numerous MCUs from previous rounds have updated scores in v1.1. STMicroelectronics stands out for having several MCUs among the results, including scores submitted by other vendors using its chips. In addition to performance data, the results include a smattering of energy-consumption measurements. Unlike MLPerf data-center submissions, where many companies release results for only a few tests, most Tiny submitters supply scores for at least three of the benchmark’s four tests.

Low-cost, low-power processors target a variety of designs, and the Tiny results reveal a wide range of AI performance. One MCU can be 6x faster than another even though both execute the models on compatible CPU cores, owing to differences in the cores themselves and in the effectiveness of the inference software they run. Another 6x or greater gain comes from offloading execution from the CPU to dedicated hardware. The benchmark gives customers a common basis for comparing processors.
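The two 6x gains described above compound: a better core and software stack, followed by a hardware offload, can together yield roughly a 36x speedup. A minimal sketch with hypothetical latencies (these are illustrative figures, not actual MLPerf Tiny scores):

```python
# Illustrative latencies (ms) showing how the two ~6x gains compound.
# The numbers are hypothetical, not actual MLPerf Tiny results.
baseline_cpu_ms = 360.0                 # slower MCU running the model on its CPU
tuned_cpu_ms = baseline_cpu_ms / 6      # ~6x from a better core and software
offloaded_ms = tuned_cpu_ms / 6         # another ~6x from hardware offload

print(f"tuned CPU: {tuned_cpu_ms:.0f} ms "
      f"({baseline_cpu_ms / tuned_cpu_ms:.0f}x vs. baseline)")
print(f"offloaded: {offloaded_ms:.0f} ms "
      f"({baseline_cpu_ms / offloaded_ms:.0f}x vs. baseline)")
```

Run as written, this reports a 6x gain for the tuned CPU path and a combined 36x gain once execution moves to hardware.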

Subscribers can view the full article in the TechInsights Platform.
