MLPerf Benchmarks GPT Training Times
The MLPerf benchmark suite has added tests to assess GPT3 training performance. AI processors from both Nvidia and Intel (Habana) exhibit good scaling.
GPT3 has entered the MLPerf Training arena, and only two chip companies have stepped up to grapple with this benchmark. Nvidia, working with data-center operator CoreWeave, posted the fastest training time. Intel's Habana Labs doesn't match Nvidia's performance but stands out as the only other company to challenge the AI juggernaut.
MLCommons, the consortium of AI companies behind the MLPerf benchmarks, has released results for MLPerf Training v3.0, the biggest update since the benchmark's inception. In addition to evaluating large-language-model performance with the huge GPT3 model (the basis for the popular ChatGPT), the new version updates the recommender test to the more-complex DLRM-DCNv2 to better represent models in production use. Other tests carry over from the prior version, v2.1.
As with past versions, the newest release has submissions for only a few AI processors. Despite many companies promising to take on Nvidia, only Intel reported even a partial v3.0 results set. In addition to submissions for Habana's Gaudi2, Intel provided updated results for the Xeon 8480+ (Sapphire Rapids) operating without an add-on accelerator.
As before, Nvidia’s newest big AI chip, the H100 (Hopper), delivered the most performance per chip. As its software has improved, Gaudi2’s performance per chip has inched up. It now tops Nvidia’s A100 results on the three tasks for which scores are available for both chips and even approaches the H100 on ResNet 50.
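Claims like "good scaling" in MLPerf reporting boil down to how much of the ideal linear speedup a system retains as chip count grows. The sketch below shows that arithmetic; the chip counts and training times are illustrative placeholders, not actual MLPerf v3.0 submission figures.

```python
# Hedged sketch: deriving scaling efficiency from benchmark training times.
# All numbers below are hypothetical examples, not reported MLPerf results.

def scaling_efficiency(base_chips: int, base_minutes: float,
                       scaled_chips: int, scaled_minutes: float) -> float:
    """Fraction of ideal (linear) speedup retained when scaling up.

    Ideal speedup is the ratio of chip counts; actual speedup is the
    ratio of training times. Their quotient is the efficiency.
    """
    ideal_speedup = scaled_chips / base_chips
    actual_speedup = base_minutes / scaled_minutes
    return actual_speedup / ideal_speedup

# Illustrative only: doubling chips cuts training time from 100 to 55 min.
eff = scaling_efficiency(256, 100.0, 512, 55.0)
print(f"{eff:.0%}")  # prints "91%" -- about 91% of linear scaling
```

Efficiencies near 100% indicate near-linear scaling; interconnect and synchronization overheads typically pull large systems below that ideal.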
Subscribers can view the full article in the TechInsights Platform.