After spinning off from SK Telecom, Sapeon has become the first Korean company to deliver an AI chip. The X220 provides an efficient accelerator for both video analysis and language processing.
Zen 4 raises both IPC and clock speed relative to Zen 3, owing to optimized circuits, a process shrink, bigger buffers, deeper queues, and a larger micro-op cache.
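As a rough illustration of how IPC and clock frequency combine into a generational speedup (the figures below are hypothetical placeholders, not measured Zen 4 numbers):

```python
# Toy model: throughput scales with IPC x clock frequency.
# All numbers here are hypothetical, not measured Zen 3/Zen 4 data.

def relative_performance(ipc, clock_ghz):
    """Relative throughput: instructions per cycle times cycles per second."""
    return ipc * clock_ghz

prev_gen = relative_performance(ipc=1.00, clock_ghz=4.9)  # normalized baseline
next_gen = relative_performance(ipc=1.13, clock_ghz=5.7)  # assumed ~13% IPC gain

speedup = next_gen / prev_gen
print(f"Hypothetical generational gain: {speedup - 1:.0%}")
```

The point of the sketch is that a modest IPC gain compounds with a clock increase, so the combined speedup exceeds either factor alone.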
Despite a short design cycle, Intel boosted the performance of its 13th-generation Core processors by up to 15% over the previous generation through higher clock speeds and more “efficiency” cores.
Qualcomm updated its Snapdragon 6 and 4 lines, moving to a new process, improving performance, and selectively adding features. The new nomenclature aligns with the 7- and 8-series.
The MLPerf 2.1 inference release includes preliminary results that put Nvidia’s Hopper H100 in the performance lead. Asian startups Biren and Sapeon also made impressive debuts.
The startup disclosed new details about how its tiny cores deliver tremendous performance and how its sparsity support boosts performance when training large AI models.
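A minimal sketch of the general idea behind sparsity support, skipping multiply-accumulates when a weight is zero; this is a generic software illustration, not the startup's actual hardware mechanism:

```python
# Generic zero-skipping dot product: when one operand is sparse, the
# multiply-accumulate (MAC) for each zero weight can be skipped, cutting
# work roughly in proportion to the fraction of zeros.

def sparse_dot(weights, activations):
    total = 0.0
    macs = 0  # count of multiply-accumulates actually performed
    for w, a in zip(weights, activations):
        if w == 0.0:
            continue  # sparsity: no work for a zero weight
        total += w * a
        macs += 1
    return total, macs

w = [0.5, 0.0, 0.0, 2.0]  # 50%-sparse weight vector
a = [1.0, 3.0, 4.0, 0.5]
result, macs = sparse_dot(w, a)
print(result, macs)  # 2 of 4 MACs performed
```

Hardware sparsity support applies the same principle at the datapath level, detecting zeros and skipping (or compressing away) the corresponding operations.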
The newly unveiled CXL 3.0 introduces memory sharing, direct device peer-to-peer memory access without involving a host, and multilevel switching. A new global fabric-attached memory can be shared by up to 4,096 hosts.
Broadcom has added a network-scanning engine to its 12.8Tbps Trident 4 Ethernet switch. Capable of fingerprinting every packet, the engine improves network security.
Lightmatter’s Passage substrate is an active photonic interposer for interconnecting chiplets. All the photonic components and supporting electrical circuits reside in a single multi-reticle piece of silicon.
To tackle the largest AI models, Nvidia has designed a processor to feed its powerful new Hopper GPU. Grace has twice the memory bandwidth of any x86 processor and can hold GPT-3 in DRAM.
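A back-of-the-envelope check on the GPT-3 claim; the parameter count is public, but the DRAM capacity used below is an assumed figure for illustration, not a confirmed Grace specification:

```python
# Rough memory-footprint arithmetic for holding GPT-3's weights in DRAM.
# GPT-3 has about 175 billion parameters; at FP16 (2 bytes each) the
# weights alone occupy about 350 GB.

GPT3_PARAMS = 175e9
BYTES_PER_PARAM_FP16 = 2
ASSUMED_DRAM_CAPACITY_GB = 480  # assumption for illustration only

model_gb = GPT3_PARAMS * BYTES_PER_PARAM_FP16 / 1e9
print(f"GPT-3 FP16 weights: {model_gb:.0f} GB")
print("Fits in DRAM:", model_gb <= ASSUMED_DRAM_CAPACITY_GB)
```

The arithmetic shows why a CPU with a few hundred gigabytes of DRAM can hold the full weight set, whereas a single GPU's HBM cannot.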
In the data-center accelerator race, the three-year-old startup has burst from the gate with a chiplet-based design that aims to compete with Nvidia for general-purpose-GPU (GPGPU) cloud computing.
The startup has made progress on its Grai VIP deep-learning accelerator, changing native processing from INT8 to FP16 and adding audio workloads to its target applications.