AMD is targeting the data-center AI acceleration market with its MI300X accelerator. The 3D chiplet design is based on the CDNA 3 architecture and offers up to 2 PetaOPS of INT8 performance.
As we prepare to enter 2024, we make predictions involving CPUs, AI, memory, and more. The year will be notable both for what might not happen and for what will.
Microsoft and Qualcomm hope to jumpstart Windows on Arm in 2024 with a new high-performance processor and more native software, but it might not be enough.
Arm has brought vector processing to a smaller CPU than ever before. The Cortex-M52 allows faster, more power-efficient AI processing in small IoT equipment.
Synopsys has added RISC-V to its CPU portfolio with three ARC-V families. Application coverage extends from small low-power functions to 64-bit multicore and multicluster high-performance embedded designs.
SambaNova Systems has released its fourth-generation AI processor, the SN40. Employing TSMC’s 5 nm technology, it offers 688 FP16 Tflop/s at an estimated TDP of 600 W.
Codasip’s new A730-CHERI RISC-V application CPU is the first commercial implementation of the CHERI fine-grained approach to memory protection, which enables designers to omit a large memory-protection unit.
Integrated-circuit scaling has enabled semiconductors to become an indispensable part of industry. Moore’s Law and Dennard scaling have developed into design- and system-technology co-optimization to overcome the challenges posed by data centers, AI, and consumer demand.
MediaTek’s Dimensity 9300 shakes up the CPU configuration previously typical of smartphone processors and nearly doubles the AI performance of its predecessor.
After discontinuing work on its first design, RISC-V startup Ventana is readying its Veyron V2. The company plans to offer it as a 32-CPU chiplet and as a licensable core.
Axelera’s Metis AI accelerator achieves over 200 TOPS using digital in-memory computing for high-end edge applications. Typical power is less than 10 W; module pricing is below that of incumbents.
Silicon Valley startup d-Matrix is trying to accelerate LLMs with its digital in-memory compute (DIMC) chip. The company’s Corsair card provides up to 9,600 TOPS at 600 W.
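The throughput and power figures quoted in these summaries can be reduced to a common efficiency metric (operations per watt). The sketch below uses only the numbers stated above; the comparison is rough, since the parts mix INT8 and FP16 figures and peak versus typical power, and it is illustrative rather than a benchmark.

```python
# Rough efficiency comparison from the figures quoted in the summaries above.
# Precisions and power definitions differ across parts, so treat the results
# as order-of-magnitude sanity checks only.
parts = {
    "SambaNova SN40 (FP16)": (688, 600),     # Tflop/s, TDP in watts
    "Axelera Metis (INT8)": (200, 10),       # TOPS, typical watts
    "d-Matrix Corsair (INT8)": (9600, 600),  # TOPS, watts
}

for name, (ops, watts) in parts.items():
    print(f"{name}: {ops / watts:.1f} T(FL)OPS/W")
```

On these figures, the edge-focused Metis leads in efficiency at about 20 TOPS/W, with Corsair at 16 TOPS/W and the FP16-rated SN40 near 1.1 Tflop/s per watt.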