Intel has introduced a new bitcoin-mining ASIC that competes respectably where its original test chip didn’t. The company will sell the chip directly rather than building complete mining systems of its own.
AMD has added two new FPGA products to its Versal Premium line. They combine AI engines for neural-network acceleration with ample programmable logic and DSP blocks for signal processing.
Broadcom and Qualcomm will support a new automated-frequency-coordination (AFC) system to allow Wi-Fi and unlicensed 5G devices to use higher power in the 6GHz band, increasing performance and range.
The Gaussian and Neural Accelerator supports always-on audio and vision workloads, reducing power and adding security. It appears in Intel PC processors and other chips, as well as the Clover Falls add-on chip.
The Ryzen 7 5800X3D PC processor and Milan X server processor use TSMC’s chip-on-wafer (CoW) technology to stack more cache on the compute die, improving game and HPC performance.
By acquiring Pensando, AMD will instantly offer DPUs alongside its CPUs and GPUs. Although the startup principally sells smart NICs, it recently announced a design win for its second-generation 7nm chip.
The new GlobalFoundries process combines photonic and digital components on a single 45nm chip. The primary application is data-center communications, but lidar and computing will benefit as well.
TSMC and Samsung have suffered lengthy delays in their 3nm processes, and the gain in density and other characteristics is smaller than in previous nodes. Intel says its future nodes are on schedule.
The Silicon Valley startup uses a new type of memristor to perform analog math for deep-learning acceleration. It provides nonvolatile storage and performs low-power AI computations.
EdgeCortix launched its first edge-AI inference chip by hardening its DNA IP, delivering low latency and high power efficiency for applications within a 5W-to-20W power budget.
The new deep-learning accelerator (DLA) can scale to more than 2,000 TOPS, providing a licensable core for applications such as autonomous driving and natural-language processing.
The newest set of MLPerf Inference results showcases the same old vendors; almost all the data-center and edge accelerators came from Nvidia and Qualcomm. Orin was the notable newcomer.
Intel’s new Arc products are its first significant discrete GPUs, bringing competitive performance and hardware ray tracing to the laptop-PC graphics market.
The dominant AI-chip vendor, Nvidia has raised the bar with its new Hopper and Orin accelerators. Startups, hyperscalers, and large chip vendors try to compete with the company but keep falling short.
Due to sample late this year, Spectrum-4 will quadruple the bandwidth of Spectrum-3 by both doubling per-lane speed to 100Gbps and doubling the number of serdes lanes to 512, yielding 51.2Tbps.
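The arithmetic behind the quadrupling can be checked directly (a minimal sketch, assuming Spectrum-3's 256 lanes at 50Gbps per lane as the baseline):

```python
# Spectrum-3 baseline (assumed): 256 serdes lanes at 50Gbps each.
s3_lanes, s3_gbps_per_lane = 256, 50
# Spectrum-4: both figures double, per the announcement.
s4_lanes, s4_gbps_per_lane = 512, 100

s3_total_tbps = s3_lanes * s3_gbps_per_lane / 1000   # 12.8Tbps
s4_total_tbps = s4_lanes * s4_gbps_per_lane / 1000   # 51.2Tbps

print(s4_total_tbps)                  # 51.2
print(s4_total_tbps / s3_total_tbps)  # 4.0 -- quadruple the bandwidth
```

Doubling each factor independently multiplies the aggregate bandwidth by four, matching the 51.2Tbps figure.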
Near-memory and in-memory compute are techniques for reducing computing power—especially for AI. But they mean different things to different companies. Understanding the differences is important for understanding how some AI chips work.