A new startup, Rapid Silicon, is entering the FPGA market. Its Gemini SoC FPGAs, with hardened CPUs, compete against aging midrange alternatives from AMD, Intel, and Microchip.
MPR’s analysts have developed forecasts for various processor markets. The trend toward PC processors with AI accelerators is under way. Meanwhile, some AI-accelerator startups will fold as their funding runs out.
AMD’s MI300 accelerator will compete with Nvidia’s Grace Hopper in the HPC and AI markets. It uses AMD’s third-generation CDNA architecture (CDNA3) and combines x86 CPU, GPU, and memory die in a single package for improved performance.
Qualcomm’s Ride Flex SoC combines ADAS and cockpit applications, integrating workloads that have no safety criticality with those requiring ASIL B and ASIL D safety.
TSMC presented papers at IEDM detailing its 3nm N3 and N3E processes. N3 reduces CPP by 6nm compared with N5. SRAM cells are no smaller in N3E than in N5.
Imagination Technologies’ DXT GPU increases raw performance by 50% over the prior CXT generation. Although ray tracing remained in the headlines, changes to other circuits are responsible for this higher throughput.
AMD is addressing notebook PCs from multiple directions, fielding a mobile Ryzen that integrates an AI accelerator and a high-performance Ryzen that has up to 16 CPU cores and large caches.
MPR recognizes the past year’s top products in the categories of data center, PC, embedded, smartphone, processor IP, and emerging technology. And the winners are...
The Sapphire Rapids Xeon Scalable processor integrates multiple features that Intel disables in certain models. The company’s On Demand program allows customers to enable them after purchase.
Startup Encharge AI has exited stealth mode, announcing its in-memory-compute technology. Its chips use analog circuits and claim 20x more performance per watt than competing designs.
Over the past year, a few CPU-IP vendors have adopted RISC-V, challenging both Arm and RISC-V startups. Meanwhile, AI-accelerator vendors are jockeying to stand out in a crowded field.
The year 2022 saw AI accelerators and DPUs become mainstream in data center servers. Chiplets came to more CPUs, while chip power consumption reached new highs, necessitating a rethink of cooling.
NXP’s i.MX95 family targets industrial and automotive systems as well as consumer ones. With several CPUs and three computing domains, it exceeds what’s available today, but that gap may narrow by the time it ships.