Agenda for Day Two: Wednesday November 2, 2022
9:00 – 9:50 AM
Keynote: How are power, cost and productivity limitations shaping the semiconductor industry?
Ranganathan “Suds” Sudhakar, Vice President, Engineering, Cisco Systems
Ranganathan “Suds” Sudhakar is a technology intrapreneur currently serving as Vice President, Engineering at Cisco Systems. In this role, he leads strategic R&D through a mix of organic development and inorganic investments. Earlier, Suds consolidated and oversaw Cisco’s worldwide silicon engineering, delivering the compute, storage, and networking ASICs powering over $20 billion in systems revenue. Previously, Suds was CEO and co-founder of Zebibyte, a storage acceleration startup. His professional innovations and tactical leadership have spanned the gamut from MIPS smartwatches to SPARC supercomputers, with over 25 US patents issued. He began his career at AMD, developing the company’s first GPU simulator and founding the Opteron CPU performance-modeling team.
He has trained in the art of computer architecture at Stanford University, the discipline of EDA software at UC Santa Barbara and the science of Electronics and Communication at Anna University. Always rooting for the underdog, he lends guidance and governance to a diversity of corporate Boards.
Two generations of humans have witnessed twenty generations of semiconductor technology come and go. Like any real-world exponential, Moore’s self-fulfilling “Law” is inexorably flattening. This talk will examine how trends in silicon power, cost and productivity are coming together to drive new challenges and opportunities. With few silver bullets on the horizon, we will examine how the industry is pivoting to acknowledge – and exploit – these new realities.
9:50 – 10:10 AM
BREAK – Sponsored by Flex Logix
10:10 – 12:15 PM
Session 7: Chips for AI Acceleration
Both small and large semiconductor vendors are developing chips to deliver efficient AI performance. As the market matures, we see these vendors moving from simply offering high TOPS per watt to delivering complete hardware and software solutions that perform well on real-world neural networks. This session, led by TechInsights principal analyst Linley Gwennap, features chips for edge and data-center applications.
A Balanced Architecture for Future-Proof AI Acceleration at the Edge
Sree Reddy, Vice President of Engineering, Kinara
Sree Reddy is a trailblazer and thought leader who is deeply passionate about building products for new markets. He has built teams with diverse skill sets spanning software, silicon, systems, and applications from the ground up, and led them to success in cutting-edge technology markets. At Kinara, he leads a young and talented team that is delivering best-in-class edge AI platforms. Before joining Kinara, he oversaw the highly successful acquisition of Memoir Systems by Cisco Systems. His past endeavors include stints at NVIDIA, Intel, and successful startups such as Abrizio and Azul Systems in various leadership and engineering roles.
Edge AI has been a hot topic for many years, but companies are now getting serious about deploying it. They’re recognizing the need to move from ‘toy’ models to leading-edge models that support higher accuracy and more functionality but require huge compute capability to run effectively. This presentation describes Kinara’s next-generation architecture, which supports much higher system-level performance than its predecessor, delivers increased flexibility for running newer models, and provides specialized encryption to protect customers’ model IP.
Tiny Spiking AI for Always-On Sensing
Sumeet Kumar, CEO, Innatera
Dr. Sumeet Kumar is CEO of Innatera, the pioneering Dutch neuromorphic processor company. He holds an MSc and PhD in Microelectronics from the Delft University of Technology, The Netherlands. He was previously with Intel, where he worked in the Imaging and Camera Technologies Group developing domain-specific tools for the development of complex media processor architectures. At Delft, Dr. Kumar is credited with creating two highly successful European R&D programs developing energy-efficient compute hardware for highly automated vehicles, together with organizations including Infineon, NXP, and BMW. He was also responsible for leading industry-focused research on power-efficient multiprocessors and computational neuroscience.
The brain relies on tiny spiking neural networks for sparse, robust, and energy-efficient processing of sensory data. This presentation discusses Innatera’s neuromorphic processing for always-on sensing applications. The company’s Spiking Neural Processor (SNP) implements a unique analog/mixed-signal architecture for energy-efficient inference of spiking neural networks, enabling full-featured AI applications with sub-milliwatt power and sub-millisecond latency. The presentation includes real-world application use cases and introduces Talamo, a powerful PyTorch-compatible SDK that radically simplifies application development.
Addressing Deep-Learning Trends with Habana’s Data-Center Accelerators
Sree Ganesan, Head of Software Products, Habana Labs
Sree Ganesan leads Software Product Management at Habana Labs, working alongside a diverse global team to deliver the state-of-the-art deep learning capabilities of the Habana SynapseAI® software suite to the market. Previously, she was Engineering Director in Intel’s AI Products Group, where she was responsible for AI software strategy and deep learning framework integration for Nervana NNP AI accelerators. Ms. Ganesan joined Intel in 2001 and has held a variety of technical and management roles in software engineering, VLSI CAD, and SoC design methodology. Ms. Ganesan received a bachelor’s degree in electrical engineering from the Indian Institute of Technology Madras, India, and a PhD in computer engineering from the University of Cincinnati, Ohio.
Deep-learning computation faces a growing number of challenges, which the industry and its players are working to address both collectively and independently. With the increasing availability of data and the growing demand for complex AI applications, compute capacity must increase even as models and applications are optimized to reduce their compute requirements. This presentation will discuss ways that Habana and its ecosystem partners are collaborating to speed time-to-model execution while making development easier for developers and data scientists.
High Performance from Cloud to Edge Inferencing
Colin Verrilli, Senior Director, Qualcomm
Colin Verrilli joined Qualcomm in July 2013 as a member of the Server Division's pathfinding team, where he investigated new technologies and made many innovative contributions to the company's server platforms. In 2016 he was instrumental in jump-starting Qualcomm's data-center machine-learning accelerator project, architecting many of its key features. In 2018 Colin joined Qualcomm Corporate R&D and became lead architect on the Qualcomm Cloud AI 100, guiding the design and development of the product to a successful launch. Colin is also playing a major role in defining Qualcomm’s next-generation machine-learning accelerator. He came to Qualcomm with a 30-year background in computer networking, computer and systems architecture, and software engineering. Colin has 95 issued patents and holds a master's degree from Rensselaer Polytechnic Institute.
Natural Language Processing (NLP) models have become much larger in recent years and are projected to grow further. Common solutions for performing inference on these huge models involve expensive high-bandwidth memories and scale-out networking. This talk presents a low-cost solution that supports these workloads and offers the potential for super-linear performance improvements.
There will be Q&A and a panel discussion featuring the above speakers.
12:15 – 1:45 PM
LUNCH – Sponsored by Ceremorphic
1:45 – 3:45 PM
Session 8: CPU IP and GPU IP
Developers of SoCs for mobile, automotive, and data-center applications need CPUs to handle computation and GPUs to render graphics. Designers increasingly are looking for scalable IP so that chips addressing different price points and power levels can share IP, accelerating chip design and simplifying software development. This session, led by TechInsights Director of Market Analysis Joseph Byrne, looks at new CPU IP and a low-power GPU capable of ray tracing.
Architecture and Key Features of SiFive's Newest Out-of-Order Vector Processor
Shubu Mukherjee, Vice President Architecture, SiFive
Shubu Mukherjee is a pioneer in the field of design and modeling of computer architecture. He is currently the Vice President, Architecture at SiFive, Inc. He was the 2009 recipient of the Maurice Wilkes Award, an ACM award for outstanding contributions to the field of computer architecture. He is also a Fellow of the ACM and a Fellow of the IEEE. Dr. Mukherjee received his Ph.D. from the University of Wisconsin-Madison under the supervision of Prof. Mark D. Hill. He received his B.Tech. from the Indian Institute of Technology, Kanpur, where he later served as an adjunct professor for several years. Dr. Mukherjee worked at Digital Equipment Corporation for 10 days, Compaq for three years, Intel for nine years, Cavium for eight years, and Marvell for one year before joining SiFive.
Dr. Mukherjee is an innovator in the field of architecture design for soft errors. With a team of researchers, he developed techniques to model and protect semiconductor chips against alpha particles and cosmic radiation. Dr. Mukherjee wrote the authoritative book in the area, "Architecture Design for Soft Errors." ACM SIGARCH awarded him the Maurice Wilkes Award for outstanding contributions to the modeling and design of soft-error-tolerant microarchitectures.
Dr. Mukherjee is also a veteran industry computer architect. He led the Xeon performance team at Intel and spearheaded several generations of MIPS- and ARM-based core architectures at Cavium and Marvell. Currently, he is part of the SiFive executive leadership team, where he leads SoC architecture design within SiFive.
Since launching the P650 RISC-V applications processor a year ago, SiFive has made continual enhancements to the platform, such as support for the RISC-V vector extension, multicluster configurations, virtualization, and WorldGuard security. The result is a best-in-class RISC-V processor, as demonstrated by industry-standard benchmarks such as SPECint, that is ready to tackle challenging computing requirements in applications such as mobile, autonomous vehicles, and the data center. This next-generation processor, called the P670, will propel RISC-V from embedded applications to the forefront of computing, where raw performance is demanded. This presentation will step through the P670 architecture and key features. The processor is available to lead partners now and will be ready for general release in Q1 2023.
Andes Technology’s Next-Generation Scalable RISC-V Application Processor Family
Charlie Su, President and CTO, Andes Technology
Dr. Charlie Su, cofounder, President, and CTO of Andes Technology, has overseen engineering and marketing since the company started in 2005. Under his leadership, Andes developed processor IP solutions based on its own ISA before joining the RISC-V Foundation as a founding member in 2016. Charlie spent 12 years in Silicon Valley in various technical and management positions. Prior to Andes, he led CPU and DSP IP development at Faraday as Chief Architect. He obtained his Ph.D. in Computer Science at the University of Illinois Urbana-Champaign, M.S. in Computer Science at National Tsing-Hua University, and B.S. in Electrical Engineering at National Taiwan University.
Today, companies designing SoCs for cloud accelerators, enterprise storage systems, data-center networking equipment, and 5G infrastructure employ Andes vector processors to form large compute arrays or employ many instances of compact processors to handle partitioned channels. In this talk, Andes will describe its forthcoming next-generation RISC-V application processor family.
Power-Efficient Scalable Ray Tracing GPUs
Kristof Beets, VP of Technology Insights, Imagination
Kristof Beets is VP of Technology Insights at Imagination Technologies, where he drives the alignment of the technology roadmaps with market trends and customer needs as part of the IMG Labs Research organisation. He has a background in electrical engineering and received a master’s degree in artificial intelligence.
Ray tracing has made the journey from non-real-time, compute-intensive systems to real-time hybrid ones. Now, ray tracing is migrating from wall-plug-powered use cases (e.g., PC and console games) to battery-powered use cases that must provide a compelling user experience within a constrained power envelope (e.g., mobile, AR, and automotive). Imagination will reveal the main considerations in creating scalable ray tracing and show how a hybrid rendering approach can enhance the gaming experience with ray tracing while remaining within a mobile power budget.
Addressing Scalable Processor Performance in High-End Embedded Applications
Kulbhushan Kalra, Engineering Manager, ARC, Synopsys
Kulbhushan Kalra is Engineering Manager of Hardware Development for ARC Processors at Synopsys. He is responsible for the development of advanced ARC processors, including architecture, RTL design, verification, and physical design. He joined Synopsys through the acquisition of Virage Logic. Before joining Virage Logic, he managed the development of TriMedia DSP processors and MIPS processors at NXP Semiconductors for 12 years. Kulbhushan obtained his B.S. in Electronics and Communications Engineering from Delhi College of Engineering, India. His interests include computer architecture and multicore high-performance processors.
Ever-increasing performance requirements continually fight the tight power and area constraints demanded by embedded applications. Coherent integration of real-time processors, application processors, and specialized hardware accelerators, along with a wide range of memory and bandwidth options, is critical to providing the implementation flexibility essential for SoC designers. This presentation will discuss a flexible processor architecture that can be configured from ultra-cost-effective to extreme performance, covering the broad gamut of high-end embedded application requirements.
There will be Q&A and a panel discussion featuring the above speakers.
End of Conference