Linley Spring Processor Conference 2021

April 19 - 23, 2021
Virtual Event


Agenda for Day Five: Friday April 23, 2021

8:30am-10:00am  Session 8: Edge AI (Part II)

As AI applications move from cloud platforms into edge devices, processor designers are increasingly including hardware accelerators for this important function. These processors target lower performance than cloud accelerators but must meet the strict cost and power requirements of consumer, industrial, IoT, mobile, and many other types of devices. This session, moderated by The Linley Group senior analyst Mike Demler, examines a range of chips and IP cores that accelerate edge-AI inference.

Vision and AI DSPs for Ultra-High End and Always-On Applications
Pulin Desai, Director Vision and AI Product Marketing, Cadence

The number and resolution of image sensors continues to increase in mobile, AR/VR, automotive, drone, and robotics platforms, which also combine several other sensor types with image sensors. These sensors require highly programmable, high-performance, low-power vision and AI DSPs. The market also needs different performance points to address various use cases. This presentation will highlight trends in these markets and disclose new products in the Tensilica Vision and AI DSP line.

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads
Chris Lattner, President, Engineering and Product, SiFive

Tremendous progress has been made in the last year toward bringing RISC-V vector (RVV) extensions to market in both hardware implementations and supporting compiler technologies. SiFive has gone a step further by including new vector operations specifically tuned to accelerate common neural-network tasks. We will demonstrate how these new instructions, integrated with a multicore, Linux-capable, dual-issue microarchitecture with up to 256b-wide vectors and bundled with TensorFlow Lite support, are well suited for high-performance, low-power inference applications.

Energy-Efficient, Reconfigurable and Scalable AI Inference Accelerator for Edge Devices
Hamid Reza Zohouri, Director of Product, AI Hardware Accelerator, EdgeCortix

Achieving high performance and power efficiency for AI inference at the edge requires maintaining high chip utilization even with a batch size of 1. This presentation will cover how EdgeCortix's reconfigurable Dynamic Neural Accelerator (DNA) IP scales from a few TOPS to more than 50 TOPS while maintaining high utilization, power efficiency, and low latency regardless of workload. We will also outline how our MERA dataflow compiler complements the IP and enables seamless machine-learning inference with DNA-enabled systems on chip.

For this session, each talk will have 10 minutes of Q&A immediately following.

10:00am-10:10am  Break Sponsored by Flex Logix
10:10am-11:10am  Session 9: Efficient AI Inference

As AI services rise in popularity, service providers become more concerned with operating cost, which is driven by power consumption. A new breed of AI inference accelerators, often available in smaller form factors, addresses this need for data centers and high-end edge equipment. This session, led by The Linley Group principal analyst Linley Gwennap, discusses three new products that focus on power-efficient inference acceleration.

High Performance Inference for Power Constrained Applications
Cheng Wang, Sr. VP, Software Architecture Engineering, Flex Logix

In this presentation, we will discuss AI inference solutions for power-constrained applications, such as edge gateways, networking towers, and medical imaging devices. We will begin with the set of considerations for hardware deployment, as these applications have tighter thermal budgets and usually do not have space for a full-size PCIe card. This will lead into a brief overview of the M.2 form factor, and we will then discuss the role of an M.2 inference accelerator in system designs for such applications.

High Performance and Power Efficient AI Inference Acceleration
John Kehrli, Senior Director, Product Management, Qualcomm

The Cloud AI 100 accelerator offers leadership class performance and power efficiency across many applications ranging from datacenter to edge deployment. This talk will discuss Qualcomm’s comprehensive offering of commercial software tools for integration and deployment in production settings. It will also include new benchmark data and disclose how performance scales using multiple accelerators.

For this session, each talk will have 10 minutes of Q&A immediately following.

11:10am-12:10pm  Breakout sessions with today's speakers
12:10pm  End of Conference

