Linley Spring Processor Conference 2020

Held April 6-9, 2020
Virtual Event


Agenda for Day Four: April 9, 2020

9:00am-10:40am  Data-Center Processors and Accelerators

Enterprise and cloud data centers demand the highest-performance processors, accelerators, and networking products. For AI acceleration, several new architectures are emerging to challenge traditional GPUs with application-optimized designs. This session, led by The Linley Group principal analyst Linley Gwennap, will discuss the impressive progress of these vendors.
9:00am-9:20am  Tenstorrent

Neurons, NAND Gates, or Networks: Choosing an AI Compute Substrate
Ljubisa Bajic, CEO and Lead Architect, Tenstorrent

Machine learning provides a blank slate for architecture innovation, and developers have tried many different approaches, including analog, in-memory, wafer-scale, and spiking designs. All of these designers face two fundamental questions: what is the right granularity of computation, and what is the right granularity of communication? This presentation will review these approaches and introduce a new architecture that delivers efficient, fine-grained, run-time conditional computation using a grid of powerful programmable processors connected by a dynamically routed packet network.
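
As a rough illustration of the run-time conditional computation idea mentioned above (a generic toy, not Tenstorrent's architecture or software), the sketch below skips the matrix multiply for activation blocks that are effectively zero; the function name, block sizes, and threshold are all hypothetical.

import numpy as np

def conditional_matmul(x_blocks, w, threshold=0.0):
    """Toy run-time conditional computation: multiply only the activation
    blocks whose magnitude exceeds a threshold, skipping the rest.
    Purely illustrative; not any vendor's implementation."""
    outputs = []
    for block in x_blocks:
        if np.abs(block).max() <= threshold:
            # Skip compute: an (effectively) zero block contributes zero output.
            outputs.append(np.zeros((block.shape[0], w.shape[1])))
        else:
            outputs.append(block @ w)
    return np.vstack(outputs)

# Example: a batch of activation blocks, half of them all-zero.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((4, 8)) if i % 2 else np.zeros((4, 8)) for i in range(6)]
w = rng.standard_normal((8, 3))
y = conditional_matmul(blocks, w)
print(y.shape)  # (24, 3)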

9:20am-9:40am  Groq

Groq Rocks Neural Networks: The Architecture Story
Dennis Abts, Chief Architect, Groq

Groq has taken an entirely new architectural approach to accelerating neural networks. Instead of creating a small programmable core and replicating it dozens or hundreds of times, the startup designed a single enormous processor with hundreds of function units. With the Groq architecture providing a substantial performance advantage over GPU-based solutions, engineering managers can deploy machine-learning platforms that deliver twice the inference performance without doubling infrastructure costs. Groq's Tensor Streaming Processor (TSP) stands out in both peak performance and ResNet-50 throughput.

9:40am-10:00am  Cerebras Systems

Building the World’s First Wafer-Scale Processor for AI
Sean Lie, Chief Hardware Architect, Cerebras Systems

The Cerebras Wafer Scale Engine (WSE) is the industry's only trillion-transistor processor. Built and optimized for AI work, the WSE contains more cores, more local memory, and more fabric bandwidth than any other processor in history. This presentation will discuss the benefits of wafer-scale integration for AI work and share some of the engineering challenges and lessons learned from delivering a 400,000-core, 46,225 mm² processor.

10:00am-10:30am  Question and Answer panel discussion with above speakers
10:30am-10:40am  Break Sponsored by SiFive
10:40am-1:10pm  Processor Technology

The decline of Moore's Law has forced the processor industry to accelerate innovation in architecture, interconnect, performance measurement, software, packaging, and other technologies. Many of these innovations apply across a range of end markets. This session, led by The Linley Group senior analyst Tom Halfhill, presents some of these recent advances.
10:40am-11:00am  Arteris IP

Implementing Low-Power AI SoCs Using NoC Interconnect Technology
Matthew Mangan, Applications Engineer, Arteris IP

As AI/ML processing systems integrate a growing number of increasingly complex hardware accelerators, it has become harder to optimize for both performance and power consumption. Resolving this conundrum is especially important for edge computing and automotive systems. This presentation describes lessons learned from using network-on-chip (NoC) technology to implement AI processing SoCs that meet explosive bandwidth demands and tight latency requirements while staying within stringent power budgets.
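
For readers unfamiliar with the basic NoC concept, here is a minimal, hypothetical sketch of dimension-ordered (XY) routing on a 2-D mesh interconnect. It is a teaching toy under assumed coordinates and a unit-latency-per-hop model, not Arteris IP's interconnect technology.

def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2-D mesh NoC:
    move along X until the column matches, then along Y.
    Returns the list of (x, y) hops, including source and destination."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# Example: route a packet from a DMA engine at (0, 0) to an accelerator at (3, 2).
hops = xy_route((0, 0), (3, 2))
print(hops)           # [(0,0), (1,0), (2,0), (3,0), (3,1), (3,2)]
print(len(hops) - 1)  # 5 hops -- a rough proxy for latency under a unit-cost-per-hop model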

11:00am-11:10am  Question and Answer
11:10am-11:30am  Flex Logix

Performance Estimation and Benchmarks for Real-World Edge Inference Applications
Vinay Mehta, AI Inference Technical Marketing Manager, Flex Logix

Discussions about AI inference acceleration tend to focus on hardware, with little attention to software and real-world application benchmarks. This presentation will instead describe software techniques to optimize the performance of programmable inference hardware. It will also show multiple real-world benchmark results comparing the InferX X1 against some of today's most popular shipping inference hardware. It will conclude with a discussion of how to diagnose performance in order to better optimize models and compilers.
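
As a generic illustration of the kind of real-world measurement emphasized above (not Flex Logix's methodology or any InferX X1 result), the sketch below times a stand-in inference callable and reports latency percentiles and throughput; the warmup count, run count, and placeholder "model" are assumptions.

import time
import statistics

def benchmark(infer, inputs, warmup=10, runs=100):
    """Measure per-inference latency (ms) and throughput (inferences/s)
    for a callable `infer`. Generic harness, not any vendor's tooling."""
    for _ in range(warmup):
        infer(inputs[0])                      # warm caches / clocks before timing
    latencies = []
    for i in range(runs):
        start = time.perf_counter()
        infer(inputs[i % len(inputs)])
        latencies.append((time.perf_counter() - start) * 1e3)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": latencies[int(0.99 * len(latencies)) - 1],
        "throughput_per_s": 1e3 / statistics.mean(latencies),
    }

# Example with a placeholder "model": summing a list of numbers.
fake_inputs = [list(range(10_000)) for _ in range(8)]
print(benchmark(sum, fake_inputs))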

11:30am-11:40am  Question and Answer
11:40am-12:00pm  SiFive

“Vectors are History”: 30 Years Later
Randy Allen, VP of RISC-V Software, SiFive

Three decades ago, Forest Baskett accurately predicted the decline of vector computing in a paper entitled "Vectors are History." The analytic approach of that paper still applies, but it leads to different conclusions today. This presentation will apply the same analysis to today's computing landscape, where it points to the growing importance of vector, parallel, heterogeneous, and SoC computing. Today's Intelligence of Things trends require a compiler framework that enables effective programming of such complex architectures without superhuman effort.
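
To make the renewed relevance of vector computing concrete, here is a small, hypothetical comparison of a scalar loop against the same computation expressed as a single vector operation, which a compiler or runtime can map onto SIMD or vector hardware. The array size and measured speedup are illustrative and will vary by machine; this is not material from the talk.

import time
import numpy as np

N = 1_000_000
a = np.random.rand(N)
b = np.random.rand(N)

# Scalar formulation: one multiply-add at a time.
start = time.perf_counter()
c_scalar = [a[i] * b[i] + 1.0 for i in range(N)]
t_scalar = time.perf_counter() - start

# Vector formulation: the whole array as one operation.
start = time.perf_counter()
c_vector = a * b + 1.0
t_vector = time.perf_counter() - start

assert np.allclose(c_scalar, c_vector)
print(f"scalar: {t_scalar:.3f}s  vector: {t_vector:.3f}s  speedup: {t_scalar / t_vector:.0f}x")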

12:00pm-12:10pm  Question and Answer
12:10pm-1:10pm  Breakout sessions with today's speakers

 
