Linley Fall Processor Conference 2020

Held October 20-22 and 27-29, 2020
Proceedings available


Agenda for Day One: Tuesday, October 20, 2020
8:30am-9:20am Keynote:

Application-Specific Accelerators Extend Moore’s Law
Linley Gwennap, Principal Analyst, The Linley Group

As transistor improvements provide diminishing benefits, chip designers are picking up the slack by developing a broad range of application-specific accelerators. Next-generation AI accelerators implement sparsity and other new capabilities in the data center. AI accelerators help IoT and smart-home devices run neural networks on milliwatts of power. Data-processing units (DPUs) accelerate data-center networking, vector processors tackle 5G, and in-memory computing addresses big data. This presentation will describe the latest trends in workload accelerators and how they are deployed.   

Q&A immediately following this keynote.

9:30am-10:20am Session 1: AI in Edge Devices (Part I)

As AI services move from the service provider into edge devices, processor designers are increasingly including hardware accelerators for this important function. These processors target lower performance than cloud accelerators but must meet the strict cost and power requirements of systems such as consumer and industrial IoT devices, surveillance and retail cameras, and even mobile devices. This session, moderated by The Linley Group senior analyst Mike Demler, examines a range of chips and IP cores that accelerate AI inference in various edge devices.

An AI Inference Accelerator with High Throughput/mm² for Megapixel Models
Cheng Wang, Sr. VP, Software Architecture Engineering, Flex Logix

Running neural network models on megapixel images typically requires a high-power accelerator connected to a large amount of DRAM. This presentation will describe InferX X1, an embedded inference coprocessor that occupies just 54mm² in TSMC's 16nm technology yet achieves throughput that equals or surpasses competitors' GPU-based designs. The coprocessor is optimized for executing large models on megapixel images, achieving high accuracy using INT8 or Bfloat16 parameters.
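
For background on the INT8 parameters mentioned above, the short sketch below shows one common way weights are quantized to 8-bit integers (symmetric, per-tensor scaling). This is generic background in Python, not Flex Logix's method; the function names and the 4x4 test tensor are invented for illustration.

    import numpy as np

    def quantize_int8(weights):
        # Symmetric per-tensor INT8: map [-max|w|, +max|w|] onto [-127, 127].
        scale = np.max(np.abs(weights)) / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        # Recover approximate float weights to measure quantization error.
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(w)
    print(np.max(np.abs(w - dequantize(q, scale))))  # worst-case error ~ scale/2

The appeal for an embedded coprocessor is that each weight shrinks from 32 bits to 8 and the multiply-accumulate hardware gets correspondingly smaller; Bfloat16 trades some of that saving for wider dynamic range.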

Scalable Multicore Inference Engine for Efficient Compute at 100 TOPS
David Hough, Distinguished Systems Architect, Imagination

With applications scaling from edge-based speech recognition to ADAS object detection and tracking, higher-performance, scalable AI accelerators are imperative. The challenge is to minimize the impact on area and power while maximizing the utilization of memory bandwidth. This presentation will introduce a new multicore AI accelerator that scales to the most demanding applications. A novel tiling approach has been adapted to make the best use of the multicore architecture.
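
Since the abstract leaves "tiling" undefined, the sketch below illustrates the generic technique: split a large feature map into sub-blocks small enough for on-chip memory, so each core works from local data instead of repeatedly streaming from DRAM. This is a hypothetical Python illustration, not Imagination's scheme; the tile size, shapes, and function names are invented.

    import numpy as np

    def conv_tile(tile, kernel):
        # Stand-in for per-core work on one tile: a plain "valid" 2-D
        # correlation, written with explicit loops to stay dependency-light.
        kh, kw = kernel.shape
        out = np.zeros((tile.shape[0] - kh + 1, tile.shape[1] - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(tile[y:y+kh, x:x+kw] * kernel)
        return out

    def tiled_map(feature_map, kernel, tile=64):
        # Split the output into tile-sized blocks; each block needs only a
        # small, overlapping window of the input and could go to its own core.
        kh, kw = kernel.shape
        h, w = feature_map.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y0 in range(0, out.shape[0], tile):
            for x0 in range(0, out.shape[1], tile):
                # Overlap by kernel-1 so tile boundaries produce correct outputs.
                y1 = min(y0 + tile + kh - 1, h)
                x1 = min(x0 + tile + kw - 1, w)
                block = conv_tile(feature_map[y0:y1, x0:x1], kernel)
                out[y0:y0+block.shape[0], x0:x0+block.shape[1]] = block
        return out

The payoff is bandwidth: each input pixel is fetched from DRAM roughly once per tile it appears in, and the tiles are independent, so they can be dispatched across cores.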

For this session, each talk will have 10 minutes of Q&A immediately following.

10:20am-10:30am Break Sponsored by The Linley Group
10:30am-12:00pm Session 1: AI in Edge Devices (Part I, continued)

Graphics, Compute & AI Extensions Based on the RISC-V ISA
Iakovos Stamoulis, Director Engineering Management, Think Silicon S.A., An Applied Materials Company

Next-generation embedded processors face the challenge of delivering significant performance on new and diverse workloads while maintaining very low power consumption. The NEOX AI IP Series presents a flexible and scalable solution that enables the rapid deployment of AI, machine learning, and GPGPU applications on resource-constrained devices while significantly improving battery life. This design leverages the modularity and openness of the RISC-V ecosystem to provide a rich set of support tools that accelerate the development of application-specific solutions.

Designing Smarter, Not Smaller AI Chips with Innovative Power/Performance
Hiren Majmudar, VP and General Manager, Computing Business Unit, GLOBALFOUNDRIES

Artificial intelligence is driving digital transformation in the cloud, at the edge of the network, and in the devices and sensors around us. But the industry cannot sustain its progress if it continues to implement inefficient compute architectures using ever-shrinking process technologies. This presentation will explore AI solutions for both cloud and edge, including unique features built on GF's FinFET (12LP/LP+) and FD-SOI (22FDX) platforms that address the power/performance bottleneck in AI training and inference chips.

A Neuromorphic Processor for Power-Efficient Edge AI Applications
Anil Mankar, Chief Development Officer, BrainChip

Many edge-AI processors take advantage of the spatial sparsity in neural network models to eliminate unnecessary computations and save power. But neuromorphic processors achieve further savings by performing event-based computation, which exploits the temporal sparsity inherent in data generated by audio, vision, olfactory, lidar, and other edge sensors. This presentation will provide an update on the AKD1000, BrainChip's first neural network SoC (NSoC), and describe the advantages of processing information in the event domain.
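
To make the contrast with conventional frame-by-frame inference concrete, here is a toy Python sketch of event-based computation exploiting temporal sparsity: only inputs that changed since the last time step trigger work. It is an illustrative approximation, not the AKD1000's architecture; the threshold value and function names are invented.

    import numpy as np

    def dense_layer(frames, weights):
        # Conventional approach: a full matrix-vector product on every frame.
        return [f @ weights for f in frames]

    def event_based_layer(frames, weights, threshold=0.05):
        # Event-based approach: maintain a running output and update it only
        # for inputs whose accumulated change exceeds a threshold.
        prev = np.zeros_like(frames[0])   # last value transmitted per input
        out = np.zeros(weights.shape[1])
        results = []
        for frame in frames:
            delta = frame - prev
            events = np.abs(delta) > threshold   # temporal-sparsity mask
            # Only changed inputs cost multiply-accumulates; sub-threshold
            # changes keep accumulating until they cross the threshold,
            # which bounds the drift from the dense result.
            out = out + delta[events] @ weights[events]
            prev[events] = frame[events]
            results.append(out.copy())
        return results

On slowly changing sensor streams (a mostly static surveillance scene, silence between spoken words), the event mask is mostly false, so the compute, and hence the power, scales with how much the input changes rather than with the frame rate.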

For this session, each talk will have 10 minutes of Q&A immediately following.

12:00pm-1:00pm Breakout sessions with today's speakers
1:30pm-3:30pm Speaker 1:1 Meetings

 

Gold Sponsors

Andes Technologies

GSI Technology