Linley Spring Processor Conference 2019

Held April 10-11, 2019
Proceedings available


Agenda for Day Two: April 11, 2019

9:00am-10:00am – Keynote

After Meltdown and Spectre: Security Concerns Facing Contemporary Microarchitectures
Jon Masters, Computer Architect, Red Hat

Spectre, Meltdown, and Foreshadow (L1TF) are members of a new class of microarchitectural vulnerability known as speculative execution side channels, and they have affected the entire microprocessor industry since first being disclosed a year ago. These and other variants require in-field mitigation through a variety of software, firmware, and microcode updates. Most of these mitigations come at a cost in performance, and true solutions require fundamental changes to the design of future processors. Contemporary processors rely heavily upon speculation for performance. This presentation will discuss how processor speculation works, the vulnerabilities that have been discovered, and how the industry can work together to solve these challenges for the long term.
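For context, the sketch below shows the kind of bounds-check-bypass gadget exploited by Spectre variant 1 and the speculation-barrier style of in-field mitigation the abstract alludes to. The array names, sizes, and the use of LFENCE here are illustrative assumptions, not code from the talk.

```c
/* Illustrative Spectre v1 (bounds-check bypass) gadget in C.
 * Names and sizes are hypothetical; x86-only due to the LFENCE barrier. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
uint8_t array2[256 * 64];      /* probe array: one cache line per byte value */
size_t  array1_size = 16;

uint8_t victim(size_t x)
{
    if (x < array1_size) {
        /* The branch may be predicted taken even when x is out of bounds.
         * The speculative load of array1[x] then leaks secret memory into
         * the cache via the dependent access to array2. */
        return array2[array1[x] * 64];
    }
    return 0;
}

uint8_t victim_mitigated(size_t x)
{
    if (x < array1_size) {
        /* A common in-field mitigation: a speculation barrier between the
         * bounds check and the dependent loads, at some performance cost. */
        __asm__ volatile("lfence" ::: "memory");
        return array2[array1[x] * 64];
    }
    return 0;
}
```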

There will be Q&A following this presentation.

10:00am-10:20am – BREAK – Sponsored by Intel
10:20am-12:00pm – Session 4: AI in Data Center

Data-center services are evolving from simple web functions to voice interfaces, image searches, content filtering, and data mining. Deep neural networks (AI) are being broadly deployed to support these services and sift through massive pools of "big data" swiftly and efficiently. This session, moderated by The Linley Group principal analyst Bob Wheeler, will discuss how server designers can improve the processing, memory, and I/O capabilities of their systems to address these changing workloads.

DL Boost: Embedded AI Acceleration in Intel Xeon Scalable CPUs
Ian Steiner, Xeon Scalable CPU Lead Architect, Intel

Deep-learning inference has emerged as a critical compute component in today's data centers. In the recently launched Xeon Scalable CPU, Intel added the DL Boost (VNNI) extensions to accelerate INT8 deep-learning inference. To support these new hardware capabilities, Intel is making significant engineering investments to develop the open-source software ecosystem. This presentation will provide a deep dive into the performance behaviors of a popular DL topology running on top of Intel MKL-DNN using VNNI.
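As an illustration of what the VNNI extension provides, the sketch below accumulates an INT8 dot product with the AVX-512 VNNI instruction VPDPBUSD via its intrinsic. The function name, buffer layout, and build flags are assumptions for illustration; they are not Intel's MKL-DNN kernels.

```c
/* Minimal INT8 dot-product sketch using AVX-512 VNNI (VPDPBUSD).
 * Assumed build flags: -mavx512f -mavx512vnni. n must be a multiple of 64. */
#include <immintrin.h>
#include <stdint.h>

static inline int dot_u8s8(const uint8_t *a, const int8_t *b, int n)
{
    __m512i acc = _mm512_setzero_si512();
    for (int i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i));
        __m512i vb = _mm512_loadu_si512((const void *)(b + i));
        /* Each VPDPBUSD multiplies 4 adjacent u8*s8 pairs per 32-bit lane and
         * adds the sums into the accumulators: one instruction replaces the
         * multiply/widen/add sequence needed before VNNI. */
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    /* Horizontal sum of the 16 partial 32-bit lanes. */
    int32_t lanes[16];
    _mm512_storeu_si512((void *)lanes, acc);
    int sum = 0;
    for (int i = 0; i < 16; i++)
        sum += lanes[i];
    return sum;
}
```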

Intel Nervana Neural Network Processor: Redesigning AI Training Silicon
Carey Kloss, VP and GM of AI Hardware, Intel

AI computational advances have surged, but memory is now limiting the performance and capacity of hardware architectures. Breaking through this memory barrier drove an entirely new approach to the Nervana Neural Network Processor for Learning (NNP-L), built from the ground up to accelerate deep learning. This presentation will discuss the architecture underlying the NNP-L and how this chip optimizes compute, memory, and interconnects to provide higher utilization and better accelerate deep-learning training.

DDR5: Mainstream Memory That Maximizes Effective Bandwidth
Brian Drake, Senior Business Development Manager, Micron

The "data economy" is driving demand for higher-bandwidth memory due to increasing CPU core counts, frequency, and IPC. The explosion in compute capability magnifies the pressure on memory and storage, requiring more bits and higher bandwidth. Tiered solutions of memory and storage are the reality of the future. This presentation explains how DDR5 will make a difference for compute-intensive applications and provides examples of how DDR5 improves performance on specific workloads and enables real-world bandwidth improvements.

There will be Q&A and a panel discussion featuring above speakers.

12:00pm-1:15pm – LUNCH – Sponsored by Arm
1:15pm-2:45pm – Session 5: SoC Design

Integration of heterogeneous IP blocks presents many SoC-implementation challenges. Designers expect plug-and-play IP, but each product is different, so some customization is always required. Employing a network-on-chip (NoC) can ease integration, as can the use of configurable, silicon-proven cores. This session, moderated by Linley Group senior analyst Mike Demler, will discuss the benefits of NoC IP and design platforms for complex ASICs.

Opposites Attract: Customizing and Standardizing IP Platforms for ASIC Differentiation
Carlos Macian, Senior Director AI Strategy and Products, eSilicon

IP cores are fundamental building blocks of modern ASICs, often providing a competitive edge despite their standard nature. Yet true differentiation and optimization demand that IP be customized to specific product needs. The challenge is combining the standardization and ease of integration that enable an accelerated, predictable schedule with the need to optimize the IP. This presentation will explore an approach to this problem using best-in-class, silicon-proven IP that is also designed for ease of integration and application-specific customization.

Adapting SoC Architectures for Types of Artificial-Intelligence Processing
Matthew Mangan, Applications Engineer, ArterisIP

"AI" is often used abstractly to refer to systems (and chips) that implement machine-learning algorithms. But different types of chips are required for different types of AI/ML processing, whether for neural-network training or inference, or for a data center or battery-powered client. This presentation describes how to use network-on-chip (NoC) technology to efficiently implement SoC architectures targeting different types of AI processing, including advanced techniques such as when to use tiling or cache coherence.

Freedom Revolution: Customizable RISC-V AI SoC Platform
Krste Asanovic, Co-Founder & Chief Architect, SiFive

A new domain-specific architecture approach is needed for AI. High-performance machine-learning processors require very-high-bandwidth memory systems and high-speed chip-to-chip communication links. SiFive's customizable AI SoC platform includes RISC-V cores with vector extensions, HBM2 high-bandwidth memory interfaces, and Interlaken chip-to-chip interconnects carrying the TileLink coherence protocol. The platform can be configured with a variety of RISC-V management and compute cores, optimized on-chip cache and scratchpad memory systems, and customer-specific hardware acceleration blocks, and it is supported by a full-system software stack.

There will be Q&A and a panel discussion featuring above speakers.

2:45pm-3:05pm – BREAK – Sponsored by Intel
3:05pm-4:15pm – Session 6: DSP Cores

Automotive lidars and radars produce signals that are very different from those of cellular radios, but they have similar requirements for high-performance DSPs. Both demand low latency, high parallelism, and high throughput to handle multiple antennas and receivers, along with embedded CPUs to handle control functions. This session, moderated by Linley Group senior analyst Mike Demler, will describe two new DSP-IP cores that can handle complex signal-processing tasks in automotive, IoT, and other challenging applications.

A Multipurpose Hybrid DSP and Controller Architecture for IoT and Wireless
Uri Dayan, Team Leader, Processor Architecture, CEVA

The new CEVA-BX architecture can combine DSP and control tasks on a single processor. In the real-time control domain, cellular-IoT and wireless applications benefit from executing L1 control and sensor fusion on the same processor, with security often an additional requirement. In the signal-processing domain, noise reduction and speech recognition require a powerful processor with low-latency DSP capability. The presentation will describe how the new architecture performs efficiently for these and other modern DSP applications.

High Resolution, Low-Power, Programmable DSPs Optimized for Radar Sensors
Pierre-Xavier Thomas, Engineering Group Director, Tensilica DSP SW Group, Cadence

Radar technology plays a critical role in automotive applications such as autonomous driving, advanced driver assistance systems (ADAS), in-cabin monitoring, and gesture recognition. These applications require increased performance and capabilities from the radar module to accurately determine the distance, direction, and speed of multiple targets. This presentation will show how the complex algorithms required for high-resolution mm-wave radar receivers can be efficiently implemented in a simple subsystem using a DSP optimized for radar signal processing.
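For readers unfamiliar with the radar signal chain, the sketch below outlines the range-Doppler processing such a DSP accelerates, assuming an FMCW front end; the frame dimensions and the naive O(N^2) DFT are illustrative stand-ins for the optimized, typically fixed-point FFT kernels a production radar DSP would run.

```c
/* FMCW range-Doppler sketch: a DFT over the samples of each chirp yields
 * range bins; a second DFT across chirps yields Doppler (velocity) bins.
 * Dimensions and the naive DFT are illustrative only. */
#include <complex.h>
#include <math.h>

#define PI       3.14159265358979323846
#define CHIRPS   64      /* chirps per frame (assumed) */
#define SAMPLES  256     /* ADC samples per chirp (assumed) */

static void dft(const double complex *in, double complex *out, int n)
{
    for (int k = 0; k < n; k++) {
        double complex acc = 0;
        for (int t = 0; t < n; t++)
            acc += in[t] * cexp(-2.0 * I * PI * k * t / n);
        out[k] = acc;
    }
}

void range_doppler(const double complex frame[CHIRPS][SAMPLES],
                   double complex map[CHIRPS][SAMPLES])
{
    static double complex range[CHIRPS][SAMPLES];

    /* Range DFT: along the fast-time (sample) axis of each chirp. */
    for (int c = 0; c < CHIRPS; c++)
        dft(frame[c], range[c], SAMPLES);

    /* Doppler DFT: along the slow-time (chirp) axis of each range bin. */
    for (int s = 0; s < SAMPLES; s++) {
        double complex col[CHIRPS], out[CHIRPS];
        for (int c = 0; c < CHIRPS; c++) col[c] = range[c][s];
        dft(col, out, CHIRPS);
        for (int c = 0; c < CHIRPS; c++) map[c][s] = out[c];
    }
    /* Peaks in |map| encode target range (column) and velocity (row);
     * a third DFT across receive antennas adds direction of arrival. */
}
```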

There will be Q&A and a panel discussion featuring above speakers.

4:15pm – END OF CONFERENCE

 
