Linley Spring Processor Conference 2022

Conference Dates: April 20-21, 2022
Hyatt Regency Hotel, Santa Clara, CA


Agenda for Day Two: Thursday April 21, 2022

9:00am-10:00am  Keynote:

Foundations for the Future
Jason Abt, Chief Technology Officer, TechInsights

We'll take a close look at TechInsights' current technology forecast, dive deep into recent processor designs that plan for the future, and discuss how we can all help create foundations that others can build on.

There will be Q&A following this presentation.

10:00am-10:20am  BREAK – Sponsored by Ceremorphic
10:20am-12:00pm  Session 4: SoC Design

Silicon designers are often tasked with the impossible: delivering high-performance, healthy silicon within compressed design cycles. To succeed, designers must take advantage of the latest advances in SoC design. This session, moderated by TechInsights principal analyst Linley Gwennap, discusses SoC architectures that boost performance and reduce design time across various segments.

Is the Missing Safety Ingredient in Automotive AI Traceability?
Paul Graykowski, Senior Technical Marketing Manager, Arteris IP

Adding AI/ML to automotive designs creates unique challenges for safety compliance, and mitigation techniques commonly span multiple layers of the ADAS hardware. IP providers must ensure that requirements stay in sync with implementation, verification, and testing, as these may change throughout the system development lifecycle. During this session you will learn how traceability provides a secure path to functional safety verification.
 

Opportunities and Benefits of Large-Scale Synchronous Domains in SoCs and Chiplets
Aakash Jani, Solutions Marketing Director, Movellus

A synchronous approach is often the antidote to system performance limitations, but timing-closure challenges or multi-team development can make it impossible to settle on an optimal solution; clocking can even become a territorial issue in a complex SoC design. By taking a holistic approach to clock distribution, architects can open up opportunities and expand synchronous clock domains as best suited to their system. The Maestro platform can deliver significant PPA benefits by removing the FIFOs at partition domain crossings.

Pushing the Limits in the IoT
Ed Kaste, Vice President, Industrial and Multi-Market BU, GlobalFoundries

The quantity and sophistication of IoT edge devices continue to soar in lockstep with the amount of data generated and consumed in our daily lives. New applications at the edge are pushing the limits of IoT in new and exciting directions. This presentation examines the rapidly evolving expectations of IoT applications, specific design considerations for state-of-the-art low-power IoT/edge devices, and the new technologies and features in precision analog, wireless, computing, and memory that enable these solutions.

There will be Q&A and a panel discussion featuring the above speakers.

12:00pm-1:05pm  LUNCH – Sponsored by Flex Logix
1:05pm-2:45pm  Session 5: Edge-AI Silicon

As AI services rise in popularity, service providers become more concerned with operating cost, which is driven by power consumption. A new breed of AI inference accelerators, often available in smaller form factors, addresses this need for data centers and high-end edge equipment. This session, led by TechInsights senior analyst Bryon Moyer, discusses three approaches to power-efficient inference acceleration.

High-Efficiency Edge Vision Processing Using Dynamically Reconfigurable TPU Technology
Cheng Wang, Senior Vice President, CTO and Co-founder, Flex Logix

To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix's InferX X1 accelerators and contrast it with current GPU, TPU, and other approaches to delivering the teraops performance required by edge vision inferencing. We will compare latency, throughput, memory utilization, power dissipation, and overall solution cost. We'll also show how existing trained models can be easily ported to run on the InferX X1 accelerator.

Sakura - Energy-efficient AI Hardware Acceleration Defined by Software
Sakyasingha Dasgupta, CEO & Chief AI Architect, EdgeCortix

Rapid growth in edge AI presents a fundamental challenge: the mismatch between deep neural networks and existing general-purpose processors like CPUs and GPUs. Additionally, there is a growing need to develop environmentally friendly solutions with superior energy efficiency. To solve this, we designed the Sakura AI accelerator with a software-defined approach, forming a tight coupling between the applications, the compiler, and our underlying low-power, low-latency processor design. This talk will present how Sakura and our robust software framework, Mera, can flexibly deploy solutions across devices.

In-Memory Computing With Multilevel Resistive Switching Devices for Edge Applications
Glenn Ge, Co-founder and CEO, TetraMem

Digital processors based on the von Neumann architecture have an intrinsic bottleneck in data transfer between processing and memory units. This constraint increasingly limits performance as data sets continue to grow exponentially. TetraMem addresses this issue by delivering state-of-the-art in-memory computing using our proprietary computing devices. This talk will discuss how our solution brings several orders of magnitude improvement in computing throughput and energy efficiency, ideal for various AI applications at the edge.

There will be Q&A and a panel discussion featuring the above speakers.

2:45pm  End of Conference

 
