Linley Fall Processor Conference 2021

Held October 20-21, 2021
Proceedings available


Agenda for Day Two: Thursday October 21, 2021

Follow the Smart Money: VC Perspectives on Emerging Tech
Kushagra Vaid, Partner, Eclipse Ventures

Venture capital investments are at an all-time high, with an increasing amount of funding going towards semiconductor and hardware startups. This keynote presentation will dive into current market trends and investment themes, providing insights into various hardware categories where venture dollars are being allocated. We will then cover key emerging technologies that are poised to disrupt the status quo, driving new future computing models from cloud through edge.

There will be Q&A following this presentation.

9:50am-10:10am BREAK – Sponsored by Arm
10:10am-12:15pm Session 7: Edge-AI Processing

As AI applications move from cloud platforms into edge devices, processor designers are increasingly including hardware accelerators for this important function. These processors must meet lower cost and power budgets while satisfying the rising performance needs of consumer and commercial applications. This session, moderated by The Linley Group principal analyst Linley Gwennap, examines a range of chips and IP cores that accelerate edge-AI inference and training.

A Packet-based Approach for Optimal Neural Network Acceleration
Sharad Chole, Chief Scientist, Cofounder, Expedera

Current architectures fail to achieve application performance targets due to low utilization, saturated bandwidth, power constraints, and NN accuracy tradeoffs. With benchmark data showing most SoCs at 20-40% utilization, underperformance and overdesign remain the norm. We discuss how deep-learning accelerator architecture directly limits performance and how Expedera’s packet-based approach enables optimal results. With a comprehensive SDK built on TVM, the Expedera accelerator IP platform enables ideal accelerator configuration selection, accurate NN quantization, and seamless deployment.

Achieving High Compute Density and Software Programmability On the Edge
Michael Solka, Sr VP of Engineering and Chief Operating Officer, Coherent Logix

As the complexity of multi-functional and multi-modal edge applications increases, the demand for computationally efficient, flexible software-based processing is reaching new levels. Coherent Logix is announcing its fourth-generation HyperX hx40416 processor to address this demand with a scalable computing fabric based on its Memory Network architecture. The processor targets edge applications with high-bandwidth multi-modal sensor input, low thermal and power budgets, low actionable latency, and complete software programmability. We will highlight the architectural features of this processor.

Delivering Leadership Performance and Efficiency for Edge Applications
Mike Vildibill, Vice President, Product Management, Qualcomm

The Cloud AI 100 accelerator offers leadership-class performance and power efficiency across applications ranging from the datacenter to edge deployments. In this talk we will discuss Qualcomm’s comprehensive offering of commercial software and hardware tools for integration and deployment in production settings. The presentation will dive into Foxconn’s Gloria AI Edge Box, our new joint announcement: a turnkey commercial device powered by Qualcomm Cloud AI 100 with Snapdragon, running Linux and 5G.

INT8 AI Training Everywhere
Moshe Mishali, CEO and co-founder, Deep AI

Floating-point operations (FLOPs) are today’s vehicle for training AI models. We discuss the impact of low-precision training technology on the entire AI ecosystem. 8-bit integer (INT8) operations shorten training times and reduce data-center bandwidth by more than an order of magnitude. Furthermore, INT8 training technology ignites a ground-breaking paradigm shift. Low-power edge devices can now perform retraining cycles completely locally, i.e., without sending data to the cloud, and mobile devices can ultimately run personalized AI applications.
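The core idea can be illustrated with a minimal symmetric INT8 quantization sketch. This is a generic textbook scheme for illustration only, not Deep AI's implementation; the function names are hypothetical, and real INT8 training additionally quantizes gradients and activations on the fly:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map float values
    onto [-127, 127] with a single scale factor."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from its INT8 form."""
    return q.astype(np.float32) * scale

# A toy FP32 weight tensor
weights = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(weights)

# INT8 storage and transfer is 4x smaller than FP32
print(weights.nbytes // q.nbytes)  # 4

# Rounding error is bounded by half a quantization step
err = np.abs(dequantize(q, scale) - weights).max()
```

The 4x reduction shown here is just the raw storage ratio of INT8 versus FP32; the larger bandwidth savings claimed above would come from the full training pipeline operating in integer precision.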

There will be Q&A and a panel discussion featuring the above speakers.

12:15pm-1:45pm LUNCH – Sponsored by Flex Logix
1:45pm-3:45pm Session 8: High-Performance Processor Design

Vendors seek to improve performance at various levels of a product’s architecture. This session, moderated by The Linley Group senior analyst Aakash Jani, explores how companies lift performance through their data fabrics, microarchitecture, and software. Additionally, we will see their vision for further improvements in high-performance processor design.

Alder Lake Performance Hybrid Architecture
Rajshree Chabukswar, Senior Principal Engineer, Client Computing SoCs, Intel

Reinventing the multicore architecture, Alder Lake will be Intel’s first performance hybrid architecture, combining Performance cores and Efficient cores with the new Intel Thread Director. Intel Thread Director delivers a unique approach to thread scheduling, ensuring Efficient cores and Performance cores work seamlessly together by dynamically and intelligently assigning workloads for maximum real-world performance. Alder Lake is Intel’s next-generation client SoC architecture; it scales from ultra-mobile to desktop and brings multiple industry-leading I/O and memory technologies to market.

RISC-V at Scale: An Architecture for the Future of Computing
Shubu Mukherjee, Vice President, Architecture, SiFive

The next wave of RISC-V adoption will occur at the bleeding edge, where raw performance is paramount. Recent advances in microarchitecture, including multi-core and multi-cluster topologies, will accelerate RISC-V adoption in diverse application areas such as mobile, autonomous vehicles, and the datacenter. This SiFive talk will preview an architecture intended to deliver a leap in performance, evidenced by industry-standard benchmarks such as SPECint, that will serve as the foundation for rapid RISC-V deployment across the most challenging application domains.

Scalable Cortex CPU Clusters with Next-Generation DynamIQ Shared Unit
Pieter Arnout, Principal FAE, Arm

Over the past decade, CPU cluster topology has become a key part of SoC development, meeting an increasingly diverse set of needs across multiple markets, from unleashing the performance of laptops to maximizing the efficiency of wearables. The new generation of the Arm DynamIQ Shared Unit (DSU-110) addresses this challenge. This talk will cover how to use the DSU’s scalability and capabilities to create flexible Arm Cortex-based CPU cluster designs, delivering higher power efficiency and longer battery life.

This session will include Q&A after each presentation.

3:45pm End of Conference


Gold Sponsor

Andes Technologies
