Communications Semiconductor Market Share 2020
Provides market share data for many categories of communications semiconductors, including Ethernet products, processors, and FPGAs.


Most Recent Guides & Forecasts
A Guide to Processors for Deep Learning
Covers processors for accelerating deep learning, neural networks, and vision processing for AI training and inference in data centers, autonomous vehicles, and client devices.
Communications Semiconductor Market Forecast 2019-2024
Provides five-year revenue forecasts for many categories of communications semiconductors, including Ethernet products, embedded and server processors, and FPGAs.
White Papers
Expedera Redefines AI Acceleration for the Edge
Expedera is a small company with big ideas. Rather than optimizing the usual AI techniques, the company rethought neural-network acceleration from the ground up, creating a unique approach that greatly improves performance while maintaining consistent power and die area. This design is well suited to many consumer and automotive applications, enabling customers to increase the intelligence of their devices and add new capabilities to benefit their end users. Expedera has already validated the architecture and its performance in a test chip and signed a lead licensee for its IP.
Growing AI Diversity and Complexity Demands Flexible Data-Center Accelerators
AI applications are becoming more diverse, even as models for specific applications rapidly advance. CPUs and GPUs offer the flexibility to handle new models, but they deliver poor throughput or efficiency for real-time inferencing. Purpose-built deep-learning accelerators excel for CNNs but often fare poorly on other model types. SimpleMachines developed a unique “composable computing” architecture that provides both programmability and efficiency.
Mach-NX: The Root of Trusted Systems
Robust system security requires a layered approach, and the root of trust must begin with a secure boot process. Building on its leadership position, Lattice advanced its secure-control platform by introducing the next-generation Mach-NX family. These new devices keep platform security one step ahead of emerging threats while simplifying customer designs.
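The root-of-trust idea reduces to a simple invariant: no mutable code runs until its signature has been verified against a key anchored in hardware. A minimal sketch of that check in Python, assuming an Ed25519-signed firmware image and the cryptography package; the function and parameter names are illustrative, not Lattice's Mach-NX interface:

```python
# Illustrative secure-boot signature check (hypothetical names, not Mach-NX code).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_firmware(image: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Return True only if the image matches the vendor's signature."""
    pubkey = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pubkey.verify(signature, image)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False  # refuse to boot: image was altered or signed with the wrong key
```

In a real device, the public key (or its hash) lives in immutable storage such as boot ROM or fuses, so an attacker who rewrites flash still cannot forge a passing check.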
Building Better AI Chips
As the move to 7nm and beyond becomes ever more complex and expensive, GlobalFoundries is taking a different approach to improving performance by enhancing its 12nm node with lower operating voltages and new IP blocks. The changes are particularly effective for AI (neural-network) accelerators. The new 12LP+ technology builds on the success that the foundry’s customers have already achieved in AI acceleration.
Unified Inference and Training at the Edge
As more edge devices add AI capabilities, some applications are becoming increasingly complex. Wearables and other IoT devices often have multiple sensors, requiring a different neural network for each sensor, or they may use a single complex network to combine all the input data, a technique called sensor fusion. Others implement on-device training to customize the application. The GPX-10 processor can handle these advanced AI applications while keeping power to a minimum.
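To make the sensor-fusion idea concrete, here is a minimal sketch of a single network that combines two sensor streams: each modality gets its own small encoder, and the resulting features are concatenated before one shared classifier. All dimensions, modality names, and the PyTorch framing are assumptions for illustration, not the GPX-10's implementation:

```python
# Illustrative sensor-fusion model (hypothetical shapes; not GPX-10 code).
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, imu_dim=6, audio_dim=64, hidden=32, classes=5):
        super().__init__()
        # Per-sensor encoders: each modality gets its own small subnetwork.
        self.imu_enc = nn.Sequential(nn.Linear(imu_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        # Fusion head: concatenated features feed one shared classifier.
        self.head = nn.Linear(2 * hidden, classes)

    def forward(self, imu, audio):
        fused = torch.cat([self.imu_enc(imu), self.audio_enc(audio)], dim=-1)
        return self.head(fused)

net = FusionNet()
logits = net(torch.randn(1, 6), torch.randn(1, 64))  # one IMU frame + one audio frame
```

The alternative the abstract mentions, one separate network per sensor, corresponds to running each encoder with its own classifier; fusion instead lets the shared head exploit correlations across sensors.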