Editor’s Note: Welcome to our weekly feature, Analyst Angle. We’ve collected a group of the industry’s leading analysts to give their outlook on the hot topics in the wireless industry.
Digital signal processing (DSP) is the key technology that connects humans (and machines) to the Internet, whether by wire or wirelessly. Furthermore, there is no digital multimedia without DSP technology. The technology has become pervasive in virtually all modern communications and entertainment devices. Digital signal processing is distinct from traditional computing in that it is the mathematical manipulation of information signals to modify or improve them in some way. In short, DSP is math-intensive, far more so than traditional computing, and thus requires a unique processor architecture.
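To make "math-intensive" concrete, consider the workhorse of DSP: the finite impulse response (FIR) filter, which performs a multiply-accumulate (MAC) for every filter tap on every incoming sample. This is exactly the operation DSP architectures accelerate with single-cycle MAC units. The sketch below is illustrative only (real DSP code typically runs fixed-point on dedicated hardware, not in Python):

```python
def fir_filter(samples, taps):
    """Convolve an input signal with filter coefficients (taps)."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, coeff in enumerate(taps):
            if n - k >= 0:
                acc += coeff * samples[n - k]  # one MAC per tap, per sample
        out.append(acc)
    return out

# A 4-tap moving-average filter smoothing a noisy step input:
signal = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
taps = [0.25, 0.25, 0.25, 0.25]
print(fir_filter(signal, taps))  # -> [0.0, 0.0, 0.25, 0.5, 0.75, 1.0]
```

Even this toy filter costs taps × samples MAC operations; at audio or baseband sample rates with hundreds of taps, that arithmetic load is what separates a DSP from a general-purpose CPU.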
In the past, I have conducted online surveys related to DSP use, initially covering discrete DSP chips and later system-on-a-chip (SoC) DSP implementations. In one of those early surveys, devoted primarily to discrete chips, I posed the question: “What percentage of the processor is devoted to DSP as opposed to CPU functionality?” The answer surprised me: less than 50%. Since I was addressing a DSP chip, I expected DSP functionality to be dominant, but that was not the case. Certainly, there had to be at least a scheduler and perhaps a real-time operating system managing the DSP activity, but there was obviously more to do, like handshaking in a modem application. And there were no high-level compilers for DSP in those days, so all programming was in assembly language (ouch!). When digital cellphones reached the market in the early 1990s, DSP functionality for both the speech and modem processing functions was implemented on an SoC. However, the DSP cores for the new 2G modems were usually paired with a small ARM7 processor, since the ARM CPU was easier to program for the more complex handshaking protocol and could handle notepad functions as well.
With the move to 3G, more complex air-interface algorithms required more powerful DSP modems (sometimes employing two DSP cores), while smartphones began to emerge with high-level operating systems, truly beginning in 2000 with Nokia’s Symbian OS. The now-ubiquitous ARM core began to handle more complex protocol stacks and color display functions, requiring a move away from the older ARM7 to the ARM11 and, later, newer cores. At that time, several discrete chips were employed: one focused on modem communications and an emerging application processor (AP) handling user interfaces and applications. Single-die devices incorporating both the modem and AP functions, with the required multiple CPU and DSP cores, came to market later.
Of course, Apple’s introduction of the iPhone in 2007 began to change the cellphone landscape. Initially designed for AT&T’s GPRS/EDGE network (using Infineon’s modem chips), the advanced ARM application processor running iOS set the new standard for smartphones. Newer iPhones have moved to LTE capability (using Qualcomm modems) and the multi-core ARM-based A7 processor … plus an ARM-based M7 as a sensor hub and co-processor. Although the A7 employs dual 64-bit ARM CPU cores and four Imagination Technologies GPU cores, it serves as the AP only and is functionally independent of the iPhone’s modem chip. That modem, supplied by Qualcomm (Apple having transitioned from Infineon to gain LTE functionality), contains its own DSP cores for modem and speech processing, rounding out the connectivity solution.
Some of the newest LTE smartphone chips, such as Qualcomm’s Snapdragon parts and potentially Broadcom’s acquired Renesas Mobile device and Nvidia’s Tegra 4i, combine the modem core on the same die as the application processor, also enabling the modem’s DSP(s) to be shared for multimedia functions alongside the AP’s CPU and GPU cores. At the same time, multimedia applications and original equipment manufacturers’ (OEMs’) use-case requirements have been exploding on their own. Many of these new SoCs now have dedicated DSPs to support audio and imaging features in addition to the DSPs for modem processing. For example, Qualcomm has incorporated a DSP core associated with its application processors. This shared usage of cores on the same die is the foundation of smartphone heterogeneous computing.
Traditional computer thinking (as in PCs and massively parallel computers) holds that heterogeneous computing is within the domain of multiple CPU cores, where tasks are switched between or shared with other CPU cores. One could ascribe this narrow definition to some stand-alone cellphone application processors; however, we are finding that the full spectrum of CPU, GPU, DSP and multimedia cores is increasingly on the same silicon … engendering a broader heterogeneous computing concept.
Dedicated low-power DSP engines for multimedia are typically employed for audio recording and playback, microphone noise cancellation and even object detection in camera images, as well as video compression/decompression. Although some image composition and processing tasks are better suited to the massively parallel GPU, others (like face detection) are not uniform in their algorithms and are better suited to acceleration on the DSP. I believe that overall increases in both performance and power efficiency can be achieved by enabling selected tasks to shift from the CPU to other cores that may be more power-efficient or simply more powerful. Obviously, this requires a system approach to control of, and communication between, the several core types, allowing each core to operate independently and to dynamically scale its voltage and frequency to meet specific performance needs.
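The routing idea above can be sketched in a few lines. This is a toy model only: the task names and per-task energy figures are invented for illustration and do not describe any real SoC. The point is simply that a heterogeneous scheduler routes each task to the most power-efficient core type that can run it, rather than defaulting everything to the CPU:

```python
# Hypothetical energy cost (millijoules) per task type on each core;
# None means that core type cannot accelerate the task.
ENERGY_MJ = {
    "cpu": {"ui": 5, "noise_cancel": 40, "face_detect": 60, "pixel_shade": 90},
    "gpu": {"ui": None, "noise_cancel": None, "face_detect": 30, "pixel_shade": 8},
    "dsp": {"ui": None, "noise_cancel": 6, "face_detect": 12, "pixel_shade": None},
}

def dispatch(task):
    """Pick the core type with the lowest energy cost for this task."""
    candidates = {core: costs[task]
                  for core, costs in ENERGY_MJ.items()
                  if costs[task] is not None}
    return min(candidates, key=candidates.get)

for t in ["ui", "noise_cancel", "face_detect", "pixel_shade"]:
    print(t, "->", dispatch(t))
# ui -> cpu, noise_cancel -> dsp, face_detect -> dsp, pixel_shade -> gpu
```

In a real system the "cost table" is implicit in the silicon and software stack, the decision must also weigh latency, core availability and data-movement overhead, and each core independently scales its voltage and frequency once a task lands on it.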
At this writing, Qualcomm seems to be further along than the others, claiming not only a full complement of specialized processor engines for heterogeneous computing (through its latest Snapdragon products), but also support tools that enable OEMs to distinguish their products from others.
We fully expect that the heterogeneous approach will eventually be adopted by all smartphone chip suppliers, providing more powerful handsets with lower power consumption.
Will Strauss is an internationally known electronics market research analyst, well-known for his studies of wireless chip markets and considered the leading authority on digital signal processing (DSP) market trends. He has degrees in Electrical Engineering (Georgia Tech) and Business (Southern Methodist University) and has worked in electronic design, marketing, sales and market research positions. He was formerly EVP of In-Stat and earlier VP of Market Research for Integrated Circuit Engineering (ICE). Strauss held earlier positions with General Instrument Microelectronics, Digital Equipment Corporation and Collins Radio Company.