We've all experienced poor voice quality on a cellphone, even when signal strength is moderate or strong. I have always assumed that this problem was primarily due to poor microphone placement, how I am holding the phone, ambient background noise, and various other analog audio shortcomings. After all, once the voice is digitized and in the clean, clear world of bits, any unavoidable noise-induced errors in the data stream can be handled by error detection and correction (EDC) techniques. Certainly, there would be an occasional string of errors or major signal dropout beyond the capability of EDC, but these would be sporadic.
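As a refresher on what EDC buys you (and where it runs out), here's a minimal sketch in Python built on the simplest possible scheme, a 3x repetition code with majority-vote decoding. Real links use far stronger codes, so treat this purely as an illustration of the behavior described above: scattered bit errors get corrected, but a dense burst overwhelms the decoder.

```python
import random

def encode(bits):
    return [b for b in bits for _ in range(3)]       # repeat each bit 3x

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2)            # majority vote per triplet
            for i in range(0, len(coded), 3)]

random.seed(0)
data = [random.randint(0, 1) for _ in range(32)]
tx = encode(data)

# Sparse noise: one flipped bit in each of three scattered triplets
sparse = tx[:]
for i in (4, 31, 62):
    sparse[i] ^= 1
print("sparse errors corrected:", decode(sparse) == data)   # True

# Burst noise: four consecutive flips wipe out all three copies of one bit
burst = tx[:]
for i in range(30, 34):
    burst[i] ^= 1
print("burst corrected:", decode(burst) == data)            # False
```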
It turns out that my assumption about the culpability of analog for the poor voice quality is often wrong, and to a greater extent than I would have imagined. In a cover story for the September 2014 issue of IEEE Spectrum, Jeff Hecht went through the entire end-to-end wireless phone link and showed quite clearly where the various weaknesses and contributors to the voice-audio shortcomings actually are. Incidentally, Hecht is a consistently excellent writer on technical topics; his work appears monthly in Laser Focus World.
The article points out that there are certainly analog weaknesses in the first stage of the connection, for reasons both obvious and not so obvious. Vendors know this, and some are taking steps to mitigate the shortcomings using multiple microphones, noise-cancelling algorithms, and more. This was interesting, but I was already familiar with these and other possible partial solutions.
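The article doesn't go into the algorithms, but one classic multiple-microphone technique is adaptive noise cancellation: a second microphone that mostly hears the ambient noise drives an adaptive filter, and the filter's output is subtracted from the primary (voice plus noise) signal. The LMS sketch below is a toy model under assumed conditions (a white-noise source, a three-tap "room" path, arbitrary filter length and step size), not any handset vendor's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 8000, 8000
t = np.arange(n) / fs

voice = 0.6 * np.sin(2 * np.pi * 440 * t)         # stand-in for speech
noise_src = rng.standard_normal(n)                # ambient noise source
room = np.array([0.8, 0.4, -0.2])                 # toy acoustic path to primary mic
primary = voice + np.convolve(noise_src, room)[:n]
reference = noise_src                             # noise-only reference mic

taps, mu = 8, 0.01                                # filter length, LMS step size
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps - 1, n):
    x = reference[i - taps + 1:i + 1][::-1]       # newest reference sample first
    e = primary[i] - w @ x                        # subtract predicted noise
    w += mu * e * x                               # LMS weight update
    out[i] = e                                    # e converges toward the voice

err_before = np.mean((primary - voice) ** 2)
err_after = np.mean((out[n // 2:] - voice[n // 2:]) ** 2)
print(f"residual noise power: {err_before:.3f} before, {err_after:.3f} after")
```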
However, the rest of the story really opened my eyes. It turns out that there are five main reasons why voice quality can suffer, and only the first two of them are analog-related:
- Handset design (microphone placement; how the handset is positioned by the user)
- Background noise (that's obvious)
- Phone-to-tower connection (distance affects signal strength and increases dropped packets)
- Voice-data conversion (multiple data compression/unpacking cycles)
- Undersea cable compression and conversion (to reduce the number of bits needed on this high-traffic path)
The article also explained that when a digitized, compressed voice signal is passed from one wireless carrier to another, a format conversion between the carriers' systems is often required. This transcoding further corrupts the digitized-voice data stream. Newer standards such as VoLTE and HD Voice should eliminate the need for these conversions, but they are still in the early stages of implementation and far from fully deployed.
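Here's a back-of-the-envelope sketch of why each of those conversions hurts: every time the stream is re-encoded to a different format, the samples land on a different quantization grid, and the new rounding error piles onto the old. The four "codecs" in the chain below (various mu-law and linear PCM quantizers) are stand-ins I chose for illustration, not the actual network codecs.

```python
import numpy as np

def companded_pcm(x, mu, bits):
    """Quantize x on a mu-law companded grid with 2**bits levels."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)   # compress
    y = np.round(y * 2 ** (bits - 1)) / 2 ** (bits - 1)        # quantize
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu       # expand

def linear_pcm(x, bits):
    """Quantize x on a uniform grid with 2**bits levels."""
    return np.round(x * 2 ** (bits - 1)) / 2 ** (bits - 1)

t = np.linspace(0, 1, 8000, endpoint=False)
x = 0.7 * np.sin(2 * np.pi * 440 * t)             # stand-in for a voice signal

stages = [
    ("handset codec (8-bit mu-law)",          lambda s: companded_pcm(s, 255.0, 8)),
    ("carrier transcode (7-bit PCM)",         lambda s: linear_pcm(s, 7)),
    ("undersea recompression (6-bit mu-law)", lambda s: companded_pcm(s, 100.0, 6)),
    ("far-end transcode (8-bit PCM)",         lambda s: linear_pcm(s, 8)),
]

sig = x
for name, codec in stages:
    sig = codec(sig)
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean((sig - x) ** 2))
    print(f"{name:40s} cumulative SNR = {snr:5.1f} dB")
```

Each stage is individually tolerable; it's the cascade that drags the cumulative SNR down, which is exactly why standards that avoid transcoding are attractive.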
Most engineers and algorithm coders know that, despite the use of digital number crunching and its nominally error-free processing, there can still be final errors as a result of repeated calculations and intermediate-stage error buildup. Even with a 64-bit floating-point math application or processor, it is possible to have an accumulation of rounding and truncation errors that add up to a substantial aggregate error in image processing or FFT analysis. With a fixed-point processor, the possibilities for such buildup are greater, unless the coding is carefully done and the numerical-processing package is used with attention to detail.
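A quick way to see that accumulation is to add 0.1 a million times with a 32-bit float accumulator: each addition is individually "almost right," but the rounding errors compound. The Python snippet below is just a demonstration of the effect, with math.fsum thrown in as an example of the kind of careful compensated summation the paragraph alludes to.

```python
import math
import numpy as np

N = 1_000_000
exact = N / 10                           # 100000.0

acc32 = np.float32(0.0)
for _ in range(N):
    acc32 += np.float32(0.1)             # rounds to 32-bit after every add

acc64 = 0.0
for _ in range(N):
    acc64 += 0.1                         # 64-bit accumulate

print(f"float32 sum: {float(acc32):.3f}   (error {float(acc32) - exact:+.3f})")
print(f"float64 sum: {acc64:.6f}  (error {acc64 - exact:+.2e})")
print(f"math.fsum  : {math.fsum([0.1] * N):.6f}   (compensated summation)")
```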
But in most system-level analyses, you assume that the signal deficiencies are on the analog side, rather than the digital one. After all, you usually begin with real-world analog data that comes at best with 1% to 0.1% resolution (spanning roughly 2 to 3 significant digits, or 7 to 10 bits, depending on how you assess the situation), while the numerical-processing side uses far more bits and has far greater nominal precision.
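The arithmetic behind those bit counts is just a base-2 logarithm of the number of distinguishable levels, as this quick check shows:

```python
import math

for pct in (1.0, 0.1, 0.01):
    counts = 100.0 / pct                 # distinguishable levels at that resolution
    print(f"{pct}% resolution -> 1 part in {counts:.0f}"
          f" -> {math.log2(counts):.1f} bits")
```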
Have you ever discovered that your digital processing was the source of data problems, even though you assumed otherwise? Have you ever encountered other design problems where you eventually realized that the source was an area you had automatically assumed could not be the cause?