Every so often someone publishes an article discussing aliasing. I thought I’d jump on the bandwagon – you know, that one in the movies whose wheels seem to be rotating the wrong way. Is aliasing a problem? Does it need fixing, and does fixing it cause another problem elsewhere? Answering that is for a later column; today, a health warning on the arithmetic of aliasing. It’s customary to demonstrate, with the trigonometric identity:

sin(θ + 2πn) = sin θ,  for any integer n

that *if* it’s possible to fit a set of points x(t), spaced at interval *τ* (tau), with a sinewave at frequency f, *then* infinitely many sinusoidal solutions are possible, at frequencies n/*τ* ± f for any integer n (homework – try this!). One of the solutions is the ‘right’ one and the rest are ‘impostors’ or ‘aliases’. But which?
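That first homework assignment can be sketched in a few lines of Python. The 1 ksps rate and 130 Hz tone below are my own illustrative choices, not from the column, and I use cosines so the n/*τ* − f branch works without fiddling with phase:

```python
import numpy as np

# Fit points x(t) spaced at interval tau with a tone at f, then check
# that every member of the alias family n/tau +/- f hits the same points.
tau = 1.0 / 1000.0           # 1 ms spacing, i.e. fs = 1 ksps (assumed)
f = 130.0                    # the 'right' frequency, Hz (assumed)
t = np.arange(32) * tau      # sample instants

x = np.cos(2 * np.pi * f * t)                    # the fitted points x(t)

for n in (1, 2, 3):
    plus  = np.cos(2 * np.pi * (n / tau + f) * t)   # alias at n/tau + f
    minus = np.cos(2 * np.pi * (n / tau - f) * t)   # alias at n/tau - f
    assert np.allclose(x, plus)
    assert np.allclose(x, minus)

print("every n/tau +/- f tone fits the same sample points")
```

The assertions pass because 2π(n/τ ± f)·kτ differs from ±2πf·kτ by a whole number of turns at every sample instant.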

Baseband people assume that the lowest-frequency solution is wanted when they process x(t), and incorrectly infer the presence of a baseband signal at f if x(t) was actually created by sampling one of those higher-frequency aliases. I spent many years designing filters – anti-aliasing filters – to keep those bad boys out (the signals, not the baseband people).
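Here’s a sketch of that baseband trap with made-up numbers (a 900 Hz input to a 1 ksps converter, not figures from the column): the FFT peak lands at 100 Hz, and a baseband person will happily report a 100 Hz signal.

```python
import numpy as np

# A 900 Hz tone sampled at 1 ksps (illustrative numbers): the lowest-
# frequency solution, which baseband people assume, sits at 100 Hz.
fs = 1000.0
N = 50                               # 50 samples -> 20 Hz bin spacing
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 900.0 * t)    # the input is really 900 Hz

spectrum = np.abs(np.fft.rfft(x))
peak_bin = int(np.argmax(spectrum))
print(peak_bin * fs / N)             # -> 100.0 : "sees" a 100 Hz tone
```

An anti-aliasing filter in front of the converter is what stops the 900 Hz tone from ever producing that misleading data set.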

Radio folk vibrate to a higher harmonic; they usually reckon that one of the *higher*-frequency solutions is the wanted one and that x(t) arose by sub-sampling the signal, at a frequency *below* the input frequency. They need anti-aliasing filters too; their filters need a *bandpass* response, because they need to keep not only higher- but also *lower*-frequency solutions from getting into their ADCs.
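The radio view is easy to sketch too. With illustrative numbers of my own choosing, a 2.2 kHz ‘IF’ tone sub-sampled at only 1 ksps produces exactly the same points as a 200 Hz tone would:

```python
import numpy as np

# Sub-sampling sketch (frequencies are assumptions for illustration):
# a 2.2 kHz tone sampled at 1 ksps gives the same data set x(t) as a
# 200 Hz tone -- and the radio designer *wants* the high-frequency one.
fs = 1000.0
t = np.arange(64) / fs

rf = np.cos(2 * np.pi * 2200.0 * t)   # what was actually sampled
lo = np.cos(2 * np.pi * 200.0 * t)    # the baseband look-alike

assert np.allclose(rf, lo)            # indistinguishable sample sets
print("2.2 kHz and 200 Hz give identical samples at 1 ksps")
```

Which is why the radio designer’s bandpass anti-aliasing filter must reject the 200 Hz region as firmly as it rejects everything above 2.2 kHz.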

What does the *spectrum* of that data set x(t) actually look like? In the real world, we play x(t) out through a DAC that pings out a narrow pulse of value x(t_i) at time t_i and check the output with a spectrum analyzer. Within the warm embrace of the virtual world, we perform an FFT on x(t). What we find is... *all* – that’s right, *all* – of the frequencies that could have ‘aliased down’ (or up) to give a particular data set x(t) are present in the spectrum of x(t), whether or not they were present in the signal from which it was sampled.
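You can model that narrow-pulse DAC by zero-stuffing the samples onto a finer time grid and taking the FFT of the pulse train. All the numbers below (a 200 Hz tone, 1 ksps, ×8 stuffing) are my own illustrative picks:

```python
import numpy as np

# Model a narrow-pulse DAC: one 'ping' of height x[k] per sample period,
# zeros in between, then look at the spectrum of the pulse train.
fs, f, N, L = 1000.0, 200.0, 50, 8        # assumed illustrative values
x = np.sin(2 * np.pi * f * np.arange(N) / fs)

pulses = np.zeros(N * L)                  # effective rate 8 ksps
pulses[::L] = x                           # the narrow pings

mag = np.abs(np.fft.rfft(pulses))         # bins are 20 Hz wide
present = [i * 20 for i in range(len(mag)) if mag[i] > 1.0]
print(present)   # -> [200, 800, 1200, 1800, 2200, 2800, 3200, 3800]
```

Every frequency that could have aliased to this x(t) shows up, at equal strength: 200 Hz and the whole n·1000 ± 200 family.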

I once heard an apps guy explain that aliasing happened when you put an input signal at a frequency greater than half the sampling frequency (usually called the Nyquist frequency) into an ADC and it “came out as” a signal at another frequency less than the Nyquist frequency, as if it had been reflected off an invisible mirror. This is a nice analogy, but it obscures the truth. If you look at the spectrum of a sampled signal, *the original frequency component is always there*. Read that again; this point is not made clearly enough or often enough. It’s just that the impostor frequencies are *also* present, and sometimes they turn up where our prejudice tells us that the ‘actual’ signal should be. Nothing’s gone missing, we just picked up some hitchhikers.

Perhaps the (German) apps guy was led astray by his *Muttersprache*. The German term for aliasing is *Rückfaltung*, or ‘folding-back’, and this reinforces the misconception that the signal frequency has been changed. Practitioners who work in the sampled domain know that what really muddles things up is that the frequency spectrum of a complicated signal can end up *overlapping* the extra spectrum formed through the sampling process. Once these spectra have overlapped, it’s generally not possible to disentangle them. All the original frequencies are still present; it’s just that there’s now a region where these frequencies are mixed up with each other’s aliases. Figure 1, a picture from DSP guru Richard Lyons, shows stuff getting nicely mixed up (bottom trace):

**Figure 1**

**How aliasing messes things up**
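The hopelessness of untangling overlapped spectra can be shown with two tones that sit symmetrically about half the sampling frequency – 400 Hz and 600 Hz at 1 ksps in this sketch (my own illustrative numbers):

```python
import numpy as np

# Two tones mirrored about fs/2 (illustrative numbers): after sampling,
# their sample sets are identical apart from a sign, so once their
# spectra overlap nothing can take them apart again.
fs = 1000.0
t = np.arange(40) / fs

low  = np.sin(2 * np.pi * 400.0 * t)   # in-band tone
high = np.sin(2 * np.pi * 600.0 * t)   # its mirror about fs/2

assert np.allclose(high, -low)         # same samples, sign flipped
print("400 Hz and 600 Hz are inseparable once sampled at 1 ksps")
```

Any mixture a·(400 Hz) + b·(600 Hz) sampled this way depends only on a − b; infinitely many input mixtures give the identical data set.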

A related effect is visible in the reproduction of high frequency signals by digital audio systems. Audio DACs use filters to convert the digits into smooth (or even smoo-ooth, if you like that sort of music) signals by knocking out all the high frequency components. Well, *almost* all. To cut chip cost, a particularly efficient digital filter design (called a half-band filter, and used in ADCs too) is often used. This filter doesn’t have much rejection around half the sampling frequency – only 6 dB, in fact.
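That 6 dB figure isn’t an accident – it’s baked into the half-band structure. Here’s a check using a small textbook half-band FIR (these particular taps are a common classroom example, not anything from the column): every other tap is zero, and the response at the crossover point is exactly −6 dB.

```python
import numpy as np

# A small textbook half-band FIR (assumed example taps): alternate taps
# are zero except the 0.5 centre tap, which forces H(f) + H(fs/2 - f) = 1
# and therefore exactly -6 dB at the crossover.
h = np.array([-1, 0, 9, 16, 9, 0, -1]) / 32.0

def mag(h, w):
    """Magnitude response at normalized frequency w (radians/sample)."""
    n = np.arange(len(h))
    return abs(np.sum(h * np.exp(-1j * w * n)))

print(20 * np.log10(mag(h, 0)))          # 0 dB at DC
print(20 * np.log10(mag(h, np.pi / 2)))  # about -6.02 dB at the crossover
```

The symmetry that makes half the taps zero (and the chip cheap) is exactly what pins the crossover response at one half, i.e. −6 dB.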

Take a CD containing an x(t) sampled from a 22 kHz sinewave at the standard 44.1 ksps rate. A 50 ms extract of this signal (attenuated almost 6 dB by the half-band filter) looks like Figure 2:

**Figure 2**

**What a 22 kHz signal sampled at a 44.1 ksps rate “looks like”**

What does the spectrum of this particular x(t) in Figure 2 look like? Correct: *all* the possible sinewave solutions – referred to as images in this case, where we’re reproducing rather than capturing – that fit x(t) will be there alongside the 22 kHz one. The most worrying one is at 22.1 kHz, which our low-cost half-band digital filter lets through at just slightly lower amplitude than the 22 kHz component. What happens when you sum a 22 kHz sinewave and a 22.1 kHz sinewave at nearly the same amplitude? Right again! You get something that looks like a 22.05 kHz sinewave almost fully balanced-modulated with 50 Hz – very like the plot of the sampled output above. If that hits any kind of nonlinearity... well, just think of how AM radio works. Your second homework assignment: lash this up in LTSpice or some other simulator.
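For the second homework assignment, Python works as well as LTSpice. This sketch uses exactly equal amplitudes for the tone and its image (the real half-band case leaves them only *nearly* equal), and the product-to-sum identity does the rest:

```python
import numpy as np

# Homework sketch: sum the 22 kHz tone and its 22.1 kHz image, then
# compare with a 22.05 kHz 'carrier' multiplied by a 50 Hz envelope.
# Equal amplitudes assumed here; the half-band case is only nearly equal.
t = np.arange(0, 0.05, 1.0 / 1_000_000)   # 50 ms on a fine 1 MHz grid

pair = np.sin(2 * np.pi * 22000 * t) + np.sin(2 * np.pi * 22100 * t)
beat = 2 * np.sin(2 * np.pi * 22050 * t) * np.cos(2 * np.pi * 50 * t)

assert np.allclose(pair, beat)   # the two descriptions are identical
print("tone + image == 50 Hz-modulated 22.05 kHz carrier")
```

With equal amplitudes the modulation is 100 %; back off the image by the half-band filter’s small extra attenuation and you get the “almost fully” modulated waveform of Figure 2.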

And, of course, we don’t know whether this x(t) was the result of applying a 22 kHz or a 22.1 kHz signal to the ADC, for the same reason. In other words, we can apply *either* 22 kHz *or* 22.1 kHz to the ADC and we’ll get *both* of them *à la* Figure 2 at the DAC’s output. And if *that’s* accurate reproduction of the original signal, then I’m not at the editor’s word count limit. Let me know if the aliasing monster ever bit *you* when you weren’t looking! – Kendall