RF fading stirs simulation, test issues

RF signals passing through the air are distorted by atmospheric and environmental impairments, including multipath scattering and dispersion. A new digital implementation of channel simulation promises to alleviate some of the toughest and most cost-intensive testing challenges.

Communications quality between a base-station transmitter and a mobile (or stationary) receiver depends on a number of factors, including the quality of the propagation channel through which the signal passes. As the transmitted signal is absorbed by the atmosphere and reflects off buildings and trees, it experiences fluctuations in its amplitude and phase. This phenomenon is typically called fading; it is sometimes referred to as multipath (a specific type of fading) or grouped under the more general category of channel impairments.

A signal transmitted from a base station can take different paths to a receiver, due to reflection, diffraction and local scattering. (Hence, multipath fading.) Different paths have different lengths associated with them, which causes the receiver to “see” multiple copies of the signal at different times of arrival and with varying amplitudes. Also, the signal can shift in phase as it is reflected and scattered off local objects. As the receiver's antenna moves through space, it will experience peaks and valleys of signal strength as these interfering wavelets add and subtract at the receiver.
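The effect described above can be sketched numerically. The following is an illustrative toy model (not any vendor's algorithm): three paths with assumed lengths, gains and arrival directions are summed as complex phasors, and the receiver envelope is evaluated as the antenna moves through a couple of wavelengths.

```python
import numpy as np

# Hypothetical multipath geometry: all values below are assumptions
# chosen for illustration, not measured data.
fc = 900e6                     # carrier frequency, Hz
c = 3e8                        # speed of light, m/s
wavelength = c / fc

path_lengths = np.array([300.0, 341.7, 402.3])  # meters, assumed
path_gains = np.array([1.0, 0.6, 0.35])         # assumed reflection losses
arrival_cos = np.array([1.0, -0.5, 0.3])        # assumed direction cosines

def received_envelope(offset_m):
    """Sum the complex path contributions at a given antenna offset.

    Each path's length changes with antenna position according to its
    arrival direction, so the relative phases shift as the antenna moves.
    """
    phases = 2 * np.pi * (path_lengths + offset_m * arrival_cos) / wavelength
    return abs(np.sum(path_gains * np.exp(-1j * phases)))

# Moving the antenna a fraction of a wavelength swings the envelope
# between constructive peaks and deep fades.
offsets = np.linspace(0, 2 * wavelength, 200)
env = np.array([received_envelope(d) for d in offsets])
print(f"peak envelope {env.max():.2f}, deepest fade {env.min():.2f}")
```

Even over a two-wavelength traverse, the envelope swings from near the fully constructive sum of the path gains down into a deep fade, which is exactly the peaks-and-valleys behavior a moving receiver experiences.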

Designers of today's wireless devices have to test their designs under real-world channel conditions. Channel impairments can be simulated by mathematical models that mimic the response of a fading channel. These models use statistics to express what an electromagnetic wave will experience as it encounters physical obstacles; the most common are Rayleigh fading, Rician fading and Suzuki fading.

Rayleigh fading is a distribution that models channel propagation when there is no strong line-of-sight path from transmitter to receiver. This can represent the channel conditions seen on a busy city street, where the base station is hidden behind a building several blocks away. In rural environments, where the multipath profile includes a few reflected paths combined with a strong line-of-sight path, the received signal envelope follows a Rician distribution. The ratio of direct-path energy to multipath energy is called the K factor. Observed in the frequency domain, this effect appears as a spike in power whose magnitude is determined by the K factor. Suzuki fading superimposes small-scale fading from multipath onto large-scale fading from reflection and diffraction. The large scale follows a log-normal distribution and the small scale follows a Rayleigh distribution.
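The three distributions can be sampled directly. A minimal sketch, with an assumed K factor of 10 and an assumed 8 dB log-normal shadowing standard deviation (both illustrative values, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Rayleigh: no line-of-sight; the envelope of a zero-mean complex
# Gaussian process (unit total power).
rayleigh = np.abs((rng.standard_normal(n) + 1j * rng.standard_normal(n))
                  / np.sqrt(2))

# Rician: a fixed direct (line-of-sight) component plus scatter.
# K is the ratio of direct-path power to scattered power (assumed K = 10).
K = 10.0
los = np.sqrt(K / (K + 1))
scatter = np.sqrt(1.0 / (2 * (K + 1)))
rician = np.abs(los + scatter * (rng.standard_normal(n)
                                 + 1j * rng.standard_normal(n)))

# Suzuki: Rayleigh small-scale fading whose local mean is modulated by
# log-normal (shadowing) large-scale fading; 8 dB std dev assumed.
shadow_db = rng.normal(0.0, 8.0, n)
suzuki = rayleigh * 10 ** (shadow_db / 20)

print(f"Rayleigh envelope mean {rayleigh.mean():.3f}, "
      f"Rician envelope mean {rician.mean():.3f}")
```

Note that the Rician envelope varies much less than the Rayleigh one: with a large K factor the direct ray dominates and the fades are shallow, which matches the frequency-domain "spike" described above.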

Worst-case scenario

While the mean path loss from shadowing and large-scale reflection follows a normal (Gaussian) distribution in decibels that will typically degrade the signal 6 to 10 dB, the worst-case scenario for non-line-of-sight small-scale fading will degrade the signal 20 to 30 dB in the deepest fades, when multipath components arrive directly out of phase.

What this means for a device designer is that sufficient “fading margin” must be built into the link budget. The signal power must be strong enough at transmission, or the receiver sensitive enough, to withstand a deep fading condition in excess of 40 to 50 dB.
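As a back-of-envelope illustration of that link-budget arithmetic, here is a sketch with assumed numbers (transmit power, path loss and receiver sensitivity are all hypothetical; the two margins use the figures quoted above):

```python
# Illustrative link budget; all absolute levels are assumptions.
tx_power_dbm = 43.0         # base-station output, dBm (assumed)
path_loss_db = 110.0        # median path loss, dB (assumed)
shadow_margin_db = 10.0     # large-scale fading allowance (6 to 10 dB)
fast_fade_margin_db = 30.0  # small-scale deep-fade allowance (20 to 30 dB)

rx_power_dbm = (tx_power_dbm - path_loss_db
                - shadow_margin_db - fast_fade_margin_db)
sensitivity_dbm = -110.0    # assumed receiver sensitivity

print(f"worst-case received power: {rx_power_dbm:.1f} dBm")
print(f"link closes: {rx_power_dbm >= sensitivity_dbm}")
```

With these numbers the worst-case received level is -107 dBm, leaving only 3 dB of headroom above sensitivity; a slightly deeper fade or longer path would break the link, which is why the combined 40 to 50 dB margin matters.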

Current methods of channel simulation operate on an RF-in, RF-out basis, or an analog I/Q-in, RF-out basis. In this process, the signal to be faded is downconverted, digitized or both. The fading profile is applied to the digital signal, and the result is upconverted back to RF. Then the noise (additive white Gaussian noise, or AWGN) is added in. Noise must be kept separate from the fading profile because AWGN is independent of any multipath channel response.

The process has two major inefficiencies: conversion loss and noise calibration. Conversion loss occurs every time a signal is sampled from analog or reconstructed to analog, adding errors that are caused by the test equipment, not the channel or device under test. Similarly, adding noise changes the total power level as well as the carrier-to-noise ratio (C/N). To calibrate the noise level to the incoming signal power, the carrier power after fading must be determined statistically, an expensive and time-consuming process.
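The calibration step can be sketched in a few lines: measure the carrier power after fading, then scale the AWGN so the requested C/N is hit. This is a generic illustration of the principle, not any instrument's procedure; the faded carrier below is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Stand-in faded carrier: a unit tone with Rayleigh fading applied.
fading = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
carrier = fading * np.exp(2j * np.pi * 0.1 * np.arange(n))

# Calibrate: measure carrier power *after* fading, then scale the
# noise so that C/N lands at the requested value.
target_cn_db = 15.0
carrier_power = np.mean(np.abs(carrier) ** 2)        # measured, not assumed
noise_power = carrier_power / 10 ** (target_cn_db / 10)
noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n)
                                    + 1j * rng.standard_normal(n))

faded_plus_noise = carrier + noise
achieved_cn_db = 10 * np.log10(carrier_power
                               / np.mean(np.abs(noise) ** 2))
print(f"achieved C/N: {achieved_cn_db:.2f} dB")
```

The expensive part in an analog setup is the first measurement: the carrier power after fading must be estimated statistically over time. When fading and noise addition both happen in the digital domain, that power is known exactly at the point the noise is scaled, which is the advantage claimed for the digital approach.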

A fading test solution for baseband device designers, like Agilent Technologies' “Baseband Studio for Fading” (an addition to the company's E4438C ESG vector signal generator), allows engineers to avoid the calibration problems associated with traditional fading simulators. The baseband signal from the ESG is sent to the fader and is faded, and then noise is added, all in the digital domain. In fact, the entire process remains digital right up to the point where it is upconverted to RF.

It is imperative to verify receiver performance in real-world conditions as early as possible in the design cycle. The impact of fading on wireless communications can be mitigated through intelligent design and thorough testing.

Noah Schmitz is a wireless application marketing engineer at the Microwave Test Accessories Group, Agilent Technologies (Lake Stevens site), in Everett, Wash.

See related chart
