Editor’s note: I am pleased to introduce this month’s guest TI author, Robert Keller, Systems Manager in TI’s High-Speed Products group. He has 15 years of experience supporting high-speed products in wireless infrastructure communications, test and measurement, and military systems. He received a B.A. in Physics and Mathematics from Washington University in St. Louis, Missouri, and a Ph.D. in Applied Physics from Stanford University. He holds 10 U.S. patents in networking and sensor applications. Robert can be reached at email@example.com.
Clock phase noise, or jitter (phase noise integrated across offset frequency), is a significant concern for high-speed data converter systems, and in some cases it can limit system performance. Significant effort, cost and power are spent to generate clocks that are as clean as possible in order to minimize the impact of clock phase noise. However, there is a class of systems in which the impact of clock phase noise can be reduced: systems where a signal is generated by a digital-to-analog converter (DAC) and then sampled by an analog-to-digital converter (ADC). Examples include radar, ultrasound and digital pre-distortion (DPD) feedback systems.
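The relationship between phase noise and jitter mentioned above can be sketched numerically: integrate the single-sideband phase noise L(f) over offset frequency and convert the result to RMS time jitter. The phase noise profile and the 1 GHz clock frequency below are illustrative assumptions, not measured data from this article:

```python
import numpy as np

# Assumed illustrative SSB phase noise profile L(f) in dBc/Hz versus
# offset frequency (Hz); not data from a real device.
f = np.array([1e3, 1e4, 1e5, 1e6, 1e7])                     # offset, Hz
L_dbc = np.array([-100.0, -110.0, -120.0, -130.0, -140.0])  # dBc/Hz

f_clock = 1e9  # assumed 1 GHz clock frequency

# Trapezoidal integration of the phase noise density in linear power units.
s = 10.0 ** (L_dbc / 10.0)                          # rad^2/Hz (SSB)
power = np.sum((s[:-1] + s[1:]) / 2.0 * np.diff(f)) # rad^2 (SSB)

# RMS jitter: sigma_t = sqrt(2 * integrated SSB power) / (2*pi*f_clock)
rms_jitter = np.sqrt(2.0 * power) / (2.0 * np.pi * f_clock)
print(f"RMS jitter: {rms_jitter * 1e15:.0f} fs")
```

With this assumed profile the integration yields roughly 300 fs of RMS jitter; a real measurement would of course use the clock's actual phase noise plot.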
Figure 1 shows a system where a signal is generated by a DAC, sent through a channel, and then sampled by an ADC. The channel could be air for a radar system, a human body for ultrasound, or a power amplifier for a DPD feedback system. Jitter on the DAC clock causes each sample to be output slightly early or late. This timing error translates into an amplitude error, which appears as noise. However, if the ADC shares the same clock, with the same jitter, then the ADC sample time will match the DAC sample time. While the DAC output signal will have noise, this noise will be cancelled during ADC sampling, giving a higher signal-to-noise ratio (SNR) for measuring the effect of the channel.
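A minimal simulation illustrates this cancellation, using the standard model that a jittered clock edge at t + dt emits or samples the ideal waveform at t + dt instead of t. The sample rate, tone frequency and 2 ps jitter value are assumed for illustration, and the channel is taken as ideal:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1e9                      # assumed 1 GS/s converter rate
n = 4096
f_tone = fs * 200 / n         # coherent test tone (~48.8 MHz)
t_ideal = np.arange(n) / fs
sigma_j = 2e-12               # deliberately large 2 ps RMS clock jitter

jitter = rng.normal(0.0, sigma_j, n)   # jitter common to DAC and ADC edges

def tone(times):
    return np.sin(2.0 * np.pi * f_tone * times)

ideal = tone(t_ideal)                # intended DAC sample values
dac_out = tone(t_ideal + jitter)     # values produced at the jittered edges

# Independent, clean ADC clock: the jitter-induced amplitude error on the
# DAC output appears as noise in the ADC capture.
noise_clean_adc = dac_out - ideal
snr_clean_db = 10.0 * np.log10(np.mean(ideal**2) / np.mean(noise_clean_adc**2))

# Shared clock: the ADC samples at the same jittered edges, so for an
# ideal channel it reads back exactly what the DAC produced -- the common
# jitter cancels, and the residual error is zero in this idealized model.
adc_shared = dac_out
noise_shared = adc_shared - dac_out   # identically zero here

print(f"SNR with independent clean ADC clock: {snr_clean_db:.1f} dB")
```

With these assumed numbers, the clean-clock capture is jitter-limited to roughly 64 dB SNR (consistent with SNR ≈ −20·log10(2π·f·σ)), while the shared-clock capture is noise-free in this idealized model; real hardware would still see the uncorrelated noise sources discussed next.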
There are limitations on clock phase noise cancellation. First, only the clock phase noise that is common to the DAC and ADC will cancel. In Figure 1, only phase noise generated before the clock signal is split between the DAC and ADC will be cancelled; in this case, that is the clock synthesizer's phase noise. Any phase noise generated in the path after the split will not cancel, for example noise from the clock input buffers and clock paths inside the DAC and ADC.
A second limitation comes from any offset in time between the clock edge on which the DAC generates a point of the analog signal and the clock edge the ADC uses to sample that same point. If this delay is large, the phase noise on the DAC clock edge and the ADC clock edge will not be well correlated. As a result, the bandwidth of the phase noise correlation is approximately the inverse of the delay, and only phase noise at offsets below this bandwidth will be cancelled. For example, a DPD feedback system with 10 ns of channel delay would cancel phase noise up to roughly 100 MHz offset, whereas a radar system with tens of µs of roundtrip delay only cancels phase noise below tens of kHz offset.
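This delay limitation can be made quantitative with a common first-order model (the same one used for delay-line frequency discriminators, applied here as an assumption): with a DAC-to-ADC delay τ, the residual phase noise power is the original scaled by |1 − exp(−j2πfτ)|² = 4·sin²(πfτ), so suppression is deep for f ≪ 1/τ and vanishes near f ≈ 1/τ:

```python
import numpy as np

def residual_db(f_offset, tau):
    """Residual (uncancelled) phase noise relative to the original, in dB,
    under the delayed-difference model |1 - exp(-j*2*pi*f*tau)|^2."""
    return 10.0 * np.log10(4.0 * np.sin(np.pi * f_offset * tau) ** 2)

# 10 ns DPD feedback delay: deep suppression at low offsets...
print(residual_db(1e3, 10e-9))    # 1 kHz offset: roughly -84 dB residual
# ...but at f = 1/(2*tau) = 50 MHz the residual actually exceeds the
# original by 6 dB (the delayed noise adds rather than cancels).
print(residual_db(50e6, 10e-9))

# ~30 us radar roundtrip delay: useful cancellation only at kHz offsets.
print(residual_db(1e3, 30e-6))
```

In this model the "correlation bandwidth ≈ 1/delay" statement above is an order-of-magnitude rule: strictly, the residual stays below the original only for fτ < 1/6, and beyond that the delayed phase noise can even add constructively.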
To demonstrate the effect of clock phase noise cancellation, we generated a deliberately noisy clock using an arbitrary waveform generator, with a phase noise pedestal extending to 10 MHz offset (Figure 2). The noisy clock was used to clock the DAC, which generated a single-tone analog signal. The effect of the noisy clock was quite apparent on the tone at the DAC output, producing a pedestal of increased noise around the tone. The tone was then sampled by an ADC clocked either with the same noisy clock used by the DAC or with a separate, clean clock synthesizer.
The results are shown in Figure 3. When the clean clock was used, the noise on the DAC output due to the noisy clock was clearly visible in the ADC spectrum.
However, when the ADC shared the DAC's noisy clock, the phase noise pedestal was cancelled by more than 40 dB, leaving just the DAC's output tone.
Please join us next time, when we will discuss the increasing need for high-voltage standoff in CAN applications.