Make better measurements by using all the information you can get
Some of the things we do as part of good measurement practice fit into a larger context, and understanding that context can save you time and money. Sometimes it can even save your sanity when you're trying to meet impossible targets, but that's a topic for another day.
Take, for example, the simple technique for setting the best input level to measure harmonic distortion with a signal analyzer. The analyzer itself contributes distortion that rises with the signal level at its input mixer, and that distortion can interfere with that of the signal under test. To avoid this problem, the analyzer’s attenuator—sitting between the input and the mixer—is adjusted to bring the mixer signal level down to a point where analyzer distortion can be ignored.
It’s a straightforward process of comparing measured distortion at different attenuation values, as shown below.
Adjusting analyzer attenuation to minimize its contribution to measured second harmonic distortion. Increasing attenuation (blue trace) improves the distortion measurement, reducing the distortion contributed by the analyzer.
When further increases in attenuation produce no improvement in measured distortion, the analyzer and measurement configuration are optimized for the specific signal in question. This is a deceptively powerful result, a customization beyond anything you could achieve from published specifications alone.
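The stepping procedure is easy to sketch in code. The model below is hypothetical: it assumes the analyzer's second-harmonic distortion (in dBc) improves about 1 dB per dB of mixer-level reduction, as is typical for second-order products, and that the analyzer and device contributions combine as a simple power sum. The specific numbers (a 0 dBm input, -75 dBc device distortion, -60 dBc analyzer distortion at a -10 dBm mixer level) are illustrative, not from any data sheet.

```python
import math

def power_sum_dbc(a_dbc, b_dbc):
    """Combine two distortion components (dBc) as a power sum.
    Real contributions can add or cancel in phase; this is the
    simple incoherent model."""
    return 10 * math.log10(10 ** (a_dbc / 10) + 10 ** (b_dbc / 10))

def measured_h2_dbc(atten_db, input_dbm=0.0, dut_h2_dbc=-75.0,
                    analyzer_h2_ref_dbc=-60.0, ref_mixer_dbm=-10.0):
    """Modelled second-harmonic reading: analyzer-contributed H2
    (dBc) improves 1 dB for every dB the mixer level drops below
    the reference mixer level."""
    mixer_dbm = input_dbm - atten_db
    analyzer_h2_dbc = analyzer_h2_ref_dbc + (mixer_dbm - ref_mixer_dbm)
    return power_sum_dbc(analyzer_h2_dbc, dut_h2_dbc)

def find_attenuation(step_db=2.0, threshold_db=0.5, max_atten_db=40.0):
    """Step attenuation up until an extra step improves the
    measured distortion by less than threshold_db."""
    atten = 0.0
    prev = measured_h2_dbc(atten)
    while atten < max_atten_db:
        reading = measured_h2_dbc(atten + step_db)
        if prev - reading < threshold_db:  # improvement has stalled
            break
        atten += step_db
        prev = reading
    return atten, prev

atten, reading = find_attenuation()
print(f"{atten:.0f} dB attenuation, {reading:.1f} dBc measured")
```

With these made-up numbers the loop settles around 30 dB of attenuation, where the analyzer's contribution has fallen well below the device's own -75 dBc distortion. On a real instrument the same loop would drive the input attenuator and read a marker instead of calling the model.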
Of course, the reason not to use a very low mixer level (i.e., lots of attenuation) is that the resulting high noise level can obscure low-level signals. Indeed, if the goal is to reveal spurious or other small signals, you'll use less attenuation, trading distortion performance for improved signal/noise ratio (SNR).
Setting attenuation levels to one type of performance is standard practice, and it’s useful to understand this in the larger context of adding information to customize the measurement to your priorities. This approach has a number of benefits that I’ve summarized previously in the graphic below.
A common tradeoff is between cost, time, and information. Information is an output of the measurement process, but it can also be an input, improving the process.
The interactions and tradeoffs are worth a closer look. One possible choice is to purchase an analyzer with higher performance in terms of distortion, noise, and accuracy. Alternatively, you can compensate for the increased noise (due to the high attenuation value) by using a narrower resolution bandwidth (RBW). You’ll get good performance in both distortion and noise floor, but at the cost of increased measurement time.
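The arithmetic behind that tradeoff is worth making explicit. The displayed noise floor tracks 10·log10 of the RBW ratio, while for a classic swept measurement the sweep time scales roughly with span/RBW², so a 10x narrower RBW buys about 10 dB of noise floor at a cost of roughly 100x the sweep time. A quick sketch:

```python
import math

def noise_floor_shift_db(rbw_old_hz, rbw_new_hz):
    """Change in displayed average noise level when RBW changes;
    negative means a lower (better) noise floor."""
    return 10 * math.log10(rbw_new_hz / rbw_old_hz)

def sweep_time_scale(rbw_old_hz, rbw_new_hz):
    """Swept-analyzer rule of thumb: sweep time scales with
    span/RBW^2 for a fixed span, so narrowing RBW by 10x costs
    roughly 100x the time (FFT-based analyzers fare better)."""
    return (rbw_old_hz / rbw_new_hz) ** 2

# Narrowing from a 10 kHz to a 1 kHz RBW:
print(noise_floor_shift_db(10e3, 1e3))  # -10.0 dB
print(sweep_time_scale(10e3, 1e3))      # 100x
```

These are rules of thumb; the exact constants depend on the analyzer's filter shapes and sweep architecture.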
That alternative approach involves using your engineering knowledge to add information to the process, perhaps in ways you haven’t considered. The optimization of distortion and noise described above is a good example. Only you know how to best balance the tradeoffs as they apply to your priorities—and adjusting the measurement setup means adding information that the analyzer does not have.
There are other important sources of information to add to this process, ones that can improve the overall balance of performance or speed without other compromises. Some involve information you possess, and some use information from Keysight and from the analyzer itself.
I’ll describe them in a part 2 post here next month. In the meantime you can find updated information about spectrum analysis techniques at our new page Spectrum Analysis At Your Fingertips. We’ll be adding information there and, as always, here at the Better Measurements RF Test Blog.