Originally posted Mar 10, 2015
Precisely small is more of a challenge than precisely big
Recently, I’ve been looking at sensitivity measurements and getting acquainted with the difficulty of doing things correctly at very low signal levels. It’s an interesting challenge and I thought it would be useful to share a couple of surprising lessons about specifications and real-world performance.
From the outset, I’ll concede that data sheets and detailed specifications can be boring. Wading through all that information is tedious, but it’s the key to performance you can count on, and specs are a big part of the reason you buy test equipment in the first place. Besides, extensive specifications are better than the alternative.
Sensitivity measurements show the role and benefits of a good data sheet in helping you perform challenging tests. Say, for example, you’ve got a sensitivity target of 1 µV and you need a signal of just that size, with a desired tolerance of ±1 dB. In a 50Ω system, that single microvolt is −107 dBm, and differences of 0.1 dB amount to only about 10 nV.
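To see where those numbers come from, here’s a small sketch of the arithmetic: power into the load is V²/R, dBm is that power referenced to 1 mW, and a dB step scales voltage by 10^(dB/20). The function name is mine, not from any instrument library.

```python
import math

def volts_to_dbm(v_rms, r_ohms=50.0):
    """Convert an RMS voltage into dBm for a given load resistance."""
    p_watts = v_rms ** 2 / r_ohms           # P = V^2 / R
    return 10 * math.log10(p_watts / 1e-3)  # power referenced to 1 mW

v = 1e-6  # 1 uV RMS
print(f"{volts_to_dbm(v):.1f} dBm")  # -> -107.0 dBm in a 50-ohm system

# Voltage changes corresponding to +0.1 dB and +1 dB steps at this level
for db in (0.1, 1.0):
    delta = v * (10 ** (db / 20) - 1)
    print(f"{db} dB step = {delta * 1e9:.0f} nV")
```

Running this shows that a 0.1 dB step at 1 µV is only about 12 nV of voltage change, which is why level accuracy at these signal sizes is such a demanding requirement.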
The hard specs for a Keysight MXG X-Series microwave signal generator are ±1.6 dB, and they extend only down to −90 dBm, so the guaranteed performance falls short of what this measurement requires. However, it’s worth keeping in mind that the specs cover a wide range of operating conditions, well beyond what you’ll encounter in this case.
Once again, this is a good time to consider adding information to the measurement process as a way to get more from it without changing the test equipment. A relevant item from the signal generator data sheet illustrates my point.
The actual measured performance of a set of MXG microwave signal generators is shown over a frequency range up to 20 GHz, and the statistical distribution is provided as well. Though the measurement conditions are not as wide as for the hard specs, these figures are a better indication of performance in most situations.
The performance suggested by this graph is very impressive—much better than the hard specs over a very wide frequency range—and it applies to the kind of low output level we need for our sensitivity measurement. Accuracy is almost always better than ±0.1 dB, dramatically better than the hard spec.
The graph also includes statistical information that relates to the task at hand. Performance bounds are given for ±one standard deviation, and this provides a 68% confidence level if the distribution is normal (Gaussian). If I understand the math, a tolerance of ±0.2 dB would then correspond to two standard deviations and better than 95% confidence.
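The confidence arithmetic above can be checked directly. For a normal distribution, the probability of falling within ±k standard deviations is erf(k/√2); assuming the graph’s ±1σ bound corresponds to ±0.1 dB, a ±0.2 dB tolerance is 2σ. This is a generic statistics sketch, not anything taken from the data sheet itself.

```python
import math

def gaussian_confidence(k_sigma):
    """Two-sided probability that a normal variable lies within +/- k sigma."""
    return math.erf(k_sigma / math.sqrt(2))

print(f"+/-1 sigma (+/-0.1 dB): {gaussian_confidence(1):.1%}")  # ~68.3%
print(f"+/-2 sigma (+/-0.2 dB): {gaussian_confidence(2):.1%}")  # ~95.4%
```

The 2σ figure of about 95.4% is what justifies the “better than 95% confidence” claim, provided the underlying distribution really is close to Gaussian.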
The time spent wading through a data sheet is amply rewarded, and the right confidence can then be attached to the performance of a tricky measurement. The confidence you need in your own measurements may be different, but the principle is the same and the process of adding information will improve your results.
So far, we’ve taken advantage of information that is generic to the instrument model involved. Even more specific information may be available to you, and I’ll discuss that in a future post.