
Efficient Averaging: All Samples Are Not Created Equal

Blog Post created by benz on Oct 13, 2016

Originally posted Sept 25, 2015

 

Is noise driving me mad, or just driving my interest in statistics?

I suspect that many of you, like me, have no special interest in statistics per se. However, we’ve also learned how powerful statistics can be in revealing the characteristics of our world when noise obscures them. This is especially true with our circuits and signals.

Two commonly used statistical tools are averaging and analysis of sample distributions. A while back I finally got around to looking at a distribution question that had been bugging me, and now it’s time to understand one aspect of averaging a little better.

Averaging is probably the most common statistical tool in our world, and we are often using one or more forms at once, even if we’re not explicitly aware of doing so.

Averaging is used a lot because it’s powerful and easy to implement. Even the early spectrum analyzers had analog video bandwidth filters, typically averaging the log-scaled video signal. These days many signal analyzers perform averaging using fast DSP. The speed is a real benefit because noise may be noisier than we expect and we need all the variance reduction we can get.

Years ago, I learned a rule of thumb for averaging that was useful, even though it was wrong: The variance of a measurement decreases inversely with the square root of the number of independent samples averaged. It’s a way to quantify the amount of averaging required to achieve the measurement consistency you need.

It’s a handy guide, but I remembered it incorrectly. It is the standard deviation that goes down with the square root of the number of independent samples averaged; the variance, being the square of the standard deviation, falls in direct proportion to the number of samples.
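
To see the corrected rule in action, here’s a minimal numpy sketch (illustrative code of my own, not anything from an analyzer) that averages N independent noise samples many times and compares the spread of the result to the 1/sqrt(N) prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 20_000  # repeated measurements of each average

for n in (1, 4, 16, 64):
    # Each row is one n-sample average of independent, unit-variance noise
    means = rng.standard_normal((n_trials, n)).mean(axis=1)
    print(f"N = {n:3d}  std of average: {means.std():.4f}   "
          f"1/sqrt(N) = {1 / np.sqrt(n):.4f}")
```

The measured spread tracks 1/sqrt(N) closely, while the variance (the square of these figures) falls in direct proportion to N.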

An essential condition is sometimes overlooked in applying this rule: the samples must be independent, not correlated with each other by processes such as filtering or smoothing. For example, a narrow video bandwidth (VBW) constrains the effective rate of independent samples from the IF detector, no matter how fast the samples are produced. The same goes for the RBW filter, though the averaging effect of the VBW can be ignored if it is at least three times wider than the RBW (another rule of thumb).
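
A quick simulation shows why the independence condition matters. In the sketch below (my own stand-in for the analyzer hardware, with a 16-tap moving average playing the role of a narrow VBW filter), averaging 64 correlated samples improves the standard deviation far less than the sqrt(64) = 8x that independent samples deliver:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_avg = 20_000, 64

raw = rng.standard_normal((n_trials, n_avg + 15))

# "Correlated" samples: a 16-tap moving average stands in for a narrow
# VBW filter, so each output sample shares data with its 15 neighbors.
kernel = np.ones(16) / 16
smooth = np.apply_along_axis(np.convolve, 1, raw, kernel, mode="valid")

for name, data in [("independent", raw[:, :n_avg]), ("correlated", smooth)]:
    improvement = data.std() / data.mean(axis=1).std()
    print(f"{name:12s}  std improvement from {n_avg} averages: "
          f"{improvement:4.1f}x  (ideal: 8.0x)")
```

The 64 filtered samples behave like only a handful of independent ones, roughly 64 divided by the filter length, so the improvement is closer to 2x than 8x.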

What does the effect of correlated samples look like in real spectrum measurements? Performing a fixed number of averages with independent and correlated samples makes it easy to see.

A 50-trace average is performed on spectrum measurements in a vector signal analyzer. In the top trace the samples are independent or uncorrelated. In the bottom trace the samples are correlated by overlap processing of the data, resulting in a smaller averaging effect.

For convenience in generating this example I used the 89600 VSA software and trace averaging with overlap processing of recorded data. In overlap processing, successive spectrum calculations include a mix of new and previously processed samples. This is similar in concept to a situation in which an averaging display detector samples much faster than the VBW filter. The average is valid, but variance and standard deviation do not decrease as much as the number of samples in the average would suggest.
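
Here’s a rough numpy reconstruction of that experiment (a sketch of my own, not the 89600 VSA’s actual processing): fifty magnitude-squared FFT traces of white noise are averaged, first from disjoint blocks of data and then from heavily overlapped blocks. The ripple left in the averaged trace shows how much averaging really took place:

```python
import numpy as np

rng = np.random.default_rng(3)
n_fft, n_avg = 256, 50  # FFT length per trace, fixed number of averages

def averaged_spectrum(x, hop):
    """Average n_avg magnitude-squared FFT traces taken at the given hop."""
    traces = [np.abs(np.fft.rfft(x[i * hop : i * hop + n_fft])) ** 2
              for i in range(n_avg)]
    return np.mean(traces, axis=0)

x = rng.standard_normal(n_avg * n_fft)  # enough data for 50 disjoint traces

indep = averaged_spectrum(x, hop=n_fft)       # disjoint: independent traces
overl = averaged_spectrum(x, hop=n_fft // 8)  # 87.5% overlap: correlated traces

# The true spectrum is flat, so any ripple left after averaging is residual
# variance; relative std across bins estimates it (ideal: 1/sqrt(50) ~ 0.14).
for name, s in [("independent", indep), ("overlapped", overl)]:
    bins = s[1:-1]  # skip DC and Nyquist bins, which have different statistics
    print(f"{name:12s}  relative std: {bins.std() / bins.mean():.3f}")
```

Both averaged traces are valid estimates of the flat spectrum, but the overlapped one retains noticeably more ripple, just as the trace comparison described above shows.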

You probably won’t face this issue often, but if you find yourself frustrated with averaging that doesn’t smooth your data as much as expected, you might question whether the independent-samples condition is being met. Fortunately, measurement applications are written with this in mind, and some allow you to increase average counts if you need even more stable results.

If issues such as this are important to you, or if you frequently contend with noise and noise-like signals, I suggest the current version of the classic application note Spectrum and Signal Analyzer Measurements and Noise. The explanations and measurement techniques in the note are some of the most practical and effective you’ll find anywhere.

Finally, it’s time for your new three-letter acronym of the day: NID. It stands for normally and independently distributed data. It applies here, and it’s a familiar concept in statistics, but apparently an uncommon term among RF engineers.
