I've always been a bit concerned about the use of averaging on my VNA, as I never really know how many sweeps to perform, since the averaging is based on an IIR filter.

This thread, "Enable average function and set average count but N5230A only sweep once", prompted me to look at it in a bit more detail. So I wrote a simple C program to compute the output of

New_Data_Displayed = (New_Data / N) + Old_displayed_data * (N - 1) / N

with Old_displayed_data initially set to 0, and New_Data=1. This is the step response of course.

I also tried it with an initial value of -1 V, on the assumption that it was a worst case. I only used real data, so I don't know how this would work with complex numbers, but I suspect the results are not going to be too dissimilar.

The attached graph shows the results of this, using 100 averages.

Based on an initial input of 0, with 100 averages:

* After 100 sweeps, the output is 0.633968 V

* It reaches 0.90 V after 230 sweeps

* It reaches 0.95 V after 299 sweeps

* It reaches 0.99 V after 459 sweeps.

Based on an initial input of -1, with 100 averages:

* At 68 sweeps or fewer, the output is still negative, even though the input is +1 V.

* At 69 sweeps the output is +0.000326 V

* After 100 sweeps the output is 0.267935 V

* It reaches 0.90 V after 299 sweeps

* It reaches 0.95 V after 368 sweeps.

* It reaches 0.99 V after 528 sweeps.

Reducing the IF bandwidth by a factor of 100 for S-parameter measurements will increase the sweep time by a factor of approximately 100. But if one sets the averaging to 100, the filter takes considerably more than 100 sweeps to settle. Based on a worst case of the old data being -1 and the new data being +1, it takes 368 sweeps to settle within 95% of the final value and 528 sweeps to settle within 99% of the final value.

My 8720D is at Keysight at the minute being calibrated, so I don't have a VNA here with me, but I think there is an option to restart the averaging, which I assume seeds the average with the current data point, and so would dramatically reduce the time for it to settle.

I have attached the graph and the 40-line C program I used to generate the data.

I'm sure a mathematician could do a better analysis of this than me, but I think this quick analysis, which took me well under an hour, shows one has to be a bit careful with the averaging function: expecting an accurate result after 100 averages when the averaging factor is 100 is not wise.

Any comments?

Dave

Edited by: drkirkby on Sep 23, 2014 9:57 AM
