# Deviation from Linear Phase procedure incorrectly characterizes the DUT!!!

Question asked by TheGris on Jan 17, 2013
Latest reply on Sep 23, 2014 by Dr_joel
I use GPIB to control 8753 and 8720 family network analyzers to test microwave filters. I don't agree with the logic behind the procedures published by Agilent/HP (and other companies!) to measure phase distortion as 'deviation from linear phase'.

I don't use the analyzer to determine the deviation from linear phase, nor do I use the analyzer to simulate electrical length changes. I use the analyzer to capture the wrapped phase data, and then perform all processing to determine deviation from linear phase in software on a computer.
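For what it's worth, the software-side unwrapping step can be sketched in Python (NumPy assumed; the phase values below are made up for illustration):

```python
import numpy as np

# Wrapped phase in degrees, as captured from the analyzer over GPIB.
# The sequence crosses the +/-180 degree boundary.
wrapped = np.array([170.0, -175.0, -160.0, -145.0])

# np.unwrap works in radians, so convert, unwrap, convert back.
unwrapped = np.degrees(np.unwrap(np.radians(wrapped)))
print(unwrapped)  # approximately [170, 185, 200, 215] degrees
```

All further processing (line fitting, deviation, peak-to-peak) then operates on `unwrapped`.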

My understanding of the 'deviation from linear phase' methodology is that you add/subtract electrical length by adjusting the electrical delay to obtain a minimum peak-to-peak value. This is supposed to remove the linear phase shift through the DUT leaving only the deviation from linear phase.

My understanding also is that if I look at the 'ideal' phase (meaning the 'ideal linear phase'), the slope of the ideal phase line is due to phase rotation from electrical length; it in essence simulates placing a lossless transmission line of adjustable length in series with the DUT. It's also my understanding that the effect of changing the electrical length is to rotate the actual phase dataset (changing the slope of the line fit through the data points and translating all points with it) without changing the deviation from linear phase, since the linear phase line rotates along with the actual measured phase data. It's as if you simply drew the graph, then set it on a record player: the relationship between the data points, both the unwrapped phase and the linear phase, is unchanged; they are simply rotated *as a group*.
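That rotation claim is easy to check numerically. A small Python sketch (synthetic data, NumPy assumed): adding a linear phase term (extra electrical length) rotates the least-squares line right along with the data, so the deviation from that line is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
freq = np.linspace(1.0, 2.0, 101)                 # GHz (arbitrary band)
# Any phase dataset: 5 ns of delay plus random "distortion", in degrees.
phase = -360.0 * freq * 5.0 + rng.normal(scale=0.5, size=freq.size)

def deviation(ph):
    """Deviation from the least-squares linear phase fit."""
    slope, icpt = np.polyfit(freq, ph, 1)
    return ph - (slope * freq + icpt)

# "Rotate" the dataset: one extra nanosecond of delay, in degrees.
extra = -360.0 * freq * 1.0
print(np.allclose(deviation(phase), deviation(phase + extra)))  # True
```

Because the least-squares fit is a linear operation on the data, adding a straight line to the data adds exactly that line to the fit, and the residual (the deviation) never moves.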

Supposedly the reason for the rotation is to cancel out the electrical length of your DUT, which should remove the linear phase shift from the data; rotation to a different angle represents a simple change of electrical length. So here's what I don't understand: if you use a least-squares regression to find the linear phase by fitting a line to your observed unwrapped phase data, why wouldn't you just rotate the dataset (change the electrical length) until the slope of the linear phase is zero, then translate the dataset so the linear phase line lies on the X-axis (where Y=0)? This should result in a perfect graph of the deviation from linear phase with no need to subtract the linear phase from the unwrapped phase, since it's 0 and therefore already subtracted. In addition, the peak-to-peak measurement is now easy, since the deviation data, being parallel to the X-axis, can be read directly off the Y-axis.
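The procedure I'm describing can be sketched in Python (NumPy assumed; the DUT below is invented for illustration — 5 ns of electrical delay plus a sinusoidal ripple standing in for phase distortion):

```python
import numpy as np

# Hypothetical unwrapped phase data in degrees vs. frequency in GHz.
freq = np.linspace(1.0, 2.0, 201)
phase = -360.0 * freq * 5.0 + 2.0 * np.sin(2 * np.pi * (freq - 1.0) / 0.3)

# Least-squares line through the unwrapped phase = the 'linear phase'.
slope, intercept = np.polyfit(freq, phase, 1)

# Rotate until the fitted line has slope 0 and translate it onto the
# X-axis -- equivalently, subtract the fit. What remains is the deviation.
deviation = phase - (slope * freq + intercept)

pp = deviation.max() - deviation.min()        # peak-to-peak, in degrees
print(f"p-p deviation from the fitted linear phase: {pp:.2f} deg")
```

With the fit subtracted, the deviation trace sits flat around Y=0 and the peak-to-peak value reads straight off the Y-axis, exactly as described above.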

_Here's the rub: doing this doesn't result in the minimum peak-to-peak value of the deviation from linear phase - experience shows that usually adding or subtracting additional electrical length is required to minimize the peak-to-peak deviation!!!_
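This observation can be reproduced numerically. In the sketch below (Python, NumPy assumed, made-up distortion shape with some band-edge droop), subtracting the least-squares line gives one peak-to-peak value, and then sweeping additional slope — i.e., extra electrical delay — finds a smaller one:

```python
import numpy as np

freq = np.linspace(1.0, 2.0, 201)             # GHz
# Invented distortion: a ripple plus a droop concentrated at the band edge.
distortion = (2.0 * np.sin(2 * np.pi * (freq - 1.0) / 0.3)
              - 5.0 * (freq - 1.0) ** 4)
phase = -360.0 * freq * 5.0 + distortion      # 5 ns delay, degrees

# Deviation from the least-squares linear phase fit.
slope, intercept = np.polyfit(freq, phase, 1)
residual = phase - (slope * freq + intercept)
pp_ls = np.ptp(residual)                      # p-p after LS fit only

# Sweep additional slope (extra electrical delay) around the LS fit.
# p-p is insensitive to the intercept, so sweeping slope is sufficient.
extras = np.linspace(-3.0, 3.0, 6001)         # deg/GHz
pp_scan = [np.ptp(residual - e * freq) for e in extras]
pp_min = min(pp_scan)

print(f"p-p with LS fit only:       {pp_ls:.3f} deg")
print(f"p-p with extra delay tuned: {pp_min:.3f} deg")
```

Unless the distortion happens to be symmetric, `pp_min` comes out below `pp_ls` — the least-squares line minimizes the sum of squared residuals, not their peak-to-peak spread, so some additional rotation always buys a smaller p-p number.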

And that bothers me. Here's why: it seems that +when the ideal linear phase line has a slope of 0, you are looking at *exactly* the deviation from linear phase for the DUT with its electrical length cancelled out+. So any additional addition or subtraction of electrical length amounts to searching for the *optimal* electrical length that minimizes phase distortion. But... if what you are trying to do is characterize the phase distortion of the DUT, that isn't the same thing; in fact you are setting the electrical length to what the DUT's electrical length SHOULD be to achieve that minimized distortion, not measuring the phase distortion at the actual electrical length of the DUT! So instead of characterizing the DUT, you are characterizing the DUT as if its electrical length were ideal...

In other words, to measure the phase distortion due to the DUT, you should NOT rotate the dataset (add/subtract electrical length) until reaching a minimum value, but rather you should rotate ONLY until the slope of the ideal (linear) phase line is 0. Then (and ONLY then) are you looking at the distortion caused by the DUT. Any additional electrical length changes introduce error in the measurement!!!

Am I missing something here? Everywhere I look it says to add/subtract delay to +minimize+ peak-to-peak deviation from linear phase; that seems WRONG!

By the way, I got a lot of this information from Chapter 2 of the 8720D/E/ES/ET user's guide, where it talks about phase distortion and deviation from linear phase measurements.