
Phase Angle Accuracy with Imperfect Cal Standards

Question asked by SOLT_guy on Aug 9, 2011
Latest reply on Aug 13, 2011 by SOLT_guy
Dear Sir:

      I own a HP8753C NA.

     When one defines an "in fixture" load standard for a calibration kit, the cal kit input definitions, offset delay and offset loss, will affect the measured phase angle that we see on a Smith chart.   My initial calibration for this measurement was made with a different load (not the load which is "in fixture"), taken from an HP 7 mm cal kit.
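     For reference, my rough working assumption (my own sketch, not anything taken from the 8753C documentation) is that the offset delay entry rotates a reflection standard's phase by about twice the one-way electrical delay times frequency, since the reflected signal traverses the offset line twice; that is where the phase sensitivity to the load definition comes from. A minimal sketch of that relationship, with an example delay value that is purely hypothetical:

# Assumption: a reflection standard traverses the offset line twice, so the
# phase rotation seen on the Smith chart is roughly 2 * 360 deg * f * offset_delay.
def offset_delay_phase_deg(freq_hz, offset_delay_s):
    """Approximate two-way phase rotation (degrees) from a one-way offset delay."""
    return 2 * 360.0 * freq_hz * offset_delay_s

# Example: a hypothetical 50 ps offset delay at 1 GHz rotates the reflection
# by about 36 degrees.
print(offset_delay_phase_deg(1e9, 50e-12))  # -> 36.0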

     For "in fixture" load characterization purposes, when we attempt to observe the load standard, (while the load is inserted into a fixture at 1 GHz  in TDR gated mode but there is no TDR stimulus applied to the load), we must then zoom into the Smith chart plot in order to see the in fixture load (the Smith chart plot later turns into a polar plot because we zoomed in).    When I "zoomed in" and  compared the TDR generated load phase angle (which I will refer to herein as the "target" phase angle) with the residual load phase angle (the resulting phase angle after the offset loss and offset delay load inputs were defined in a user cal kit and a calibration with these inputs was performed) I noticed that there could be a great difference between the "target" phase angle and the residual phase angle of the same load.

I want to know approximately how much residual phase angle error will be generated when I measure the S11 or S12 phase response of a passive device, given that imperfect cal kit load definitions were input.
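     The way I have been trying to bound this myself (purely my own assumption about the error model, not something from Agilent documentation) is to treat the error in the load standard's definition as a residual directivity-like error vector that adds to the corrected S11 of the DUT; the worst-case phase disturbance is then the arcsine of the error magnitude divided by the DUT's reflection magnitude. A rough sketch of that estimate:

import math

# Assumed error model (my own sketch, not the analyzer's actual correction math):
# an error vector of magnitude `residual`, left over from the imperfect load
# definition, adds to the corrected reflection of a DUT of magnitude `g_dut`.
def worst_case_phase_error_deg(residual, g_dut):
    """Worst-case phase error in degrees from an additive residual error vector."""
    if residual >= g_dut:
        return 180.0  # the error vector can dominate the DUT vector entirely
    return math.degrees(math.asin(residual / g_dut))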

       Let's say there exists a phase angle difference of 10 degrees between the "residual" phase angle (the phase angle measured after calibration) and the "target" load phase angle, both observed at a scale of 100 mU (milliunits), with the magnitudes of the two load traces equal (the only difference being phase angle), and let's say you decided not to correct this phase angle difference and proceeded to take measurements.    If one then made phase angle measurements with the NA Smith chart scaled to 1.0 units, approximately how much phase error would you see at 1 GHz when measuring a passive device?
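     Using that same sketch with purely hypothetical numbers (I am assuming both load traces have a magnitude of about 0.05, small enough to need the 100 mU scale, and a DUT with |S11| of about 0.2; neither value comes from my actual measurements):

import math

# Two load traces of equal (assumed) magnitude 0.05 that differ by 10 degrees:
g_load = 0.05
delta = 2 * g_load * math.sin(math.radians(10 / 2))  # vector difference, ~0.0087

# Worst-case phase error on a DUT with an assumed |S11| of 0.2:
g_dut = 0.2
phase_err_deg = math.degrees(math.asin(delta / g_dut))  # ~2.5 degrees
print(delta, phase_err_deg)

     Under this assumption the phase error scales inversely with the DUT's reflection magnitude, so a well-matched DUT would show a much larger angular error than a strongly reflecting one.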

     How much of an angular error would you see at 2 GHz when measuring the same passive device, observed at a scale of 1.0 units?

     In this example, all other calibration standards were perfectly characterized for calibration.   I just want to get an idea of how much phase angle error would result from an imperfectly characterized load phase angle.

      If you require snapshots of the load measurements, I can provide them for you, but the magnitudes will not be equal and the phase angle difference will not be 10 degrees (which is simpler to calculate with).
