I am measuring the output of a D/A converter (AD669AR) using an Agilent DSO8104A. The D/A was calibrated for offset and gain using DC values, and the calibration was confirmed correct using a DC meter and the scope. I am examining the peak of the waves being created by the D/A; I do this by selecting 20mV/div and offsetting the channel by 10 volts. This allows me to see the uppermost part of the wave. When the D/A is fed a 60Hz square wave or triangle wave, the output as measured on the scope is correct: it peaks at +10.00 volts. The same is true if I examine the bottom part of the signal. If I feed the D/A a sine wave of the same magnitude, the output is down by 40mV. Increasing the frequency drops the output more, and decreasing the frequency makes it rise. As I approach DC, it comes up to +10.00 volts.
My first thought was the D/A converter, but I measured the digital inputs and they are correct for all the types of waves. It is almost as if there is a low-pass filter on the output of the D/A converter, even though I am measuring the pin, which goes directly to a test point. There are no passives or anything else between the pin and the test point. I've tried other boards of the same type, as well as boards of a different type that use the same D/A converter.
Am I using the scope incorrectly? Is it a valid measurement to offset the input by 10V and have the scope set to a small V/div? We also have a LeCroy DSO, and it also shows an incorrect value, but this time it's about 100mV above what it should be (for the same signal the Agilent shows 40mV below). Again, bringing the wave down from 60Hz to DC makes the two scopes converge on the correct value.
I'm at wit's end. Some people think it's the scope, some the D/A, some the measuring method.
Ron
Think of the scope input as a wide-band amplifier from DC to the bandwidth of the scope. All amplifiers have a dynamic range over which you get linear performance. The dynamic range for your scope is ±8 divisions from center screen in the 1Mohm mode and ±12 divisions from center screen in the 50 ohm mode. At 20mV per division, that is 20mV/div times eight divisions in the 1Mohm mode, for a dynamic range of 160mV above and below center screen. By now you have realized that the scope displays four divisions above and below center screen, so the dynamic range of the scope is greater than the scope display. For faithful waveform reconstruction, you want to keep the applied signal within the dynamic range for the volts/div setting you are using.
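The dynamic-range arithmetic above can be sketched in a few lines of Python; the function name and the division counts (8 for 1 Mohm, 12 for 50 ohm) are taken from the numbers quoted above, nothing scope-firmware-specific:

```python
def dynamic_range(volts_per_div, divisions=8):
    """Linear (unsaturated) range above and below center screen, in volts.

    divisions=8 for the 1 Mohm input path, 12 for the 50 ohm path,
    per the limits quoted above.
    """
    return volts_per_div * divisions

# At 20 mV/div in 1 Mohm mode: 0.020 * 8 = 0.16 V above and below center.
range_1meg = dynamic_range(0.020)
# Same setting in 50 ohm mode: 0.020 * 12 = 0.24 V.
range_50ohm = dynamic_range(0.020, divisions=12)
```

So a signal (including any offset the scope applies) that strays more than 160 mV from center screen at 20 mV/div is outside the linear range in 1 Mohm mode.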
Offset is a DC signal applied to the input of the vertical amplifier. It is used to offset any DC voltage that may be part of your input signal. If you have a 60mVp-p signal with a 10V DC offset, then you would adjust the scope offset to null out or cancel out that DC portion of your signal. With the DC portion of your signal removed, you can use the 20mV/div setting on the scope to view a 60mV p-p signal.
The problem you are having is a common usage problem for many customers. You want to use 20mV/div to get better resolution on your signal. An eight-bit scope has 256 Q levels spread across the scope's eight displayed divisions. The scope digitizer (which is after the vertical amplifier) only digitizes the signal displayed on the eight vertical divisions. So 20mV/div times eight divisions, divided by 256, is about 625uV per Q level. At 20mV/div, you can display 160mVp-p on the screen. However, your actual signal is much greater than 160mVp-p. So you set the scope to 20mV/div and set the offset to 10V so that you are basically zooming in on a small portion of your signal, trying to get better resolution on your measurement. Because your signal does not have a DC component to cancel out, you are effectively adding a 10V DC component to your signal by using the scope offset. That 10V DC component is outside the dynamic range of the scope at 20mV/div. You are effectively saturating the amplifier, which causes a non-linear representation of the signal on the screen.
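As a sanity check on the Q-level arithmetic above, here is the same calculation as a small Python sketch (the function name is mine, just for illustration):

```python
def q_level_size(volts_per_div, bits=8, displayed_divs=8):
    """Voltage per quantization level for a digitizer that spreads
    2**bits levels across the displayed divisions."""
    return volts_per_div * displayed_divs / (2 ** bits)

# 20 mV/div: 0.020 * 8 / 256 = 0.000625 V, i.e. 625 uV per Q level.
size_at_20mv = q_level_size(0.020)
```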
The amplifier does eventually come out of saturation. The recovery time is based on the doping of the particular batch of silicon, how hard the silicon is saturated, and how long the silicon is saturated. There are too many variables to specify an accurate recovery time. The recommendation is not to saturate the vertical amplifier: use less offset from the scope so that you are operating the scope amplifier in the linear portion of its operating curve. In other words, make sure you stay within the dynamic range of the scope.
If you want to use 10V of scope offset when your signal has no DC component to cancel out, you should probably be at 2V/div. But at 2V/div, your resolution is 2V times 8 divisions divided by 256, which is 62.5mV per Q level. My guess is that is not the resolution you really wanted.
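Putting that trade-off in code: a rough sketch of finding the smallest V/div whose ±8-division linear range still covers a 10 V excursion, and the resolution you pay for it. The 1-2-5 settings list is an assumption, not taken from any particular scope's manual:

```python
# Assumed 1-2-5 sequence of V/div settings; adjust for a real scope.
SETTINGS = [0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0]

def smallest_safe_setting(peak_volts, divisions=8):
    """Smallest V/div whose +/-(divisions * V/div) linear range
    contains the peak excursion from center screen."""
    return next(s for s in SETTINGS if s * divisions >= peak_volts)

setting = smallest_safe_setting(10.0)  # 2 V/div, since 2 * 8 = 16 V >= 10 V
resolution = setting * 8 / 256         # 0.0625 V, i.e. 62.5 mV per Q level
```

This reproduces the answer above: the price of keeping a 10 V swing in the linear range of an 8-bit digitizer is roughly 62.5 mV per level.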
I hope this explains what you are seeing with the scope.
I've never done a measurement like this before, and just assumed that what I was doing was valid. Now I've learned something that will stick.
Thank you
Ron