Hello,
I am confused about the term 'memory depth' as it is used for oscilloscopes. Sometimes it seems to refer to the number of samples that can be stored, and sometimes it seems to mean the memory size in bytes. I know the two are the same if one sample takes one byte of memory, but I need a clear definition (if one exists).
Is there a glossary or something similar that defines the term?
Regards,
Gerald
I only have a vague idea of how the term 'memory depth' is used by other oscilloscope makers, but I do know how it is used for Agilent/Keysight oscilloscopes: we mean the number of captured data samples, per channel. On all of the recent Infiniium models that I'm aware of, 16 bits (2 bytes) are stored for each sample on each channel.
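To make the distinction concrete, here is a minimal sketch of the arithmetic relating memory depth (in samples per channel) to raw acquisition memory size (in bytes), assuming the 2 bytes per sample per channel figure above. The function name and example numbers are illustrative, not taken from any scope's documentation.

```python
# Illustrative arithmetic only: memory depth counts samples per channel,
# while raw acquisition memory size is measured in bytes.
BYTES_PER_SAMPLE = 2  # 16 bits stored per sample, as described above

def memory_size_bytes(memory_depth_samples: int, channels: int) -> int:
    """Bytes of acquisition memory used for a given per-channel memory depth."""
    return memory_depth_samples * channels * BYTES_PER_SAMPLE

# Example: a 10 Mpts/channel memory depth on 4 channels
print(memory_size_bytes(10_000_000, 4))  # 80,000,000 bytes
```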
You may ask, "Why 16 bits?" A/D converters are inherently slightly non-linear. During the calibration process, the 8-bit values are linearized into 16-bit values to improve the accuracy of the captured data. Further frequency-dependent processing may also be applied, so there may be many more than 256 quantization levels in the captured data.
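As a rough illustration of how an 8-bit ADC code might be mapped to a 16-bit linearized value, here is a hypothetical lookup-table sketch. The actual calibration math inside the scope is not described here, so the correction values below are invented purely to show the shape of the idea.

```python
# Hypothetical sketch: linearizing 8-bit ADC codes into 16-bit values.
# A real scope derives its correction table during calibration; these
# per-code offsets are made up for illustration only.
import random

random.seed(0)
correction = [random.randint(-50, 50) for _ in range(256)]  # invented corrections

def linearize(adc_code: int) -> int:
    """Map a raw 8-bit code (0..255) to a corrected 16-bit value (0..65535)."""
    base = adc_code * 257                 # scale 0..255 exactly onto 0..65535
    value = base + correction[adc_code]   # apply the code-dependent correction
    return max(0, min(65535, value))      # clamp to the 16-bit range

print(linearize(128))
```

Because each code gets its own correction, the corrected data can take on more than 256 distinct levels, which is the point of storing 16 bits per sample.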
If you have interpolation turned on, additional samples are created, but those extra samples are not counted toward memory depth. At least one manufacturer uses sample memory to store the interpolated samples, so when interpolation is turned on, the amount of 'real' data that can be captured is reduced. If you save trace data on one of our scopes with interpolation turned on, the saved files may be much larger than you expect, because the interpolated points are saved along with the real points.
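A quick back-of-the-envelope sketch of why a saved trace can be much larger than the memory depth suggests when interpolated points are written out along with the real samples; the interpolation factor and byte counts below are assumptions for illustration, not figures from any manual.

```python
# Illustrative only: saved-file size when interpolated points are written
# out along with the real samples. Numbers are assumptions, not specs.
BYTES_PER_SAVED_POINT = 2

def saved_trace_bytes(real_samples: int, interpolation_factor: int) -> int:
    """Approximate file size when each real sample is expanded by interpolation."""
    total_points = real_samples * interpolation_factor
    return total_points * BYTES_PER_SAVED_POINT

print(saved_trace_bytes(1_000_000, 1))   # 2,000,000 bytes, interpolation off
print(saved_trace_bytes(1_000_000, 16))  # 32,000,000 bytes, 16x interpolation
```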
Does this answer your question?
Al