Hello Community.
When measuring rise time I have noticed that the reported values seem to be somewhat proportional to the time/div setting. As the time/div setting gets smaller, the rise time follows suit.
When measuring rise time, what is the best rule of thumb for the time/div value? It looks as though, for the most part, the rise time stays relatively unchanged until the maximum time/div setting is applied. I'm just looking for the best way to measure and report an accurate value. I have attached a few screen caps to supplement this narrative for clarification.
Any assistance is as always appreciated.
Regards,
Chris
The definition of rise time is:
“time taken by a signal to change from a specified low value to a specified high value”
Common points for the low and high values are the 10%/90% or 20%/80% points of the signal. Your scope is measuring between the 10%/90% points (you may be able to change that in a menu somewhere). To find those points it must know the maximum and minimum values of the signal. So the scope first analyzes the signal to find the maximum and minimum values, calculates the 10%/90% points, and then measures the time between them.
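To make that concrete, here is a minimal Python sketch of that procedure. The rise_time function, the sample waveform, and the threshold defaults are my own illustration of the idea, not your scope's actual firmware:

```python
import numpy as np

def rise_time(t, v, low_frac=0.10, high_frac=0.90):
    """Scope-style automatic rise time: find the signal's min/max,
    derive the 10%/90% levels from them, then time the first rising
    edge that passes through both levels."""
    v_min, v_max = v.min(), v.max()      # step 1: find the extremes
    amp = v_max - v_min
    v_low = v_min + low_frac * amp       # 10% level
    v_high = v_min + high_frac * amp     # 90% level

    # step 2: first upward crossing of the 10% level...
    low_cross = np.where((v[:-1] < v_low) & (v[1:] >= v_low))[0]
    i_low = low_cross[0]
    # ...then the next upward crossing of the 90% level after it
    high_cross = np.where((v[:-1] < v_high) & (v[1:] >= v_high))[0]
    i_high = high_cross[high_cross > i_low][0]

    return t[i_high] - t[i_low]

# One cycle of a 1 kHz sine, starting at its minimum so a full
# rising edge (minimum to maximum) is in the capture
t = np.linspace(0, 1e-3, 10_000)
v = -np.cos(2 * np.pi * 1e3 * t)
print(rise_time(t, v))   # ~295 us (10%-90% of a sine is ~0.295 of its period)
```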
What is happening in your plots is that the maximum and minimum values the scope detects are changing. Thus the 10%/90% points change, resulting in a changed rise time.
Take sine1 first. It looks like a slightly distorted sine wave. The maximum and minimum values are clearly visible on screen. The amplitude is about 7 divisions. 80% (90% minus 10%) of 7 divisions is 5.6 divisions, and the y markers are about 5.6 divisions apart. The measurement algorithm has worked as planned.
In sine2, you zoomed in and the top and bottom of the signal are only just visible on screen, the top more so than the bottom. The y markers look to be in the same positions, but the rise time is 2% lower. Based on that change, I would say the measurement algorithm has picked slightly different values for the maximum and minimum. Because the bottom is not clearly visible, the algorithm is starting to fail you.
In sine3, the top and bottom of the signal are not visible at all, so the algorithm has made the maximum/minimum decision based on what it can see: the highest and lowest values of the signal at the edges of the screen. Note that the difference between those edge values is about 4.4 divisions; 80% of that is about 3.5 divisions, and the markers are about 3.5 divisions apart. That is why the rise time measurement changed so much.
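You can reproduce the sine3 behavior with the sketch above by feeding it only the samples inside a narrower, hypothetical "on screen" time window (the window limits here are arbitrary), so the true peaks fall off the ends:

```python
# Same t and v as above. Simulate a small time/div setting by keeping
# only the samples that would be on screen; the min/max the algorithm
# finds are then just the values at the screen edges.
on_screen = (t > 0.05e-3) & (t < 0.35e-3)
print(rise_time(t, v))                        # full signal: ~295 us
print(rise_time(t[on_screen], v[on_screen]))  # clipped view: ~218 us
```

The clipped measurement comes out noticeably lower for exactly the reason above: the 10%/90% levels are computed from the wrong extremes.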
The moral of the story is that the minimum and maximum of the signal need to be on screen for the automatic rise and fall time measurements to work accurately.