
Oscilloscopes Blog

10 Posts authored by: Daniel_Bogdanoff

The diode is a key component to master if you want to grow your electronics prowess, so it’s crucial to have a solid understanding of how diodes behave under varying loads. Today we’re going to look at some diode fundamentals, then take a look at a video covering 4 ½ practical uses for a diode.

 

Download the "6 Essentials for Getting the Most Out of Your Oscilloscope" eBook.

 

What are Diodes?

Diodes are relatively simple but more complicated than many basic passive components you probably already know. Diodes are non-linear components. Resistors, capacitors, and inductors are linear devices, meaning they can be characterized using linear differential equations. As much as I’d like to geek out on the math, I’ll spare you. That’s what Wikipedia is for. So, what are diodes?

 

Diodes are nonlinear devices. They don’t follow Ohm’s law, and for circuit analysis, you can’t replace them with a Thevenin equivalent.

 

Diodes are passive devices, which means they don’t need an external power source to function.

 

Diodes are two-terminal devices. There’s a positive terminal, known as the anode, and a negative terminal, known as the cathode (Figure 1).

 


Figure 1. A diode’s circuit symbol. The anode is on the left, and the cathode is on the right.

 

Diodes may be simple, but they are extremely useful because of their V-I curve, shown in Figure 2. The X-axis is voltage, and the Y-axis tells you how much current can flow through the diode at that voltage level.

 


Figure 2. A diode’s IV curve.

 

Let’s take a closer look at the V-I curve. What does it mean? More importantly, how can you use it to your advantage?

 

Forward-Biased Diodes

In Figure 2, there are a few things worth noticing. Let’s start with forward-biased diodes. At moderate forward voltages, a diode basically acts as a short. However, there is a small voltage drop, usually called the “forward voltage drop.” It’s also known as the “cut-in voltage” or simply “on voltage.”

 

You can see the forward voltage drop on the V-I curve in Figure 2 around 0.6-0.7 V, where the current spikes. A 0.6-0.7 V drop is standard for silicon diodes, but for other diode materials, the forward voltage drop will vary.
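If you’re curious where that knee comes from, the ideal diode (Shockley) equation reproduces it nicely. Here’s a minimal sketch – the saturation current and ideality factor below are illustrative values for a small silicon diode, not numbers from any particular datasheet:

```python
import math

def diode_current(v_d, i_s=1e-12, n=1.0, v_t=0.02585):
    """Ideal (Shockley) diode equation.

    v_d : diode voltage in volts
    i_s : reverse saturation current in amps (illustrative)
    n   : ideality factor (roughly 1-2 for real diodes)
    v_t : thermal voltage kT/q, about 25.85 mV near room temperature
    """
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

# Sweep the forward voltage and watch the current "knee" appear near 0.6-0.7 V
for v in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"{v:.1f} V -> {diode_current(v) * 1e3:.4f} mA")
```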

 

You can measure the forward voltage for a specific diode using a multimeter with a diode testing capability. You can see that this silicon diode has a forward voltage of roughly 0.62 V (Figure 3).

 

Using a Keysight U1282A multimeter to measure the forward voltage drop of a diode.

Figure 3. Using a Keysight U1282A multimeter to measure the forward voltage drop of a diode.

 

But what you need to remember is that, when exposed to a moderate voltage – say 5 V – a diode will pass through 5 V minus the forward voltage drop. So, about 4.3 V for a standard silicon diode. There are some methods for compensating for this drop, but that’s beyond the scope of this article.

 

Reverse-Biased Diodes

Let’s now move to the left side of the Y-axis in Figure 2. When a diode is reverse biased (exposed to a negative voltage), only a tiny reverse leakage current flows – on the order of nanoamps. You can generally approximate it as 0 A in most situations. That is, until you get to the other big swing on the V-I curve, known as the breakdown voltage.

 

If your diode is exposed to a reverse voltage beyond its rating, you blew it. Often literally. Standard diodes can’t hold up to that level of reverse voltage, and the device physically breaks down, allowing a large reverse current to flow.

 

Long story short, you can essentially think of a diode as a one-way conductor with a voltage drop. Enough preface, let’s look at a few different ways you can use diodes for your circuits.

 

How to Use a Diode in Your Designs

Now that you know how a diode works, how can you use one in your designs? Check out this video for 4 ½ practical uses for a diode:

 

 

Diodes are Great!

Diodes are extremely useful components, and if you’re working with electronics, you need to build a solid knowledge of how diodes work. The video covered a few ways to use diodes, but that’s just the beginning! How are you using diodes in your designs? Let me know in the comments here or on the Keysight Labs YouTube channel.

 

The internet got all kinds of angry at Apple recently. It turns out, Apple slows down the processor’s clock frequency on older iPhones. The internet took this as an opportunity to jump on “Evil Apple” for what it perceived as a marketing ploy to get people to upgrade.

But, the internet was wrong.

 

As it turns out, this was simply a case of good engineering by Apple’s engineers.

To understand why they would slow down the processor speed of old phones, you have to understand how lithium ion batteries work and have a basic understanding of processors.

Let’s start by taking a look at lithium ion battery technology.

Lithium Ion Battery Technology

Mobile devices use lithium ion batteries primarily because of their incredible energy density. They provide a lot of power and don’t take up much space.

We all want a battery that lasts a week. That is, until we have to carry it. Engineers must find a sweet spot somewhere between device usability and eternal battery life (known as “battery heaven”).

To understand Apple’s problem, you need to understand how lithium ion batteries (LIB) work. During discharge, the anode (graphite) passes lithium ions to the cathode (lithium cobalt oxide) through a separator. The ions, using an electrolyte as a conductor, can pass through this barrier. Electrons cannot, so they flow through the external circuit instead. Note the electrons in the half-reactions for the cathode and anode:

 

Figure 1.  Cathode half-reaction


 


Figure 2. Anode half-reaction
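For reference, the textbook discharge half-reactions for a lithium cobalt oxide / graphite cell look like this (standard forms; a particular phone battery’s chemistry may differ in the details):

```latex
% Cathode (LiCoO2 electrode) during discharge:
\mathrm{CoO_2} + \mathrm{Li}^+ + e^- \;\longrightarrow\; \mathrm{LiCoO_2}

% Anode (graphite electrode) during discharge:
\mathrm{LiC_6} \;\longrightarrow\; \mathrm{C_6} + \mathrm{Li}^+ + e^-
```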

 

If there’s a path for the electrons, the chemical reaction will take place. If there’s not, the battery hangs in a balanced state and holds its charge.

But there’s a catch. LIBs age. A test from Battery University showed that LIBs experience a capacity drop of up to 20% after 250 charge cycles.

Not only do LIBs lose capacity, but they also lose the ability to generate high levels of current. The current production capability of an LIB is proportional to how fast its chemical reaction takes place. The faster the reaction, the higher the current.

When choosing a battery for your product, one spec you should consider is the current production capability. Typically, you know your required currents, so this is an easy call. Choose a battery that will give you enough current to power your design and enough extra headroom to be comfortable. But what happens a few years down the road? Your battery’s performance will degrade, but your device will still need the same amount of power.

The environment also takes a toll on LIBs. Like any proper chemical reaction, temperature is a factor. The colder an LIB, the slower the chemical reaction, the lower the peak current. Couple this with an old, degraded battery and you’re in for some trouble.

Eventually, you’re going to run out of extra power-generation capability. This is the problem Apple’s facing. Their older-model iPhones still require the same power that they did on day one, but their batteries aren’t holding up.

Now, Apple is slowing down your phone! Why? It’s actually for your own good. The reasoning comes down to how processors work.

Processors Need Power

Basically, processors are just an intelligently organized collection of transistors. When combined, they make up logic gates that form the core of modern processing (no pun intended). For gates to function properly, they need power. If a gate doesn’t get enough power, it’ll still work; it’ll just operate more slowly.

 

Figure 3. Low supply voltages mean increased propagation delay!


 

Heavy processing tasks require a lot of power, and mobile devices are designed to handle this load. But what happens if a device’s battery is degraded to the point that it can’t provide enough power?

Without sufficient power to the processor, the gates won’t operate as fast. Put simply, their propagation delay increases. Processors are designed around an expected propagation delay: the timing of their operations depends on logic blocks making decisions within an expected number of clock cycles. Low power to a processor slows down those logic blocks.

Then things break.

Then what? If you have a well-designed device, it’ll realize there’s an issue and simply perform an emergency shutdown. If your device isn’t so well designed, the electronics can be damaged.
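To put a rough number on why a sagging supply slows gates down, here’s a sketch using the alpha-power-law delay model (a common hand-analysis approximation; the threshold voltage, exponent, and supply values are purely illustrative, not Apple’s numbers):

```python
def gate_delay(v_dd, v_th=0.35, alpha=1.5, k=1.0):
    """Alpha-power-law estimate of CMOS gate delay (arbitrary units).

    Delay grows as the supply voltage V_dd sags toward the threshold
    voltage V_th. k, v_th, and alpha are illustrative values.
    """
    return k * v_dd / (v_dd - v_th) ** alpha

nominal = gate_delay(1.0)
for v_dd in (1.0, 0.9, 0.8, 0.7):
    print(f"Vdd = {v_dd:.1f} V -> relative gate delay {gate_delay(v_dd) / nominal:.2f}x")
```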

Don’t take it from me; Apple says as much in their statement on this matter:

"Our goal is to deliver the best experience for customers, which includes overall performance and prolonging the life of their devices. Lithium-ion batteries become less capable of supplying peak current demands when in cold conditions, have a low battery charge or as they age over time, which can result in the device unexpectedly shutting down to protect its electronic components.

Last year we released a feature for iPhone 6, iPhone 6s and iPhone SE to smooth out the instantaneous peaks only when needed to prevent the device from unexpectedly shutting down during these conditions. We've now extended that feature to iPhone 7 with iOS 11.2, and plan to add support for other products in the future."

So, what are Apple engineers actually doing to “prolong the life of their devices?” They’re slowing down the CPU frequency if the phone detects an insufficient battery voltage.

An iPhone owner on Twitter documented their iPhone 6’s CPU jumping from 600 MHz up to 1400 MHz after a battery replacement.

 

How iPhones Deal with Old Batteries

Apple is attacking this issue on two fronts.

The first is clear from the Twitter example – a slowdown of the phone’s CPU frequency. The old battery’s lower power capability means a larger propagation delay in the processor. Slowing the CPU frequency ensures that there’s enough buffer time to cover a non-ideal propagation delay.

The second is addressed in Apple’s statement. Apple’s engineers are spreading out processor-heavy clock cycles to minimize the required battery power.

 

What Can You Do About It?

How do you avoid experiencing phone slow down?

First, take care of your battery. Don’t let it get too hot, especially if it’s fully charged. Also, don’t use off-brand chargers. A lower charging voltage has been shown to prolong battery life, but it also means a longer charge time.

Second, expect to get a new battery every 350-500 charge cycles. They are not that expensive compared to a new device, and they can massively improve processing performance.

If you’re designing devices that use LIB, your users’ experience can hinge on battery management.

 

Make sure to have proper air flow around the battery. Also, think about whether or not the battery should be user-serviceable. Plan for battery degradation when selecting a battery and designing battery management circuitry.

 

Figure 4.  Design with battery limitations in mind!


 

It Seems Apple Isn’t Evil

Though some people may wish it were true, this wasn’t an evil corporate marketing scheme. It’s just good engineering. A closer look at processor basics and lithium ion battery technology shows that Apple is simply doing what they have to do to improve their users’ uptime – a noble goal.

I’d love to hear your thoughts on the issue. Let me know on Twitter(@Keysight_Daniel), the Keysight Bench Facebook page, or the Keysight Labs YouTube channel!

Hacking the specs

Everyone loves a bargain. And who doesn’t love a hacked oscilloscope? Well, it would be pretty odd for an oscilloscope company to teach you how to hack your own hardware. Besides, that’s already been done (Fig. 1). So, I’m coining a new term: “spec hack.”

 

Webster’s dictionary will one day define it like this:

 

Spec hack  (/spek’hak/), n. When an engineer uses expert-level knowledge of their test equipment to achieve performance above and beyond typical expectations of said equipment. <Thanks to a spec hack of the flux capacitor, Doc Brown discovered he only needed to go 77 miles per hour to travel through time.>

 

Today’s spec hack will look at the built-in frequency counter on an InfiniiVision 1000 X-Series oscilloscope.

 

You may think that a 100 MHz oscilloscope will only let you see signals up to 100 MHz – but that’s not actually true. Why? Oscilloscope bandwidth isn’t as straightforward as the labeled spec.

 

Figure 1: Just a few days after its release, the EEVBlog YouTube channel posted an oscilloscope hack to double the bandwidth of the 1000 X-Series.

 


 

Oscilloscope Bandwidth brush up

To fully understand how far you can push your frequency counter, you must first understand how your oscilloscope’s bandwidth works. If you are confident that you know all about bandwidth, feel free to skip this next little section. If not, get ready to have your mind blown (or at least maybe learn something new).

 

Bandwidth

Essentially, if you have a 100 MHz oscilloscope bandwidth, it means you can view a sine wave (or frequency components of a non-sine wave) of 100 MHz with ≤ 3 dB of attenuation.

 

But, here’s the main take away – bandwidth is all about signal attenuation, not just about the frequencies you can or can’t see (Fig. 2).
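As a rough illustration of that point, here’s a sketch that treats the scope’s front end as a simple single-pole low-pass filter. Real scope responses are flatter and more complex than this, so treat the numbers as ballpark only:

```python
import math

def attenuation_db(f_signal, f_bandwidth):
    """Attenuation of a single-pole low-pass response with its -3 dB point at f_bandwidth."""
    gain = 1.0 / math.sqrt(1.0 + (f_signal / f_bandwidth) ** 2)
    return 20.0 * math.log10(gain)

# A "100 MHz" scope still responds well above 100 MHz -- just with attenuation
for f in (50e6, 100e6, 200e6, 500e6):
    print(f"{f / 1e6:.0f} MHz -> {attenuation_db(f, 100e6):.1f} dB")
```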

 

Figure 2: A Keysight 6000 X-Series Oscilloscope demonstrates what a 2.5 Gbps waveform looks like at varying bandwidths. Even at 200 MHz, there is still a visible signal.

 

Usually this won’t matter for your day-to-day oscilloscope usage. You may see rounded corners on what should be a crisp square wave, but it probably won’t change how you use your scope. But, when you’re using a built-in frequency counter, this attenuation effect can actually work to your advantage.

 


 

How a frequency counter works

To understand why this effect can be advantageous, you need to understand how a frequency counter works. It’s called a frequency counter because it literally counts. It counts the number of edges found over a specific amount of time, called the gate time. The frequency is calculated like this:

 

Frequency = Number of pulses/Gate time
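Here’s a minimal sketch of that idea in software: count rising-edge crossings of a threshold over a fixed gate time. The sampled-waveform input and threshold are assumptions for illustration – a real counter does this in hardware with a comparator and the scope’s timebase:

```python
import math

def count_frequency(samples, threshold, sample_rate, gate_time):
    """Estimate frequency by counting rising edges during a gate time."""
    n_gate = int(gate_time * sample_rate)        # samples inside the gate
    edges = 0
    for prev, curr in zip(samples[:n_gate - 1], samples[1:n_gate]):
        if prev < threshold <= curr:             # rising-edge crossing
            edges += 1
    return edges / gate_time                     # frequency = pulses / gate time

# Example: a 1 kHz square wave sampled at 1 MSa/s, measured over a 0.1 s gate
fs, f0 = 1_000_000, 1_000
wave = [1.0 if math.sin(2 * math.pi * f0 * n / fs) >= 0 else 0.0 for n in range(fs // 10)]
print(count_frequency(wave, 0.5, fs, 0.1), "Hz")
```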

 

From a circuitry perspective, the counter is simply a comparator (to identify signal edges) and a microcontroller to count the output and display the results (Fig. 3). As it turns out, oscilloscopes already have this infrastructure inside their trigger systems.

 

Figure 3: An old-school HP frequency counter’s nixie tube display

 

Why oscilloscopes make great frequency counters

As luck would have it, the trigger circuitry of a scope often has comparators built into the signal path (think “edge trigger”). With some planning, it’s not difficult for oscilloscope designers to build a frequency counter into the oscilloscope. It may sometimes require extra hardware, but the essentials already exist.

 

The most important specification of a counter is accuracy – the higher the precision of the time base, the more accurate the counter. Oscilloscopes also need a highly accurate time base, so a built-in counter can just use the scope’s clock.

 

Finally, an oscilloscope’s trigger circuitry typically has its own special signal path designed specifically to extract the core signal and block out noise and unwanted frequency components. Unlike the oscilloscope’s acquisition circuitry, the trigger circuitry doesn’t need to recreate the signal with high accuracy, it only needs to do a fantastic job of finding edges. So, a frequency counter can use a scope’s trigger signal path instead of the acquisition signal path and get a higher fidelity edge.

 

The Spec Hack

Let’s put it all together. So far we’ve learned a few things:

 

  1. You can see signals (or signal edges) above the bandwidth of your oscilloscope, but they may be attenuated.
  2. Frequency counters just need to count edges; they don’t care very much about the amplitude of the signal.
  3. Oscilloscopes have a dedicated, specially conditioned signal path dedicated to identifying edges.

 

So, a frequency counter built into an oscilloscope can measure frequencies higher than the bandwidth of the oscilloscope. The question is, how much higher?

 


 

One of the perks of working at Keysight is that that’s an easy question to answer. I pulled out my 100 MHz Keysight 1000 X-Series low-cost oscilloscope and a grossly unnecessary Keysight 67 GHz PSG (because hey, why not?) (Fig. 4) and ran a frequency sweep to see just how fast of a signal the oscilloscope’s frequency counter could measure.

 

Figure 4: a PSG producing a 529 MHz sine wave

 

The results blew me away!

 

The 100 MHz oscilloscope’s counter was able to measure up to 529 MHz! That’s over 5x the bandwidth of the oscilloscope (Fig. 5).

 

Figure 5: A screenshot of the frequency counter measuring 529 MHz. 

 

The lesson? Know your equipment!

It’s always fun to find a little, hidden gem in your test equipment. Sometimes it’s an Easter egg mini game hidden away in a secret menu; sometimes it’s a measurement you had no idea you could make. Having a good understanding of the fundamentals of the equipment you use will not only help you make better, more accurate measurements but also help you avoid any traps that might lead you down the wrong test path.

 


 

Are there any spec hacks that you’ve found? Be sure to let me know in the comments below or on Twitter (@Keysight_Daniel). Happy testing!

Here in Keysight Oscilloscopeland we talk a lot about our ASICs (application specific integrated circuits). But why? Who cares about the architecture of a cheap oscilloscope? All that matters is how well it works, right? We agree. That’s why we design and use oscilloscope-specific chips for all our scopes.

 

How, though? Custom ASICs don’t just materialize out of thin air; they take years of planning and R&D effort. Here’s a high-level look at what it takes to make an ASIC.

 

The making of an ASIC

There are several different steps (and teams) involved in the creation of an ASIC. Before anything is started, there must be a long-term product plan – what do designers want to have 5-10 years down the road? Future products will have new specs or features that will sometimes warrant an ASIC. To make that decision, product planners meet with the ASIC planners.

 


A custom 8 GHz oscilloscope ADC, used in Keysight S-Series Infiniium oscilloscopes.

 

Planning

First up is the planning team. They ask “what chips do we need to have in a few years? – Let’s make that.” And, “what will be available off-the-shelf in a few years? - Let’s not make that.” The planners will also make cost vs. performance trade-off decisions (device speed, transistor size, power consumption, etc.).

 

Also, ASICs generally fall into one of two categories: digital or analog. Analog chips are essentially signal conditioning chips designed to massage signals into a more desirable form. Digital chips are essentially streamlined FPGAs, designed for processing data inputs and providing coherent data outputs. For example, our MegaZoom ASICs take data from an oscilloscope’s front end circuitry + ADC and output waveforms, measurements, and other analytics. 

 


Fig 1: The Keysight-custom ADC and processor chips on the InfiniiVision 1000 X-Series oscilloscopes

 

It's worth noting there's a third type of ASIC – a mixed-signal ASIC like an ADC (Fig. 1).

 

Digital ASICs

 

Now, let's take a closer look at the process of creating digital ASICs, like the MegaZoom processor in the InfiniiVision oscilloscopes.

 

Front-end/RTL design

Once the chip is well defined by the planning, the front-end team gets to work. They are responsible for the “register-transfer level” (RTL) design (and typically spend their days with Verilog or VHDL). Their goal is to create a functioning digital model of the chip, but not a physical model. The RTL team is ultimately responsible for taking the chip design specs and turning them into actual logic and computation models. To do this, they use digital design building blocks and techniques like adders, state machines, pipelining, etc.


 

As the front-end team is working, there’s also a test team that works to check the RTL for errors. The goal is to avoid situations like the infamous Pentium FDIV bug that cost Intel nearly $500 million in 1995.

 

Once the RTL is proven to be functional by the test team, it is synthesized into a netlist. This essentially means that the RTL is converted from logic blocks into individual logic gates. Today, software handles this, but historically it was done by hand and engineers used truth tables and Karnaugh maps. The netlist is then run through a formal verification tool to make sure it implements the functionality described in the RTL before being passed to the back end team. 
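As a toy illustration of what synthesis does, here’s a 1-bit full adder written first as an arithmetic statement (RTL-ish) and then as the individual gates a netlist would contain. A real synthesis tool does this mapping automatically, for millions of gates:

```python
def full_adder_rtl(a, b, cin):
    """Behavioral / RTL-style description: just describe the arithmetic."""
    total = a + b + cin
    return total & 1, (total >> 1) & 1          # (sum, carry-out)

def full_adder_gates(a, b, cin):
    """Gate-level description: the same function as individual logic gates."""
    s1 = a ^ b                                  # XOR gate
    s = s1 ^ cin                                # sum bit
    cout = (a & b) | (s1 & cin)                 # carry-out from AND/OR gates
    return s, cout

# The two descriptions agree for every input combination
assert all(full_adder_rtl(a, b, c) == full_adder_gates(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```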

 

Back-end

Once the logic is verified, it’s time to physically implement the chip. This is typically known as “floorplanning.” Floorplanners use crazy-expensive software (hundreds of thousands of dollars) to place the RTL onto the chip footprint. In reality, the back-end team generally gets early versions of the netlists so they can get a head start on floorplanning.

 

The back end work begins with an overall placement of design blocks on the chip. The general workflow for the back-end team is:

 

  1. Floorplanning
  2. Individual gate placement
  3. Clock tree building
  4. Routing
  5. Optimizing
  6. Static timing analysis

For the chip to function properly, gates involved in the same computational processes should be close together. Also, designers have to make sure that power can be distributed properly throughout the chip.

 

A clock tree is a clock distribution network designed to make sure the clock reaches each of the gates at the same time. If clock edges arrived at different times at different parts of the chip, it could cause painful timing errors. Sometimes, designers also intentionally add some clock skew to keep an edge from arriving too soon.

 


 

Once placement is complete, software then auto-routes the connections between gates. You’re probably familiar with the phrase “never trust the autorouter.” In this case though, that’s really the only option unless you want to manually route hundreds of thousands (or millions) of connections.

 


Fig 2: Routing of an oscilloscope acquisition board

 

Finally, an ongoing concern throughout the whole process is whether or not the design is actually manufacturable. This is known as DRC (design rule checking). Basically, this is a set of rules designers provide to the software to tell it what architectures (aka physical shapes) are and aren’t physically possible. Then, there's a layout vs. schematic check (LVS) to verify that the physical geometries actually implement the desired circuitry.

 

Tape Out

Once the front end and back end teams are done, it’s party time. This stage, known as “tape out,” is when the final design is prepped for production. Massive files are sent to the fab, which creates photomasks for each layer of the ASIC. It’s not unusual for there to be 30-50 masks for a single chip.


 

Manufacturing

Once the masks are created, a number of different techniques are used to manufacture the chip. Usually a combination of photolithography, acid baths, ion implantation, furnace annealing (baking), and metallic sputter deposition is used. Each silicon wafer holds dozens (or hundreds) of identical layouts that will later be cut up into discrete chips.

 

The completed wafer is then tested for manufacturing errors. Depending on the size of the wafer and the complexity of the process, planners can usually predict the failure rate of each chip. Microscopic anomalies, like a speck of dust under a mask, can cause a chip to fail. “Scan testing” is used to check each gate. Scan testing consists of applying a pre-determined pattern of signals that exercises every single gate on the chip, and each chip’s output is compared to the expected output. Each die is tested, and the chips that pass are sent on to be packaged.

 

Packaging

Good dies are then placed into packages and tested again. The packaging team typically designs a custom package for the die, and needs to consider signal integrity, cost, thermal regulation, and reliability. Often, we at Keysight will re-design the package of an existing ASIC using updated technology to reduce hardware cost and improve reliability of our oscilloscopes.


For example, the ADC on our inexpensive oscilloscopes is the same ASIC used in some legacy oscilloscopes, but by improving the packaging over time we’ve reduced the package cost by nearly 5x. Thanks to that cost reduction, what was once used only in a top-of-the-line oscilloscope can now go into our low-cost oscilloscopes.

 

Support Circuitry

Once a chip is manufactured, tested, and packaged, it still needs to be surrounded with support circuitry. For example, what good is an op amp if you never configure it with resistors? But, that’s a topic for another blog post.

 

How it’s made

So, while you wouldn’t want to use this description to go design your own ASIC, you should now have a better understanding of what it takes to produce an ASIC. It’s a lot of work, but the benefits they offer compared to FPGAs are often worth the investment. For any given Keysight oscilloscope, we use a few different ASICs. We use analog ASICs for the front end, a custom low-noise ADC, and often a custom processor as the brain of our oscilloscope. While this comes with a fairly large non-recurring engineering (NRE) expense, being able to use the same chip in our $45,000 oscilloscopes and our $450 oscilloscopes earns our oscilloscopes a special place on the budget-conscious engineer’s bench.

What does the piezoelectric effect have to do with oscilloscopes? If you follow any of the electrical engineering YouTube channels, you’re likely familiar with Dave Jones & the EEVBlog. His latest video, “EEVBlog #983 – A Shocking Oscilloscope Problem,” caught my eye. Now, this made me stop in my tracks. Not because he’s highlighting an oscilloscope “problem,” but because after waiting for 982 videos, Dave thought this topic was finally worth using the word “shocking” as a pun. I don’t know about you, but if I had made 982 videos, I’d probably have played that card already. Although, our Keysight Oscilloscopes YouTube channel just broke 250 videos and we haven’t done it yet, so you never know.

 

Anyways, what could be such a big deal? As it turns out the topic is actually, well (sigh) shocking. Who knew that simply bumping an oscilloscope the wrong way could cause mystery signals to appear on the screen? What makes this happen? It occurs because the ceramic capacitors in the oscilloscope’s acquisition system act as a piezoelectric material. Whether you are using a cheap oscilloscope or a high end oscilloscope, the piezoelectric effect is something to be aware of.

 

How does piezoelectricity work?

Piezoelectric materials are crystalline substances that produce an electric potential when subjected to mechanical stress.  Think about a crystal lattice. In general, a material’s molecules form into crystals because that is its most stable state. The molecular charges are arranged in an electrically neutral arrangement. Essentially, the positive and negative charges are all at a happy equilibrium. But as soon as an external physical pressure distorts the crystal structure, there will be an imbalance of charge. Take Fig 01 (GIF) for example. In a normal, non-compressed state the 2D lattice is at equilibrium. But as it’s compressed, the positive and negative charges “squish” out to opposite ends and create a potential across the structure. Basically, the lattice stops being an electrically neutral structure and has a charge distribution.

 

 

 Alternatively, you can apply a voltage to a crystalline structure and it will physically change the shape of the crystal – the “reverse piezoelectric effect.” This is especially useful if you want to generate or sense physical time-varying waves.

 

The piezoelectric effect and oscilloscopes

What does the piezoelectric effect have to do with oscilloscopes? Try this and see for yourself:

 

  1. Grab a standard 10:1 passive probe and connect it to your oscilloscope
  2. Zoom in vertically on your signal to a small voltage per division setting
  3. Set your trigger level slightly above your baseline signal
  4. Remove the probe’s grabber hat & tap the exposed probe tip on a hard surface
  5. Don’t panic and always carry a towel

 

You should then see a signal show up on your screen. Remember, you may have to put your oscilloscope into “Normal” trigger mode to keep the signal onscreen. Alternatively, you may be able to forgo the probe altogether and simply tap on a bare BNC or even the top of the chassis (like in Figure 2). Now, don’t panic, this is a behavior that every scope in existence exhibits. It’s worth noting that I had to smack the oscilloscope pretty stinking hard to get this strong of an effect.

 

Scope Slap

A hand-numbingly hard slap demonstrates the piezoelectric effect on the Keysight InfiniiVision 1000 X-Series

 

A signal is showing up on the oscilloscope because designers use ceramic capacitors in both probes and oscilloscope acquisition boards. Ceramic is a piezoelectric material, and the vibrations caused by the physical force you exert on your probe and/or scope cause the capacitors to physically expand and contract slightly. This expansion and contraction creates an electric potential in the capacitors. Because these capacitors are part of the oscilloscope’s acquisition system, that potential shows up on the oscilloscope screen. “So…” you ask me once you’re done hyperventilating, “have all of my measurements been bogus up to this point? Can I minimize this effect? Is this something I should worry about?”

 

No, yes, and probably not.

 

Unless you are working in the middle of a city-destroying earthquake or on the back of a kangaroo, you probably don’t have anything to worry about. Keysight oscilloscopes all go through extensive environmental and stress testing, including drop tests (up to 30 g’s of force!) and time on a vibration table (Fig 3). So, for Keysight oscilloscopes you can be confident that every-day vibrations won’t affect your measurements (but I can’t speak for other manufacturer’s testing procedures). If you are extra concerned about this or work on the back of a kangaroo, try using an equipment cart or table that has built-in suspension.

 


An Infiniivision 1000 X-Series oscilloscope being drop-tested. Just because it’s an inexpensive oscilloscope doesn’t mean it’s not rugged!

 

Wrapping up

Clearly, under the right circumstances, you can visibly observe the piezoelectric effect on your oscilloscope. However, in my years at Keysight, I have not seen a single instance of this ever affecting an engineer’s measurements. To borrow the words of Mike from Mike’sElectricStuff:

Patient to Doctor: “Doc, it hurts when I do this!”

Doctor to Patient: “Don’t do that!”

It’s Scope Month, and you know what that means, oscilloscope giveaways! But this year we’re giving you even more chances to win.

 

How?

This year Scope Month includes a scavenger hunt. We have hidden your favorite oscilloscope guru Daniel Bogdanoff all over the world (don’t worry, they’re just life-size cardboard cutouts). Your job? Find Daniel. If you (the scopes community) find Daniel, we will add more scopes to the daily drawings during Scope Month! And the faster that you find him, the more free oscilloscopes it means for you!

 

What do I do?

Watch the Keysight Oscilloscopes Facebook and YouTube channels, because we will release a clue on the whereabouts of Daniel each Monday during Scope Month. Once you have the clue, start searching. If you get stuck, come back to the comments section of this blog post where you can collaborate with the rest of the world and maybe together you can find Daniel sooner. When you find the Daniel cutout, post a picture of it along with the hashtag #GoFindDaniel to the Keysight Oscilloscopes Facebook page or mention @Keysight_Daniel on Twitter so that we know to add more oscilloscopes to the drawing.

 

How many oscilloscopes?

If you find the Daniel cutout by 11:59 pm US Mountain Time (6:59 am UTC time) on Tuesday, we will add FIVE MORE SCOPES to the prize pool for that week. If he is found before midnight on Wednesday, we will give away four more scopes, before midnight on Thursday means three more, midnight on Friday is two more, and if Daniel is found by 11:59 pm on Saturday evening, we will add one extra scope to the prize pool. So work together! The sooner you find the cutout, the more free oscilloscopes we will give away! And you don’t even have to wait long for your reward! The drawings for these additional oscilloscopes will be done on or before the following Monday!

 

Don’t forget to enter the oscilloscope giveaway every day during Scope Month for your chance to win a new Keysight 1000 X-Series oscilloscope.

 

Check out the rest of the Keysight oscilloscope family

 

Terms and Conditions

 

GoFindDaniel CLUE #2: 68747470733a2f2f7777772e796f75747562652e636f6d2f77617463683f763d3645703650425379366945

 

It seems like all we hear about today in the test and measurement industry is “solutions.” Why is the word “solutions” such a popular buzz word? Well, it has a great double meaning. The first (and most obvious) definition of solution is an answer or resolution to a problem or situation. The second meaning of “solution” comes from as far back as the year 1590, and means a liquid mixture that is completely mixed (solute into solvent). When we talk about solutions, we really imply both of these definitions. One of them literally, and one of them metaphorically.

Let’s start with the literal meaning. When I say that our DDR solution kicks booty (metaphorically, not literally), what I mean is that we have a robust, industry-proven oscilloscope that will simplify the complicated task of triggering, analyzing and debugging parallel buses. If you work with DDR, you’re probably now thinking, “tell me more about this solution.” Ok, here goes:

“The Keysight Infiniium V-Series oscilloscopes also have the world’s fastest digital channels, which means you can probe at the various command signals to easily trigger on the different DDR commands such as read, write, activate, precharge and more. DDR triggering makes read and write separation easy, providing fast electrical characterization, real-time eye analysis and timing measurements. The DDR protocol decoder can decipher the DDR packets and provide a time-aligned listing window to search for specific packet information.”

So I may have copied that from our DDR webpage, but it doesn’t count as plagiarizing if it’s from my own company. And, it’s one heckuva solution because it does the job you’ll need it to do, and it does it well. If it didn’t get the job done, it wouldn’t be a solution. I also can’t just call it an oscilloscope, because it’s more than that. It’s a combination of hardware, software, and probing – it’s a whole solution.

But here’s what it comes down to, we call it a solution because it’s the answer to a lot of your DDR problems, and it’s so much more than just a piece of hardware for your bench.

Ok, I hinted above about how this could get metaphorical. The literal, chemistry definition of solution is a liquid mixture that has a fully dissolved (same root word as solution!) solute in a solvent. I don’t literally work with solutions. But when we at Keysight are combining and integrating software and hardware, we’re creating a metaphorical solution. For us to do the job 100% of the way, our software has to fully integrate (or dissolve) with our hardware. That’s what makes it a solution. It flows, it integrates, it works as one!

Ok, I also hinted that I’d get (possibly too) philosophical.  I’ll go so far as to say that we can’t really call it a solution until it’s in the hands of an engineer and being used to find solutions to bugs in their design. A solution can only be a solution when the test equipment is fully integrated (dissolved) into an engineer’s workflow and design process. The solution consists of the test tools (the solute) and the engineer’s skill and wit (the solvent). In chemistry, the solute is considered the “minor component” and the solvent is considered the “major component.” This holds true for our metaphorical solution. We can only do so much to provide the solute, the real quality of the solution is dictated by you, the solvent!

So, there are really two main reasons we talk about solutions. First, we want to convey that we can help solve your problems with a combination of tools. Second, we want to partner with you to create and find the real solution, a combination of quality equipment and quality engineering.

In closing, a haiku:

Solutions, complex
Combine wit and expertise
to solve tough problems

Or more traditional English:

Roses are red
Violets are blue
I’m an engineer not a poet
solutions.

Author’s note: there may or may not have been a challenge to see how many times I could use the word “solutions” in a blog post. The answer is 32. Solutions. 33.


Inventing the MSO

Posted by Daniel_Bogdanoff, Sep 1, 2016

A look into the history of the mixed-signal oscilloscope

1996 was a year to remember: it brought us the Macarena, the Nintendo 64, and the first Motorola flip phone. But also making its debut that year was the HP 54645D mixed signal oscilloscope. Today mixed signal oscilloscopes (MSOs) are an industry standard, but this was new and exciting technology 20 years ago. Here’s an excerpt from the HP Journal from April 1997:

“This entirely new product category combines elements of oscilloscopes and logic analyzers, but unlike previous combination products, these are “oscilloscope first” and logic analysis is the add-on.”

At this point in the tech industry, microcontrollers dominated the landscape. Gone were the 1980s and the days of microprocessors and their dozens of parallel signal lines; in was the 8-bit or 16-bit microcontroller. As the need to test dozens (or hundreds) of channels decreased, the thriving logic analyzer industry began to shift in favor of oscilloscopes. As a result, Hewlett Packard released the 54620A, a 16-channel timing-only logic analyzer built into a 54645A oscilloscope frame. This was a big hit for engineers who only needed simple timing analysis from a logic analyzer and liked the simplicity and responsiveness of oscilloscopes.

These tools were all coming out of Hewlett Packard’s famed “Colorado Springs” division, which focused heavily on logic and protocol products. In hindsight it’s clear that the shift from a logic analyzer-focused landscape to an oscilloscope-focused landscape was inevitable.  But, when the project funding decisions had to be made the logic analyzer was king.

A few R&D engineers, however, saw it coming. They strategized amongst themselves to get a new oscilloscope project underway. However, they knew it was going to be a hard-fought battle. Following the old adage “if you can’t beat them, join them,” the engineers proposed a new project combining the oscilloscope and the logic analyzer into one frame. The thought was that if an oscilloscope project wouldn’t get funding, then surely integrating a logic analyzer into the scope would do the trick. Below is a picture of Bob Witte’s (RW) original notes from the 1993 meeting in which the MSO was conceived. (Follow Bob Witte on Twitter: @BobWEngr) This product was internally code named the “Logic Badger,” stemming from the 54620A oscilloscope’s “Badger” code name and the 54645A’s “Logic Bud” code name.

One thing led to another, and the 54620A and the 54645A were combined into the paradigm shifting 54645D. A new class of instrument was introduced into the world: the mixed signal oscilloscope. For the first time ever, engineers could view their system’s timing logic and a signal’s parametric characteristics in a single acquisition using the two analog oscilloscope channels and eight logic channels.

From its somewhat humble beginnings, the MSO has become an industry standard tool globally, with some estimating that up to 30% of new oscilloscopes worldwide are MSOs. Logic analyzers are also still sold today and are an invaluable tool for electrical engineers thanks to their advanced triggering capabilities, deep protocol analysis engines, and state mode analysis. If you’re debugging FPGAs, DDR memory systems, or other high-channel-count projects you’ll want to consider using a logic analyzer. However, mixed signal oscilloscopes dominate today’s bench for their ability to quickly and easily trigger and decode serial protocols.

Finally, it’s worth noting that the Hewlett Packard division is still alive and strong in its current form here at Keysight Colorado Springs. In fact, many of the same engineers from the very first MSO project are still here working on today’s (and tomorrow’s) MSOs.

To learn more about how the digital channels on an oscilloscope work, check out this 2-Minute Guru video on the Keysight Oscilloscopes YouTube channel.

Learn more about MSOs or view the mixed signal oscilloscopes available today from Keysight Technologies at Keysight.com.

(Also the only Jitter Glossary I’ve ever written)

 

Does jitter have you all shook up? This quick overview should help ease those jitters (puns intended, sorry). Learning this list of key terms will give you the confidence you need to start tackling the jitter bugs in your design.

Buckle up! Here’s the exhaustive list of terms you need to know:

Jitter: Essentially a measurement of where your signal’s edges actually are compared to where you want them to be. If your edges are too far off, bad things happen. Really, really bad things. Or sometimes just marginally bad things. Bit errors, timing errors, the works. You can hope it’s just marginally bad, or you can use the right equipment and know for sure.

Jitterbug: An old-school dance.  Note: you don’t actually need to learn this to talk intelligently about jitter. It will probably just have the opposite effect.

Probability Density Function (PDF): Remember your statistics class in college? Me neither. But, you’ll probably remember the term “bell curve” because that affected your grades. A bell curve is just one type of probability density function and is simply another way to describe a “normal” or “Gaussian” distribution. A PDF is simply a chart of possible values based on their likelihood of occurring. The x-axis represents a possible value (sometimes marked by standard deviations away from zero) and the y-axis represents the probability of that value occurring. We use PDFs to visualize and interpret jitter measurements.

Gaussian Distribution: or “normal distribution,” it’s unbounded and continuous. That’s a fancy way of saying that basically any value is possible. But the farther away from the middle of the PDF you go, the less likely it is that that value will occur.
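For reference, the Gaussian PDF is the familiar bell-curve formula, where \mu is the mean and \sigma is the standard deviation (for jitter, \sigma is what an RMS random-jitter number is describing):

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```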

Random Noise: Also “random jitter,” is 100% random and has a Gaussian distribution. It’s caused by physics (yay science!) and has three components: thermal noise, shot noise (or Poisson noise if you’re a math major), and pink noise. If you want to geek out more on this, just look it up on Wikipedia. So, you expected your clock to have a 60 ns period? Well, because you can’t get rid of random noise (earplugs don’t help) you could end up with a rogue 500 ns period every once in a while. But you probably won’t unless you have a few years to run the test. But you could. This is why we like to measure and analyze jitter! You can analyze jitter on your oscilloscope using histograms.

Histograms: A tool that visually describes the distribution of a measurement – how often each measured value shows up. Figure 1 shows a jitter histogram on the Keysight InfiniiVision 6000 X-Series oscilloscope. Because it looks like a bell, you can say “That’s Gaussian!” (and get smarty-pants points from your cubicle-mate). Because there’s only one peak on the histogram, you can say “Psh, it’s only random jitter so there’s nothing we can do about it!” (and get double bonus smarty-pants points from your cubicle-mate). But, look at Figure 2. That looks a little bit scarier. Because the histogram has two peaks it means that there’s “deterministic” jitter.

Figure 1: A histogram of a signal that just has random noise

 

Figure 2: a “Bimodal” histogram shows that there’s deterministic jitter

 

Deterministic Jitter (DJ): It’s not random.  It’s usually bounded, so it can’t go off to infinity even if it wants to. This is when it starts to get scary, because deterministic jitter is caused by system phenomena. Notice that there are two peaks with a random distribution around each of those peaks. Random and deterministic jitter are both in play here.  Deterministic jitter can be broken down into a few sub-categories:

Bounded Uncorrelated Jitter (BUJ): Gives engineers night terrors.  It’s bounded but isn’t really related to anything in that same system.  It could be something like cross talk or just interference from the wall.  (The wall? Yeah, there’s noise everywhere. Check out this awesome video: https://youtu.be/SJefUNAJZNA)

Data Dependent Jitter (DDJ): Can be one of two things.  The first is “duty cycle distortion” (DCD). This is when one bit value tends to have a longer period than the other (like when you can get one kid out of bed way easier than the other). The second is “Inter symbol interference” (ISI). This is caused by long strings of a single bit value. This is sort of like when you’ve been sitting too long in a weird position and one leg doesn’t work right when you get up and try to walk.

Periodic Jitter: can be correlated or uncorrelated, but is always periodic.  This means it’s pretty easy to identify like we’ve done in figure 2. Take your jitter measurement, and plot a trend of the measurement.  Then measure the frequency of the trend, and that will point you directly to the culprit (probably Professor Plum in the library with the candlestick).

“Whoa Daniel, that was too much at once. Remind me again how they all relate to each other?” I’m glad you asked; here’s a nice family tree (Figure 3).

Figure 3: Jitter and its components

Jitter Measurements: This probably doesn’t need defining; I just needed a segue. Ok, fine. Jitter measurements are measurements you make to get a better understanding of the jitter you’re dealing with. Here are a few jitter measurements you might care to make:

Time Interval Error: The mother of all jitter measurements. It’s usually reported as an RMS value and describes the difference between where a signal’s edges actually are and where they would be if the clock were ideal. Like I said, it’s the mother of all jitter measurements. You might think this is all you need to measure, but there are some other helpful measurements out there.

Period Jitter: Is usually measured as a peak-to-peak value, and yields the difference between the longest and shortest clock periods over a specified amount of time.

Cycle-to-Cycle Jitter: Is also usually measured as a peak-to-peak value, and is the maximum difference between adjacent clock periods. The longer you measure this, the larger it’ll get, so if you want to characterize this for posterity, use a set number of cycles that you measure. Basically, period jitter tells you how bad it is in the long run, and cycle-to-cycle jitter tells you how fast you are going to get there.
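To make those three definitions concrete, here’s a small sketch that computes them from a list of measured rising-edge times. The edge timestamps and nominal period are made up for illustration, and the ideal clock is simply anchored at the first edge (real instruments recover the reference clock more carefully):

```python
def jitter_stats(edge_times, nominal_period):
    """Compute basic jitter numbers from measured rising-edge timestamps."""
    # Time interval error: actual edge position minus ideal edge position,
    # with the ideal clock anchored at the first measured edge
    t0 = edge_times[0]
    tie = [t - (t0 + n * nominal_period) for n, t in enumerate(edge_times)]
    rms_tie = (sum(e * e for e in tie) / len(tie)) ** 0.5

    # Period jitter (peak-to-peak): longest minus shortest observed period
    periods = [b - a for a, b in zip(edge_times, edge_times[1:])]
    pk_pk_period_jitter = max(periods) - min(periods)

    # Cycle-to-cycle jitter: largest change between adjacent periods
    c2c_jitter = max(abs(b - a) for a, b in zip(periods, periods[1:]))

    return rms_tie, pk_pk_period_jitter, c2c_jitter

# Made-up edge times (seconds) for a nominally 10 ns clock
edges = [0.0, 10.1e-9, 19.9e-9, 30.2e-9, 40.0e-9, 50.1e-9]
print(jitter_stats(edges, 10e-9))
```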

All of this should be enough to get you started if you want to measure (or just discuss) jitter. If your interest was piqued or you felt cheated because I didn’t talk about eye diagrams, clock recovery, or phase lock loops, check out this app note on Jitter Analysis written by Johnnie Hancock. It’s really good, but doesn’t have as many jokes. Although fewer jokes are probably a welcome relief by this point. You can also learn more about jitter  and jitter measurement tools at Keysight.com.

Thanks for reading! If I didn’t coax you into clicking that link (who reads app notes, right?) check out our YouTube channel.

Also, check out some of our other posts! We’ve talked about probing techniques with Kenny Johnson: Splurge, get an active probe and Measure ripple and noise on power supply voltage rails; confusion in Australia and normal triggering with Johnnie Hancock; signal modulation and DIY oscilloscope Bode Plots with Mike Hoffman; and measuring system bandwidth and measuring oscilloscope and probe bandwidth with Taku Furuta.

And of course, Melissa Spencer’s oscilloscope zombie apocalypse survival guide.

 

Do you want to make sure your oscilloscope measurements are the best they can possibly be?  Don’t settle for just an average measurement; simply scaling your signal properly can dramatically improve measurement quality.  Why? Because both sample rate and the bits of resolution of your oscilloscope play a part in your scope’s measurements.

Sample rate is affected by the oscilloscope’s horizontal scaling.  The equation to remember is:

Sample Rate = Memory Depth/Acquisition Length

Memory depth is a constant value, and the acquisition length (or trace length) is a variable dependent on your time-per-division settings. As the time/division setting increases, the acquisition length increases.  Since all of this must fit into the scope’s memory, at a certain point the oscilloscope’s ADC will have to decrease its sample rate.  What does this mean practically?  Let’s look at a frequency measurement on a 100 kHz square wave. We know the frequency is precisely 100 kHz and is very stable, so we can use the standard deviation of our measurement to judge the quality of the measurement.  Figure 1 has our 100 kHz square wave scaled to be viewed over 20 ms of time.  And, the scope’s sample rate has been automatically decreased from 5 GSa/s down to 100 MSa/s in order to fit the entire trace into the oscilloscope’s memory.  And, the standard deviation of our measurement is 1.49 kHz (about 1.5%) after around 1,500 measurements.

Figure 1: Frequency measurement on a 100 kHz square wave viewed over 20 ms.

But, look at what happens when we choose a much smaller time/div setting, effectively shortening the acquisition length and increasing the sample rate.  Figure 2 has the same signal, but horizontally scaled to 1.2 us/div. The standard deviation is now 1.5 Hz, which is one thousand times smaller than our previous measurement.

Figure 2: The same signal horizontally scaled to 1.2 us/div.

All that changed was the horizontal scaling of the signal, and in turn the sample rate of the oscilloscope. So, proper horizontal scaling of your oscilloscope can have a dramatic effect on the quality of your time dependent measurement.
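Here’s that sample-rate relationship as a quick sketch. The 2 Mpts memory depth and 5 GSa/s maximum rate are illustrative numbers chosen to be consistent with the figures above, not a particular scope’s datasheet specs:

```python
def effective_sample_rate(memory_depth, time_per_div, divisions=10, max_rate=5e9):
    """Sample Rate = Memory Depth / Acquisition Length, capped at the ADC's maximum rate."""
    acquisition_length = time_per_div * divisions          # seconds of signal captured
    return min(max_rate, memory_depth / acquisition_length)

# Illustrative 2 Mpts of memory: fast sweeps keep the full 5 GSa/s,
# slow sweeps force the scope to throttle its sample rate
for tdiv in (1.2e-6, 100e-6, 2e-3):
    rate = effective_sample_rate(2e6, tdiv)
    print(f"{tdiv:g} s/div -> {rate:.3g} Sa/s")
```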

Just as horizontal scaling affects your time dependent measurements, vertical scaling affects your vertically dependent measurements (peak to peak voltage, RMS, etc.). Again, let’s take the same 100 kHz square wave, but instead look at peak to peak voltage. Figure 3 has the signal scaled to 770 mV/div, and the standard deviation of the peak to peak measurement is 18 mV. By decreasing the V/div setting on the scope to 66 mV/div, the measurement’s standard deviation becomes 1.22 mV. This is almost a 15x improvement!

Figure 3: The same 100 kHz square wave scaled to 770 mV/div.

By decreasing the V/div settings on the scope, the measurement’s standard deviation becomes 1.22 mV, almost a 15x improvement

Why does the vertical scaling make a difference?  By scaling the signal to fill as much of the screen as possible, we are able to take advantage of the oscilloscope’s full bits of resolution.  Bits of resolution is essentially a signifier of how precise an ADC is capable of being.  The higher the bits of resolution, the more vertical levels the ADC is able to detect.  For example, the image below shows a two bit ADC.  The red sine wave is the analog input to the ADC, and the blue waveform is the digitized version.  As you can see, there are four different quantization levels possible.


This image shows a three bit ADC digitizing the same analog waveform.  By having more quantization levels, the ADC’s digital output is able to more closely approximate the analog input.


When vertically scaling a signal to fill only a portion of the oscilloscope screen, you are not utilizing the ADC’s bits to the full potential.  For example, if you scaled a signal to half of the 3 bit ADC’s screen, you would leave two quantization levels unused above your signal and two levels below.  This would mean that your three bit ADC would only be able to use four quantization levels, rendering it just as precise as a fully utilized two bit ADC.
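Here’s a small sketch of that effect: quantizing the same sine wave with an ideal 3-bit ADC, once scaled to fill the full input range and once scaled to use only half of it. The signal and ADC model are illustrative only:

```python
import math

def quantize(samples, bits, full_scale):
    """Ideal ADC: map each sample onto 2**bits evenly spaced levels spanning +/- full_scale."""
    levels = 2 ** bits
    lsb = 2 * full_scale / levels
    out = []
    for s in samples:
        code = math.floor((s + full_scale) / lsb)      # which level the sample falls into
        code = min(max(code, 0), levels - 1)           # clamp to valid codes
        out.append((code + 0.5) * lsb - full_scale)    # reconstructed voltage
    return out

def rms_error(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

signal = [math.sin(2 * math.pi * n / 100) for n in range(100)]   # a +/-1 V sine wave

full_screen = quantize(signal, 3, full_scale=1.0)   # signal fills the whole input range
half_screen = quantize(signal, 3, full_scale=2.0)   # signal only fills half the range
print(rms_error(signal, full_screen))                # smaller quantization error
print(rms_error(signal, half_screen))                # roughly 2x worse
```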

Knowing how to properly scale signals on your oscilloscope can make a dramatic difference in the quality of your measurements. Proper horizontal scaling significantly affects your time dependent measurements, and proper vertical scaling affects your vertically dependent measurements. Next time you are in front of your scope, remember: good signal scaling makes great measurements!