benz

It’s RF interference again…

Posted by benz Jul 10, 2017

  The case of the troublesome garage door opener

 

Note from Ben: This is the first in a series of guest posts from Jennifer Stark of Keysight. As discussed here earlier, our increasingly crowded RF environment will result in more interference, and a higher likelihood of it causing problems. To stay ahead of them, you’ll need your creativity, deductive skills, and persistence.

 

Interference is everywhere, and it often comes from an unlikely source.

Let’s take the case of an engineer (we’ll call him Mike) who recently installed a new garage door opener. Frustratingly, the remotes that Mike and his wife carried in their cars intermittently failed to activate the opener.

As a first step, Mike called the support line for the manufacturer of the garage door opener to report the defective product. The installation support person walked Mike through a troubleshooting procedure over the phone. The procedure did not identify any reason that the hardware should be defective. At that point, the installation support person gravely pronounced “You have something called RF interference. That’s your problem, not ours.”

It turns out that Mike is an RF engineer, so he took this as an interesting challenge.

Mike used his N9912A FieldFox handheld RF analyzer in spectrum analyzer mode. He cobbled together a homemade antenna for the input connector and started sniffing around the house for RF interference. He identified the target frequencies by pressing the garage door remote button while looking at the RF spectrum.

Waterfall and spectrogram display of signals and interference, measured with a handheld spectrum analyzer running interference-analysis software.

Waterfall and spectrogram displays are useful for spotting interference and understanding its behavior in the time domain. The N9918A-236 Interference Analyzer and Spectrogram software for FieldFox analyzers adds these displays to spectrum measurements.

Armed with this information about the frequency range of interest, Mike set out looking for any signals that were near the frequency of the garage door opener. A diligent engineer, he went all over the house looking for clues. He looked in the garage. He looked upstairs in the house, above the garage. He looked in corners of the house.

Eventually, he discovered a small but significant signal in the kitchen. It appeared to be coming from the refrigerator. This puzzled Mike, but his engineering discipline compelled him to investigate. Unplugging the refrigerator did not eliminate the signal. Checking at a different time of day, Mike discovered that the interfering signal was absent even when the refrigerator was operating. It was a mystery.

Leaning on the kitchen counter to collect his thoughts, Mike took stock of what he had learned:  intermittent garage door issues, signal coming from the kitchen. Then, Mike had an insight. The garage door only failed when his wife was home, so the issue was related to the comings and goings of his wife.

At this point, Mike noticed his wife’s purse in its normal spot on the counter by the refrigerator. Mike investigated his wife’s purse with his FieldFox. Sure enough, the interfering signal was coming from the purse (not from the refrigerator). Inside the purse was the remote key fob for the car. Mike removed the battery from the key fob and the interfering signal immediately went away.

The solution was simple—replace the troublesome key fob. Now the garage door is working properly, Mike is happy, and Mike’s wife is happy. And, the pesky RF interference is no more.

  Engineers who exemplify creativity, and the ability to explain it

School is out and some are on holiday. It’s a good time to widen this blog’s technology focus a bit with one of my occasional off-topic wanderings. This time we’ll look at impressive achievements of some engineers of yore, and a couple of enlightening explanations of their creations.

These days we combine our electrical skill with processors, software, and myriad actuator types to generate virtually any kind of complex mechanical action—wherever we need to connect electrons with the physical world. It’s easy to forget how sophisticated tasks were accomplished in the past, without computers or stepper motors, and how even advanced techniques such as perceptual coding were implemented with physical mechanisms.

All these elements were brought together for me recently in an impressive YouTube explanation by “engineerguy” Bill Hammack of the University of Illinois. In just four minutes, Bill explains several poorly understood aspects of film projectors that evolved in the century between their invention (c.1894) and their replacement by digital cinema technology (c.1999).

Bill uses slow-motion footage and animated diagrams to do a great job of explaining how a projector keeps the film going smoothly across the sound sensor while intermittently starting and stopping the film between the lamp and lens. This precisely executed start-stop motion, projecting the film image only when it isn’t moving, coaxes our vision system into seeing a series of stills as fluid motion.

Bill shows how the motion is produced using a synchronized cam, shuttle, and wobble plate. As I dug deeper, further research showed that some projectors instead use an equally innovative mechanism called a Geneva drive (or Geneva stop), a mechanism that was already old when the first crude projectors were created in the late 19th century. Seeing the shape of the Geneva mechanism sent me to my reproduction of the very old book Five Hundred & Seven Mechanical Movements.

Scanned image of Geneva mechanism or Geneva stop from the 1896 book Five Hundred & Seven Mechanical Movements

This composite figure shows two examples of Geneva drives from the mechanisms in Henry T. Brown’s 1896 book Five Hundred & Seven Mechanical Movements. These convert continuous motion to intermittent motion with smooth starts and stops, and have built-in limits or “stops.”

I figured I was nearly alone in my interest in the old book, but that is not the case. Another quick search revealed that these manifold fruits of the Industrial Revolution have been brought into the internet age, with hyperlinks and animation at 507movements.com. The animations are addictive!

The book is a potent antidote to the tendency to forget how clever and imaginative the engineers of the past actually were, though they were often self-taught and worked with limited materials. And, if we take Edison and the Wright Brothers as examples, they were tireless experimenters.

From an 1896 book to the joys of YouTube, there is cleverness in both the engineering and the explaining. If you’re looking for something closer to our RF home, check out Bill’s demonstration of performing Fourier analysis with a mechanical device. You may never think of FFTs in quite the same way again.

  Coherence can make a big difference

Sometimes The Fates of Measurement smile upon you, and sometimes they conspire against you. In many cases, however, it’s hard to tell which—if either—is happening.

More often, I think, they set little traps or tests for us. These are often subtle, but nonetheless important, especially when you’re trying to make the best measurements possible, close to the limits of your test equipment.

In this post I’m focusing on coherence as a factor in ACPR/ACLR measurements. These ratios are fundamental measurements for RF engineering, quantifying the ability of transmitters to properly share the RF spectrum.

To make the best measurements, it’s essential to understand the analyzer’s contribution to ACP and keep it to a minimum. As we’ve discussed previously, the analyzer itself will contribute power to measurements, and this can be a significant addition when measuring anywhere near its noise or distortion floor.

We might expect this straightforward power contribution to also apply to ACPR measurements, where the signals appear to be a kind of band-limited noise that may slope with frequency. However, it’s important to remember that these are actually distortion measurements, and that the assumption of non-coherence between the signal and the analyzer is no longer a given.

Indeed, an intuitive look at the nonlinearity mechanisms in transmitters and analyzers suggests that coherence of some kind is to be expected. This moves us from power addition (a range of 0 dB to +3 dB) to voltage addition and the larger apparent power differences this can cause.

Diagram of error in ACPR/ACLR measurements vs. analyzer mixer level/attenuation. Diagram specifically calls out the case where DUT and analyzer ACPR are same, and how this can cause large ACPR errors due to coherent signal addition or cancellation.

This diagram shows how analyzer mixer level affects ACPR measurement error when the analyzer and DUT distortion are coherent. The largest errors occur when the ACP of the DUT and analyzer are the same, indicated by the vertical dashed red line.

Interestingly, the widest potential error band occurs not where the analyzer ACP is largest but when it is the same as the DUT ACP. Consequently, adjusting the mixer level to minimize total measured ACP may lead you to a false minimum.
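
To put rough numbers on that error band, here is a minimal Python sketch (my own illustration, not output from any analyzer or Keysight tool). It compares ordinary power addition with the coherent best and worst cases, as a function of how far the analyzer’s own ACP sits below the DUT’s:

import numpy as np

def acp_error_bounds(delta_db):
    # delta_db: how far the analyzer's ACP is below the DUT's ACP
    r = 10 ** (-delta_db / 20)                      # analyzer/DUT voltage ratio
    incoherent = 10 * np.log10(1 + r ** 2)          # non-coherent power addition
    coherent_max = 20 * np.log10(1 + r)             # in-phase voltage addition
    coherent_min = 20 * np.log10(1 - r) if r < 1 else float("-inf")   # cancellation
    return incoherent, coherent_max, coherent_min

for delta in (20, 10, 3, 0):
    inc, cmax, cmin = acp_error_bounds(delta)
    print(f"analyzer {delta:2d} dB down: power addition {inc:+.2f} dB, "
          f"coherent {cmax:+.2f} to {cmin:+.2f} dB")

With the analyzer 10 dB below the DUT, simple power addition reads about 0.4 dB high, but coherent addition can land anywhere from roughly +2.4 dB to –3.3 dB; when the two are equal, the range runs from +6 dB all the way to complete cancellation.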

There are a number of challenges in optimizing an ACPR measurement:

  •  Noise power subtraction cannot be used due to analyzer/DUT coherence.
  •  Narrow RBWs are no help because they have an equal effect on the apparent power from the analyzer and DUT.
  •  Low mixer levels (or higher input attenuation) minimize analyzer-generated distortion but increase measurement noise floor.
  •  High mixer levels improve measurement noise floor but increase analyzer-generated distortion.

While the settings for lowest total measurement error are not exactly the same as for minimum analyzer-generated ACP, they are generally very close. In a future post I’ll discuss the differences, and ways to optimize accuracy, no matter what The Fates have in mind for you.

  A quick, intuitive look at what makes noise figure measurements challenging

Noise is fundamental in much of what RF engineers do, and it drives cost/performance tradeoffs in major ways. If you’ve read this blog much, you’ve probably noticed that noise is a frequent focus, and I’m almost always working to find ways to reduce it. You’ve also noticed that I lean toward an intuitive explanation of RF principles and phenomena whenever possible.

In the most recent post here, Nick Ben discussed four fundamentals of noise figure. It’s a useful complement to my previous look at the measurement and the two main ways to make it.

As engineers, we work to develop a keen sense of when we might be venturing into difficult terrain. This helps us anticipate challenging tasks in measurement and design, and it helps us choose the best equipment for the job. In this post I’ll summarize factors that might make noise figure measurements especially troublesome.

First, the most common challenge in noise figure measurements: ensuring that the noise floor of the measurement setup is low enough to separate it from the noise contributed by the DUT. These days, the most frequently used tool for noise figure measurements is a spectrum or signal analyzer, and many offer performance and features that provide an impressively low noise floor for noise figure measurements.

Measurement examples of noise floor for a broad frequency span on Keysight PXA signal analyzer. Lower traces include effect of internal and external preamplifiers.

Internal (middle trace) and external (bottom trace) preamplifiers can dramatically reduce the noise floor of signal analyzers (scale is 4 dB/div). The measurements are from a Keysight PXA X-Series signal analyzer, which also includes a noise subtraction feature as another tool to reduce effective analyzer noise floor.

My instinct is to separate noise figure measurements into four general cases, resulting from two characteristics of the DUT: high or low noise figure versus high or low gain.

I should note that this is something of an oversimplification, and not useful for devices such as attenuators and mixers. For the sake of brevity in this post I’ll limit my discussion to RF amplifiers, and in a future post deal with other devices and the limits of this approach.

Because analyzer noise floor is a critical factor in the measurements, it’s probably no surprise that you’ll have an easier time measuring devices with a relatively high level of output noise. This includes devices that have a poor noise figure, no matter their gain. Less obviously, it also includes devices with a very good noise figure, as long as their gain is high enough.

The intuitive thing to keep in mind is that large amounts of gain will amplify DUT input noise by the same amount, resulting in output noise power large enough to be well above the analyzer’s noise floor.

Thus, the most difficult measurements involve devices with modest gain, especially when their noise figure is very good (low). The resulting noise power at the DUT output is also low, making it difficult to distinguish the noise power at the DUT output from that of the signal analyzer.
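
A back-of-the-envelope comparison makes these cases concrete. The sketch below (Python, with an assumed analyzer noise density as a placeholder; substitute the number from your own analyzer’s specifications) estimates how far the DUT’s output noise sits above the analyzer’s floor:

def dut_output_noise_density(nf_db, gain_db):
    # kTB at 290 K is about -174 dBm in a 1 Hz bandwidth
    return -174 + nf_db + gain_db                   # dBm/Hz at the DUT output

analyzer_floor = -163   # dBm/Hz, assumed placeholder; check your analyzer's specs

for nf, gain in [(3, 30), (1, 40), (1, 15)]:
    margin = dut_output_noise_density(nf, gain) - analyzer_floor
    print(f"NF = {nf} dB, gain = {gain} dB -> {margin:.0f} dB above the analyzer floor")

In this example the modest-gain, low-noise-figure device clears the assumed floor by only about 5 dB, which is the hard case described above.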

In his recent post, Nick also brought up the problems that interference and other (non-noise) signals can cause with noise figure measurements. Shielding your setups and ensuring connection integrity can help, and signal analyzers can identify discrete signals and avoid including them in the noise figure results.

One more complicating factor in noise figure measurements is the impedance mismatches that occur in two places: between the noise source and the DUT, and between the DUT and the analyzer. This problem is generally worse at higher frequencies, making it increasingly relevant in 5G and other millimeter-wave applications. The most thorough way to handle the resulting errors in noise power and gain measurements is to use the “cold source” noise figure method implemented in vector network analyzers.

Noise figure measurements will challenge you in many other ways, but those mentioned above should give noise figure novices a better sense of when it’s most important to be careful with the measurements and cautious in interpreting the results.

benz

The Four Ws of Noise Figure

Posted by benz May 30, 2017

  If you're going to optimize it, you have to quantify it

 

Note from Ben Zarlingo: This is the second in our series of guest posts by Nick Ben, a Keysight engineer. Here he turns his attention to the fundamentals of noise figure.

 

In the previous edition of The Four Ws, I discussed the fundamentals of spurious emissions. This time I’m discussing the WHAT, WHY, WHEN and WHERE of characterizing your system’s noise figure (NF) to process low-level signals for improving product design.

What

Noise figure can be defined as the degradation of (or decrease in) the signal-to-noise ratio (SNR) as a signal passes through a system network; its linear equivalent is called noise factor. In our case the “network” is a spectrum or signal analyzer (SA).

Basically, a low figure means the network adds very little noise (good) and a high noise figure means it adds a lot of noise (bad). The concept fits only those networks that process signals and have at least one input and one output port.

Figure 1, below, provides the fundamental expression for noise figure.

Equation describing noise figure as a power ratio, comparing signal/noise at input and output of two-port device

Figure 1. Noise figure is the ratio of the respective signal-to-noise power ratios at the input and output when the input source temperature is 290 K.

Additionally, noise figure is usually expressed in decibels:

             NF (in dB) = 10 log (F) = SNRin (dB) – SNRout (dB)

Why and When

Noise figure is a key system parameter when handling small signals, and it lets us make comparisons by quantifying the added noise. Knowing the noise figure, we can calculate a system’s sensitivity from its bandwidth.

 It’s important to remember that a system’s noise figure is separate and distinct from its gain. Once noise is added to the signal, subsequent gain stages amplify signal and noise by the same amount and this does not change the SNR.

 Figure 2.a, below, shows the input to an amplifier, with a peak 40 dB above the noise floor; Figure 2.b shows the resulting output signal. Gain has boosted the signal and noise levels by 20 dB and added its own noise. As a result, the peak of the output signal is now only 30 dB above the noise floor. Because the degradation in SNR is 10 dB, the amplifier has a 10 dB noise figure.

Relative signal and noise levels compared at the input and output of an amplifier. The noise level increases more than the signal level, due to the noise added by the amplifier.

Figure 2: Examples of a signal at an amplifier’s input (a) and its output (b). Note that the noise level rises more than the signal level due to noise added by the amplifier circuits. This change in SNR is the amplifier’s noise figure.
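
The arithmetic in Figure 2, and the sensitivity calculation mentioned above, are simple enough to capture in a few lines of Python. These are standard textbook relationships, with example numbers of my own:

import math

def noise_figure_db(snr_in_db, snr_out_db):
    # NF is the degradation in SNR through the network (Figure 1)
    return snr_in_db - snr_out_db

def sensitivity_dbm(nf_db, bandwidth_hz, required_snr_db=0):
    # kTB noise floor (-174 dBm/Hz at 290 K) plus NF, bandwidth, and required SNR
    return -174 + nf_db + 10 * math.log10(bandwidth_hz) + required_snr_db

print(noise_figure_db(40, 30))                # Figure 2 example: 10 dB noise figure
print(round(sensitivity_dbm(10, 1e6), 1))     # about -104 dBm in a 1 MHz bandwidth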

Where (& How)

The open question: Where are the system noise sources that affect noise figure? Most noise consists of spontaneous fluctuations caused by ordinary phenomena in the electrical equipment, and this noise is generally flat. We perform measurements on this noise to characterize noise figure. These noise sources fit into two main categories: thermal noise and shot noise.

 One more note: It’s important to consider that some of the power measured in a noise figure measurement may be some type of interference rather than noise. Therefore, it’s critical to be alert for and guard against this by performing measurements in shielded rooms to ensure we’re seeing only the spontaneous noise we want to measure.

 Wrapping Up

If you’d like to learn more about characterizing noise figure and improving your product designs, recent application notes titled Three Hints for Better Noise Figure Measurements and Noise and Noise Figure: Improving and Simplifying Measurements include great explanations and how-to techniques as well as pointers to additional resources. You’ll find both in the growing collection on our signal analysis fundamentals page.

 I hope my second installment of The Four Ws of X provided some information you can use. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful, please give it a like and, of course, feel free to share.

  Handling the frequency and bandwidth challenges of 5G, radar, and more

5G technologies and markets are much in the news these days, and for good reasons. The economic potential is large, the opportunities are relatively near-term, and the technological challenges are just the kind of thing RF engineers can get excited about. Whether your focus is on design or test, there is plenty of difficult work ahead in the pursuit of practical ways to fully utilize the potential capacity of the centimeter and millimeter bands.

Unfortunately, much of the analysis and commentary focuses on economic factors and broad-brush coverage of technological challenges. A good overview of a complex subject is essential for resource planning, but it isn’t deep enough for us to see the specific measurement challenges and how we might handle them.

Some measurement experts have a “just right” combination of broad technical knowledge and specific measurement insight to make a contribution here, and I can heartily recommend Keysight’s Pete Cain. He has not only the expertise but also an impressive ability to explain the technical factors and tradeoffs.

Pete recently produced a webcast on millimeter-wave challenges, and it’s a good fit for the needs of the RF/microwave engineer or technical manager who will be dealing with these extreme frequencies and bandwidths. It’s available on-demand now, and I wanted to share a few highlights.

His presentation begins with a discussion of general technology drivers such as the high value of lower-frequency spectrum and the public benefit of shifting existing traffic to higher frequencies to free it up whenever possible. That’s an important issue, and perhaps a matter of future regulation to avoid a tragedy of the commons.

Next, Pete goes on to explain the problem of increased noise that goes along with the wider bandwidths and increased data rates of microwave and millimeter bands. This noise reduces SNR and eventually blunts channel capacity gains, as shown here.

Comparing the maximum spectral efficiency of different channel bandwidths. The S-shaped curve shows how spectral capacity and channel efficiency returns diminish as bandwidths get very wide.

The wide bandwidths available at millimeter-wave frequencies promise dramatic gains in channel capacity. Unfortunately, these bandwidths gather up more noise, and that limits real-world capacity and spectral efficiency.

As shown in the diagram, Pete also discusses spectral efficiency and shows where existing services operate. This is where RF engineers have already turned theory into practical reality, and it maps out the landscape of tradeoffs they’ll optimize as millimeter technologies become widespread.
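
You can see the diminishing returns with nothing more than the Shannon capacity formula and a noise power that grows with bandwidth. The receive power and noise figure below are arbitrary assumptions, chosen only to illustrate the trend:

import numpy as np

k_boltzmann = 1.380649e-23        # J/K
temperature = 290                 # K

def capacity_bps(bandwidth_hz, rx_power_w, nf_db=5):
    noise_w = k_boltzmann * temperature * bandwidth_hz * 10 ** (nf_db / 10)
    return bandwidth_hz * np.log2(1 + rx_power_w / noise_w)

for bw in (20e6, 100e6, 1e9, 5e9):
    c = capacity_bps(bw, rx_power_w=1e-9)      # 1 nW (-60 dBm) received power
    print(f"{bw/1e6:6.0f} MHz -> {c/1e9:5.2f} Gbit/s, {c/bw:5.2f} bit/s/Hz")

Capacity still grows as the channel widens, but spectral efficiency falls steadily because the wider channel gathers proportionally more noise: the diminishing-returns behavior the S-shaped curve illustrates.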

To further inspire the technically inclined, Pete dives deeper into the essentials of high-frequency testing, including the issues of loss and frequency response at microwave and millimeter-wave frequencies. As is often the case, high quality measurements require a combination of hardware, software, and careful measurement technique. In particular, he describes the value of establishing source and analyzer calibration planes right at the DUT, thereby minimizing measurement error.

Diagram shows measurement of a millimeter-wave DUT where the calibration planes of the input and output have been moved to the edges of the DUT itself, for better accuracy.

To optimize accuracy, it’s necessary to move the calibration planes of measurements from the instrument front panels to the DUT signal ports. Software such as the K3101A Signal Optimizer can make this much easier.

Moving the calibration planes to the DUT ports grows more important as frequencies increase. Loss is an issue, of course, but in many cases the thorniest problems are frequency-response effects such as ripple and non-repeatability. Ripple is especially troublesome for very-wideband signals, while repeatability can be compromised by sensitivity to cable movement and routing as well as connector torque and wear.

In the webcast, Pete also compares signal-connection methods, including coax, waveguide, probes, and antennas.

That’s just a quick overview of an enlightening presentation. To see the whole thing, check out the “Millimeter wave Challenges” on-demand webcast—and good luck in the land of very short waves.

  As technologists, we need ways to tell the difference

Over the past few months I’ve been hearing more about a propulsion technology called an EM Drive or EMDrive or, more descriptively, a “resonant cavity thruster.” As a technology that uses high-power microwaves, this sort of thing should be right in line with our expertise. However, that doesn’t seem to be the case because the potential validity of this technique may be more in the domain of physicists—or mystics!

Before powering ahead, let me state my premise: What interests me most is how one might approach claims such as this, especially when a conclusion does not seem clear. In this case, our knowledge of microwave energy and associated phenomena does not seem to be much help, so we’ll have to look to other guides.

First, let’s consider the EM drive. Briefly, it consists of an enclosed conductive cavity in the form of a truncated cone (i.e., a frustum). Microwave energy is fed into the cavity, and some claim a net thrust is produced. It’s only a very small amount of thrust, but it’s claimed to be produced without a reaction mass. This is very much different than established technology such as ion thrusters, which use electric energy to accelerate particles. The diagram below shows the basics.

Diagram of EM drive, showing mechanical configuration, magnetron input to the chamber, and supposed forces that result in a net thrust

This general arrangement of an EM drive mechanism indicates net radiation force and the resulting thrust from the action of microwaves in an enclosed, conductive truncated cone. (image from Wikipedia)

The diagram is clear enough, plus it’s the first time I’ve had the chance to use the word “frustum.” Unfortunately, one thing the diagram and associated explanations seem to lack is a model—quantitative or qualitative, scientific or engineering—that clearly explains how this technology actually works. Some propose the action of “quantum vacuum virtual particles” as an explanation, but that seems pretty hand-wavy to me.

Plenty of arguments, pro and con, are articulated online, and I won’t go into them here. Physicists and experimentalists far smarter than me weigh in, and they are not all in agreement. For example, a paper from experimenters at NASA’s Johnson Space Center has been peer-reviewed and published. Dig in if you’re interested, and make up your own mind.

I’m among those who, after reading about the EM drive, immediately thought “extraordinary claims require extraordinary evidence.” (Carl Sagan made that dictum famous, but I was delighted to learn that it dates back 200 years to our old friend Laplace.) While it may work better as a guideline than a rigid belief, it’s an excellent starting point when drama is high. The evidence in this case is hardly extraordinary, with a measured thrust of only about a micronewton per watt. It’s devilishly hard to reduce experimental uncertainties enough to reliably measure something that small.

I’m also not the first to suspect that this runs afoul of Newton’s third law and the conservation of momentum. A good way to evaluate an astonishing claim is to test it against fundamental principles such as this, and a reaction mass is conspicuously absent. Those who used this fundamentalist approach to question faster-than-light neutrinos were vindicated in good time.

It’s tempting to dismiss the whole thing, but there is still that NASA paper, and the laws of another prominent scientific thinker, Arthur C. Clarke. I’ve previously quoted his third law: “Any sufficiently advanced technology is indistinguishable from magic.” One could certainly claim that this microwave thruster is just technology more advanced than I can understand. Maybe.

Perhaps Clarke’s first law is more relevant, and more sobering: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

I’m neither distinguished nor a scientist, but I am a technologist, and perhaps a bit long in the tooth. I should follow Clarke’s excellent example and maintain a mind that is both open and skeptical.

benz

The Four Ws of Spurious Emissions

Posted by benz Apr 30, 2017

  Refining your design and creating happy customers

 

Note from Ben Zarlingo: I invite you to read this guest post by Nick Ben, a Keysight engineer. He discusses the what, why, when and where of spurious emissions before delving into the importance of identifying them in your device’s transmitting signal and thereby improving your product design.

 

If you’re reading this, you’re probably an engineer. That means you may be looking for ways to improve your present and future designs. If yours includes a transmitter, one key to success is checking for spurious emissions that can interfere with signals outside your device’s designated bandwidth. Characterizing spurious behavior can save money and help you create happy and loyal customers.

But wait, you say: WHAT, WHY, WHEN and WHERE can I save money and create happy customers by measuring spurious emissions? I’m glad you asked. Let’s take a look.

What: A Quick Reminder

Ben Z has covered this: Spurs are unwanted stray frequency content that can appear both outside and within the device under test’s (DUT’s) operating bandwidth. Think of a spur as the oddball signal you weren’t expecting to be emanating from your device—but there it is. Just like you wouldn’t want your family’s phones to overlap with one another in the same band, they shouldn’t interfere with that drone hovering outside your window (made you look). If you refer to the traces below, the left side presents a device’s transmitted signal without a spur and the right side reveals an unwanted spurious signal, usually indicating a design flaw.  

 

Two GSM spectrum measurements are compared.  The one on the right contains a spurious signal.

In these side-by-side views of a 1 MHz span at 935 MHz, the presence of an unwanted spur is visible in the trace on the right. Further investigation should identify the cause.

Why and When

In densely populated regions of the frequency spectrum, excessive spurs and harmonics are more likely to be both troublesome and noticed. To prevent interference with devices operating on nearby bands, we need to measure our own spurious emissions.

Where (& How)

Spur measurements are usually performed post-R&D, during the design validation and manufacturing phases. Using a spectrum or signal analyzer, these measurements are presented on a frequency vs. amplitude plot to reveal any undesirable signals. Spur characterization is done over a frequency span of interest using a narrow resolution bandwidth (RBW) filter and auto-coupled sweep-time rules. The sweep normally begins with the fundamental and reaches all the way up to the tenth harmonic.

While current spur-search methods are good for the design validation phase, they aren’t great because measurements are too slow for pass/fail testing of thousands of devices on the production line. These tests are often based on published standards (perhaps from the FCC) that may be described in terms of a spectrum emission mask (SEM). Fortunately, SEM capability is available in Keysight X-Series signal analyzers.

To tackle the issue of slow sweep times and enable faster testing, today’s signal analyzers use digital technologies, especially DSP, to improve measurement speed and performance (see one of Ben’s earlier posts). Ultimately, you can achieve faster sweep times—as much as 49x faster than older analyzers—when chasing low-level signals at wide bandwidths.

Wrapping Up

If you’d like to learn more, a recent application note titled Accelerating Spurious Emission Measurements using Fast-Sweep Techniques includes detailed explanations, techniques, and resources. You’ll find it in the growing collection on our signal analysis fundamentals page.

I hope my first installment of The Four Ws of X provided some information you can use. Please post any comments—positive, constructive, or otherwise—and let me know what you think. If it was useful, please give it a like and, of course, feel free to share.

  Electrical engineers lead the way

Years ago, a manager of mine seemed to constantly speak in analogies. It was his way of exploring the technical and business (and personal!) issues we faced, and after a while I noticed how often he’d begin a sentence with “It’s like...”

His parallels came in a boundless variety, and he was really creative in tying them to our current topic, whatever it was. He was an excellent manager, with a sense of humor and irony, and his analogies generally improved group discussions.

There were exceptions, of course, and misapplying or overextending qualitative analogies is one way these powerful tools can let us down.

The same is true of quantitative analogies, but many have the vital benefit of being testable in ways that qualitative analogies aren’t. Once validated, they can be used to drive real engineering, especially when we can take advantage of established theory and mathematics in new areas.

The best example I’ve found of an electrical engineering (EE) concept with broad applicability in other areas is impedance. It’s particularly powerful as a quantitative analogy for physical or mechanical phenomena, plus the numerical foundations and sophisticated analytical tools of EE are all available.

Some well-established measurements even use the specific words impedance and its reciprocal—admittance. One example is the tympanogram, which plots admittance versus positive and negative air pressure in the human ear.

Two "tympanograms" are impedance measurements of the human hearing system. These diagrams show admittance (inverse of impedance) to better reveal the response of the system at different pressures, to help diagnose problems.

These tympanograms characterize the impedance of human hearing elements, primarily the eardrum and the middle ear cavity behind it, including the bones that conduct sound. The plot at the left shows a typical maximum at zero pressure, while the uniformly low admittance of the one on the right may indicate a middle ear cavity filled with fluid. (Image from Wikimedia Commons)

Interestingly, those who make immittance* measurements of ears speak of them as describing energy transmission, just like an RF engineer might.

Any discussion of energy transmission naturally leads to impedance matching and transformers of one kind or another. That’s where the analogies become the most interesting to me. Once you see the equivalence outside of EE, you start noticing it everywhere: the transmission in a car, the arm on a catapult, the exponential horn on a midrange speaker. One manufacturer of unconventional room fans has even trademarked the term “air multiplier” to describe the conversion of a small volume of high-speed air to a much larger volume at lower speeds.

All of these things can be quantitatively described with the power of the impedance analogy, leading to effective optimization. It’s typically a matter of maximizing energy transfer, though other tradeoffs are illuminated as well.

Maybe my former manager’s affection for analogies rubbed off on me all those years ago. I certainly have a lot of respect for them, and their ability to deliver real engineering insight in so many fields. We EEs can take some degree of pride in leading the way here, even if we’re the only ones who know the whole story.

*A term coined by our old friend H. W. Bode in 1945.

  The difference can be anything from 0 dB to infinity

Most RF engineers are aware that signal measurements are always a combination of the signal in question and any contributions from the measuring device. We usually think in terms of power, and power spectrum has been our most common measurement for decades. Thus, the average power reading at a particular frequency is the result of combining the signal you’re trying to measure plus any extra “stuff” the analyzer chips in.

In many cases we can accept the displayed numbers and safely ignore the contribution of the analyzer because its performance is much better than that of the signal we’re measuring. However, the situation becomes more challenging when the signal in question is so small that it’s comparable to the analyzer’s own contribution. We usually run into this when we’re measuring harmonics, spurious signals, and intermodulation (or its digital-modulation cousin, adjacent channel power ratio, ACPR).

I’ve discussed this situation before, particularly in spurious measurements in and around the analyzer’s noise floor.

Graphic explanation of addition of CW signal and analyzer noise floor. Diagram of apparent and actual signals, actual and displayed signal-to-noise ratio (SNR).

Expanded view of measurement of a CW signal near an analyzer’s noise floor. The analyzer’s own noise affects measurements of both the signal level and signal/noise.

It’s apparent that addition happens between the power of the signal and that of the analyzer’s internal noise. For example, when the actual signal power and the analyzer noise floor are the same, the measured result will be high by 3 dB.

However, it’s essential to understand that the added power is in the form of noise, which is not coherent with the signal—or anything else. The general incoherence of noise is a valuable assumption in many areas of measurement and signal processing.

We can get tripped up when we unknowingly violate this assumption. Consider the addition of these CW examples in the time domain, where the problem is easier to visualize:

Graphic explanation of addition of two equal-amplitude CW signals, showing effect of relative phase. In-phase addition produces a total power 6 dB greater than an individual signal, while out-of-phase addition results in zero net power.

The addition of coherent signals can produce a wide range of results, depending on relative amplitude and phase. In this equal-amplitude example, the result can be a signal with twice the voltage and therefore 6 dB more power (top) or a signal with no power at all (bottom).

I’ve previously discussed the log scaling of power spectrum measurements and the occasionally surprising results. The practical implications for coherent signals are illustrated by the two special cases above: two equal signals with either the same or opposite phase.

When the signals have the same phase, they add to produce a signal with twice the voltage and four times the power, 6 dB higher. With opposite phase the signals cancel each other, effectively hiding the signal of interest and showing only the measurement noise floor. Actual measurements will fall somewhere between these extremes, depending on the phase relationships.
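
The time-domain picture is easy to reproduce numerically. This short sketch adds two equal-amplitude sinusoids at several relative phases and reports the combined power relative to one signal alone:

import numpy as np

fs, f0, n = 1e6, 10e3, 10000                      # an integer number of cycles
t = np.arange(n) / fs
a = np.cos(2 * np.pi * f0 * t)                    # reference signal, 0 dB

def combined_power_db(phase_deg):
    b = np.cos(2 * np.pi * f0 * t + np.radians(phase_deg))   # equal-amplitude second signal
    total = a + b
    return 10 * np.log10(np.mean(total ** 2) / np.mean(a ** 2))

for deg in (0, 90, 120, 180):
    # 180 degrees collapses toward the numerical noise floor (deep cancellation)
    print(f"{deg:3d} deg -> {combined_power_db(deg):+7.2f} dB")

In-phase addition gives +6 dB, quadrature gives the familiar +3 dB of power addition, and opposite phase drives the result toward nothing at all.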

This coherent addition or subtraction isn’t typically a concern with spurious signals; however, it may arise with harmonic and intermodulation or ACPR measurements in which the analyzer’s own distortion products might be coherent with the signal under test. Some distortion products might add to produce a power measurement that is incorrectly high; others could cancel, causing you to miss a genuine distortion product.

I suppose there are two lessons for RF engineers. One: some measurements may need a little more performance margin to get the accuracy you expect for very small signals. The other: be careful about the assumptions for signal addition and the average power of the result.

In a future post I’ll describe what we learned about how this applies to optimizing ACPR measurements. Until then, you can find more information on distortion measurements at our signal analysis fundamentals page.

  Good news about RBW, noise floor, measurement speed, and finding hidden signals

Some things never change, and others evolve while you’re busy with something else. In recent years, RF measurements have provided good examples of both, especially in signal analysis fundamentals such as resolution bandwidth filtering, measurement noise floor, and speed or throughput. It can be a challenge to keep up, so I hope this blog and resources such as the one I’ll mention below will help.

One eternal truth: tradeoffs define your performance envelope. Optimizing those tradeoffs is an important part of your engineering contribution. Fortunately, on the measurement side, that envelope is getting substantially larger, due to the combined effect of improved hardware performance plus signal processing that is getting both faster and more sophisticated.

Perhaps the most common tradeoffs made in signal analysis—often done instinctively and automatically by RF engineers—are the settings for resolution bandwidth and attenuation, affecting noise floor and dynamic range due to analyzer-generated distortion. Looking at the figure below, the endpoints of the lines vary according to analyzer hardware performance, but the slopes and the implications for optimizing measurements are timeless.

Graphic shows how signal analyzer noise floor varies with RBW, along with how second and third order distortion vary with mixer level. Intersection of these lines shows mixer level and resolution bandwidth settings that optimize dynamic range

Resolution-bandwidth settings determine noise floor, while attenuation settings determine mixer level and resulting analyzer-produced distortion. Noise floor and distortion establish the basis for the analyzer’s dynamic range.

This diagram has been around for decades, helping engineers understand how to optimize attenuation and resolution bandwidth. For decades, too, RF engineers have come up against a principal limit of the performance envelope, where the obvious benefit of reducing resolution bandwidth collides with the slower sweep speeds resulting from those smaller bandwidths.

That resolution bandwidth limit has been pushed back substantially, with dramatic improvements in ADCs, DACs, and DSP. Digital RBW filters can be hundreds of times faster than analog ones, opening up the use of much narrower resolution bandwidths than had been practical, and giving RF engineers new choices in optimization. As with preamps, the improvements in noise floor or signal-to-noise ratio can be exchanged for benefits ranging from faster throughput, to better margins, to the ability to use less-expensive test equipment.

Improvements in DSP and signal converters have also enabled new types of analysis such as digital demodulation, signal capture and playback, and real-time spectrum analysis. These capabilities are essential to the design, optimization and troubleshooting of new wireless and radar systems.

If you’d like to know more, and take advantage of some of these envelope-expanding capabilities, check out the new application note Signal Analysis Measurement Fundamentals. It provides a deeper dive into techniques and resources, and you’ll find it in the growing collection at our signal analysis fundamentals page.

A few months ago, Keysight’s Brad Frieden and I both wrote about downconversion and sampling, related to wireless and other RF/microwave signals. Brad's article in Microwaves & RF appeared about two weeks before my blog post, though I somehow missed it.

His main focus was on oscilloscopes and improving signal-to-noise ratio (SNR) in measurements of pulsed RF signals. He described the use of digital downconversion, resampling, and filtering to trade excess bandwidth for improved noise floor.
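
For anyone who hasn’t played with the technique, here is a bare-bones digital downconversion in Python: mix to baseband, low-pass filter, and decimate, keeping only the bandwidth you need. The signal, filter, and decimation factor are arbitrary assumptions, and a real implementation would use a properly designed filter:

import numpy as np

fs, n = 100e6, 2 ** 16
t = np.arange(n) / fs
f_rf = 20e6

rf = np.cos(2 * np.pi * f_rf * t) * (np.arange(n) % 4096 < 512)   # pulsed RF bursts
rf += 0.05 * np.random.randn(n)                                   # broadband noise

# Digital downconversion: mix to baseband, low-pass filter, then resample
baseband = rf * np.exp(-2j * np.pi * f_rf * t)

decim = 50                                            # keep ~1/50th of the original bandwidth
taps = np.sinc(np.arange(-200, 201) / decim) / decim  # crude low-pass FIR prototype
taps *= np.hamming(taps.size)
iq = np.convolve(baseband, taps, mode="same")[::decim]

# Noise outside the retained bandwidth is discarded, so a narrowband signal
# gains roughly 10*log10(decim) dB of SNR
print(f"expected SNR improvement ~ {10 * np.log10(decim):.0f} dB, "
      f"{iq.size} I/Q samples at {fs/decim/1e6:.0f} MHz")

The roughly 17 dB here is the bandwidth-for-noise-floor exchange Brad describes.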

Inside Keysight, debates about scopes versus signal analyzers can become quite animated. One reason: we have slightly different biases to how we look at signals. Engineers in some areas reach first for oscilloscopes, while others have always leaned on spectrum and signal analyzers. It’s more than a general preference for time or frequency domain analysis, but that’s a start.

In test, those distinctions are fading among manufacturers and end users. On the supply side, oscilloscopes are extending frequency coverage into the microwave and millimeter ranges, and signal analyzers are expanding bandwidth to handle wider signals in aerospace/defense and wireless. Happily, both platforms can use the same advanced vector signal analyzer software that provides comprehensive time-, frequency-, and modulation-domain measurements.

On the demand side, frequencies and bandwidths are expanding rapidly in wireless and aerospace/defense applications. That’s why both types of instruments have roles to play.

But if they run the same software and can make many of the same measurements, how do you choose? I’ll give some guidelines here, so that your requirements and priorities guide your choice.

Bandwidth: Over the last 15 years, growing signal bandwidths have pulled oscilloscopes into RF measurements because they can handle the newest and widest signals. In some cases they’re used in combination with signal analyzers, digitizing the analyzer’s IF output at a bandwidth wider than its own sampler. That’s still the case sometimes, even as analyzer bandwidths have reached 1 GHz, and external sampling can extend the bandwidth to 5 GHz! It’s telling that, in his article on oscilloscopes, Brad speaks of 500 MHz as a reduced bandwidth.

Accuracy, noise floor, dynamic range: In a signal analyzer, the downconvert-and-digitize architecture is optimized for signal fidelity, at some cost in digitizing bandwidth. That often makes them the only choice for distortion and spectrum emissions measurements such as harmonics, spurious, intermodulation, and adjacent-channel power. Inside the analyzer, the processing chain is characterized and calibrated to maximize measurement accuracy and frequency stability, especially for power and phase noise measurements.

Sensitivity: With their available internal and external preamps, narrow bandwidths, noise subtraction and powerful averaging, signal analyzers have the edge in finding and measuring tiny signals. Although Brad explained processing gain and some impressive improvements in noise floor for narrowband measurements with oscilloscopes, he also noted that these gains did not enhance distortion or spurious performance.

Multiple channels: Spectrum and signal analyzers have traditionally been single-channel instruments, while oscilloscope architectures often support two to four analog channels. Applications such as phased arrays and MIMO may require multiple coherent channels for some measurements, including digital demodulation. If the performance benefits of signal analyzers are needed, an alternative is a PXI-based modular signal analyzer.

Measurement speed: To perform the downconversion, filtering and resampling needed for RF measurements, oscilloscopes acquire an enormous number of samples and then perform massive amounts of data reduction and processing. This can be an issue when throughput is important.

With expanding frequency ranges and support from sophisticated VSA software, the overlap between analyzers and oscilloscopes is increasing constantly. Given the demands of new and enhanced applications, this choice is good news for RF engineers—frequently letting you stick with whichever operating paradigm you prefer.

  Wrestling information from a hostile universe

I can’t make a serious case that the universe is actively hostile, but when you’re making spurious and related measurements, it can seem that way. In terms of information theory, it makes sense: the combination of limited signal-to-noise ratio and the wide spans required means that you’re gathering lots of information where the effective data rate is low. Spurious and spectrum emission mask (SEM) measurements will be slower and more difficult than you’d like.

In a situation such as this, the opportunity to increase overall productivity is so big that it pays to improve these measurements however we can. In this post I’ll summarize some recent developments that may help, and point to resources that include timeless best practices.

First, let’s recap the importance of spurious and spectrum emission and the reasons why we persist in the face of adversity. Practical spectrum sharing requires tight control of out-of-band signals, so we measure to find problems and comply with standards. Measurements must be made over a wide span, often 10 times our output frequency. However, resolution bandwidths (RBW) are typically narrow to get the required sensitivity. Because sweep time increases inversely with the square of the RBW, sensitivity can come at a painful cost in measurement time.
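
The classic swept-analysis rule of thumb makes the cost easy to estimate. The proportionality constant varies with the analyzer and filter type, so treat these numbers as rough, but the quadratic penalty is the point:

def swept_time_estimate(span_hz, rbw_hz, k=2.5):
    # classic estimate for swept measurements: sweep time ~ k * span / RBW^2
    return k * span_hz / rbw_hz ** 2

for rbw in (1e6, 100e3, 30e3, 10e3):
    st = swept_time_estimate(26.5e9, rbw)
    print(f"RBW {rbw/1e3:6.0f} kHz -> roughly {st:8.2f} s across a 26.5 GHz span")

Cutting the RBW by a factor of 10 to gain 10 dB of noise floor costs a factor of 100 in sweep time, which is why the fast-sweep improvements below matter so much.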

The result is that these age-old measurements, basic in comparison to things such as digital demodulation, can consume a big chunk of the total time required for RF testing.

Now on to the good news: advances in signal processing and ADCs can make a big difference in measurement time with no performance penalty. Signal analyzers with digital RBW filters can be set to sweep much faster than analog ones, and the analyzers can precisely compensate for the dynamic effects of the faster sweep. Many years ago, when first used at low frequencies, this oversweep processing provided about a 4x improvement. In the more recent history of RF measurements, the enhanced version is called fast sweep, and the speed increase can be 50x.

Two spectrum measurements with equivalent resolution bandwidths and frequency spans are compared.  The fast sweep capability improves sweep speed by a factor of nearly 50 times.

In equivalent measurements of a 26.5 GHz span, the fast sweep feature in an X-Series signal analyzer reduces sweep time from 35.5 seconds to 717 ms, an improvement of nearly 50 times.

I’m happy to report that DSP and filter technologies continue to march forward, and the improvement from oversweep to fast sweep has been extended for some of the most challenging measurements and narrowest RBW settings. For bandwidths of 4.7 kHz and narrower, a newly enhanced fast sweep for most Keysight X-Series signal analyzers provides a further 8x improvement over the original. This speed increase will help with some of the measurements that hurt productivity the most.

Of course, the enduring challenges of spurious measurements can be met by a range of solutions, not all of them new. Keysight’s proven PowerSuite measurement application includes flexible spurious emission testing and has been a standard feature of all X-Series signal analyzers for many years.

A measurement application in a signal analyzer has a number of benefits for spurious measurements, including pass/fail testing and limit lines, automatic identification of spurs, and generation of a results table.

The PowerSuite measurement application includes automatic spurious emission measurements, such as spectrum emission mask, that include tabular results. Multiple frequency ranges can be configured, each with independent resolution and video bandwidths, detectors, and test limits.

PowerSuite allows you to improve spurious measurements by adding information to the tests, measuring only the required frequencies. Another way to add information and save time is to use the customized spurious and spectrum emissions tests included in standard-specific measurement applications.
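
Conceptually, that added information is just a table of ranges and limits. Here is a minimal sketch of the idea in Python; the frequency ranges, RBWs, and limits are placeholders rather than values from any published standard:

segments = [
    # (start_hz, stop_hz, rbw_hz, limit_dbm) -- placeholder values
    (  30e6,     1e9, 100e3, -36),
    (   1e9,   3.5e9, 300e3, -30),
    (3.5e9,  12.75e9,   1e6, -30),
]

def check_spurs(peaks, segments):
    # peaks: list of (frequency_hz, level_dbm) from a peak search
    failures = []
    for freq, level in peaks:
        for start, stop, _rbw, limit in segments:
            if start <= freq <= stop and level > limit:
                failures.append((freq, level, limit))
    return failures

print(check_spurs([(935.2e6, -28.0), (2.1e9, -45.0)], segments))

A measurement application does far more than this, of course, including setting up and sweeping each range, but the payoff is the same: you spend measurement time only where the standard and your own knowledge say it is needed.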

A new application note, Accelerating Spurious Emission Measurements using Fast-Sweep Techniques, includes more detailed explanations, techniques, and resources. You’ll find it in the growing collection at our signal analysis fundamentals page.

  Jet engines, furry hoods, and a nod to four years of the Better Measurements blog

This blog gives me occasional freedom to explore technology and phenomena that have only a peripheral relationship to RF design and measurement. Sometimes it feels like a permission slip to cut class and wander off to interesting places with remarkable analogs to our world. I say this by way of warning that it may take me a while to get around to something central to RF engineering this time.

This little side trip begins with high-bypass turbofans and the artistic-looking scallops or chevrons on the outer nacelles and sometimes the turbine cores.

Turbofan jet engine with chevrons or scallops on trailing edge of engine nacelle and engine core, to reduce turbulence and noise.

The NASA-developed chevrons or scallops at the trailing edge of this turbofan engine reduce engine noise by causing a more gradual blending of air streams of different velocities. This reduces shear and the resulting noise-creating turbulence. They look cool, too. (Image from Wikimedia Commons)

In my mental model, shear is the key here. Earlier turbojets had a single outlet with very high velocity, creating extreme shear speeds as the exhaust drove into the ambient air. The large speed differences created lots of turbulence and corresponding noise.

Turbofans reduce noise dramatically by accelerating another cylinder of air surrounding the hot, high-speed turbine core. This cylinder is faster than ambient, but slower than the core output, creating an intermediate-speed air stream and two mixing zones. The shear speeds are now much lower, reducing turbulence and noise.

The chevrons further smooth the blending of air streams, so turbulence and noise are both improved. It’s a complicated technique to engineer, but effective, passive, and simple to implement.

Shear is useful in understanding many other technologies, modern and ancient. When I saw those nacelles I thought of the Inuit and the hoods of their parkas with big furry rims or “ruffs.” In a windy and bitter environment they reduce shear at the edges of the hood, creating a zone of calmer air around the face. A wind tunnel study confirmed Inuit knowledge that the best fur incorporates hairs of varying length and stiffness, anticipating—in microcosm—the engine nacelle chevrons.

The calm air zone reduces wind chill, and it also reduces noise. Years ago I had a parka hood with a simple non-furry nylon rim that would howl at certain air speeds and angles.

Another, more modern example is the large, furry microphone windscreen called (I am not making this up) a “dead cat.” At the cost of some size, weight, and delicacy (imagine the effect of rain) this is perhaps the most effective way to reduce wind noise.

The opposite approach to shear and noise is equally instructive. “Air knife” techniques have been used for years to remove fluids from surfaces, and you can now find them in hand dryers in public restrooms. They inevitably make a heck of a racket because the concentrated jet of air and resulting shear are also what makes them effective in knocking water from your hands. Personally, I don’t like the tradeoffs in this case.

In RF applications, we generally avoid the voltage and current equivalents of shear and undesirable signal power. When we can’t adequately reduce the power, we shape it to make it less undesirable, or we push it around to a place where it will cause less trouble. For example, PLLs in some frequency synthesizers can be set to optimize phase noise at narrow versus wide offsets.

Switch-mode power supplies are another example of undesirable power, typically because of the high dv/dt and di/dt of their pulsed operation. It isn’t usually the total power that causes them to fail EMC tests, but the power concentrated at specific frequencies. From a regulatory point of view, an effective solution can be to modulate or dither the switching frequency to spread the power out.

One final example is the tactic of pushing some noise out of band. Details are described in an article on delta-sigma modulation for data converters. Oversampling and noise shaping shift much of the noise to frequencies where it can be removed with filtering.
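
If you have never seen noise shaping in action, a first-order delta-sigma loop is only a few lines of Python. This is a bare-bones illustration of the principle, not a converter design:

import numpy as np

def first_order_delta_sigma(x):
    # integrate the error between the input and the previous 1-bit output,
    # then re-quantize; quantization noise is pushed toward high frequencies
    integrator, prev_out = 0.0, 0.0
    y = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - prev_out
        y[i] = 1.0 if integrator >= 0 else -1.0
        prev_out = y[i]
    return y

fs, n = 1e6, 2 ** 14
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1e3 * t)          # heavily oversampled low-frequency tone
y = first_order_delta_sigma(x)

spectrum = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(n))) + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)
low_band = spectrum[(freqs > 5e3) & (freqs < 20e3)].mean()
high_band = spectrum[(freqs > 400e3) & (freqs < 490e3)].mean()
print(f"average level near the tone's band: {low_band:.0f} dB; near fs/2: {high_band:.0f} dB")

The quantization noise piles up near half the sample rate, where a following filter (and decimation) removes it.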

I’m sure that’s enough wandering for now, but before I finish this post I wanted to note that we’ve passed the four-year anniversary of the first post here. I’d like to thank all of you who tolerate my ramblings, and encourage you to use the comments to add information, suggest topics, or ask questions. Thanks for reading!

  Taking advantage of knowledge your signal analyzer doesn’t have

To respect the value of your time and the limits of your patience, I try to keep these posts relatively short and tightly focused. Inevitably, some topics demand more space, and this follow-up to December’s post Exchanging Information You’ve Got for Time You Need is one of those. Back then, I promised additional suggestions for adding information to the measurement process to optimize the balance of performance and speed for your needs.

Previously, I used the example of a distortion measurement, setting attenuation to minimize analyzer distortion. This post touches on the other common scenario of improving analyzer noise floor, including the choice of input attenuation. A lower noise floor is important for finding and accurately measuring small signals, and for measuring the noise of a DUT.

One of the first items to add is the tolerable amount of analyzer noise contribution and the amplitude error it can cause. If you’re measuring the noise of your DUT, the table below from my post on low SNR measurements summarizes the effects.

Noise ratio or signal/noise ratio or SNR and measurement error from analyzer noise floor or DANL

Examples of amplitude measurement error values—always positive—resulting from measurements made near the noise floor. Analyzer noise in the selected resolution bandwidth adds to the input signal.

Only you can decide how much error from analyzer noise is acceptable in the measurement; however, a 10 dB noise ratio with 0.41 dB error is not a bad place to start. It’s worth noting that a noise ratio of about 20 dB is required if the error is to be generally negligible.
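
Errors like those in the table follow from simple power addition, which you can reproduce directly (a generic calculation, not tied to any particular analyzer):

import math

def error_from_noise_floor_db(noise_ratio_db):
    # apparent level increase when a signal sits noise_ratio_db above the
    # analyzer's own noise floor (power addition of signal and analyzer noise)
    return 10 * math.log10(1 + 10 ** (-noise_ratio_db / 10))

for nr in (3, 6, 10, 20, 30):
    print(f"{nr:2d} dB above the floor -> reads {error_from_noise_floor_db(nr):.2f} dB high")

At a 10 dB noise ratio the error is the 0.41 dB mentioned above, and at 20 dB it shrinks to about 0.04 dB.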

Sadly, the input attenuation setting for best analyzer noise floor is not the same as that for best distortion performance. The amount of analyzer distortion you can tolerate is another useful factor. Reducing attenuation will improve analyzer noise floor and SNR, but at some point the cost in analyzer distortion performance may outweigh the benefit. And remember that video averaging provides a “free” noise floor benefit of 2.5 dB for measurements of CW signals.

Comparison of signal measurement and analyzer noise floor for different values of input attenuation and use of video averaging to improve effective noise floor

Reducing input attenuation by 12 dB improves noise floor by a similar amount, as shown in the yellow and blue traces. Using a narrow video bandwidth (purple trace) for averaging reduces the measured noise floor but does not affect the measurement of the CW signal.

You can consult your analyzer’s specifications to find its warranted noise floor and adjust for resolution bandwidth, attenuation, etc. That approach may be essential if you’re using the measurements to guarantee the performance of your own products, but your specific needs are another crucial data point. If you simply want the best performance for a given configuration, you can experiment with attenuation settings versus distortion performance to find the best balance.

Many analyzer specs also include “typical” values for some parameters, and these can be extremely helpful additions. Of course, only you can decide whether the typicals apply, and whether it’s proper for you to rely on them.

If you use Keysight calibration services, they may be another source of information. Measurement results are available online for individual instruments and can include the measurement tolerances involved.

Signal analyzers themselves can be a source of information for improving measurements, and the Noise Floor Extension feature in some Keysight signal analyzers is a useful example. Each analyzer contains a model of its own noise power for all instrument states, and can automatically subtract this power to substantially improve its effective spectrum noise floor.
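
The underlying idea of noise power subtraction is straightforward, even though the instrument’s implementation is far more sophisticated. Here is a conceptual sketch, not the Noise Floor Extension algorithm itself:

import math

def subtract_noise_power(measured_dbm, analyzer_noise_dbm):
    # convert to linear power, subtract the known analyzer contribution,
    # and convert back; guard against small negative residues
    measured_mw = 10 ** (measured_dbm / 10)
    noise_mw = 10 ** (analyzer_noise_dbm / 10)
    remaining_mw = max(measured_mw - noise_mw, 1e-30)
    return 10 * math.log10(remaining_mw)

# a trace point reading just 1 dB above a -100 dBm analyzer noise floor
print(round(subtract_noise_power(-99.0, -100.0), 2))    # about -105.9 dBm actual

When the displayed level is only slightly above the analyzer’s floor, the correction is large, which is exactly why the feature is most valuable for the smallest signals.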

For microwave measurements, many signal analyzers use preselector filters to remove undesirable mixing products created in the analyzer’s downconversion process. However, these filters have some insertion loss, which increases the analyzer’s effective noise floor. A valuable nugget that you alone have is whether the mixing products or other signals will be a problem in your setup. If not, you can bypass the preselector and further improve the noise floor.

Finally, one often-overlooked tidbit is whether the signal in question is a consistently repeating burst or pulse. For these signals, time averaging can be a powerful tool. This averaging is typically used with vector signal analysis, averaging multiple bursts in the time domain before performing signal analysis. It can improve noise floor dramatically and quickly, and the result can be used for all kinds of signal analysis and demodulation.

Sorry for going on so long. There are other potential sources of information you can add, but these are some of the most useful I’ve found. If you know of others, please add a comment to enlighten the rest of us.