
  Bring your best engineering game, and use all the measurement tools available

The term Internet of Things (IoT) has been around for a few years, and sometimes it feels over-hyped. When some folks start musing breathlessly about a near future in which virtually everything will be connected, it feels like they’ve taken the concept a little too far.

Consider cybersecurity problems with Internet-connected devices, from cameras to doorbells to toys. These glitches highlight just one of the ways we aren’t quite ready for universal connectivity. In addition, the questionable utility and uneven functionality of some devices have left many potential users with feelings that range from guardedly cautious to overtly skeptical.

Long before we approach universal connectivity, we will have to contend with another factor that often seems universal: RF interference. The combination of complex radio systems, dense environments, and high user expectations guarantees that interference will be a persistent issue.

The Interference of Things is a newer term that may not be hyped enough. While we don’t need to get overly dramatic about interference problems, much of the growth of wireless applications will depend on solving or avoiding these problems.

One application likely to lead the way is IoT in medical or healthcare settings. A recent blog post by Keysight’s Chris Kelly was my first exposure to the term interference of things, and it’s a good example of the potential seriousness of RF interference. Chris suggests a forward-looking approach, focusing on early debugging and a thoughtful combination of design, simulation, emulation, test and analysis.

He is certainly right about the benefits of anticipating problems, but sometimes you’re plunged into an existing situation like the example he describes: nearly a thousand Wi-Fi devices and expectations that problems will be solved quickly.

As an RF engineer, you’ll draw on your tools, techniques, experience, creativity and insight. While the lab environment and its benchtop equipment provide powerful advantages, the faster path to success may mean going to where the thorny problems are. In her recent post describing an elusive example of RF interference, Jennifer Stark explained how a portable signal analyzer and the reasoning power of a wireless engineer were key. The actual interference offender was a simple device and a simple signal, but it wasn’t going to be found in the lab.

Fortunately, portable signal analyzers are expanding their capabilities and frequency range at a rapid pace. Keysight’s FieldFox, for example, provides measurement and display capabilities that can help you find and troubleshoot RF interference problems away from the lab.

Two example displays from the Keysight FieldFox handheld analyzer, including channel scanner and real-time spectrum analysis

The automatic channel scanner (left) speeds measurement of spurious and intermodulation products, while optional real-time spectrum analysis (RTSA; right) can uncover short-duration events.

In a crowded RF environment, the dynamics of time-varying signals pose many challenges, and transient interactions can be hard to understand. Perhaps the most powerful analysis tool is the wideband, gap-free signal capture and playback post-processing that is available in VSA software for signal analyzers. Signal captures can be free-run, time-qualified, or triggered by matching an RTSA frequency mask.

With a complete, gap-free signal in memory—including pre-trigger data—you can perform any type of signal analysis in post-processing: adjust span and center frequencies, apply demodulation, and more. For time-dependent interactions, a spectrogram display can be enlightening.

Gap-free spectrogram (spectrum vs time) display from vector signal analyzer (VSA) of 2.4 GHz ISM band, including WLAN, cordless phone and Bluetooth signals

A spectrogram shows how a signal spectrum (each horizontal line) varies with time (vertical axis) and power (color). This gap-free spectrogram with very fine time resolution was generated by post-processing a signal captured in memory.

The analysis and troubleshooting power of this display comes from its ability to represent everything that happened across a range of frequencies over a known time interval. This clear, comprehensive view is powerful information to mix in with your own knowledge of the system, signal and environment.
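
If you’re curious what that post-processing looks like in practice, here’s a minimal sketch in Python of the same idea: computing a gap-free spectrogram from a block of captured IQ samples using overlapped FFTs. The sample rate, the toy hopping signal, and the FFT parameters are all assumptions for illustration; VSA software does this internally with far more capability and flexibility.

# Minimal sketch: a gap-free spectrogram computed from captured IQ samples.
# The toy signal and all parameters are assumptions for illustration only.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
fs = 10e6                                          # capture sample rate (Hz)
t = np.arange(0, 50e-3, 1/fs)                      # 50 ms gap-free record
hop = np.where((t // 5e-3) % 2 == 0, 1e6, -2e6)    # tone that hops every 5 ms
iq = np.exp(2j*np.pi*hop*t) + 0.01*(np.random.randn(t.size) + 1j*np.random.randn(t.size))
# Overlapped FFTs give fine time resolution with no gaps in coverage
f, tt, Sxx = signal.spectrogram(iq, fs=fs, window='hann', nperseg=1024,
                                noverlap=768, return_onesided=False)
Sxx_db = 10*np.log10(np.fft.fftshift(Sxx, axes=0) + 1e-20)
plt.pcolormesh(tt*1e3, np.fft.fftshift(f)/1e6, Sxx_db, shading='auto')
plt.xlabel('Time (ms)'); plt.ylabel('Frequency offset (MHz)')
plt.title('Spectrogram computed from a captured record')
plt.show()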

 

One note: If you’re interested in the medical environment, or in using it as a guide to other demanding situations, check out Brad Jolly’s webcast Smart Testing to Limit Your Risk Exposure in Wireless Medical Devices. He’ll explain what to do when life and health depend on reliable radio links.

  How about “ACA out of detent”?

Some people of my generation viewed the 1960s race to the Moon as an alternative to a military conflict, with the astronauts as the point of the spear. They were the space equivalents of fighter pilots, doing a more civilized kind of combat. Or maybe they were modern-day cowboys, taming the wildest frontier.

I was too young to be thinking explicitly about engineering as a career, but I viewed those on Earth and those flying the ships mainly as engineers. I still do. Consider the first words spoken by Buzz Aldrin and Neil Armstrong after the first lunar touchdown:

   Shutdown. (Armstrong)

   Okay. Engine stop. (Aldrin)

   ACA—out of detent. (Aldrin)

   Out of detent. (Armstrong)

   Mode control—both auto. Descent engine command override—Off. Engine arm—Off. (Aldrin)

   413 is in. (Aldrin)

By contrast, what we almost always hear in news or documentary coverage after “contact light” (when a probe extending from the lander footpads has touched the surface, but the spacecraft has not yet landed) is a pause and something rather more stirring:

   Houston, Tranquility Base here. (Armstrong)

   The Eagle has landed. (Armstrong)

Some people contend that the “real” first words were Neil’s announcement of Tranquility Base, but I disagree. In so many ways, the Apollo program and its predecessors were fundamentally engineering efforts. It was engineering of the first order, performed by thousands of people, that directed construction and testing, which was performed by hundreds of thousands.

The space program encompassed almost all engineering disciplines, especially electrical and computer engineering. Electrical engineers pioneered systems for control, telemetry, navigation, communications, and tracking. Computer engineers made astonishing advances in both miniaturized (for the time) hardware and real-time, fault-tolerant programming that did an extraordinary job of managing priorities.

Those first words reflected an engineering foundation and the mission planning that it drove. The first priority of the astronauts was to execute those plans, to maximize safety and the chances of mission success. When asked what dominated their thinking, astronauts don’t say much about fear or excitement; rather, they describe a focus on not messing up, and on using their wits to solve the problems that came up (see also Apollo 13).

Most astronauts were engineers, and later engineering test pilots. Many had advanced degrees, and all had considerable engineering training. For example, Aldrin’s first degree was in mechanical engineering, and he pioneered rendezvous technology that was essential for the Moon missions.

It’s no surprise, then, that Armstrong and Aldrin first uttered technical jargon, supporting the essential aspects of completing the landing and handling contingencies for a possible immediate abort back to lunar orbit. The ACA-detent discussion referred to a way to tell the guidance system that they were stable on the lunar surface, stopping any useless thruster activity. The “413 is in” comment refers to a command telling the computer that their orientation was horizontal on the surface, removing drift error ambiguity that could endanger any return to orbit.

After the triumphant announcement of the landing, Armstrong, Aldrin, and the Mission Control team immediately returned to the technical essentials, with Armstrong radioing, “Okay. We’re going to be busy for a minute.” Mission Control spent an intense 90 seconds going through the Stay—No stay decision process, while corresponding efforts on the lunar surface took even longer. If you’re interested in more detail, an annotated transcript is available.

The technical effort behind the Moon landing inspired a generation of engineers of all kinds, and recently some have virtually revisited the landing site. On the 45th anniversary of the landing, NASA used the cameras of its Lunar Reconnaissance Orbiter to generate a 3-D survey.

Recent overhead picture of the Apollo 11 landing site, showing the descent stage of the lander, scientific equipment left behind, and tracks of the astronauts.

This annotated composite image shows the reexamination of the first manned lunar exploration site by the cameras and stereo digital elevation model from the Lunar Reconnaissance Orbiter.  (Image from NASA)

All this certainly inspired my own efforts in engineering and science, and I’ve found that the unvarnished details are always plenty exciting, interesting and even inspiring. If you’d like to make your own virtual visit, rich online resources are now available anywhere there’s an Internet connection.

  Making other windows seem a little wasteful

A proverb that’s perhaps 2,000 years old describes the mills of the gods as “grinding slowly but exceedingly fine.” I’d like to flatter myself that it applies to my thinking on some matters but, alas, the only relevant part appears to be the slowness. Witness how long it’s taken me to get back to FFT window functions and IF filters for RF and microwave measurements.

In both signal analysis and demodulation, flexibility in windows and related filtering operations is increasingly important. As I described in my earlier post, windows are time-varying amplitude-weighting functions that force the signal samples in a time record to be periodic within each block of sampled data. This removes discontinuity errors at the ends of the records that would foul up the spectrum results. To do this, the weighting coefficients are generally zero at either end, with a value of one in the middle, and a smooth range of increasing and decreasing values on either side.

It’s ironic that window functions actually discard or de-emphasize information (e.g., some data samples) to improve measurements. But of course the vital thing is that they trade away this information for desirable frequency-domain filter characteristics such as flatness or selectivity. For example, here’s a Gaussian window in the time and frequency domains.

Time domain (samples) and frequency domain (bins) parameters of the Gaussian FFT window function

When compared to a uniform weighting of one, the Gaussian window reduces leakage and improves dynamic range by de-emphasizing a large portion of the sampled data. The weighting coefficient (left) is greater than 0.9 for only about 1/5 of the samples. (image from Wikimedia Commons)

I have always been surprised at the amount of sampled data in each time record that windows remove from the spectrum calculation, as they improve dynamic range (reducing leakage or sidelobes) or improve amplitude accuracy (by reducing scalloping error).

As with so many things in engineering, it’s a matter of understanding requirements and cleverly optimizing tradeoffs. Consider the “confined” version of the Gaussian window below.

Time domain (samples) and frequency domain (bins) parameters of a "confined" version of a Gaussian FFT window function

Modest time-domain changes in the confined version of the Gaussian window (left) reduce sidelobes dramatically (right) and improve dynamic range. However, even more signal samples are de-emphasized, with a weighting coefficient greater than 0.9 for only about 1/8 of the samples. (image from Wikimedia Commons)

From the standpoint of dynamic range, at least, it appears that selectively removing information improves spectrum characteristics. Of course, dynamic range is not the only important aspect of spectrum measurements, and another important tradeoff can be illustrated with the Tukey window.

Time domain (samples) and frequency domain (bins) parameters of the Tukey FFT window function

The Tukey window is not impressive in the frequency domain, but is remarkable for how much of the sampled data it retains in the spectrum calculation. Its weighting coefficients are greater than 0.9 for about 5/8 of the signal samples. (image from Wikimedia Commons)
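
If you’d like to check those fractions yourself, the short Python sketch below counts how many window coefficients exceed 0.9 for a few standard windows in scipy. The Gaussian width and Tukey taper values are my assumptions, chosen to roughly match the plots above, and scipy’s standard set doesn’t include the confined Gaussian variant, so it’s omitted. The results land near the 1/5 and 5/8 figures quoted in the captions.

# Sketch: what fraction of samples does each window weight above 0.9?
# Window parameters are assumptions chosen to roughly match the figures above.
import numpy as np
from scipy.signal import windows
N = 4096
candidates = {
    'uniform':  np.ones(N),
    'gaussian': windows.gaussian(N, std=0.2*N),    # assumed width
    'tukey':    windows.tukey(N, alpha=0.5),       # assumed 50% taper
}
for name, w in candidates.items():
    print(f'{name:8s}: {np.mean(w > 0.9):.2f} of samples weighted above 0.9')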

Many important signals in RF measurements are noise-like or noisy, and accurate measurements can demand some way to minimize the variance of results. One very good example is ACPR: the larger amount of data retained by the Tukey window means that fewer time records and FFTs will be necessary to reach the variance required for a valid measurement. Thus, the Tukey window’s combination of reasonable dynamic range and efficient use of samples translates to speed and accuracy in ACPR measurements.

Unfortunately, I can’t say if these are the characteristics and tradeoffs the inventor of the Tukey window had in mind. I had assumed the window’s creator was John Tukey, one of the two modern-day discoverers of the FFT algorithm (with J.W. Cooley in 1965). My online research didn’t clarify whether the window was discovered by him or was named after him.

If you have a few minutes to spare, it’s worth browsing available window functions as an example of intelligent tradeoffs. Because you know a lot about the signals you are trying to measure and what’s most important to you, this can be another example of adding information to a measurement to get better results faster.

  Taking extra care in the lands of the large and the small

Recently, I found myself peering at a dial indicator while checking the blade runout on my shiny new 12-inch miter saw. I’m putting up new trim in my house, and the big blade allows me to make some cuts directly on larger assemblies. However, I’m no professional woodworker, so my motto is when in doubt, measure... and measure again.

Given the size of the blade, details such as its flatness and mounting are especially important to making good cuts and tight joints. These factors got me thinking of recent developments at the other end of the scale in our RF work, namely the small physical geometry of the millimeter-frequency hardware we’re increasingly using to send information or sense things.

The new N9041B UXA X-Series signal analyzer and its two different input connectors are good examples of what happens when you scale frequencies up and geometries down.

Comparing the two RF/millimeter frequency inputs of the Keysight UXA 110 GHz signal analyzer.  The full frequency range is available from the 1 mm connector of RF input 2, while the 2.4 mm connector of RF input 1 provides a more rugged connection when frequency coverage to 50 GHz is sufficient

The two coaxial input connectors on the UXA signal analyzer have different characteristics and capabilities. The 2.4 mm connector of input 1 (left) covers frequencies to 50 GHz with a power limit of 1W. The 1 mm connector (right) covers frequencies to 110 GHz and power levels to 1.8 mW.

RF Input 1 is a normal 2.4 mm front-panel connector and, as is common with test equipment, the gender is male to reduce the chance of damage and encourage the use of adapters as connector savers. This separate input provides several benefits for users of the UXA when measuring signals below 50 GHz. It’s more mechanically robust than 1 mm or 1.85 mm connectors. It can also handle much more power without damage: 1 W vs. 0.0018 W for RF Input 2.

RF input 2 is also male, and has the more complicated and challenging job: covering higher frequencies with its smaller and more delicate geometry.

In addition to its conductor size, two mechanical differences are apparent. First, the connector body has an additional, larger outer thread ring to mate with test-port adapters rather than standard 1 mm adapters. These adapters are mechanically stronger and less susceptible to damage, and are the best way to connect to the 1 mm input (if they’re available).

The second difference is the pair of threaded bosses, one on either side of the connector. These bosses are used to mount an input-connector vise assembly, perhaps the smallest vise you’ll ever use.

The 1 mm 110 GHz RF input 2 of the UXA signal analyzer can be fitted with a vise or clamp assembly to isolate the connector from the higher torque that may be needed for other connectors, cables or adapters.

A small vise or clamp assembly is attached around the 1 mm, 110 GHz input of the UXA signal analyzer, isolating the mounting torque for the adapter from the torque needed for connecting the adapter to cables, waveguide adapters, etc.

The small size of the 1 mm connectors means that they don’t need—and probably won’t withstand—the torque that’s appropriate for larger connectors. The torque for the 1 mm connector is 3 or 4 inch-pounds, while the torque for 1.85 mm and larger microwave connectors is 8 inch-pounds.

This is a formula for very expensive damage! To prevent it, the vise holds the flats (part of the body) of the 1 mm end of a standard adapter after it has been tightened to the analyzer’s front panel connector, avoiding the transfer of torque used to connect cables or other adapters to the adapter mounted to the instrument.

As discussed here before, torque is important at microwave and millimeter frequencies. DUT connections are difficult enough, and this simple little clamp can neutralize an important source of problems. You can learn more in the connector kit overview.

As for me, if I had just purchased one of these expensive 110 GHz analyzers, I’d be tempted to quickly stencil a warning around the 1 mm connector in fluorescent green: maybe “Are you sure?” or “Are you authorized to use this port?” You can never be too careful in the land of the very small.


It’s RF interference again…

Posted by benz Jul 10, 2017

  The case of the troublesome garage door opener

 

Note from Ben: This is the first in a series of guest posts from Jennifer Stark of Keysight. As discussed here earlier, our increasingly crowded RF environment will result in more interference, and a higher likelihood of it causing problems. To stay ahead of them, you’ll need your creativity, deductive skills, and persistence.

 

Interference is everywhere. And often from an unlikely source.

Let’s take the case of an engineer (we’ll call him Mike) who recently installed a new garage door opener. Frustratingly, the remotes that Mike and his wife carried in their cars intermittently failed to activate the opener.

As a first step, Mike called the support line for the manufacturer of the garage door opener to report the defective product. The installation support person walked Mike through a troubleshooting procedure over the phone. The procedure did not identify any reason that the hardware should be defective. At that point, the installation support person gravely pronounced “You have something called RF interference. That’s your problem, not ours.”

It turns out that Mike is an RF engineer, so he took this as an interesting challenge.

Mike used his N9912A FieldFox handheld RF analyzer in spectrum analyzer mode. He cobbled together a homemade antenna for the input connector and started sniffing around the house for RF interference. He identified the target frequencies by pressing the garage door remote button while looking at the RF spectrum.

Waterfall and spectrogram displays are a way to visually understand the time domain behavior and frequency of occurrence of signals and interference. This display is from a handheld spectrum analyzer with additional software that helps in detecting and visualizing interference.

Waterfall and spectrogram displays are useful for spotting interference and understanding its behavior in the time domain. The N9918A-236 Interference Analyzer and Spectrogram software for FieldFox analyzers adds these displays to spectrum measurements.

Armed with this information about the frequency range of interest, Mike set out looking for any signals that were near the frequency of the garage door opener. A diligent engineer, he went all over the house looking for clues. He looked in the garage. He looked upstairs in the house, above the garage. He looked in corners of the house.

Eventually, he discovered a small but significant signal in the kitchen. It appeared to be coming from the refrigerator. This puzzled Mike, but his engineering discipline compelled him to investigate. Unplugging the refrigerator did not eliminate the signal. Checking at a different time of day, Mike discovered that the interfering signal was absent even when the refrigerator was operating. It was a mystery.

Leaning on the kitchen counter to collect his thoughts, Mike took stock of what he had learned:  intermittent garage door issues, signal coming from the kitchen. Then, Mike had an insight. The garage door only failed when his wife was home, so the issue was related to the comings and goings of his wife.

At this point, Mike noticed his wife’s purse in its normal spot on the counter by the refrigerator. Mike investigated his wife’s purse with his FieldFox. Sure enough, the interfering signal was coming from the purse (not from the refrigerator). Inside the purse was the remote key fob for the car. Mike removed the battery from the key fob and the interfering signal immediately went away.

The solution was simple—replace the troublesome key fob. Now the garage door is working properly, Mike is happy, and Mike’s wife is happy. And, the pesky RF interference is no more.

  Engineers who exemplify creativity, and the ability to explain it

School is out and some are on holiday. It’s a good time to briefly widen this blog’s technology focus a bit with one of my occasional off-topic wanderings. This time we’ll look at impressive achievements of some engineers of yore, and a couple of enlightening explanations of their creations.

These days we combine our electrical skill with processors, software, and myriad actuator types to generate virtually any kind of complex mechanical action—wherever we need to connect electrons with the physical world. It’s easy to forget how sophisticated tasks were accomplished in the past, without computers or stepper motors, and how even advanced techniques such as perceptual coding were implemented with physical mechanisms.

All these elements were brought together for me recently in an impressive YouTube explanation by “engineerguy” Bill Hammack of the University of Illinois. In just four minutes, Bill explains several poorly understood aspects of film projectors that evolved in the century between their invention (c.1894) and their replacement by digital cinema technology (c.1999).

Bill uses slow-motion footage and animated diagrams to do a great job of explaining how a projector keeps the film going smoothly across the sound sensor while intermittently starting and stopping the film between the lamp and lens. This precisely executed start-stop motion, projecting the film image only when it isn’t moving, coaxes our vision system into seeing a series of stills as fluid motion.

Bill shows how the motion is produced using a synchronized cam, shuttle, and wobble plate. As I dug deeper, further research showed that some projectors instead use an equally innovative mechanism called a Geneva drive (or Geneva stop), which was already old when the first crude projectors were created in the late 19th century. Seeing the shape of the Geneva mechanism sent me to my reproduction of the very old book Five Hundred & Seven Mechanical Movements.

Scanned image of Geneva mechanism or Geneva stop from the 1896 book Five Hundred & Seven Mechanical Movements

This composite figure shows two examples of Geneva drives from the mechanisms in Henry T. Brown’s 1896 book Five Hundred & Seven Mechanical Movements. These convert continuous motion to intermittent motion with smooth starts and stops, and have built-in limits or “stops.”

I figured I was nearly alone in my interest in the old book, but that is not the case. Another quick search revealed that these manifold fruits of the Industrial Revolution have been brought into the internet age, with hyperlinks and animation at 507movements.com. The animations are addictive!

The book is a potent antidote to the tendency to forget how clever and imaginative the engineers of the past actually were, though they were often self-taught and worked with limited materials. And, if we take Edison and the Wright Brothers as examples, they were tireless experimenters.

From an 1896 book to the joys of YouTube, there is cleverness in both the engineering and the explaining. If you’re looking for something closer to our RF home, check out Bill’s demonstration of performing Fourier analysis with a mechanical device. You may never think of FFTs in quite the same way again.

  Coherence can make a big difference

Sometimes The Fates of Measurement smile upon you, and sometimes they conspire against you. In many cases, however, it’s hard to tell which—if either—is happening.

More often, I think, they set little traps or tests for us. These are often subtle, but nonetheless important, especially when you’re trying to make the best measurements possible, close to the limits of your test equipment.

In this post I’m focusing on coherence as a factor in ACPR/ACLR measurements. These ratios are a fundamental measurement for RF engineering, quantifying the ability of transmitters to properly share the RF spectrum.

To make the best measurements, it’s essential to understand the analyzer’s contribution to ACP and keep it to a minimum. As we’ve discussed previously, the analyzer itself will contribute power to measurements, and this can be a significant addition when measuring anywhere near its noise or distortion floor.

We might expect this straightforward power contribution to also apply to ACPR measurements, where the signals appear to be a kind of band-limited noise that may slope with frequency. However, it’s important to remember that these are actually distortion measurements, and that the assumption of non-coherence between the signal and the analyzer is no longer a given.

Indeed, an intuitive look at the nonlinearity mechanisms in transmitters and analyzers suggests that coherence of some kind is to be expected. This moves us from power addition (a range of 0 dB to +3 dB) to voltage addition and the larger apparent power differences this can cause.

Diagram of error in ACPR/ACLR measurements vs. analyzer mixer level/attenuation. Diagram specifically calls out the case where DUT and analyzer ACPR are same, and how this can cause large ACPR errors due to coherent signal addition or cancellation.

This diagram shows how analyzer mixer level affects ACPR measurement error when the analyzer and DUT distortion are coherent. The largest errors occur when the ACP of the DUT and analyzer are the same, indicated by the vertical dashed red line.

Interestingly, the widest potential error band occurs not where the analyzer ACP is largest but when it is the same as the DUT ACP. Consequently, adjusting the mixer level to minimize total measured ACP may lead you to a false minimum.
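
To put rough numbers on that error band, here’s a small Python sketch of worst-case coherent (voltage) addition between the DUT’s adjacent-channel power and the analyzer’s contribution. The function and variable names are mine and the delta values are arbitrary; the point is how quickly the error band widens as the two contributions approach the same level.

# Sketch: bounds on measured ACP when analyzer and DUT distortion are coherent.
# delta_db is the analyzer's ACP relative to the DUT's ACP (negative = below).
import numpy as np
def acpr_error_bounds_db(delta_db):
    ratio = 10 ** (delta_db / 20)                   # voltage ratio
    err_add = 20*np.log10(1 + ratio)                # in-phase addition
    err_cancel = 20*np.log10(abs(1 - ratio)) if ratio != 1 else float('-inf')
    return err_cancel, err_add                      # worst-case low, high
for delta in (-20, -10, -3, 0):
    lo, hi = acpr_error_bounds_db(delta)
    print(f'analyzer ACP {delta:+3d} dB vs DUT: error {lo:+7.1f} to {hi:+5.1f} dB')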

There are a number of challenges in optimizing an ACPR measurement:

  • Noise power subtraction cannot be used due to analyzer/DUT coherence.
  • Narrow RBWs are no help because they have an equal effect on the apparent power from the analyzer and DUT.
  • Low mixer levels (or higher input attenuation) minimize analyzer-generated distortion but increase measurement noise floor.
  • High mixer levels improve measurement noise floor but increase analyzer-generated distortion.

While the settings for lowest total measurement error are not exactly the same as for minimum analyzer-generated ACP, they are generally very close. In a future post I’ll discuss the differences, and ways to optimize accuracy, no matter what The Fates have in mind for you.

  A quick, intuitive look at what will make noise figure measurements challenging

Noise is fundamental in much of what RF engineers do, and it drives cost/performance tradeoffs in major ways. If you’ve read this blog much, you’ve probably noticed that noise is a frequent focus, and I’m almost always working to find ways to reduce it. You’ve also noticed that I lean toward an intuitive explanation of RF principles and phenomena whenever possible.

In the most recent post here, Nick Ben discussed four fundamentals of noise figure. It’s a useful complement to my previous look at the measurement and the two main ways to make it.

As engineers, we work to develop a keen sense of when we might be venturing into difficult terrain. This helps us anticipate challenging tasks in measurement and design, and it helps us choose the best equipment for the job. In this post I’ll summarize factors that might make noise figure measurements especially troublesome.

First, the most common challenge in noise figure measurements: ensuring that the noise floor of the measurement setup is low enough to separate it from the noise contributed by the DUT. These days, the most frequently used tool for noise figure measurements is a spectrum or signal analyzer, and many offer performance and features that provide an impressively low noise floor for noise figure measurements.

Measurement examples of noise floor for a broad frequency span on Keysight PXA signal analyzer. Lower traces include effect of internal and external preamplifiers.

Internal (middle trace) and external (bottom trace) preamplifiers can dramatically reduce the noise floor of signal analyzers (scale is 4 dB/div). The measurements are from a Keysight PXA X-Series signal analyzer, which also includes a noise subtraction feature as another tool to reduce effective analyzer noise floor.

My instinct is to separate noise figure measurements into four general cases, resulting from two characteristics of the DUT: high or low noise figure versus high or low gain.

I should note that this is something of an oversimplification, and not useful for devices such as attenuators and mixers. For the sake of brevity in this post I’ll limit my discussion to RF amplifiers, and in a future post deal with other devices and the limits of this approach.

Because analyzer noise floor is a critical factor in the measurements, it’s probably no surprise that you’ll have an easier time measuring devices with a relatively high level of output noise. This includes devices that have a poor noise figure, no matter their gain. Less obviously, it also includes devices with a very good noise figure, as long as their gain is high enough.

The intuitive thing to keep in mind is that large amounts of gain will amplify DUT input noise by the same amount, resulting in output noise power large enough to be well above the analyzer’s noise floor.

Thus, the most difficult measurements involve devices with modest gain, especially when their noise figure is very good (low). The resulting noise power at the DUT output is also low, making it difficult to distinguish the noise power at the DUT output from that of the signal analyzer.
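
A quick back-of-the-envelope calculation makes the same point. With a 290 K source, the noise density at the DUT input is about –174 dBm/Hz, so the output noise density is roughly –174 dBm/Hz plus the DUT’s noise figure and gain. The Python sketch below compares that to an assumed analyzer noise floor; the numbers are illustrative, not specifications.

# Back-of-envelope: how far a DUT's output noise sits above an assumed analyzer floor.
KTB_DBM_HZ = -174.0         # thermal noise density at 290 K, dBm/Hz
analyzer_floor = -165.0     # assumed analyzer noise density with preamp, dBm/Hz
cases = [('high NF, modest gain', 10.0,  5.0),
         ('low NF, high gain   ',  1.0, 30.0),
         ('low NF, modest gain ',  1.0,  8.0)]     # the difficult case
for label, nf_db, gain_db in cases:
    out_noise = KTB_DBM_HZ + nf_db + gain_db       # DUT output noise density
    margin = out_noise - analyzer_floor
    print(f'{label}: output noise is {margin:+5.1f} dB above the analyzer floor')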

In his recent post, Nick also brought up the problems that interference and other (non-noise) signals can cause with noise figure measurements. Shielding your setups and ensuring connection integrity can help, and signal analyzers can identify discrete signals and avoid including them in the noise figure results.

One more complicating factor in noise figure measurements is the impedance mismatches that occur in two places: between the noise source and the DUT, and between the DUT and the analyzer. This problem is generally worse at higher frequencies, making it increasingly relevant in 5G and other millimeter-wave applications. The most thorough way to handle the resulting errors in noise power and gain measurements is to use the “cold source” noise figure method implemented in vector network analyzers.

Noise figure measurements will challenge you in many other ways, but those mentioned above should give noise figure novices a better sense of when it’s most important to be careful with the measurements and cautious in interpreting the results.


The Four Ws of Noise Figure

Posted by benz May 30, 2017

  If you’re going to optimize it, you have to quantify it

 

Note from Ben Zarlingo: This is the second in our series of guest posts by Nick Ben, a Keysight engineer. Here he turns his attention to the fundamentals of noise figure.

 

In the previous edition of The Four Ws, I discussed the fundamentals of spurious emissions. This time I’m discussing the WHAT, WHY, WHEN and WHERE of characterizing your system’s noise figure (NF) so you can process low-level signals and improve your product designs.

What

Noise figure (the decibel form of noise factor) can be defined as the degradation of (or decrease in) the signal-to-noise ratio (SNR) as a signal passes through a system network. In our case the “network” is a spectrum or signal analyzer (SA).

Basically, a low figure means the network adds very little noise (good) and a high noise figure means it adds a lot of noise (bad). The concept fits only those networks that process signals and have at least one input and one output port.

Figure 1, below, provides the fundamental expression for noise figure.

Equation describing noise figure as a power ratio, comparing signal/noise at input and output of two-port device

Figure 1. Noise figure is the ratio of the respective signal-to-noise power ratios at the input and output when the input source temperature is 290 K.

Additionally, noise figure is usually expressed in decibels:

             NF (in dB) = 10 log (F) = SNRin (dB) – SNRout (dB)

Why and When

Noise figure is a key system parameter when handling small signals, and it lets us make comparisons by quantifying the added noise. Knowing the noise figure, we can calculate a system’s sensitivity from its bandwidth.

 It’s important to remember that a system’s noise figure is separate and distinct from its gain. Once noise is added to the signal, subsequent gain stages amplify signal and noise by the same amount and this does not change the SNR.

 Figure 2a, below, shows a signal at an amplifier’s input, with a peak 40 dB above the noise floor; Figure 2b shows the resulting output signal. Gain has boosted the signal and noise levels by 20 dB and added its own noise. As a result, the peak of the output signal is now only 30 dB above the noise floor. Because the degradation in SNR is 10 dB, the amplifier has a 10 dB noise figure.

Relative signal and noise levels compared at the input and output of an amplifier. The noise level increases more than the signal level, due to the noise added by the amplifier.

Figure 2: Examples of a signal at an amplifier’s input (a) and its output (b). Note that the noise level rises more than the signal level due to noise added by the amplifier circuits. This difference in SNR is the amplifier’s noise figure.
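
As a tiny worked example, the Figure 2 numbers reduce to one line of arithmetic: the noise figure is simply the drop in SNR through the amplifier.

# The Figure 2 example: noise figure is the SNR degradation through the device.
snr_in_db = 40.0     # input signal peak is 40 dB above the input noise floor
snr_out_db = 30.0    # output signal peak is only 30 dB above the output noise floor
print(f'Noise figure = {snr_in_db - snr_out_db:.1f} dB')   # prints 10.0 dB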

Where (& How)

The open question: Where are the system noise sources that affect noise figure? Most noise consists of spontaneous fluctuations caused by ordinary phenomena in the electrical equipment, and this noise is generally flat. We perform measurements on this noise to characterize noise figure. These noise sources fit into two main categories: thermal noise and shot noise.

 One more note: It’s important to consider that some of the power measured in a noise figure measurement may be some type of interference rather than noise. Therefore, it’s critical to be alert for and guard against this by performing measurements in shielded rooms to ensure we’re seeing only the spontaneous noise we want to measure.

 Wrapping Up

If you’d like to learn more about characterizing noise figure and improving your product designs, recent application notes titled Three Hints for Better Noise Figure Measurements and Noise and Noise Figure: Improving and Simplifying Measurements include great explanations and how-to techniques as well as pointers to additional resources. You’ll find both in the growing collection on our signal analysis fundamentals page.

 I hope my second installment of The Four Ws of X provided some information you can use. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful, please give it a like and, of course, feel free to share.

  Handling the frequency and bandwidth challenges of 5G, radar, and more

5G technologies and markets are much in the news these days, and for good reasons. The economic potential is large, the opportunities are relatively near-term, and the technological challenges are just the kind of thing RF engineers can get excited about. Whether your focus is on design or test, there is plenty of difficult work ahead in the pursuit of practical ways to fully utilize the potential capacity of the centimeter and millimeter bands.

Unfortunately, much of the analysis and commentary focuses on economic factors and broad-brush coverage of technological challenges. A good overview of a complex subject is essential for resource planning, but it isn’t deep enough for us to see the specific measurement challenges and how we might handle them.

Some measurement experts have a “just right” combination of broad technical knowledge and specific measurement insight to make a contribution here, and I can heartily recommend Keysight’s Pete Cain. He has not only the expertise but also an impressive ability to explain the technical factors and tradeoffs.

Pete recently produced a webcast on millimeter-wave challenges, and it’s a good fit for the needs of the RF/microwave engineer or technical manager who will be dealing with these extreme frequencies and bandwidths. It’s available on-demand now, and I wanted to share a few highlights.

His presentation begins with a discussion of general technology drivers such as the high value of lower-frequency spectrum and the public benefit of shifting existing traffic to higher frequencies to free it up whenever possible. That’s an important issue, and perhaps a matter of future regulation to avoid a tragedy of the commons.

Next, Pete goes on to explain the problem of increased noise that goes along with the wider bandwidths and increased data rates of microwave and millimeter bands. This noise reduces SNR and eventually blunts channel capacity gains, as shown here.

Comparing the maximum spectral efficiency of different channel bandwidths. The S-shaped curve shows how spectral capacity and channel efficiency returns diminish as bandwidths get very wide.

The wide bandwidths available at millimeter-wave frequencies promise dramatic gains in channel capacity. Unfortunately, these bandwidths gather up more noise, and that limits real-world capacity and spectral efficiency.

As shown in the diagram, Pete also discusses spectral efficiency and shows where existing services operate. This is where RF engineers have already turned theory into practical reality, and it maps out the landscape of tradeoffs they’ll optimize as millimeter technologies become widespread.
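
The flattening behind that S-shaped curve is easy to reproduce with the Shannon limit, C = B·log2(1 + S/(N0·B)). In the Python sketch below, the received signal power and the receiver noise figure are assumptions; as bandwidth grows with fixed signal power, the SNR falls and spectral efficiency drops.

# Sketch: Shannon capacity vs bandwidth with fixed signal power.
# Signal power and receiver noise figure are assumptions for illustration.
import numpy as np
sig_dbm = -60.0                      # received signal power (assumed)
n0_dbm_hz = -174.0 + 10.0            # kTB plus an assumed 10 dB noise figure
s = 10 ** (sig_dbm / 10)             # mW
n0 = 10 ** (n0_dbm_hz / 10)          # mW/Hz
for bw in (10e6, 100e6, 400e6, 1e9, 2e9):
    snr = s / (n0 * bw)              # wider bandwidth gathers more noise
    c = bw * np.log2(1 + snr)
    print(f'BW {bw/1e6:6.0f} MHz: SNR {10*np.log10(snr):+6.1f} dB, '
          f'capacity {c/1e6:7.0f} Mbit/s, efficiency {c/bw:4.1f} bit/s/Hz')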

To further inspire the technically inclined, Pete dives deeper into the essentials of high-frequency testing, including the issues of loss and frequency response at microwave and millimeter-wave frequencies. As is often the case, high quality measurements require a combination of hardware, software, and careful measurement technique. In particular, he describes the value of establishing source and analyzer calibration planes right at the DUT, thereby minimizing measurement error.

Diagram shows measurement of a millimeter-wave DUT where the calibration planes of the input and output have been moved to the edges of the DUT itself, for better accuracy.

To optimize accuracy, it’s necessary to move the calibration planes of measurements from the instrument front panels to the DUT signal ports. Software such as the K3101A Signal Optimizer can make this much easier.

Moving the calibration planes to the DUT ports grows more important as frequencies increase. Loss is an issue, of course, but in many cases the thorniest problems are frequency-response effects such as ripple and non-repeatability. Ripple is especially troublesome for very-wideband signals, while repeatability can be compromised by sensitivity to cable movement and routing as well as connector torque and wear.

In the webcast, Pete also compares signal-connection methods, including coax, waveguide, probes, and antennas.

That’s just a quick overview of an enlightening presentation. To see the whole thing, check out the “Millimeter wave Challenges” on-demand webcast—and good luck in the land of very short waves.

  As technologists, we need ways to tell the difference

Over the past few months I’ve been hearing more about a propulsion technology called an EM Drive or EMDrive or, more descriptively, a “resonant cavity thruster.” As a technology that uses high-power microwaves, this sort of thing should be right in line with our expertise. However, that doesn’t seem to be the case because the potential validity of this technique may be more in the domain of physicists—or mystics!

Before powering ahead, let me state my premise: What interests me most is how one might approach claims such as this, especially when a conclusion does not seem clear. In this case, our knowledge of microwave energy and associated phenomena does not seem to be much help, so we’ll have to look to other guides.

First, let’s consider the EM drive. Briefly, it consists of an enclosed conductive cavity in the form of a truncated cone (i.e., a frustum). Microwave energy is fed into the cavity, and some claim a net thrust is produced. It’s only a very small amount of thrust, but it’s claimed to be produced without a reaction mass. This is very much different than established technology such as ion thrusters, which use electric energy to accelerate particles. The diagram below shows the basics.

Diagram of EM drive, showing mechanical configuration, magnetron input to the chamber, and supposed forces that result in a net thrust

This general arrangement of an EM drive mechanism indicates net radiation force and the resulting thrust from the action of microwaves in an enclosed, conductive truncated cone. (image from Wikipedia)

The diagram is clear enough, plus it’s the first time I’ve had the chance to use the word “frustum.” Unfortunately, one thing the diagram and associated explanations seem to lack is a model—quantitative or qualitative, scientific or engineering—that clearly explains how this technology actually works. Some propose the action of “quantum vacuum virtual particles” as an explanation, but that seems pretty hand-wavy to me.

Plenty of arguments, pro and con, are articulated online, and I won’t go into them here. Physicists and experimentalists far smarter than me weigh in, and they are not all in agreement. For example, a paper from experimenters at NASA’s Johnson Space Center has been peer-reviewed and published. Dig in if you’re interested, and make up your own mind.

I’m among those who, after reading about the EM drive, immediately thought “extraordinary claims require extraordinary evidence.” (Carl Sagan made that dictum famous, but I was delighted to learn that it dates back 200 years to our old friend Laplace.) While it may work better as a guideline than a rigid belief, it’s an excellent starting point when drama is high. The evidence in this case is hardly extraordinary, with a measured thrust of only about a micronewton per watt. It’s devilishly hard to reduce experimental uncertainties enough to reliably measure something that small.

I’m also not the first to suspect that this runs afoul of Newton’s third law and the conservation of momentum. A good way to evaluate an astonishing claim is to test it against fundamental principles such as this, and a reaction mass is conspicuously absent. Those who used this fundamentalist approach to question faster-than-light neutrinos were vindicated in good time.

It’s tempting to dismiss the whole thing, but there is still that NASA paper, and the laws of another prominent scientific thinker, Arthur C. Clarke. I’ve previously quoted his third law: “Any sufficiently advanced technology is indistinguishable from magic.” One could certainly claim that this microwave thruster is just technology more advanced than I can understand. Maybe.

Perhaps Clarke’s first law is more relevant, and more sobering: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

I’m neither distinguished nor a scientist, but I am a technologist, and perhaps a bit long in the tooth. I should follow Clarke’s excellent example and maintain a mind that is both open and skeptical.


The Four Ws of Spurious Emissions

Posted by benz Apr 30, 2017

  Refining your design and creating happy customers

 

Note from Ben Zarlingo: I invite you to read this guest post by Nick Ben, a Keysight engineer. He discusses the what, why, when and where of spurious emissions before delving into the importance of identifying them in your device’s transmitting signal and thereby improving your product design.

 

If you’re reading this, you’re probably an engineer. That means you may be looking for ways to improve your present and future designs. If yours includes a transmitter, one key to success is checking for spurious emissions that can interfere with signals outside your device’s designated bandwidth. Characterizing spurious behavior can save money and help you create happy and loyal customers.

But wait, you say: WHAT, WHY, WHEN and WHERE can I save money and create happy customers by measuring spurious emissions? I’m glad you asked. Let’s take a look.

What: A Quick Reminder

Ben Z has covered this: Spurs are unwanted stray frequency content that can appear both outside and within the device under test’s (DUT’s) operating bandwidth. Think of a spur as the oddball signal you weren’t expecting to be emanating from your device—but there it is. Just like you wouldn’t want your family’s phones to overlap with one another in the same band, they shouldn’t interfere with that drone hovering outside your window (made you look). If you refer to the traces below, the left side presents a device’s transmitted signal without a spur and the right side reveals an unwanted spurious signal, usually indicating a design flaw.  

 

Two GSM spectrum measurements are compared.  The one on the right contains a spurious signal.

In these side-by-side views of a 1 MHz span at 935 MHz, the presence of an unwanted spur is visible in the trace on the right. Further investigation should identify the cause.

Why and When

In densely populated regions of the frequency spectrum, excessive spurs and harmonics are more likely to be both troublesome and noticed. To prevent interference with devices operating on nearby bands, we need to measure our own spurious emissions.

Where (& How)

Spur measurements are usually performed post-R&D, during the design validation and manufacturing phases. Using a spectrum or signal analyzer, these measurements are presented on a frequency vs. amplitude plot to reveal any undesirable signals. Spur characterization is done over a frequency span of interest using a narrow resolution bandwidth (RBW) filter and auto-coupled sweep-time rules. The sweep normally begins with the fundamental and reaches all the way up to the tenth harmonic.

While current spur-search methods are good for the design validation phase, they aren’t great because measurements are too slow for pass/fail testing of thousands of devices on the production line. These tests are often based on published standards (perhaps from the FCC) that may be described in terms of a spectrum emission mask (SEM). Fortunately, SEM capability is available in Keysight X-Series signal analyzers.
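
Conceptually, the pass/fail part of such a test is simple: compare each trace point to a limit (an SEM is essentially a frequency-dependent limit line) and flag anything above it. Here’s a hypothetical Python sketch of that idea; the trace, the injected spur, and the flat –60 dBm limit are all made up for illustration.

# Sketch: flag trace points that exceed a limit line. All values are made up;
# a real SEM test uses the mask defined by the applicable standard.
import numpy as np
freqs_mhz = np.linspace(900, 1000, 1001)               # measured span
trace_dbm = -80 + 2*np.random.randn(freqs_mhz.size)    # noise-like trace
trace_dbm[450] = -45                                   # injected spur near 945 MHz
limit_dbm = np.full_like(trace_dbm, -60.0)             # flat limit for simplicity
for idx in np.flatnonzero(trace_dbm > limit_dbm):
    print(f'Spur at {freqs_mhz[idx]:.1f} MHz: {trace_dbm[idx]:.1f} dBm '
          f'(limit {limit_dbm[idx]:.1f} dBm)')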

To tackle the issue of slow sweep times and enable faster testing, today’s signal analyzers use digital technologies, especially DSP, to improve measurement speed and performance (see one of Ben’s earlier posts). Ultimately, you can achieve faster sweep times—as much as 49x faster than older analyzers—when chasing low-level signals at wide bandwidths.

Wrapping Up

If you’d like to learn more, a recent application note titled Accelerating Spurious Emission Measurements using Fast-Sweep Techniques includes detailed explanations, techniques, and resources. You’ll find it in the growing collection on our signal analysis fundamentals page.

I hope my first installment of The Four Ws of X provided some information you can use. Please post any comments—positive, constructive, or otherwise—and let me know what you think. If it was useful, please give it a like and, of course, feel free to share.

  Electrical engineers lead the way

Years ago, a manager of mine seemed to constantly speak in analogies. It was his way of exploring the technical and business (and personal!) issues we faced, and after a while I noticed how often he’d begin a sentence with “It’s like...”

His parallels came in a boundless variety, and he was really creative in tying them to our current topic, whatever it was. He was an excellent manager, with a sense of humor and irony, and his analogies generally improved group discussions.

There were exceptions, of course, and misapplying or overextending qualitative analogies is one way these powerful tools can let us down.

The same is true of quantitative analogies, but many have the vital benefit of being testable in ways that qualitative analogies aren’t. Once validated, they can be used to drive real engineering, especially when we can take advantage of established theory and mathematics in new areas.

The best example I’ve found of an electrical engineering (EE) concept with broad applicability in other areas is impedance. It’s particularly powerful as a quantitative analogy for physical or mechanical phenomena, plus the numerical foundations and sophisticated analytical tools of EE are all available.

Some well-established measurements even use the specific words impedance and its reciprocal—admittance. One example is the tympanogram, which plots admittance versus positive and negative air pressure in the human ear.

Two "tympanograms" are impedance measurements of the human hearing system. These diagrams show admittance (inverse of impedance) to better reveal the response of the system at different pressures, to help diagnose problems.

These tympanograms characterize the impedance of human hearing elements, primarily the eardrum and the middle ear cavity behind it, including the bones that conduct sound. The plot at the left shows a typical maximum at zero pressure, while the uniformly low admittance of the one on the right may indicate a middle ear cavity filled with fluid. (Image from Wikimedia Commons)

Interestingly, those who make immittance* measurements of ears speak of them as describing energy transmission, just like an RF engineer might.

Any discussion of energy transmission naturally leads to impedance matching and transformers of one kind or another. That’s where the analogies become the most interesting to me. Once you see the equivalence outside of EE, you start noticing it everywhere: the transmission in a car, the arm on a catapult, the exponential horn on a midrange speaker. One manufacturer of unconventional room fans has even trademarked the term “air multiplier” to describe the conversion of a small volume of high-speed air to a much larger volume at lower speeds.

All of these things can be quantitatively described with the power of the impedance analogy, leading to effective optimization. It’s typically a matter of maximizing energy transfer, though other tradeoffs are illuminated as well.
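
To show how literal the analogy can be, here’s the simplest quantitative case from the electrical side of the fence: power delivered from a resistive source to a resistive load peaks when the two resistances are equal. The values below are arbitrary; the shape of the result is the point.

# Sketch: maximum power transfer, the electrical core of the matching analogy.
# Delivered power peaks when the load resistance equals the source resistance.
v_source = 1.0      # source voltage, arbitrary units
r_source = 50.0     # source resistance, ohms
for r_load in (5, 25, 50, 100, 500):
    p_load = v_source**2 * r_load / (r_source + r_load)**2
    print(f'R_load = {r_load:4.0f} ohm: delivered power = {p_load:.5f}')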

Maybe my former manager’s affection for analogies rubbed off on me all those years ago. I certainly have a lot of respect for them, and their ability to deliver real engineering insight in so many fields. We EEs can take some degree of pride in leading the way here, even if we’re the only ones who know the whole story.

*A term coined by our old friend H. W. Bode in 1945.

  The difference can be anything from 0 dB to infinity

Most RF engineers are aware that signal measurements are always a combination of the signal in question and any contributions from the measuring device. We usually think in terms of power, and power spectrum has been our most common measurement for decades. Thus, the average power reading at a particular frequency is the result of combining the signal you’re trying to measure plus any extra “stuff” the analyzer chips in.

In many cases we can accept the displayed numbers and safely ignore the contribution of the analyzer because its performance is much better than that of the signal we’re measuring. However, the situation becomes more challenging when the signal in question is so small that it’s comparable to the analyzer’s own contribution. We usually run into this when we’re measuring harmonics, spurious signals, and intermodulation (or its digital-modulation cousin, adjacent channel power ratio, ACPR).

I’ve discussed this situation before, particularly in spurious measurements in and around the analyzer’s noise floor.

Graphic explanation of addition of CW signal and analyzer noise floor. Diagram of apparent and actual signals, actual and displayed signal-to-noise ratio (SNR).

Expanded view of measurement of a CW signal near an analyzer’s noise floor. The analyzer’s own noise affects measurements of both the signal level and signal/noise.

In this case, the power of the signal and the power of the analyzer’s internal noise simply add. For example, when the actual signal power and the analyzer noise floor are the same, the measured result will be high by 3 dB.
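
Here’s a small Python sketch of that power addition, along with the familiar power-subtraction correction that works only when the signal and the analyzer noise really are incoherent. The noise-floor and signal levels are arbitrary examples.

# Sketch: incoherent power addition of a small signal and the analyzer noise
# floor, plus the subtraction that removes the analyzer's contribution.
import numpy as np
noise_floor = -100.0         # analyzer noise floor in the RBW, dBm (assumed)
def displayed_dbm(sig_dbm):
    return 10*np.log10(10**(sig_dbm/10) + 10**(noise_floor/10))
for sig in (-90.0, -97.0, -100.0):
    meas = displayed_dbm(sig)
    corrected = 10*np.log10(10**(meas/10) - 10**(noise_floor/10))
    print(f'true {sig:6.1f} dBm -> displayed {meas:6.1f} dBm '
          f'(error {meas-sig:+4.1f} dB), corrected {corrected:6.1f} dBm')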

However, it’s essential to understand that the added power is in the form of noise, which is not coherent with the signal—or anything else. The general incoherence of noise is a valuable assumption in many areas of measurement and signal processing.

We can get tripped up when we unknowingly violate this assumption. Consider the addition of these CW examples in the time domain, where the problem is easier to visualize:

Graphic explanation of addition of two equal-amplitude CW signals, showing effect of relative phase. In-phase addition produces a total power 6 dB greater than an individual signal, while out-of-phase addition results in zero net power.

The addition of coherent signals can produce a wide range of results, depending on relative amplitude and phase. In this equal-amplitude example, the result can be a signal with twice the voltage and therefore 6 dB more power (top) or a signal with no power at all (bottom).

I’ve previously discussed the log scaling of power spectrum measurements and the occasionally surprising results. The practical implications for coherent signals are illustrated by the two special cases above: two equal signals with either the same or opposite phase.

When the signals have the same phase, they add to produce a signal with twice the voltage and therefore four times (6 dB more) the power. With opposite phase the signals cancel each other, effectively hiding the signal of interest and showing only the measurement noise floor. Actual measurements will fall somewhere between these extremes, depending on the phase relationships.

This coherent addition or subtraction isn’t typically a concern with spurious signals; however, it may arise with harmonic and intermodulation or ACPR measurements in which the analyzer’s own distortion products might be coherent with the signal under test. Some distortion products might add to produce a power measurement that is incorrectly high; others could cancel, causing you to miss a genuine distortion product.

I suppose there are two lessons for RF engineers. One: some measurements may need a little more performance margin to get the accuracy you expect for very small signals. The other: be careful about the assumptions for signal addition and the average power of the result.

In a future post I’ll describe what we learned about how this applies to optimizing ACPR measurements. Until then, you can find more information on distortion measurements at our signal analysis fundamentals page.

  Good news about RBW, noise floor, measurement speed, and finding hidden signals

Some things never change, and others evolve while you’re busy with something else. In recent years, RF measurements provide good examples of both, especially in signal analysis fundamentals such as resolution bandwidth filtering, measurement noise floor, and speed or throughput. It can be a challenge to keep up, so I hope this blog and resources such as the one I’ll mention below will help.

One eternal truth: tradeoffs define your performance envelope. Optimizing those tradeoffs is an important part of your engineering contribution. Fortunately, on the measurement side, that envelope is getting substantially larger, due to the combined effect of improved hardware performance plus signal processing that is getting both faster and more sophisticated.

Perhaps the most common tradeoffs made in signal analysis—often done instinctively and automatically by RF engineers—are the settings for resolution bandwidth and attenuation, affecting noise floor and dynamic range due to analyzer-generated distortion. Looking at the figure below, the endpoints of the lines vary according to analyzer hardware performance, but the slopes and the implications for optimizing measurements are timeless.

Graphic shows how signal analyzer noise floor varies with RBW, along with how second and third order distortion vary with mixer level. Intersection of these lines shows mixer level and resolution bandwidth settings that optimize dynamic range

Resolution-bandwidth settings determine noise floor, while attenuation settings determine mixer level and resulting analyzer-produced distortion. Noise floor and distortion establish the basis for the analyzer’s dynamic range.

This diagram has been around for decades, helping engineers understand how to optimize attenuation and resolution bandwidth. For decades, too, RF engineers have come up against a principal limit of the performance envelope, where the obvious benefit of reducing resolution bandwidth collides with the slower sweep speeds resulting from those smaller bandwidths.

That resolution bandwidth limit has been pushed back substantially, with dramatic improvements in ADCs, DACs, and DSP. Digital RBW filters can be hundreds of times faster than analog ones, opening up the use of much narrower resolution bandwidths than had been practical, and giving RF engineers new choices in optimization. As with preamps, the improvements in noise floor or signal-to-noise ratio can be exchanged for benefits ranging from faster throughput, to better margins, to the ability to use less-expensive test equipment.
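
For the curious, the lines in that classic chart are straightforward to compute. In the sketch below, the noise floor scales as 10·log10(RBW), third-order distortion (in dBc) changes 2 dB for every 1 dB of mixer level, and the crossover gives the optimum mixer level. The DANL and third-order-intercept numbers are assumptions, not the specifications of any particular analyzer.

# Sketch of the classic dynamic-range optimization. DANL and TOI are assumed
# values for illustration; real numbers come from an analyzer's data sheet.
import numpy as np
danl_1hz = -150.0    # displayed average noise level in a 1 Hz RBW, dBm (assumed)
toi = 20.0           # analyzer third-order intercept, dBm (assumed)
for rbw in (1, 10, 100, 1e3, 10e3):
    noise_floor = danl_1hz + 10*np.log10(rbw)    # noise floor in this RBW
    ml_opt = (noise_floor + 2*toi) / 3           # where noise dBc = distortion dBc
    print(f'RBW {rbw:7.0f} Hz: optimum mixer level {ml_opt:6.1f} dBm, '
          f'third-order dynamic range {ml_opt - noise_floor:5.1f} dB')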

Improvements in DSP and signal converters have also enabled new types of analysis such as digital demodulation, signal capture and playback, and real-time spectrum analysis. These capabilities are essential to the design, optimization and troubleshooting of new wireless and radar systems.

If you’d like to know more, and take advantage of some of these envelope-expanding capabilities, check out the new application note Signal Analysis Measurement Fundamentals. It provides a deeper dive into techniques and resources, and you’ll find it in the growing collection at our signal analysis fundamentals page.