
  Understand, anticipate, and respect your power limitations

Are IoT and other smart/connected devices the biggest wireless opportunity right now, or the biggest source of hype? I suppose they can be both at the same time, and it’s clear that lots of devices will be designed and sold before we know the true magnitude of this wireless segment.

A crucial element of many wireless devices is operation on battery power. In recent years, this has often meant lithium-ion batteries that are recharged once every day or two. Now, however, much design effort is shifting to devices that use primary batteries, ranging from traditional alkaline cells to button and lithium coin cells. Despite their small size, these power sources are expected to last months, if not a year or more.

Meeting these power demands will require careful engineering—both RF power and DC power—and a holistic approach, to give you confidence that you’ll get the needed combination of performance and real-world functionality. This is a field with lots of investment and competition, meaning you may not have a second chance to fix a design failure or a development delay.

Exceptional power efficiency doesn’t happen by accident, and it isn’t a result of some tuning or tweaking at the end of the design process. Instead, it starts when devices are being designed, and overall success stems from a sustained process of measuring and optimizing. Two aspects of test & measurement are worth special note in designing for very low power: 

  • Using a power source with realistic limitations
  • Precisely measuring power consumption in all modes of operation, and during transitions

When powering a device or circuit, using a benchtop power supply can actually hide problems from you. Primary cells, especially when they’re very gradually going flat, can be highly imperfect power sources, and their imperfections can change with aging and temperature. Some precision power sources are now available to emulate real-world cells.

 

Diagram shows voltage-current emulation capabilities of Keysight B2961A/62A low-noise power sources

Keysight’s B2961A/62A low-noise power sources can emulate the DC voltage/current output characteristics of many different power sources, providing insight into real-world behavior in limited power conditions.

These advanced power sources can give you early warning of DC power problems while there’s still time and flexibility to design around them. They can also emulate power sources such as solar cells, with their very non-battery characteristics.

As always, if you’re going to optimize something, you have to measure it. On the power measurement side, extended battery life may require the ability to measure small currents, and perhaps a form of power scheduling to avoid excessive demand from simultaneous digital and RF activities. Whether you’re using a real battery or an emulator, instruments such as a DC power analyzer can tell you how much power is being used, and just when it’s needed.

Short-term and long-term current measurements from the Keysight N6705C DC power analyzer

On the Keysight N6705C, the dynamics of current consumption are shown over 30 ms in scope view (left) and over 30 seconds in data logger view (right). Measurements such as these provide a more complete understanding of the real-world power demands of a device or subsystem.

The use of periodic quiescent states is one proven technique for extended battery operation, and it presents its own measurement challenges. Extremely tiny currents must be assessed to understand cumulative consumption, and recent products, such as the Keysight CX3300A device current waveform analyzer, are meant for just that. These analyzers have both analog and digital inputs, and the ability to time-align measurements of both.
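
To get a feel for why those tiny quiescent currents dominate cumulative consumption, here's a minimal back-of-the-envelope sketch in Python. The duty-cycle profile, current levels, and cell capacity are made-up assumptions for illustration, not measured values.

```python
# Rough battery-life estimate for a duty-cycled IoT device.
# Every number below is an illustrative assumption, not a measurement.

battery_capacity_mah = 225.0          # nominal capacity of a coin-cell-class battery

# Operating states: (name, current in mA, time per cycle in seconds)
profile = [
    ("sleep",    0.002, 9.80),        # quiescent state between events
    ("wake/mcu", 2.000, 0.15),        # digital activity
    ("rf burst", 15.00, 0.05),        # radio transmission
]

cycle_s = sum(t for _, _, t in profile)
charge_mas = sum(i * t for _, i, t in profile)      # milliamp-seconds per cycle
avg_current_ma = charge_mas / cycle_s

life_hours = battery_capacity_mah / avg_current_ma
print(f"Average current: {avg_current_ma * 1000:.0f} uA")
print(f"Estimated battery life: {life_hours / 24:.0f} days")
```

Even this crude arithmetic shows that battery life is usually set by the sleep current and the duty cycle rather than by the headline transmit current. Real coin cells also deliver less than their rated capacity under pulsed loads and temperature extremes, which is another reason the realistic source emulation described above matters.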

In this post I’ve drifted away from my usual focus on RF measurements but, of course, the core concern for us in these DC power issues is to ensure that RF matters are proceeding as they should, no matter the state of DC power. Fortunately, there are ways to use the new power analyzers to trigger RF signal analyzers and thus correlate DC power with RF power and modulation, and that’s a subject for a future post.

  Are you a good spectral citizen?

 

Note from Ben Zarlingo: This is the third in our series of guest posts by Nick Ben, a Keysight engineer. In this post he provides an overview of adjacent channel power, a measure of how well your products play with others.

 

In the previous edition of The Four Ws, I discussed the fundamentals of noise figure. This time I’m discussing the WHAT, WHY, WHEN and WHERE of adjacent channel power (ACP) measurements so that you can ensure your device is only transmitting within its assigned channel and doesn’t interfere with signals in undesignated adjacent channels.

 

What

A key requirement for every wireless transmitting device is that it should transmit only within its assigned channel. To verify this, adjacent channel power (ACP) measurements determine the average power a transmitting device generates in the adjacent channels, compared to the average power in its assigned channel. This ratio is known as the adjacent channel power ratio (ACPR). It is expressed relative to the power in the assigned channel, and the desire is to have it as low as possible. A poor ACPR is an indication of spectral spreading or switching transients from the device under test (DUT), which are a big no-no.

General diagram of adjacent channel power measurements, including main transmitter channel and two adjacent channels. Hand-drawn diagram

A look at the power a transmitting device generates in its adjacent channels, which are located just above and below its designated channel. The ratio of adjacent-channel power to in-channel power gives you your ACPR.

 

Why and When

ACP is key to ensuring your device avoids interfering with signals in adjacent channels where it has not been licensed to transmit (by a regulatory body or agency such as the Federal Communications Commission).

 

The measurement is typically made on digital traffic channels. It is especially important to make the ACP measurement when requirements are more stringent than regulatory licensing alone. For example, Bluetooth, LTE, and W-CDMA include ACP as part of their physical layer requirements.

 

Where (& How)

When using a spectrum analyzer, the results of an ACP measurement are displayed as a bar graph or as spectrum data (or a combination), with data at specified offsets. They can also be displayed as a table that includes the absolute power, in dBm, of your transmitting device's channel and the adjacent channels, along with the power relative to the carrier, in dBc, for both the upper and lower offsets.

 

As described earlier, to determine ACPR you have to integrate the power in your assigned channel and the power in the adjacent channels, and find the ratios between the integrals.  Keysight’s ACP PowerSuite measurement simplifies this process to give you fast results without manual calculations. All you do is set the channel frequency, bandwidth, and channel offsets for your signal’s specifications. The ACP PowerSuite measurement on X-Series signal analyzers takes care of everything else.
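
If you're curious about what's happening under the hood, the arithmetic is straightforward: integrate the power spectral density across each channel and take the ratios. Here's a minimal Python sketch; the channel plan and the synthetic spectrum are placeholder assumptions, not a real signal or a real instrument algorithm.

```python
import numpy as np

def band_power_dbm(freqs_hz, psd_mw_per_hz, f_center, bw):
    """Integrate a PSD (mW/Hz) across one channel and return the power in dBm."""
    df = freqs_hz[1] - freqs_hz[0]
    mask = np.abs(freqs_hz - f_center) <= bw / 2
    return 10 * np.log10(np.sum(psd_mw_per_hz[mask]) * df)

# Placeholder channel plan: 1 MHz channel at baseband, offsets at +/-1.5 MHz
freqs = np.linspace(-5e6, 5e6, 20001)
psd = 1e-9 * np.exp(-(freqs / 6e5) ** 4) + 1e-15   # synthetic spectrum plus a noise floor

main  = band_power_dbm(freqs, psd, 0.0,    1e6)
upper = band_power_dbm(freqs, psd, +1.5e6, 1e6)
lower = band_power_dbm(freqs, psd, -1.5e6, 1e6)

print(f"Channel power: {main:.2f} dBm")
print(f"ACPR (upper):  {upper - main:.2f} dBc")
print(f"ACPR (lower):  {lower - main:.2f} dBc")
```

In an analyzer, the same idea is applied to measured spectrum data, with the channel shapes, offsets, and averaging defined by the relevant standard.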

Adjacent channel power ratio measurement screen and partial front panel of Keysight signal analyzer.

Keysight Technologies EXA signal analyzer displaying a transmitter output using the ACP measurement, one of nine power measurements in the X-Series PowerSuite. Ideally, the signal should not spill outside the transmitted channel (purple) into the adjacent channels (green). Channel power ratios are shown in the table below the spectrum/channel bar display.

 

Wrapping Up

If you’d like to learn more about fundamental measurements and spectrum analysis concepts that help ensure your transmitting device is behaving as you’d expect, refer to the application note Spectrum Analysis Basics. It collects the fundamental knowledge needed for your continued development of a great product. I hope my third installment of The Four Ws of X has provided some worthwhile information that you can use. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful, give it a like and, of course, feel free to share.

  Spectrum crowding is bad, but I think interference is worse

He’s guilty of some degree of hyperbole, but Lou Frenzel highlights a fundamental issue in his recent column Spectrum Apocalypse: The Coming Death of Wireless. We’re all aware of the limited nature of our shared spectrum resource, and Lou extrapolates to a future in which crowding renders wireless practically unworkable. Fortunately, there are many ways for RF engineers to stave off that dismal future, and Lou summarizes them: modulation schemes and other RF techniques, protocol enhancements, regulatory steps, and even RF alternatives such as optical.

However, in terms of the day-to-day issues that vex us, I’d argue that interference is more of a headache, and it’s getting worse. As Lou did, I’d like to take the opportunity to summarize the ways to deal with the problem.

In the early days of the vector signal analyzer, we delighted in our newfound ability to completely understand and measure transient signals, especially those that had interference potential because they ventured outside their channel. The VSA’s combination of signal capture+playback, frequency-domain triggering (with pre-trigger delay), and simultaneous multi-domain analysis let us see whatever we wanted, on any time scale. Colorful spectrogram displays made it all clear, as with this marine handheld radio.

Spectrogram (composite spectra over time) of transmitter turn-on event, produced from gap-free capture of RF signal

A spectrogram display details the behavior of a handheld radio as the transmit button is pressed. The carrier and its sidebands gradually stabilize (top to bottom) over several hundred milliseconds.

The spectrogram above was generated by playing back a gap-free RF capture, and we used the little radio for demonstrations, marveling at how it wandered around in a leisurely but purposeful fashion on its way to its programmed frequency. Unfortunately for other users of the band, this wandering crossed a dozen other channels, creating clicks or pops and sometimes PLL unlocks in other receivers every time the transmit button was pressed.

Fortunately, this interference was an annoyance and not a serious problem, due to the nature of the signal traffic and the low geographic density of transmitters. As far as I can tell, this sort of thing, when it was understood at all, was often just tolerated back then.

These days, the spectrum is infinitely more crowded, and many signals are bursted or otherwise time-varying, so transient interference is a much bigger problem. As a wireless engineer—whether you’re a potential interferer or interferee—you need to reliably detect and accurately measure elusive signals.

As Lou did, I’d like to summarize the techniques, and suggest a sequence:

Assess what you know about the possible interference: If you know its amplitude or frequency or timing, you can proceed to signal capture through VSA software, perhaps with a frequency-specific trigger, negative trigger delay, or both, to catch the beginning of a transient.

Apply real-time spectrum analysis (RTSA): If you suspect interference but know little or nothing about it (or want to ensure you find it if it exists), it’s time for RTSA. This is a scalar-only spectrum measurement and does not provide timing, but it will ensure that you spot signals even if you know nothing about them. If timing specifics are important, consider the time-qualified trigger. (A simplified sketch of frequency-mask triggering follows these steps.)

Use playback or post-processing: Once the signal in question is in the capture memory, use playback or post-processing in the VSA. Deep capture is available with graphical tools to select only the portion you want for analysis, all relative to timing established by IF magnitude, frequency mask, or time qualification.

Explore the signal during playback: You can easily change the center frequency and span to focus on the frequencies of interest. You can change parameters freely, without the need to repeat the capture, as long as the signal was somewhere in the original capture bandwidth. Repeat as necessary.

Change the analysis and display types: This is the first step to fully understanding the interference. You may want to use time and frequency markers, band-power calculations, or even demodulation to identify the signal and its critical characteristics. Spectrograms and density or persistence displays may reveal important signal behavior—and its relationship to desired signals—at a glance.
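
To make the frequency-mask idea in the steps above more concrete, here's a simplified Python sketch of a mask trigger: each incoming spectrum frame is compared against a user-defined mask, and a capture starts (including pre-trigger data already in memory) when any point crosses it. The frame length, mask level, and synthetic burst are assumptions for illustration; real RTSA implementations are far more sophisticated.

```python
import numpy as np

def spectrum_frame(iq_block, window):
    """One spectrum frame in dB from a block of IQ samples."""
    spec = np.fft.fftshift(np.fft.fft(iq_block * window))
    return 20 * np.log10(np.abs(spec) / len(iq_block) + 1e-20)

def mask_trigger(iq_stream, frame_len, mask_db):
    """Return the index of the first frame whose spectrum crosses the mask."""
    window = np.hanning(frame_len)
    n_frames = len(iq_stream) // frame_len
    for k in range(n_frames):
        frame = iq_stream[k * frame_len:(k + 1) * frame_len]
        if np.any(spectrum_frame(frame, window) > mask_db):
            return k      # a capture, including pre-trigger memory, would start here
    return None

# Placeholder demo: complex noise with a brief CW burst partway through the stream
rng = np.random.default_rng(0)
frame_len, fs = 1024, 1.0e6
n_total = 200 * frame_len
iq = 1e-3 * (rng.standard_normal(n_total) + 1j * rng.standard_normal(n_total))
t_burst = np.arange(5 * frame_len) / fs
iq[120 * frame_len:125 * frame_len] += 0.1 * np.exp(2j * np.pi * 1.0e5 * t_burst)

mask = np.full(frame_len, -50.0)   # a flat mask at -50 dB, for simplicity
print("Mask first crossed at frame:", mask_trigger(iq, frame_len, mask))
```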

All these steps can be performed with signal analyzers that have the processing power and vector architecture to do real-time and vector signal analysis.

No matter what tools and steps are involved, the goal is to know the signal. After all, if you can understand the problem and fix it, you can keep the wireless world running until that spectrum apocalypse hits.

  While some are timeless, others evolve

This post will be brief, partly a consequence of the wildfires that have affected the Santa Rosa headquarters of Keysight and modified all our schedules, at least a little. Without getting too metaphysical, the fires and their aftermath are a powerful reminder that things are always changing.

This is certainly true in technology and test equipment. The need for better measurements never abates, and I’d like to say a thing or two about our cooperative efforts to keep pushing forward in applications (yours) and measurements (ours).

I’ve been reminded of the changing nature of measurement fundamentals in the context of my previous post on the innovations of the vector signal analyzer and Keysight’s RF test engineering webcast series on measurement fundamentals.

While some things are timeless—such as resolution bandwidth and power addition—others begin as advanced concepts and gradually become mainstream and even fundamental. Examples include ACPR/ACLR and error vector magnitude (EVM). Many of us can remember the first time someone explained channel power integration or vector error quantities to us, and yet eventually these measurement concepts are taken for granted in the context of more complex figures of merit. How about cumulative ACLR for non-contiguous aggregated carriers?

A similar phenomenon is evident in the core signal analyzer architecture described in the previous post. Vector signal analyzers began as a new type of analyzer, with a different architecture to preserve calibrated vector signal information. Eventually, this advanced architecture became common in lab-grade signal analyzers, and vector signal analysis transitioned to be an embedded software application for the analyzer platform.

The old/new duality is evident in the RF test webcast series mentioned above. The next installment deals with fundamentals.

Portion of Web page describing signal analysis fundamentals webcast, to be presented by Dave Engelder

Dave Engelder’s webcast will cover fundamentals, informed by his considerable involvement in new-generation analyzer architectures.

It’s a little ironic that Dave, the presenter for the fundamentals webcast, has spent a great deal of his time in recent years on product definition and planning for the newest generation of signal analyzers and their advanced capabilities.

Some fundamentals are timeless and unchanging, and others eventually give way to superior techniques and technologies. I suspect that dealing with an evolving mix of the timeless and the temporary is our fate as long as we work in RF.

  A new RF tool leveraged two evolving technologies

These days, vector signal analyzers (VSAs) are in broad use, especially in wireless, aerospace, and defense applications. They’re essential for designing the complex modulated and time-varying signals that have become ubiquitous.

However, VSAs haven’t been around nearly as long as spectrum or network analyzers, and I can remember the process—including an informal contest—that yielded the name “vector signal analyzer.” This month marks the 25th anniversary of the first VSA, and I’d like to take a brief look back.

Lots of technical forces were coming together as the 1980s came to a close, both in terms of signals and the equipment to test them. Mobile phones were entering a period of what would become explosive growth, and the transition from analog (1G) to digital modulation (2G) was the way to handle the increase in traffic.

In test equipment, signal processing and digitizers were improving rapidly, and some low-frequency signal analyzers had switched from analog filters to digital ones in their intermediate frequency (IF) stages. Technology and demand were both in place.

These forces converged in a single test and measurement division with a deep background in both swept and FFT-based products. HP’s Lake Stevens Instrument Division had already produced the first low-frequency swept network and spectrum analyzers (e.g., up to 200 MHz) with digital IF sections. That put the division in a unique position to combine the classic superheterodyne architecture with high-performance ADCs and DSP.

Resolution bandwidth filters could be all digital, with better speed, accuracy, and selectivity. Virtually any resolution bandwidth could be produced, from sub-hertz to several megahertz. Perhaps most significant, the entire signal chain could preserve signal phase and therefore vector content.

Processing a signal’s complete vector information was important for obvious and less-obvious reasons. Vector processing allows for accurate, selective analog demodulation, fully separating amplitude from phase or frequency modulation. It also provides complete pulse analysis, and the potential for digital demodulation.

A key decision in this area, driven by a need for accurate pulse analysis, was to perform continuous time-domain vector calibration across the analyzer signal chain. This improvement on frequency-domain calibration was later to be essential to precise digital demodulation of all kinds in the VSA.

Over a span of several years, all of this evolving and improving technology was subjected to extensive discussions and trials with potential customers. Their feedback was crucial to the definition and implementation of the first VSA and, in many ways, they taught us what a VSA should really be. Many changes and refinements were made along the way, and in October 1992 we introduced the first RF vector signal analyzer, the HP 89440A.

89440A RF vector signal analyzer catalog photo. Part of family including 89410A and eventually 89600 vector signal analyzer.

This image from the HP test and measurement catalog shows the first RF vector signal analyzer, the 89440A. The bottom section contains the RF receiver and companion source. The two-channel top section was also available as the (baseband) 89410A VSA.

Vector Signal Analyzer—I suppose the name seems obvious in retrospect, but it wasn’t so clear at the time. We were aware that it was a new type of analyzer, one we expected would be an enduring category, and one we wanted to get right. I can’t recall the other candidate names, but remember voting for VSA. After all, it was a signal analyzer—not just a spectrum analyzer—that provided vector results.

Wireless and aerospace/defense test engineers quickly grasped the possibilities. Shortly after introduction, we took the analyzer to its first trade show, and engineers were lining up for demos. The signal views and insights provided by the frequency+time+modulation combination were compelling, and we were able to show waterfall and spectrogram displays, along with complete signal capture and playback.

Within the year we added digital demodulation, and the wireless revolution picked up steam. VSAs helped enable the new transmission schemes, from CDMA to high-order QAM, multi-carrier signals, OFDM, and MIMO. Software enhancements allowed the VSA to track the emerging technologies and standards, giving engineers a reliable test solution early in the design process.

Though the name and measurements would continue, the VSA as a separate analyzer type gradually yielded to newer “signal analyzers” with digital vector processing. These analyzers started with swept scalar spectrum analysis, and VSA capability became an option to the base hardware.

Now available both as standalone software and embedded in signal analyzers, Keysight’s 89600 VSA software continues the tradition of supporting the leading edge of wireless technology. The latest example: a new VSA software release supports pre-5G modulation analysis, and will evolve along with the standard.

It’s been a busy quarter century for all of us, and I expect VSAs will be just as useful for the next one.

  Even great minds fail sometimes, but we can revise our mental models

Though this blog is created by a company that makes hardware and software, the core of our work—and yours—is problem solving and optimization. In the past, I’ve said that success in this work demands creativity, judgement, and intuition. I’ve also written (several times) about my fascination with failures of intuition and ways we might understand and correct such failures.

One way to look at intuition in engineering is to see it as the result of mental models of a phenomenon or situation. Failures, then, can be corrected by finding the errors in these mental models.

We learn from those who go before us, and a great example of a failure (and its correction) by a famous engineer/scientist prompted me to correct one of my own mental models. It was an error many of us share, and though I realized it was wrong a long time ago, it’s only now that I can clearly explain the defect.

The trailblazer is the famous rocket pioneer Robert Goddard. He was also a pioneer in our own field, more than a century ago inventing and patenting one of the first practical vacuum tubes for amplifying signals. If he hadn’t focused most of his energy on rockets, Goddard would probably be as famous in electronics as his contemporary Lee de Forest (but that’s a story for another day).

Goddard knew that stability would be a problem for his rockets, and he had no directional control system for them. To compensate, he used a cumbersome design that placed the combustion chamber and exhaust of his first liquid-fuel rocket at the top, with the heavy fuel and oxidizer tanks at the bottom. Having a rocket engine pointed at its own fuel is clearly inviting problems, but Goddard thought it was worth it.

He was mistaken, falling for what would later be described as the pendulum rocket fallacy. He was a brilliant engineer and experimenter, however, and corrected this erroneous mental model in rockets built just a couple of months later.

My own error in this area involved helicopters and their stability. Decades ago, I was surprised to learn that they are inherently very unstable. A friend—an engineer who flew model ’copters—gave a simplified explanation: The main mass of the helicopter may hang from the rotor disk, but when that disk tilts, its thrust vector tilts in the same direction. The lateral component of that vector causes more tilt and, unfortunately, also reduces the lift component. The process quickly runs away, and the helicopter seems to want to dive to the ground.

It’s similar to an inverted pendulum, and just the opposite of what my intuition would have predicted. It explains why pilots of non-stabilized helicopters describe the experience as balancing on the top of a pole: they must sense small accelerations and correct before motions build up.

While the explanation corrected and improved my mental model, my intuition was woefully unable to handle the claim that the aircraft below was relatively stable and easy to fly.

Hiller VZ-1 Pawnee flying platform in flight. The platform is similar to a helicopter, but rotors are in the form of a ducted fan at the bottom of the craft. Stability is generally better than a helicopter

The Hiller “Pawnee” flying platform of the 1950s used ducted fans beneath the engine and payload/pilot. Despite its appearance, it is easier to fly than a helicopter. (public domain image via Wikipedia)

This aircraft certainly does look like an inverted pendulum, though that perception is actually another fallacious mental model. 

One explanation comes from the powerful engineering analysis technique of imagining small increments of time and motion. If the flying platform tilts, the thrust vector of the fans tilts in the same direction, just as with the helicopter. However, in this instance the thrust is applied below the center of mass rather than above. The tendency to cause a tilt is applied in the opposite direction and is therefore not self-reinforcing.

I don’t believe the flying platform arrangement is inherently stable, but it is much less unstable than a helicopter. I once flew a flying platform simulator in a museum, and it was indeed straightforward to control.

So, let us acknowledge the power of our engineering intuition and salute our mental models as immensely useful guides. But let’s also remain vigilant in remembering that even the best engineers sometimes get things wrong, and it’s essential to be willing to make corrections.

  Avoiding extra functionality in your circuits

A common saying among electrical engineers is that if you want an oscillator, try building an amplifier. This is all too often accurate, and unwanted oscillations in amplifiers can be a problem for designs at almost any frequency, from very low to high. Of course, if you design an oscillator it probably won’t, at least not at first, but that’s a topic for another day.

Although not part of the saying, it’s important to realize that if you do get an unintentional oscillator, it’s likely to be a terrible one, both unstable and unpredictable. That has consequences for RF and baseband measurements, and we’ll get to that in a bit.

My first taste of these parasitic oscillations outside of a college lab was working with a manufacturing engineer to troubleshoot a 13 MHz function generator. Some samples of the generators were found to be oscillating at frequencies over 80 MHz. The oscillations were found almost by accident, because none of us expected significant output at over six times the generator’s highest frequency!

In today’s wireless environment, unintended oscillations can be a potent source of interference. They’re also more consequential because the spectrum is so crowded, and vital or high-profile services are more likely to be affected. And, as with that function generator, undesirable output signals may be overlooked for a while because they are so far outside of the normal operating frequency range.

Years after the function generator adventure, I heard from a Keysight (then Agilent) systems engineer about a much more serious case of parasitic oscillations. Engineers in the Moss Landing area on Monterey Bay had been using an HP/Agilent signal analyzer, a handheld radio, and a directional antenna to track down signals that were intermittently but persistently jamming GPS signals. The jamming extended well outside the harbor entrance, so it was a clear hazard to navigation.

The interference was maddeningly inconsistent and the directional antenna was of limited help due to strong reflections. As described in an article by the investigators, they eventually tracked the problem to parasitic oscillations in an active TV antenna on a boat at the marina.

Surprisingly, eliminating the first interferer did not completely fix the problem, and instead revealed two more accidental jammers. At least two of the three used the same amplifier board, where a design change had provoked the oscillations.

The instability that made these jammers so hard to find provides some lessons for RF engineers. The principal one is that the oscillations may not be there when you happen to be looking for them, so to ensure their absence you’ll need to explore a wider-than-usual range of frequencies and operating conditions.

Though self-exciting, the oscillators at Moss Landing were sensitive to a variety of factors, some predictable and others capricious: temperature; power supply voltage and its fluctuations; fluorescent lights; antenna configuration; building wiring; and nearby metal objects. Even the motion of the hand of a researcher 10 feet away could alter the frequency by several megahertz. Tellingly, the handheld radio could demodulate the fluctuating output to reveal the distinctive sound of a bilge pump!

It’s clearly essential to test your designs with real-world variations, and these days you have the added challenge that desirable signals and interference are both time-varying. Fortunately, you can take advantage of the signal processing and display capabilities available in signal analyzers to catch virtually everything. For example, spectrograms and real-time spectrum processing can digest and display vast amounts of data, highlighting even the briefest or most agile signals.

Simulated narrowband interference in a group of satellite channels. Real time spectrum analysis (density) display and spectrogram display, showing spectrum vs time

Intermittent, low duty cycle interference in satellite channels is clearly revealed in a real-time spectrum display (top), and the time-varying nature of the interference is shown by the spectrogram display (bottom).

Signal analyzers such as Keysight’s X-Series can also be equipped with VSA software for in-depth analysis and demodulation, and gap-free signal recordings can be made to allow flexible post-processing of transient events.

Of course, amplifiers aren’t the only source of spurious and troublesome oscillations. I once built a FET speed control for a remote-controlled car, and its interference disabled the car’s own radio. The switching frequency was only 1 kHz, but during each brief transition, the FETs oscillated wildly, with powerful harmonics reaching nearly 1 GHz. If only my intent had been to produce an unstable, high-power comb generator!

  A pre-filter to manage an excess of information

In the first two decades of the modern spectrum analyzer—say from the 1960s to the 1980s—it was arguably possible for an RF engineer to know almost everything about making spectrum measurements. One reason: the signals and the analyzers were relatively simple.

Signals were generally assumed to be CW or pulsed CW. Noise and signal-to-noise measurements were straightforward adjustments from spectrum results. Pulsed signals were similarly measured by interpreting conventional spectrum results.

RF spectrum analyzers used a superheterodyne architecture with fundamental mixing, and microwave analyzers used harmonic mixing to cover the higher bands. A semiautomatic technique was adequate to identify images or other false responses, if preselection was not available.

Things have changed immensely in the past 25 or 30 years: the transition from analog to digital modulation; the development of advanced radar and EW systems; and the overcrowding of the airwaves we all share. In lockstep fashion, the corresponding measurement standards have grown in size and complexity.

All these advances have driven a process of mutual bootstrapping, one that has transformed spectrum analyzers into signal analyzers. Built around a wealth of digital technologies, today’s analyzers offer options for modulation analysis, vector signal analysis, and real-time spectrum analysis.

Thus, software has become a vital part of these signal analyzer solutions, frequently in the form of measurement applications. Some, such as PowerSuite, are broad and general purpose. Others are highly specific, designed to make complex sets of measurements in compliance with a particular standard such as LTE or 802.11ac.

These synergistic tools—hardware and software—are now essential for RF engineering. However, if success depended only on starting an app and pressing the right buttons, there would be no need for clever and dedicated engineers. In the real world, successful design and troubleshooting require myriad measurements and setups, and a deep understanding of the results.

If we can no longer know everything about our signals and tools, how do we ensure that we know the right things? I can make no guarantees, but I can offer a few suggestions to help you stay current while keeping the time and effort reasonable.

Discussion forums and blogs: Those that focus on test equipment, such as the one Keysight hosts, are a way to explore common issues and interact with other users, including experts from the manufacturers. Test and measurement blogs are often a companion to the forums, providing news and commentary in specific areas.

Webcasts, both live and recorded: Because I’m rarely in complete control of my schedule, I especially appreciate recorded webcasts. They’re a source of the specific information I need, just when I need it, even late at night or on a weekend. Search engines and webcast collection pages can help you find the one you need.

Articles on common problems and errors: A surprisingly useful type of article is an expert explanation of the most common measurement errors or problems in a given area. At their best, these articles can be a pre-filter that distills unmanageable amounts of measurement knowledge into actionable advice. Such articles also tend to be relevant across time and evolving technology, as this one from Keysight’s Bob Nelson demonstrates. For example, he explains how the log of an average is not the same as the average of a log, and how display detectors yield different answers from the same measurement data.
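
Bob's point about averaging in the wrong domain is easy to demonstrate numerically. The quick sketch below uses a handful of made-up power samples to show that averaging in dB (the average of the log) gives a lower result than converting the average power to dB (the log of the average).

```python
import numpy as np

# Ten made-up power samples (linear, in mW) from a noisy measurement
p_mw = np.array([1.0, 0.5, 2.0, 0.3, 1.5, 0.8, 2.5, 0.4, 1.2, 0.6])

log_of_avg = 10 * np.log10(np.mean(p_mw))      # average in power, then convert to dBm
avg_of_log = np.mean(10 * np.log10(p_mw))      # convert each sample to dBm, then average

print(f"log of the average: {log_of_avg:+.2f} dBm")
print(f"average of the log: {avg_of_log:+.2f} dBm   (always biased low for noisy signals)")
```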

A single measurement data set is processed by three display detectors to produce three measurement traces, signified by different colors. The detectors are peak, minimum, and "normal."

Three display detectors produce three different measurement traces from the same data set. The “correct” answer depends on the purpose of your measurement.

Serendipity: I can’t count the number of times I’ve learned something important and useful just by chance. The source may have been an offhand comment, an article stumbled upon, a random encounter with another engineer, or the intersection of a search engine and my inherent curiosity. While I can’t rely on these happy accidents, I must confess to feeling slightly uncomfortable with how often they occur.

I realize that for most of you, measurements are a means to an end, enabling your real job: doing the engineering that drives the next waves of ever-advancing technology. It is often an uphill trek that leaves precious little time for simply keeping up. If you have any additional tips for success, please chime in with a comment.

  When gems turn to coal, engineers get creative

Despite their flaws, I have described YIG preselector filters as the gems in microwave signal analyzers. These preselectors solve a problem created when mixers are used to downconvert signals for analysis: The mixers produce multiple outputs from a single input frequency, including the main high-side and low-side ones, and many smaller ones. Unless removed somehow, these outputs can cause false signals to appear in the analyzer display, especially in wide frequency spans.

The preselection process removes false responses using a bandpass preselector filter that tracks the appropriate mixing mode in the analyzer, removing signals before they get measured in the final IF stage. It’s a tried-and-true approach, and a combination of automatic alignment and characterization keeps the preselectors centered while accounting for their insertion loss.

Unfortunately, these little gems rapidly lose their luster as frequencies climb into the millimeter range, becoming increasingly impractical above about 50 GHz. That’s a more serious limitation now, as designers dive into the challenges of the bands at 60 GHz and need accurate measurements of power, spurious, and harmonics. The wide bandwidth and time-varying or noise-like behavior of the signals in question only compound the challenge.

The goal, as always, is accurate and unambiguous results over wide frequency ranges with a single connection. The new N9041B UXA X-Series signal analyzer, 110 GHz, uses a conventional YIG preselector below 50 GHz, but employs two other—software-centric—methods at higher frequencies. Both techniques identify false signals and remove them from measurements, but avoid the insertion loss of the preselector filter. This loss would otherwise directly impact the sensitivity of the analyzer, a critical factor at millimeter frequencies where power is precious.

Wideband spectrum to 100 GHz showing displayed average noise level (DANL) of N9041B UXA signal analyzer. Marker shows DANL better than -147 dBm/Hz at 85 GHz

Maintaining a low noise floor is increasingly difficult—and increasingly important—at millimeter frequencies. This wideband measurement from an N9041B signal analyzer indicates an average noise floor of better than -147 dBm/Hz at 85 GHz.

One straightforward technique involves a combination of image shifting and image suppression. The analyzer makes two sweeps of the same span, with the local oscillator (LO) frequency shifted so that each of the two main mixing modes—high-side and low-side—is used to convert the frequencies in question for measurement. The two measurements are then compared.

Because of the frequency symmetry of the results, real signals appear at the same frequency while false ones shift. In theory, it’s then a simple matter to display the minimum values of each measurement point, removing the false signals and leaving the real ones.
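
Here's a toy Python sketch of that shift-and-compare idea, using made-up sweep data: a real signal appears at the same point in both sweeps, the image responses move, and a per-point minimum removes them. It's only meant to illustrate the principle, not the analyzer's actual processing.

```python
import numpy as np

# Two made-up sweeps of the same span, in dBm, one per mixing mode (high-side, low-side).
# A real signal lands at the same display frequency in both; an image shifts between them.
freq_ghz = np.linspace(60.0, 61.0, 11)
noise = -110.0

sweep_high = np.full_like(freq_ghz, noise)
sweep_low  = np.full_like(freq_ghz, noise)

sweep_high[3] = sweep_low[3] = -40.0     # real signal: present in both sweeps at 60.3 GHz
sweep_high[7] = -55.0                    # image response: appears only in the high-side sweep
sweep_low[5]  = -55.0                    # ...and at a different point in the low-side sweep

display = np.minimum(sweep_high, sweep_low)   # per-point minimum removes the shifted images

for f, p in zip(freq_ghz, display):
    flag = "  <-- real signal" if p > noise else ""
    print(f"{f:6.2f} GHz  {p:7.1f} dBm{flag}")
```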

In practice, however, this shift-and-remove technique has limitations. It usually requires a subsequent measurement with a narrower span to accurately measure signal power. Additionally, the selection of the minimum value for display of each measured point distorts the amplitude values of the noise-like signals that are so common these days.

Fortunately, adding information to the measurement process and applying creative signal processing can counter these limitations, yielding accurate measurements of wideband and dynamic signals and avoiding the need for the user to take additional steps. The added information comes from the measured noise floor and from an alternate trace; together they form a threshold that enables removal of images from the measurement data without causing a negative power bias for noise-like signals.

Of course, the goal of all these operations is to provide accuracy and ease of use over a wide range of frequencies and signal types. Ideally, measurements from 50 to 110 GHz will be as direct and reliable as those at lower frequencies, enabling the engineering of the next generation of communications and imaging solutions.

That’s it for now. Look for more information about software preselection and its application to specific signals and measurements in posts to come.

  Bring your best engineering game, and use all the measurement tools available

The term Internet of Things (IoT) has been around a few years, and sometimes it feels over-hyped. When some folks start musing breathlessly about a near future in which virtually everything will be connected, it feels like they’ve taken the concept a little too far.

Consider cybersecurity problems with Internet-connected devices, from cameras to doorbells to toys. These glitches highlight just one of the ways we aren’t quite ready for universal connectivity. In addition, the questionable utility and uneven functionality of some devices have left many potential users with feelings that range from guardedly cautious to overtly skeptical.

Long before we approach universal connectivity, we will have to contend with another factor that often seems universal: RF interference. The combination of complex radio systems, dense environments, and high user expectations guarantees that interference will be a persistent issue.

The Interference of Things is a newer term that may not be hyped enough. While we don’t need to get overly dramatic about interference problems, much of the growth of wireless applications will depend on solving or avoiding these problems.

One application likely to lead the way is IoT in medical or healthcare settings. A recent blog post by Keysight’s Chris Kelly was my first exposure to the term interference of things, and it’s a good example of the potential seriousness of RF interference. Chris suggests a forward-looking approach, focusing on early debugging and a thoughtful combination of design, simulation, emulation, test and analysis.

He is certainly right about the benefits of anticipating problems, but sometimes you’re plunged into an existing situation like the example he describes: nearly a thousand Wi-Fi devices and expectations that problems will be solved quickly.

As an RF engineer, you’ll draw on your tools, techniques, experience, creativity and insight. While the lab environment and its benchtop equipment provide powerful advantages, the faster path to success may mean going to where the thorny problems are. In her recent post describing an elusive example of RF interference, Jennifer Stark explained how a portable signal analyzer and the reasoning power of a wireless engineer were key. The actual interference offender was a simple device and a simple signal, but it wasn’t going to be found in the lab.

Fortunately, portable signal analyzers are expanding their capabilities and frequency range at a rapid pace. Keysight’s FieldFox, for example, provides measurement and display capabilities that can help you find and troubleshoot RF interference problems away from the lab.

Two example displays from the Keysight FieldFox handheld analyzer, including the channel scanner and real-time spectrum analysis

The automatic channel scanner (left) speeds measurement of spurious and intermodulation products, while optional real-time spectrum analysis (RTSA; right) can uncover short-duration events.

In a crowded RF environment, the dynamics of time-varying signals pose many challenges, and transient interactions can be hard to understand. Perhaps the most powerful analysis tool is the wideband, gap-free signal capture and playback post-processing that is available in VSA software for signal analyzers. Signal captures can be free-run, time-qualified, or triggered by matching an RTSA frequency mask.

With a complete, gap-free signal in memory—including pre-trigger data—you can perform any type of signal analysis in post-processing: adjust span and center frequencies, apply demodulation, and more. For time-dependent interactions, a spectrogram display can be enlightening.

Gap-free spectrogram (spectrum vs time) display from vector signal analyzer (VSA) of 2.4 GHz ISM band, including WLAN, cordless phone and Bluetooth signals

A spectrogram shows how a signal spectrum (each horizontal line) varies with time (vertical axis) and power (color). This gap-free spectrogram with very fine time resolution was generated by post-processing a signal captured in memory.

The analysis and troubleshooting power of this display comes from its ability to represent everything that happened across a range of frequencies over a known time interval. This clear, comprehensive view is powerful information to mix in with your own knowledge of the system, signal and environment.
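
For readers who like to see the mechanics, here's a minimal sketch of how a spectrogram is assembled from a gap-free IQ capture: slice the record into overlapping blocks, window and FFT each one, and stack the spectra versus time. The sample rate, block sizes, and the synthesized hopping-tone capture are placeholder assumptions.

```python
import numpy as np

def spectrogram_db(iq, fft_len=256, hop=128):
    """Stack windowed FFTs of a gap-free IQ capture: rows = time, columns = frequency."""
    win = np.hanning(fft_len)
    n_frames = (len(iq) - fft_len) // hop + 1
    rows = []
    for k in range(n_frames):
        block = iq[k * hop:k * hop + fft_len] * win
        spec = np.fft.fftshift(np.fft.fft(block))
        rows.append(20 * np.log10(np.abs(spec) / fft_len + 1e-20))
    return np.array(rows)                 # shape: (time frames, frequency bins)

# Placeholder capture: noise plus a tone that hops to a new frequency halfway through
fs, n = 1.0e6, 1 << 16
t = np.arange(n) / fs
rng = np.random.default_rng(1)
iq = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
iq[: n // 2] += 0.05 * np.exp(2j * np.pi * 1.0e5 * t[: n // 2])
iq[n // 2:] += 0.05 * np.exp(2j * np.pi * 2.5e5 * t[n // 2:])

sgram = spectrogram_db(iq)
print("spectrogram shape (time frames, frequency bins):", sgram.shape)
```

In VSA software the same captured data can then be re-analyzed with different spans, resolution bandwidths, or demodulation, without repeating the capture.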

 

One note: If you’re interested in the medical environment, or in using it as guide to other demanding situations, check out Brad Jolly’s webcast Smart Testing to Limit Your Risk Exposure in Wireless Medical Devices. He’ll explain what to do when life and health depend on reliable radio links.

  How about “ACA out of detent”?

Some people of my generation viewed the 1960s race to the Moon as an alternative to a military conflict, with the astronauts as the point of the spear. They were the space equivalents of fighter pilots, doing a more civilized kind of combat. Or maybe they were modern-day cowboys, taming the wildest frontier.

I was too young to be thinking explicitly about engineering as a career, but I viewed those on Earth and those flying the ships mainly as engineers. I still do. Consider the first words spoken by Buzz Aldrin and Neil Armstrong after the first lunar touchdown:

   Shutdown. (Armstrong)

   Okay. Engine stop. (Aldrin)

   ACA—out of detent. (Aldrin)

   Out of detent. (Armstrong)

   Mode control—both auto. Descent engine command override—Off. Engine arm—Off. (Aldrin)

   413 is in. (Aldrin)

By contrast, what we almost always hear in news or documentary coverage after “contact light” (when a probe extending from the lander footpads has touched the surface, but the spacecraft has not yet landed) is a pause and something rather more stirring:

   Houston, Tranquility Base here. (Armstrong)

   The Eagle has landed. (Armstrong)

Some people contend that the “real” first words were Neil’s announcement of Tranquility Base, but I disagree. In so many ways, the Apollo program and its predecessors were fundamentally engineering efforts. It was engineering of the first order, performed by thousands of people, that directed construction and testing, which was performed by hundreds of thousands.

The space program encompassed almost all engineering disciplines, especially electrical and computer engineering. Electrical engineers pioneered systems for control, telemetry, navigation, communications, and tracking. Computer engineers made astonishing advances in both miniaturized (for the time) hardware and real-time, fault-tolerant programming that did an extraordinary job of managing priorities.

Those first words reflected an engineering foundation and the mission planning that it drove. The first priority of the astronauts was to execute those plans, to maximize safety and the chances of mission success. When asked what dominated their thinking, astronauts don’t say much about fear or excitement, but rather a focus on avoiding messing up, and using their wits to solve the problems that came up (see also Apollo 13).

Most astronauts were engineers, and later engineering test pilots. Many had advanced degrees, and all had considerable engineering training. For example, Aldrin’s first degree was in mechanical engineering, and he pioneered rendezvous technology that was essential for the Moon missions.

It’s no surprise, then, that Armstrong and Aldrin first uttered technical jargon, supporting the essential aspects of completing the landing and handling contingencies for a possible immediate abort back to lunar orbit. The ACA-detent discussion referred to a way to tell the guidance system that they were stable on the lunar surface, stopping any useless thruster activity. The “413 is in” comment refers to a command telling the computer that their orientation was horizontal on the surface, removing drift error ambiguity that could endanger any return to orbit.

After the triumphant announcement of the landing, Armstrong, Aldrin, and the Mission Control team immediately returned to the technical essentials, with Armstrong radioing, “Okay. We’re going to be busy for a minute.” Mission Control spent an intense 90 seconds going through the Stay—No stay decision process, while corresponding efforts on the lunar surface took even longer. If you’re interested in more detail, an annotated transcript is available.

The technical effort behind the Moon landing inspired a generation of engineers of all kinds, and recently some have virtually revisited the landing site. On the 45th anniversary of the landing, NASA used the cameras of its Lunar Reconnaissance Orbiter to generate a 3-D survey.

Recent overhead picture of the Apollo 11 landing site, showing the descent stage of the lander, scientific equipment left behind, and tracks of the astronauts.

This annotated composite image shows the reexamination of the first manned lunar exploration site by the cameras and stereo digital elevation model from the Lunar Reconnaissance Orbiter.  (Image from NASA)

All this certainly inspired my own efforts in engineering and science, and I’ve found that the unvarnished details are always plenty exciting, interesting and even inspiring. If you’d like to make your own virtual visit, rich online resources are now available anywhere there’s an Internet connection.

  Making other windows seem a little wasteful

A proverb that’s perhaps 2,000 years old describes the mills of the gods as “grinding slowly but exceedingly fine.” I’d like to flatter myself that it applies to my thinking on some matters but, alas, the only relevant part appears to be the slowness. Witness how long it’s taken me to get back to FFT window functions and IF filters for RF and microwave measurements.

In both signal analysis and demodulation, flexibility in windows and related filtering operations is increasingly important. As I described in my earlier post, windows are time-varying amplitude-weighting functions that force the signal samples in a time record to be periodic within each block of sampled data. This removes discontinuity errors at the ends of the records that would foul up the spectrum results. To do this, the weighting coefficients are generally zero at either end, with a value of one in the middle, and a smooth range of increasing and decreasing values on either side.

It’s ironic that window functions actually discard or de-emphasize information (e.g., some data samples) to improve measurements. But of course the vital thing is that they trade away this information for desirable frequency-domain filter characteristics such as flatness or selectivity. For example, here’s a Gaussian window in the time and frequency domains.

Time domain (samples) and frequency domain (bins) parameters of the Gaussian FFT window function

When compared to a uniform weighting of one, the Gaussian window reduces leakage and improves dynamic range by de-emphasizing a large portion of the sampled data. The weighting coefficient (left) is greater than 0.9 for only about 1/5 of the samples. (image from Wikimedia Commons)

I have always been surprised at the amount of sampled data in each time record that windows remove from the spectrum calculation, as they improve dynamic range (reducing leakage or sidelobes) or improve amplitude accuracy (by reducing scalloping error).

As with so many things in engineering, it’s a matter of understanding requirements and cleverly optimizing tradeoffs. Consider the “confined” version of the Gaussian window below.

Time domain (samples) and frequency domain (bins) parameters of a "confined" version of a Gaussian FFT window function

Modest time-domain changes in the confined version of the Gaussian window (left) reduce sidelobes dramatically (right) and improve dynamic range. However, even more signal samples are de-emphasized, with a weighting coefficient greater than 0.9 for only about 1/8 of the samples. (image from Wikimedia Commons)

From the standpoint of dynamic range, at least, it appears that selectively removing information improves spectrum characteristics. Of course, dynamic range is not the only important aspect of spectrum measurements, and another important tradeoff can be illustrated with the Tukey window.

Time domain (samples) and frequency domain (bins) parameters of the Tukey FFT window function

The Tukey window is not impressive in the frequency domain, but is remarkable for how much of the sampled data it retains in the spectrum calculation. Its weighting coefficients are greater than 0.9 for about 5/8 of the signal samples. (image from Wikimedia Commons)

Many important signals in RF measurements are noise-like or noisy, and accurate measurements can demand some way to minimize the variance of results. One very good example is ACPR: the larger amount of data retained by the Tukey window means that fewer time records and FFTs will be necessary to reach the variance required for a valid measurement. Thus, the Tukey window’s combination of reasonable dynamic range and efficient use of samples translates to speed and accuracy in ACPR measurements.
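
You can reproduce these tradeoffs numerically. The sketch below compares a uniform, a Gaussian, and a Tukey window on two simple figures of merit: the fraction of samples weighted above 0.9 and the equivalent noise bandwidth (ENBW) in bins. The specific window parameters are assumptions chosen for illustration, not the exact ones behind the figures above.

```python
import numpy as np
from scipy.signal import windows

def window_stats(w):
    frac_above_0p9 = np.mean(w > 0.9)                      # how much data is nearly fully used
    enbw_bins = len(w) * np.sum(w**2) / np.sum(w)**2       # equivalent noise bandwidth, in bins
    return frac_above_0p9, enbw_bins

n = 4096
candidates = {
    "uniform":  np.ones(n),
    "gaussian": windows.gaussian(n, std=0.2 * n),          # assumed width, for illustration only
    "tukey":    windows.tukey(n, alpha=0.5),               # 50% cosine taper
}

for name, w in candidates.items():
    frac, enbw = window_stats(w)
    print(f"{name:9s}  samples > 0.9: {frac:5.1%}   ENBW: {enbw:.2f} bins")
```

With these assumed parameters, the above-0.9 fractions come out close to the 1/5 and 5/8 figures quoted in the captions above, and the ENBW values hint at how much each window trades selectivity against efficient use of the sampled data.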

Unfortunately, I can’t say if these are the characteristics and tradeoffs the inventor of the Tukey window had in mind. I had assumed the window’s creator was John Tukey, one of the two modern-day discoverers of the FFT algorithm (with J.W. Cooley in 1965). My online research didn’t clarify whether the window was discovered by him or was named after him.

If you have a few minutes to spare, it’s worth browsing available window functions as an example of intelligent tradeoffs. Because you know a lot about the signals you are trying to measure and what’s most important to you, this can be another example of adding information to a measurement to get better results faster.

  Taking extra care in the lands of the large and the small

Recently, I found myself peering at a dial indicator while checking the blade runout on my shiny new 12-inch miter saw. I’m putting up new trim in my house, and the big blade allows me to make some cuts directly on larger assemblies. However, I’m no professional woodworker, so my motto is when in doubt, measure... and measure again.

Given the size of the blade, details such as its flatness and mounting are especially important to making good cuts and tight joints. These factors got me thinking of recent developments at the other end of the scale in our RF work, namely the small physical geometry of the millimeter-frequency hardware we’re increasingly using to send information or sense things.

The new N9041B UXA X-Series signal analyzer and its two different input connectors are good examples of what happens when you scale frequencies up and geometries down.

Comparing the two RF/millimeter frequency inputs of the Keysight UXA 110 GHz signal analyzer.  The full frequency range is available from the 1 mm connector of RF input 2, while the 2.4 mm connector of RF input 1 provides a more rugged connection when frequency coverage to 50 GHz is sufficient

The two coaxial input connectors on the UXA signal analyzer have different characteristics and capabilities. The 2.4 mm connector of input 1 (left) covers frequencies to 50 GHz with a power limit of 1W. The 1 mm connector (right) covers frequencies to 110 GHz and power levels to 1.8 mW.

RF Input 1 is a normal 2.4 mm front-panel connector and, as is common with test equipment, the gender is male to reduce the chance of damage and encourage the use of adapters as connector savers. This separate input provides several benefits for users of the UXA when measuring signals below 50 GHz. It’s more mechanically robust than 1 mm or 1.85 mm connectors. It can also handle much more power without damage: 1 W vs. 0.0018 W for RF Input 2.

RF Input 2 is also male, and it has the more complicated and challenging job: covering higher frequencies with a smaller and more delicate geometry.

In addition to its conductor size, two mechanical differences are apparent. First, the connector body has an additional, larger outer thread ring to mate with test-port adapters rather than standard 1 mm adapters. These adapters are mechanically stronger and less susceptible to damage, and are the best way to connect to the 1 mm input (if they’re available).

The second difference is the pair of threaded bosses, one on either side of the connector. These bosses are used to mount an input-connector vise assembly, perhaps the smallest vise you’ll ever use.

The 1 mm 110 GHz RF input 2 of the UXA signal analyzer can be fitted with a vise or clamp assembly to isolate the connector from the higher torque that may be needed for other connectors, cables or adapters.

A small vise or clamp assembly is attached around the 1 mm, 110 GHz input of the UXA signal analyzer, isolating the mounting torque for the adapter from the torque needed for connecting the adapter to cables, waveguide adapters, etc.

The small size of the 1 mm connectors means that they don’t need—and probably won’t withstand—the torque that’s appropriate for larger connectors. The torque for the 1 mm connector is 3 to 4 inch-pounds, while the torque for 1.85 mm and larger microwave connectors is 8 inch-pounds.

This is a formula for very expensive damage! To prevent it, the vise holds the flats (part of the body) of the 1 mm end of a standard adapter after it has been tightened to the analyzer’s front panel connector, avoiding the transfer of torque used to connect cables or other adapters to the adapter mounted to the instrument.

As discussed here before, torque is important at microwave and millimeter frequencies. DUT connections are difficult enough, and this simple little clamp can neutralize an important source of problems. You can learn more in the connector kit overview.

As for me, if I had just purchased one of these expensive 110 GHz analyzers, I’d be tempted to quickly stencil a warning around the 1 mm connector in fluorescent green: maybe “Are you sure?” or “Are you authorized to use this port?” You can never be too careful in the land of the very small.


It’s RF interference again…

Posted by benz Jul 10, 2017

  The case of the troublesome garage door opener

 

Note from Ben: This is the first in a series of guest posts from Jennifer Stark of Keysight. As discussed here earlier, our increasingly crowded RF environment will result in more interference, and a higher likelihood of it causing problems. To stay ahead of those problems, you’ll need your creativity, deductive skills, and persistence.

 

Interference is everywhere. And often from an unlikely source.

Let’s take the case of an engineer (we’ll call him Mike) who had just installed a new garage door opener. Frustratingly, the remotes that Mike and his wife carried in their cars intermittently failed to activate it.

As a first step, Mike called the support line for the manufacturer of the garage door opener to report the defective product. The installation support person walked Mike through a troubleshooting procedure over the phone. The procedure did not identify any reason that the hardware should be defective. At that point, the installation support person gravely pronounced “You have something called RF interference. That’s your problem, not ours.”

It turns out that Mike is an RF engineer, so he took this as an interesting challenge.

Mike used his N9912A FieldFox handheld RF analyzer in spectrum analyzer mode. He cobbled together a homemade antenna for the input connector and started sniffing around the house for RF interference. He identified the target frequencies by pressing the garage door remote button while looking at the RF spectrum.

Waterfall and spectrogram displays are a way to visually understand the time domain behavior and frequency of occurrence of signals and interference. This display is from a handheld spectrum analyzer with additional software that helps in detecting and visualizing interference.

Waterfall and spectrogram displays are useful for spotting interference and understanding its behavior in the time domain. The N9918A-236 Interference Analyzer and Spectrogram software for FieldFox analyzers adds these displays to spectrum measurements.
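
If you’d like an intuitive feel for why a time-versus-frequency view catches an intermittent signal that a single sweep can miss, the short sketch below simulates one in Python. The sample rate, offset frequency, and burst timing are all arbitrary, and it has nothing to do with the FieldFox software; it’s purely illustrative:

# Simulate an interferer that keys on and off, then view it as a spectrogram
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 1.0e6                          # assumed 1 MS/s of complex baseband samples
t = np.arange(int(fs)) / fs         # one second of data
rng = np.random.default_rng(0)
noise = (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)) / np.sqrt(2)

keyed = np.floor(t / 0.1) % 2       # on for 100 ms, off for 100 ms
x = noise + 0.5 * keyed * np.exp(2j * np.pi * 100e3 * t)   # tone 100 kHz above center

f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=4096, return_onesided=False)
plt.pcolormesh(np.fft.fftshift(f), seg_t,
               10 * np.log10(np.fft.fftshift(Sxx, axes=0)).T, shading='auto')
plt.xlabel("Frequency offset (Hz)")
plt.ylabel("Time (s)")
plt.title("An intermittent signal is obvious in a spectrogram")
plt.show()

In the plot, the bursts show up as dashes marching up the time axis at a single frequency; in an averaged spectrum they would be just a small bump that comes and goes.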

Armed with this information about the frequency range of interest, Mike set out looking for any signals that were near the frequency of the garage door opener. A diligent engineer, he went all over the house looking for clues. He looked in the garage. He looked upstairs in the house, above the garage. He looked in corners of the house.

Eventually, he discovered a small but significant signal in the kitchen. It appeared to be coming from the refrigerator. This puzzled Mike, but his engineering discipline compelled him to investigate. Unplugging the refrigerator did not eliminate the signal. Checking at a different time of day, Mike discovered that the interfering signal was absent even when the refrigerator was operating. It was a mystery.

Leaning on the kitchen counter to collect his thoughts, Mike took stock of what he had learned: intermittent garage door failures, and a signal coming from the kitchen. Then Mike had an insight: the garage door only failed when his wife was home, so the problem tracked her comings and goings.

At this point, Mike noticed his wife’s purse in its normal spot on the counter by the refrigerator. Mike investigated his wife’s purse with his FieldFox. Sure enough, the interfering signal was coming from the purse (not from the refrigerator). Inside the purse was the remote key fob for the car. Mike removed the battery from the key fob and the interfering signal immediately went away.

The solution was simple—replace the troublesome key fob. Now the garage door is working properly, Mike is happy, and Mike’s wife is happy. And, the pesky RF interference is no more.

  Engineers who exemplify creativity, and the ability to explain it

School is out and some are on holiday. It’s a good time to widen this blog’s technology focus a bit with one of my occasional off-topic wanderings. This time we’ll look at the impressive achievements of some engineers of yore, and a couple of enlightening explanations of their creations.

These days we combine our electrical skill with processors, software, and myriad actuator types to generate virtually any kind of complex mechanical action—wherever we need to connect electrons with the physical world. It’s easy to forget how sophisticated tasks were accomplished in the past, without computers or stepper motors, and how even advanced techniques such as perceptual coding were implemented with physical mechanisms.

All these elements were brought together for me recently in an impressive YouTube explanation by “engineerguy” Bill Hammack of the University of Illinois. In just four minutes, Bill explains several poorly understood aspects of film projectors that evolved in the century between their invention (c.1894) and their replacement by digital cinema technology (c.1999).

Bill uses slow-motion footage and animated diagrams to do a great job of explaining how a projector keeps the film going smoothly across the sound sensor while intermittently starting and stopping the film between the lamp and lens. This precisely executed start-stop motion, projecting the film image only when it isn’t moving, coaxes our vision system into seeing a series of stills as fluid motion.

Bill shows how the motion is produced using a synchronized cam, shuttle, and wobble plate. As I dug deeper, I found that some projectors instead use an equally innovative mechanism called a Geneva drive (or Geneva stop), one that was already old when the first crude projectors were created in the late 19th century. Seeing the shape of the Geneva mechanism sent me to my reproduction of the very old book Five Hundred & Seven Mechanical Movements.

Scanned image of a Geneva mechanism or Geneva stop from the 1896 book Five Hundred & Seven Mechanical Movements

This composite figure shows two examples of Geneva drives from the mechanisms in Henry T. Brown’s 1896 book Five Hundred & Seven Mechanical Movements. These convert continuous motion to intermittent motion with smooth starts and stops, and have built-in limits or “stops.”
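
For the curious, the geometry behind that intermittent motion fits in a few lines. Here’s my own back-of-the-envelope model of a four-slot Geneva drive (not something taken from Brown’s book): a pin on a steadily turning crank sweeps the slot of the driven wheel, and the wheel moves only while the pin is engaged.

# Kinematics of a four-slot Geneva drive (simplified sketch)
import numpy as np

n = 4                        # number of slots in the driven wheel
c = 1.0                      # center distance between crank and wheel (arbitrary units)
r = c * np.sin(np.pi / n)    # pin radius that gives smooth, tangential entry and exit

half_engage = np.pi / 2 - np.pi / n                  # crank half-angle of engagement
theta = np.linspace(-half_engage, half_engage, 7)    # crank angles while the pin is in the slot
beta = np.arctan2(r * np.sin(theta), c - r * np.cos(theta))   # resulting wheel angles

for th, b in zip(np.degrees(theta), np.degrees(beta)):
    print(f"crank {th:6.1f} deg -> wheel {b:6.1f} deg")

# The wheel indexes 360/n degrees per crank turn, but moves for only this fraction of it:
print(f"wheel in motion {(2 * half_engage) / (2 * np.pi):.0%} of each crank revolution")

For a four-slot wheel, the math says the wheel is moving for only a quarter of each crank revolution and is parked, fully locked, for the rest; exactly the dwell a film-advance mechanism needs.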

I figured I was nearly alone in my interest in the old book, but that is not the case. Another quick search revealed that these manifold fruits of the Industrial Revolution have been brought into the internet age, with hyperlinks and animation at 507movements.com. The animations are addictive!

The book is a potent antidote to the tendency to forget how clever and imaginative the engineers of the past actually were, though they were often self-taught and worked with limited materials. And, if we take Edison and the Wright Brothers as examples, they were tireless experimenters.

From an 1896 book to the joys of YouTube, there is cleverness in both the engineering and the explaining. If you’re looking for something closer to our RF home, check out Bill’s demonstration of performing Fourier analysis with a mechanical device. You may never think of FFTs in quite the same way again.
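
And if the mechanical Fourier machine whets your appetite, the sum it grinds out with gears and levers takes only a few lines to imitate numerically. This little sketch (mine, not from the video) rebuilds a square wave from its first few odd harmonics:

# Fourier synthesis by brute force: sum the odd harmonics of a square wave
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))        # one cycle of an ideal square wave

approx = np.zeros_like(t)
for k in (1, 3, 5, 7, 9):                      # odd harmonics only
    approx += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)

err = np.sqrt(np.mean((square - approx) ** 2))
print(f"RMS error with 5 harmonics: {err:.3f}")

Watching a century-old machine do the same arithmetic with springs and gears makes the modern FFT feel almost like cheating.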