
  When gems turn to coal, engineers get creative

Despite their flaws, I have described YIG preselector filters as the gems in microwave signal analyzers. These preselectors solve a problem created when mixers are used to downconvert signals for analysis: The mixers produce multiple outputs from a single input frequency, including the main high-side and low-side ones, and many smaller ones. Unless removed somehow, these outputs can cause false signals to appear in the analyzer display, especially in wide frequency spans.

The preselection process removes false responses using a bandpass preselector filter that tracks the appropriate mixing mode in the analyzer, removing signals before they get measured in the final IF stage. It’s a tried-and-true approach, and a combination of automatic alignment and characterization keeps the preselectors centered while accounting for their insertion loss.

Unfortunately, these little gems rapidly lose their luster as frequencies climb into the millimeter range, becoming increasingly impractical above about 50 GHz. That’s a more serious limitation now, as designers dive into the challenges of the bands at 60 GHz and need accurate measurements of power, spurious, and harmonics. The wide bandwidth and time-varying or noise-like behavior of the signals in question only compound the challenge.

The goal, as always, is accurate and unambiguous results over wide frequency ranges with a single connection. The new N9041B UXA X-Series signal analyzer, 110 GHz, uses a conventional YIG preselector below 50 GHz, but employs two other—software-centric—methods at higher frequencies. Both techniques identify false signals and remove them from measurements, but avoid the insertion loss of the preselector filter. This loss would otherwise directly impact the sensitivity of the analyzer, a critical factor at millimeter frequencies where power is precious.

[Figure: Wideband spectrum to 100 GHz showing the displayed average noise level (DANL) of the N9041B UXA signal analyzer; a marker shows DANL better than -147 dBm/Hz at 85 GHz.]

Maintaining a low noise floor is increasingly difficult—and increasingly important—at millimeter frequencies. This wideband measurement from an N9041B signal analyzer indicates an average noise floor better than -147 dBm/Hz at 85 GHz.

One straightforward technique involves a combination of image shifting and image suppression. The analyzer makes two sweeps of the same span, with the local oscillator (LO) frequency shifted so that each of the two main mixing modes—high-side and low-side—is used to convert the frequencies in question for measurement. The two measurements are then compared.

Because of the frequency symmetry of the results, real signals appear at the same frequency while false ones shift. In theory, it’s then a simple matter to display the minimum values of each measurement point, removing the false signals and leaving the real ones.
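
To make the idea concrete, here is a minimal sketch using synthetic traces. The bin positions, signal levels, and noise floor are invented for illustration; the analyzer itself does this in hardware and firmware, not NumPy.

```python
# Minimal sketch of shift-and-compare image removal on synthetic sweeps.
# All values (bin positions, levels, noise floor) are illustrative only.
import numpy as np

rng = np.random.default_rng(seed=1)
bins = 1001

# Two sweeps of the same span over a roughly -150 dBm noise floor
sweep_a = -150 + 2 * rng.standard_normal(bins)   # high-side LO mixing
sweep_b = -150 + 2 * rng.standard_normal(bins)   # low-side LO mixing

sweep_a[400] = sweep_b[400] = -60.0   # real signal: same bin in both sweeps
sweep_a[700] = -70.0                  # image response: a different bin...
sweep_b[250] = -70.0                  # ...in each sweep

# Displaying the per-bin minimum keeps real signals and rejects images
displayed = np.minimum(sweep_a, sweep_b)
print(displayed[400])   # about -60: the real signal survives
print(displayed[700])   # near -150: the image drops to the noise floor
```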

In practice, however, this shift-and-remove technique has limitations. It usually requires a subsequent measurement with a narrower span to accurately measure signal power. Additionally, the selection of the minimum value for display of each measured point distorts the amplitude values of the noise-like signals that are so common these days.

Fortunately, adding information to the measurement process and applying creative signal processing can counter these limitations, yielding accurate measurements of wideband and dynamic signals and avoiding the need for the user to take additional steps. The added information comes in two forms: the analyzer’s own noise floor and the trace data from the alternate (LO-shifted) sweep.

This noise floor and alternate-trace information is used to create a threshold that enables removal of images from measurement data without causing a negative power bias for noise-like signals.
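
Keysight has not published the exact algorithm, so the following is only a plausible sketch of the idea as described: use the known noise floor to set a threshold, flag a point as an image only when one sweep crosses that threshold while the other does not, and average otherwise so noise-like signals keep their true statistics.

```python
# Hypothetical sketch only; the actual N9041B algorithm is not public.
import numpy as np

def combine_sweeps(trace_a_db, trace_b_db, noise_floor_db, margin_db=6.0):
    """Combine two LO-shifted sweeps (in dB) into one displayed trace.

    An image appears in only one of the two sweeps, so it shows up as one
    trace well above the noise-floor threshold while the other sits in the
    noise. Points that agree are averaged, avoiding the negative power bias
    that a per-point minimum would give noise-like signals.
    """
    threshold = noise_floor_db + margin_db
    lo = np.minimum(trace_a_db, trace_b_db)
    hi = np.maximum(trace_a_db, trace_b_db)
    is_image = (hi > threshold) & (lo < threshold)
    return np.where(is_image, lo, 0.5 * (trace_a_db + trace_b_db))

# Reusing the synthetic sweeps from the previous sketch:
# combine_sweeps(sweep_a, sweep_b, noise_floor_db=-150.0)
```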

Of course, the goal of all these operations is to provide accuracy and ease of use over a wide range of frequencies and signal types. Ideally, measurements from 50 to 110 GHz will be as direct and reliable as those at lower frequencies, enabling the engineering of the next generation of communications and imaging solutions.

That’s it for now. Look for more information about software preselection and its application to specific signals and measurements in posts to come.

  Bring your best engineering game, and use all the measurement tools available

The term Internet of Things (IoT) has been around a few years, and sometimes it feels over-hyped. When some folks start musing breathlessly about a near future in which virtually everything will be connected, it feels like they’ve taken the concept a little too far.

Consider cybersecurity problems with Internet-connected devices, from cameras to doorbells to toys. These glitches highlight just one of the ways we aren’t quite ready for universal connectivity. In addition, the questionable utility and uneven functionality of some devices have left many potential users with feelings that range from guardedly cautious to overtly skeptical.

Long before we approach universal connectivity, we will have to contend with another factor that often seems universal: RF interference. The combination of complex radio systems, dense environments, and high user expectations guarantees that interference will be a persistent issue.

The Interference of Things is a newer term that may not be hyped enough. While we don’t need to get overly dramatic about interference problems, much of the growth of wireless applications will depend on solving or avoiding these problems.

One application likely to lead the way is IoT in medical or healthcare settings. A recent blog post by Keysight’s Chris Kelly was my first exposure to the term interference of things, and it’s a good example of the potential seriousness of RF interference. Chris suggests a forward-looking approach, focusing on early debugging and a thoughtful combination of design, simulation, emulation, test and analysis.

He is certainly right about the benefits of anticipating problems, but sometimes you’re plunged into an existing situation like the example he describes: nearly a thousand Wi-Fi devices and expectations that problems will be solved quickly.

As an RF engineer, you’ll draw on your tools, techniques, experience, creativity and insight. While the lab environment and its benchtop equipment provide powerful advantages, the faster path to success may mean going to where the thorny problems are. In her recent post describing an elusive example of RF interference, Jennifer Stark explained how a portable signal analyzer and the reasoning power of a wireless engineer were key. The actual interference offender was a simple device and a simple signal, but it wasn’t going to be found in the lab.

Fortunately, portable signal analyzers are expanding their capabilities and frequency range at a rapid pace. Keysight’s FieldFox, for example, provides measurement and display capabilities that can help you find and troubleshoot RF interference problems away from the lab.

[Figure: Two example displays from the Keysight FieldFox handheld analyzer, including the channel scanner and real-time spectrum analysis.]

The automatic channel scanner (left) speeds measurement of spurious and intermodulation products, while optional real-time spectrum analysis (RTSA; right) can uncover short-duration events.

In a crowded RF environment, the dynamics of time-varying signals pose many challenges, and transient interactions can be hard to understand. Perhaps the most powerful analysis tool is the wideband, gap-free signal capture and playback post-processing that is available in VSA software for signal analyzers. Signal captures can be free-run, time-qualified, or triggered by matching an RTSA frequency mask.

With a complete, gap-free signal in memory—including pre-trigger data—you can perform any type of signal analysis in post-processing: adjust span and center frequencies, apply demodulation, and more. For time-dependent interactions, a spectrogram display can be enlightening.

[Figure: Gap-free spectrogram (spectrum vs. time) display from vector signal analyzer (VSA) software of the 2.4 GHz ISM band, including WLAN, cordless phone, and Bluetooth signals.]

A spectrogram shows how a signal spectrum (each horizontal line) varies with time (vertical axis) and power (color). This gap-free spectrogram with very fine time resolution was generated by post-processing a signal captured in memory.

The analysis and troubleshooting power of this display comes from its ability to represent everything that happened across a range of frequencies over a known time interval. This clear, comprehensive view is powerful information to mix in with your own knowledge of the system, signal and environment.
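
If you want to experiment with this kind of display yourself, here is a minimal sketch that post-processes a gap-free IQ record into a spectrogram with SciPy. The sample rate and the “captured” signal are synthetic stand-ins for a real capture.

```python
# Minimal spectrogram from a gap-free IQ record; the capture is synthetic.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 1.0e6                                # assumed capture sample rate, 1 MS/s
t = np.arange(int(0.1 * fs)) / fs         # 100 ms gap-free record

# Stand-in for a captured signal: a tone hopping between +/-100 kHz, in noise
hop = np.sign(np.sin(2 * np.pi * 50 * t))
iq = np.exp(2j * np.pi * 100e3 * hop * t)
iq += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Overlapped FFTs give fine time resolution with no gaps in coverage
f, tt, Sxx = signal.spectrogram(iq, fs=fs, nperseg=256, noverlap=192,
                                return_onesided=False)
Sxx_db = 10 * np.log10(np.fft.fftshift(Sxx, axes=0) + 1e-20)

plt.pcolormesh(np.fft.fftshift(f), tt, Sxx_db.T, shading='auto')
plt.xlabel('Frequency offset (Hz)')
plt.ylabel('Time (s)')
plt.show()
```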

 

One note: If you’re interested in the medical environment, or in using it as a guide to other demanding situations, check out Brad Jolly’s webcast Smart Testing to Limit Your Risk Exposure in Wireless Medical Devices. He’ll explain what to do when life and health depend on reliable radio links.

  How about “ACA out of detent”?

Some people of my generation viewed the 1960s race to the Moon as an alternative to a military conflict, with the astronauts as the point of the spear. They were the space equivalents of fighter pilots, doing a more civilized kind of combat. Or maybe they were modern-day cowboys, taming the wildest frontier.

I was too young to be thinking explicitly about engineering as a career, but I viewed those on Earth and those flying the ships mainly as engineers. I still do. Consider the first words spoken by Buzz Aldrin and Neil Armstrong after the first lunar touchdown:

   Shutdown. (Armstrong)

   Okay. Engine stop. (Aldrin)

   ACA—out of detent. (Aldrin)

   Out of detent. (Armstrong)

   Mode control—both auto. Descent engine command override—Off. Engine arm—Off. (Aldrin)

   413 is in. (Aldrin)

By contrast, what we almost always hear in news or documentary coverage after “contact light” (when a probe extending from the lander footpads has touched the surface, but the spacecraft has not yet landed) is a pause and something rather more stirring:

   Houston, Tranquility Base here. (Armstrong)

   The Eagle has landed. (Armstrong)

Some people contend that the “real” first words were Neil’s announcement of Tranquility Base, but I disagree. In so many ways, the Apollo program and its predecessors were fundamentally engineering efforts. It was engineering of the first order, performed by thousands of people, that directed construction and testing, which was performed by hundreds of thousands.

The space program encompassed almost all engineering disciplines, especially electrical and computer engineering. Electrical engineers pioneered systems for control, telemetry, navigation, communications, and tracking. Computer engineers made astonishing advances in both miniaturized (for the time) hardware and real-time, fault-tolerant programming that did an extraordinary job of managing priorities.

Those first words reflected an engineering foundation and the mission planning that it drove. The astronauts’ first priority was to execute those plans, maximizing safety and the chances of mission success. When asked what dominated their thinking, astronauts say little about fear or excitement; instead, they describe a focus on not messing up and on using their wits to solve the problems that came up (see also Apollo 13).

Most astronauts were engineers, and later engineering test pilots. Many had advanced degrees, and all had considerable engineering training. For example, Aldrin’s first degree was in mechanical engineering, and he pioneered rendezvous technology that was essential for the Moon missions.

It’s no surprise, then, that Armstrong and Aldrin first uttered technical jargon, supporting the essential aspects of completing the landing and handling contingencies for a possible immediate abort back to lunar orbit. The ACA-detent discussion referred to a way to tell the guidance system that they were stable on the lunar surface, stopping any useless thruster activity. The “413 is in” comment referred to a command telling the computer that their orientation was horizontal on the surface, removing drift-error ambiguity that could endanger any return to orbit.

After the triumphant announcement of the landing, Armstrong, Aldrin, and the Mission Control team immediately returned to the technical essentials, with Armstrong radioing, “Okay. We’re going to be busy for a minute.” Mission Control spent an intense 90 seconds going through the Stay—No stay decision process, while corresponding efforts on the lunar surface took even longer. If you’re interested in more detail, an annotated transcript is available.

The technical effort behind the Moon landing inspired a generation of engineers of all kinds, and recently some have virtually revisited the landing site. On the 45th anniversary of the landing, NASA used the cameras of its Lunar Reconnaissance Orbiter to generate a 3-D survey.

[Figure: Recent overhead picture of the Apollo 11 landing site, showing the descent stage of the lander, scientific equipment left behind, and the tracks of the astronauts.]

This annotated composite image shows the reexamination of the first manned lunar exploration site by the cameras and stereo digital elevation model from the Lunar Reconnaissance Orbiter. (Image from NASA)

All this certainly inspired my own efforts in engineering and science, and I’ve found that the unvarnished details are always plenty exciting, interesting and even inspiring. If you’d like to make your own virtual visit, rich online resources are now available anywhere there’s an Internet connection.

  Making other windows seem a little wasteful

A proverb that’s perhaps 2,000 years old describes the mills of the gods as “grinding slowly but exceedingly fine.” I’d like to flatter myself that it applies to my thinking on some matters but, alas, the only relevant part appears to be the slowness. Witness how long it’s taken me to get back to FFT window functions and IF filters for RF and microwave measurements.

In both signal analysis and demodulation, flexibility in windows and related filtering operations is increasingly important. As I described in my earlier post, windows are time-varying amplitude-weighting functions that force the signal samples in a time record to be periodic within each block of sampled data. This removes discontinuity errors at the ends of the records that would foul up the spectrum results. To do this, the weighting coefficients are generally zero at either end, with a value of one in the middle, and a smooth range of increasing and decreasing values on either side.

It’s ironic that window functions actually discard or de-emphasize information (e.g., some data samples) to improve measurements. But of course the vital thing is that they trade away this information for desirable frequency-domain filter characteristics such as flatness or selectivity. For example, here’s a Gaussian window in the time and frequency domains.

[Figure: Time-domain (samples) and frequency-domain (bins) characteristics of the Gaussian FFT window function.]

When compared to a uniform weighting of one, the Gaussian window reduces leakage and improves dynamic range by de-emphasizing a large portion of the sampled data. The weighting coefficient (left) is greater than 0.9 for only about 1/5 of the samples. (image from Wikimedia Commons)

I have always been surprised at the amount of sampled data in each time record that windows remove from the spectrum calculation, as they improve dynamic range (reducing leakage or sidelobes) or improve amplitude accuracy (by reducing scalloping error).
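
A small numerical sketch makes the tradeoff visible: a tone that is not periodic in the record leaks badly with uniform weighting, while a Gaussian window pushes the far-from-carrier leakage down dramatically. The window width here is my choice, picked to resemble the figure above.

```python
# Leakage comparison: uniform (no) window vs. a Gaussian window.
import numpy as np
from scipy.signal import windows

n = 1024
t = np.arange(n) / n
x = np.cos(2 * np.pi * 100.5 * t)   # 100.5 cycles: not periodic in the record

gauss = windows.gaussian(n, std=0.2 * n)   # width assumed to match the figure

spec_uniform = 20 * np.log10(np.abs(np.fft.rfft(x)) + 1e-16)
spec_gauss = 20 * np.log10(np.abs(np.fft.rfft(x * gauss)) + 1e-16)

# Leakage roughly 300 bins from the tone, relative to each spectrum's peak
print(spec_uniform[400] - spec_uniform.max())  # only about -55 dB down
print(spec_gauss[400] - spec_gauss.max())      # near the numerical noise floor
```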

As with so many things in engineering, it’s a matter of understanding requirements and cleverly optimizing tradeoffs. Consider the “confined” version of the Gaussian window below.

[Figure: Time-domain (samples) and frequency-domain (bins) characteristics of a “confined” version of the Gaussian FFT window function.]

Modest time-domain changes in the confined version of the Gaussian window (left) reduce sidelobes dramatically (right) and improve dynamic range. However, even more signal samples are de-emphasized, with a weighting coefficient greater than 0.9 for only about 1/8 of the samples. (image from Wikimedia Commons)

From the standpoint of dynamic range, at least, it appears that selectively removing information improves spectrum characteristics. Of course, dynamic range is not the only important aspect of spectrum measurements, and another important tradeoff can be illustrated with the Tukey window.

[Figure: Time-domain (samples) and frequency-domain (bins) characteristics of the Tukey FFT window function.]

The Tukey window is not impressive in the frequency domain, but is remarkable for how much of the sampled data it retains in the spectrum calculation. Its weighting coefficients are greater than 0.9 for about 5/8 of the signal samples. (image from Wikimedia Commons)

Many important signals in RF measurements are noise-like or noisy, and accurate measurements can demand some way to minimize the variance of results. One very good example is ACPR: the larger amount of data retained by the Tukey window means that fewer time records and FFTs will be necessary to reach the variance required for a valid measurement. Thus, the Tukey window’s combination of reasonable dynamic range and efficient use of samples translates to speed and accuracy in ACPR measurements.
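
Those retained-sample fractions are easy to check numerically. The parameter choices below (the Gaussian width and the Tukey taper fraction) are my assumptions, picked to match the figures, since the captions do not state them.

```python
# Fraction of window coefficients above 0.9 for the windows discussed above.
# The std and alpha values are assumptions chosen to resemble the figures.
import numpy as np
from scipy.signal import windows

n = 4096
for name, w in [('Gaussian', windows.gaussian(n, std=0.2 * n)),
                ('Tukey',    windows.tukey(n, alpha=0.5))]:
    frac = np.mean(w > 0.9)
    print(f'{name}: {frac:.2f} of samples weighted above 0.9')
# Gaussian: ~0.18 (about 1/5);  Tukey: ~0.60 (about 5/8)
```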

Unfortunately, I can’t say if these are the characteristics and tradeoffs the inventor of the Tukey window had in mind. I had assumed the window’s creator was John Tukey, one of the two modern-day discoverers of the FFT algorithm (with J.W. Cooley in 1965), but my online research didn’t clarify whether the window was invented by him or simply named after him.

If you have a few minutes to spare, it’s worth browsing available window functions as an example of intelligent tradeoffs. Because you know a lot about the signals you are trying to measure and what’s most important to you, this can be another example of adding information to a measurement to get better results faster.