
  Handling the frequency and bandwidth challenges of 5G, radar, and more

5G technologies and markets are much in the news these days, and for good reasons. The economic potential is large, the opportunities are relatively near-term, and the technological challenges are just the kind of thing RF engineers can get excited about. Whether your focus is on design or test, there is plenty of difficult work ahead in the pursuit of practical ways to fully utilize the potential capacity of the centimeter and millimeter bands.

Unfortunately, much of the analysis and commentary focuses on economic factors and broad-brush coverage of technological challenges. A good overview of a complex subject is essential for resource planning, but it isn’t deep enough for us to see the specific measurement challenges and how we might handle them.

Some measurement experts have a “just right” combination of broad technical knowledge and specific measurement insight to make a contribution here, and I can heartily recommend Keysight’s Pete Cain. He has not only the expertise but also an impressive ability to explain the technical factors and tradeoffs.

Pete recently produced a webcast on millimeter-wave challenges, and it’s a good fit for the needs of the RF/microwave engineer or technical manager who will be dealing with these extreme frequencies and bandwidths. It’s available on-demand now, and I wanted to share a few highlights.

His presentation begins with a discussion of general technology drivers such as the high value of lower-frequency spectrum and the public benefit of shifting existing traffic to higher frequencies to free up that spectrum whenever possible. That’s an important issue, and perhaps a matter of future regulation to avoid a tragedy of the commons.

Next, Pete explains the problem of increased noise that accompanies the wider bandwidths and increased data rates of the microwave and millimeter bands. This noise reduces SNR and eventually blunts channel capacity gains, as shown here.

Comparing the maximum spectral efficiency of different channel bandwidths. The S-shaped curve shows how spectral capacity and channel efficiency returns diminish as bandwidths get very wide.

The wide bandwidths available at millimeter-wave frequencies promise dramatic gains in channel capacity. Unfortunately, these bandwidths gather up more noise, and that limits real-world capacity and spectral efficiency.

As shown in the diagram, Pete also discusses spectral efficiency and shows where existing services operate. This is where RF engineers have already turned theory into practical reality, and it maps the landscape of tradeoffs they’ll optimize as millimeter technologies become widespread.
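To make the bandwidth-versus-noise tradeoff concrete, here is a minimal sketch of my own (not from the webcast) using the Shannon-Hartley relation. The received power, noise figure, and bandwidth values are arbitrary assumptions chosen only to show the trend.

```python
import math

def shannon_capacity_bps(bandwidth_hz, signal_power_dbm,
                         noise_density_dbm_hz=-174.0, noise_figure_db=10.0):
    """Shannon-Hartley capacity for a fixed received power over kTB noise."""
    noise_power_dbm = noise_density_dbm_hz + noise_figure_db + 10 * math.log10(bandwidth_hz)
    snr_linear = 10 ** ((signal_power_dbm - noise_power_dbm) / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Fixed -70 dBm received power: total capacity keeps growing with bandwidth,
# but spectral efficiency (b/s/Hz) falls as the wider channel gathers more noise.
for bw_hz in (100e6, 400e6, 1e9, 2e9):
    c = shannon_capacity_bps(bw_hz, -70.0)
    print(f"{bw_hz/1e6:6.0f} MHz: {c/1e9:5.2f} Gb/s, {c/bw_hz:4.2f} b/s/Hz")
```

With the received power held fixed, total capacity still grows as the channel widens, but each additional hertz buys less, which is the diminishing-returns behavior in the curve above.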

To further inspire the technically inclined, Pete dives deeper into the essentials of high-frequency testing, including the issues of loss and frequency response at microwave and millimeter-wave frequencies. As is often the case, high quality measurements require a combination of hardware, software, and careful measurement technique. In particular, he describes the value of establishing source and analyzer calibration planes right at the DUT, thereby minimizing measurement error.

Diagram shows measurement of a millimeter-wave DUT where the calibration planes of the input and output have been moved to the edges of the DUT itself, for better accuracy.

To optimize accuracy, it’s necessary to move the calibration planes of measurements from the instrument front panels to the DUT signal ports. Software such as the K3101A Signal Optimizer can make this much easier.

Moving the calibration planes to the DUT ports grows more important as frequencies increase. Loss is an issue, of course, but in many cases the thorniest problems are frequency-response effects such as ripple and non-repeatability. Ripple is especially troublesome for very-wideband signals, while repeatability can be compromised by sensitivity to cable movement and routing as well as connector torque and wear.

In the webcast, Pete also compares signal-connection methods, including coax, waveguide, probes, and antennas.

That’s just a quick overview of an enlightening presentation. To see the whole thing, check out the “Millimeter wave Challenges” on-demand webcast—and good luck in the land of very short waves.

  As technologists, we need ways to tell the difference

Over the past few months I’ve been hearing more about a propulsion technology called an EM Drive or EMDrive or, more descriptively, a “resonant cavity thruster.” Because it uses high-power microwaves, this sort of thing should be right in line with our expertise. However, that doesn’t seem to be the case, because the potential validity of this technique may be more in the domain of physicists—or mystics!

Before powering ahead, let me state my premise: What interests me most is how one might approach claims such as this, especially when a conclusion does not seem clear. In this case, our knowledge of microwave energy and associated phenomena does not seem to be much help, so we’ll have to look to other guides.

First, let’s consider the EM drive. Briefly, it consists of an enclosed conductive cavity in the form of a truncated cone (i.e., a frustum). Microwave energy is fed into the cavity, and some claim a net thrust is produced. It’s only a very small amount of thrust, but it’s claimed to be produced without a reaction mass. This is very different from established technologies such as ion thrusters, which use electric energy to accelerate particles. The diagram below shows the basics.

Diagram of EM drive, showing mechanical configuration, magnetron input to the chamber, and supposed forces that result in a net thrust

This general arrangement of an EM drive mechanism indicates net radiation force and the resulting thrust from the action of microwaves in an enclosed, conductive truncated cone. (image from Wikipedia)

The diagram is clear enough, plus it’s the first time I’ve had the chance to use the word “frustum.” Unfortunately, one thing the diagram and associated explanations seem to lack is a model—quantitative or qualitative, scientific or engineering—that clearly explains how this technology actually works. Some propose the action of “quantum vacuum virtual particles” as an explanation, but that seems pretty hand-wavy to me.

Plenty of arguments, pro and con, are articulated online, and I won’t go into them here. Physicists and experimentalists far smarter than me weigh in, and they are not all in agreement. For example, a paper from experimenters at NASA’s Johnson Space Center has been peer-reviewed and published. Dig in if you’re interested, and make up your own mind.

I’m among those who, after reading about the EM drive, immediately thought “extraordinary claims require extraordinary evidence.” (Carl Sagan made that dictum famous, but I was delighted to learn that it dates back 200 years to our old friend Laplace.) While it may work better as a guideline than a rigid belief, it’s an excellent starting point when drama is high. The evidence in this case is hardly extraordinary, with a measured thrust of only about a micronewton per watt. It’s devilishly hard to reduce experimental uncertainties enough to reliably measure something that small.

I’m also not the first to suspect that this runs afoul of Newton’s third law and the conservation of momentum. A good way to evaluate an astonishing claim is to test it against fundamental principles such as this, and a reaction mass is conspicuously absent. Those who used this fundamentalist approach to question faster-than-light neutrinos were vindicated in good time.

It’s tempting to dismiss the whole thing, but there is still that NASA paper, and the laws of another prominent scientific thinker, Arthur C. Clarke. I’ve previously quoted his third law: “Any sufficiently advanced technology is indistinguishable from magic.” One could certainly claim that this microwave thruster is just technology more advanced than I can understand. Maybe.

Perhaps Clarke’s first law is more relevant, and more sobering: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

I’m neither distinguished nor a scientist, but I am a technologist, and perhaps a bit long in the tooth. I should follow Clarke’s excellent example and maintain a mind that is both open and skeptical.


The Four Ws of Spurious Emissions

Posted by benz Apr 30, 2017

  Refining your design and creating happy customers

 

Note from Ben Zarlingo: I invite you to read this guest post by Nick Ben, a Keysight engineer. He discusses the what, why, when and where of spurious emissions before delving into the importance of identifying them in your device’s transmitting signal and thereby improving your product design.

 

If you’re reading this, you’re probably an engineer. That means you may be looking for ways to improve your present and future designs. If yours includes a transmitter, one key to success is checking for spurious emissions that can interfere with signals outside your device’s designated bandwidth. Characterizing spurious behavior can save money and help you create happy and loyal customers.

But wait, you say: WHAT, WHY, WHEN and WHERE can I save money and create happy customers by measuring spurious emissions? I’m glad you asked. Let’s take a look.

What: A Quick Reminder

Ben Z has covered this: Spurs are unwanted stray frequency content that can appear both outside and within the device under test’s (DUT’s) operating bandwidth. Think of a spur as the oddball signal you weren’t expecting to be emanating from your device—but there it is. Just like you wouldn’t want your family’s phones to overlap with one another in the same band, they shouldn’t interfere with that drone hovering outside your window (made you look). If you refer to the traces below, the left side presents a device’s transmitted signal without a spur and the right side reveals an unwanted spurious signal, usually indicating a design flaw.  

 

Two GSM spectrum measurements are compared.  The one on the right contains a spurious signal.

In these side-by-side views of a 1 MHz span at 935 MHz, the presence of an unwanted spur is visible in the trace on the right. Further investigation should identify the cause.

Why and When

In densely populated regions of the frequency spectrum, excessive spurs and harmonics are more likely to be both troublesome and noticed. To prevent interference with devices operating on nearby bands, we need to measure our own spurious emissions.

Where (& How)

Spur measurements are usually performed post-R&D, during the design validation and manufacturing phases. Using a spectrum or signal analyzer, these measurements are presented on a frequency vs. amplitude plot to reveal any undesirable signals. Spur characterization is done over a frequency span of interest using a narrow resolution bandwidth (RBW) filter and auto-coupled sweep-time rules. The sweep normally begins with the fundamental and reaches all the way up to the tenth harmonic.
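As a rough illustration of that setup, here is a minimal sketch of a spur-search plan covering the fundamental through the tenth harmonic. The segment width, RBW value, and helper name are my own assumptions for illustration, not a standard-mandated recipe or an instrument API.

```python
def spur_search_segments(fundamental_hz, harmonics=10, rbw_hz=10e3, half_span_hz=5e6):
    """Return (start_hz, stop_hz, rbw_hz) spans centered on each harmonic."""
    return [(n * fundamental_hz - half_span_hz,
             n * fundamental_hz + half_span_hz,
             rbw_hz) for n in range(1, harmonics + 1)]

# Example: a transmitter near 935 MHz, searched up to its tenth harmonic
for start, stop, rbw in spur_search_segments(935e6):
    print(f"{start/1e9:7.3f} to {stop/1e9:7.3f} GHz, RBW {rbw/1e3:.0f} kHz")
```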

While current spur-search methods are good for the design validation phase, they aren’t great because measurements are too slow for pass/fail testing of thousands of devices on the production line. These tests are often based on published standards (perhaps from the FCC) that may be described in terms of a spectrum emission mask (SEM). Fortunately, SEM capability is available in Keysight X-Series signal analyzers.

To tackle the issue of slow sweep times and enable faster testing, today’s signal analyzers use digital technologies, especially DSP, to improve measurement speed and performance (see one of Ben’s earlier posts). Ultimately, you can achieve faster sweep times—as much as 49x faster than older analyzers—when chasing low-level signals at wide bandwidths.

Wrapping Up

If you’d like to learn more, a recent application note titled Accelerating Spurious Emission Measurements using Fast-Sweep Techniques includes detailed explanations, techniques, and resources. You’ll find it in the growing collection on our signal analysis fundamentals page.

I hope my first installment of The Four Ws of X provided some information you can use. Please post any comments—positive, constructive, or otherwise—and let me know what you think. If it was useful, please give it a like and, of course, feel free to share.

  Electrical engineers lead the way

Years ago, a manager of mine seemed to constantly speak in analogies. It was his way of exploring the technical and business (and personal!) issues we faced, and after a while I noticed how often he’d begin a sentence with “It’s like...”

His parallels came in a boundless variety, and he was really creative in tying them to our current topic, whatever it was. He was an excellent manager, with a sense of humor and irony, and his analogies generally improved group discussions.

There were exceptions, of course, and misapplying or overextending qualitative analogies is one way these powerful tools can let us down.

The same is true of quantitative analogies, but many have the vital benefit of being testable in ways that qualitative analogies aren’t. Once validated, they can be used to drive real engineering, especially when we can take advantage of established theory and mathematics in new areas.

The best example I’ve found of an electrical engineering (EE) concept with broad applicability in other areas is impedance. It’s particularly powerful as a quantitative analogy for physical or mechanical phenomena, plus the numerical foundations and sophisticated analytical tools of EE are all available.

Some well-established measurements even use the specific words impedance and its reciprocal—admittance. One example is the tympanogram, which plots admittance versus positive and negative air pressure in the human ear.

Two "tympanograms" are impedance measurements of the human hearing system. These diagrams show admittance (inverse of impedance) to better reveal the response of the system at different pressures, to help diagnose problems.

These tympanograms characterize the impedance of human hearing elements, primarily the eardrum and the middle ear cavity behind it, including the bones that conduct sound. The plot at the left shows a typical maximum at zero pressure, while the uniformly low admittance of the one on the right may indicate a middle ear cavity filled with fluid. (Image from Wikimedia Commons)

Interestingly, those who make immittance* measurements of ears speak of them as describing energy transmission, just like an RF engineer might.

Any discussion of energy transmission naturally leads to impedance matching and transformers of one kind or another. That’s where the analogies become the most interesting to me. Once you see the equivalence outside of EE, you start noticing it everywhere: the transmission in a car, the arm on a catapult, the exponential horn on a midrange speaker. One manufacturer of unconventional room fans has even trademarked the term “air multiplier” to describe the conversion of a small volume of high-speed air to a much larger volume at lower speeds.

All of these things can be quantitatively described with the power of the impedance analogy, leading to effective optimization. It’s typically a matter of maximizing energy transfer, though other tradeoffs are illuminated as well.

Maybe my former manager’s affection for analogies rubbed off on me all those years ago. I certainly have a lot of respect for them, and their ability to deliver real engineering insight in so many fields. We EEs can take some degree of pride in leading the way here, even if we’re the only ones who know the whole story.

*A term coined by our old friend H. W. Bode in 1945.

  The difference can be anything from 0 dB to infinity

Most RF engineers are aware that signal measurements are always a combination of the signal in question and any contributions from the measuring device. We usually think in terms of power, and power spectrum has been our most common measurement for decades. Thus, the average power reading at a particular frequency is the result of combining the signal you’re trying to measure with any extra “stuff” the analyzer chips in.

In many cases we can accept the displayed numbers and safely ignore the contribution of the analyzer because its performance is much better than that of the signal we’re measuring. However, the situation becomes more challenging when the signal in question is so small that it’s comparable to the analyzer’s own contribution. We usually run into this when we’re measuring harmonics, spurious signals, and intermodulation (or its digital-modulation cousin, adjacent channel power ratio, ACPR).

I’ve discussed this situation before, particularly in spurious measurements in and around the analyzer’s noise floor.

Graphic explanation of addition of CW signal and analyzer noise floor. Diagram of apparent and actual signals, actual and displayed signal-to-noise ratio (SNR).

Expanded view of measurement of a CW signal near an analyzer’s noise floor. The analyzer’s own noise affects measurements of both the signal level and signal/noise.

The signal power and the analyzer’s internal noise power simply add. For example, when the actual signal power and the analyzer noise floor are the same, the measured result will be high by 3 dB.
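The arithmetic behind that 3 dB figure is simple power addition in linear units. Here is a small sketch; the levels are illustrative only.

```python
import math

def displayed_level_dbm(signal_dbm, analyzer_noise_dbm):
    """Incoherent (power) addition of the true signal and the analyzer noise floor."""
    total_mw = 10 ** (signal_dbm / 10) + 10 ** (analyzer_noise_dbm / 10)
    return 10 * math.log10(total_mw)

# Against a -100 dBm noise floor: a signal at the floor reads 3 dB high,
# and one 10 dB above it reads about 0.4 dB high.
for true_dbm in (-100, -95, -90):
    print(f"true {true_dbm} dBm -> displayed {displayed_level_dbm(true_dbm, -100.0):.2f} dBm")
```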

However, it’s essential to understand that the added power is in the form of noise, which is not coherent with the signal—or anything else. The general incoherence of noise is a valuable assumption in many areas of measurement and signal processing.

We can get tripped up when we unknowingly violate this assumption. Consider the addition of these CW examples in the time domain, where the problem is easier to visualize:

Graphic explanation of addition of two equal-amplitude CW signals, showing effect of relative phase. In-phase addition produces a total power 6 dB greater than an individual signal, while out-of-phase addition results in zero net power.

The addition of coherent signals can produce a wide range of results, depending on relative amplitude and phase. In this equal-amplitude example, the result can be a signal with twice the voltage and therefore 6 dB more power (top) or a signal with no power at all (bottom).

I’ve previously discussed the log scaling of power spectrum measurements and the occasionally surprising results. The practical implications for coherent signals are illustrated by the two special cases above: two equal signals with either the same or opposite phase.

When the signals have the same phase, they add to produce one with four times the power, or an amplitude 6 dB higher. With opposite phase the signals cancel each other, effectively hiding the signal of interest and showing only the measurement noise floor. Actual measurements will fall somewhere between these extremes, depending on the phase relationships.
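For the coherent case, the result depends only on the relative phase of the two equal tones. This short sketch sweeps that phase; again, the numbers are purely illustrative.

```python
import math

def coherent_sum_db(phase_deg):
    """Power of two equal-amplitude coherent tones, relative to one tone alone."""
    power_ratio = 2 + 2 * math.cos(math.radians(phase_deg))  # |1 + e^(j*phi)|^2
    return 10 * math.log10(power_ratio) if power_ratio > 0 else float("-inf")

for phase in (0, 90, 120, 180):
    print(f"{phase:3d} deg: {coherent_sum_db(phase):+.1f} dB relative to a single tone")
```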

This coherent addition or subtraction isn’t typically a concern with spurious signals; however, it may arise with harmonic and intermodulation or ACPR measurements in which the analyzer’s own distortion products might be coherent with the signal under test. Some distortion products might add to produce a power measurement that is incorrectly high; others could cancel, causing you to miss a genuine distortion product.

I suppose there are two lessons for RF engineers. One: some measurements may need a little more performance margin to get the accuracy you expect for very small signals. The other: be careful about the assumptions for signal addition and the average power of the result.

In a future post I’ll describe what we learned about how this applies to optimizing ACPR measurements. Until then, you can find more information on distortion measurements at our signal analysis fundamentals page.

  Good news about RBW, noise floor, measurement speed, and finding hidden signals

Some things never change, and others evolve while you’re busy with something else. In recent years, RF measurements have provided good examples of both, especially in signal analysis fundamentals such as resolution bandwidth filtering, measurement noise floor, and speed or throughput. It can be a challenge to keep up, so I hope this blog and resources such as the one I’ll mention below will help.

One eternal truth: tradeoffs define your performance envelope. Optimizing those tradeoffs is an important part of your engineering contribution. Fortunately, on the measurement side, that envelope is getting substantially larger, due to the combined effect of improved hardware performance plus signal processing that is getting both faster and more sophisticated.

Perhaps the most common tradeoffs made in signal analysis—often done instinctively and automatically by RF engineers—are the settings for resolution bandwidth and attenuation, affecting noise floor and dynamic range due to analyzer-generated distortion. Looking at the figure below, the endpoints of the lines vary according to analyzer hardware performance, but the slopes and the implications for optimizing measurements are timeless.

Graphic shows how signal analyzer noise floor varies with RBW, along with how second and third order distortion vary with mixer level. Intersection of these lines shows mixer level and resolution bandwidth settings that optimize dynamic range

Resolution-bandwidth settings determine noise floor, while attenuation settings determine mixer level and resulting analyzer-produced distortion. Noise floor and distortion establish the basis for the analyzer’s dynamic range.

This diagram has been around for decades, helping engineers understand how to optimize attenuation and resolution bandwidth. For decades, too, RF engineers have come up against a principal limit of the performance envelope, where the obvious benefit of reducing resolution bandwidth collides with the slower sweep speeds resulting from those smaller bandwidths.

That resolution bandwidth limit has been pushed back substantially, with dramatic improvements in ADCs, DACs, and DSP. Digital RBW filters can be hundreds of times faster than analog ones, opening up the use of much narrower resolution bandwidths than had been practical, and giving RF engineers new choices in optimization. As with preamps, the improvements in noise floor or signal-to-noise ratio can be exchanged for benefits ranging from faster throughput, to better margins, to the ability to use less-expensive test equipment.
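To put rough numbers on the optimization shown in the diagram above, here is a sketch of the classic calculation for third-order-limited dynamic range: the optimum mixer level sits where the distortion line crosses the noise line. The DANL and third-order intercept values are assumptions for illustration, not the specifications of any particular analyzer.

```python
import math

def optimum_third_order(danl_dbm_per_hz, rbw_hz, toi_dbm):
    """Mixer level and dynamic range where third-order distortion equals the noise floor."""
    noise_floor = danl_dbm_per_hz + 10 * math.log10(rbw_hz)   # noise in the chosen RBW
    mixer_level = (noise_floor + 2 * toi_dbm) / 3.0           # intersection of the two lines
    dynamic_range = (2.0 / 3.0) * (toi_dbm - noise_floor)
    return mixer_level, dynamic_range

# Assumed -155 dBm/Hz DANL and +18 dBm third-order intercept
for rbw_hz in (1e3, 10e3, 100e3):
    level, dr = optimum_third_order(-155.0, rbw_hz, 18.0)
    print(f"RBW {rbw_hz/1e3:5.0f} kHz: mixer level {level:6.1f} dBm, dynamic range {dr:5.1f} dB")
```

Narrowing the RBW lowers the noise line and extends dynamic range, which is exactly why the faster digital RBW filters described above are so valuable.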

Improvements in DSP and signal converters have also enabled new types of analysis such as digital demodulation, signal capture and playback, and real-time spectrum analysis. These capabilities are essential to the design, optimization and troubleshooting of new wireless and radar systems.

If you’d like to know more, and take advantage of some of these envelope-expanding capabilities, check out the new application note Signal Analysis Measurement Fundamentals. It provides a deeper dive into techniques and resources, and you’ll find it in the growing collection at our signal analysis fundamentals page.

A few months ago, Keysight’s Brad Frieden and I both wrote about downconversion and sampling, related to wireless and other RF/microwave signals. Brad's article in Microwaves & RF appeared about two weeks before my blog post, though I somehow missed it.

His main focus was on oscilloscopes and improving signal-to-noise ratio (SNR) in measurements of pulsed RF signals. He described the use of digital downconversion, resampling, and filtering to trade excess bandwidth for improved noise floor.

Inside Keysight, debates about scopes versus signal analyzers can become quite animated. One reason: we have slightly different biases to how we look at signals. Engineers in some areas reach first for oscilloscopes, while others have always leaned on spectrum and signal analyzers. It’s more than a general preference for time or frequency domain analysis, but that’s a start.

In test, those distinctions are fading among manufacturers and end users. On the supply side, oscilloscopes are extending frequency coverage into the microwave and millimeter ranges, and signal analyzers are expanding bandwidth to handle wider signals in aerospace/defense and wireless. Happily, both platforms can use the same advanced vector signal analyzer software that provides comprehensive time-, frequency-, and modulation-domain measurements.

On the demand side, frequencies and bandwidths are expanding rapidly in wireless and aerospace/defense applications. That’s why both types of instruments have roles to play.

But if they run the same software and can make many of the same measurements, how do you choose? I’ll give some guidelines here, so that your requirements and priorities guide your choice.

Bandwidth: Over the last 15 years, growing signal bandwidths have pulled oscilloscopes into RF measurements because they can handle the newest and widest signals. In some cases they’re used in combination with signal analyzers, digitizing the analyzer’s IF output at a bandwidth wider than its own sampler supports. That’s still the case sometimes, even as analyzer bandwidths have reached 1 GHz, and external sampling can extend the bandwidth to 5 GHz! It’s telling that, in his article on oscilloscopes, Brad speaks of 500 MHz as a reduced bandwidth.

Accuracy, noise floor, dynamic range: In a signal analyzer, the downconvert-and-digitize architecture is optimized for signal fidelity, at some cost in digitizing bandwidth. That often makes them the only choice for distortion and spectrum emissions measurements such as harmonics, spurious, intermodulation, and adjacent-channel power. Inside the analyzer, the processing chain is characterized and calibrated to maximize measurement accuracy and frequency stability, especially for power and phase noise measurements.

Sensitivity: With their available internal and external preamps, narrow bandwidths, noise subtraction and powerful averaging, signal analyzers have the edge in finding and measuring tiny signals. Although Brad explained processing gain and some impressive improvements in noise floor for narrowband measurements with oscilloscopes, he also noted that these gains did not enhance distortion or spurious performance.

Multiple channels: Spectrum and signal analyzers have traditionally been single-channel instruments, while oscilloscope architectures often support two to four analog channels. Applications such as phased arrays and MIMO may require multiple coherent channels for some measurements, including digital demodulation. If the performance benefits of signal analyzers are needed, an alternative is a PXI-based modular signal analyzer.

Measurement speed: To perform the downconversion, filtering and resampling needed for RF measurements, oscilloscopes acquire an enormous number of samples and then perform massive amounts of data reduction and processing. This can be an issue when throughput is important.

With expanding frequency ranges and support from sophisticated VSA software, the overlap between analyzers and oscilloscopes is increasing constantly. Given the demands of new and enhanced applications, this choice is good news for RF engineers—frequently letting you stick with whichever operating paradigm you prefer.

  Wrestling information from a hostile universe

I can’t make a serious case that the universe is actively hostile, but when you’re making spurious and related measurements, it can seem that way. In terms of information theory, it makes sense: the combination of limited signal-to-noise ratio and the wide spans required means that you’re gathering lots of information at a low effective data rate. Spurious and spectrum emission mask (SEM) measurements will be slower and more difficult than you’d like.

In a situation such as this, the opportunity to increase overall productivity is so big that it pays to improve these measurements however we can. In this post I’ll summarize some recent developments that may help, and point to resources that include timeless best practices.

First, let’s recap the importance of spurious and spectrum emission and the reasons why we persist in the face of adversity. Practical spectrum sharing requires tight control of out-of-band signals, so we measure to find problems and comply with standards. Measurements must be made over a wide span, often 10 times our output frequency. However, resolution bandwidths (RBW) are typically narrow to get the required sensitivity. Because sweep time increases inversely with the square of the RBW, sensitivity can come at a painful cost in measurement time.
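The scale of that penalty is easy to estimate with the classic rule of thumb for swept analysis, sketched below. The constant k is analyzer- and filter-dependent, so the value here is only an assumption.

```python
def swept_time_s(span_hz, rbw_hz, k=2.5):
    """Rough analog-filter sweep-time estimate: t ~ k * span / RBW^2."""
    return k * span_hz / (rbw_hz ** 2)

# Each 10x reduction in RBW buys ~10 dB of noise floor but costs ~100x in sweep time.
for rbw_hz in (1e6, 100e3, 10e3):
    print(f"RBW {rbw_hz/1e3:6.0f} kHz over a 26.5 GHz span: {swept_time_s(26.5e9, rbw_hz):9.2f} s")
```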

The result is that these age-old measurements, basic in comparison to things such as digital demodulation, can consume a big chunk of the total time required for RF testing.

Now on to the good news: advances in signal processing and ADCs can make a big difference in measurement time with no performance penalty. Signal analyzers with digital RBW filters can be set to sweep much faster than analog ones, and the analyzers can precisely compensate for the dynamic effects of the faster sweep. Many years ago, when first used at low frequencies, this oversweep processing provided about a 4x improvement. In the more recent history of RF measurements, the enhanced version is called fast sweep, and the speed increase can be 50x.

Two spectrum measurements with equivalent resolution bandwidths and frequency spans are compared.  The fast sweep capability improves sweep speed by a factor of nearly 50 times.

In equivalent measurements of a 26.5 GHz span, the fast sweep feature in an X-Series signal analyzer reduces sweep time from 35.5 seconds to 717 ms, an improvement of nearly 50 times.

I’m happy to report that DSP and filter technologies continue to march forward, and the improvement from oversweep to fast sweep has been extended for some of the most challenging measurements and narrowest RBW settings. For bandwidths of 4.7 kHz and narrower, a newly enhanced fast sweep for most Keysight X-Series signal analyzers provides a further 8x improvement over the original. This speed increase will help with some of the measurements that hurt productivity the most.

Of course, the enduring challenges of spurious measurements can be met by a range of solutions, not all of them new. Keysight’s proven PowerSuite measurement application includes flexible spurious emission testing and has been a standard feature of all X-Series signal analyzers for many years.

A measurement application in a signal analyzer has a number of benefits for spurious measurements, including pass/fail testing and limit lines, automatic identification of spurs, and generation of a results table.

The PowerSuite measurement application includes automatic spurious emission measurements, such as spectrum emission mask, that include tabular results. Multiple frequency ranges can be configured, each with independent resolution and video bandwidths, detectors, and test limits.

PowerSuite allows you to improve spurious measurements by adding information to the tests, measuring only the required frequencies. Another way to add information and save time is to use the customized spurious and spectrum emissions tests included in standard-specific measurement applications.

A new application note, Accelerating Spurious Emission Measurements using Fast-Sweep Techniques, includes more detailed explanations, techniques, and resources. You’ll find it in the growing collection at our signal analysis fundamentals page.

  Jet engines, furry hoods, and a nod to four years of the Better Measurements blog

This blog gives me occasional freedom to explore technology and phenomena that have only a peripheral relationship to RF design and measurement. Sometimes it feels like a permission slip to cut class and wander off to interesting places with remarkable analogs to our world. I say this by way of warning that it may take me a while to get around to something central to RF engineering this time.

This little side trip begins with high-bypass turbofans and the artistic-looking scallops or chevrons on the outer nacelles and sometimes the turbine cores.

Turbofan jet engine with chevrons or scallops on trailing edge of engine nacelle and engine core, to reduce turbulence and noise.

The NASA-developed chevrons or scallops at the trailing edge of this turbofan engine reduce engine noise by causing a more gradual blending of air streams of different velocities. This reduces shear and the resulting noise-creating turbulence. They look cool, too. (Image from Wikimedia Commons)

In my mental model, shear is the key here. Earlier turbojets had a single outlet with very high velocity, creating extreme shear speeds as the exhaust drove into the ambient air. The large speed differences created lots of turbulence and corresponding noise.

Turbofans reduce noise dramatically by accelerating another cylinder of air surrounding the hot, high-speed turbine core. This cylinder is faster than ambient, but slower than the core output, creating an intermediate-speed air stream and two mixing zones. The shear speeds are now much lower, reducing turbulence and noise.

The chevrons further smooth the blending of air streams, so turbulence and noise are both improved. It’s a complicated technique to engineer, but effective, passive, and simple to implement.

Shear is useful in understanding many other technologies, modern and ancient. When I saw those nacelles I thought of the Inuit and the hoods of their parkas with big furry rims or “ruffs.” In a windy and bitter environment they reduce shear at the edges of the hood, creating a zone of calmer air around the face. A wind tunnel study confirmed Inuit knowledge that the best fur incorporates hairs of varying length and stiffness, anticipating—in microcosm—the engine nacelle chevrons.

The calm air zone reduces wind chill, and it also reduces noise. Years ago I had a parka hood with a simple non-furry nylon rim that would howl at certain air speeds and angles.

Another, more modern example is the large, furry microphone windscreen called (I am not making this up) a “dead cat.” At the cost of some size, weight, and delicacy (imagine the effect of rain) this is perhaps the most effective way to reduce wind noise.

The opposite approach to shear and noise is equally instructive. “Air knife” techniques have been used for years to remove fluids from surfaces, and you can now find them in hand dryers in public restrooms. They inevitably make a heck of a racket because the concentrated jet of air and resulting shear are also what makes them effective in knocking water from your hands. Personally, I don’t like the tradeoffs in this case.

In RF applications, we generally avoid the voltage and current equivalents of shear and undesirable signal power. When we can’t adequately reduce the power, we shape it to make it less undesirable, or we push it around to a place where it will cause less trouble. For example, PLLs in some frequency synthesizers can be set to optimize phase noise at narrow versus wide offsets.

Switch-mode power supplies are another example of undesirable power, typically because of the high dv/dt and di/dt of their pulsed operation. It isn’t usually the total power that causes them to fail EMC tests, but the power concentrated at specific frequencies. From a regulatory point of view, an effective solution can be to modulate or dither the switching frequency to spread the power out.

One final example is the tactic of pushing some noise out of band. Details are described in an article on delta-sigma modulation for data converters. Oversampling and noise shaping shift much of the noise to frequencies where it can be removed with filtering.

I’m sure that’s enough wandering for now, but before I finish this post I wanted to note that we’ve passed the four-year anniversary of the first post here. I’d like to thank all of you who tolerate my ramblings, and encourage you to use the comments to add information, suggest topics, or ask questions. Thanks for reading!

  Taking advantage of knowledge your signal analyzer doesn’t have

To respect the value of your time and the limits of your patience, I try to keep these posts relatively short and tightly focused. Inevitably, some topics demand more space, and this follow-up to December’s post Exchanging Information You’ve Got for Time You Need is one of those. Back then, I promised additional suggestions for adding information to the measurement process to optimize the balance of performance and speed for your needs.

Previously, I used the example of a distortion measurement, setting attenuation to minimize analyzer distortion. This post touches on the other common scenario of improving analyzer noise floor, including the choice of input attenuation. A lower noise floor is important for finding and accurately measuring small signals, and for measuring the noise of a DUT.

One of the first items to add is the tolerable amount of analyzer noise contribution and the amplitude error it can cause. If you’re measuring the noise of your DUT, the table below from my post on low SNR measurements summarizes the effects.

Noise ratio or signal/noise ratio or SNR and measurement error from analyzer noise floor or DANL

Examples of amplitude measurement error values—always positive—resulting from measurements made near the noise floor. Analyzer noise in the selected resolution bandwidth adds to the input signal.

Only you can decide how much error from analyzer noise is acceptable in the measurement; however, a 10 dB noise ratio with 0.41 dB error is not a bad place to start. It’s worth noting that a noise ratio of about 20 dB is required if the error is to be generally negligible.
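Those error values follow directly from power addition, so you can generate the table yourself for whatever noise ratio matters to you. A minimal sketch:

```python
import math

def amplitude_error_db(noise_ratio_db):
    """Positive amplitude error when analyzer noise adds to the signal power."""
    return 10 * math.log10(1 + 10 ** (-noise_ratio_db / 10))

for nr_db in (3, 6, 10, 15, 20):
    print(f"signal {nr_db:2d} dB above the noise floor -> reads {amplitude_error_db(nr_db):4.2f} dB high")
```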

Sadly, the input attenuation setting for best analyzer noise floor is not the same as that for best distortion performance. The amount of analyzer distortion you can tolerate is another useful factor. Reducing attenuation will improve analyzer noise floor and SNR, but at some point the cost in analyzer distortion performance may outweigh the benefit. And remember that video averaging provides a “free” noise floor benefit of 2.5 dB for measurements of CW signals.

Comparison of signal measurement and analyzer noise floor for different values of input attenuation and use of video averaging to improve effective noise floor

Reducing input attenuation by 12 dB improves noise floor by a similar amount, as shown in the yellow and blue traces. Using a narrow video bandwidth (purple trace) for averaging reduces the measured noise floor but does not affect the measurement of the CW signal.

You can consult your analyzer’s specifications to find its warranted noise floor and adjust for resolution bandwidth, attenuation, etc. That approach may be essential if you’re using the measurements to guarantee the performance of your own products, but your specific needs are another crucial data point. If you simply want the best performance for a given configuration, you can experiment with attenuation settings versus distortion performance to find the best balance.

Many analyzer specs also include “typical” values for some parameters, and these can be extremely helpful additions. Of course, only you can decide whether the typicals apply, and whether it’s proper for you to rely on them.

If you use Keysight calibration services, they may be another source of information. Measurement results are available online for individual instruments and can include the measurement tolerances involved.

Signal analyzers themselves can be a source of information for improving measurements, and the Noise Floor Extension feature in some Keysight signal analyzers is a useful example. Each analyzer contains a model of its own noise power for all instrument states, and can automatically subtract this power to substantially improve its effective spectrum noise floor.
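The idea behind that subtraction can be sketched in a few lines. This is only the basic power-domain arithmetic, assuming the analyzer noise is incoherent with the signal; it is not the actual Noise Floor Extension implementation.

```python
import math

def subtract_noise_power_dbm(measured_dbm, modeled_noise_dbm):
    """Remove a known, incoherent noise contribution from a measured power."""
    remainder_mw = 10 ** (measured_dbm / 10) - 10 ** (modeled_noise_dbm / 10)
    if remainder_mw <= 0:
        return float("-inf")   # reading was at or below the modeled analyzer noise
    return 10 * math.log10(remainder_mw)

# A trace point at -97 dBm with a modeled analyzer noise floor of -100 dBm
print(f"{subtract_noise_power_dbm(-97.0, -100.0):.1f} dBm after subtraction")
```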

For microwave measurements, many signal analyzers use preselector filters to remove undesirable mixing products created in the analyzer’s downconversion process. However, these filters have some insertion loss, which increases the analyzer’s effective noise floor. A valuable nugget that you alone have is whether the mixing products or other signals will be a problem in your setup. If not, you can bypass the preselector and further improve the noise floor.

Finally, one often-overlooked tidbit is whether the signal in question is a consistently repeating burst or pulse. For these signals, time averaging can be a powerful tool. This averaging is typically used with vector signal analysis, averaging multiple bursts in the time domain before performing signal analysis. It can improve noise floor dramatically and quickly, and the result can be used for all kinds of signal analysis and demodulation.
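Here is a small sketch of the principle using synthetic data: coherently averaging N time-aligned captures of a repeating burst leaves the burst untouched while reducing the noise power by roughly 10*log10(N). It assumes NumPy and perfectly repeatable triggering, and it is not the VSA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_captures, n_samples = 100, 2048
t = np.arange(n_samples) / 1.0e6

burst = np.cos(2 * np.pi * 50e3 * t)                                # the repeating signal of interest
captures = burst + rng.normal(0.0, 1.0, (n_captures, n_samples))    # time-aligned noisy records

averaged = captures.mean(axis=0)                                    # coherent time-domain average

noise_before = np.var(captures[0] - burst)
noise_after = np.var(averaged - burst)
print(f"noise improvement ~ {10 * np.log10(noise_before / noise_after):.1f} dB "
      f"(expect about {10 * np.log10(n_captures):.0f} dB for {n_captures} averages)")
```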

Sorry for going on so long. There are other potential sources of information you can add, but these are some of the most useful I’ve found. If you know of others, please add a comment to enlighten the rest of us.


Stealing MIMO

Posted by benz Jan 19, 2017

  An unacknowledged “borrowing”

Few things in technology are completely original, “from the ground up” innovations. Of course, that’s no knock on those who creatively move technology forward, whether they’re making a new idea practical, applying existing technology in a new area, or fixing the multitude of problems that always crop up.

All of us know how much hard work it is to drive something from the bright idea stage to the point where it can be manufactured and deployed to satisfy a need and make a profit. Therefore, it’s no surprise that many important technologies have recognizable—if not always acknowledged—precursors.

Today’s example is a technology precursor that is both remarkably close and remarkably unknown: the use of multiple-input/multiple-output (MIMO) concepts in modal analysis of structures and its claim for priority over MIMO in RF communications. The parallels in terms of the fundamental concept—the matrix math and the very physical principles involved—are so close that I doubt you’ll object to the rather pejorative term used in the title of this post. If it’s an exaggeration, I’d argue it’s forgivable.

First, let’s take a look at a recent example of the technology that came first: understanding the dynamic response of a structure by stimulating it and analyzing it at multiple locations simultaneously. This is called MIMO modal analysis, referring to the various “modes of vibration” that occur in the structure.

MIMO modal analysis structural test of wind turbine rotor. Multiple input, multiple-output

MIMO structural (modal) analysis of a wind turbine rotor. Three electromechanical shakers convert stimulus signals (e.g., noise or sinusoids) into mechanical inputs to the turbine. The response of the turbine blades is measured in multiple locations simultaneously by small accelerometers. (Image from Wikimedia Commons)

Practical MIMO analysis of structures dates back to the 1970s, perhaps 20 years before it was clearly conceptualized for communications. The technique was first used for modal testing of aircraft and satellites for several reasons. First, the inherent parallelism of the approach dramatically improved test time, reducing or eliminating the need to reposition shakers and accelerometers and repeat tests, to reveal all structural modes. The time value of prototype test articles meant that MIMO saved enough money to pay for the additional test equipment.

In addition, structural vibration frequencies were low enough to be within the range of early generation ADCs, DACs, and—most importantly—DSP resources. MIMO analysis is very computationally intensive, requiring vector or complex calculations on all signals. The results were (and still are) used to animate an image of the structure at specific resonant frequencies and the associated modes or deformation shapes (e.g., bending, torsion, etc.).

I wasn’t deeply familiar with the details of MIMO structural analysis when I first heard of MIMO for RF communications, but remember thinking, “This looks like it’s essentially the same thing.” Both kinds of MIMO involve a full vector understanding of the propagation of energy from multiple origins to multiple destinations. This understanding is used to make the propagation paths separable, via matrix math (e.g., eigenvectors), despite the fact that the propagation is happening simultaneously over a single frequency range.

The separated paths can be used to understand structural deformation at multiple locations, due to multiple inputs simultaneously. They can also be used to create multiple signal paths from a single RF channel, dramatically increasing capacity. Just what modern wireless so desperately needs!  It simply took nearly three decades for the necessary wideband real-time processing to become practical.
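The shared mathematics is easy to see in a few lines: a singular value decomposition of the frequency-domain transfer matrix separates the simultaneous propagation paths, whether that matrix relates shaker forces to accelerometer responses or transmit antennas to receive antennas. The random matrix and equal power split below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# H maps multiple simultaneous inputs to multiple outputs at one frequency:
# shaker forces -> accelerometer responses, or Tx antennas -> Rx antennas.
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

U, s, Vh = np.linalg.svd(H)          # orthogonal input/output combinations

# Each singular value is the gain of an independent path sharing the same band.
snr = 100.0                          # 20 dB total, split equally across the streams
capacity = np.sum(np.log2(1 + (snr / len(s)) * s**2))
print("independent path gains:", np.round(s, 2))
print(f"aggregate spectral efficiency ~ {capacity:.1f} b/s/Hz")
```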

In the years since I first heard about wireless MIMO, I’ve encountered remarkably few RF engineers who are aware of the technique’s history and the very direct parallels. In retrospect, I guess I shouldn’t be surprised. I imagine the intersection of structural and RF engineers who both know MIMO is vanishingly small. Also, the developers of RF MIMO do not seem to call out the similarities in any of their explanations.

Nonetheless, the commonality in everything from name to mathematics to the fundamental physics is one of the most impressive examples of technology reuse I’ve ever encountered. We may never know whether it’s an example of fully independent innovation or some truly inspired and creative borrowing.


Your Signal Analyzer, Unleashed

Posted by benz Jan 4, 2017

  Signal analyzers are flexible, upgradable measurement platforms. This infographic has suggestions and reminders of how to maximize the value of yours.


Infographic describes the multiple capabilities available in signal analyzers, including measurement applications, real time signal analysis, vector signal analysis, digital modulation analysis, and performance optimization such as noise subtraction and fast sweeps.

To learn how to maximize the power and performance of your signal analyzers, visit www.keysight.com/find/spectrumanalysisbasics

  Make better measurements by using all the information you can get

Some of the things we do as part of good measurement practice are part of a larger context, and understanding this context can help you save time and money. Sometimes they’ll save our sanity when we’re trying to meet impossible targets, but that’s a topic for another day.

Take, for example, the simple technique for setting the best input level to measure harmonic distortion with a signal analyzer. The analyzer itself contributes distortion that rises with the signal level at its input mixer, and that distortion can interfere with that of the signal under test. To avoid this problem, the analyzer’s attenuator—sitting between the input and the mixer—is adjusted to bring the mixer signal level down to a point where analyzer distortion can be ignored.

It’s a straightforward process of comparing measured distortion at different attenuation values, as shown below.

Second harmonic distortion in a signal analyzer caused by overdriving the analyzer input mixer

Adjusting analyzer attenuation to minimize its contribution to measured second harmonic distortion. Increasing attenuation (blue trace) improves the distortion measurement, reducing the distortion contributed by the analyzer.

When increases in attenuation produce no improvement in measured distortion, the specific analyzer and measurement configuration are optimized for the specific signal in question. This is a deceptively powerful result, customized beyond what might be done from published specifications.
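That comparison lends itself to a simple loop. The sketch below assumes a hypothetical helper, measure_harmonic_dbc(atten_db), that sets the input attenuation and returns the measured second-harmonic level in dBc; it is not a real instrument-driver call.

```python
def find_sufficient_attenuation(measure_harmonic_dbc, start_db=0, step_db=4,
                                max_db=40, settle_db=0.5):
    """Raise attenuation until measured distortion stops improving (analyzer no longer dominates)."""
    atten_db = start_db
    previous_dbc = measure_harmonic_dbc(atten_db)      # hypothetical measurement helper
    while atten_db + step_db <= max_db:
        atten_db += step_db
        current_dbc = measure_harmonic_dbc(atten_db)
        if previous_dbc - current_dbc < settle_db:     # no meaningful improvement this step
            return atten_db - step_db, previous_dbc    # last setting that still mattered
        previous_dbc = current_dbc
    return atten_db, previous_dbc
```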

Of course, the reason not to use a really low mixer level (i.e., lots of attenuation) is to avoid an excessively high noise level in the measurement, which can hide low-level signals. Indeed, if the goal is to reveal spurious or other signals, less attenuation will be used and distortion performance will be traded for improved signal/noise ratio (SNR).

Setting attenuation levels to one type of performance is standard practice, and it’s useful to understand this in the larger context of adding information to customize the measurement to your priorities. This approach has a number of benefits that I’ve summarized previously in the graphic below.

Spectrum measurement as a tradeoff of cost, time, and information. Information can be added to improve the cost-time tradeoff.

A common tradeoff is between cost, time, and information. Information is an output of the measurement process, but it can also be an input, improving the process.

The interactions and tradeoffs are worth a closer look. One possible choice is to purchase an analyzer with higher performance in terms of distortion, noise, and accuracy. Alternatively, you can compensate for the increased noise (due to the high attenuation value) by using a narrower resolution bandwidth (RBW). You’ll get good performance in both distortion and noise floor, but at the cost of increased measurement time.

That alternative approach involves using your engineering knowledge to add information to the process, perhaps in ways you haven’t considered. The optimization of distortion and noise described above is a good example. Only you know how to best balance the tradeoffs as they apply to your priorities—and adjusting the measurement setup means adding information that the analyzer does not have.

There are other important sources of information to add to this process, ones that can improve the overall balance of performance or speed without other compromises. Some involve information you possess, and some use information from Keysight and from the analyzer itself.

I’ll describe them in a part 2 post here next month. In the meantime you can find updated information about spectrum analysis techniques at our new page Spectrum Analysis At Your Fingertips. We’ll be adding information there and, as always, here at the Better Measurements RF Test Blog.

  It can be both. A look at the engineering behind some beautiful footage

We’ll be back to our regular RF/microwave test focus shortly, but sometimes at the holidays I celebrate by taking a look at other examples of remarkable engineering. I have a strong interest in space technology, so you can imagine where this post is going.

Let’s get right to the fun and the art. Then to the fascinating engineering behind it. Even if you didn’t know what you were watching, you’ve probably seen the footage of staging (i.e., stage separation and ignition) of the Apollo program Saturn V moon rockets. Some have realized the artistic potential and have added appropriate music. Here’s a recent example: I suggest you follow the link and enjoy the 75-second video.

Welcome back. Now let’s discuss a little of the amazing engineering and physics involved.

Staging is an elaborately choreographed event, involving huge masses, high speeds, and a very precise time sequence. By the time you see the beginning of stage separation at 0:15 in the linked video, the first-stage engines have been shut down and eight retrorockets in the conical engine fairings at the base of the first stage have been firing for a couple of seconds. They’ve got a combined thrust of more than 700,000 pounds and are used with pyrotechnics to disconnect and separate the first stage from the second.

At this point the rocket is traveling about 6,000 mph, more horizontally than vertically. The first stage looks like it’s falling back to earth, but its vertical velocity is still about 2,000 mph. It’s almost 40 miles high and in a near-vacuum, and will coast upward about another 30 miles over the next 90 seconds.

In the meantime, the five engines of the second stage will be burning to take the spacecraft much higher and faster, nearly to orbit. Before they can do this, the temporarily weightless fuel in the stage must be settled back to the bottom of the tanks so it can be pumped into the engines. That’s the job of eight solid-rocket ullage motors—it’s an old and interesting term, look it up!—that fire before and during the startup of the second-stage engines.

Those engines start up at around 0:20, an event that’s easy to miss. They burn liquid hydrogen and liquid oxygen, and the combustion product is just superheated steam. For the rest of this segment you’re looking at the Earth through an exhaust plume corresponding to more than one million pounds of thrust, and it’s essentially invisible. That clear rocket exhaust is important in explaining the orange glow in the image below, at about 0:36 in the video.

Still frame from film of Apollo Saturn V staging interstage release

Aft view from Saturn V second stage, just after separation of interstage segment. The five Rocketdyne J2 engines burn liquid hydrogen and oxygen, producing a clear exhaust. Image from NASA.

The first and second stages have two planes of separation, and the structural cylinder between them is called the interstage. It’s a big thing, 33 feet in diameter and 18 feet tall. It’s also heavy, with the now-spent ullage motors being mounted there, and so it’s the kind of thing you want to leave behind when you’re done with it. Once the second stage is firing correctly and its trajectory is stable, pyrotechnics disconnect the interstage.

That’s what’s happening in the image above, and perhaps the most interesting thing about this segment is how the interstage is lit up as it’s impacted not once but twice by the hot exhaust plume. I saw this video many years ago and always wondered what the bursts of fire were; now I know.

The video shows one more stage separation, with its own remarkable engineering and visuals. At 0:43 the view switches to one forward from that same second stage, and the process of disconnection and separation happens again. The pyrotechnics, retrorockets, and ullage motors are generally similar. The fire from the retros is visible as a bright glow from 0:44 and when it fades, at about 0:46, the firing of the three ullage motors is clearly visible.

The next fascinating thing happens at 0:47, when the single J2 engine ignites right before our eyes. We get to see right down the throat of a rocket engine as it lights up and achieves stable combustion. Once again, the exhaust plume is clear, and only the white dot of the flame in its combustion chamber is visible.

The third stage accelerates away, taking the rest of the vehicle to orbit. Remarkably, for an engine using cryogenic fuels, this special J2 can be restarted, and that’s just what will happen after one or two Earth orbits. After vehicle checkout, the second burn of this stage propels the spacecraft towards its rendezvous with the Moon.

I could go on—and on and on—but by now you may have had enough of this 50-year-old technology. If not, you can go elsewhere, online and offline, and find information and images of all kinds. The links and references in my previous Apollo post are a good start.

  A single analyzer can cover 3 Hz to 110 GHz. Is that the best choice for you?

I haven’t made many measurements at millimeter frequencies, and I suspect that’s true of most who read this blog. All the same, we follow developments in this area—and I can think of several reasons why.

First, these measurements and applications are getting a lot of press coverage, ranging from stories about the technology to opinion pieces discussing the likely applications and their estimated market sizes. That makes sense, given the expansion of wireless and other applications to higher frequencies and wider bandwidths. As a fraction of sales—still comparatively small—more money and engineering effort will be devoted to solving the abundant design and manufacturing problems associated with these frequencies.

I suppose another reason involves a kind of self-imposed challenge or discipline, reminiscent of Kennedy’s speech about going to the moon before 1970, where he said the goal would “serve to organize and measure the best of our energies and skills.” Getting accurate, reliable measurements at millimeter frequencies will certainly challenge us to hone our skills and pay close attention to factors that we treat more casually at RF or even microwave.

Of course, this self-improvement also has a concrete purpose if we accept the inevitability of the march to higher frequencies and wider bandwidths. For some, millimeter skills are just part of the normal process of gathering engineering expertise to stay current and be ready for what’s next.

In an earlier post I mentioned the new N9041B UXA X-Series signal analyzer and focused attention on the 1 mm connectors used to get coaxial coverage of frequencies to 110 GHz. In this post I’ll summarize two common signal analyzer choices at these frequencies and some of their tradeoffs.

Two types of solutions are shown below. The first practical millimeter measurements were made with external mixers, but signal analyzers with direct coverage to the millimeter bands are becoming more common.

M1970 & M1971 Smart external waveguide mixers (left) and N9041B UXA X-Series signal analyzer (right)

External mixing (left) has long been a practical and economical way to make millimeter frequency measurements. Signal analyzers such as the new N9041B UXA (right) bring the performance and convenience of a single-instrument solution with continuous direct coverage from 3 Hz to 110 GHz.

My post on external mixing described how the approach effectively moves the analyzer’s first mixer outside of the analyzer itself. The analyzer supplies an LO drive signal to the mixer and receives—sometimes through the same cable—a downconverted IF signal to process and display. This provides a lower-cost solution, where analyzers with lower frequency coverage handle millimeter signals through a direct waveguide input.

In use, this setup is more complicated than a one-box solution, but innovations such as smart mixers with USB plug-and-play make the connection and calibration process more convenient. An external mixer can be a kind of “remote test head,” extending the analyzer’s input closer to the DUT and simplifying waveguide connections or the location of an antenna for connectorless measurements.

The drawbacks of external mixers include their banded nature (i.e., limited frequency coverage) and lack of input conditioning such as filters, attenuators, and preamps. In addition, their effective measurement bandwidth is limited by the IF bandwidth of the host analyzer, a problem for the very wideband signals used so often at millimeter frequencies. Finally, external mixing often requires some sort of signal-identification process to separate the undesirable mixer products that appear as false or alias signals in the analyzer display. Signal identification is straightforward with narrowband signals, but can be impractical with very wide ones.

Millimeter signal analyzers offer a measurement solution that is better in almost all respects, but is priced accordingly. They provide direct, continuous coverage, calibrated results and full specifications. Their filters and processing eliminate the need for signal identification, and their input conditioning makes it easier to optimize for sensitivity or dynamic range.

The new N9041B UXA improves on current one-box millimeter solutions in several ways. Continuous coverage now extends to 110 GHz and—critically—analysis bandwidths are extended to 1 GHz internally and to 5 GHz or more with external sampling.

Sensitivity is another essential for millimeter frequency measurements. Power is hard to come by at these frequencies, and the wide bandwidths used can gather substantial noise, limiting SNR. The DANL of the UXA is better than -150 dBm/Hz all the way to 110 GHz and, along with careful connections, should yield excellent spurious and emissions measurements.
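To see why sensitivity matters so much at these bandwidths, it helps to integrate that noise density across the analysis bandwidth. A quick sketch using the -150 dBm/Hz figure quoted above:

```python
import math

def integrated_noise_dbm(danl_dbm_per_hz, analysis_bw_hz):
    """Noise power gathered across an analysis bandwidth from a flat noise density."""
    return danl_dbm_per_hz + 10 * math.log10(analysis_bw_hz)

for bw_hz in (10e6, 1e9, 5e9):
    print(f"{bw_hz/1e6:6.0f} MHz analysis bandwidth -> ~{integrated_noise_dbm(-150.0, bw_hz):6.1f} dBm of noise")
```

Seen this way, a signal spread across a 1 GHz analysis bandwidth needs to sit well above roughly -60 dBm at the analyzer input to preserve a usable SNR.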

Millimeter measurements, especially wideband ones, will continue to be demanding, but the tools are in place to handle them as they become a bigger part of our engineering efforts.