All Places > Keysight Blogs > Better Measurements: The RF Test Blog > Blog

Stealing MIMO

Posted by benz Jan 19, 2017

  An unacknowledged “borrowing”

Few things in technology are completely original, “from the ground up” innovations. Of course, that’s no knock on those who creatively move technology forward, whether they’re making a new idea practical, applying existing technology in a new area, or fixing the multitude of problems that always crop up.

All of us know how much hard work it is to drive something from the bright idea stage to the point where it can be manufactured and deployed to satisfy a need and make a profit. Therefore, it’s no surprise that many important technologies have recognizable—if not always acknowledged—precursors.

Today’s example is a technology precursor that is both remarkably close and remarkably unknown: the use of multiple-input/multiple-output (MIMO) concepts in modal analysis of structures and its claim for priority over MIMO in RF communications. The parallels in terms of the fundamental concept—the matrix math and the very physical principles involved—are so close that I doubt you’ll object to the rather pejorative term used in the title of this post. If it’s an exaggeration, I’d argue it’s forgivable.

First, let’s take a look at a recent example of the technology that came first: understanding the dynamic response of a structure by stimulating it and analyzing it at multiple locations simultaneously. This is called MIMO modal analysis, referring to the various “modes of vibration” that occur in the structure.

MIMO modal analysis structural test of wind turbine rotor. Multiple input, multiple-output

MIMO structural (modal) analysis of a wind turbine rotor. Three electromechanical shakers convert stimulus signals (e.g., noise or sinusoids) into mechanical inputs to the turbine. The response of the turbine blades is measured in multiple locations simultaneously by small accelerometers. (Image from Wikimedia Commons)

Practical MIMO analysis of structures dates back to the 1970s, perhaps 20 years before it was clearly conceptualized for communications. The technique was first used for modal testing of aircraft and satellites for several reasons. First, the inherent parallelism of the approach dramatically reduced test time by cutting or eliminating the need to reposition shakers and accelerometers and repeat tests to reveal all structural modes. The time value of prototype test articles meant that MIMO saved enough money to pay for the additional test equipment.

In addition, structural vibration frequencies were low enough to be within the range of early generation ADCs, DACs, and—most importantly—DSP resources. MIMO analysis is very computationally intensive, requiring vector or complex calculations on all signals. The results were (and still are) used to animate an image of the structure at specific resonant frequencies and the associated modes or deformation shapes (e.g., bending, torsion, etc.).

I wasn’t deeply familiar with the details of MIMO structural analysis when I first heard of MIMO for RF communications, but remember thinking, “This looks like it’s essentially the same thing.” Both kinds of MIMO involve a full vector understanding of the propagation of energy from multiple origins to multiple destinations. This understanding is used to make the propagation paths separable, via matrix math (e.g., eigenvectors), despite the fact that the propagation is happening simultaneously over a single frequency range.

The separated paths can be used to understand structural deformation at multiple locations, due to multiple inputs simultaneously. They can also be used to create multiple signal paths from a single RF channel, dramatically increasing capacity. Just what modern wireless so desperately needs!  It simply took nearly three decades for the necessary wideband real-time processing to become practical.
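To make the shared math concrete, here’s a minimal sketch (Python/NumPy, with an arbitrary random matrix standing in for a measured RF channel or structural transfer matrix) of how a singular-value decomposition separates simultaneous propagation paths into independent ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 transfer matrix H: entry (i, j) is the complex gain
# from source j to destination i -- transmit antenna to receive antenna,
# or shaker to accelerometer in the structural case.
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# The SVD factors the matrix into independent parallel sub-channels.
U, s, Vh = np.linalg.svd(H)

# Drive along the right-singular vectors, observe along the left ones:
x = np.array([1.0, 2.0, 3.0])          # three independent streams
received = U.conj().T @ (H @ (Vh.conj().T @ x))

# Each stream arrives scaled by its singular value, with no crosstalk.
print(np.allclose(received, s * x))
```

The same decomposition that animates a structure’s independent mode shapes is what lets an RF system run several data streams over one frequency range at once.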

In the years since I first heard about wireless MIMO, I’ve encountered remarkably few RF engineers who are aware of the technique’s history and the very direct parallels. In retrospect, I guess I shouldn’t be surprised. I imagine the intersection of structural and RF engineers who both know MIMO is vanishingly small. Also, the developers of RF MIMO do not seem to call out the similarities in any of their explanations.

Nonetheless, the commonality in everything from name to mathematics to the fundamental physics is one of the most impressive examples of technology reuse I’ve ever encountered. We may never know whether it’s an example of fully independent innovation or some truly inspired and creative borrowing.


Your Signal Analyzer, Unleashed

Posted by benz Jan 4, 2017

  Signal analyzers are flexible, upgradable measurement platforms. This infographic has suggestions and reminders of how to maximize the value of yours.


Infographic describes the multiple capabilities available in signal analyzers, including measurement applications, real time signal analysis, vector signal analysis, digital modulation analysis, and performance optimization such as noise subtraction and fast sweeps.

Learn how to maximize the power and performance of your signal analyzers.

Make better measurements by using all the information you can get

Some of the things we do as part of good measurement practice are part of a larger context, and understanding this context can help you save time and money. Sometimes it can even save your sanity when you’re trying to meet impossible targets, but that’s a topic for another day.

Take, for example, the simple technique for setting the best input level to measure harmonic distortion with a signal analyzer. The analyzer itself contributes distortion that rises with the signal level at its input mixer, and that distortion can interfere with that of the signal under test. To avoid this problem, the analyzer’s attenuator—sitting between the input and the mixer—is adjusted to bring the mixer signal level down to a point where analyzer distortion can be ignored.

It’s a straightforward process of comparing measured distortion at different attenuation values, as shown below.

Second harmonic distortion in a signal analyzer caused by overdriving the analyzer input mixer

Adjusting analyzer attenuation to minimize its contribution to measured second harmonic distortion. Increasing attenuation (blue trace) improves the distortion measurement, reducing the distortion contributed by the analyzer.

When increases in attenuation produce no improvement in measured distortion, the specific analyzer and measurement configuration are optimized for the specific signal in question. This is a deceptively powerful result, customized beyond what might be done from published specifications.
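The comparison loop is simple enough to sketch in code. This hypothetical Python fragment steps attenuation until the measured second harmonic stops improving; the measurement function here is a stub with an assumed 2 dB-per-dB behavior, standing in for an actual analyzer query:

```python
def find_optimum_attenuation(measure_h2_dbc, atten_steps_db, tol_db=0.5):
    """Step input attenuation upward until the measured second-harmonic
    level (dBc) stops improving by more than tol_db -- the point where
    the analyzer's own distortion has dropped below the signal's."""
    best_atten = atten_steps_db[0]
    best_h2 = measure_h2_dbc(best_atten)
    for atten in atten_steps_db[1:]:
        h2 = measure_h2_dbc(atten)
        if best_h2 - h2 < tol_db:    # no meaningful improvement: stop
            break
        best_atten, best_h2 = atten, h2
    return best_atten, best_h2

# Stub: the DUT's true second harmonic is -75 dBc; analyzer-contributed
# distortion dominates at low attenuation and (in this illustrative
# model) improves 2 dB per dB of added attenuation.
def fake_measurement(atten_db):
    analyzer_h2 = -55.0 - 2.0 * atten_db
    return max(-75.0, analyzer_h2)

atten, h2 = find_optimum_attenuation(fake_measurement,
                                     [0, 2, 4, 6, 8, 10, 12, 14])
print(atten, h2)
```

In a real setup the stub would be replaced by SCPI queries to the analyzer, but the stopping logic is the same.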

Of course, the reason to not use a really low mixer level (i.e., lots of attenuation) is to avoid an excessively high noise level in the measurement and thereby lose low-level signals. Indeed, if the goal is to reveal spurious or other signals, less attenuation will be used and distortion performance will be traded for improved signal/noise ratio (SNR).

Setting attenuation to optimize one type of performance is standard practice, and it’s useful to see this in the larger context of adding information to customize the measurement to your priorities. This approach has a number of benefits that I’ve summarized previously in the graphic below.

Spectrum measurement as trading off cost, time, information.Information can be added to improve cost-time tradeoff.

A common tradeoff is between cost, time, and information. Information is an output of the measurement process, but it can also be an input, improving the process.

The interactions and tradeoffs are worth a closer look. One possible choice is to purchase an analyzer with higher performance in terms of distortion, noise, and accuracy. Alternatively, you can compensate for the increased noise (due to the high attenuation value) by using a narrower resolution bandwidth (RBW). You’ll get good performance in both distortion and noise floor, but at the cost of increased measurement time.
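The arithmetic behind that tradeoff is worth a quick sketch. Using the usual rules of thumb (displayed noise floor scales with 10·log10 of the RBW ratio; swept measurement time scales roughly as 1/RBW² for a fixed span), narrowing RBW by 10x buys about 10 dB of noise floor at roughly 100x the sweep time:

```python
import math

def noise_floor_delta_db(rbw_old_hz, rbw_new_hz):
    # Displayed noise changes by 10*log10(RBW_new / RBW_old) dB
    return 10 * math.log10(rbw_new_hz / rbw_old_hz)

def sweep_time_factor(rbw_old_hz, rbw_new_hz):
    # Swept-analysis time scales roughly as span / RBW^2
    return (rbw_old_hz / rbw_new_hz) ** 2

# Narrowing RBW from 1 kHz to 100 Hz:
print(noise_floor_delta_db(1e3, 1e2))   # 10 dB lower noise floor
print(sweep_time_factor(1e3, 1e2))      # about 100x longer sweep
```

These are rule-of-thumb relationships; FFT-based sweeps in modern analyzers soften the time penalty considerably.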

That alternative approach involves using your engineering knowledge to add information to the process, perhaps in ways you haven’t considered. The optimization of distortion and noise described above is a good example. Only you know how to best balance the tradeoffs as they apply to your priorities—and adjusting the measurement setup means adding information that the analyzer does not have.

There are other important sources of information to add to this process, ones that can improve the overall balance of performance or speed without other compromises. Some involve information you possess, and some use information from Keysight and from the analyzer itself.

I’ll describe them in a part 2 post here next month. In the meantime you can find updated information about spectrum analysis techniques at our new page Spectrum Analysis At Your Fingertips. We’ll be adding information there and, as always, here at the Better Measurements RF Test Blog.

  It can be both. A look at the engineering behind some beautiful footage

We’ll be back to our regular RF/microwave test focus shortly, but sometimes at the holidays I celebrate by taking a look at other examples of remarkable engineering. I have a strong interest in space technology, so you can imagine where this post is going.

Let’s get right to the fun and the art. Then to the fascinating engineering behind it. Even if you didn’t know what you were watching, you’ve probably seen the footage of staging (i.e., stage separation and ignition) of the Apollo program Saturn V moon rockets. Some have realized the artistic potential and have added appropriate music. Here’s a recent example: I suggest you follow the link and enjoy the 75-second video.

Welcome back. Now let’s discuss a little of the amazing engineering and physics involved.

Staging is an elaborately choreographed event, involving huge masses, high speeds, and a very precise time sequence. By the time you see the beginning of stage separation at 0:15 in the linked video, the first-stage engines have been shut down and eight retrorockets in the conical engine fairings at the base of the first stage have been firing for a couple of seconds. They’ve got a combined thrust of more than 700,000 pounds and are used with pyrotechnics to disconnect and separate the first stage from the second.

At this point the rocket is traveling about 6,000 mph, more horizontally than vertically. The first stage looks like it’s falling back to earth, but its vertical velocity is still about 2,000 mph. It’s almost 40 miles high and in a near-vacuum, and will coast upward about another 30 miles over the next 90 seconds.

In the meantime, the five engines of the second stage will be burning to take the spacecraft much higher and faster, nearly to orbit. Before they can do this, the temporarily weightless fuel in the stage must be settled back to the bottom of the tanks so it can be pumped into the engines. That’s the job of eight solid-rocket ullage motors—it’s an old and interesting term, look it up!—that fire before and during the startup of the second-stage engines.

Those engines start up at around 0:20, an event that’s easy to miss. They burn liquid hydrogen and liquid oxygen, and the combustion product is just superheated steam. For the rest of this segment you’re looking at the Earth through an exhaust plume corresponding to more than one million pounds of thrust, and it’s essentially invisible. That clear rocket exhaust is important in explaining the orange glow in the image below, at about 0:36 in the video.

Still frame from film of Apollo Saturn V staging interstage release

Aft view from Saturn V second stage, just after separation of interstage segment. The five Rocketdyne J2 engines burn liquid hydrogen and oxygen, producing a clear exhaust. Image from NASA.

The first and second stages have two planes of separation, and the structural cylinder between them is called the interstage. It’s a big thing, 33 feet in diameter and 18 feet tall. It’s also heavy, with the now-spent ullage motors being mounted there, and so it’s the kind of thing you want to leave behind when you’re done with it. Once the second stage is firing correctly and its trajectory is stable, pyrotechnics disconnect the interstage.

That’s what’s happening in the image above, and perhaps the most interesting thing about this segment is how the interstage is lit up as it’s impacted not once but twice by the hot exhaust plume. I saw this video many years ago and always wondered what the bursts of fire were; now I know.

The video shows one more stage separation, with its own remarkable engineering and visuals. At 0:43 the view switches to one forward from that same second stage, and the process of disconnection and separation happens again. The pyrotechnics, retrorockets, and ullage motors are generally similar. The fire from the retros is visible as a bright glow from 0:44 and when it fades, at about 0:46, the firing of the three ullage motors is clearly visible.

The next fascinating thing happens at 0:47, when the single J2 engine ignites right before our eyes. We get to see right down the throat of a rocket engine as it lights up and achieves stable combustion. Once again, the exhaust plume is clear, and only the white dot of the flame in its combustion chamber is visible.

The third stage accelerates away, taking the rest of the vehicle to orbit. Remarkably, for an engine using cryogenic fuels, this special J2 can be restarted, and that’s just what will happen after one or two Earth orbits. After vehicle checkout, the second burn of this stage propels the spacecraft towards its rendezvous with the Moon.

I could go on—and on and on—but by now you may have had enough of this 50-year-old technology. If not, you can go elsewhere, online and offline, and find information and images of all kinds. The links and references in my previous Apollo post are a good start.

  A single analyzer can cover 3 Hz to 110 GHz. Is that the best choice for you?

I haven’t made many measurements at millimeter frequencies, and I suspect that’s true of most who read this blog. All the same, we follow developments in this area—and I can think of several reasons why.

First, these measurements and applications are getting a lot of press coverage, ranging from stories about the technology to opinion pieces discussing the likely applications and their estimated market sizes. That makes sense, given the expansion of wireless and other applications to higher frequencies and wider bandwidths. Though still a comparatively small fraction of sales, these frequencies will attract more money and engineering effort to solve the abundant design and manufacturing problems they pose.

I suppose another reason involves a kind of self-imposed challenge or discipline, reminiscent of Kennedy’s speech about going to the moon before 1970, where he said the goal would “serve to organize and measure the best of our energies and skills.” Getting accurate, reliable measurements at millimeter frequencies will certainly challenge us to hone our skills and pay close attention to factors that we treat more casually at RF or even microwave.

Of course, this self-improvement also has a concrete purpose if we accept the inevitability of the march to higher frequencies and wider bandwidths. For some, millimeter skills are just part of the normal process of gathering engineering expertise to stay current and be ready for what’s next.

In an earlier post I mentioned the new N9041B UXA X-Series signal analyzer and focused attention on the 1 mm connectors used to get coaxial coverage of frequencies to 110 GHz. In this post I’ll summarize two common signal analyzer choices at these frequencies and some of their tradeoffs.

Two types of solutions are shown below. The first practical millimeter measurements were made with external mixers, but signal analyzers with direct coverage to the millimeter bands are becoming more common.

M1970 & M1971 Smart external waveguide mixers (left) and N9041B UXA X-Series signal analyzer (right)

External mixing (left) has long been a practical and economical way to make millimeter frequency measurements. Signal analyzers such as the new N9041B UXA (right) bring the performance and convenience of a single-instrument solution with continuous direct coverage from 3 Hz to 110 GHz.

My post on external mixing described how the approach effectively moves the analyzer’s first mixer outside of the analyzer itself. The analyzer supplies an LO drive signal to the mixer and receives—sometimes through the same cable—a downconverted IF signal to process and display. This provides a lower-cost solution, where analyzers with lower frequency coverage handle millimeter signals through a direct waveguide input.

In use, this setup is more complicated than a one-box solution, but innovations such as smart mixers with USB plug-and-play make the connection and calibration process more convenient. An external mixer can be a kind of “remote test head,” extending the analyzer’s input closer to the DUT and simplifying waveguide connections or the location of an antenna for connectorless measurements.

The drawbacks of external mixers include their banded nature (i.e., limited frequency coverage) and lack of input conditioning such as filters, attenuators, and preamps. In addition, their effective measurement bandwidth is limited by the IF bandwidth of the host analyzer, a problem for the very wideband signals used so often at millimeter frequencies. Finally, external mixing often requires some sort of signal-identification process to separate the undesirable mixer products that appear as false or alias signals in the analyzer display. Signal identification is straightforward with narrowband signals, but can be impractical with very wide ones.
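A quick sketch shows why signal identification is needed. For an LO harmonic N, any input at N·fLO ± fIF produces a response at the analyzer’s IF, so one displayed signal can correspond to several possible input frequencies. The LO, IF, and harmonic numbers below are purely illustrative, not any specific mixer’s values:

```python
def mixer_responses(f_lo_hz, f_if_hz, harmonics):
    """Input frequencies that land at the analyzer IF for each LO
    harmonic N: f_sig = N*f_lo - f_if or N*f_lo + f_if."""
    resp = []
    for n in harmonics:
        resp.append((n, n * f_lo_hz - f_if_hz))
        resp.append((n, n * f_lo_hz + f_if_hz))
    return resp

# Illustrative example: 10 GHz LO, 322.5 MHz IF, harmonics 6 and 8
for n, f in mixer_responses(10e9, 322.5e6, [6, 8]):
    print(n, f / 1e9, "GHz")
```

Signal-identification routines typically shift the LO slightly and check which responses move the predicted amount; only true signals do.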

Millimeter signal analyzers offer a measurement solution that is better in almost all respects, but is priced accordingly. They provide direct, continuous coverage, calibrated results and full specifications. Their filters and processing eliminate the need for signal identification, and their input conditioning makes it easier to optimize for sensitivity or dynamic range.

The new N9041B UXA improves on current one-box millimeter solutions in several ways. Continuous coverage now extends to 110 GHz and—critically—analysis bandwidths are extended to 1 GHz internally and to 5 GHz or more with external sampling.

Sensitivity is another essential for millimeter frequency measurements. Power is hard to come by at these frequencies, and the wide bandwidths used can gather substantial noise, limiting SNR. The DANL of the UXA is better than -150 dBm/Hz all the way to 110 GHz and, along with careful connections, should yield excellent spurious and emissions measurements.
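A rough calculation, using only the DANL figure above, shows what that sensitivity means in a practical measurement bandwidth (the signal level here is an arbitrary example):

```python
import math

def noise_floor_dbm(danl_dbm_per_hz, rbw_hz):
    # Noise in the measurement bandwidth: DANL + 10*log10(RBW)
    return danl_dbm_per_hz + 10 * math.log10(rbw_hz)

# -150 dBm/Hz DANL measured in a 1 MHz RBW:
floor = noise_floor_dbm(-150.0, 1e6)
print(floor)            # -90 dBm noise floor
print(-40.0 - floor)    # 50 dB SNR for a hypothetical -40 dBm signal
```

It also quantifies the wide-bandwidth penalty: every 10x of bandwidth costs 10 dB of noise floor.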

Millimeter measurements, especially wideband ones, will continue to be demanding, but the tools are in place to handle them as they become a bigger part of our engineering efforts.

  Honing your engineering skills and staying alert for serendipity

The signals we deal with continue to become more complex and challenging. So do the standards that govern them and the measurements those standards specify. It’s a trend that shows no sign of slowing, and perhaps it’s some comfort that this trend is a source of intellectual adventure and job security. That’s the positive spin on it anyway!

Even single-domain spectrum measurements such as ACPR can evolve to a sobering complexity. Compare the W-CDMA and LTE-Advanced measurements below.

W-CDMA ACLR ACPR measurement

LTE-A cumulative ACLR CACLR measurement

The W-CDMA ACLR measurement (top) compares an active channel with two adjacent and alternate channels. The cumulative ACLR (CACLR) measurement (bottom) is considerably more complex, combining the power of multiple, non-contiguous carriers in a carrier aggregation configuration.

While tools such as measurement applications in signal analyzers help deal with the complexity of these measurements, they won’t handle all your needs for non-standard signals, troubleshooting, and custom measurements for components and manufacturing tests.

More often than not, finding all the measurement information you’re going to need involves an in-the-moment mix of trusted sources and geeky serendipity. Louis Pasteur’s famous quote comes to mind: “Fortune favors the prepared mind.” In this case, our “preparation” includes intentional and accidental encounters with application notes, books, blogs, symposium presentations, Web searches, and magazine articles.

I’ve been thinking about magazine articles because they cover both ends of the time scale that are important in making good measurements: classic measurement wisdom and techniques for handling the newest signals.

Under the heading “classic wisdom,” I was recently looking for the power-integration equation used for band power in a signal analyzer, vital for measurements such as those above. I found it in one of a series of articles by Bob Nelson at Microwaves & RF. Bob’s articles cover some classic or evergreen topics, and in this case he explains important points about measuring noise or noise-like signals.
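The power-integration idea can be sketched in a few lines. This simplified version sums the linear power of the trace points and applies an approximate noise-bandwidth correction; NBW ≈ 1.056 × RBW is a commonly quoted figure for near-Gaussian filters, but check your analyzer’s documentation for the exact value:

```python
import math

def band_power_dbm(trace_dbm, bin_spacing_hz, rbw_noise_bw_hz):
    """Integrate trace points (dBm) across a band: sum linear powers,
    then correct for the ratio of bin spacing to the RBW filter's
    noise bandwidth (valid for noise-like signals)."""
    linear_sum = sum(10 ** (p / 10.0) for p in trace_dbm)   # mW
    correction = bin_spacing_hz / rbw_noise_bw_hz
    return 10 * math.log10(linear_sum * correction)

# Flat -90 dBm-per-bin noise across 101 bins at 10 kHz spacing,
# with an assumed NBW of 10.56 kHz (1.056 x a 10 kHz RBW):
trace = [-90.0] * 101
print(round(band_power_dbm(trace, 10e3, 10.56e3), 2))
```

Built-in band power markers also account for detector type and averaging scale, which this sketch ignores.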

By chance, the same day I started drafting this post, I received an email with a link to another of Bob’s articles on an evergreen topic, understanding measurement uncertainties. He not only describes the uncertainties, but gives an example of how to use extensive analyzer specifications to get the best performance in a specific situation. I have looked at this from a different angle myself, here on this blog.

Of course, the “latest and greatest” end of the time scale deserves just as much attention from magazine editors. Articles from experts with access to lots of test equipment provide timely explanations of measurements and usually add important context. Their guidance is even more valuable when the article also contains a list of references or links to related content. Expertly curated pointers are worth a lot, helping me avoid missing something important.

Worrying about missing something important is what keeps me alert to chance encounters with useful information: sometimes I don’t know what I don’t know. I pay special attention to things that would be counter to my assumptions or intuition because they are the source of errors I won’t even be watching for.

In the area of signal analysis, Keysight is starting to collect a foundation of measurement information on one page, starting with the fundamentals of signal analysis. Along with the other sources of information—and some serendipity—it can help us keep up and keep out of trouble.

  Are lifetime and performance ratings wrong, or is it measurement practice?

A guest post to the RF Test Blog from Keysight’s Michael Lux

A while back, Ben discussed the potential for one type of connector damage and supplied a pretty good example:

Millimeter frequency 2.92 mm K connectors with damaged center connector collets

Damage to the collet or female center conductors of two 2.92 mm K connectors has rendered them useless.

Although the long-term performance and durability of these 2.92 mm and other microwave connectors have been verified in controlled conditions with good measurement practice, real-world experience doesn’t always correlate. Keysight’s long experience in calibration and repair offers some insight, and also helps ensure that you get the performance and durability you’ve paid for.

As the amount of work going on at higher frequencies continues to increase—and as inexperienced engineers and technicians are being asked to make measurements—strong foundational skills and good practices are essential for effective high-frequency measurements. Poor practices produce inferior and less-reliable measurements, wasting resources and risking damage to instruments, accessories, and the devices under test.

Details that can be ignored at lower frequencies really begin to matter at microwave, and especially in the millimeter bands. For these more sensitive tasks, investing in some low-cost, online training for your team members can go a long way toward avoiding costly errors. Even a modest amount of training can save substantial amounts of time and resources.

In an effort to solve these problems, Keysight has launched the first in a series of self-paced eLearning courses. Beginning with an RF/µW Fundamentals package, and continuing throughout the next 12 months with additional courses, Keysight will be rolling out an eLearning curriculum and live training sessions focused on areas that will benefit most from a small investment in training.

A good example is proper cable and connector use and care. We’ve found that problems in this area are a major factor in the need for equipment repair and accessory replacement. Believing an ounce of prevention is worth a pound of cure, Keysight has released an online module focused solely on the fundamentals of cable and connector care. For less than the cost of some individual cables or connectors, this course provides “voice of experience” guidance that can reliably change behavior and improve results.

Untrained users can unintentionally misuse or under-utilize new technology, and can take up to five times longer to achieve the same results as a trained user. When you combine these issues with the added cost of unnecessary help desk calls and lost productivity, a modest investment in training is definitely worth a look. For more information about Keysight’s eLearning training courses see RF/µW Measurement Fundamentals Program.

A note from Ben:

From the beginning we intended to include posts from others here, and this is the first. Blogs are a good way to handle news, personal experience, and nuggets of wisdom. However, they aren’t so good at comprehensive coverage or changing behavior, and Michael describes training that can help in this area.

  Don’t sacrifice the benefits of improved equipment by missing the basics

Just a decade ago I would have found it hard to believe, but millimeter-frequency applications above 50 GHz are going genuinely mainstream. Wireless HD, 802.11ad wireless networking, 5G cellular, and automotive radar are high-profile examples of an important trend, supported by remarkable advances in semiconductor technology.

I’m not surprised at the existence of applications at these frequencies, but I am surprised that retail prices begin under $200 for Wireless HD, and that radar options are available on fairly pedestrian cars, albeit in their higher-end versions. Bringing this technology to the mass market at the price points required for large-scale adoption poses tremendous challenges in design and manufacturing.

The engineering challenges have been a little less daunting as more millimeter test equipment has become available with single-box direct-connection coverage to 67 GHz. Keysight’s recent introduction of the N9041B UXA X-Series signal analyzer breaks new ground, increasing direct coverage in coax to 110 GHz. The new signal analyzer provides low noise, good accuracy, and wide bandwidth—1 GHz for internal sampling and 5 GHz for external—to allow engineers to focus on their designs and measurement results instead of pulling together multi-part test solutions where calibration and repeatability may be in question.

However, all this hard-won (and rather expensive) performance can be compromised if you miss even one of the fundamental practices for good measurements at these very high frequencies. The millimeter range is generally defined as 30–300 GHz, with wavelengths down to 1 mm, and these tiny wavelengths are the heart of many problems and challenges.
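A quick check of the free-space wavelengths at the band edges mentioned above:

```python
# Free-space wavelength at the edges of the millimeter range
c = 299_792_458.0                        # speed of light, m/s
for f_ghz in (30, 110, 300):
    wavelength_mm = c / (f_ghz * 1e9) * 1e3
    print(f_ghz, "GHz ->", round(wavelength_mm, 2), "mm")
```

At 110 GHz a wavelength is under 3 mm, so mechanical imperfections of a few hundredths of a millimeter are already a meaningful fraction of a wavelength.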

Connectors are a good place to start because they exemplify so many of the ways that millimeter measurements will test you. Here’s a close-up of female and male 1 mm connectors, which are mode-free to more than 110 GHz.

1 mm millimeter connectors male female

Female and male 1 mm connectors. The center pin in the jack on the right is only one-quarter of 1 mm in diameter, generally too small for me to see without magnification. (Image from Wikimedia Commons)

The small size and precise geometry of millimeter connectors and cables demand special machining and fabrication. They are necessarily somewhat expensive and inescapably more delicate as frequencies increase and dimensions decrease. In a previous post I described their frequency ranges and intermating possibilities, and here I’ll note that, despite mechanical compatibility, all intermating and even same-connector mating still produces impedance problems that should be avoided wherever possible.

Perhaps the most important connection is the one at the front panel of the test equipment. Despite their delicacy, male connectors are better than the alternative at millimeter frequencies. The usual practice is to attach a female-to-female “connector saver” at the instrument, but this choice is complicated by the fact that impedance problems and loss through cables and connectors or adapters also get worse as frequencies increase. In some cases, it’s worth the cost and trouble of acquiring custom cabling with correct gender at each end, especially considering how precious power and performance are at these frequencies. Custom cabling also allows the cables to be as short as possible. Indeed, one tactic that is sometimes overlooked is to simply move the DUT and instrument as close to each other as practical.

The picture above suggests another area of best practices: connector care. These connectors do not appear obviously damaged or displaced, but some contamination is clearly present. Because of their tiny dimensions, special cleaning materials and techniques are needed for microwave and millimeter connectors. Connector gauges are also important to ensure that mechanical dimensions are within the tight tolerances that provide a reasonable impedance match. For more detail on torque and other coaxial connection issues and practices, see the classic (old!) application note Principles of Microwave Connector Care (AN-326), Keysight literature number 5954-1566.

Proper connector torque is another fundamental for good millimeter connections, and in a previous post on torque I discussed the mechanical essentials and ways to avoid damaging these pricey little parts.

Finally, given the connection losses in coax and the sometimes cumbersome physical implications of waveguide, you may want to consider external mixers. Keysight’s Smart Harmonic Mixers cover 50-110 GHz and make this approach much more convenient and accurate than previous mixers. They allow you to create a remote test head, placing the measurement plane right at the DUT. While they lack the IF bandwidth of the new UXA signal analyzer, they do allow non-millimeter signal analyzers to cover these high frequencies.

These fundamentals are hardly “basics,” but they’re straightforward practices to implement—and they’ll help you pass the tests you’ll face as your designs take you well into millimeter territory.


Originally posted Oct 4, 2016

Sometimes the answer is not in the first place you look

Real-time spectrum analyzer (RTSA) capability has gradually moved from a specialized measurement made by specialized instruments to an option available for many spectrum and signal analyzers. Keysight recently added an RTSA option to FieldFox handheld analyzer models that include RF/microwave spectrum analyzers and combination analyzers (spectrum analyzer, vector network analyzer and cable/antenna tester).

We’ve come a long way from the early 1990s, when RTSA was available only in surveillance-focused RF spectrum analyzers with purpose-built signal processing hardware. A decade later RTSA became a type of RF/microwave analyzer for more general use, though still with a dedicated architecture to meet the signal processing demands.

A few years later, the increasing power of ASICs and FPGAs allowed RTSA to be folded into mainstream signal analyzers as an option that, in many cases, can be retrofitted to existing ones. The FieldFox models provide real-time bandwidth to 10 MHz and frequency coverage to 50 GHz, accurately measuring signals as brief as 12 µs. They can detect—but may not accurately measure—signals as brief as 22 ns. Impressive performance for a general purpose tool that is handheld, battery powered, and environmentally sealed.

This “density” display from a FieldFox microwave combination analyzer with the RTSA option represents the dynamics of a number of different digitally modulated signals with different colors signifying frequency of occurrence.

To catch these brief events, the FieldFox handhelds calculate 120,000 spectra per second. The benchtop signal analyzers are even faster, covering real-time bandwidths to 510 MHz and detecting signals as short as 3.3 ns.
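Numbers like these come from how overlapped FFT frames tile the incoming signal: a burst is guaranteed to land entirely inside at least one complete frame (and so be measured at full amplitude) only if it lasts at least one frame time plus one hop time. Here's a minimal sketch of that relationship; the sample rate, FFT size, and overlap below are hypothetical values for illustration, not FieldFox specifications.

```python
def min_full_accuracy_duration(fs, nfft, overlap):
    """Shortest signal guaranteed to be captured by at least one
    complete FFT frame (full amplitude accuracy), for frames of
    nfft samples advancing by (1 - overlap) * nfft samples."""
    t_frame = nfft / fs                     # time spanned by one FFT
    t_hop = (1.0 - overlap) * nfft / fs     # time between frame starts
    return t_frame + t_hop

# Hypothetical numbers: 12.5 MS/s, 1024-point FFTs, 75% overlap
d = min_full_accuracy_duration(12.5e6, 1024, 0.75)   # 102.4 microseconds
```

More overlap (a smaller hop) shortens the guaranteed-capture duration, which is exactly why real-time analyzers compute so many spectra per second.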

However, detection and basic measurement of a transient is sometimes just the first step in solving a problem, and I thought I’d describe additional tools and techniques that will tell you what you need to know.

After power and frequency, timing is the element that can help you assess the significance of a signal and begin to understand cause and effect. Spectral sequences such as spectrograms put power, frequency, and timing in a single display that dramatically enhances understanding. A simple process of saving and displaying successive spectra has great intuitive leverage.
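The bookkeeping behind a spectrogram really is that simple: compute successive spectra and stack them as rows of a time-versus-frequency array. A minimal NumPy sketch (not any instrument's implementation) looks like this:

```python
import numpy as np

def spectrogram(iq, nfft, hop):
    """Stack successive windowed FFTs of complex IQ samples into a
    time-vs-frequency array (rows = time, columns = frequency bins),
    returned as dB magnitude."""
    win = np.hanning(nfft)
    starts = range(0, len(iq) - nfft + 1, hop)
    rows = [np.fft.fftshift(np.fft.fft(iq[s:s + nfft] * win)) for s in starts]
    return 20 * np.log10(np.abs(np.array(rows)) + 1e-12)

# A steady 1 kHz complex tone at fs = 16 kHz should peak in the
# same frequency bin on every row of the spectrogram
fs, nfft, hop = 16000, 256, 64
t = np.arange(4096) / fs
spec = spectrogram(np.exp(2j * np.pi * 1000 * t), nfft, hop)
```

Displaying those rows as colored stripes, newest on top, gives the familiar waterfall view.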

Some of the most problematic signals and events are both brief and highly intermittent. In these situations the essential element of the measurement solution is triggering, from basic magnitude triggering to frequency-mask and time-qualified triggers. These techniques take advantage of real-time calculations of signal magnitude and spectrum to set a customized trap for the specific signal or event you’re chasing. The analyzer can do the tedious work while you get a cup of coffee.
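At its core, a frequency-mask trigger is a per-bin comparison of each real-time spectrum against a limit line. The instrument does this in dedicated hardware at full rate; a toy sketch of the logic:

```python
import numpy as np

def fmt_trigger(spectra_db, mask_db):
    """Return the index of the first spectrum that crosses above the
    mask in any bin, or None if nothing crosses. Each row of
    spectra_db is one real-time spectrum in dB; mask_db is the
    per-bin limit line."""
    for i, spec in enumerate(spectra_db):
        if np.any(spec > mask_db):
            return i
    return None

mask = np.full(8, -40.0)          # flat -40 dB limit line
quiet = np.full((5, 8), -60.0)    # five spectra, all below the mask
burst = quiet.copy()
burst[3, 2] = -10.0               # transient pops up in spectrum 3, bin 2
```

In practice the mask is shaped around the expected signal, so only the unexpected behavior springs the trap.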

One of the most powerful and intuitive tools for elusive signals is signal capture and playback. This is generally performed with 89600 VSA software, operating on the same benchtop signal analyzer platform as RTSA. The entire signal is streamed to memory, without gaps, for playback and flexible post-processing. Since the signal is captured in the time domain, any type of analysis—in any domain—can be selected after the fact.

Capture and playback is a sort of time machine for the RF engineer. The signal analyzers and VSA software support negative trigger delays, allowing analysis before the trigger event. They also implement signal resampling, so you can go back in time and change your mind about center frequency and span. Magic! You can even change the speed of playback to see events in all their subtle detail.
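The pre-trigger "time machine" rests on a ring buffer that overwrites itself continuously, so samples from before the trigger event are still on hand when it fires. A minimal illustration of the idea (not how any particular analyzer implements it):

```python
from collections import deque

class PreTriggerCapture:
    """Keep the most recent `depth` samples in a ring buffer so that,
    when a trigger fires, samples from *before* the event are still
    available -- the essence of a negative trigger delay."""
    def __init__(self, depth):
        self.buf = deque(maxlen=depth)   # old samples fall off automatically
    def push(self, sample):
        self.buf.append(sample)
    def snapshot(self):
        return list(self.buf)

cap = PreTriggerCapture(depth=4)
for x in range(10):          # stream samples 0..9; trigger "fires" after 9
    cap.push(x)
history = cap.snapshot()     # pre-trigger history is still available
```

The deeper the buffer, the further back in time the analysis can reach.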


When the signal or behavior seems to be aggressively hiding from you, the most effective approach is to compound these measurement techniques: Real-time analysis enables frequency-mask triggering, which enables signal capture with a negative trigger delay. You can lie in wait, missing nothing while you enjoy your coffee or tea, and capture the most elusive problem for later analysis of any type you choose.

Originally posted Sept 21, 2016


No—but the way forward has implications for RF and microwave testing

In Electronic Design, Lou Frenzel recently asked a couple of provocative questions, and in this post I’d like to look at both from a different angle (I previously discussed one in the context of engineering imagination). Specifically, Lou wondered whether communications data rates have really maxed out and whether smartphones have peaked. He is taking a forward look through the mists of time and, as I once learned as a product-line forecaster, “It is difficult to make predictions, especially about the future.” *

Back then, my experience made it clear I was no seer and, on a consistent basis, nobody else was either. Nature seems to abhor this sort of thing. However, I could be Captain Obvious and do a good job pointing out big things that were relatively clear. I’d like to take a stab at that here.

First, data rates: We all know that what matters in the end is effective data rate, the actual rate—including latency—that you or I or our customers can get at any given time and place. By that standard there is a lot that needs to be done, and a lot that can be done.

There is no talk of repealing Shannon-Hartley, but modern systems are aiming for MIMO up to 8×8 and QAM as dense as 4096 to crowd more bits into the available hertz in a practical fashion. Combine these techniques with wider bandwidths and more OFDM subcarriers and the potential data rates are seriously impressive: LTE-A proposes 1.6 Gbps for wireless, and DOCSIS 3.1 promises a 10 Gbps wired downstream link.
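You can sanity-check the scale of these claims with a few lines of arithmetic: 4096-QAM carries 12 bits per symbol, and Shannon-Hartley bounds the rate for a given bandwidth and SNR. Treating MIMO as an ideal multiplier of spatial streams is a deliberate simplification, but it shows where the headline numbers come from:

```python
import math

def bits_per_symbol(m_qam):
    """Bits encoded by each symbol of an M-point QAM constellation."""
    return int(math.log2(m_qam))

def shannon_capacity(bw_hz, snr_db, streams=1):
    """Idealized Shannon-Hartley capacity in bits/s, with `streams`
    spatial streams as a crude stand-in for MIMO multiplexing gain."""
    snr = 10 ** (snr_db / 10)
    return streams * bw_hz * math.log2(1 + snr)

qam_bits = bits_per_symbol(4096)             # 12 bits per symbol
c = shannon_capacity(100e6, 30, streams=8)   # 100 MHz, 30 dB SNR, 8x8 MIMO
```

With those (optimistic) assumptions the bound lands near 8 Gbps, which is why multi-gigabit press releases are plausible on paper and so hard to deliver in the field.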

Denser constellations encode more bits in each symbol, increasing data rates for a given bandwidth. This VSA display of errors and coding includes a constellation of 4096 states or 12 bits/symbol.

Spectral efficiency is just the platform for a host of improvements and enhancements that will deliver more data in more places. That’s the key to the effective data rates that matter much more than the theoretical ones in the press releases.

In my experience, the inability to get 20 Mb/s in any particular location pales in comparison to the inability to get even 1 Mb/s in so many locations. Even IoT and smart homes or businesses generally prioritize a reliable connection over a high data rate.

Fat pipes for direct connections and backhaul will help because high capacity can usually be sliced up to give more users an adequate sliver of the bandwidth pie. Adaptive and cognitive systems can make the most of the existing and evolving environment. Beamforming and small—pico or femto—cells can make better use of the local RF landscape. Multi-mode devices can use the best one of several radios to move the data of the moment.

Millimeter frequencies are yet another approach to matching a need with an improving technology. These can exploit the (currently) open country of much higher frequencies and bandwidths, and turn limited range into an advantage in situations such as wireless video, where frequency reuse can be room-to-room.

I’m hoping that these technology improvements are paired with server and infrastructure investments, to reduce latency. It’s another factor that often means more than Mb/s, and can mean a lot in terms of user satisfaction.

I won’t try to predict details, but I know this is good news for RF engineers. All these tactics for increasing effective data rate require innovative design and manufacturing to yield devices that perform well enough, meet standards, and hit the cost targets essential to success in the marketplace.

As for Lou’s speculation that smartphones may have peaked, he may be right about smartphones per se, but I look at this in a different way. Those glossy little (or not) slabs should instead be seen as the computer/communications/information/audio-video device you have with you constantly—and “smartphone” sells them short. Seen in that light, the opportunity for transformative growth is huge.

Consider virtual and augmented reality, especially in the context of the explosive success of apps such as Pokemon Go. The need for high-resolution, low-latency video will drive data rates and the evolution of the things we call smartphones in directions most of us haven’t yet imagined. While the value and wisdom of those directions is open to debate, the demands on RF design and test seem very clear to this Captain Obvious.


* Attributed to everyone from Niels Bohr to Yogi Berra, but probably from a Danish humorist in 1948 or earlier.

Originally posted Aug 24, 2016


Sometimes a common understanding is not common


It’s always an interesting experience when I find that one of my assumptions is wrong, or at least not very right. Yes, it’s a chance to learn and grow and all that, but it sometimes provokes puzzlement or a little disappointment. This is one of those puzzling and slightly disappointing cases.

I first encountered the “near-far” term and associated concepts in an internal training talk by an R&D project manager with deep experience in RF circuits and measurements. He was explaining different dynamic range specifications in terms of distortion and interference. His context was the type of real-world problems that affect spec and design tradeoffs.

In his talk, the project manager explained some of the ways design considerations and performance requirements depend on distance. After a couple of examples his meaning was clear in terms of the wide range of distances involved and the multitude of implications for RF engineering. Whether over-the-air or within a device, physical distance really matters.

At the time, I thought it was a neat umbrella concept, linking everything from intentional wireless communications and associated unintentional interference to the undesirable coupling that is an ever-present challenge in today’s multi-transceiver wireless devices. For example, even small spurious or harmonic products can cause problems with unrelated radios in a compact device where 5 cm is near and 5 km—the base station you want to reach—is far.

That talk was a formative experience for me, way back in the 1980s. I kept near-far considerations and configurations in my mind as I learned about wireless technologies, equipment design and tradeoffs, and avoidance of interference problems. The near-far concept illuminated the issues behind a wide range of schemes and implementations.

Though I didn’t hear the near-far concept too often, I assumed most RF engineers thought along those lines and presumably used those terms. A recent Web search for the near-far problem let me know my assumption was faulty. The search results are relatively modest and mostly focus on the “hearability problem” in CDMA wireless schemes. This is an excellent example of a near-far situation, where transmitters at shorter distances are received at higher power, making it difficult for correlators—which see every signal but the target as noisy interference—to extract smaller signals.

Measuring power in the code domain separates received power according to individual transmitters and their codes. Demodulation is most effective when code powers are equal. This example is from the N9073C W-CDMA/HSPA+ X-Series Measurement App.

However, I don’t think the narrowing of meaning is a matter of the era when the term came into use, since I heard the term years before CDMA, and some people were using it years before that. Perhaps it reflects the fact that CDMA was a very interesting and high profile example, and a single association with the term was thus established.

In any case, I think this shrunken use of the term is unfortunate. Careful consideration of potential near-far issues can help engineers avoid serious problems, or at least address them early on, before solutions are foreclosed or too much money is spent.

One cautionary example is the c.2010 effort by LightSquared (now Ligado Networks) to expand mobile 4G-LTE coverage using terrestrial base stations in a band originally intended for satellites. The band was adjacent to some GPS frequencies, and the switch from satellite distances (far) to terrestrial ones (near) dramatically increased the likelihood and severity of interference problems. The large reduction in distance upset earlier assumptions about relative signal strength—assumptions that drove the design, performance, and cost of many GPS receivers.

The potential interference problems prevented approval of the original LightSquared plan, and the fate of its portion of the L-Band is not yet determined. Whatever it is, I expect it will more fully account for the near-far issues, along with the cost and performance requirements related to both new and existing equipment.

The near-far concept also has a probability dimension. As you’d expect, some sins of RF interference are more likely to be a critical issue as the density of radios in our environment continues its dramatic increase. Some problems that were once far away are getting nearer all the time.


To satisfy my own curiosity, I’ll leave you with two questions: Have you encountered the near-far concept? Or do you rely on a touchstone idea, learned from an experienced hand, that isn’t as widely known as you once thought?


Baseband and IF Sampling

Posted by benz Oct 14, 2016

Originally posted Aug 9, 2016


Different ways to get your signal bits


There’s a long history of synergy and a kind of mutual bootstrapping in the technology of test equipment and the devices it’s used to develop and manufacture. Constant technology improvements lead to welcome—and sometimes crucial—improvements in RF performance. It’s a virtuous cycle that powers our field, but it also presents us with some challenging choices as the landscape evolves.

Signal analyzers and digital oscilloscopes have exemplified these improvements and illustrate the complex choices facing RF engineers. The latest signal analyzer, for example, covers bandwidths as wide as 1 GHz at frequencies up to 50 GHz. New oscilloscopes offer bandwidths as wide as 63 GHz, solidly in the millimeter range. Other oscilloscopes and digitizers, at more modest prices, cover the cellular and WLAN RF bands.

The established solution for spectrum analysis and demodulation of RF/microwave signals is the signal analyzer, and it’s logical to wonder if the technology advances in digital oscilloscopes and signal analysis software have changed your choices. If both hardware platforms can sample the bandwidths and operating frequencies used, how do you get your bits and, ultimately, the results you need?

The answer begins with an understanding of the two different approaches to sampling signals, summarized in these dramatically simplified block diagrams. First, a look at IF sampling:

In this architecture, the signal is downconverted and band-limited before being digitized. Sampling is performed on the intermediate frequency (IF) stage output.

In signal analyzers, the sampling frequency is related to the maximum bandwidth required to represent the signal under test. That frequency is usually low compared to the center frequency of the signal under test, and there is no need to change it with changes in signal center frequency.

The alternative, called baseband sampling, involves direct sampling of the entire signal under test, from DC to at least its highest occupied frequency: CF + ½ OccBW.

Here, the signal undergoes minimal processing before being digitized. The lowpass filter ensures that frequencies above the ADC’s Nyquist sampling criterion do not produce false or alias products in the processed results.
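For a concrete feel for what that CF + ½ OccBW requirement demands of the ADC, here's a small sketch; the 25% anti-alias margin is my own assumption for the filter roll-off, not a fixed rule:

```python
def min_baseband_sample_rate(cf_hz, occ_bw_hz, margin=1.25):
    """Minimum real-sampling rate for direct (baseband) digitizing:
    twice the highest occupied frequency, CF + OccBW/2, times a
    practical margin (assumed here) for anti-alias filter roll-off."""
    f_max = cf_hz + occ_bw_hz / 2
    return 2 * f_max * margin

# A 160 MHz-wide signal centered at 5.8 GHz calls for roughly
# 14.7 GS/s when sampled directly at baseband -- versus a sample
# rate tied only to the analysis bandwidth for an IF-sampling analyzer
rate = min_baseband_sample_rate(5.8e9, 160e6)
```

The sample rate scales with center frequency rather than signal bandwidth, which is the crux of the tradeoff discussed below.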

The signal under test is completely represented by baseband sampling and any type of analysis can be performed. Narrowband analysis as performed with a spectrum/signal analyzer—in the time, frequency, and modulation domains—is achieved by implementing filters, mixers, resamplers, and demodulators in DSP. Keysight’s 89600 VSA software is the primary tool for these tasks and many others, and it runs on a variety of sampling platforms.
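For illustration, here's a crude digital downconverter in NumPy: a complex mixer, a simple lowpass, and decimation. The VSA software and real analyzers use carefully designed FIR filters and resamplers rather than the moving average used here; this is just the skeleton of the idea:

```python
import numpy as np

def digital_downconvert(x, fs, f_center, decim):
    """Shift f_center to 0 Hz with a complex mixer, lowpass with a
    crude moving-average filter, then decimate -- the DSP analog of
    an analyzer's downconversion and IF filtering. (A real
    implementation would use a properly designed FIR filter.)"""
    n = np.arange(len(x))
    mixed = x * np.exp(-2j * np.pi * f_center / fs * n)   # digital LO
    kernel = np.ones(decim) / decim                        # crude lowpass
    filtered = np.convolve(mixed, kernel, mode="same")
    return filtered[::decim]                               # resample

# A tone at 250 kHz, mixed down by 250 kHz, should land at DC
fs = 1e6
n = np.arange(10000)
tone = np.exp(2j * np.pi * 250e3 / fs * n)
bb = digital_downconvert(tone, fs, 250e3, decim=10)
```

Everything an analog front end does with mixers and filters happens here as arithmetic on the sampled data, which is why the data-reduction burden falls on DSP.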

We thus have two paths to the signal analysis we need, and we’re back to the earlier question about the best sampling choice among evolving technologies. The answer is primarily driven by performance requirements, the operating frequencies and bandwidths involved, and the resulting demands on sample rate.

The architecture of IF sampling allows for analog downconversion and filtering to dramatically reduce the required sample rate. This process has been thoroughly optimized in performance and cost, and focuses ADC performance on the essential signal. Other frequencies are excluded, and the limited bandwidth allows for ADCs with the best resolution, accuracy, and dynamic range.

With baseband sampling, frequency conversion and filtering are done in DSP, requiring a vast amount of digital data reduction to focus analysis on the band in question. This must precede processing for signal-analysis results such as spectrum or demodulation.

The tradeoffs explain why spectrum analysis and demodulation are generally performed using IF sampling. However, the technological evolution mentioned above explains the increasing use of baseband sampling for RF and microwave signal analysis. ADCs and DSPs are improving in cost and quality, and are frequently available on the RF engineer’s bench in the form of high-resolution oscilloscopes. RF and modulation quality performance may be adequate for many measurements, and the extremely wide analysis bandwidths available may be an excellent solution to the demands of radar, EW, and the latest wideband or aggregated-carrier wireless schemes.


Ultimately, personal preference is a factor that can’t be ignored. Do you look for your first insights in the time or frequency domain before delving into measurements such as demodulation? The software and hardware available these days may give you just the choice you want.


An Intuitive Look at Noise Figure

Posted by benz Oct 14, 2016

Originally posted Jul 22, 2016


An overview to complement your equations


After some recent conversations about noise figure measurements, I’ve been working to refresh my knowledge of what they mean and how they’re made. My goal was to get the essential concepts intuitively clear in my mind, in a way that would persist and therefore guide me as I looked at measurement issues that I’ll be writing about soon.


Maybe my summary will be helpful to you, too. As always, feel free to comment with any suggestions or corrections.

  • Noise figure is a two-port measurement, defined as an input/output ratio of signal-to-noise ratios—so it’s a ratio of ratios. The input ratio may be explicitly measured or may be implied, such as assuming that it’s simply relative to the thermal noise of a perfect passive 50 Ω source.
  • The input/output ratio is called noise factor, and when expressed in dB it’s called noise figure. Noise figure is easier to understand in the context of typical RF measurements, and therefore more common.
  • It’s a measure of the extra noise contributed by a circuit, such as an amplifier, beyond that of an ideal element that would provide gain with no added noise. For example, an ideal amplifier with 10 dB of gain would have 10 dB more noise power at its output than its input, but would still have a perfect noise figure of 0 dB.
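The ratio-of-ratios definition translates directly into code when you work in dB. A minimal sketch:

```python
def noise_figure_db(snr_in_db, snr_out_db):
    """Noise figure = input SNR minus output SNR, both in dB
    (equivalently, noise factor F = SNRin / SNRout in linear terms)."""
    return snr_in_db - snr_out_db

# An ideal amplifier raises signal and noise together, preserving SNR,
# so its noise figure is 0 dB regardless of gain
ideal_nf = noise_figure_db(40.0, 40.0)

# A real amplifier that degrades a 40 dB input SNR to 37 dB at its
# output has a 3 dB noise figure
real_nf = noise_figure_db(40.0, 37.0)
```

Note that gain drops out of this expression entirely; it matters only because you need it to compute the output SNR correctly, which is the point made below.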

It’s important to understand that noise figure measurements must accurately account for circuit gain because it directly affects measured output noise and therefore noise figure. Gain errors translate directly to noise figure errors.

The Y factor method is the most common way to make these measurements. A switchable, calibrated noise source is connected to the DUT input and a noise figure analyzer or signal analyzer is connected to the output. An external preamp may be added to optimize analyzer signal/noise and improve the measurement.

The central element of the noise source is a diode, driven to an avalanche condition to produce a known quantity of noise power. The diode is not a very good 50Ω impedance, so it is often followed by an attenuator to improve impedance match with the presumed 50Ω DUT.

The noise figure meter or signal analyzer switches the noise source on and off and compares the results, deriving both DUT gain and noise figure versus frequency. It’s a convenient way to make the measurements needed for noise figure, and specifications are readily available for both the noise source and the analyzer.
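The arithmetic behind the Y-factor method is compact: Y is the linear ratio of the measured on/off noise powers, and the noise factor is F = ENR/(Y − 1) in linear terms. A sketch with hypothetical readings (the ENR and power values below are invented for illustration):

```python
import math

def y_factor_nf_db(enr_db, n_on_dbm, n_off_dbm):
    """Noise figure from the Y-factor method: Y is the on/off ratio
    of measured output noise power, and F = ENR / (Y - 1), all
    converted between dB and linear as needed."""
    y = 10 ** ((n_on_dbm - n_off_dbm) / 10)      # linear on/off ratio
    f = 10 ** (enr_db / 10) / (y - 1)            # linear noise factor
    return 10 * math.log10(f)                    # back to dB

# Hypothetical readings: a 15 dB ENR source, with measured output
# noise of -80 dBm (source off) and -67 dBm (source on)
nf = y_factor_nf_db(15.0, -67.0, -80.0)          # about 2.2 dB
```

Notice how the result depends on the difference between the on and off readings: as Y approaches 1 (weak noise source, lossy DUT), the calculation becomes ill-conditioned, which foreshadows the millimeter-frequency troubles below.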

However, the impedance match between the noise source and the DUT affects the power that is actually delivered to the DUT and therefore the gain calculated by measuring its output. The impedance match is generally very good at low frequencies and with an attenuator in the noise source output. This enables accurate estimates of measurement uncertainty.

Unfortunately, as you approach millimeter frequencies, impedances are less ideal, gains are lower, and noise source output declines. Noise figure measurements are more challenging, and uncertainty is harder to estimate. In at least one upcoming post, I’ll discuss these problems and some practical solutions and measurement choices.


Why go to all the trouble? Whether or not it has mass, noise is a critical factor in many applications. By making individual or incremental noise figure measurements, you can identify and quantify noise contributors in your designs. This is the knowledge that will help you minimize noise, and optimize the cost and performance tradeoffs that are an important part of the value you add as an RF engineer.

Originally posted Jul 8, 2016


GPS and the skill, creativity and imagination of engineers


When it comes to predicting the future, I’m not sure if RF engineers are any better or worse than others—say economists or the general public. If you limit predictions to the field of electronics and communications, engineers have special insight, but they will still be subject to typical human biases and foibles.

However, when it comes to adapting to the future as it becomes their present, I’d argue that engineers show amazing skill. The problem solving and optimizing that comes instinctively to engineers give them tremendous ability to take advantage of opportunities, both technical and otherwise. Some say skillful problem solving is the defining characteristic of an engineer.

GPS is a good example of both adaptation and problem-solving, and it’s on my mind because of historical and recent developments.

It was originally envisioned as primarily a navigation system, and the scientists and engineers involved did an impressive job of predicting a technological future that could be implemented on a practical basis. Development began in 1973, with the first satellite launch in 1978, so the current system that includes highly accurate but very inexpensive receivers demonstrates impressive foresight. Indeed, the achievable accuracy is so high in some implementations that it is much better than even the dimensions of receive antennas, and special choke ring antennas are used to take advantage of it.

In some systems, GPS accuracy is better than the dimensions of the receive antenna, and in surveying you’ve probably seen precision radially symmetric antennas such as this ring type. Diagram from the US Patent and Trademark Office, patent #6040805

Over the years, GPS has increasingly been used to provide another essential parameter: time. As a matter of fact, the timing information from GPS may now be a more important element of our daily lives than navigation or location information. It’s especially important in keeping cellular systems synchronized, and it’s also used with some wireline networks, the electrical power grid, and even banking and financial trading operations.

As is so often the case, the dependencies and associated risks are exposed when something goes wrong. In January of this year, in the process of decommissioning one GPS satellite, the U.S. Air Force set the clocks wrong on about 15 others. The error was only 13 microseconds, but it caused about 12 hours of system problems and alarms for telecommunications companies. Local oscillators can provide a “holdover time” of about a day in these systems, so a 12-hour disturbance got everyone’s attention.
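It's easy to appreciate the scale of a 13-microsecond error by converting it to distance at the speed of light: in ranging terms it's several kilometers, an enormous error for a system whose receivers routinely resolve meters.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def timing_error_to_range_m(dt_s):
    """Equivalent ranging error for a receiver clock offset of dt_s seconds."""
    return C * dt_s

err_km = timing_error_to_range_m(13e-6) / 1000   # roughly 3.9 km
```

The same conversion explains why timing consumers such as cellular networks specify holdover oscillators in parts of a microsecond.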

Outages such as this are a predictable part of our technological future, whether from human error, jamming, hardware failure, or a natural disaster such as the Carrington Event. The fundamental challenge is to find ways to adapt or, better yet, to do the engineering in advance to be able to respond without undue hardship or delay.

RF engineering obviously has a major role to play here, and at least two technologies are currently practical as alternates or supplements to GPS:

  • The proposed eLORAN system would replace several earlier LORAN systems that have been shut down in recent years. The required engineering is no barrier, but legislative support is another matter. In addition to serving as a GPS backup, eLORAN offers better signal penetration into buildings, land and water.
  • Compact, low-power atomic frequency references can offer independence from GPS, or may provide greatly extended holdover times. Their modest cost should allow wide adoption in communications systems.

As legendary computer scientist Alan Kay once said, “The best way to predict the future is to invent it.” If past is prologue, and I believe it is, I’m confident RF engineers will continue to be among the best at designing for the future, adapting to technology opportunities, and solving the problems that arise along the way.


Insidious Measurement Errors

Posted by benz Oct 14, 2016

Originally posted Jun 23, 2016


How to avoid fooling yourself


Some years ago I bought an old building lot for a house, and hired a surveyor because the original lot markers were all gone. It was a tough measurement task because nearby reference monuments had also gone missing since the lot was originally platted. Working from the markers in several adjacent plats, the surveyor placed new ones on my lot—but when he delayed the official recording of the survey I asked him why. His reply: he didn’t want to “drag other plat errors into the new survey.” Ultimately, it took three attempts before he was satisfied with the placement.

Land surveys are different from RF measurements, but some important principles apply to both. Errors sometimes stack up in unfortunate ways, and an understanding of insidious error mechanisms is essential if you want to avoid fooling yourself. This is especially true when you’re gathering more information to better understand measurement uncertainty.

Keysight engineers have the advantage of working in an environment that is rich in measurement hardware and expertise. They have access to multiple measurement tools for comparing different approaches, along with calibration and metrology resources. I thought I’d take a minute to discuss a few things they’ve learned and approaches they’ve taken that may help you avoid sneaky errors.

Make multiple measurements and compare. I’m sure you’re already doing this in some ways—it’s an instinctive practice for test engineers, and can give you an intuitive sense of consistency and measurement variability. Here’s an example of three VSWR measurements.

VSWR of three different signal analyzers in harmonic bands 1-4. With no input attenuation, mismatch is larger than it would otherwise be. The 95% band for VSWR is about 1.6 dB.

It’s always a good idea to keep connections short and simple, but it’s worth trying different DUT connections to ensure that a cable or connector—or even a specific bit of contamination—isn’t impairing many measurements in a consistent way that’s otherwise hard to spot. The same thing applies to calibration standards and adapters.

The multiple-measurements approach also applies when using different types of analyzer. Signal analyzers can approach the accuracy of RF/microwave power meters, and each can provide a check on an error by the other.

Adjust with one set of equipment and verify with another. DUTs may be switched from one station to another, or elements such as power sensors may be exchanged periodically to spot problems. This can be done on a sample or audit basis to minimize cost impacts.

In estimating uncertainty, understand the difference between worst case and best estimate. As Joe Gorin noted in a comment on an earlier post: “The GUM, in an appendix, explains that the measurement uncertainty should be the best possible estimate, not a conservative estimate. When we know the standard deviation, we can make better estimates of the uncertainty than we can when we have only warranted specifications.” A more thorough understanding of the performance of the tools you have may be an inexpensive way to make measurements better.

Make sure the uncertainties you estimate are applicable to the measurements you make. Room temperature specifications generally apply from 20 to 30 °C, but the “chimney effect” within system racks and equipment stacks can make instruments much warmer than the ambient temperature.

Take extra care as frequencies increase. Mismatch can be the largest source of uncertainty in RF/microwave measurements, and it generally gets worse as frequencies increase. Minimizing it can be worth an investment in better cables, attenuators, adapters, and torque wrenches.
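For a sense of the numbers, worst-case mismatch error bounds follow directly from the reflection coefficients implied by the source and load VSWRs. A quick sketch:

```python
import math

def mismatch_limits_db(vswr_source, vswr_load):
    """Worst-case mismatch error bounds in dB for a source and load
    with the given VSWRs, from 20*log10(1 +/- Gs*Gl) where G is the
    reflection coefficient magnitude (VSWR - 1) / (VSWR + 1)."""
    gs = (vswr_source - 1) / (vswr_source + 1)
    gl = (vswr_load - 1) / (vswr_load + 1)
    return (20 * math.log10(1 - gs * gl),    # low-side bound
            20 * math.log10(1 + gs * gl))    # high-side bound

# Two mediocre 1.5:1 matches bound the error at roughly +/- 0.35 dB
lo, hi = mismatch_limits_db(1.5, 1.5)
```

Halving each reflection coefficient, say with a good attenuator at the interface, cuts the error bound by roughly a factor of four, which is why the investment in better components pays off.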


This isn’t meant to suggest that you adopt an excessively paranoid outlook—but it’s safe to assume the subtle errors really are doing their best to hide from you while they subvert your efforts. Said another way, it’s always best to be alert and diverse in your approaches.