
Don’t sacrifice the benefits of improved equipment by missing the basics

Just a decade ago I would have found it hard to believe, but millimeter-frequency applications above 50 GHz are going genuinely mainstream. Wireless HD, 802.11ad wireless networking, 5G cellular, and automotive radar are high-profile examples of an important trend, supported by remarkable advances in semiconductor technology.

I’m not surprised at the existence of applications at these frequencies, but I am surprised that retail prices begin under $200 for Wireless HD, and that radar options are available on fairly pedestrian cars, albeit in their higher-end versions. Bringing this technology to the mass market at the price points required for large-scale adoption poses tremendous challenges in design and manufacturing.

The engineering challenges have been a little less daunting as more millimeter test equipment has become available with single-box direct-connection coverage to 67 GHz. Keysight’s recent introduction of the N9041B UXA X-Series signal analyzer breaks new ground, increasing direct coverage in coax to 110 GHz. The new signal analyzer provides low noise, good accuracy, and wide bandwidth—1 GHz for internal sampling and 5 GHz for external—to allow engineers to focus on their designs and measurement results instead of pulling together multi-part test solutions where calibration and repeatability may be in question.

However, all this hard-won (and rather expensive) performance can be compromised if you miss even one of the fundamental practices for good measurements at these very high frequencies. The millimeter range is generally defined as 30–300 GHz, with wavelengths down to 1 mm, and these tiny wavelengths are the heart of many problems and challenges.
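As a quick sanity check on those dimensions, here’s a tiny Python sketch that converts frequency to wavelength; nothing in it is instrument-specific.

```python
# Wavelength across the millimeter band: lambda = c / f
C = 299_792_458.0  # speed of light, m/s

for f_ghz in (30, 60, 110, 300):
    wavelength_mm = C / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz:>4} GHz -> {wavelength_mm:5.2f} mm")
# 30 GHz -> 9.99 mm ... 300 GHz -> 1.00 mm
```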

Connectors are a good place to start because they exemplify so many of the ways that millimeter measurements will test you. Here’s a close-up of female and male 1 mm connectors, which are mode-free to more than 110 GHz.

Female and male 1 mm connectors. The center pin in the jack on the right is only one-quarter of 1 mm in diameter, generally too small for me to see without magnification. (Image from Wikimedia Commons)

The small size and precise geometry of millimeter connectors and cables demand special machining and fabrication. They are necessarily somewhat expensive and inescapably more delicate as frequencies increase and dimensions decrease. In a previous post I described their frequency ranges and intermating possibilities, and here I’ll note that, despite mechanical compatibility, all intermating and even same-connector mating still produces impedance problems that should be avoided wherever possible.

Perhaps the most important connection is the one at the front panel of the test equipment. Despite their delicacy, male connectors are better than the alternative at millimeter frequencies. The usual practice is to attach a female-to-female “connector saver” at the instrument, but this choice is complicated by the fact that impedance problems and loss through cables and connectors or adapters also get worse as frequencies increase. In some cases, it’s worth the cost and trouble of acquiring custom cabling with correct gender at each end, especially considering how precious power and performance are at these frequencies. Custom cabling also allows the cables to be as short as possible. Indeed, one tactic that is sometimes overlooked is to simply move the DUT and instrument as close to each other as practical.

The picture above suggests another area of best practices: connector care. These connectors do not appear obviously damaged or displaced, but some contamination is clearly present. Because of their tiny dimensions, special cleaning materials and techniques are needed for microwave and millimeter connectors. Connector gauges are also important to ensure that mechanical dimensions are within the tight tolerances that provide a reasonable impedance match. For more detail on torque and other coaxial connection issues and practices, see the classic (old!) application note Principles of Microwave Connector Care (AN-326), Keysight literature number 5954-1566.

Proper connector torque is another fundamental for good millimeter connections, and in a previous post on torque I discussed the mechanical essentials and ways to avoid damaging these pricey little parts.

Finally, given the connection losses in coax and the sometimes cumbersome physical implications of waveguide, you may want to consider external mixers. Keysight’s Smart Harmonic Mixers cover 50-110 GHz and make this approach much more convenient and accurate than previous mixers. They allow you to create a remote test head, placing the measurement plane right at the DUT. While they lack the IF bandwidth of the new UXA signal analyzer, they do allow non-millimeter signal analyzers to cover these high frequencies.

These fundamentals are hardly “basics,” but they’re straightforward practices to implement—and they’ll help you pass the tests you’ll face as your designs take you well into millimeter territory.

 

Originally posted Oct 4, 2016

Sometimes the answer is not in the first place you look

Real-time spectrum analyzer (RTSA) capability has gradually moved from a specialized measurement made by specialized instruments to an option available for many spectrum and signal analyzers. Keysight recently added an RTSA option to FieldFox handheld analyzer models that include RF/microwave spectrum analyzers and combination analyzers (spectrum analyzer, vector network analyzer, and cable/antenna tester).

We’ve come a long way from the early 1990s, when RTSA was available only in surveillance-focused RF spectrum analyzers with purpose-built signal processing hardware. A decade later RTSA became a type of RF/microwave analyzer for more general use, though still with a dedicated architecture to meet the signal processing demands.

A few years later, the increasing power of ASICs and FPGAs allowed RTSA to be folded into mainstream signal analyzers as an option that, in many cases, can be retrofitted to existing ones. The FieldFox models provide real-time bandwidth to 10 MHz and frequency coverage to 50 GHz, accurately measuring signals as brief as 12 µs. They can detect—but may not accurately measure—signals as brief as 22 ns. Impressive performance for a general purpose tool that is handheld, battery powered, and environmentally sealed.

This “density” display from a FieldFox microwave combination analyzer with the RTSA option represents the dynamics of a number of different digitally modulated signals with different colors signifying frequency of occurrence.

To catch these brief events, the FieldFox handhelds calculate 120,000 spectra per second. The benchtop signal analyzers are even faster, covering real-time bandwidths to 510 MHz and detecting signals as short as 3.3 ns.

However, detection and basic measurement of a transient is sometimes just the first step in solving a problem, and I thought I’d describe additional tools and techniques that will tell you what you need to know.

After power and frequency, timing is the element that can help you assess the significance of a signal and begin to understand cause and effect. Spectral sequences such as spectrograms put power, frequency, and timing in a single display that dramatically enhances understanding. A simple process of saving and displaying successive spectra has great intuitive leverage.
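If you have a block of captured samples, you can approximate this offline with ordinary tools. Here’s a minimal sketch using NumPy/SciPy on a synthetic hopping signal—a stand-in for digitized IF or I/Q data, not anyone’s instrument firmware:

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Synthetic stand-in for captured data: a carrier hopping between
# two frequencies every 10 ms.
fs = 1e6                                         # assumed sample rate, Hz
t = np.arange(int(0.1 * fs)) / fs
f_inst = np.where((t % 0.02) < 0.01, 100e3, 250e3)
x = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)   # frequency-modulated tone

# Successive spectra stacked against time: power, frequency, and timing
# in one display.
f_ax, t_ax, Sxx = signal.spectrogram(x, fs=fs, nperseg=1024)
plt.pcolormesh(t_ax, f_ax / 1e3, 10 * np.log10(Sxx + 1e-12))
plt.xlabel("Time (s)")
plt.ylabel("Frequency (kHz)")
plt.title("Spectrogram from successive spectra")
plt.show()
```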

Some of the most problematic signals and events are both brief and highly intermittent. In these situations the essential element of the measurement solution is triggering, from basic magnitude triggering to frequency-mask and time-qualified triggers. These techniques take advantage of real-time calculations of signal magnitude and spectrum to set a customized trap for the specific signal or event you’re chasing. The analyzer can do the tedious work while you get a cup of coffee.

One of the most powerful and intuitive tools for elusive signals is signal capture and playback. This is generally performed with 89600 VSA software, operating on the same benchtop signal analyzer platform as RTSA. The entire signal is streamed to memory, without gaps, for playback and flexible post-processing. Since the signal is captured in the time domain, any type of analysis—in any domain—can be selected after the fact.

Capture and playback is a sort of time machine for the RF engineer. The signal analyzers and VSA software support negative trigger delays, allowing analysis before the trigger event. They also implement signal resampling, so you can go back in time and change your mind about center frequency and span. Magic! You can even change the speed of playback to see events in all their subtle detail.
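In software terms, a negative trigger delay is just a circular buffer that never stops recording. A minimal sketch, with the buffer sizes and trigger function as placeholder assumptions:

```python
from collections import deque

def capture(samples, trigger_fn, pre_len=4096, post_len=4096):
    """Return a gap-free record surrounding the first trigger event."""
    stream = iter(samples)
    pre = deque(maxlen=pre_len)      # circular buffer: oldest samples fall off
    for s in stream:
        if trigger_fn(s):            # e.g., a magnitude threshold
            post = [s] + [v for v, _ in zip(stream, range(post_len - 1))]
            return list(pre) + post  # includes samples *before* the trigger
        pre.append(s)
    return None                      # trigger never fired

# Hypothetical usage, with iq_stream as your digitized sample source:
# record = capture(iq_stream, lambda s: abs(s) > 0.5)
```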

 

When the signal or behavior seems to be aggressively hiding from you, the most effective approach is to compound these measurement techniques: Real-time analysis enables frequency mask triggering, which enables signal capture with a negative trigger delay. You can lie in wait, missing nothing while you enjoy your coffee or tea, and capture the most elusive problem for later analysis of any type you choose.

Originally posted Sept 21, 2016

 

No—but the way forward has implications for RF and microwave testing

In Electronic Design, Lou Frenzel recently asked a couple of provocative questions, and in this post I’d like to look at both from a different angle (I previously discussed one in the context of engineering imagination). Specifically, Lou wondered whether communications data rates have really maxed out and whether smartphones have peaked. He is taking a forward look through the mists of time and, as I once learned as a product-line forecaster, “It is difficult to make predictions, especially about the future.” *

Back then, my experience made it clear I was no seer and, on a consistent basis, nobody else was either. Nature seems to abhor this sort of thing. However, I could be Captain Obvious and do a good job pointing out big things that were relatively clear. I’d like to take a stab at that here.

First, data rates: We all know that what matters in the end is effective data rate, the actual rate—including latency—that you or I or our customers can get at any given time and place. By that standard there is a lot that needs to be done, and a lot that can be done.

There is no talk of repealing Shannon-Hartley, but modern systems are aiming for MIMO up to 8×8 and QAM as dense as 4096 to crowd more bits into the available hertz in a practical fashion. Combine these techniques with wider bandwidths and more OFDM subcarriers and the potential data rates are seriously impressive: LTE-A proposes 1.6 Gbps for wireless, and DOCSIS 3.1 promises a 10 Gbps wired downstream link.
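The arithmetic behind those numbers is easy to check. Here’s a quick sketch of bits per symbol for dense QAM and the Shannon-Hartley bound; the channel bandwidth and SNR are assumptions chosen for illustration:

```python
import numpy as np

# Bits per symbol for dense QAM constellations
for m in (64, 256, 1024, 4096):
    print(f"{m}-QAM: {int(np.log2(m))} bits/symbol")

# Shannon-Hartley bound for an assumed 100 MHz channel at 35 dB SNR
bw_hz, snr_db = 100e6, 35.0
capacity = bw_hz * np.log2(1 + 10**(snr_db / 10))
print(f"~{capacity / 1e9:.2f} Gbit/s per stream; 8x8 MIMO offers up to 8 streams")
```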

Denser constellations encode more bits in each symbol, increasing data rates for a given bandwidth. This VSA display of errors and coding includes a constellation of 4096 states or 12 bits/symbol.

Spectral efficiency is just the platform for a host of improvements and enhancements that will deliver more data in more places. That’s the key to the effective data rates that matter much more than the theoretical ones in the press releases.

In my experience, the inability to get 20 Mb/s in any particular location pales in comparison to the inability to get even 1 Mb/s in so many locations. Even IoT and smart homes or businesses generally prioritize a reliable connection over a high data rate.

Fat pipes for direct connections and backhaul will help because high capacity can usually be sliced up to give more users an adequate sliver of the bandwidth pie. Adaptive and cognitive systems can make the most of the existing and evolving environment. Beamforming and small—pico or femto—cells can make better use of the local RF landscape. Multi-mode devices can use the best one of several radios to move the data of the moment.

Millimeter frequencies are yet another approach to matching a need with an improving technology. These can exploit the (currently) open country of much higher frequencies and bandwidths, and turn limited range into an advantage in situations such as wireless video, where frequency reuse can be room-to-room.

I’m hoping that these technology improvements are paired with server and infrastructure investments, to reduce latency. It’s another factor that often means more than Mb/s, and can mean a lot in terms of user satisfaction.

I won’t try to predict details, but I know this is good news for RF engineers. All these tactics for increasing effective data rate require innovative design and manufacturing to yield devices that perform well enough, meet standards, and hit the cost targets essential to success in the marketplace.

As for Lou’s speculation that smartphones may have peaked, he may be right about smartphones per se, but I look at this in a different way. Those glossy little (or not) slabs should instead be seen as the computer/communications/information/audio-video image device you have with you constantly—and “smartphone” sells them short. Seen in that light, the opportunity for transformative growth is huge.

Consider virtual and augmented reality, especially in the context of the explosive success of apps such as Pokemon Go. The need for high-resolution, low-latency video will drive data rates and the evolution of the things we call smartphones in directions most of us haven’t yet imagined. While the value and wisdom of those directions is open to debate, the demands on RF design and test seem very clear to this Captain Obvious.

 

* Attributed to everyone from Niels Bohr to Yogi Berra, but probably from a Danish humorist in 1948 or earlier.

Originally posted Aug 24, 2016

 

Sometimes a common understanding is not common

 

It’s always an interesting experience when I find that one of my assumptions is wrong, or at least not very right. Yes, it’s a chance to learn and grow and all that, but it sometimes provokes puzzlement or a little disappointment. This is one of those puzzling and slightly disappointing cases.

I first encountered the “near-far” term and associated concepts in an internal training talk by an R&D project manager with deep experience in RF circuits and measurements. He was explaining different dynamic range specifications in terms of distortion and interference. His context was the type of real-world problems that affect spec and design tradeoffs.

In his talk, the project manager explained some of the ways design considerations and performance requirements depend on distance. After a couple of examples his meaning was clear in terms of the wide range of distances involved and the multitude of implications for RF engineering. Whether over-the-air or within a device, physical distance really matters.

At the time, I thought it was a neat umbrella concept, linking everything from intentional wireless communications and associated unintentional interference to the undesirable coupling that is an ever-present challenge in today’s multi-transceiver wireless devices. For example, even small spurious or harmonic products can cause problems with unrelated radios in a compact device where 5 cm is near and 5 km—the base station you want to reach—is far.

That talk was a formative experience for me, way back in the 1980s. I kept near-far considerations and configurations in my mind as I learned about wireless technologies, equipment design and tradeoffs, and avoidance of interference problems. The near-far concept illuminated the issues behind a wide range of schemes and implementations.

Though I didn’t hear the near-far concept too often, I assumed most RF engineers thought along those lines and presumably used those terms. A recent Web search for near-far problem let me know my assumption was faulty. The search results are relatively modest and mostly focus on the “hearability problem” in CDMA wireless schemes. This is an excellent example of a near-far situation, where transmitters at shorter distances are received at higher power, making it difficult for correlators—which see every signal but the target as noisy interference—to extract smaller signals.

Measuring power in the code domain separates received power according to individual transmitters and their codes. Demodulation is most effective when code powers are equal. This example is from the N9073C W-CDMA/HSPA+ X-Series Measurement App.

However, I don’t think the narrowing of meaning is a matter of the era when the term came into use, since I heard the term years before CDMA, and some people were using it years before that. Perhaps it reflects the fact that CDMA was a very interesting and high-profile example, and a single association with the term was thus established.

In any case, I think this shrunken use of the term is unfortunate. Careful consideration of potential near-far issues can help engineers avoid serious problems, or at least address them early on, before solutions are foreclosed or too much money is spent.

One cautionary example is the c.2010 effort by LightSquared (now Ligado Networks) to expand mobile 4G-LTE coverage using terrestrial base stations in a band originally intended for satellites. The band was adjacent to some GPS frequencies, and the switch from satellite distances (far) to terrestrial ones (near) dramatically increased the likelihood and severity of interference problems. The large reduction in distance upset earlier assumptions about relative signal strength—assumptions that drove the design, performance, and cost of many GPS receivers.

The potential interference problems prevented approval of the original LightSquared plan, and the fate of its portion of the L-Band is not yet determined. Whatever it is, I expect it will more fully account for the near-far issues, along with the cost and performance requirements related to both new and existing equipment.

The near-far concept also has a probability dimension. As you’d expect, some sins of RF interference are more likely to be a critical issue as the density of radios in our environment continues its dramatic increase. Some problems that were once far away are getting nearer all the time.

 

To satisfy my own curiosity, I’ll leave you with two questions: Have you encountered the near-far concept? Or do you rely on a touchstone idea, learned from an experienced hand, that isn’t as widely known as you once thought?

Baseband and IF Sampling

Originally posted Aug 9, 2016

 

Different ways to get your signal bits

 

There’s a long history of synergy and a kind of mutual bootstrapping in the technology of test equipment and the devices it’s used to develop and manufacture. Constant technology improvements lead to welcome—and sometimes crucial—improvements in RF performance. It’s a virtuous cycle that powers our field, but it also presents us with some challenging choices as the landscape evolves.

Signal analyzers and digital oscilloscopes have exemplified these improvements and illustrate the complex choices facing RF engineers. The latest signal analyzer, for example, covers bandwidths as wide as 1 GHz at frequencies up to 50 GHz. New oscilloscopes offer bandwidths as wide as 63 GHz, solidly in the millimeter range. Other oscilloscopes and digitizers, at more modest prices, cover the cellular and WLAN RF bands.

The established solution for spectrum analysis and demodulation of RF/microwave signals is the signal analyzer, and it’s logical to wonder if the technology advances in digital oscilloscopes and signal analysis software have changed your choices. If both hardware platforms can sample the bandwidths and operating frequencies used, how do you get your bits and, ultimately, the results you need?

The answer begins with an understanding of the two different approaches to sampling signals, summarized in these dramatically simplified block diagrams. First, a look at IF sampling:

In this architecture, the signal is downconverted and band-limited before being digitized. Sampling is performed on the intermediate frequency (IF) stage output.

In signal analyzers, the sampling frequency is related to the maximum bandwidth required to represent the signal under test. That frequency is usually low compared to the center frequency of the signal under test, and there is no need to change it with changes in signal center frequency.

The alternative, called baseband sampling, involves direct sampling of the entire signal under test, from DC to at least its highest occupied frequency: CF + ½ OccBW.

Here, the signal undergoes minimal processing before being digitized. The lowpass filter ensures that frequencies above the ADC’s Nyquist sampling criterion do not produce false or alias products in the processed results.

The signal under test is completely represented by baseband sampling and any type of analysis can be performed. Narrowband analysis as performed with a spectrum/signal analyzer—in the time, frequency, and modulation domains—is achieved by implementing filters, mixers, resamplers, and demodulators in DSP. Keysight’s 89600 VSA software is the primary tool for these tasks and many others, and it runs on a variety of sampling platforms.

We thus have two paths to the signal analysis we need, and we’re back to the earlier question about the best sampling choice among evolving technologies. The answer is primarily driven by performance requirements, the operating frequencies and bandwidths involved, and the resulting demands on sample rate.

The architecture of IF sampling allows for analog downconversion and filtering to dramatically reduce the required sample rate. This process has been thoroughly optimized in performance and cost, and focuses ADC performance on the essential signal. Other frequencies are excluded, and the limited bandwidth allows for ADCs with the best resolution, accuracy, and dynamic range.
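The difference in ADC burden is easy to quantify. In this sketch the carrier and bandwidth are illustrative, and the 1.25× guard factor is an assumption typical of oversampled digital IF designs:

```python
# Minimum sample rates for the two architectures (illustrative).
cf_hz, occ_bw_hz = 28e9, 1e9              # a 28 GHz carrier, 1 GHz wide

baseband_fs = 2 * (cf_hz + occ_bw_hz / 2) # Nyquist for direct sampling:
                                          # 2 x (CF + OccBW/2)
if_fs = 1.25 * occ_bw_hz                  # IF sampling scales with analysis
                                          # bandwidth, not carrier frequency
print(f"Baseband sampling: >= {baseband_fs / 1e9:.0f} GSa/s")
print(f"IF sampling:       ~  {if_fs / 1e9:.2f} GSa/s")
```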

With baseband sampling, frequency conversion and filtering are done in DSP, requiring a vast amount of digital data reduction to focus analysis on the band in question. This must precede processing for signal-analysis results such as spectrum or demodulation.

The tradeoffs explain why spectrum analysis and demodulation are generally performed using IF sampling. However, the technological evolution mentioned above explains the increasing use of baseband sampling for RF and microwave signal analysis. ADCs and DSPs are improving in cost and quality, and are frequently available on the RF engineer’s bench in the form of high-resolution oscilloscopes. RF and modulation quality performance may be adequate for many measurements, and the extremely wide analysis bandwidths available may be an excellent solution to the demands of radar, EW, and the latest wideband or aggregated-carrier wireless schemes.

 

Ultimately, personal preference is a factor that can’t be ignored. Do you look for your first insights in the time or frequency domain before delving into measurements such as demodulation? The software and hardware available these days may give you just the choice you want.

An Intuitive Look at Noise Figure

Originally posted Jul 22, 2016

 

An overview to complement your equations

 

After some recent conversations about noise figure measurements, I’ve been working to refresh my knowledge of what they mean and how they’re made. My goal was to get the essential concepts intuitively clear in my mind, in a way that would persist and therefore guide me as I looked at measurement issues that I’ll be writing about soon.

 

Maybe my summary will be helpful to you, too. As always, feel free to comment with any suggestions or corrections.

  • Noise figure is a two-port measurement, defined as an input/output ratio of signal-to-noise ratios—so it’s a ratio of ratios. The input ratio may be explicitly measured or may be implied, such as assuming that it’s simply relative to the thermal noise of an ideal passive 50 Ω source.
  • The input/output ratio is called noise factor, and when expressed in dB it’s called noise figure. Noise figure is easier to understand in the context of typical RF measurements, and therefore more common.
  • It’s a measure of the extra noise contributed by a circuit, such as an amplifier, beyond that of an ideal element that would provide gain with no added noise. For example, an ideal amplifier with 10 dB of gain would have 10 dB more noise power at its output than its input, but would still have a perfect noise figure of 0 dB.

It’s important to understand that noise figure measurements must accurately account for circuit gain because it directly affects measured output noise and therefore noise figure. Gain errors translate directly to noise figure errors.
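Numerically, the definitions above reduce to a simple difference in dB. A toy sketch with invented SNR values:

```python
# Noise figure as a ratio of ratios: in dB it becomes a difference.
def noise_figure_db(snr_in_db, snr_out_db):
    return snr_in_db - snr_out_db

print(noise_figure_db(40.0, 40.0))   # ideal amplifier: NF = 0 dB
print(noise_figure_db(40.0, 37.0))   # 3 dB SNR degradation: NF = 3 dB
```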

The Y factor method is the most common way to make these measurements. A switchable, calibrated noise source is connected to the DUT input and a noise figure analyzer or signal analyzer is connected to the output. An external preamp may be added to optimize analyzer signal/noise and improve the measurement.

The central element of the noise source is a diode, driven to an avalanche condition to produce a known quantity of noise power. The diode is not a very good 50Ω impedance, so it is often followed by an attenuator to improve impedance match with the presumed 50Ω DUT.

The noise figure meter or signal analyzer switches the noise source on and off and compares the results, deriving both DUT gain and noise figure versus frequency. It’s a convenient way to make the measurements needed for noise figure, and specifications are readily available for both the noise source and the analyzer.
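The Y-factor math itself is compact. Here’s a sketch using the standard relations, with the ENR and the measured on/off noise powers invented for illustration:

```python
import math

def y_factor_nf_db(enr_db, n_on_w, n_off_w):
    y = n_on_w / n_off_w                # ratio of measured noise powers
    f = 10**(enr_db / 10) / (y - 1)     # noise factor from ENR and Y
    return 10 * math.log10(f)           # noise figure in dB

# 15 dB ENR source; noise powers measured with the source on and off:
print(f"NF = {y_factor_nf_db(15.0, 1.05e-8, 1.05e-9):.2f} dB")   # ~5.5 dB
```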

However, the impedance match between the noise source and the DUT affects the power that is actually delivered to the DUT and therefore the gain calculated by measuring its output. The impedance match is generally very good at low frequencies and with an attenuator in the noise source output. This enables accurate estimates of measurement uncertainty.

Unfortunately, as you approach millimeter frequencies, impedances are less ideal, gains are lower, and noise source output declines. Noise figure measurements are more challenging, and uncertainty is harder to estimate. In at least one upcoming post, I’ll discuss these problems and some practical solutions and measurement choices.

 

Why go to all the trouble? Whether or not it has mass, noise is a critical factor in many applications. By making individual or incremental noise figure measurements, you can identify and quantify noise contributors in your designs. This is the knowledge that will help you minimize noise, and optimize the cost and performance tradeoffs that are an important part of the value you add as an RF engineer.

Originally posted Jul 8, 2016

 

GPS and the skill, creativity and imagination of engineers

 

When it comes to predicting the future, I’m not sure if RF engineers are any better or worse than others—say economists or the general public. If you limit predictions to the field of electronics and communications, engineers have special insight, but they will still be subject to typical human biases and foibles.

However, when it comes to adapting to the future as it becomes their present, I’d argue that engineers show amazing skill. The problem solving and optimizing that come instinctively to engineers give them tremendous ability to take advantage of opportunities, both technical and otherwise. Some say skillful problem solving is the defining characteristic of an engineer.

GPS is a good example of both adaptation and problem-solving, and it’s on my mind because of historical and recent developments.

It was originally envisioned primarily as a navigation system, and the scientists and engineers involved did an impressive job of predicting a technological future that could be implemented on a practical basis. Development began in 1973, with the first satellite launch in 1978, so the current system that includes highly accurate but very inexpensive receivers demonstrates impressive foresight. Indeed, the achievable accuracy is so high in some implementations that it is much better than even the dimensions of receive antennas, and special choke ring antennas are used to take advantage of it.

In some systems, GPS accuracy is better than the dimensions of the receive antenna, and in surveying you’ve probably seen precision radially symmetric antennas such as this ring type. Diagram from the US Patent and Trademark Office, patent #6040805

Over the years, GPS has increasingly been used to provide another essential parameter: time. As a matter of fact, the timing information from GPS may now be a more important element of our daily lives than navigation or location information. It’s especially important in keeping cellular systems synchronized, and it’s also used with some wireline networks, the electrical power grid, and even banking and financial trading operations.

As is so often the case, the dependencies and associated risks are exposed when something goes wrong. In January of this year, in the process of decommissioning one GPS satellite, the U.S. Air Force set the clocks wrong on about 15 others. The error was only 13 microseconds, but it caused about 12 hours of system problems and alarms for telecommunications companies. Local oscillators can provide a “holdover time” of about a day in these systems, so a 12-hour disturbance got everyone’s attention.
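Holdover arithmetic is straightforward if you model the local oscillator as having a constant fractional frequency offset; real holdover specs also account for drift and temperature, so treat this as a rough sketch:

```python
# Time error accumulated by a free-running oscillator with a constant
# fractional frequency offset (values assumed for illustration).
frac_offset = 1e-11        # oscillator fractional frequency accuracy
time_budget_s = 1.5e-6     # allowable accumulated time error

holdover_s = time_budget_s / frac_offset
print(f"Holdover: ~{holdover_s / 3600:.0f} hours")   # ~42 hours -- about a day
```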

Outages such as this are a predictable part of our technological future, whether from human error, jamming, hardware failure, or a natural disaster such as the Carrington Event. The fundamental challenge is to find ways to adapt or, better yet, to do the engineering in advance to be able to respond without undue hardship or delay.

RF engineering obviously has a major role to play here, and at least two technologies are currently practical as alternates or supplements to GPS:

  • The proposed eLORAN system would replace several earlier LORAN systems that have been shut down in recent years. The required engineering is no barrier, but legislative support is another matter. In addition to serving as a GPS backup, eLORAN offers better signal penetration into buildings, land and water.
  • Compact, low-power atomic frequency references can offer independence from GPS, or may provide greatly extended holdover times. Their modest cost should allow wide adoption in communications systems.

As legendary computer scientist Alan Kay once said, “The best way to predict the future is to invent it.” If past is prologue, and I believe it is, I’m confident RF engineers will continue to be among the best at designing for the future, adapting to technology opportunities, and solving the problems that arise along the way.

Insidious Measurement Errors

Originally posted Jun 23, 2016

 

How to avoid fooling yourself

 

Some years ago I bought an old building lot for a house, and hired a surveyor because the original lot markers were all gone. It was a tough measurement task because nearby reference monuments had also gone missing since the lot was originally platted. Working from the markers in several adjacent plats, the surveyor placed new ones on my lot—but when he delayed the official recording of the survey I asked him why. His reply: he didn’t want to “drag other plat errors into the new survey.” Ultimately, it took three attempts before he was satisfied with the placement.

Land surveys are different from RF measurements, but some important principles apply to both. Errors sometimes stack up in unfortunate ways, and an understanding of insidious error mechanisms is essential if you want to avoid fooling yourself. This is especially true when you’re gathering more information to better understand measurement uncertainty.

Keysight engineers have the advantage of working in an environment that is rich in measurement hardware and expertise. They have access to multiple measurement tools for comparing different approaches, along with calibration and metrology resources. I thought I’d take a minute to discuss a few things they’ve learned and approaches they’ve taken that may help you avoid sneaky errors.

Make multiple measurements and compare. I’m sure you’re already doing this in some ways—it’s an instinctive practice for test engineers, and can give you an intuitive sense of consistency and measurement variability. Here’s an example of three VSWR measurements.

VSWR of three different signal analyzers in harmonic bands 1-4. With no input attenuation, mismatch is larger than it would otherwise be. The 95% band for VSWR is about 1.6 dB.

It’s always a good idea to keep connections short and simple, but it’s worth trying different DUT connections to ensure that a cable or connector—or even a specific bit of contamination—isn’t impairing many measurements in a consistent way that’s otherwise hard to spot. The same thing applies to calibration standards and adapters.

The multiple-measurements approach also applies when using different types of analyzer. Signal analyzers can approach the accuracy of RF/microwave power meters, and each can provide a check on an error by the other.

Adjust with one set of equipment and verify with another. DUTs may be switched from one station to another, or elements such as power sensors may be exchanged periodically to spot problems. This can be done on a sample or audit basis to minimize cost impacts.

In estimating uncertainty, understand the difference between worst case and best estimate. As Joe Gorin noted in a comment on an earlier post: “The GUM, in an appendix, explains that the measurement uncertainty should be the best possible estimate, not a conservative estimate. When we know the standard deviation, we can make better estimates of the uncertainty than we can when we have only warranted specifications.” A more thorough understanding of the performance of the tools you have may be an inexpensive way to make measurements better.

Make sure the uncertainties you estimate are applicable to the measurements you make. Room temperature specifications generally apply from 20 to 30 °C, but the “chimney effect” within system racks and equipment stacks can make instruments much warmer than the ambient temperature.

Take extra care as frequencies increase. Mismatch can be the largest source of uncertainty in RF/microwave measurements, and it generally gets worse as frequencies increase. Minimizing it can be worth an investment in better cables, attenuators, adapters, and torque wrenches.
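For reference, the worst-case mismatch limits follow directly from the two reflection coefficients. A sketch with illustrative VSWR values:

```python
import math

# Worst-case mismatch uncertainty from two reflection coefficients
# (standard approximation; VSWR values are illustrative).
def mismatch_uncertainty_db(vswr1, vswr2):
    g1 = (vswr1 - 1) / (vswr1 + 1)   # |Gamma| from VSWR
    g2 = (vswr2 - 1) / (vswr2 + 1)
    return (20 * math.log10(1 + g1 * g2),
            20 * math.log10(1 - g1 * g2))

hi_db, lo_db = mismatch_uncertainty_db(1.6, 1.9)
print(f"Mismatch uncertainty: +{hi_db:.2f} / {lo_db:.2f} dB")
```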

 

This isn’t meant to suggest that you adopt an excessively paranoid outlook—but it’s safe to assume the subtle errors really are doing their best to hide from you while they subvert your efforts. Said another way, it’s always best to be alert and diverse in your approaches.

Originally posted Jun 7, 2016

 

Is there an unlimited need for speed, or are we perfecting buggy whips?

 

Contrary to some stereotypes, engineers—especially the most effective ones—are both intuitive and creative. The benefits and tradeoffs of intuition are significant, and that’s why I’ve written here about its limits and ways it might be enhanced. As for creativity, I don’t think I understand it very well at all, and would welcome any perspective that will improve my own.

Recently, on the same day, I ran across two articles that, together, made me think about the role of imagination in engineering. It is clearly another vital talent to develop, and maybe I can learn a little about this element of creativity in the bargain.

The first was Lou Frenzel’s piece in Electronic Design, wondering about the practical limits of data bandwidth, along with its potential uses. The other was the announcement by Facebook and Microsoft of an upcoming 160 Tbit/s transatlantic cable link called MAREA that will reach from Spain to Virginia. I had to blink and read the figure again: the units really are terabits per second.

That figure made me dredge up past skepticism about an announcement, some years back, of 40 Gbit optical links. I remember wondering what applications—even in aggregate—could possibly consume such vast capacity, especially because many such links were in the works. I also wondered just how much higher things could go, scratching my head in the same way Lou did. Now I find myself reading about a cable that will carry 4,000 times more information than the 40 Gbit one, and concluding that I was suffering from a failure of imagination.

Imagination is anything but a mystical concept, and has real technical and business consequences. One of the most famous examples is the 1967 fire in the Apollo 1 spacecraft, where the unforeseen effects of a high-pressure oxygen environment during a ground test turned a small fire into a catastrophe. In congressional hearings about the accident, astronaut Frank Borman—by many accounts the most blunt and plain-spoken engineer around—spoke of the fatal fire’s ultimate cause as a failure of imagination.

Frank Borman, right, the astronaut representative on the Apollo 1 review board, reframed the cause of the Apollo 1 tragedy as essentially a failure of imagination. One clear risk was eliminated because the rocket wasn’t fueled, and perhaps that blinded NASA to the certainly fatal consequences of fire plus high-pressure oxygen in an enclosed space. (Images from Wikimedia Commons)

In RF engineering, the hazards we face are much more benign, but are still significant for our work and our careers. We may miss a better way to solve a problem, fail to anticipate a competitor’s move, or underestimate the opportunity for—or the demands on—a new application.

That’s certainly what happened when I tried to imagine how large numbers of 40 Gbit links could ever be occupied. I thought about exchanging big data files, transmitting lots of live video—even the HD that was then on its way—and a considerable expansion of mobile services. However, I completely failed to imagine things like video streaming from smartphones en masse, ubiquitous cloud computing, an Internet of umpteen things, and virtual reality (VR).

VR stands out as an example of multi-faceted innovation that was hardly a blip on the horizon a few years ago—but it now promises to be a huge consumer of both downlink and uplink bandwidth. It demands fast, high-resolution video and very low latency, and the benefits are compelling. As one example, it converts 3D video from a passively consumed product to an immersive experience with lots of instructional and entertainment applications. Just one among many future bandwidth drivers, I’m sure.

It’s been observed that data bandwidth for wireless links is often about one generation behind that of common wired ones. Both have been growing rapidly, and though I can’t imagine exactly what the demand drivers will be, I agree with Lou that we won’t reach limits soon, and there will continue to be plenty of interesting challenges.

For RF engineers it will mean solving the countless problems that stand between lofty wireless (and wired) standards and the practical, affordable products that will make them reality.

 

Note: This post has been edited to correct a bits/bytes error.

Originally posted May 18, 2016

 

Improving your chances of finding the signals you want

 

Real-time spectrum analyzers (RTSAs) are very useful tools when you’re working with time-varying or agile signals and dynamic signal environments. That’s true of lots of RF engineering tasks these days.

A good way to define real time is that every digital sample of the signal is used in calculating spectrum results. The practical implication is that you don’t miss any signals or behavior, no matter how brief or infrequent. In other words, the probability of intercept (POI) is 100 percent or nearly so.

Discussions of real-time analysis and the tracking of elusive signals are often all-or-nothing, implying that RTSAs are the only effective way to find and measure elusive signals. In many cases, however, the problems we face aren’t so clear-cut. Duty cycles may be low, or the signal behavior in question very infrequent and inconsistent, but the phenomenon to be measured still occurs perhaps once per second or more often. You need a POI much greater than the fraction of a percent that’s typical of wideband swept spectrum measurements, but you may not need the full 100 percent. In this post I’d like to talk about a couple of alternatives that will make use of tools that may already be on your bench.
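To put rough numbers on POI, consider how long a swept analyzer is actually parked where your brief signal lives. This is a simplified model—real sweeps also include settling and retrace time—with assumed span and RBW:

```python
# Simplified swept-analyzer POI: a brief event is visible only while
# the sweep is within one RBW of its frequency.
span_hz, rbw_hz = 100e6, 100e3      # assumed wideband sweep, narrow RBW

poi_swept = rbw_hz / span_hz        # fraction of the sweep spent "listening"
print(f"Swept POI for a brief event: ~{poi_swept:.1%}")   # ~0.1%
print("Gap-free processing over the same span: ~100%")
```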

A good example is the infamous 2.4 GHz ISM band, home to WLANs, Bluetooth, cordless phones, barbecue thermometers, microwave ovens, and any odd thing you IoT engineers may dream up. Using the 89600 VSA software, I made two measurements of this 100 MHz band, changing only the number of frequency points calculated. That setting affected the RBW and time-record length, as you can see here.

Two spectrum measurements of the 2.4 GHz ISM band, made with the 89600 VSA software. The upper trace is the default 800-point result, while the lower trace represents 102,400 points. This represents a 128x longer time record, long enough to include a Bluetooth hop in addition to the wider WLAN burst.

The 102,400-point measurement has several advantages for a measurement such as this. First, it truly is a gap-free measurement: For the duration of the longer time record, it is a real-time measurement. Next, it contains more information and is much more likely to catch signals with a low duty cycle. It has a narrower RBW, making it easier to separate signals in the band, and revealing more of the structure of each signal. When viewed in the time domain, it can show much more of the pulse and burst signal behaviors in the band.
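The relationships among points, time-record length, and RBW in this example can be sketched as shown below. The oversampling and window factors are assumptions, so treat the absolute RBW values as approximate:

```python
# Relating FFT size to time-record length and RBW at a fixed span.
span_hz = 100e6                 # the 100 MHz ISM-band measurement above
fs = 1.28 * span_hz             # assumed oversampled I/Q rate
window_factor = 3.8             # assumed ENBW factor for a flattop-like window

for n_points in (800, 102_400):
    n_fft = n_points * 1.28     # time samples behind the displayed points
    t_record = n_fft / fs       # gap-free (real-time) duration of one record
    rbw_hz = window_factor / t_record
    print(f"{n_points:>7} points: record {t_record * 1e6:7.0f} us, "
          f"RBW ~ {rbw_hz / 1e3:6.1f} kHz")
# 800 points -> 8 us record; 102,400 points -> 1,024 us (128x longer)
```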

Another advantage of the larger/longer 100K-point measurement is not as obvious. The total calculation and display time does not increase nearly as rapidly as the number of points, making the larger FFT more efficient and increasing the POI. In my specific example, the overall compute and display speed is almost 20 times faster per point, with a corresponding increase in POI. It’s that much more likely that elusive signals will be found—or noticed—even without an RTSA.

For the RF engineer, however, this flood of results can be hard to use effectively. It’s difficult to relate the many successive traces to signal behavior or band activity as they fly by. The key to a solution is to add another dimension to the display, typically representing when or how often amplitude and frequency values occurred. Here are two displays of a 40 MHz portion of that ISM band.

Many measurement results can be combined in a single trace to help understand the behavior of a signal or the activity in a frequency band. The top trace shows how often specific amplitude and frequency values occurred over many measurements. The bottom trace uses color to show how recently the values occurred, producing a persistence display.

These traces make it easier to intuitively interpret dynamic behavior over time and understand the frequency vs. recency of that behavior. Thus, the combination of large FFT size and cumulative color displays may provide the dramatic improvement in POI that you need to find a problem. For precise measurements of elusive signals and dynamic behavior, the 89600 VSA offers other features, including time capture/playback (another variation on real-time measurements over a finite period) and spectrograms created from captured signals.

As professional problem solvers, we can figure out when a finite-duration, gap-free measurement is sufficient and when the continuous capability of an RTSA is the turbo-charged tool we need. In either case, it’s all about harnessing the right amounts of processing power and display capability for the task at hand.

Originally posted May 3, 2016

 

Understanding how it limits dynamic range and what to do about it

 

A talented RF engineer and friend of mine is known for saying “Life is a microcosm of phase noise.” He’s an expert in designing low-noise oscillators and measuring phase noise, so I suppose this conceptual inversion is a natural way for him to look at life. He taught me a lot about phase noise and, although I never matched his near-mystical perspective on it, I have been led to wonder if noise has mass.

A distinctly non-mystical aspect of phase noise is its effect on optimizing distortion measurements, and I recently ran across an explanation worth sharing.

A critical element of engineering is developing an understanding of how phenomena interact in the real world, and making the best of them. For example, to analyze signal distortion with the best dynamic range you need to understand the relationships between second- and third-order dynamic range and noise in a spectrum analyzer. These curves illustrate how the phenomena relate:

The interaction of mixer level with noise and second- and third-order distortion determine the best dynamic range in a signal analyzer. The mixer level is set by changing the analyzer’s input attenuator.

From classics such as Application Note 150, you probably already know the drill here: In relative terms, analyzer noise floor and second-order distortion change 1 dB—albeit in opposite directions—for every 1 dB change in attenuation or mixer level, and third-order distortion increases 2 dB for each 1 dB increase in mixer level. Therefore, the best attenuation setting for distortion depends on how these phenomena interact, especially where the curves intersect.

The optimum attenuator setting does not precisely match the intersections, though it is very close. The actual dynamic range at that setting is also very close to optimum, though it is about 3 dB worse than the intersection minimum suggests, due to the addition of the noise and distortion.
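You can compute the intersections directly from the classic intercept-point relations. The specs in this sketch are invented, not those of any particular analyzer:

```python
# Optimum mixer level from the intersections of the dynamic-range
# curves (relations from Application Note 150; example specs invented).
toi_dbm = 20.0      # third-order intercept at the mixer
shi_dbm = 45.0      # second-harmonic (second-order) intercept
noise_dbm = -105.0  # noise floor in the chosen RBW

# Third order: distortion moves 2 dB per dB of mixer level, noise 1 dB.
ml3 = (2 * toi_dbm + noise_dbm) / 3
dr3 = (2 / 3) * (toi_dbm - noise_dbm)

# Second order: distortion moves 1 dB per dB of mixer level.
ml2 = (shi_dbm + noise_dbm) / 2
dr2 = (shi_dbm - noise_dbm) / 2

print(f"3rd-order optimum: mixer level {ml3:.1f} dBm, range {dr3:.1f} dB")
print(f"2nd-order optimum: mixer level {ml2:.1f} dBm, range {dr2:.1f} dB")
# As noted above, the realizable range is ~3 dB less than the
# intersection suggests, since noise and distortion add.
```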

That’s where your own knowledge and insight come in. The attenuation for the best second-order dynamic range is different from that for the best third-order dynamic range, and the choice depends on your signals and the frequency range you want to measure. Will analyzer-generated second-order or third-order distortion be the limiting factor?

Of course, you can shift the intersections to better locations if you reduce RBW to lower the analyzer noise floor, but that can make sweeps painfully slow.

Fortunately, because you’re the kind of clever engineer who reads this blog, you know about technologies such as noise power subtraction and fast sweep that reduce noise or increase sweep speed without the need to make other tradeoffs.

Another factor may need to be considered if measuring third-order products, one that is often overlooked: analyzer phase noise.

In this two-tone intermod example with a 10 kHz tone spacing, the analyzer’s phase noise at that same 10 kHz offset limits distortion measurement performance to -80 dBc. Without this phase noise the dynamic range would be about 88 dB.

I suppose it’s easiest to think of the analyzer’s phase noise as contributing to its noise floor in an amount corresponding to its phase noise at the tone offset you’re using. Narrower offsets will be more challenging and, as usual, better phase noise will yield better measurements.
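A rough model is to integrate the phase-noise density over the resolution bandwidth at the tone offset. The −110 dBc/Hz density and 1 kHz RBW below are assumptions chosen to reproduce the example above:

```python
import math

# Effective noise floor set by phase noise at the tone offset:
# L(f_offset) integrated over the RBW (both values assumed here).
pn_dbc_hz = -110.0   # phase noise at the 10 kHz tone spacing, dBc/Hz
rbw_hz = 1e3         # resolution bandwidth used for the measurement

floor_dbc = pn_dbc_hz + 10 * math.log10(rbw_hz)
print(f"Phase-noise-limited floor: {floor_dbc:.0f} dBc")   # -80 dBc
```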

That’s where clever engineering comes in again. Analyzer designers are always working to improve phase noise, and the latest approach is a major change to the architecture of the analyzer’s local oscillator (LO): the direct digital synthesizer LO. This technology is now available in two of Keysight’s high-performance signal analyzers and will improve a variety of measurements.

The focus of this post has been on two-tone measurements but, of course, many digitally modulated signals can be modeled as large numbers of closely spaced tones. Phase noise continues to matter, even if the equivalent distortion measurements are ACP/ACPR instead of IMD.

Once again, noise is intruding on our measurement plans—or maybe it’s just lurking nearby.

 

Perhaps this post only proves that my perceptions of phase noise still don’t reach into the mystical realm. Here’s hoping your adventures in phase noise will help you achieve second- and third-order insights.

Originally posted Apr 18, 2016

 

Get all the signal analyzer performance you pay for

 

After perusing a pair of new application briefs, I was impressed by the improvements in signal analyzers driven by the combination of evolving semiconductor technology and plain old competition. I’ll write a little about the history here, but will also highlight a few of the benefits and encourage you to take advantage of them as much as possible. They’ll be welcome help with the complex signals and stringent standards you deal with every day.

Intel’s Gordon Moore and David House are credited with a 1960s prediction that has come to be known as Moore’s law. With uncanny accuracy, it continues to forecast the accelerating performance of computers, and this means a lot to RF engineers, too. Dr. Moore explicitly considered the prospects for analog semiconductors in his 1965 paper, writing that “Integration will not change linear systems as radically as digital systems.” *

Note the “as radically” qualifier. Here’s my mental model for the relative change rates.

Comparing the rate of change in performance over time of the semiconductor-based elements of signal analyzers. Processors have improved the fastest, though analog circuit performance has improved dramatically as well.

In analyzer architecture and performance, it seems sensible to separate components and systems into those that are mainly digital, mainly analog, or depend on both for improved performance.

It’s no surprise that improvements in each of these areas reinforce each other. Indeed, the performance of today’s signal analyzers is possible only because of substantial, coordinated improvement throughout the block diagram.

Digital technology has worked its way through the signal analyzer processing chain, beginning with the analog signal representing the detected power in the IF. Instead of being fed to the Y-axis of the display to drive a storage CRT, the signal was digitized at a low rate to be sent to memory and then on to a screen.

The next step—probably the most consequential for RF engineers—was to sample the downconverted IF signal directly. With enough sampling speed and fidelity, complex I/Q (vector) sampling could represent the complete IF signal, opening the door to vector signal analysis.

Sampling technology has worked its way through the processing chain in signal analyzers. It’s now possible to make measurements at millimeter frequencies with direct (baseband) sampling, though limited performance and high cost mean that most RF/microwave measurements will continue to be made by IF sampling and processing.

This is where competition comes in, as the semiconductor triple-play produced an alternative to traditional RF spectrum analyzers: the vector signal analyzer (VSA). Keysight—then part of HP—introduced these analyzers as a way to handle the demands of the time-varying and digitally modulated signals that were critical to rapid wireless growth in the 1990s.

A dozen years later, competitive forces and incredibly fast processing produced RF real-time spectrum analyzers (RTSAs) that calculated the scalar spectrum as fast as IF samples came in. Even the most elusive signals had no place to hide.

VSAs and RTSAs were originally separate types of analyzers, but continuing progress in semiconductors has allowed both to become options for signal-analyzer platforms such as Keysight’s X-Series.

This takes us back to my opening admonition that you should get all you’ve paid for in signal analyzer performance and functionality. Fast processing and digital IF technologies improve effective RF performance through features such as fast sweep, fast ACPR measurements, and noise power subtraction. These capabilities may already be in your signal analyzers if they’re part of the X-Series. If these features are absent, you can add them through license key upgrades.

The upgrade situation is the same with frequency ranges, VSA, RTSA, and related features such as signal capture/playback and advanced triggering (e.g., frequency-mask and time-qualified). The compounding benefits of semiconductor advances yield enhanced performance and functionality to meet wireless challenges, and those two new application briefs may give you some useful suggestions.

 

* “Cramming more components onto integrated circuits,” Electronics, Volume 38, Number 8, April 19, 1965

The Power and Peril of Intuition

Originally posted Apr 2, 2016

 

This one really is rocket science

 

For engineers, intuition is uniquely powerful, but not infallible. It can come from physical analogies, mathematics, similar previous experiences, pattern recognition, the implications of fundamental natural laws, and many other sources.

I have a lot of respect for intuitive analysis and conclusions, but I’m especially interested in situations where it fails us, and especially in understanding why it failed. Because I like to avoid erroneous conclusions, I often find that understanding the specific error points to a better intuitive approach.

Previously, I wrote about one intuitive assumption and its flaws, and also about a case in which a simple intuitive approach was perfectly accurate. This time, I’d like to discuss another example related to power and energy, albeit one that has very little to do with RF measurements.

Let me set the stage with this diagram of an interplanetary spacecraft, making a close pass to a planet as a way to steal kinetic energy and use that boost to get where it’s going faster.

An interplanetary spacecraft is directed to pass near a planet, using the planet’s gravity field to change the spacecraft trajectory and accelerate it in the desired direction. Equivalent rocket burns at time intervals a and b produce different changes in the spacecraft’s speed and kinetic energy. (Mars photo courtesy of NASA)

One of the most powerful tools in the engineer’s kit is the law of conservation of energy. RF engineers may not use it as often as mechanical engineers or rocket scientists, but it’s an inherent part of many of our calculations. For reasons that escape me now, I was thinking about how we get spacecraft to other planets using a combination of rockets and gravity assist maneuvers, and I encountered a statement that initially played havoc with my concept of the conservation of energy.

Because most rocket engines burn with a constant thrust and have a fixed amount of fuel, I intuitively assumed that it wouldn’t matter when, in the gravity-assist sequence, the spacecraft burned its fuel. Thrust over time produces a fixed delta-V and that should be it… right?

Nobody has repealed the law of conservation of energy, but I was misapplying it. One clue is the simple equation for work or energy, which is force multiplied by distance. When a spacecraft is traveling faster—say finishing its descent into a planet’s gravity well before climbing back out—it will travel farther during the fixed-duration burn. Force multiplied by an increased distance produces an increase in kinetic energy and a higher spacecraft speed.*
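
A few lines of Python make the arithmetic concrete. The mass, speeds, and delta-V below are purely illustrative, not from any real mission:

# Kinetic energy gained from the same delta-V applied at two different speeds
m = 1000.0     # spacecraft mass in kg (illustrative)
dv = 100.0     # delta-V from the burn in m/s (illustrative)

def ke(v):
    """Kinetic energy in joules at speed v (m/s)."""
    return 0.5 * m * v**2

for v in (5000.0, 15000.0):    # burning at time a (slower) vs. time b (faster)
    gain = ke(v + dv) - ke(v)
    print(f"burn at {v:,.0f} m/s: kinetic energy gain = {gain/1e6:,.0f} MJ")

# Expanding the algebra: gain = m*v*dv + 0.5*m*dv**2, so the same burn
# yields more kinetic energy the faster the spacecraft is already moving.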

My intuition protested: “How could this be? The math is unassailable, but the consequences don’t make sense. Where did the extra energy come from?”

One answer that satisfied, at least partially, is that burning at time b rather than time a in the diagram above gives the planet the chance to accelerate the spacecraft’s fuel before it’s burned off. The spacecraft has more kinetic energy at the start of the burn than it would have otherwise.

Another answer is that the law of conservation of energy applies to systems, and I had defined the system too narrowly. The planet, its gravity field, and its own kinetic energy must all be considered.

Fortunately, intuition is as much about opportunity for extra insight as it is about the perils of misunderstanding. Lots of RF innovations have come directly from better, deeper intuitive approaches. In the wireless world, CDMA and this discussion of MIMO illustrate intuition-driven opportunities pretty well. Refining and validating your own intuition can’t help but make you a better engineer.

Originally posted Mar 18, 2016

 

“OK, Jamie, let’s go to the high-speed”

 

There are times when understanding an event or phenomenon—or simply finding a problem—demands a view using a different time scale. I’m a fan of the Mythbusters TV series, and I can’t count the number of times when the critical element in understanding the myth was a review of high-speed camera footage. I’m sure the priority was mostly on getting exciting images for good TV, but high-speed footage was often the factor that really explained what was going on.

Another common element of those Mythbusters experiments was their frequent one-shot nature, and high-speed cameras were critical for this as well. Single events were trapped or captured so that they could be examined over and over, from different angles and at different speeds.

Time capture, also called signal capture, came to the general RF analyzer world in the early 1990s with the introduction of vector signal analyzers (VSAs), whose block diagram was a natural fit for the capability. While it was primarily a matter of adding fast memory and a user interface for playback or post-processing, significant innovation went into implementing a practical magnitude trigger and achieving trigger-timing alignment.

The block diagram of a VSA or a signal analyzer with a digital IF section is a good foundation for time capture, and it required the expansion of just two blocks. Capture/playback is especially useful for the time-varying signals that VSAs were designed to handle.

Over the years, I’ve used time capture for many different measurements and think it’s really under-used as a tool for RF/microwave applications in wireless, aerospace/defense, and EMI. It’s an excellent way to leverage the knowledge of the RF engineer, and it’s easy to use: first select the desired frequency and span and then press the record button.

The insight-creating and problem-solving magic comes during playback or post-processing. Captures are gap-free, and playback speed in VSA software can be adjusted over a huge range. Just press the play button and explore wherever your insight leads. You can see the time, frequency, and modulation domains at the same time, with any number of different measurements and trace types. You can easily navigate large capture buffers with numeric and graphical controls, and even mark a specific section to replay or loop continuously so you can see everything that happened.

A simple capture/playback of a transmitter switching on shows a transient amplitude event in the bottom trace. The top two traces use variable persistence to show the signal spectrum and RF envelope as playback proceeds in small steps.

Today’s signals are highly dynamic, the RF spectrum is crowded, and design requirements are stringent. You often need to optimize and troubleshoot in all three domains—time, frequency, and modulation—at once. You have the skill and the knowledge, but you need a total view of the signal or system behavior. In my experience, there’s nothing to match the confidence and insight that follow from seeing everything that happened during a particular time and frequency interval.

I’ll write about some specific measurement examples and techniques in posts to come. In the meantime, feel free to try out time capture on your own signals. The 89600 VSA software includes a free trial mode that works with all of the Keysight X-Series signal analyzers and many other Keysight instruments, too. Just press that red record button and then press play. It’ll make you feel like an RF Mythbuster, too.

Originally posted Mar 4, 2016

 

Here we come to save the day!

 

“Space,” said author Douglas Adams, “is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to space.”*

Low Earth orbit (LEO) is a significant chunk of space, but its bigness wasn’t enough to save the working satellite Iridium 33 from a 2009 collision with Russia’s defunct Kosmos 2251 communications satellite. The impact at an altitude of about 500 miles was the first time two satellites had collided at hypervelocity, and the results were not pretty.

Yellow dots represent estimated debris from the 2009 collision of two LEO satellites. This graphic from Wikimedia Commons shows the debris location about 50 minutes after the collision. The structure of the debris field is an excellent example of conservation of momentum.

The danger of collisions such as this is highest in LEO, which spans altitudes of 100–1200 miles. The danger is a function of the volume of that space, the large number of objects there, and their widely varying orbits. The intersections of these orbital paths provide countless opportunities for destructive high-velocity collisions.

It’s estimated that the 2009 collision alone produced 1000 pieces of debris four inches or larger in size, and countless smaller fragments. Because objects as small as a pea can disable a satellite, and because larger ones can turn a satellite into another cloud of impactors, the danger to vital resources in LEO is clear.

This chain reaction hazard or “debris cascade” was detailed as far back as 1978 by NASA’s Donald Kessler in a paper that led to the scary term Kessler syndrome.

The concept is scary because there’s no simple way to avoid the problem. What’s worse, our existing tools aren’t fully up to the task of identifying objects and accurately predicting collisions. The 1961-vintage ground-based VHF radar system in use at the time could track only objects bigger than a large beach ball, and its accuracy was not sufficient to allow the Iridium satellite to move out of danger.

Cue the RF/microwave engineering cavalry: With their skill and the aid of signal analyzers, signal generators, network analyzers, and the rest of the gear we’re so fond of, they have created a new space fence. Operating in the S-band, this large-scale phased-array radar will have a wide field of view and the ability to track hundreds of thousands of objects as small as a marble with the accuracy required to predict collisions.

Alas, predicting collisions is most of what we can do to avoid a Kessler catastrophe. Though the company designing and building the fence mentions “cleaning up the stratosphere,” it’s Mother Nature and the very faint traces of atmosphere in LEO that will do most of the job. Depending on altitude, mass, and cross-section, the cleaning process can take decades or longer.

In the meantime, we’ll have to make the most of our new tools, avoid creating new debris, and perhaps de-orbit a few big potential offenders such as Envisat.

There may be another opportunity for the engineering cavalry to save the day. There are proposals for powerful lasers, aimed with unbelievable precision, to blast one side of orbiting debris, creating a pulse of vapor that would push objects toward atmospheric destruction and render them mostly harmless.* I’m looking forward to the RF/microwave designs for tracking and aiming that will make that possible.

 

* From the book The Hitchhiker’s Guide to the Galaxy

Originally posted Feb 19, 2016

 

Borrowing stuff from phones and tablets that we helped make possible

 

Gary Glitter asked it first, but people my age probably remember Joan Jett’s version: “Do you wanna touch?” In the last few years, test and measurement companies have decided the answer is yes.

In combination with wireless networks, touchscreens have proven themselves in the realm of smartphones and tablets. I’ll admit that I was initially skeptical about touch on instruments, however, based on my experience with laptops. For most regular laptop use, I couldn’t see the benefit of suspending my heavy arm in front of me to make imprecise selections of text or numbers, while my fingers partially blocked my view.

I guess it’s a matter of choosing the right application, and I think Apple should get a lot of credit for recognizing that, then creating and refining the touch UI. They’ve done this so thoroughly that it’s hard for me to see which gestures and other touch elements are inspired innovation and which are simply taking advantage of innate human wiring.

It seems clear that touchscreens for certain UI tasks are a win, so what about RF instruments?

The hardkey/softkey interface in analyzers has proven to be remarkably durable, but it’s about 35 years old. That’s ancient compared to everything happening behind the front panel. Sure, it was a great step up from analog switches, knobs, and buttons as instruments evolved to digital control of measurement parameters in the 1970s. It leveraged the display paradigm of terminals and other computer displays as analyzers gained computing power and signals—and corresponding measurements—became much more complex.

Instruments have continued to borrow UI schemes from the world of computers. Looking back about 15 years, I can still remember the first PC-based vector signal analyzer and the novelty of changing center frequency, span and display scaling by clicking and dragging a window with the mouse. On the other hand, clickable hot spots and adjusting parameters with the scroll wheel felt completely natural from the start. Just point to what you want to change and tweak it.

Touchscreens have been used on some RF signal analyzers in recent years, though the early ones were often the less-sensitive—and very slightly blurry—resistive types, and all the ones I’m familiar with were single-touch. Borrowing liberally from tablets and phones, Keysight has recently introduced a new line of X-Series signal analyzers with multi-touch screens whose diagonal is nearly 70% larger than that of their touch-less predecessors, giving nearly three times the screen area.

The new X-Series signal analyzers include much larger displays and a multi-touch UI. On-screen buttons and hot spots are sized for fingertips, providing direct access to measurement settings and bringing measurements within two touches. Two-finger gestures such as pinching or spreading can change display scaling or parameters such as frequency center and span.

Touchscreens are cumbersome unless the rest of the UI does its part with control features compatible with the size of a finger. In the X-Series the larger display was the first step. Some tasks are still easier to accomplish with hardkeys and a knob, and perhaps that will never change.

User menus have previously been available in signal analyzers, though they weren’t used very often—and perhaps it was because they weren’t easy to set up. Today, the ease of creating and accessing them in the X-Series analyzers with touch may change that. It’s a feature worth exploring if you’re working with the same group of settings over and over again.

I’ve written a lot before about making better measurements. A multi-touch UI, if done right, should be a way to make measurements better instead. Of course, whether they’re better for you is a matter of preference and the needs of your application. If you get the chance, give these new analyzers a try.

Vexing Connections

Posted by benz Oct 14, 2016

Originally posted Feb 1, 2016

 

RF sins that may not be your fault

 

Many years ago, I knew an engineer who used to chuckle when he used the term “compromising emanations” to describe unintentional RF output. Today, most of us use the less colorful term “interference” to refer to signals that appear where they should not, or are more powerful than regulations allow. We’re likely more concerned about coexisting in the RF spectrum or violating a standard than we are about revealing something.

Wireless systems seem infinitely capable of generating unintended signals, and one of the more interesting is the rusty bolt effect.

A rusty bolt can form a metal-to-metal oxide connection that is rectifying rather than simply resistive. (Image from Wikimedia Commons)

I recently ran across a discussion of this when looking into the causes and consequences of imperfect connections in RF systems. Though I’ve previously written about connections of various kinds, including coaxial connectors, cables, adapters and waveguide, I’ve focused more on geometry and impedance than metal-to-metal contact.

Dealing with the wrong impedance is one thing, but for some time I’ve wanted to better understand why so many bad electrical contacts tend to be rectifying rather than Ohmic. Not surprisingly, it involves semiconductors. Accidental semiconductors, but semiconductors nonetheless.

Some oxides are conductive and some are insulating, but a number of common metal oxides are semiconductors. Oxidation or other corrosion—say from skin oils—makes it easy to produce a metal-to-semiconductor contact and a resulting nonlinearity.

Voltage/current curves for Ohmic and rectifying contacts. The nonlinear curve of a rectifying contact is essentially that of a diode.

Nonlinear connections are problematic in wireless, primarily because of the RF distortion products they produce. In the simple sinewave case, they create energy at harmonic frequencies, and when multiple signals are present they produce intermodulation distortion. The intermodulation distortion is particularly troublesome because it can appear in-band or in nearby bands, and at many frequencies at once.
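
A quick simulation shows the effect. This Python sketch (arbitrary tone frequencies and polynomial coefficients, not a model of any particular junction) passes two tones through a memoryless nonlinearity and picks out the resulting products:

import numpy as np

fs = 1e6                        # sample rate, Hz
t = np.arange(0, 0.01, 1/fs)    # 10 ms of signal
f1, f2 = 100e3, 110e3           # two illustrative tones, Hz

x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
y = x + 0.1*x**2 + 0.05*x**3    # a weakly rectifying "contact"

spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), 1/fs)

# Energy appears at harmonics and sums (2f1, f1+f2, ...) and at odd-order
# intermod products such as 2f1-f2 and 2f2-f1, which land right beside
# the original tones
for f in (f2 - f1, 2*f1 - f2, 2*f2 - f1, 2*f1, f1 + f2):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f/1e3:6.0f} kHz: relative level {spectrum[idx]:.1f}")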

Modern multi-channel systems, including base stations and carrier-aggregation schemes, create many simultaneous signals to “exercise” these nonlinearities and create distortion products. The distortion may be described as passive intermodulation (PIM) because it’s generated without powered elements. The rusty bolt example involves high currents through imperfect antenna or metal structure connections, though wireless systems offer many other opportunities for nonlinear mischief.

One of the most maddening characteristics of this phenomenon is its elusive nature. Outdoor antennas are subject to strain from wind and temperature changes as well as weathering from salt air or acid rain. Nonlinearities can appear and disappear, seemingly at random. Even indoor wireless transmitters have to contend with mechanical stresses, changing humidity and temperature, and contamination of all kinds.

In many cases, astute mechanical design and mitigation of oxidation or contamination will help eliminate nonlinear connections. Because Ohmic metal-to-semiconductor connections are essential to their products, semiconductor manufacturers are a good source of information and techniques.

At some point, of course, you need to make spectrum measurements to find intermodulation problems or verify that emissions are within limits. Signal analyzers do the job well, and many measurement applications are available for popular standards to automate setup, perform measurements, and provide pass/fail results. They’re the most efficient way to ensure you avoid sins that you’d rather not be blamed for.

All of our RF Sins Exposed

Posted by benz Oct 14, 2016

Originally posted Jan 15, 2016

 

Trespassing is harder to miss in a densely occupied country

 

The 802.11ah wireless standard mentioned in my last post is promising, but it highlights a challenge that’s facing many engineers in the wireless space: out-of-band or out-of-channel emissions.

In an article from Electronic Products via Digi-Key’s article library, Jack Shandle writes: “Two significant design issues in the 915-MHz band are: 1) The third, fourth, and fifth harmonics all fall in restricted bands, which imposes some design constraints on output filtering. 2) Although it is unlicensed in North America, Australia and South Korea, the band is more strictly regulated in other parts of the world.”
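
It’s easy to tabulate where those harmonics land. A trivial sketch (frequency arithmetic only; consult the applicable regulations for the actual restricted-band edges):

# Harmonics of the 902-928 MHz ISM band
band_lo, band_hi = 902e6, 928e6
for n in range(3, 6):    # third, fourth, and fifth harmonics
    print(f"harmonic {n}: {n*band_lo/1e9:.3f} to {n*band_hi/1e9:.3f} GHz")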

Of course, the higher allowed transmit power and improved propagation of the 915 MHz band—compared to the 2.4 GHz band—add to the potential for interference. But these days, your harmonic and spurious emissions don’t have to fall in restricted bands to be a concern. Compared to previous eras, the modern wireless spectrum is so crowded that excess emissions are far more likely to cause someone a problem and be noticed. Wireless standards are correspondingly stringent.

For RF engineers, the interference challenges exist in both the frequency and time domains, and this can make problems harder to find and diagnose. The time-domain concerns are not new, affecting any TDMA scheme—including the one used in my 20-plus-year-old marine VHF handheld. Using Keysight vector signal analyzers, I long ago discovered that the little radio walked all over a dozen channels in the first 250 ms after each press of the transmit key. A newer handheld was actually worse in terms of spectrum behavior, but settled down more quickly.

Back then, that behavior was neither noticed nor troublesome, and I don’t suppose anyone would complain even today. However, that quaint FM radio is nothing like the vast number of sophisticated wireless devices that crowd the bands today. Even a single smartphone uses multiple radios and multiple bands, and interference is something that must be discovered and fixed at the earliest stages to reduce cost and risk.

Given the dynamic nature of the signals and their interactions, gaining confidence that you’ve found all the undesirable signals is tough. Using the processing power of today’s signal analyzers is a good first step.

This composite real-time spectrum analysis (RTSA) display shows both calculated density and absolute spectrum peaks. Real-time spans of up to 500 MHz are available, letting you know you’ve seen everything that happened over that span and in that measurement interval.

Though RTSA makes the task easier and the results more certain, RF engineers have been finding small and elusive signals for many years. Peak-hold functions and peak detectors have been available in spectrum analyzers since the early days and they’re effective, if sometimes time-consuming.

Minimizing noise in the measurement is essential for finding small signals, but the traditional approach of reducing RBW can make sweep times unreasonably long. Fast-sweep features and noise subtraction are available in some signal analyzers, leveraging signal processing to expand the speed/performance envelope. Keysight’s noise floor extension is particularly effective with noise-like signals such as digital modulation.
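
The sweep-time pressure is easy to quantify with the classic rule of thumb for swept analysis, where sweep time scales with span divided by the square of RBW. Here’s a rough Python sketch; the proportionality constant k varies with filter shape and implementation, so treat the numbers as illustrative only:

# Classic swept-spectrum approximation: sweep_time ~ k * span / RBW^2
k = 2.5          # a common textbook value for Gaussian-like filters
span = 1e9       # a 1 GHz span

for rbw in (1e6, 100e3, 10e3, 1e3):
    sweep_time = k * span / rbw**2
    print(f"RBW {rbw/1e3:7.0f} kHz -> sweep time ~ {sweep_time:.3g} s")

# Each 10x reduction in RBW stretches the sweep 100x, which is why
# fast-sweep processing and noise subtraction are so valuable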

Of course, finding harmonic and spurious emissions is only half the battle. A frequency reading may be all you need to deduce their origin, but in many cases you need more information to decisively place blame.

In addition to frequency, the most useful things to know about undesirable signals are their spectral shape and timing. That means isolating the suspects and relating them to the timing of other signals. One traditional approach is a zero-span measurement, centered on the signal of interest. It’s a narrow view of the problem but it may be enough.

Far more powerful tools are available using the memory and processing power of today’s signal analyzers. Frequency-mask triggering is derived from RTSA and can isolate the signal for display or generate a trigger for complete capture and playback of signals. Signal recording is usually done with the 89600 VSA software and can include capture of events that occurred before the trigger.

For even more complex time relationships, the VSA software borrows from oscilloscopes to provide a time-qualified trigger for RF engineers. Command of both the time and frequency domains is the most effective path to interference solutions.

If you don’t have these advanced tools, you can add them to existing signal analyzers with a minimum of fuss. With your RF intuition and good tools, interference has no place to hide.

Originally posted Dec 28, 2015

 

New activity on familiar terrain

High-profile developments in wireless networking usually involve ever-wider bandwidths and ever-higher operating frequencies. These support the insatiable need for increased wireless capacity, and they parallel mobile developments such as 5G cellular. And if the prognosticators are correct, more users generating more traffic will compete with increasing traffic from non-human users such as the Internet of things (IoT) and machine-to-machine (M2M) communications.

The increasing need for data capacity is undeniable, but the focus on throughput seems a bit narrow to me. I’m probably a typical wireless user—if there is such a thing—and find myself more often dissatisfied with data link availability or reliability than with capacity.

For example, Wi-Fi in my family room is mostly useless when the microwave in the kitchen is on. Sure, I could switch to a 5.8 GHz wireless router, but those signals don’t travel as far, and I would probably relocate the access point if I made the change. Another example: The 1.9 GHz DECT cordless phone in the family room will cover the front yard and the mailbox, but the one in my downstairs office won’t. A phone doesn’t demand much data throughput for voice, but it must provide a reliable connection. Yes, I can carry my mobile phone and forward to it, but I sometimes appreciate the lack of a tether.

I often think about the digital cordless phone I had a dozen years ago, operating on the 900 MHz ISM band with a simple 12-bit PN code for spread spectrum. Its range was hundreds of yards with obstructions and over half a mile in the open.

I’ve been reading a little about the proposed new 802.11ah wireless networking standard in that same 900 MHz band, and thinking about the implications. Two important technical factors are the limited width of the band—902 to 928 MHz—and improved signal propagation compared to the 2.4 and 5.8 GHz bands. In the technical press you’ll frequently see a diagram similar to this one:

Lower frequencies generally propagate better, and the difference can be significant in terms of network coverage in a house or office space. Of course, practical range depends on many other factors as well.

The diagram is certainly oversimplified, in particular neglecting any band-crowding, interference or obstruction issues. Nonetheless, the potential range benefits are obvious. Some claim that real-world distances of more than a kilometer are feasible, and the 900 MHz band may allow higher effective transmit power than 2.4 or 5.8 GHz.
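
The free-space portion of that advantage is easy to put numbers on. A small Python sketch (free-space loss only, ignoring walls, antennas, and band-specific power limits):

import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3e8)

d = 50.0    # an illustrative indoor-ish distance, meters
for f in (915e6, 2.4e9, 5.8e9):
    print(f"{f/1e9:5.3f} GHz: path loss at {d:.0f} m = {fspl_db(f, d):.1f} dB")

# 915 MHz sees roughly 8.4 dB less free-space loss than 2.4 GHz and
# about 16 dB less than 5.8 GHz, before any obstruction effects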

Throughput, however, is modest compared to other WLAN standards. Data can be sent using a down-scaled version of the 802.11a/g physical layer for data rates ranging from 100 kb/s to more than 25 Mb/s. Significantly, the standard supports power-saving techniques including predefined active/quiescent periods.

As usual, the standard has many elements, supporting a variety of potential uses, and a few are likely to dominate. Those mentioned most often relate to IoT and M2M. Compared to existing Wi-Fi, 802.11ah should be better optimized for the required combination of data rate, range and power consumption.

Although that presumption seems reasonable, recent history tells us that attractive combinations of spectrum and PHY layer will be bent to unanticipated purposes. I think there are many situations in which users would be happy to trade transfer speed for a high-reliability link with longer range.

From an RF engineering standpoint, improved propagation is a double-edged sword. Current WLAN range relates well to the scale of homes and small businesses, naturally providing a degree of geographic multiplexing and frequency reuse due to lower interference potential. The combination of propagation and transmit power in the narrower 900 MHz band will change tradeoffs and challenge radio designers.

The 802.11ah standard is expected to be ratified sometime in 2016, and RF tools are already available. Keysight’s WLAN measurement applications for RF signal generators and signal analyzers already support the standard, and vector signal analysis is supported with pre-stored settings in the custom OFDM demodulation of the 89600 VSA software Option BHF.

With established alternatives ranging from ZigBee to 802.11ac, some are skeptical about the success of this effort in the relatively neglected 900 MHz ISM band. It’s a fool’s errand to try to predict the future, but it seems to me this band has too much going for it to remain under-occupied.

Originally posted Dec 3, 2015

 

An engineering field where the products you’re designing are constantly trying to kill you

The holiday season is here and it’s been a while since I have wandered off topic. I hope you’ll indulge me a little, and trust my promise that this post will bring some insight about the business we’re in, at least by happy contrast.

It’s a good time of the year to reflect on things we’re thankful for, and in this post I’d like to introduce you to a book about a fascinating field of R&D: developing rocket fuel. Compared to work on rocket fuel, our focus on RF technology has at least two major advantages: career longevity and personal safety.

Let’s talk first about safety and the excitement engendered by the lack of it. Robert Goddard is generally credited with the first development and successful launch of a liquid-fueled rocket. Here he is with that rocket, just before its launch in March of 1926.

Robert Goddard stands next to his first liquid-fueled rocket before the successful test flight of March 16, 1926. The combustion chamber is at the top and the fuel tanks are below. The pyramidal structure is the fixed launch frame. (photo by Esther Goddard, from the Great Images in NASA collection)

In my mind, the popular image of Goddard has been primarily that of an experimenter, skewed by the footage we’ve all seen of his launches. In reality, he was also a remarkable theoretician, very early on deriving the fundamental parameters and requirements of atmospheric, lunar, and interplanetary flight.

He also showed good sense in choosing gasoline as his primary rocket fuel, generally with liquid oxygen as the oxidizer. This may seem like a dangerous combination, but it was tame compared to what came just a few years later.

That brings me to the fascinating book about the development of liquid rocket fuels. The author is John D. Clark, a scientist, chemist, science/science-fiction writer, and developer of fuels much more exotic than those Goddard used. The introduction to the book was written by author Isaac Asimov and it describes the danger of these fuels very well:

There are, after all, some chemicals that explode shatteringly, some that flame ravenously, some that corrode hellishly, some that poison sneakily, and some that stink stenchily. As far as I know, though, only liquid rocket fuels have all these delightful properties combined into one delectable whole.

Delectable indeed! And if they don’t get you right away, they’re patient: it’s no surprise that many of these fuels are highly carcinogenic.

The book is titled Ignition! An Informal History of Liquid Rocket Propellants. It was published in 1972 and is long out of print, but a scan is available at the link. Fittingly, the book opens with two pictures of a rocket engine test cell, before and after an event called a “hard start.” Perhaps rocket engineers think the term “massive explosion” is too prejudicial.

For many spacecraft and missiles, the most practical fuels are hypergolic, those that burn instantly on contact, requiring no separate ignition source. Clark describes their benefits and extravagant hazards in the chapter “The Hunting of the Hypergol.” The suit on the technician in this picture and the cautions printed on the tank give some idea of the potential for excitement with these chemicals.

Hydrazine, one part of a hypergolic rocket-fuel pair, is loaded on the Messenger spacecraft. The warnings on the tank note that the fuel is corrosive, flammable, and poisonous. The protective gear on the technician gives some idea of the dangers of this fuel. (NASA image via Wikimedia commons)

Clark is a skilled writer with a delightful sense of humor, and the book is a great read for holiday downtime at home or on the road. However, it is also a little sad to hear that most of the development adventure in this area came to an end many years ago. Clark writes:

This is, in many ways, an auspicious moment for such a book. Liquid propellant research, active during the late ’40s, the ’50s, and the first half of the ’60s, has tapered off to a trickle, and the time seems ripe for a summing up, while the people who did the work are still around to answer questions.

So in addition to being thankful that we’re doing research on things that aren’t constantly trying to kill us, we can also be grateful for a degree of career longevity. RF/microwave engineering has been a highly active field for decades and promises to continue for decades more. No moon suits or concrete bunkers required.

 


Originally posted Nov 13, 2015

 

A practical way to revisit decisions about bandwidth and frequency range

No matter how carefully you consider your test equipment choices, your needs will sometimes evolve beyond the capabilities you’ve purchased. You may face a change in standards or technology, or the need to improve manufacturing productivity or margins. Business decisions may take you in a different direction, with some opportunities evaporating and new ones cropping up.

The one thing you can predict with confidence is that your technological future won’t turn out quite the way you expect. Since test equipment is probably a significant part of your capital-asset base, and your crystal ball will always have hazy inclusions, you have to find the best ways to adapt after the fact.

Analyzer software and firmware can certainly help. New and updated measurement applications are often available, tracking standards as they evolve. Firmware and operating-system updates can be performed as needed, though they’re likely more difficult and sometimes more disruptive than just installing an app.

In some cases, however, the new demands may be more fundamental. The most common examples are increased measurement bandwidth and extended frequency range, both recurring themes in wireless applications.

Of course, the obvious solution is a new analyzer. You get a chance to polish your crystal ball, make new choices, and hope for the best. Unfortunately, there is not always capital budget for new equipment, and the purchase-and-redeployment process burns time and energy better spent on engineering.

If the analyzer is part of a modular system, it may be practical to change individual modules to get the capability you need, without the expense of complete replacement. Of course, there are still details like capital budget, management of asset numbers and instrument re-calibration.

One approach to upgrading instrument fundamentals is sometimes called a “forklift upgrade,” a term borrowed from major IT upgrades requiring actual forklifts. In the case of test equipment, it’s a tongue-in-cheek reference to the process of lifting up the instrument serial-number plate and sliding a new instrument underneath. For instruments not designed to be upgradable, this term applies pretty well.

Fortunately, the forklift upgrade reflects a prejudice that is out of date for analyzers such as Keysight’s X-Series. Almost any available option can be retrofitted after purchase, even for analyzers purchased years ago.

Fundamental characteristics such as analysis bandwidth, frequency range, and real-time spectrum analysis (RTSA) can be added to Keysight X-Series signal analyzers at any time after purchase. This example shows a 160 MHz real-time display and a frequency-mask trigger (inset) on an instrument upgraded from the base 3.6 GHz frequency range.

Upgradability is part of the analyzer design, implemented in several ways. The internal architecture is highly modular, including the RF/microwave front end, IF digitizing, and DSP. The main CPU, disk/SSD memory, and its external digital interfaces are directly upgradable by the user.

For RF engineers, this is the best substitute for time travel. Hardware upgrades include installation, calibration, and a new warranty, with performance specifications identical to those of a new instrument.

There are organizational and process benefits as well, avoiding the need for new instrument purchase approvals and changes in tracking for asset and serial numbers.

 

If the decisions of the past have left you in a box, check out the new application brief on analyzer upgrades for a way out. If the box you’re in looks more like a signal generator, Keysight has solutions to that problem too.

Originally posted Nov 2, 2015

 

How to discard information and improve your measurements

If your primary focus is RF/microwave analysis, you may not be familiar with “windows” or “window functions,” and they may not be a factor in your explicit measurement choices. However, it’s worth knowing a little about them for at least two reasons: you may already be using them, and they can help you make better measurements in demanding situations.

Windows have been an essential feature of the fast Fourier transform (FFT) architecture of low-frequency analyzers for many years. FFT processing has also been added to many high-frequency analyzers as a way to implement narrow resolution bandwidths (RBWs) while optimizing measurement speed. Finally, FFT processing is central to vector signal analyzers (VSAs) and OFDM demodulation in particular.

FFTs calculate a spectrum from a block of samples called a time record, and the FFT algorithm assumes that the time record repeats endlessly. That assumption is valid for signals that are periodic over the selected time record, but it causes discontinuity errors for signals that are not. In the FFT spectrum results, the errors create a spreading of the spectral energy called leakage.

The solution is to somehow force the signal to be periodic within the time record, and the most common approach is to multiply the time record by a weighting function that reduces amplitude to zero at both ends of the time record, as shown below.

In this example of a non-repeating sine wave, the FFT algorithm’s assumption that signals repeat for each time record produces the erroneous signal in the second waveform. The window or weighting function removes the discontinuities before the FFT calculates the spectrum.

As an engineer, you’d expect tradeoffs from this weighting process, which effectively discards some samples and down-weights others. That is indeed the case and, among other things, the windowing widens the RBW. It also creates sidelobes of varying amplitude and frequency spacing, depending on the window shape.
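
You can see both the leakage and the tradeoff in a few lines of Python. In this sketch the tone frequency is deliberately chosen so the signal is not periodic in the time record:

import numpy as np

n = 1024
t = np.arange(n)
x = np.sin(2*np.pi * 10.5 * t / n)    # 10.5 cycles: not periodic in the record

for name, w in (("uniform", np.ones(n)), ("Hann", np.hanning(n))):
    spec = 20*np.log10(np.abs(np.fft.rfft(x * w)) + 1e-12)
    spec -= spec.max()    # normalize the peak to 0 dB
    print(f"{name:8s} window: leakage at bin 100 = {spec[100]:.1f} dBc")

# The Hann window suppresses far-out leakage by tens of dB, at the cost
# of a main lobe (effective RBW) roughly 1.5x wider than uniform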

The good news is that window shapes can be chosen to optimize the tradeoffs for specific measurements, such as prioritizing frequency resolution, amplitude accuracy or sidelobe level.

I’ll discuss examples of those tradeoffs in a future post, but first I’d like to show what’s possible in the best-case, where the signal is periodic in the time record and the uniform window—equivalent to no windowing—can be used.

Two gated spectrum measurements are made of an off-air capture of an 802.11n signal in the 5 GHz band. The gate is set to match a portion of the training sequence, which is periodic or self-windowing. The uniform window of the 89600 VSA software can be used in this case, providing enough frequency resolution in the bottom trace to measure individual OFDM subcarriers.

In this measurement, gated sweep in the 89600 VSA software has been configured to align with a portion of the training sequence, which is self-windowing. The selected uniform window is actually no window at all, and no signal samples are discarded or down-weighted. In this special case no tradeoffs are needed between accuracy, frequency resolution and dynamic range.

As an aside, this training sequence includes every second subcarrier, transmitted at equal amplitude. The peaks describe the ragged frequency response that receivers have to deal with in the real world.

 

Vector signal analyzers use FFTs for spectrum analysis all the time, but modern signal analyzers such as Keysight’s X-Series automatically choose between FFTs and swept digital filters as needed. In a future post or two I’ll discuss how to optimize FFT analysis and select windows to extract maximum information and improve measurement speed in swept measurements.

Originally posted Oct 12, 2015

 

I wonder if these MIT researchers can quantify their directivity?

I’m fascinated by correspondences between phenomena in RF engineering and other fields, and it isn’t just a matter of curiosity. These correspondences are also enlightening, and sometimes guide genuine technological advances.

An interesting cross-domain example is the recent MIT announcement of a technique for removing unwanted reflections from photos taken through windows. We’ve all experienced this problem, feeling the surprised disappointment when the photo includes obvious reflections we didn’t notice when composing the picture. At least with digital cameras, we can usually spot the problem while there’s still a chance to take another photo and fix or reduce it.

That surprised disappointment is actually a pointer to the kind of solution the MIT folks have produced. If you haven’t seen it already, take a look at the before/after in the MIT press release.

The uncorrected image is likely to be familiar, and the strength of the reflections is often much greater in the resulting image than it was perceived by the photographer. The perceptual shift is likely caused by our visual system’s ability to automatically do a version of what the MIT technique attempts to do: separate the reflection from the desired image and subtract or ignore it.

The MIT technique doesn’t identify the reflection directly, but it can recognize pairs of them. That’s useful because the unwanted reflections often come from both the front and rear surfaces of the intervening glass, with an apparent offset.

Unwanted reflections from photography through a window—such as the photographer’s hand or body—usually appear in offset pairs, originating from both the front and rear surfaces of the glass. Blame me, not MIT, for any errors or oversimplification in this diagram.

When reading about the technique, my first thought was the similarity to network analysis and its powerful tools for separating and quantifying incident and reflected energy. The analogy breaks down when considering the separation methods, however. The gang at MIT look for the reflection pairs, perhaps with something similar to two-dimensional autocorrelation. RF/microwave engineers usually make use of a directional coupler or bridge.

Directional couplers separate incident and reflected energy, and a critical performance parameter is directivity, or how well the coupler can separate the energy moving in each direction.

Of course, I now find myself wondering about the effective directivity of the MIT separation-and-removal scheme, and if they think of it in those terms. Probably not, though that would be a ready-made way to quantify how well they’re doing and it might help in optimizing the technique.

Recently, I’ve written about improving measurement accuracy. However, in thinking about these tools and techniques, I realized that separating signals to measure the right one is fundamental to making better RF measurements of all kinds. Indeed, the separation process is often more difficult than the core measurement itself.

Spectrum analyzers naturally use their RBW filters to separate signals into their different frequency elements, but it may also be critical to separate them by their behavior or their time duration and timing, or to separate them from the analyzer’s own noise.

 

I could go on and on, and branch off into optical separation techniques such as steganography. Now that I’m looking for such methods, I see them everywhere and resolve to consider signal separation explicitly as an essential step to accurate measurements.

Originally posted Sept 25, 2015

 

Is noise driving me mad, or just driving my interest in statistics?

I suspect that many of you, like me, have no special interest in statistics per se. However, we’ve also learned how powerful statistics can be in revealing the characteristics of our world when noise obscures them. This is especially true with our circuits and signals.

Two commonly used statistical tools are averaging and analysis of sample distributions. A while back I finally got around to looking at a distribution question that had been bugging me, and now it’s time to understand one aspect of averaging a little better.

Averaging is probably the most common statistical tool in our world, and we are often using one or more forms at once, even if we’re not explicitly aware of doing so.

Averaging is used a lot because it’s powerful and easy to implement. Even the early spectrum analyzers had analog video bandwidth filters, typically averaging the log-scaled video signal. These days many signal analyzers perform averaging using fast DSP. The speed is a real benefit because noise may be noisier than we expect and we need all the variance reduction we can get.

Years ago, I learned a rule of thumb for averaging that was useful, even though it was wrong: The variance of a measurement decreases inversely with the square root of the number of independent samples averaged. It’s a way to quantify the amount of averaging required to achieve the measurement consistency you need.

It’s a handy guide, but I remembered it incorrectly. It is the standard deviation that goes down with the square root of the number of independent samples averaged; variance is the square of standard deviation.

An essential condition is sometimes overlooked in applying this rule: The samples must be independent, not correlated with each other by processes such as filtering or smoothing. For example, using a narrow video bandwidth (VBW) will constrain the effective sample rate for averaging by the IF detector, no matter how fast the samples are produced. The same goes for the RBW filter, where the averaging effect of the VBW can be ignored if it is at least three times wider than the RBW (another rule of thumb).
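
A quick simulation makes the penalty visible. In this Python sketch a moving average stands in for the VBW filter as the correlating process; both cases average 50 samples with the same per-sample standard deviation:

import numpy as np

rng = np.random.default_rng(0)
n_avg, n_trials = 50, 4000
kernel = np.ones(10) / 10    # 10-point smoother: a stand-in for VBW filtering

means_ind, means_corr = [], []
for _ in range(n_trials):
    # Correlated case: 50 consecutive samples of smoothed noise
    raw = rng.normal(size=n_avg + kernel.size - 1)
    means_corr.append(np.convolve(raw, kernel, mode="valid").mean())
    # Independent case: 50 samples with the same per-sample std (1/sqrt(10))
    means_ind.append(rng.normal(scale=kernel.size**-0.5, size=n_avg).mean())

print(f"independent: std of 50-sample average = {np.std(means_ind):.4f}")
print(f"correlated:  std of 50-sample average = {np.std(means_corr):.4f}")
# Expect roughly 0.045 for the independent case but about 0.14 for the
# correlated one: the smoothing leaves only ~5 effective samples, not 50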

What does the effect of correlated samples look like in real spectrum measurements? Performing a fixed number of averages with independent and correlated samples makes it easy to see.

A 50-trace average is performed on spectrum measurements in a vector signal analyzer. In the top trace the samples are independent or uncorrelated. In the bottom trace the samples are correlated by overlap processing of the data, resulting in a smaller averaging effect.

For convenience in generating this example I used the 89600 VSA software and trace averaging with overlap processing of recorded data. In overlap processing, successive spectrum calculations include a mix of new and previously processed samples. This is similar in concept to a situation in which an averaging display detector samples much faster than the VBW filter. The average is valid, but variance and standard deviation do not decrease as much as the number of samples in the average would suggest.

You probably won’t face this too often, though if you find yourself frustrated with averaging that doesn’t smooth your data as much as expected, you might question the independent-samples condition. Fortunately, measurement applications are written with this in mind, and some allow you to increase average counts if you need even more stable results.

If issues such as this are important to you, or if you frequently contend with noise and noise-like signals, I suggest the current version of the classic application note Spectrum and Signal Analyzer Measurements and Noise. The explanations and measurement techniques in the note are some of the most practical and effective you’ll find anywhere.

Finally, it’s time for your daily new three-letter acronym: NID. It stands for normally and independently distributed data. It applies here and it’s a familiar concept in statistics, but apparently an uncommon term for RF engineers.

Originally posted Sept 10, 2015

 

K connectors, microwave measurements and careful plumbing

Over the years, I’ve heard several engineers speculate on alternative lives as plumbers. It’s a career that requires some technical knowledge and pays well, but can be shut off entirely—mentally and physically—at the end of the day and on weekends. One of the engineers lived next door to a plumber, so his wistful musings were probably well informed.

As a homeowner, I’ve done my share of amateur plumbing, and there is certainly satisfaction in a job well done—or at least one that doesn’t leak too much.

Of course, the plumbing that pays my bills is a rather different kind, and requires an even greater degree of care and precision. For example, the specifications for microwave and millimeter connector gauges show resolution better than 1/10,000 inch, or about 0.0025 mm.

I’ve been looking into high-frequency connectors to make sense of something a friend said to me while discussing different connector types. When the subject of the 2.92 mm or “K” connector came up he said, “I have two of those and they’re both broken.”

I didn’t ask for details, but had heard elsewhere that 2.92s might not be as robust as their 2.4 or 3.5 mm cousins. One online source mentioned a thinner outer shell for the conductor, while another mentioned potential damage to the center conductor.

On the other hand, the K connector offers some distinct advantages in microwave and millimeter connections. It covers frequencies to 40 GHz or higher and is mode-free to about 45 GHz. It also intermates with 3.5 mm and SMA assemblies.

To help avoid damage, the 2.92 mm male connector is designed with a shorter center pin, ensuring that the outer shell is engaged before the center conductors make contact. The outer shell is thick and should be relatively strong.

The situation became clearer when I got a close look at two damaged 2.92 mm connectors. It helped me understand the dimensional requirements of a 40+ GHz connector that can mate with 3.5 mm and SMA connectors.

Damage to the collet or female center conductors of two 2.92 mm K connectors has rendered them useless. The fingers of the slotted contacts are bent or missing, likely from mating with a bad SMA male connector.

The 2.92 mm connectors should not be prone to damage when used with other 2.92 mm connectors, but intermating with SMA connectors—one of the benefits of this family—is more likely to be destructive.

For a brief explanation, start with the rule of thumb for determining the maximum frequency of coax: divide 120 GHz by the inner diameter D (in mm) of the outer conductor. The outer diameter d of the inner conductor is constrained to a specific D/d ratio to obtain the desired impedance. With a fixed d, the comparatively large center pin of a K connector results in very thin slotted contacts for the female center conductor.
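
Putting numbers on it is straightforward. This Python sketch uses the 120 GHz/D rule above plus the standard air-line impedance relation, Z0 = 60 x ln(D/d) ohms, so treat the results as approximations:

import math

def max_freq_ghz(D_mm):
    """Approximate upper frequency from the 120 GHz / D rule of thumb."""
    return 120.0 / D_mm

def pin_diameter_mm(D_mm, z0=50.0):
    """Center-pin diameter d for an air-dielectric line: Z0 = 60*ln(D/d)."""
    return D_mm / math.exp(z0 / 60.0)

for name, D in (("3.5 mm", 3.5), ("2.92 mm (K)", 2.92), ("2.4 mm", 2.4)):
    print(f"{name:12s} D = {D} mm -> f_max ~ {max_freq_ghz(D):.0f} GHz, "
          f"center pin d ~ {pin_diameter_mm(D):.2f} mm")

# For the K connector, d works out to about 1.27 mm, and the slotted
# female contact that accepts that pin has very little wall thickness
# left over -- hence the thin, fragile fingers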

Combine these thin contacts with SMA connectors that have looser tolerances, and which are more likely to have misaligned or projecting center pins. The result is a higher risk for damage to connectors that are otherwise robust and high-performance when mated with their own kind.

It’s logical to assume that a 3.5 mm connector, with larger d and thicker, stronger contacts, would be less likely to suffer damage from mating with an SMA. This appears to be the case, though insertion forces—and the chance of increased wear—may be higher.

It took a while for me to figure this out. One reason: some resources online were simply wrong, claiming, for example, that 2.92 mm connectors have thin outer walls (often true of SMA) and that metrology-grade versions are not available.

I now understand this small-scale plumbing a little better and can appreciate K connectors more fairly. They perform very well, are durable, and offer intermating advantages. Of course, you’ve got to take care when using them around SMA hardware, but that’s a good idea for 3.5 mm connectors as well.

 

SMA hardware also can be a hazard to 2.4 mm and 1.85 mm connectors, and it’s worth paying close attention to the mating habits of the expensive plumbing on your bench. It’s an essential part of getting the performance you’ve paid for.

Originally posted Aug 24, 2015

 

How well do you know what you know, and how well do you need to know it anyway?

We choose, purchase and maintain test equipment because we want answers: how big, how pure, how fast, how wide, and so on. The answers are essential to our success in design and manufacturing, but they come at a cost. Therefore, we want to make the most of them, and I have written previously about improving measurements by adding information.

There are many ways to add information, including time averaging of repetitive signals and subtracting known noise power from a measurement. I’ve recently discussed using the information from periodic calibration of individual instruments as a way to get insight into the likely—as opposed to the specified—accuracy for actual measurements. If you’re paying for calibration and the information gathered during the process, it’s wise to make the most of it. Here’s an example, from calibration, of the measured frequency response of an individual PXA signal analyzer:

Frequency response of one PXA signal analyzer as measured during periodic calibration. The measured performance and measurement uncertainty are shown in comparison to the warranted specification value.

In the cal lab, this analyzer is performing much better than its hard specs, even after accounting for measurement uncertainty. That’s not surprising, given that the specs must account for environmental conditions, unit-to-unit variation, drift, and our own measurement uncertainty.

Of course, if you’re using this particular instrument for a similar measurement in similar conditions, it’s logical to expect that flatness will be closer to the measured ±0.1 dB than to the specified ±0.35 dB. How can we take advantage of this extra performance?

Not surprisingly, the answer depends on a number of factors, many specific to your situation. I’ll offer a few thoughts and guidelines here, gathered from experts at Keysight.

Begin by understanding your measurement goals and responsibilities. You may be looking for a best estimate rather than a traceable result to use in the design phase, knowing the ultimate performance will be verified later by other equipment or methods. In this situation, the minimum and maximum uncertainty values shown above (dotted red lines) might lead you to comfortably expect ±0.15 dB flatness.

On the other hand, you may be dealing with the requirements and guidelines in standards documents such as ISO/IEC 17025, ANSI Z540.3 and ILAC G8. While calibration results are relevant, relying on them is more complicated than using the warranted specs. The calibration results apply only to a specific instrument and measurement conditions, so equivalent instruments can’t be freely swapped. In addition, you must explicitly account for measurement conditions rather than relying on the estimates of stability and other factors that are embedded in Keysight’s spec margins.

These factors don’t rule out using calibration results in calculating total measurement uncertainty and, in some cases, it may be the most practical way to achieve the lowest levels of measurement uncertainty—but using them can complicate how you verify and maintain test systems. You’ll want to identify the assumptions inherent in your methods and have a process to verify them, to avoid insidious problems.

Measurement uncertainty is not the only element of test plan design, and calibration results can help in other ways. Consider the measured and specified values for displayed average noise level (DANL) in the following graph.

The actual and specified average noise levels of a PXA signal analyzer are shown over a range of 3.6 to 50 GHz. Where measurement speed is a consideration, the actual DANL may be a better guide than the specifications in optimizing settings such as resolution bandwidth.

In this example the actual DANL is 5 to 10 dB better than specified, and this has implications for the test engineer. When measuring low-level signals or noise, it’s necessary to select an RBW narrow enough to reduce the noise contributed by the signal analyzer. Narrow RBWs can lead to slow measurements, so there’s a real benefit to understanding the actual noise level as a way to use RBWs that are as wide—and therefore as fast—as possible.
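
As a rough illustration of the payoff—with assumed numbers, not instrument specs—here’s how a known DANL advantage translates into wider RBWs and faster sweeps:

```python
# Assumption for illustration: the actual DANL is 8 dB better than the
# warranted value. The RBW can then be widened by the same dB amount for an
# equal displayed noise floor, and swept measurement time improves roughly
# as the square of the RBW ratio.
danl_advantage_db = 8.0

rbw_ratio = 10 ** (danl_advantage_db / 10)   # ~6.3x wider RBW, same floor
speedup = rbw_ratio ** 2                     # sweep time ~ span / RBW^2

print(f"RBW can be ~{rbw_ratio:.1f}x wider, sweeps ~{speedup:.0f}x faster")
```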

When your measurements and test plans are especially demanding, it makes sense to use all the information available. Guardbanding is part of a Keysight calibration service that includes the most complete set of calibration results such as those above. For easy access to calibration results without tracking paper through your organization, you can use the free Infoline service that comes with all calibrations.

Originally posted Aug 10, 2015

 

Fascinating connections between very different phenomena

In engineering, one of the most interesting experiences is to encounter an analog of something familiar, but in an entirely different field. I bet we’ve all had this recognition of similarity and felt the intellectual thrill of discovering parallels and symmetry. It’s also the source of theoretical breakthroughs, as described in Thomas Kuhn’s classic The Structure of Scientific Revolutions.

I can claim nothing so grand, but thought it might be interesting to summarize the journey that began with my efforts to understand and explain digital demodulation and the resulting error vector signals 20 years ago.

In a previous post, I explained the meaning of the error vector signal and how it represented the residual after the intended digital modulation was removed from a signal. The magnitude of the error vector signal (EVM) is well known and frequently used as an overall quality metric; however, the full, complex signal is more powerful in terms of diagnostics and insight.

The error vector is calculated as the complex difference between a measured signal and one with the same symbol sequence and perfect modulation. In performing demodulation with a vector signal analyzer, I figured it should be possible to hide a small modulated signal inside a larger one, making it almost impossible to detect unless one already knew of its presence. The error vector residual after demodulation should then be due mostly to the hidden signal and one should be able to demodulate it. Here’s an example of my signal spectra and results.

One signal—about 30 dB smaller—is hidden inside another in the spectrum at left. After the larger signal is demodulated and removed, the resulting error vector signal is successfully demodulated at right.

I was surprised at how well this process worked, even with differences in signal power of 30 dB or more. Noise didn’t seem to be a major problem unless it caused a significant number of symbol errors at the physical layer. When those occurred, they fouled up the calculation of the perfect signal that is subtracted from the received one, preventing accurate calculation of the residual.
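
Here’s a minimal, symbol-rate-only sketch of that demodulate-subtract-demodulate idea. The QPSK mapping, the 30 dB offset, and the absence of pulse shaping, noise, and synchronization are all simplifying assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096

def qpsk(symbols):
    """Map symbol indices 0..3 to unit-circle QPSK points."""
    return np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))

def decide(x):
    """Hard decision: nearest QPSK symbol index 0..3."""
    return np.round((np.angle(x) - np.pi / 4) / (np.pi / 2)).astype(int) % 4

cover_syms = rng.integers(0, 4, n)
hidden_syms = rng.integers(0, 4, n)
rx = qpsk(cover_syms) + 10 ** (-30 / 20) * qpsk(hidden_syms)  # hidden 30 dB down

ideal = qpsk(decide(rx))     # demodulate and rebuild the "perfect" cover signal
residual = rx - ideal        # error vector: mostly the hidden signal

ser = np.mean(decide(residual) != hidden_syms)
print(f"hidden-signal symbol error rate: {ser:.4f}")  # ~0 while decisions hold
```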

I explained these experiments to a VSA-savvy R&D project manager: he said it looked like I’d “created an oddball version of CDMA.” It took me a while to appreciate the significance of what he’d said, but it did indeed seem to be an analog to CDMA.

When I ran across a paper about steganography, however, I recognized the similarity immediately. Though steganography comes in many forms and has a long history, I find the most instructive and satisfying examples to be graphic ones such as this pair.

The image of the cat at right is hidden in the image of the tree at left. The hidden image is recovered from the cover (carrier?) image by removing everything but the two least significant bits of the color channels and re-scaling the result. (Images from Wikimedia Commons)

A critical element in any version of this process is how the respective signals from the cover and hidden image are separated. Orthogonal codes and image intensity are just two of many approaches; you can see others at the Wikipedia link above.
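
For the graphic example above, the separation is just bit masking. Here’s a small numpy sketch of the two-LSB scheme the caption describes; the function names are mine, and image loading and saving are omitted:

```python
import numpy as np

def reveal_lsb2(cover: np.ndarray) -> np.ndarray:
    """Keep only the two low bits of each 8-bit color channel and re-scale."""
    hidden = cover & 0b11                     # values 0..3 per channel
    return (hidden * 85).astype(np.uint8)     # 0..3 -> 0..255 (3 * 85 = 255)

def embed_lsb2(cover: np.ndarray, secret: np.ndarray) -> np.ndarray:
    """Replace the cover's low two bits with the secret's top two bits."""
    return (cover & 0b11111100) | (secret >> 6)
```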

The examples of different steganography types and signal-separation techniques are nearly endless, and in wireless communications I suspect MIMO is another one. In wireless it also seems that processing tasks such as separating signals from noise or dealing with low—or even negative—signal-to-noise ratios can be viewed through the lens of steganography.

Originally posted Jul 24, 2015

 

When you’re playing cat-and-mouse with tricky signals

Hertz-minded RF engineers are becoming more and more comfortable with the time domain and, in particular, with simultaneous operations and signal analysis in the time and frequency domains. Part of the reason is that modern systems—from radar to communications—must be optimized in both domains. Of course, many systems are also frightfully complex in both domains, but that’s a post for another day.

The other part of the reason for this dual-domain focus is defensive: things can go wrong in both domains and engineers will need to find and fix the problems—or at least convincingly point to the guilty party.

Fixing a problem often begins with a clear, unambiguous look at the signal in question. That’s not much of a challenge for a CW signal, and even pulsed or TDMA signals can be handled with proven techniques that have been around for years.

Unfortunately, getting a clear look at the contents of today’s crowded frequency bands is difficult, and getting more so. You’re often looking for one signal among many, and it may be present for only a tiny fraction of your measurement time. To compound the elusiveness, the signal may also be hopping or switching channels.

The challenge is obvious in unlicensed spectrum like the ISM bands, where there are lots of users, minimal supervision, and many different signal types. Even in the licensed bands you may need to track down brief, unintended emissions from users on other bands, including harmonics, spurious or transient switching effects.

As is so often the case, the solution is to take advantage of powerful DSP and borrow something from oscilloscopes, our friendly experts in the time domain: the time-qualified trigger (TQT).

As the name implies, this trigger adds one or more time-qualification parameters to other trigger types such as frequency mask or IF magnitude. Here’s the TQT applied to a magnitude trigger in the 89600 VSA software:

A time-qualification parameter T1 is applied to an IF magnitude trigger on an RF pulse. A data-acquisition trigger is generated only if the pulse stays above the IF magnitude level (dashed horizontal line) for an interval longer than T1. A pre-trigger (negative) delay is used to move the data acquisition earlier and capture the entire pulse.

The simplest time qualification is to trigger when an event lasts longer than a selected time T1. Using the two available time parameters, T1 and T2, provides three additional qualifications:

  • Duration less than T1
  • Duration between T1 and T2
  • Duration less than T1 or greater than T2

Of course, the point in time when the trigger conditions are all satisfied is unlikely to be the point at which you want to begin sampling and measurement. The VSA solutions include adjustable trigger delay to solve this problem, and the negative (pre-trigger) delay is frequently the most useful. It allows you to wait until you’ve found the exact signal you want and then go back in time to see the beginning of it or even earlier events.

Speaking of time travel, triggers such as TQT and IF magnitude can also be used in the VSA software to initiate signal recordings or time captures. Complete, gap-free blocks of frequency and time data can be streamed to memory for any—or multiple—types of post processing. Both center frequency and span can be changed after capture, to examine different signals and to track down cause and effect.
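
Here’s a rough software model of the simplest case—a “duration greater than T1” magnitude trigger with pre-trigger delay—operating on a block of envelope samples. The function and its parameters are illustrative, not the 89600 VSA API:

```python
import numpy as np

def time_qualified_trigger(env, level, t1_samples, pretrig_samples):
    """First acquisition start index for a 'duration > T1' magnitude
    trigger with pre-trigger delay; None if nothing qualifies."""
    run = 0
    for i, above in enumerate(env > level):
        run = run + 1 if above else 0
        if run >= t1_samples:            # stayed above level longer than T1
            start = i - run + 1          # rising edge of the qualifying event
            return max(0, start - pretrig_samples)
    return None

t = np.arange(2000)
env = np.where((t >= 800) & (t < 1000), 1.0, 0.05)   # a 200-sample pulse
print(time_qualified_trigger(env, level=0.5, t1_samples=100, pretrig_samples=50))
# -> 750: acquisition begins 50 samples before the pulse edge at sample 800
```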

 

For RF engineers, the frequency domain is also a critical qualifier, and the frequency mask trigger (FMT) in real-time spectrum analysis is a powerful complement to TQT. FMT and TQT can be used together to qualify measurement data blocks in both domains simultaneously, trapping fleeting signals or capturing dynamic behavior with the speed and thoroughness of a hungry barn cat chasing lazy mice.

Originally posted Jul 13, 2015

 

This is why we can’t have nice things at microwave and millimeter frequencies

Well, of course, we can have nice things at very high frequencies, but it’s more difficult and gets progressively harder as frequencies increase. I just couldn’t resist invoking the “can’t have nice things” meme, and to parents everywhere it has a certain resonance.

In many applications, the operating frequencies of the systems we design and test are increasing as part of our endless quest for available bandwidth. From direct improvements in data throughput to increasing resolution in radar-based synthetic vision, the requisite tradeoffs apply equally to test equipment and DUTs.

An intuitive understanding of life at higher frequencies was on my mind recently after reading an email that mentioned a classic rule of thumb: A perfect frequency doubler increases phase noise by 6 dB. Here’s an example of a synthesizer output at successively doubled frequencies.

Successive doubling of the output of a frequency synthesizer increases phase noise by 6 dB for each step.

Of course, if the doubler is not a perfect device, then the increase will be larger than 6 dB because the doubler adds noise or undesirable phase deviation.

Why 6 dB? Perhaps that’s where different intuitive approaches can help. Years ago, when I first heard it, the 6 dB figure made sense from a time-domain perspective. If a deviation is constant in terms of time, the phase deviation at twice the frequency will be twice as large. Doubling the phase deviation—a linear term—will increase sideband power by 6 dB.

Heuristically, this intuitive approach feels correct, but I’ve learned to be cautious about relying too much on my intuition. Fortunately, more rigorous calculations—albeit based on approximations and simplifications—yield the same answer. Until I wrote this post, I didn’t realize that my approach also involved a version of the small-angle approximation.

A more general expression of this relationship applies to multipliers other than two:

20 × log10(N) dB, where N is the multiplication factor
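
A quick numeric check of that expression, including a divider treated as N < 1:

```python
import math

# Ideal multiplication changes phase noise by 20*log10(N) dB; N < 1 is a divider.
for n in (2, 4, 8, 0.5):
    print(f"N = {n:>3}: {20 * math.log10(n):+6.1f} dB")
# N=2: +6.0, N=4: +12.0, N=8: +18.1, N=0.5 (divide-by-two): -6.0
```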

In practical microwave and millimeter systems, multipliers greater than two are common, placing a real premium on the phase noise performance of the fundamental oscillators. This applies equally to microwave and millimeter test equipment, in which the choice of local oscillator frequencies is a balance between performance at fundamental frequencies and required range of multipliers or harmonic numbers.

That balance can indeed yield nice things at very high frequencies. Here’s an example of the phase noise of a signal analyzer at 67 GHz using external mixing.

This measurement of a low-noise millimeter source reveals the phase noise of a Keysight PXA X-series signal analyzer using a V-band smart external mixer at 67 GHz. The DUT, a PXG signal generator with a low-noise option, has even lower phase noise.

Frequency dividers are another example of this relationship, and can be treated as multipliers with a constant less than one. For example, a divide-by-two circuit (N = 0.5) yields an improvement of 6 dB, making it a practical and effective way to reduce phase noise.

Where do you get your insight into relationships such as this? Do you lean on visual approaches, mathematical calculations or something else altogether? Feel free to add a comment and share your perspective.

Originally posted Jul 1, 2015

 

Admirable explanations and embarrassing memories

On a recent long-distance drive across western states, we encountered lightning and its usual transient interference across the AM radio band. That distinctive sound crackling from the receiver links two things in my mind: the first spark-gap wireless transmitters and my own unintentional transmissions from a spark gap driving a seven-foot Tesla coil.

Like many RF engineers, I’m fascinated by the history of radio, including the first steps on the path to where we are today. Unfortunately, I didn’t get much of this in school because our lectures only went as far back as the invention of negative feedback in the late 1920s. Practical spark-gap transmitters predated this by several decades.

That early history is enlightening, and I wanted to share an excellent—and underappreciated—explanation of it: a 1991 episode of the British Channel 4 series The Secret Life of Machines. It’s an understatement to call the series quirky and low-budget, but it’s also brilliant and entertaining. Here I do my best to create effective explanations of technical topics, but the hosts of this series have a talent I can only envy.

To see what I mean and get a glimpse of the earliest history of wireless, take a look at the series episode The Secret Life of the Radio Set. This YouTube link is one of several places where you can see the episode and others. You might want to look at the episode before reading the rest of this post. Go ahead. I’ll wait.

Welcome back. In the video, I was particularly struck by the sparks in both the transmitters and receivers. By the time I saw it, I was aware of the growing problems with spectral crowding and interference, and was working with the narrowband TDMA technologies that were being introduced to enable second-generation wireless phones. Videos of the spark-gap transmitters were an effective attention-getter in all kinds of talks about new and more spectrally efficient systems.

Early in my life as a practicing engineer my extracurricular activities included spark gaps and circuits that were the very opposite of spectrally efficient. In my defense, I didn’t come up with the design and, anyway, it was for a good cause. Here are a couple of pictures of the building process of that seven-foot Tesla coil.

Winding a mile of fine insulated wire on a Plexiglas tube to form the final stage of a Tesla coil. The completed winding is shown at right and, yes, that is a plumber’s toilet flange serving as a base anchor at the far end. Also yes, on the left that is your humble author as a younger, darker-haired practicing engineer.

The completed Tesla coil was inefficient by every measure. It was large and used high-voltage capacitors made from three-foot square panes of glass with heavy aluminum foil glued to each side. It was power hungry, driven by three neon-sign transformers that each produced 15 kV and 200 mA. I didn’t realize it at the time but it was a spectral monster, radiating power over a bandwidth that makes me cringe when I think about it now. It even made all 12 fluorescent tubes in the garage ceiling glow every time we switched it on.

Fortunately, we operated it for only a few seconds at a time, as part of a charity Halloween show. It was the centerpiece of our “Frankenstein laboratory,” sending bolts of lightning as the monster came to life and broke free to terrorize the crowd. Kids would run from the lab in a panic, only to get right back in line for the next show.

As with the radio industry of the last century, I quickly moved on to much more narrowband and civilized electromagnetic radiators. But every time I hear lightning crackle on the AM radio or the clattery, ringing buzz of a spark gap, I think of the true meaning of broadband and hope there is some sort of statute of limitations on my spectrum transgressions.

Originally posted Jun 18, 2015

 

Apply information about the individual instruments on your bench

I suppose design and measurement challenges can be a valuable contribution to job security. After all, if a clever and creative person like you has to struggle to hit the targets and balance the tradeoffs, you can’t be replaced with someone less talented—or by a mere set of algorithms.

However, this general promise of increased job security is scant comfort when you’re dealing with a need to improve yield, reduce the cost of test, increase margins, or otherwise engineer your way out of a jam. From time to time, you need a new tactic or insight that will inspire a novel solution to a problem.

This is ground we have walked before, looking for ways to transcend the “designer’s holy triangle,” and previous posts have explained how adding information to the measurement process can be a powerful problem solver. One approach is to take advantage of published measurements of typical performance in test equipment to more accurately estimate measurement uncertainty.

In a comment on the post that described that approach, Joe Gorin explained it clearly: “What good is this accuracy if it is not a warranted specification? How can it be used in my measurement uncertainty computations? This accuracy is of great value even when not warranted. Most of us who deal with uncertainty must conform with ISO standards which call for using the statistical methods of the GUM (Guide to the Expression of Uncertainty in Measurement). The GUM, in an appendix, explains that the measurement uncertainty should be the best possible estimate, not a conservative estimate.”

To arrive at the best possible estimate, another—often overlooked—source of information is available to many of us: calibration reports for individual instruments.

The power level accuracy of an individual microwave signal generator is shown in a report generated during periodic calibration. The guaranteed specification is shown as green dashed lines (high and low limits) while blue dots represent specific measurements and the pink brackets indicate the associated uncertainty.

It may not surprise you that the measured performance of this signal generator is much better than the guaranteed specifications. After all, generic specifications must apply to every one of that model and account for environmental conditions and other factors that apply to only a minority of use cases. In this example, instrument-specific information can be added to the process of determining total measurement uncertainty, yielding a substantial improvement.

Keysight calibration services test all warranted specifications for all product configurations. The resulting calibration data is available online in graphic and tabular form at Infoline for Keysight-calibrated instruments, a process that’s much easier than tracking down paper certificates inside your organization. This testing regime and data access is not universal in the industry, so if you’re not using Keysight calibration services you’ll need to check with your vendor.

The optimal use of this additional information will depend on your needs and the specifics of your measurement situation. So far I’ve only described the availability of the data, but I’m looking deeper into the practicalities of using it and will share more in my next post on this topic.

In addition, a discussion and excellent set of references are available in a paper discussing methods for pass/fail conformance that complies with industry standards.

I didn’t learn about calibration in school and my exposure to it in practice has been sporadic. However, I’ve been learning more about it in the past few months and have been impressed with the measures taken in factory and field calibration to ensure accuracy and determine its parameters. You should take advantage of all that effort—and the calibrations you pay for—whenever it will help.

Pulse Analysis: From Many, One*

Originally posted Jun 4, 2015

 

Lots of measurements of a stochastic process may provide the deterministic number you seek

For much of my career, measurement has often been a search for The One True Number, or at least the closest approximation I could manage. I have complained about measurements that are more stochastic than deterministic, and about how noise makes my work life difficult in multiple ways, including excess consumption of my remaining days on this mortal coil.

To be fair, I have also had to recognize the occasional usefulness of noise, and generally accept that it’s an inescapable part of our universe. It’s similar to my views on insects: I don’t like most of them, but I’m pretty sure there would be big problems if they weren’t here.

Recently, I’ve been looking at tools and techniques for measuring RF pulses in radar applications, and it seemed that I had entered a kind of alternate measurement domain. In the past, I’ve made many measurements of individual radar pulses, usually with the 89600 VSA software. Using a wide range of RF front ends, this software quantifies anything you might want to know about a pulse: all kinds of frequency, amplitude (average power, power vs. time, on/off ratio), timing, and modulation parameters such as chirp linearity or modulation quality. With the VSA’s time capture and repeated playback capabilities, you can make most of these measurements on a single pulse (from one, many).

No matter how accurate or comprehensive those measurements may be, they are inadequate in one important respect for applications such as radar: they do not account for the consistency of the pulses in question. The VSA software takes a pulse-by-pulse approach and generally does not indicate repeatability, stochastic characteristics, or trends in pulse trains or sequences.

Understanding some aspects of radar performance requires a kind of meta-analysis, quantifying the trends or repeatability limits of various parameters of the signals in question. The recent addition of option BHQ to the 89600 VSA software adds this large-scale statistical view in the form of a measurement application for pulse analysis. One typical measurement, aggregating the behavior of a multitude of pulses, is the histogram.

This histogram of best-fit FM results summarizes the behavior of thousands of pulses, automatically identifying and quantifying outliers.

Radar is a prime example of a system in which repeatability is of critical importance, and where trend behavior can be invaluable in design optimization.

The inevitable question, however, is which parameter to analyze for trends or other statistical measures. This is where the experience, insight and intuition of the radar engineer come into play. As is true in wireless, this is another example of measurement software, powerful DSP and large multi-trace displays working together to leverage the talents of the design engineer.

The radar measurement application automatically identifies and measures large numbers of pulses. Multi-trace displays with both graphical and tabular data take advantage of an engineer’s pattern recognition to spot anomalous behavior or identify connections and causes.

Clever software and processing power are no substitute for engineering skill, but they help distill the magnitude and complexity of pulse trains filled with complex signals. While the result may not be a single value as The One True Number, it can mitigate the risks of measuring too few pulses or analyzing too few parameters together.

If you’re interested in this sort of data reduction and analysis, please visit www.keysight.com/find/radar.

* “From many, one” is a common translation of “E pluribus unum” from the Great Seal of the United States.

Originally posted May 22, 2015

 

Signals and noise in the optical realm

It looks like I’m not the only one who wrestles with noise quite a bit, and recent developments in digital photography spurred me to depart briefly from my usual focus (pun intended) on the RF world.

I’m not departing very much, though, because digital photography can be seen as a two-dimensional type of signal analysis. Not surprisingly, many of the electrical engineers I know have at least a hobbyist interest in photography, and for quite a few it’s more than that. Our engineering knowledge helps a lot in understanding the technical aspects of making a good photograph, and I’d like to explain one recent development here.

The megapixel race in digital imaging is abating, perhaps because sensor resolution now exceeds the performance of some lenses and autofocus systems. I see this as a positive development, shifting attention to other important factors such as sensitivity or low-light performance. Sensitivity is as critical as resolution in those all-too-common situations when light is scarce and camera shake or subject movement render long exposures impractical.

Camera sensitivity goes back to the days of film, and the parameter called ISO quantifies it. In film, this sensitivity is related to grain size, but in digital imaging it’s more closely related to gain applied to the signal coming from the sensor. In an interesting correspondence, high ISO settings in a digital camera will produce noisier images that echo the coarser grain of high-ISO film.

This dance of gain and noise is awfully familiar to all of us, and I wonder if we should be suggesting to the digital imaging folks some sort of measure based on noise figure.

Today’s best digital cameras offer impressive sensitivity, driving new emphasis on a parameter near and dear to all of us: dynamic range. In the last several years, dramatic improvements in dynamic range have produced cameras that are almost ISO-invariant, and this provides a big benefit for photographers.

Here’s my crude attempt at a graphical representation of the situation.

This digital image “tone flow” diagram shows how a scene with wide dynamic range may be clipped and compressed in the process of capture and conversion to JPEG format. If you rotate this diagram 90 degrees to the left, it corresponds well with the amplitude levels of an RF signal measurement.

For RF engineers, this is familiar territory. Wider dynamic range in a measurement tool is always a good thing, and sometimes there is no substitute.

Taking advantage of this ISO-invariance is simple, though perhaps not intuitive. Instead of exposing normally for a challenging scene, the metering is set to capture the desired highlights, and the image is saved in a raw sensor-output format rather than JPEG. This may leave parts of the scene apparently underexposed, but the raw format preserves the full dynamic range of the sensor, allowing all the tones to be brought into the desired relationship in the end result. With an ISO-invariant camera, deep shadows may be brought up several stops or more without significant noise problems.

The result is more easily demonstrated than described, and an article at dpreview.com discusses the theory with examples. The folks at DPReview even consulted with Professor Eric Fossum, the inventor of the modern CMOS camera sensors that make this possible.

 

In a related article they also discuss the sources of noise in digital imaging, and once again there are parallels to our common vexations. I’m sure Boltzmann is in there somewhere.

Originally posted May 8, 2015

 

With a benefit or two that should not remain invisible

Though we don’t always think of them in quite this way, signal measurements such as low-level spurious involve the collection of a great deal of information, and thus can be frustratingly slow. I’ve described how the laws of physics sometimes help us, but this bit of good fortune confers only a modest benefit.

Some years ago, the advent of digital RBW filters in signal analyzers brought gains in speed and performance. The improved shape factor and consistent bandwidth yielded better accuracy, and the predictable dynamic response allowed sweep speeds to be increased by a factor of two to four. The effects of a faster sweep were correctable in real time as long as the speed wasn’t increased too much.

The idea of correcting for even faster sweep speeds was promising, and the benefits have gotten more attractive as spurious, harmonics and other performance specifications get ever tighter. To meet these requirements, the principal technique for reducing noise level in a spectrum or signal analyzer is to reduce RBW, with noise floor dropping 10 dB for each 10x reduction in RBW.

Unfortunately, sweep time lengthens with the square of the RBW reduction. A 100x increase in measurement time for a 10 dB improvement in signal-to-noise is a painful tradeoff.
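
To put rough numbers on that tradeoff—the proportionality constant below is a textbook-style assumption, not a PXA specification:

```python
# Classic swept-analysis estimate: sweep time ~ k * span / RBW^2, with k on
# the order of 2 to 3. The value k = 2.5 is an assumption for illustration.
def sweep_time_s(span_hz, rbw_hz, k=2.5):
    return k * span_hz / rbw_hz ** 2

span = 1e9                                   # 1 GHz span
for rbw in (10e3, 3e3, 1e3):
    print(f"RBW {rbw / 1e3:4.0f} kHz -> ~{sweep_time_s(span, rbw):7.0f} s")
# 10x narrower RBW buys a 10 dB lower noise floor at ~100x the sweep time
```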

As has occurred in the past, clever algorithms and faster DSP have combined to improve measurements and relieve the tedium for the RF engineer:

These two measurements cover the same frequency span with the same resolution bandwidth. Option FS1 in the Keysight X-Series signal analyzers (bottom) improves measurement rate by about 50 times.

Fast ASIC processing in the signal analyzer corrects for the frequency, amplitude and bandwidth effects of sweeping the RBW filters at speeds up to about 50 times faster than the traditional minimal-error speed. This improvement applies to swept—not FFT—measurements and is most beneficial when RBW is approximately 10 kHz or greater.

While the speed benefits are obvious, another may be nearly invisible: narrower RBWs also [update: see note below] improve repeatability.

This graph compares the repeatability (vertical axis) of fast sweep and traditional sweep. The lower level and shallower slope of the blue line show both improved repeatability and less dependence on sweep time.

The magnitude of the speed improvement depends on measurement specifics and analyzer configuration, but the gains are achieved automatically and with no tradeoff in specifications. If slow measurements are increasing your ambient level of tedium, you’ll find more information about this technique in our fast sweep application note.

Note: Improved measurement speed and repeatability are alternative benefits in this case, contrary to the implication of my original wording. You can use the same measurement time and get improved repeatability, or you can improve measurement time without improving repeatability. I apologize for the confusion.

Originally posted Apr 22, 2015

 

A nagging little question finally gets my attention

In a recent post on measurement accuracy and the use of supplemental measurement data, the measured accuracy in the figure was given in terms of the mean and standard deviations. Error bounds or statistics are often provided in terms of standard deviation, but why that measure? Why not the mean or average deviation, something that is conceptually similar and measures approximately the same thing?

I’ve wondered about standard and average deviation since my college days, but my curiosity was never quite strong enough to compel me to find the differences, and I don’t recall my books or my teachers ever explaining the practicalities of the choice. Because I’m working on a post on variance reduction in measurements, this blog is the spur I need to learn a little more about how statistics meets the needs of real-world measurements.

First, a quick summary: standard deviation and mean absolute—or average—deviation are both ways to express the spread of sampled data. If you average the absolute values of the sample deviations from the mean, you get the mean or average deviation. If you instead square the deviations, the average of the squares is the variance, and the square root of the variance is the standard deviation.

For the normal or Gaussian distributions that we see so often, expressing sample spread in terms of standard deviations neatly represents how often certain deviations from the mean can be expected to occur.

This plot of a normal or Gaussian distribution is labeled with bands that are one standard deviation in width. The percentage of samples expected to fall within that band is shown numerically. (Image from Wikimedia Commons)

Totaling up the percentages in each standard deviation band provides some convenient rules of thumb for expected sample spread:

  • About one in three samples will fall outside one standard deviation
  • About one in twenty samples will fall outside two standard deviations
  • About one in 300 samples will fall outside three standard deviations

Compared to mean deviation, the squaring operation makes standard deviation more sensitive to samples with larger deviation. This sensitivity to outliers is often appropriate in engineering, due to their rarity and potentially larger effects.

Standard deviation is also friendlier to mathematical operations because squares and roots are generally easier to handle than absolute values in operations such as differentiation and integration.
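
A small numerical experiment confirms both the Gaussian relationship between the two measures—mean deviation is sqrt(2/π), or about 0.8, of the standard deviation—and the coverage rules of thumb above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)   # a million Gaussian samples

sigma = x.std()
mad = np.abs(x - x.mean()).mean()
print(f"std = {sigma:.3f}, mean deviation = {mad:.3f}, ratio = {mad / sigma:.3f}")
# ratio -> sqrt(2/pi) ~ 0.798 for Gaussian data

for k in (1, 2, 3):
    outside = np.mean(np.abs(x - x.mean()) > k * sigma)
    print(f"fraction outside {k} sigma: {outside:.4f}")  # ~0.317, 0.046, 0.003
```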

Engineering use of standard deviation and Gaussian distribution is not limited to one dimension. For example, in new calculations of mismatch error the complementary elements of the reflection coefficient both have Gaussian distributions. Standard deviation measures—such as the 95% or two standard deviation limit—provide a practical representation of the expected error distribution.

I’ve written previously about how different views of data can each be useful, depending on your focus. Standard and mean deviation measures are no exception, and it turns out there’s a pretty lively debate in some quarters. Some contend, for example, that mean deviation is a better basis on which to make conclusions if the samples include any significant amount of error.

 

I have no particular affection for statistics, but I have lots of respect for the insight it can provide and its power in making better and more efficient measurements in our noisy world.

Originally posted Apr 7, 2015

 

Is something wrong with this picture?

Many of the things that intrigue me do not have the same effect on an average person. However, you are also not an average person—or you wouldn’t be reading this blog. Thus, I hope you’ll find the following image and explanation as interesting and useful as I did. Take a close look at this Keysight X-Series signal analyzer and the bits I’ve highlighted:

The frequency range of this MXA signal analyzer extends to 26.5 GHz but it is equipped with a Type N input connector. Because N connectors are normally rated to 11 or 18 GHz, do we have a problem?

One up-front confession: I looked at this combination of frequency range and input connector for years before it struck me as strange. I vaguely remembered that N connectors were meant for lower frequencies and finally took the time to look it up.

The explanation is only a little complicated, including some clever engineering to optimize tradeoffs, and it’s worth understanding. As always with microwaves and connections, it’s a matter of materials, precision and geometry.

First, the short summary: The N connectors used in Keysight’s 26 GHz instruments are specially designed and constructed, and their characteristics are accounted for in the instrument specifications. If you’re working above 18 GHz and using appropriate adapters such as those in the 11878 Adapter Kit, you can measure with confidence. Just connect the N-to-3.5 mm adapter at the instrument front panel and use 3.5 mm or SMA hardware from there.

Why use the N connector on a 26 GHz instrument in the first place? Why not an instrument-grade 3.5 mm connector that will readily connect to common SMA connectors as well? The main reason is the strength and durability of the N connector when dealing with the bumps, twists and frequent reconnections that test equipment must endure—and still ensure excellent performance. Precision N connectors offer a combination of robustness and consistent performance that is unique in the RF/microwave world. They’re also easy to align and are generally tightened by hand.

However, there is that small matter of limited frequency range. Standard N connectors are rated to 11 GHz and precision ones to 18 GHz. Above 18 GHz, conductor size and geometry can allow amplitude and phase errors due to the moding phenomenon I described in a previous post.

Moding is a resonance phenomenon arising from the larger dimensions of the N connector, and the solution involves a change in the construction of the instrument’s precision N connector. This special connector combines a slotless inner conductor contact, a support bead of a special material, and higher-precision construction. As a result, resonances can be eliminated or reduced to such a small magnitude that the N connector is the overall best choice for test equipment over this frequency range.

 

There you have it, the practical advantages of N connectors over the full 26.5 GHz frequency range, without a performance penalty.

Originally posted Mar 10, 2015

 

Precisely small is more of a challenge than precisely big

Recently, I’ve been looking at sensitivity measurements and getting acquainted with the difficulty of doing things correctly at very low signal levels. It’s an interesting challenge and I thought it would be useful to share a couple of surprising lessons about specifications and real-world performance.

From the outset, I’ll concede that data sheets and detailed specifications can be boring. Wading through all that information is a tedious task, but it’s the key to performance you can count on, and specs are a reason to buy test equipment in the first place. Also, extensive specifications are better than the alternative.

Sensitivity measurements show the role and benefits of a good data sheet in helping you perform challenging tests. Say, for example, you’ve got a sensitivity target of 1 µV and need a test signal of just that size, with a desired tolerance of ±1 dB. In a 50Ω system, that single microvolt is −107 dBm, and a 1 dB difference amounts to only about 100 nV.
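
The arithmetic behind those figures is easy to verify:

```python
import math

v = 1e-6                              # 1 uV
p_w = v ** 2 / 50.0                   # power into 50 ohms
p_dbm = 10 * math.log10(p_w / 1e-3)   # convert watts to dBm
print(f"1 uV in 50 ohms = {p_dbm:.1f} dBm")   # -107.0 dBm

dv = v * (10 ** (1 / 20) - 1)         # voltage change for a 1 dB step
print(f"a 1 dB step is ~{dv * 1e9:.0f} nV")   # ~122 nV
```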

The hard specs for a Keysight MXG X-Series microwave signal generator are ±1.6 dB and extend down to −90 dBm, so on paper they fall short of what this situation requires. However, it’s worth keeping in mind that the specs cover a wide range of operating conditions, well beyond what you’ll encounter in this case.

Once again this is a good time to consider adding information to the measurement process as a way to get more from it without changing the test equipment. A relevant item from the signal generator data sheet illustrates my point.

The actual performance of a set of MXG microwave signal generators is shown over 20 GHz, and the statistical distribution is provided as well. Though the measurement conditions are not as wide as for hard specs, these figures are a better indication of performance in most situations.

The performance suggested by this graph is very impressive—much better than the hard specs over a very wide frequency range—and it applies to the kind of low output level we need for our sensitivity measurement. Accuracy is almost always better than ±0.1 dB, dramatically better than the hard spec.

The graph also includes statistical information that relates to the task at hand. Performance bounds are given for ±one standard deviation, and this provides a 68% confidence level if the distribution is normal (Gaussian). If I understand the math, a tolerance of ±0.2 dB would then correspond to two standard deviations and better than 95% confidence.

The time spent wading through a data sheet is amply rewarded, and the right confidence can then be attached to the performance of a tricky measurement. The confidence you need in your own measurements may be different, but the principle is the same and the process of adding information will improve your results.

 

So far, we’ve taken advantage of information that is generic to the instrument model involved. Even more specific information may be available to you, and I’ll discuss that in a future post.

Originally posted Feb 28, 2015

 

Intuition is powerful but if you don’t frame questions well it can mislead you

Contrary to the popular stereotype, good engineers are creative and intuitive. Indeed, these characteristics are essential tools for successful engineering.

I have great respect for the power of intuitive approaches to problems, and I see at least two big benefits. First, intuition can gather diffuse or apparently unrelated facts that enable exceptionally powerful analysis. Second, it often provides an effective shortcut for answers to complex questions, saving time and adding efficiency.

Of course, intuition is not infallible, and I’m always intrigued by its failure. It makes sense to pay attention to these situations because they provide lessons about using intuitive thinking without being misled by it. Two of my favorite examples are the Monty Hall Problem and why mirrors appear to reverse left and right but not up and down.

I’d argue that a common factor in most intuition failures is not so much the reasoning process itself but the initial framing of the question. If you start with a misapprehension of some part of the problem or question, even a perfect chain of reasoning will fail you.

As a useful RF example, let’s look at an intuition failure in “sub-kTB” signal measurements. Among RF engineers, kTB is shorthand for -174 dBm/Hz*, which is the power delivered by a 50Ω thermal source into a 50Ω load at room temperature. It should therefore be the best possible noise level—or, more accurately, noise density or PSD—you could obtain in a signal analyzer that has a perfect 0 dB noise figure.

Not surprisingly, many engineers also see this as the lowest possible signal level one could measure, a kind of noise floor or barrier that one could not see beyond or measure beneath. As a matter of fact, even this level should not be achievable because signal analyzers contribute some noise of their own.

This intuitive expectation of an impenetrable noise floor is logical but flawed, as demonstrated by the measurement example below that uses Keysight’s Noise Floor Extension (NFE) feature in a signal analyzer. Here, a multi-tone signal with very low amplitude is measured near the signal analyzer’s noise floor.

The noise marker shows that the effective noise floor of the measurement (blue) is actually below kTB after NFE removes most of the analyzer’s noise. The inset figure shows how a signal produces a detectable bump in the analyzer’s pre-NFE noise floor (yellow), even though it’s about 5 dB below that noise floor.

I’ve previously described NFE, and for this discussion I’ll summarize by saying that it allows some analyzers to accurately estimate their own noise contribution and then automatically subtract most of it from the measurement. The result is a substantial improvement in effective noise floor and the ability to separate very small signals from noise.

While it is indeed correct that kTB is a noise floor that cannot be improved, or even matched in an analyzer, the error in intuition is in associating this in a 1:1 fashion with an ultimate measurement limit. As discussed previously, signal and noise power levels—even very small ones—can be reliably added or subtracted to refine raw measurement results.

kTB and related noise in analyzers are phenomena whose values, when averaged, are predictable when the measurement conditions and configuration are known. Consequently, subtracting analyzer noise power can be seen as adding information to the measurement process, in turn allowing more information to be taken from the measurement result.
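
For the curious, here’s a minimal sketch of the power subtraction that underlies this kind of noise-floor extension. The levels are illustrative, chosen to land near kTB, and this is not the NFE algorithm itself:

```python
import math

def subtract_noise_dbm(total_dbm, noise_dbm):
    """Estimate signal level by removing a known noise power (dB domain).
    Valid only while the measured total exceeds the noise estimate."""
    total_mw = 10 ** (total_dbm / 10)
    noise_mw = 10 ** (noise_dbm / 10)
    return 10 * math.log10(total_mw - noise_mw)

# Illustrative: a -168 dBm/Hz measured floor minus -169 dBm/Hz analyzer noise
print(f"{subtract_noise_dbm(-168.0, -169.0):.1f} dBm/Hz")   # ~ -174.9 dBm/Hz
```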

OK, so measuring below kTB is perhaps more of a parlor trick than a practical need. However, an intuitive understanding of its possibility illuminates some important aspects of making better RF measurements of those tiny signals that so frequently challenge us.

 

* You may instead see the figure -177 dBm/Hz for kTB. This refers to a slightly different noise level measurement than that of a spectrum or signal analyzer, as explained at the link.

Originally posted Feb 18, 2015   

 

You can “turn it up to eleven” as long as you don’t leave it there

When I first heard the term “envelope tracking” I thought of the classic investigative/surveillance technique called “mail cover” in which law enforcement gets the postal service to compile information from the outside of envelopes. The practice was in the news a while back due to its use with digital communications.

Learning a little more, I quickly realized that it has nothing to do with the mail but, like the mail, has precedent that reaches back many years. “Riding the gain” or “gain riding” is a manual process that has been used for decades in audio recording and other applications where excessive dynamic range is a problem. Its use predates vinyl records, though I first encountered it in my previous life as a radio announcer, broadcasting live events.

When I was riding the gain, it was a manual process of twisting a knob, trying to reduce input dynamic range to something a small-town AM transmitter could handle. I was part of a crude feedback system, prone to delay and overshoot, as I’m sure my listeners would attest.

These days, envelope tracking is another example of how digital processing is used to solve analog problems. In this case it’s the conflict between amplifier efficiency and the wide variations in the RF envelope of digital modulation. If the power supply of an RF amplifier can be dynamically adjusted according to the power needed by modulation, it can—at every instant—be operating at its most efficient point.

In envelope tracking an RF power amplifier is constantly adjusted to track the envelope of the modulated input signal. The amplifier operates at higher efficiency and lower temperature, using less battery power and potentially creating less adjacent-channel interference.

Power efficiency has always been a major driver in mobile communications and its importance continues to grow. Batteries are limited by the size and weight of the handsets users are willing to carry and, yet again, Moore’s Law points the way to improvement. Available DSP now has the high speed and low power consumption to calculate RF envelope power on the fly. The envelope value is fed to a power supply with sufficient bandwidth or response time to adjust its drive of the RF power amplifier accordingly.

An envelope tracking power amplifier (ETPA) is dynamically controlled for optimum efficiency by tracking the required RF envelope power. The tracking is based on envelope calculations from the I/Q signal, modified by a shaping table.

This all seems fairly straightforward but, of course, is anything but. The calculation and response times are very short, and a high degree of time alignment is required. Power supplies must be extremely responsive and still very efficient. All of the DSP must itself be power efficient, to avoid compromising the fundamental power benefit.
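
Conceptually, though, the envelope path reduces to a few lines. In this sketch the function name and shaping-table values are invented for illustration, not a real ETPA characterization:

```python
import numpy as np

def supply_drive(i, q, env_points, vcc_points):
    """Map the instantaneous I/Q envelope to a supply voltage via a shaping table."""
    envelope = np.sqrt(i ** 2 + q ** 2)                 # RF envelope from I/Q
    return np.interp(envelope, env_points, vcc_points)  # shaping-table lookup

# Hypothetical shaping table: hold a minimum Vcc, then track the envelope
env_points = np.array([0.0, 0.2, 0.5, 1.0])   # normalized envelope
vcc_points = np.array([1.0, 1.2, 2.5, 4.5])   # supply voltage, volts

i = np.array([0.05, 0.40, 0.70])
q = np.array([0.05, 0.30, 0.70])
print(supply_drive(i, q, env_points, vcc_points))
```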

Envelope tracking is a downstream solution to power amplifier efficiency, joining previous upstream techniques such as crest factor reduction and macro-scale approaches such as digital predistortion. To a great extent, all rely on sophisticated algorithms implemented in fast DSP.

That’s where Keysight’s design and test tools come in. You can find a collection of application notes and other information at www.keysight.com/find/ET.

 

With envelope tracking you can now turn your amplifiers up to eleven when you need to, and still have a battery that lasts all day.

Originally posted Feb 2, 2015

 

Sometimes I need to be reminded to take my own advice

Recently, I’ve been looking into measuring spurious signals and the possibility of using periodic calibration results to improve productivity. I’ll share more about that in a future post, but for now it seemed useful to summarize what I’ve learned—or re-learned—about new and traditional ways to measure spurs.

Spur measurements can be especially time-consuming because they’re usually made over wide frequency ranges and require high sensitivity. Unlike harmonics, spur locations are typically not known beforehand so the only choice is to sweep across wide spans using narrow resolution bandwidths (RBWs) to reduce the analysis noise floor. With spurs near that noise floor, getting the required accuracy and repeatability can be a slow, tedious job.

An engineer experienced in optimizing these measurements reminded me of advice I’ve heard—and shared—before: Don’t measure where you don’t need to, and don’t make measurements that don’t matter.

The first “don’t” is self-explanatory. The frequency spectrum is wide, but the important region is mercifully much narrower—and we should enjoy every possible respite from tedium.

The second “don’t” is less obvious. It’s a reminder to begin with a careful look at the DUT and which measurements are required, and how good those measurements need to be. For example:

  • Do you need to measure specific spur frequencies and amplitudes, or is a limit test sufficient?
  • How much accuracy and variance are acceptable? What noise floor or signal/noise and averaging are needed to achieve this?
  • Are the potential spurs CW? Modulated? Impulsive?

The answers will help you define an efficient test plan and select the features in a signal analyzer that dramatically improve spur measurements.

One especially useful feature is the spurious measurement application. It allows you to build a custom set of frequency ranges, each with optimized settings for RBW, filtering, detectors, etc. You measure only where needed, as shown below.

With the measurement application, you can set up multiple analysis ranges and optimize the settings for each. Measurements are made automatically, with pass/fail limit testing.

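The range-table concept is easy to sketch in a few lines. The field names and limits below are illustrative only, not the measurement application's actual parameters; in practice you would build the table through the application's range editor or its remote interface.

```python
# Illustrative spur range table: measure only where needed, with
# per-range settings and limits. All values are hypothetical.
RANGES = [
    # (start Hz, stop Hz, RBW Hz, detector, limit dBm)
    (9e3,  1e6, 1e3,   "peak", -70.0),
    (1e6,  1e9, 10e3,  "peak", -60.0),
    (1e9,  3e9, 100e3, "peak", -55.0),
]

def check_spurs(measured_spurs):
    """Apply per-range pass/fail limits to (freq_hz, ampl_dbm) results."""
    failures = []
    for freq, ampl in measured_spurs:
        for start, stop, rbw, det, limit in RANGES:
            if start <= freq <= stop and ampl > limit:
                failures.append((freq, ampl, limit))
    return failures

# Hypothetical results from one automated pass over all three ranges:
spurs = [(450e6, -63.2), (1.2e9, -52.1), (2.4e9, -58.8)]
for freq, ampl, limit in check_spurs(spurs):
    print(f"FAIL: {freq/1e6:.1f} MHz at {ampl:.1f} dBm (limit {limit:.1f} dBm)")
```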

This application is helpful in ATE environments, offloading tasks from the system CPU. It’s also worth considering in R&D because, unfortunately, spur measurements usually have to be repeated… many, many times.

Some recent innovations in digital filters and DSP have dramatically improved sweep rates for narrow RBWs in signal analyzers such as Keysight’s PXA. With sufficient processing, RBW filters can now be swept up to 50 times faster without compromising amplitude or frequency accuracy. The benefit is greatest for RBWs of several to several hundred kilohertz, as is typical for spur measurements (see this recent app note).

One factor that can muck up the works is the presence of non-CW spurs. For example, TDMA schemes often produce time-varying spurs. This violates key assumptions underlying traditional search techniques and makes it much tougher to detect and measure spurs.

Fortunately, signal analyzers have evolved to handle these challenges. In TDMA systems, sync or trigger signals are often available to align gated sweeps that analyze signals only during the desired part of the TDMA frame.
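As a sketch of what the remote setup might look like, here is a gated-sweep configuration over instrument I/O using the pyvisa library. The SCPI mnemonics follow Keysight X-Series conventions but are an assumption on my part, and the address is made up; check your analyzer's programming guide before relying on them.

```python
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.1.100::inst0::INSTR")  # hypothetical address

# Gate the sweep so the analyzer measures only during one TDMA time slot.
sa.write(":SENS:SWE:EGAT:SOUR EXT1")    # gate from an external frame trigger
sa.write(":SENS:SWE:EGAT:DEL 50e-6")    # delay from trigger to the slot of interest
sa.write(":SENS:SWE:EGAT:LENG 450e-6")  # keep the gate open for one slot
sa.write(":SENS:SWE:EGAT:STAT ON")      # enable gated sweeping

print(sa.query(":SENS:SWE:EGAT:STAT?")) # confirm the gate is active
```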

Perhaps the most powerful tool for finding impulsive or transient spurs is the real-time analyzer, which can process all the information in a frequency span without gaps and produce a spectrogram or density display that reveals even the most intermittent signals.

The best tool for precisely measuring time-varying spurs is vector signal analyzer (VSA) software. The software uses RF/microwave signal analyzers, oscilloscopes, etc., to completely capture signals for any type of frequency-, time- and modulation-domain analysis. Signals can be recorded for flexible post-processing as a way to accurately measure all their characteristics from a single acquisition, solving the problem of aligning measurement to the time-varying nature of the signal.
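The capture-once idea is easy to demonstrate with generic tools. The sketch below uses a synthetic I/Q record in place of a real capture: a pulsed spur turns on partway through the record, and the same block of samples yields both a time-domain view and a spectrogram.

```python
import numpy as np
from scipy import signal

fs = 1e6                                   # sample rate, Hz
t = np.arange(0, 0.1, 1/fs)                # 100 ms "capture"
rng = np.random.default_rng(1)
iq = 0.001 * (rng.standard_normal(t.size) + 1j*rng.standard_normal(t.size))
burst = (t > 0.04) & (t < 0.06)            # spur present only in this window
iq = iq + burst * 0.1 * np.exp(2j*np.pi*200e3*t)

# Time domain: the envelope shows when the spur is on.
env_db = 20 * np.log10(np.abs(iq) + 1e-12)
print(f"Envelope peak at t = {t[np.argmax(env_db)]*1e3:.1f} ms")

# Frequency vs. time: a spectrogram from the same single acquisition.
f, seg_t, Sxx = signal.spectrogram(iq, fs=fs, nperseg=1024,
                                   return_onesided=False)
fi, ti = np.unravel_index(np.argmax(Sxx), Sxx.shape)
print(f"Strongest energy near {f[fi]/1e3:.0f} kHz at t = {seg_t[ti]*1e3:.1f} ms")
```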

 

It’s no secret that spur detection and measurement are both difficult and essential, but with the right advice and the right equipment you can minimize the tedium.


Does RF Noise have Mass?

Posted by benz Oct 13, 2016

Originally posted on Jan 21, 2015

I’m not usually a fan of noise, but there are exceptions

My provisional assumption is that noise does indeed have mass. I support that notion with the following hare-brained chain of reasoning: the subject of noise has a gravity-like pull that compels me to write about it more than anything else. Because gravity comes from mass, noise therefore must have mass. Voilà!

My previous posts dealing with noise have all been about minimizing it: averaging away its effects, estimating the errors it causes, predicting and then subtracting noise power, and so on. Sometimes I just complain about it or wax philosophical.

I even created a webcast titled “conquering noise” but, of course, that was a bit of a conceit. Noise is a fundamental natural phenomenon and it is never vanquished. Instead, I have mentioned that noise can be beneficial in some circumstances—and now it’s time to describe one.

A few years ago, a colleague was using Keysight’s Advanced Design System (ADS) software to create 10 MHz WiMAX MIMO signals that included impairments. He started by adding attenuation to one transmitter, but after finding little or no effect on modulation quality, he added a 2 MHz bandpass filter to one channel, as shown below.

Surely a filter that removed most of one channel would confound the demodulator. Comparing the spectra of the two channels, the effect is dramatic.


Spectrum of two simulated WiMAX signals with 10 MHz bandwidth. The signal in the bottom trace has been modified by a 2 MHz bandpass filter.

 


All that filtering in one channel had no significant effect on modulation quality! The VSA software he was using—as an embedded element in the simulation—showed the filter in the spectrum and the channel frequency response, but in demodulation it caused no problem.

He emailed a recording of the signal and I duplicated his results using the VSA software on my PC. I then told him he could “fix” the problem by simply adding some noise to the signals.

This may seem like an odd way to solve the problem, but in this case the simulation didn’t match reality in the way it responded to drastic channel filtering. The mismatch was due to the fact that the simulated signals were noise-free, and the channel equalization in demodulation operations could therefore perfectly correct for filter impairments, no matter how large they were.
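Here is a hedged numpy sketch of the effect, not the ADS simulation itself. With OFDM-style per-subcarrier equalization (WiMAX is OFDM-based), a training-based channel estimate inverts even a 30 dB notch exactly when the capture is noise-free, while a tiny amount of noise makes the deeply attenuated subcarriers come back largely as amplified noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256                                  # subcarriers

# A severe channel: 30 dB of attenuation across 80% of the subcarriers.
channel = np.ones(n, dtype=complex)
channel[n//10 : n - n//10] *= 10 ** (-30/20)

def qpsk(size):
    return (rng.choice([-1, 1], size) + 1j*rng.choice([-1, 1], size)) / np.sqrt(2)

def evm_percent(noise_rms):
    noise = lambda: noise_rms * (rng.standard_normal(n) + 1j*rng.standard_normal(n))
    train = qpsk(n)
    est = (train * channel + noise()) / train   # channel estimate from training
    data = qpsk(n)
    eq = (data * channel + noise()) / est       # zero-forcing equalization
    err = eq - data
    return 100 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(data)**2))

print(f"EVM, noise-free:   {evm_percent(0.0):.6f} %")
print(f"EVM, slight noise: {evm_percent(0.001):.2f} %")
```

With zero noise the equalizer recovers the constellation perfectly; with noise roughly 57 dB below the signal, the same equalizer produces several percent EVM, which is closer to what a real receiver would see.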

In many ways this is the opposite of the adaptive equalization used in real-world situations with significant noise, where I have previously cautioned you to be careful what you ask for. When there is no noise, you can correct signals as much as you want, without ill effects.

Of course, “no noise” is not the world we live in or design for, and as much as I hate to admit it, there are times when it’s beneficial to add some.

 

There are certainly other uses for noise. Those also have that peculiar massive attraction and I know I’ll write about it again soon.


RF, Protocol and the Law

Posted by benz Oct 13, 2016

Originally posted on Jan 12, 2015

 

RF engineering and a layer hierarchy that extends all the way to the spectral authorities

In our day jobs we focus mainly on the physical layer of RF communications, and there is certainly enough challenge there for a lifetime of productive work. The analog and digitally modulated signals we wrestle with are the foundation of an astonishing worldwide expansion of communications.

Of course, the physical layer is just the first of many in modern systems. Engineering success often involves interaction with higher layers that are commonly described in diagrams such as the OSI model shown below.

 

The Open Systems Interconnection (OSI) model uses abstraction layers to build a conceptual model of the functions of a communications system. The physical layer is the essential foundation, but many other layers are needed to make communication efficient and practical. (Image from Wikimedia Commons)


The OSI model is a good way to build an understanding of systems and to figure out how to make them work, but sometimes we need to add even more layers to see the whole picture. A good example comes from a recent event that caught my eye.

Many news outlets reported that some hotels in one chain in the US were “jamming” private Wi-Fi hotspots to force convention-goers to use the hotel’s for-fee Wi-Fi service. The term jamming grabbed my attention because it sounded like a very aggressive thing to do to the 2.4 GHz ISM band, which functions as a sort of worldwide public square in the spectral world. I figured regulatory authorities such as our FCC would take a pretty dim view of this sort of thing.

As is so often the case, many general news organizations were being less than precise. The hotel chain was actually blocking Wi-Fi rather than jamming it. This is something that happens not at the physical layer—RF jamming—but a few layers higher.

According to the FCC, hotel employees “had used containment features of a Wi-Fi monitoring system” to prevent people from connecting to their own personal Wi-Fi networks. Speculation from network experts is that the Wi-Fi monitoring system could be programmed to flood the area with de-authentication or disassociation packets that would affect access points and clients other than those of the hotel.

It may not surprise you that the FCC also objected to this use of the ISM band, and the result was a $600,000 settlement with the hotel to resolve the issue. The whole RF story thus extends the OSI model to at least a few more levels, including the vendor of the monitoring system, the hotel management and—at least one layer above them!—the FCC itself.

I suppose you can insert some legislative and political layers in there somewhere if you want, but I’m happy to focus my effort on wrangling the physical layer and those near it. Keysight signal generators and signal analyzers are becoming more capable above the physical layer, with features such as Wireless Link Analysis to perform layer 2 and layer 3 analysis of LTE-FDD UL and DL signals.

 

In the end, I hope there are ways to resolve these issues and give everyone fair access to the unlicensed portions of our shared spectrum. I dread a situation in which a market emerges for access points or hotspots with counter-blocking technology and a resulting arms race that could leave us all without access.