
Make better measurements by using all the information you can get

Some of the things we do as part of good measurement practice fit into a larger context, and understanding that context can help you save time and money. Sometimes it will even save your sanity when you’re trying to meet impossible targets, but that’s a topic for another day.

Take, for example, the simple technique for setting the best input level to measure harmonic distortion with a signal analyzer. The analyzer itself contributes distortion that rises with the signal level at its input mixer, and that distortion can interfere with that of the signal under test. To avoid this problem, the analyzer’s attenuator—sitting between the input and the mixer—is adjusted to bring the mixer signal level down to a point where analyzer distortion can be ignored.
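
The arithmetic behind this behavior is worth a quick look: the analyzer’s second-order distortion falls one dB for every dB of reduction in mixer level, a relationship captured by its second harmonic intercept (SHI). Here’s a minimal sketch, assuming an SHI of +45 dBm; that value is illustrative, not a specification for any particular analyzer.

```python
# Analyzer-generated second harmonic (in dBc, relative to the fundamental)
# falls dB-for-dB with mixer level: distortion_dBc = mixer_level - SHI.
shi_dbm = 45.0                           # assumed second harmonic intercept
for mixer_level_dbm in (-10, -20, -30):
    distortion_dbc = mixer_level_dbm - shi_dbm
    print(f"Mixer level {mixer_level_dbm} dBm -> analyzer 2nd harmonic {distortion_dbc:.0f} dBc")
```

Each 10 dB of added attenuation drops the analyzer’s own second harmonic by 10 dB relative to the signal, which is why stepping the attenuator quickly reveals whose distortion you are measuring.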

It’s a straightforward process of comparing measured distortion at different attenuation values, as shown below.

Second harmonic distortion in a signal analyzer caused by overdriving the analyzer input mixer

Adjusting analyzer attenuation to minimize its contribution to measured second harmonic distortion. Increasing attenuation (blue trace) improves the distortion measurement, reducing the distortion contributed by the analyzer.

When further increases in attenuation produce no improvement in measured distortion, the analyzer and measurement configuration are optimized for the specific signal in question. This is a deceptively powerful result, customized beyond what could be achieved from published specifications alone.
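
Finding this point is easy to automate. Below is a minimal sketch, assuming a Keysight X-Series analyzer reachable over VISA with pyvisa; the resource address, test frequencies, abbreviated SCPI commands, and 0.5 dB stopping threshold are illustrative assumptions, not a tested recipe.

```python
# Step the input attenuator until the measured 2nd harmonic stops improving,
# i.e., until the analyzer's own distortion no longer dominates.
import pyvisa

FUNDAMENTAL_HZ = 1.0e9                # signal under test (assumed)
HARMONIC_HZ = 2 * FUNDAMENTAL_HZ

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")  # hypothetical address
sa.write(":INIT:CONT OFF")            # single-sweep mode

def harmonic_level_dbm(atten_db):
    """Measure the second-harmonic level at one attenuation setting."""
    sa.write(f":POW:ATT {atten_db}")  # set input attenuation
    sa.write(":INIT:IMM;*WAI")        # take a fresh sweep and wait
    sa.write(f":CALC:MARK1:X {HARMONIC_HZ}")
    return float(sa.query(":CALC:MARK1:Y?"))

previous = harmonic_level_dbm(0)
for atten_db in range(2, 32, 2):      # 2 dB steps
    level = harmonic_level_dbm(atten_db)
    if level > previous - 0.5:        # no real improvement: stop here
        print(f"Optimized at about {atten_db - 2} dB attenuation")
        break
    previous = level
```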

Of course, the reason not to use a very low mixer level (i.e., lots of attenuation) is to avoid an excessively high noise level in the measurement, which would obscure low-level signals. Indeed, if the goal is to reveal spurious or other signals, less attenuation will be used, trading distortion performance for improved signal/noise ratio (SNR).

Setting attenuation to optimize one type of performance is standard practice, and it’s useful to understand this in the larger context of adding information to customize the measurement to your priorities. This approach has a number of benefits that I’ve summarized previously in the graphic below.

Spectrum measurement as a tradeoff between cost, time, and information. Information can be added to improve the cost-time tradeoff.

A common tradeoff is between cost, time, and information. Information is an output of the measurement process, but it can also be an input, improving the process.

The interactions and tradeoffs are worth a closer look. One possible choice is to purchase an analyzer with higher performance in terms of distortion, noise, and accuracy. Alternatively, you can compensate for the increased noise (due to the high attenuation value) by using a narrower resolution bandwidth (RBW). You’ll get good performance in both distortion and noise floor, but at the cost of increased measurement time.
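
Two rules of thumb quantify that last tradeoff: the displayed noise floor changes by 10 log of the RBW ratio, while swept measurement time grows roughly as span divided by RBW squared. Here’s a quick sketch, with the span and settling factor k chosen purely for illustration (real analyzers vary, and FFT-based sweeps are often much faster):

```python
# Noise floor vs. sweep time as RBW narrows, using common swept-analysis
# rules of thumb; k and the span are assumptions for illustration.
import math

span_hz = 100e6
k = 2.0                                   # swept-filter settling factor (assumed)

for rbw_hz in (100e3, 10e3, 1e3):
    noise_delta_db = 10 * math.log10(rbw_hz / 100e3)  # vs. a 100 kHz reference
    sweep_time_s = k * span_hz / rbw_hz**2
    print(f"RBW {rbw_hz/1e3:5.0f} kHz: noise {noise_delta_db:+5.1f} dB, sweep ~{sweep_time_s:.3g} s")
```

By these estimates, a 10 dB noise-floor improvement costs roughly a hundredfold increase in sweep time: exactly the cost-time-information tradeoff pictured above.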

That alternative approach involves using your engineering knowledge to add information to the process, perhaps in ways you haven’t considered. The optimization of distortion and noise described above is a good example. Only you know how to best balance the tradeoffs as they apply to your priorities—and adjusting the measurement setup means adding information that the analyzer does not have.

There are other important sources of information to add to this process, ones that can improve the overall balance of performance or speed without other compromises. Some involve information you possess, and some use information from Keysight and from the analyzer itself.

I’ll describe them in a part 2 post here next month. In the meantime you can find updated information about spectrum analysis techniques at our new page Spectrum Analysis At Your Fingertips. We’ll be adding information there and, as always, here at the Better Measurements RF Test Blog.

It can be both. A look at the engineering behind some beautiful footage

We’ll be back to our regular RF/microwave test focus shortly, but sometimes at the holidays I celebrate by taking a look at other examples of remarkable engineering. I have a strong interest in space technology, so you can imagine where this post is going.

Let’s get right to the fun and the art. Then to the fascinating engineering behind it. Even if you didn’t know what you were watching, you’ve probably seen the footage of staging (i.e., stage separation and ignition) of the Apollo program Saturn V moon rockets. Some have realized the artistic potential and have added appropriate music. Here’s a recent example: I suggest you follow the link and enjoy the 75-second video.

Welcome back. Now let’s discuss a little of the amazing engineering and physics involved.

Staging is an elaborately choreographed event, involving huge masses, high speeds, and a very precise time sequence. By the time you see the beginning of stage separation at 0:15 in the linked video, the first-stage engines have been shut down and eight retrorockets in the conical engine fairings at the base of the first stage have been firing for a couple of seconds. They’ve got a combined thrust of more than 700,000 pounds and are used with pyrotechnics to disconnect and separate the first stage from the second.

At this point the rocket is traveling about 6,000 mph, more horizontally than vertically. The first stage looks like it’s falling back to Earth, but its vertical velocity is still about 2,000 mph. It’s almost 40 miles high and in a near-vacuum, and will coast upward about another 30 miles over the next 90 seconds.
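
Those coast figures check out with simple ballistic kinematics; here’s a back-of-the-envelope sketch, assuming negligible drag in the near-vacuum and gravity of about 9.3 m/s² at that altitude.

```python
# Ballistic coast of the spent first stage: time to apex and altitude gained.
v0_mps = 2000 * 0.44704      # ~2,000 mph vertical velocity, in m/s
g = 9.3                      # m/s^2, slightly reduced ~40 miles up (approximate)

t_apex_s = v0_mps / g                        # time to coast to peak altitude
climb_miles = v0_mps**2 / (2 * g) / 1609.34  # additional altitude gained

print(f"Coast time to apex: ~{t_apex_s:.0f} s")      # ~96 s
print(f"Additional climb: ~{climb_miles:.0f} miles") # ~27 miles
```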

In the meantime, the five engines of the second stage will be burning to take the spacecraft much higher and faster, nearly to orbit. Before they can do this, the temporarily weightless fuel in the stage must be settled back to the bottom of the tanks so it can be pumped into the engines. That’s the job of eight solid-rocket ullage motors—it’s an old and interesting term, look it up!—that fire before and during the startup of the second-stage engines.

Those engines start up at around 0:20, an event that’s easy to miss. They burn liquid hydrogen and liquid oxygen, and the combustion product is just superheated steam. For the rest of this segment you’re looking at the Earth through an exhaust plume corresponding to more than one million pounds of thrust, and it’s essentially invisible. That clear rocket exhaust is important in explaining the orange glow in the image below, at about 0:36 in the video.

Still frame from film of Apollo Saturn V staging: interstage release

Aft view from Saturn V second stage, just after separation of interstage segment. The five Rocketdyne J2 engines burn liquid hydrogen and oxygen, producing a clear exhaust. Image from NASA.

The first and second stages have two planes of separation, and the structural cylinder between them is called the interstage. It’s a big thing, 33 feet in diameter and 18 feet tall. It’s also heavy, since the now-spent ullage motors are mounted there, so it’s the kind of thing you want to leave behind when you’re done with it. Once the second stage is firing correctly and its trajectory is stable, pyrotechnics disconnect the interstage.

That’s what’s happening in the image above, and perhaps the most interesting thing about this segment is how the interstage is lit up as it’s impacted not once but twice by the hot exhaust plume. I saw this video many years ago and always wondered what the bursts of fire were; now I know.

The video shows one more stage separation, with its own remarkable engineering and visuals. At 0:43 the view switches to one forward from that same second stage, and the process of disconnection and separation happens again. The pyrotechnics, retrorockets, and ullage motors are generally similar. The fire from the retros is visible as a bright glow from 0:44 and when it fades, at about 0:46, the firing of the three ullage motors is clearly visible.

The next fascinating thing happens at 0:47, when the single J2 engine ignites right before our eyes. We get to see right down the throat of a rocket engine as it lights up and achieves stable combustion. Once again, the exhaust plume is clear, and only the white dot of the flame in its combustion chamber is visible.

The third stage accelerates away, taking the rest of the vehicle to orbit. Remarkably, for an engine using cryogenic fuels, this special J2 can be restarted, and that’s just what will happen after one or two Earth orbits. After vehicle checkout, the second burn of this stage propels the spacecraft towards its rendezvous with the Moon.

I could go on—and on and on—but by now you may have had enough of this 50-year-old technology. If not, you can go elsewhere, online and offline, and find information and images of all kinds. The links and references in my previous Apollo post are a good start.

A single analyzer can cover 3 Hz to 110 GHz. Is that the best choice for you?

I haven’t made many measurements at millimeter frequencies, and I suspect that’s true of most who read this blog. All the same, we follow developments in this area—and I can think of several reasons why.

First, these measurements and applications are getting a lot of press coverage, ranging from stories about the technology to opinion pieces discussing the likely applications and their estimated market sizes. That makes sense, given the expansion of wireless and other applications to higher frequencies and wider bandwidths. Though these applications are still a comparatively small fraction of sales, more money and engineering effort will be devoted to solving the abundant design and manufacturing problems associated with these frequencies.

I suppose another reason involves a kind of self-imposed challenge or discipline, reminiscent of Kennedy’s speech about going to the moon before 1970, where he said the goal would “serve to organize and measure the best of our energies and skills.” Getting accurate, reliable measurements at millimeter frequencies will certainly challenge us to hone our skills and pay close attention to factors that we treat more casually at RF or even microwave.

Of course, this self-improvement also has a concrete purpose if we accept the inevitability of the march to higher frequencies and wider bandwidths. For some, millimeter skills are just part of the normal process of gathering engineering expertise to stay current and be ready for what’s next.

In an earlier post I mentioned the new N9041B UXA X-Series signal analyzer and focused attention on the 1 mm connectors used to get coaxial coverage of frequencies to 110 GHz. In this post I’ll summarize two common signal analyzer choices at these frequencies and some of their tradeoffs.

Two types of solutions are shown below. The first practical millimeter measurements were made with external mixers, but signal analyzers with direct coverage to the millimeter bands are becoming more common.

M1970 & M1971 Smart external waveguide mixers (left) and N9041B UXA X-Series signal analyzer (right)

External mixing (left) has long been a practical and economical way to make millimeter frequency measurements. Signal analyzers such as the new N9041B UXA (right) bring the performance and convenience of a single-instrument solution with continuous direct coverage from 3 Hz to 110 GHz.

My post on external mixing described how the approach effectively moves the analyzer’s first mixer outside of the analyzer itself. The analyzer supplies an LO drive signal to the mixer and receives—sometimes through the same cable—a downconverted IF signal to process and display. This provides a lower-cost solution, letting analyzers with lower frequency coverage handle millimeter signals through the mixer’s waveguide input.

In use, this setup is more complicated than a one-box solution, but innovations such as smart mixers with USB plug-and-play make the connection and calibration process more convenient. An external mixer can be a kind of “remote test head,” extending the analyzer’s input closer to the DUT and simplifying waveguide connections or the location of an antenna for connectorless measurements.

The drawbacks of external mixers include their banded nature (i.e., limited frequency coverage) and lack of input conditioning such as filters, attenuators, and preamps. In addition, their effective measurement bandwidth is limited by the IF bandwidth of the host analyzer, a problem for the very wideband signals used so often at millimeter frequencies. Finally, external mixing often requires some sort of signal-identification process to separate the undesirable mixer products that appear as false or alias signals in the analyzer display. Signal identification is straightforward with narrowband signals, but can be impractical with very wide ones.
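
A little arithmetic shows where those false responses come from. A harmonic mixer produces an IF output whenever the input tone and some harmonic of the LO differ by exactly the analyzer IF, but the display maps every response as if it came from the intended harmonic. The sketch below uses an assumed IF and harmonic numbers purely for illustration:

```python
# One real 62 GHz tone produces several apparent signals on the display.
f_sig = 62.0e9        # actual input tone (assumed)
f_if = 322.5e6        # analyzer IF (assumed)
n = 8                 # intended mixer harmonic for this band (assumed)

for m in (8, 10):                         # responding harmonics (illustrative)
    for sign in (+1, -1):
        f_lo = (f_sig - sign * f_if) / m  # LO frequency that produces a response
        displayed = n * f_lo + f_if       # where the analyzer plots it
        tag = "true signal" if (m == n and sign == +1) else "false response"
        print(f"M={m}, {'+' if sign > 0 else '-'}IF: shown at {displayed/1e9:.4f} GHz ({tag})")
```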

Millimeter signal analyzers offer a measurement solution that is better in almost all respects, but is priced accordingly. They provide direct, continuous coverage, calibrated results and full specifications. Their filters and processing eliminate the need for signal identification, and their input conditioning makes it easier to optimize for sensitivity or dynamic range.

The new N9041B UXA improves on current one-box millimeter solutions in several ways. Continuous coverage now extends to 110 GHz and—critically—analysis bandwidths are extended to 1 GHz internally and to 5 GHz or more with external sampling.

Sensitivity is another essential for millimeter frequency measurements. Power is hard to come by at these frequencies, and the wide bandwidths used can gather substantial noise, limiting SNR. The displayed average noise level (DANL) of the UXA is better than -150 dBm/Hz all the way to 110 GHz and, along with careful connections, should yield excellent spurious and emissions measurements.
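
To see how quickly bandwidth eats into that sensitivity, integrate the noise density over the analysis bandwidth; here’s a quick sketch with illustrative bandwidth values:

```python
# Total noise power grows as 10*log10(bandwidth) over the noise density.
import math

danl_dbm_per_hz = -150.0
for bw_hz in (1e6, 100e6, 1e9):
    noise_dbm = danl_dbm_per_hz + 10 * math.log10(bw_hz)
    print(f"Noise power in {bw_hz/1e6:6.0f} MHz: {noise_dbm:6.1f} dBm")
```

At a 1 GHz analysis bandwidth the noise floor is around -60 dBm, which is why every dB of DANL matters for wideband millimeter work.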

Millimeter measurements, especially wideband ones, will continue to be demanding, but the tools are in place to handle them as they become a bigger part of our engineering efforts.