Better Measurements: The RF Test Blog

September 2016

Loose Nut Danger

Posted by benz Sep 23, 2016

Originally posted Jan 27, 2013

 

Lurking on your workbench

Some hazards come from unexpected directions and find ways to bypass safety measures.  One example is mechanical connector damage to high frequency cables and adapters, especially when SMA and precision 3.5mm connectors are used in an environment where 2.4mm and 1.85mm are also used.  As usual, a picture is worth a thousand words:

The picture of the 3.5 mm-to-2.4 mm adapter at the top center explains things best.  Note that the outer shoulder length of the outer conductor is different and that the thread pitches are different as well.  This ensures that the connector nuts won’t fit if the wrong connectors are used.  So far, so good.

The danger is easy to see if you examine the collets (female center connectors) at both ends of the adapter, shown in the pictures at the upper left and upper right.  Note the differences in the collet thickness and the acceptable center pin size for the two connectors.  If the center pin of the SMA connector at the lower left (intermatable with precision 3.5 mm) is inserted in the collet of the 2.4mm connector at the upper right, the 2.4mm connector will be ruined.  The damage would be the same if the male connector used was 3.5mm.

But of course the outer nut prevents that from happening.  Or does it?

The safety measure implemented through different connector dimensions and thread pitches (as described above) only works if the nut on the male connector is restrained at the end of the cable and not allowed to slide back or away from the end connector.  As you can see from the picture at the lower right, that is not the case with this particular bit of semi-rigid coaxial cable and SMA connector.  The other dimensions of the outer connector are compatible, and with a little force the SMA connector can fulfill its damaging destiny.

One piece of good fortune for the unlucky engineer is that female 2.4 mm and 1.85 mm connectors are not generally present on instrument front panels.  Male connectors are used instead (details in an upcoming post) and so the damage is usually limited to adapters and cables.  They’re expensive enough on their own, but much less costly than replacing instrument panel connectors and recalibrating.

Connector nuts on 3.5 mm hardware are almost always restrained but the practice is not universal, and is much less common on inexpensive SMA hardware.  And of course the restraint (sometimes a snap ring) can come adrift on almost any connector.

Thus this hazard is present in any environment where 2.4 mm or 1.85 mm connectors are used, including most millimeter-wave and some microwave applications.

You get bonus hazard detection points if you noticed the additional problem with the slightly extended and bent center pin of the SMA connector at the lower left.

 

This high frequency hardware is expensive and somewhat delicate, and damage can hurt performance even when connectors aren’t destroyed outright.  Gear gets even more expensive and delicate as operating frequencies get higher, so be careful out there!

Originally posted Jan 30, 2013

 

Even more Gaussian than Gaussian

Sometimes it’s not just a matter of analog vs. digital tradeoffs: a technology reaches the stage where digital approaches are simply better overall.  This tends to happen when digital technologies have progressed to the point where cost and processing performance are no longer limiting factors.  Such is the case with spectrum/signal analyzers and digital IF (resolution bandwidth) filters, as illustrated below.

Comparing shape of analog and digital filters in a spectrum analyzer

The black trace is taken from a traditional swept spectrum analyzer with analog Gaussian resolution bandwidth (RBW) filters.  They’re also sometimes referred to as synchronously-tuned filters.  The shape is called Gaussian, though it’s only the very middle/top of the passband that’s especially Gaussian in shape.  The skirts of the filter make it look rather more triangular overall.

These partly-Gaussian filters were a good choice for swept spectrum analyzers for several reasons.  The Gaussian shape at the top was narrow and provided good selectivity for closely-spaced signals of similar amplitude, allowing spectrum analyzers to do one of their main jobs (separating and measuring such signals) well.  In addition, the filters could be swept at a reasonable rate of RBW²/2 Hz/second with only modest frequency and amplitude errors due to the effects of sweeping.

As implemented, however, these filters also had drawbacks.  To minimize frequency and amplitude errors and meet all specifications they were often swept at about RBW²/8 Hz/second, slowing many measurements considerably.  The shape factor of the filters (the ratio of their 60 dB to 3 dB bandwidths) was around 11:1.  This is a modest figure and means that their selectivity for separating closely-spaced signals of very different amplitudes is limited.  Thus when measuring such signals it was often necessary to use a much narrower RBW and suffer dramatically longer sweep times, which grew as the square of the RBW reduction.  Wide-span sweeps with narrow RBWs such as those used for spur searches could be painfully long.
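That square-law relationship is easy to underestimate, so here’s a minimal Python sketch of the arithmetic (the function and the example span are my own illustration, not any instrument’s specification):

    def sweep_time(span_hz, rbw_hz, k=8.0):
        # Approximate swept-analyzer sweep time from rate = RBW^2/k Hz/second;
        # k = 2 for the nominal analog rate, k = 8 for the derated, spec-grade rate.
        return k * span_hz / rbw_hz ** 2

    # A 1 GHz spur search: narrowing the RBW by 10x costs 100x in sweep time.
    for rbw in (10e3, 1e3):
        print(f"RBW {rbw/1e3:4.0f} kHz -> about {sweep_time(1e9, rbw):,.0f} seconds")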

In addition analog RBW filters do not have precisely predictable or constant bandwidths.  This is not a particular problem for CW signals but introduces errors when the analyzers are used to measure noise or noise-like signals, where the measured values of signals are proportional to the actual (and varying) bandwidth of the RBW filters.  The bandwidth of some of these filters varied by about ±20% from unit to unit.

Digital technologies came to the rescue, first for narrower RBWs and lower frequency analyzers, in the late 1980s.  The digital filters had shape factors closer to 4:1, similar to the shape shown in red in the figure above.  This dramatically improved their ability to resolve closely-spaced signals and thus allowed wider (and therefore faster-sweeping) RBW filters to be used for equivalent measurements.

Sweep times could be further improved by a technique known as “oversweep” where the predictable dynamic effects of faster-than-normal sweeping could be corrected very well.  The combination of better selectivity, oversweep, and correction of dynamic effects without slowing sweep allows for measurements that can be dozens of times faster than with analog filters.  Techniques for sweep correction of digital filters continue to evolve, and engineers can expect even faster sweeps in the future.

The consistent and accurate bandwidth of digital filters also improves measurement of noise and noise-like signals, where accuracy depends on knowing equivalent noise bandwidth.

For RF/microwave applications the culmination of this trend was the introduction of the first microwave swept spectrum analyzer with an all-digital IF: the Agilent E4440A PSA spectrum analyzer, in late 2000.  Now Agilent’s entire X-Series signal analyzer line uses all-digital IF sections with digital filters.

 

Gaussian digital filters may be the best choice for general spectrum analysis but they’re not the only one.  Future posts will discuss filters for other types of signal analysis and the tradeoffs they involve.

Originally posted Feb 12, 2013

 

Except sometimes it is

Reducing measurement variance is important for accuracy and resolution, and some sort of averaging is common in many measurement situations.  In a spectrum or signal analyzer, for example, there are so many potential averaging processes that one or more are likely to be operating on your measurements even if you don’t actively select one.  I’ll have more detail in future posts, but in this one I’ll discuss something more fundamental:  log and linear averaging scales.

Specifically, when you’re averaging signal measurements should you (or your analyzer) compute the average of the log-scaled value of the signal or instead average the linear power values (Watts) and then express the result in dB?  The first approach averages the log-scaled signal or data, essentially averaging the dB values.  It’s the kind of averaging performed by traditional swept spectrum analyzers as “video” averaging, operating the same whether an analyzer uses an analog or digital IF section.

The alternative approach of averaging power first and then converting to dB corresponds to what a power sensor/power meter combination would do.  It expresses the energy or heating value of a signal, regardless of signal characteristics.

Both of these approaches have their advantages and the tradeoffs will be discussed in posts to come.  First it’s important to understand the difference between averaging the log and log-scaling the average.  The graphic below details one example:

For this non-CW signal the average of the log power does not equal the log of the average power

Here a square wave modulates a sine wave, switching its power equally between 1 mW and 4 mW, or between 0 dBm and 6 dBm.  The average power is 2.5 mW or 3.98 dBm, but if you instead average the dB measurements you get 3 dBm.

For CW signals the average of the log does equal the log of the average, and this example points to the problem with averaging dB or dBm readings for non-CW signals such as digital modulation, noise, or other dynamic signals.  The difference between the two averaging approaches is not constant but depends on signal statistics.
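If you want to see the difference for yourself, here’s a quick numeric check of the example above in Python:

    import numpy as np

    p_mw = np.array([1.0, 4.0, 1.0, 4.0])      # power samples from the square wave, mW
    p_dbm = 10 * np.log10(p_mw)                 # 0, 6, 0, 6 dBm

    log_of_avg = 10 * np.log10(p_mw.mean())     # average the power, then convert: 3.98 dBm
    avg_of_log = p_dbm.mean()                   # average the dB readings: 3.00 dBm
    print(f"log of average: {log_of_avg:.2f} dBm")
    print(f"average of log: {avg_of_log:.2f} dBm")

The nearly 1 dB difference here depends entirely on the signal’s statistics, which is exactly why the two averaging scales diverge for non-CW signals.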

However as mentioned above the video bandwidth filtering in swept spectrum analyzers normally averages the log readings.  That’s not a problem for the CW signals these analyzers were originally designed to measure, and corrections for signals with known statistics (such as Gaussian noise) are straightforward.  It can be a problem for many other signals, though, and modern signal analyzers (especially those from Agilent Technologies) do a good job of configuring their averaging for accurate results based on the type of measurement being made and other user settings.

 

Averaging is a rich and important topic in today’s world of complex and dynamic signals and stringent requirements for accuracy.  I’ll have more on this topic in posts to come.


What is Real-Time Analysis?

Posted by benz Sep 23, 2016

Originally posted Mar 1, 2013

 

One core concept to explain real-time measurements

“Real-time” in signal analysis is an important concept but it means different things to different engineers, and the common conflation with “real-time analyzers” can make explanations more complex than they need to be.  Here’s a starting point for a productive understanding of the whole thing:

In a modern RF analyzer with a digitally sampled IF section real-time means all signal samples are used in calculating measurement results.

Where results are calculated from a time record (block of samples) “T” and the time for calculations and display updates is “Calc” we can explain the situation with this diagram:

Real time analysis means that all signal samples are used to calculate results, or that the calculations are gap-free relative to the stream of signal samples.

If the analyzer frequency span (and therefore its sample rate) is increased as shown in the middle of the figure, the time length of the time record T is shortened proportionally.  At some wider frequency span the time record fills just as fast as the Calc time and the analyzer has reached its real-time bandwidth or RTBW.  Measurements wider than this frequency span will no longer be real-time.
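To make the relationship concrete, here’s a minimal sketch of the real-time condition in Python.  The FFT size and calculation time are assumed, illustrative values, not any analyzer’s specifications:

    # Gap-free operation requires a time record to fill no faster than it can be
    # processed: T = fft_size / sample_rate >= calc_time.
    fft_size = 1024          # samples per time record (assumed)
    calc_time = 2e-6         # seconds to calculate and display one result (assumed)

    max_sample_rate = fft_size / calc_time      # highest gap-free sample rate
    print(f"max real-time sample rate: {max_sample_rate/1e6:.0f} MS/s")
    # The real-time bandwidth (RTBW) is this sample rate scaled by the analyzer's
    # span-per-sample-rate factor (often ~0.8 for I/Q sampling); spans wider than
    # the RTBW force gaps between time records.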

However even when all samples are included in a calculation for this type of real-time analysis, some samples may be effectively lost due to a process called “windowing” that is applied to the time record.  That process, its benefits, and the use of overlap processing to counter its effects will be discussed in a future post.

One other type of real-time analysis is important here:  Time capture followed by post-processing.  In this alternate approach the samples are streamed to fast memory without gaps and without processing, providing a long capture buffer that can be post-processed an infinite number of times in an infinite number of ways to provide any type of gap-free results.  Real-time bandwidth is fully as wide as the IF, and real-time analysis duration is limited only by memory size.

Both these real-time approaches have powerful benefits and complementary tradeoffs.  The first one provides a virtually infinite duration of real-time results along with spectrum measurements that are always up to date.  The second approach provides complete measurement flexibility (including vector measurements and demodulation) and a solution to the windowing problem to allow full measurement of every sample.  It can thus be especially useful for transient and repeating signals.

 

Future posts will explore these tradeoffs in more detail, in the context of real world measurements.  I’ll discuss dedicated real-time analyzers, other tools that perform real-time analysis, and some that do both.

Originally posted Mar 18, 2013

 

Related concepts, though they don’t mean exactly the same thing

The previous post discussed real-time analysis mostly in general terms, focusing on the core concept of processing every signal sample for some type of result.  Before future posts get into detail on example measurements it’s worth taking a minute to look at real-time analysis as performed by the tools actually named real-time analyzers.

Ancient real-time history, non-RF

Real-time analyzers have been around at audio frequencies since the 1970s.  The earliest ones were implemented with analog technologies, applying signals to parallel filter banks for audio frequency analysis.  Frequency resolution was often limited to octave or 1/3-octave steps.

Digital technologies matured enough in the late 1970s and early 1980s to make benchtop real-time audio analyzers practical through fast Fourier transform (FFT) processing.  Resolution was modest by today’s standards, generally power-of-two values such as 128 or 256 points (FFT “bins”).  Nonetheless this was much better than earlier parallel-filter analyzers.

RF signal monitoring/surveillance tools

Real-time capability at RF was especially valuable for signal monitoring and surveillance applications, leading to early, specialized (and expensive!) solutions in this area.  Some solutions were eventually paired with real-time demodulators as well.  These high-performance solutions were capable of general RF analysis but remained niche products, not aimed at general spectrum analysis and thus not part of the toolkit of most RF engineers.

Real-time analyzers come to general RF analysis

The first real-time analyzer aimed at general-purpose RF measurements to gain traction was the RSA6000 from Tektronix, introduced a few years ago.  The analyzer popularized a density display, which Tektronix calls DPX, that provides a way to make good use of the enormous number of spectra produced by real-time hardware calculations.  Follow-on models from Tektronix have improved performance and widened the range of solutions in this area.  These real-time analyzers use heterodyne downconversion to an IF and are thus similar to traditional spectrum analyzers, but they are non-swept and exclusively FFT-based.  Wide spans are implemented through the use of stepped or stitched FFTs.

Agilent has recently introduced its own real-time spectrum analyzer as part of the N9030A PXA signal analyzers (www.keysight.com/find/rtsa).  Agilent’s real-time spectrum analyzer is an upgrade option for a mainstream signal analyzer, and thus supports swept spectrum analysis along with full vector signal analysis and digital demodulation.  Because it can be added to an existing signal analyzer platform it eliminates the need for a dedicated real-time analysis tool.  It also offers the widest bandwidth, best dynamic range, and best probability of intercept of available real-time spectrum analyzers.  Isn’t competition wonderful!

Defining these RF real-time analyzers

Today’s real time analyzers are implemented in slightly different ways and have a different balance of features but share common characteristics and benefits:

  • Measurements are gap free.  There is no dead time between acquisitions and all sampled data is processed.  The process can continue for an arbitrary duration.
  • Measurements and display updates are fast and continuous.  Real-time calculations demand that FFTs be performed in hardware using dedicated ASICs or FPGAs or both.  Thus the update rate is not subject to an instrument’s Windows task interruptions.
  • High-speed and/or data-dense displays make sense of all the measurement data.  Specialized displays are needed to make constructive use of the hundreds of thousands of spectra calculated every second.  Look for density or histogram displays, spectrograms, and fast power vs. time (PVT) traces; a sketch of the density idea follows this list.
  • Spectral or frequency mask triggers (FMTs) provide an alternative way to use the fast spectrum calculations and thus a trigger type with a new set of benefits for looking at agile signals or complex signal environments.
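To illustrate the density-display idea from the list above, here’s a minimal Python sketch: each incoming spectrum increments counters in an amplitude-by-frequency grid, so persistent signals and rare transients show up with different densities.  This is purely my own illustration, not DPX or any vendor’s actual algorithm.

    import numpy as np

    n_freq, n_amp = 801, 200
    density = np.zeros((n_amp, n_freq), dtype=np.uint32)   # hit counters
    amp_edges = np.linspace(-120.0, 0.0, n_amp + 1)        # amplitude bins, dBm

    def accumulate(spectrum_dbm):
        # Add one spectrum (one value per frequency bin) to the density map.
        rows = np.clip(np.digitize(spectrum_dbm, amp_edges) - 1, 0, n_amp - 1)
        density[rows, np.arange(n_freq)] += 1

    for _ in range(100_000):                               # simulated spectrum stream
        accumulate(-90.0 + 5.0 * np.random.randn(n_freq))
    # A display would map the counts to color or brightness, so infrequent
    # events remain visible as faint traces instead of being overwritten.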

Future posts will look at real applications for all this measurement horsepower.


Time-Gated Spectrum Measurements

Posted by benz Sep 23, 2016

Originally posted Mar 25, 2013

 

Knowing when to look and when to shut your eyes

Measuring a pulsed or bursted signal with a spectrum analyzer will yield a valid measurement but it may not be the measurement you’re looking for.  Pulsing a simple CW signal, for example, will produce a spectrum display with a nearly infinite number of sidebands whose spacing varies with the pulse period.  Similarly, the envelope of the sidebands will vary with the duty cycle of the pulse and can be used to determine average carrier amplitude.

The pulse spectrum is valuable information but in many cases the sidebands will interfere with detecting and measuring important signal characteristics such as spurious, intermodulation, adjacent channel power, etc.  The desired measurement for many R&D and compliance testing tasks is what the signal spectrum would be if the signal was not pulsed at all.  This measurement goal applies both to pulsed CW signals and to digital modulation that is in the form of bursts or frames.  That includes just about every digitally modulated signal these days!

Some answers to this measurement problem appeared in the late 1980s when digital control of spectrum analyzer local oscillators (some fully synthesized) and video data processing allowed them to choose exactly when to measure and when to “shut their eyes” and stop sweeping and/or ignore measurement results.  Later developments enhanced the triggering capabilities that are so critical to the success of time-gated measurements.

A newly published article from Agilent’s Bob Nelson describes how to optimize these time-gated measurements and explains their background.  The article, “Optimize Time Gating in Spectrum Analysis,” appears in the March 20, 2013 issue of Microwaves & RF magazine and is available at http://mwrf.com/test-amp-measurement/optimize-time-gating-spectrum-analysis.  It is the latest in a series by Bob, one of Agilent’s long-established measurement experts.

A different article describes recent work to add intelligent compensation for non-ideal LO behavior in signal analyzers, further improving the accuracy and convenience of these measurements.  LO dynamics have always been a challenge in achieving the same accuracy from gated measurements as from non-gated ones.  To solve these issues Agilent’s signal analyzers implement intelligent pre-sweep, post-sweep and LO back-up as shown below:

This timing diagram shows typical LO settling phenomena and their effects on the frequency output of the IF chain. These settling effects can cause measurement errors in some gated LO spectrum measurements.

You can find this article, “Bringing New Power and Precision to Gated Spectrum Measurements,” in the August 2007 issue of High Frequency Electronics magazine and at http://highfrequencyelectronics.com/Archives/Aug07/HFE0807_Zarlingo.pdf.

Two extra terms for better understanding:  “Gated video” is sometimes called “gated data” because the sweep is continuous but the measurement data is gated to display only what is valid from the gate interval.  “Gated LO” is sometimes called “gated sweep” because the analyzer’s sweep starts and stops with the gate.  Perfecting these starts and stops is described in the article and graphic above.

Lastly, there is one big condition for these measurement approaches (pulse spectrum and time gating) to produce valid results.  The signal under test must be repeating, with any time-varying characteristics repeating consistently pulse-to-pulse.  Violate this at your measurement peril!

 

Well, perhaps that’s a bit too restrictive.  We know that many signals these days are transient and some vary with modulation or multipath or delay spread or Doppler or antenna scanning.  We have plenty of tools and techniques to measure them precisely and they’ll be discussed in future posts.

Originally posted Apr 1, 2013

 

Negative infinity may be just a femtowatt away

As RF engineers we deal with log-scaled data and displays so often that they feel like the only natural way of doing things, and we take this type of scaling for granted.  We even tend to convert measures such as EVM that were originally linear (percent) into log (dB) and rename them as modulation error ratio (MER) or relative constellation error (RCE) etc.  The log scale often enhances visualization of different measurements to provide insight.

The log scaling (of the vertical axis) of a power spectrum display is a good example.  In RF measurements we often deal with signals whose amplitude varies really widely.  R E A L L Y widely.  Just try switching your spectrum/signal analyzer from log to linear sometime, and see how difficult it is to understand the relationships of signals or signal components such as spurs, harmonics, and intermodulation.

The log scale improves the dynamic range of displays, generally compressing the power axis to accurately represent large differences in amplitude at the expense of small (in relative terms) ones.

However as is so often the case, noise and measuring near it makes things tricky.  Experienced RF engineers are always on the alert when dwelling in the domain of the really small.  Consider the examples of noise floor measurements below:

The difference between adding or subtracting a small amount of power from a measurement can be very large on a log scale

For a realistic figure and a round number, I assume the average noise power is -150 dBm/Hz.  This is a power spectral density (PSD) of 1 femtowatt/kHz.  Or an attowatt/Hz!

The situation gets more interesting when you’re taking individual samples of this noise, or of small signals with it.  As it does so often, the random noise complicates things, and the two example pairs above show how.  In the first one, successive measurements of the same signal vary by equal amounts of power, ±0.75 fW.  The dB variations are quite different, however, at +2.4 dB and -6 dB.  The log and linear power averages are different as well, with the average of the dB readings in error by 1.8 dB from the true -150 dBm figure.

Things get curiouser and curiouser when you increase the sample-to-sample variation by just a little bit, to ±1.0 fW.  The positive dB variation increases modestly to +3 dB but the negative dB variation goes to negative infinity.  That will foul up your averages!
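Here’s the same thought experiment as a few lines of Python, with powers in femtowatts and dB values relative to the true 1 fW average:

    import numpy as np

    avg_fw = 1.0                                    # true average power, fW
    for delta in (0.75, 1.0):
        pair = np.array([avg_fw + delta, avg_fw - delta])
        with np.errstate(divide='ignore'):          # allow log10(0) -> -inf
            pair_db = 10 * np.log10(pair / avg_fw)
        power_avg_db = 10 * np.log10(pair.mean() / avg_fw)
        print(f"±{delta} fW: {pair_db[0]:+.1f} and {pair_db[1]:+.1f} dB; "
              f"dB average {pair_db.mean():+.1f} dB, power average {power_avg_db:+.1f} dB")
    # ±0.75 fW: +2.4 and -6.0 dB; dB average -1.8 dB, power average +0.0 dB
    # ±1.0  fW: +3.0 and -inf dB; dB average -inf,    power average +0.0 dB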

Despite all this drama on the negative side of the log scale, the linear power averaging that is so reliable for time-varying signals produces an accurate result whether that result is expressed in (linear) watts or (log) dBm.  That’s one good lesson from this mathematical thought experiment.  For another, see my earlier post “The Average of the Log is not the Log of the Average.”

 

So the log scaling that is so useful most of the time can be a reality distortion field, with very high numerical sensitivities to very small power changes.  Fortunately Agilent R&D engineers work hard to keep you out of this kind of trouble, choosing and linking averaging scales and detectors and programming measurement applications to yield correct answers unless you forcibly interfere.


Different Views & Log Scaling

Posted by benz Sep 23, 2016

Originally posted Apr 8, 2013

 

Which view or scale is the right one for your needs?

In a recent post on his “bad astronomy” blog (http://www.slate.com/blogs/bad_astronomy/2013/04/06/curiosity_rover_picture_of_mount_sharp_changed_to_look_like_earth.html) astronomer Phil Plait describes how “Astronomers commonly change the contrast from a linear scale—where something that’s twice as bright is shown that way—to a logarithmic scale, which goes by factors of ten.”  Plait explains that converting to a log scale helps to pick out faint sources in an image. 

That’s a close analog to one of the reasons to use a log scale described in my previous post, and a starting point for discussing different views of measurement data.  Plait’s blog post included a NASA image that had both its contrast and color balance altered to make the scene look more like it would on Earth.  To make the comparison clearer I created the graphic below, combining the original rover image (without contrast or color balance changes) and one with Earth-like color balance.

Mt. Sharp panorama from the Curiosity Mars rover with original color balance and contrast (top) and with altered contrast and Earth-like color balance (bottom). Images courtesy NASA/JPL-Caltech

So, in the vernacular of this blog, which image is the better measurement?  Both are better in their own way and it’s a matter of choosing what you want to see.  The original image represents what you would see if you were standing there and taking a picture without adjusting white balance.  Neat!  I know I’m not the only one who would give a small fortune to be able to go to Mars and look around, but that’s not going to happen and this uncorrected image is about the closest I’ll get.

However if you’re an Earth-trained geologist (the only known type so far) and want to use visual cues to do science on Mars you might prefer the bottom image with its familiar white balance and enhanced light/shadow cues.  The nonlinear scaling mimics how our eyes respond to light and of course the same thing is generally true of hearing.  Many systems and phenomena in nature have a lot of dynamic range, and log or other nonlinear scaling is an effective way to deal with it.

Another way to understand this comparison is as an example of how phenomena that are obscure in one measurement can be much clearer in another.  The better measurement is not always the one that is more accurate or faster, or closer to some ultimate truth.  It’s the measurement that gives you the answer or insight you need most clearly and reliably.  Sometimes that’s a straightforward matter of accuracy or measurement speed, sometimes not.

 

I have in mind a couple of examples from wireless communications and you’ll see them in days to come.  If you’ve got examples of your own I’m interested in them so please leave a comment.

Originally posted Apr 12, 2013

 

Choose the best view or use several at once

In my last post I used a geology imaging example from Mars to explain different measurement views and how these views can meet different measurement needs.

Well, I admit I also used the images because I’m fascinated by how a scene from Mars might look on Earth.

Now it’s time for an example or two from our home field of test and measurement.  Let’s consider two measurements of OFDM.  Over the past decade OFDM has become the single most important transport scheme in wireless, supporting a variety of modulation types through a large number of independent subcarriers.  The scheme provides practical benefits for RF transmission, though its multicarrier nature poses some test challenges.

Consider the two measurements of modulation error shown below.

Two error displays of the same OFDM signal frame. Error is plotted vs. time or symbol number (left) and vs. frequency or subcarrier number (right). One display clearly shows the source of the major error: narrowband interference.

These are orthogonal modulation error displays of the same signal.  Specifically they show the magnitude of the error vector (EVM) for every symbol and subcarrier in an IEEE 802.11a frame.  The difference between them is that the X-axis on the left is time or symbol number and the X-axis on the right is frequency or subcarrier number.  Thus the left display is error vector time and the right display is error vector spectrum.  In both displays the white symbol dots (small squares) are pilot symbols from the 4 pilot subcarriers and the heavy white lines are averages by symbol (left) or by subcarrier (right).

The biggest difference between the displays is their ability to reveal the source of a modulation error.  The total modulation error of the signal is relatively low, about 1%, and a constellation display would look good.  However the display on the left shows that a few symbols have higher error that is relatively constant over the frame or burst. 

The display on the right is much more useful, clearly showing that the larger errors all belong to a single subcarrier.  This subcarrier is impaired much more than the others, up to about 5% peak.  Such a narrowband interfering signal is likely a spur but could also be a harmonic of a narrow modulated signal from another band.
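The two displays amount to averaging the same error data along different axes.  Here’s a minimal Python sketch with synthetic data (not the measured 802.11a frame): an error matrix of symbols by subcarriers with one impaired subcarrier, summarized both ways.

    import numpy as np

    n_symbols, n_subcarriers = 40, 52
    evm = 0.01 + 0.002 * np.random.rand(n_symbols, n_subcarriers)  # ~1% background
    evm[:, 17] += 0.04                       # one impaired subcarrier, ~5% peak

    rms = lambda x, axis: np.sqrt(np.mean(x**2, axis=axis))
    evm_vs_time = rms(evm, axis=1)           # error vector time: looks nearly flat
    evm_vs_freq = rms(evm, axis=0)           # error vector spectrum: spike stands out
    print("worst subcarrier:", evm_vs_freq.argmax())             # -> 17
    print(f"symbol-to-symbol spread: {np.ptp(evm_vs_time):.4f}")  # small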

Thus an error that is obscure in one display is clear in another, making it much easier to spot a problem and determine a cause.  Tools such as Agilent’s 89600 VSA software support many different signal and error traces at once, along with large monitors, allowing simultaneous displays of multiple signals and multiple measurements.  This makes the best use of your visual processing, pattern recognition and system knowledge to discover and isolate problems quickly.

In this example the error spectrum trace is the key, but in others the error time trace can reveal a problem more clearly.  I’ll describe just such a situation in my next post.

By the way, the display on the right also illustrates one of the practical benefits of OFDM: its resistance to narrowband interference or narrow spectral nulls from multipath.  With forward error correction to compensate for the loss of data from one or several subcarriers, OFDM handles the kind of impairment that would cripple an equivalent signal using traditional single-carrier modulation.  It’s only one of many benefits of OFDM, which will be discussed in future posts.

 

For more on OFDM impairments see Bob Cutler’s classic article “Effects of physical layer impairments on OFDM systems” at http://rfdesign.com/images/archive/0502Cutler36.pdf.  Note the typo in the article, however, where the rightmost column label of the table on page 40 (3rd page) should be “single carrier modulation.”

Originally posted Apr 18, 2013

 

Big clues and little ones

In the previous post the signal impairment was confined to a narrow range of frequencies and therefore a frequency-specific error measurement (error vector spectrum) was the key to spotting the error and deducing its source.  Since the impairment was constant over time, the time-specific error measurement (error vector time) wasn’t much help.

Now let’s look at a different OFDM example, a prototype fixed WiMAX (IEEE 802.16) signal.  Here’s a measurement of error vector spectrum, the display type that was so useful in the previous post:

Error spectrum of an impaired WiMAX OFDM signal. The error is generally constant across the subcarrier frequencies and is much higher for one modulation type.

The small squares are individual symbol errors and knowing that the analyzer (Agilent 89600 VSA) represents different modulation types with different colors provides the first big clue.  The large errors are almost all green, while the blue squares and the orange ones behind them indicate that the other modulation types have much smaller errors.

Here’s where adding your own knowledge of the fixed WiMAX scheme comes in.  Multiple modulation types can be used in a single burst, and they are transmitted sequentially.  Time (or symbol number) and modulation type are the key.  A glance at two other common displays, constellation and error vector time, reveals the error clearly:

Composite constellation (left) and error vs. time (right) of a fixed WiMAX signal.

The clearest anomaly is in the error vector time trace on the right.  The sudden increase in error at the end of the frame corresponds to the switch from 16QAM in blue to 64QAM in green.  The average error indicated by the heavy white line also reflects the increased error.

The constellation diagram on the left is also very useful, even though the difference in error is not as apparent.  This constellation is a stacked display of all symbols and modulation types, using the same modulation color coding as the other displays.  The small white circles are ideal symbol locations, showing that the BPSK, QPSK and 16QAM symbols are being transmitted correctly.  By contrast the outer (largest amplitude) symbols of the 64QAM modulation all show an amplitude that is smaller than it should be.  Is this a result of compression (a typical RF error) or overall scaling (perhaps a baseband error) of the 64QAM modulation?

A less obvious clue from the constellation display settles the issue.  If we examine the BPSK pilots, which are used to track received amplitude and scale the demodulation, we see that their amplitude is correct.  It’s a little hard to see from this small display but the same is true of the BPSK, QPSK and 16QAM data.  As for 64QAM, close inspection shows that the inner symbols also have reduced amplitude.  This strongly suggests that the error is from overall scaling of the 64QAM modulation and not compression or limiting, which would mostly affect the outer constellation states.  The defect is probably in the digital baseband of the transmitter and will reduce performance or margin even if it does not cause bit errors by itself.

We’ve discussed larger clues here, and some smaller ones.  Another small clue is the low EVM of the pilot subcarriers in the first display above, and the fact that it does not increase along with the 64QAM data subcarrier error.  All these clues can be helpful and are generally clearer in one display type than another.  As with the example in the previous post it’s a reason to use multiple traces and a large display if available.  Combined with knowledge of systems or signals, pattern recognition is a powerful thing for experienced RF engineers!

You can experiment with this signal yourself, at no charge, using the 89600 VSA software and included recordings.  Just download the VSA software and accept the trial or demo license.  Then recall the demo recording, which includes a setup state and an explanation of the signal.  Dozens of other recorded signals and setups are also included.

Download the software at www.keysight.com/find/vsa.

You’ll find a complete installation guide here.

For this example just select File – Recall – Demo – WiMAX Fixed – WiMAX_5MHz_Impaired.htm

Originally posted Apr 29, 2013

 

Things that go bang in the night.  And in the day.  And anytime at all.

Like so many of you I have a wide-ranging curiosity and get a kick out of learning, whether in the RF measurement discipline or elsewhere.  It’s been difficult to find time to write a blog entry in the past few days but I have found time (what does this say about my priorities?) to follow my curiosity to an amazingly entertaining and educational blog.

Educational:  I had always wondered why so many compounds that are useful for “energetic disassembly” are mostly nitrogen.  Isn’t it the boring gas that makes up most of our atmosphere?  Ok, you can get it in liquid form to freeze a banana and then hammer a nail with it but that hardly seems dangerous.  As a matter of fact most of us use nitrogen as an inerting agent to stop things from happening.

Entertaining:  The blog in question had me laughing out loud repeatedly, even while explaining how boring old nitrogen can be so exciting in the form of a solid compound.  I read it to (inflicted it on) my wife and daughter and they laughed too.

The relevant entry series in the blog is entitled “Things I won’t work with” and you’ll find it at http://pipeline.corante.com/archives/things_i_wont_work_with/

If you’re curious like me just click right on over and read the top two entries on chlorine trifluoride and azidoazide azides.  I’ll be here when you come back.

Chlorine trifluoride (left) and sodium azide (right). “Exciting” compounds! Images from Wikimedia Commons

The entry on chlorine trifluoride is full of more entertaining education.  I had always imagined that oxygen would be the most potent oxidizer, but the blog explains just how much more potent an oxidizer can be.  Chlorine trifluoride can seemingly cause anything organic or inorganic to burn.  Glass, asbestos, rubber, leather, skin, you name it.

Despite their hazardous and ridiculously enthusiastic nature these compounds play a role in our work and everyday lives.  Chlorine trifluoride is used in the semiconductor industry and sodium azide is the propellant in many automobile airbags.  Despite its explosiveness and severe toxicity it has saved a bunch of lives.  With assistance from our sophisticated electronics of course!

Originally posted May 13, 2013

 

How far do you want to go?  Do you feel lucky?

I guess it’s easy for me (someone who works for a manufacturer of test equipment) to simply suggest that you use only cables and accessories designed for the frequencies you’re dealing with.  And I have indeed made that suggestion in articles and app notes and presentations.  But we’re engineers after all, and so we’re resourceful and inquisitive and budget-sensitive and we can make do.

Indeed we can, and sometimes it’s a rational choice.  In this post I’ll explore the choice through an example.  Let’s start with insertion loss measurements of a 2.4 mm cable and an SMA cable with appropriate adapters:

Comparing insertion loss measurements of SMA and 2.4mm over 30-50 GHz. The amplitude scale is 0.2 dB/div.

It’s important to keep in mind that this is just one example and therefore your mileage may (will) vary, but the measurement does reveal a few issues related to pushing the SMA cable beyond its normal frequency range.

First, the average insertion loss over 30-50 GHz is actually a little better with the SMA cable than the 2.4 mm.  Perhaps this isn’t surprising, given the larger geometry of the cable.

The behavior of the insertion loss, however, is significantly worse for the SMA cable.  There is a significant drop or null just below 36 GHz, and many smaller nulls at higher frequencies, especially between 36 and 40 GHz. 

What’s happening in the SMA cable at these high frequencies is called “moding” and it involves excitation of the first circular waveguide propagation mode in the coax.  To avoid moding, the dimensions of the coax must be much smaller than the signal wavelength, and that is not quite the case for the SMA cable.  Since some waveguide propagation is happening along with the normal signal transmission in the coax, the effective signal transmission is not consistent or well behaved.  You could say that the SMA cable is starting to behave like a very poor waveguide.
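You can estimate where moding begins from the coax geometry.  A common rule of thumb puts the cutoff of the first circular waveguide mode (TE11) roughly where the mean circumference of the coax equals one wavelength in the dielectric.  Here’s a Python sketch using that approximation; the dimensions are typical published values that I’m assuming for illustration:

    import numpy as np

    C = 3e8  # speed of light, m/s

    def te11_cutoff_ghz(inner_d_mm, outer_d_mm, er):
        # f_c ~ c / (pi * (a + b) * sqrt(er)), with a and b the conductor radii
        a, b = inner_d_mm / 2e3, outer_d_mm / 2e3   # radii in meters
        return C / (np.pi * (a + b) * np.sqrt(er)) / 1e9

    print(f"3.5 mm air line:     ~{te11_cutoff_ghz(1.52, 3.50, 1.0):.0f} GHz")
    print(f"0.141-in semi-rigid: ~{te11_cutoff_ghz(0.92, 2.98, 2.05):.0f} GHz")
    # Roughly 38 GHz and 34 GHz respectively; the latter is consistent with the
    # null just below 36 GHz in the measurement above. The actual onset varies
    # with the specific cable and connector geometry.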

On the other hand you might observe that the nulls or ripple aren’t that bad, and things look pretty good for SMA below 35 GHz.  That’s true in this case, and it might be true for your specific situation too.  Your own needs for accuracy and flatness may be modest and performance such as this may cause no problems.

So yes, you can sometimes make do with gear designed for lower frequencies, as long as you know the limitations.  But you should be aware of all the limitations and consequences.  For example these measurements haven’t shown the phase behavior of the SMA cable, and it is likely to be ill-behaved too.  And all these frequency response disturbances will be rather unpredictable and not repeatable.  Their frequency and size will be affected significantly by cable routing and movement, and will vary with the specific cables and connectors/adapters used.

Lastly, note the narrow bandwidth or high Q of these nulls.  That means that any signal that encounters them will experience amplitude and phase modulation as its frequency varies, even if the frequency variations are small.  Unintended modulation is your bonus and you never know exactly where you’ll find it!

By the way, since moding is a consequence of geometry and the geometry of 3.5 mm is generally the same as SMA, you can expect similar behavior from both.  The precision of the 3.5 mm hardware should mitigate the repeatability and predictability problems a little, though I wouldn’t rely on it.  Just confine your use of SMA to situations where it won’t cause errors large enough to be a problem for you, and reach for the higher frequency connection gear when performance and confidence are important.

Originally posted May 21, 2013

 

I agree with the Grinch on this one*

Noise doesn’t undermine everything we do, though sometimes it feels that way.  And while adding a little noise can improve some measurements (the subject of a future post or two) it usually makes our jobs harder.  Noise is a limiting factor in the performance of many systems and frequently limits both the accuracy and the speed of measurements.

In signal measurements noise acts to increase measurement variance and also adds undesired power to individual measurements.  Thus noise can make real-world measurements slower by requiring some kind of averaging to reduce variance to acceptable levels.  I’ll talk about averaging techniques in another post but here I’ll focus on the effects of undesired noise power on spectrum measurements.

First let’s look at two spectrum measurements of a low power multitone signal with amplitude that decreases 3 dB from tone to tone with increasing frequency:

Two spectrum measurements of a low power 7-tone signal. Amplitude decreases 3 dB for each tone and the scale is 3 dB/div. The two measurements show the effects of an approximately 12 dB difference in signal level vs. noise floor.

Notice the difference in the displayed power of each tone.  The tone at the left appears to have the same power in each measured trace.  However as the higher frequency tones decrease in power (3 dB/div. and 3 dB/tone) the difference between the measurements grows larger.

The effect of the analyzer noise floor (or the broadband noise of the signal or a combination of the two) is most clearly shown by the second tone from the right.  The power of the tone is approximately the same as the noise floor of the analyzer’s yellow trace and the result reads 3 dB high.  This makes perfect sense since the noise power in the analyzer’s resolution bandwidth filter at this frequency is the same as the power of the tone and the result is a reading 3 dB higher than it would otherwise be.
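The power addition is easy to check numerically.  Here’s a quick Python sketch of the apparent level error versus true signal/noise ratio:

    import numpy as np

    # The analyzer reports tone power plus noise power in the RBW, so the
    # apparent level error depends on the true S/N at that frequency.
    for snr_db in (20, 10, 3, 0):
        p_sig, p_noise = 1.0, 10 ** (-snr_db / 10)   # linear powers, tone = 0 dB
        error_db = 10 * np.log10(p_sig + p_noise)    # apparent minus true level
        print(f"tone {snr_db:>2} dB above the noise floor reads {error_db:+.2f} dB high")
    # At 0 dB S/N the reading is +3.01 dB high, matching the tone described above.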

Of course this power addition affects more than just tones or other discrete signals.  It also affects measurements of noise or signal/noise and is just as applicable to phase noise.  The figure below summarizes the situation.

Expanded view of measurement of a CW signal near an analyzer’s noise floor. The analyzer’s own noise affects measurements of both the signal level and signal/noise.

Measurements close to the noise floor of the analyzer need to be made and interpreted carefully, since power addition in the analyzer’s IF will affect signal/noise measurements along with measurements of discrete signals.

For me, the result of making measurements and drawings such as these is a resolve to be careful whenever I am measuring anywhere close to an analyzer or system noise floor.  I’d define “close” as within about 20 dB if I’m looking for maximum accuracy.  I remind myself that the analyzer is always measuring the total signal in its IF section and figure out the implications from that starting point.

One final point:  This addition of noise power in the analyzer IF can work both ways, and if the analyzer’s noise floor is accurately known, its power can be subtracted from measurements to make them significantly more accurate.  Agilent’s technique to accomplish this automatically on the PXA signal analyzer is called noise floor extension (NFE) and it’s discussed in the application note “Using Noise Floor Extension in the PXA Signal Analyzer.”  An excellent general reference for understanding noise in signal measurements is application note 1303, “Spectrum and Signal Analyzer Measurements and Noise.”
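The subtraction itself is the same arithmetic run backwards.  Here’s a minimal sketch of the idea with assumed readings, not Agilent’s actual NFE implementation:

    import numpy as np

    measured_dbm, floor_dbm = -97.0, -100.0          # assumed readings
    corrected_mw = 10 ** (measured_dbm / 10) - 10 ** (floor_dbm / 10)
    print(f"corrected signal level: {10 * np.log10(corrected_mw):.1f} dBm")
    # A signal equal to the noise floor reads 3 dB high; subtracting the known
    # floor power recovers the true -100.0 dBm.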

* From the animated TV special “How the Grinch Stole Christmas”

Originally posted Jul 2, 2013

 

My measurements are usually more noisy than I expect. Are yours?

I know I’m misusing the word a little but I think it’s clear enough when I say that I want measurements that are deterministic, not stochastic. That is, when I measure a signal I want a number I can rely on, with variance small enough to be ignored, and not some thorny statistical distribution that demands processing or interpretation.

Alas the universe shows no sign of caring about what I want and instead gives me just what I deserve. I have two choices about how to respond, and they’re not mutually exclusive: 

  • Set up my measurements to deserve better, to get answers closer to the pure values I want
  • Be realistic about what sort of noise or variance will be inherent in the answers I can expect. 

This post will explore the second choice, examining a few of the essentials of realism in signal measurements, using a familiar signal in several different forms. Other posts have already discussed (and future ones will discuss) techniques to make the measurements themselves better.

The example for this discussion is an original GSM mobile phone signal, with a modulation scheme known as Gaussian filtered minimum shift keying or GMSK. While in the real world this signal would be bursted or framed as part of a time division multiple access (TDMA) technique, that is not the case here. The measurements in this example would behave the same way if appropriate time-gated measurements were made on a bursted signal.

Unlike many digitally modulated signals the GMSK signal is “constant envelope” and information is transmitted through frequency or phase changes only. The RF carrier changes phase by ±90 degrees over each symbol interval and the signal amplitude or RF envelope is essentially constant.  That makes it compatible with efficient power amplifiers.

I generated a GSM signal with three different signal/noise ratios (SNRs) and measured the complementary cumulative distribution function (CCDF) to explore the peak/average power ratio, or how noisy amplitude measurements would be.  CCDF is a two-dimensional, log-log scaled plot of peak/average power ratio vs. frequency of occurrence. Specifically the CCDF shows, for a given peak/average ratio, how often that peak occurs. Here are the three measurements, in two traces:

CCDF measurement of a constant-envelope GSM (GMSK) signal at 3 different SNRs. Note how reducing the SNR drives the CCDF curve dramatically to the right, indicating a much more noisy measurement of signal amplitude.

CCDF plots are generated from thousands or millions of power measurements and the Y-axis is a log-scaled measure of how frequently certain peak/average power values are observed. The scale covers from 0.001% to 100%.  The X-axis indicates peak/average power ratio in 1 dB/div. so the CCDF curve is normalized to average signal power. 

Describing CCDF can be clumsy (and I just proved that in the paragraphs above!) so let’s take an example from the lower measurement: Following the dotted red line to the blue “SNR 10 dB” curve you can see that power peaks of 5 dB or more occur in about 1% of the measurements.

That’s the sort of result that gets my attention. I’m measuring a constant-amplitude signal with an SNR that places it well above the noise floor and yet in one measurement out of 100 the reading will be 5 dB high due to noise. Ten percent of the measurements will read 3 dB or more above average.

Increasing the SNR to 20 dB reduces the measured peaks, though they are still substantial, at around 2.5 dB or greater in 1% of the measurements. Of these three measurements it’s only the one with the very high SNR (better than 60 dB) that creates a CCDF curve where peaks never exceed average values by more than a small fraction of 1 dB.
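If you’d like to replicate the flavor of this experiment without a signal generator, here’s a Python sketch: a constant-envelope carrier plus complex Gaussian noise at a chosen SNR, with the CCDF computed from the instantaneous power samples. It’s my own illustration, and the exact percentages depend on how the SNR relates to the measurement span, so expect the shape of the effect rather than the precise values quoted above.

    import numpy as np

    n, snr_db = 1_000_000, 10.0
    sig = np.exp(1j * 2 * np.pi * np.random.rand(n))   # unit envelope, random phase
    noise = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)
    noise *= 10 ** (-snr_db / 20)                      # scale noise to the SNR

    p = np.abs(sig + noise) ** 2                       # instantaneous power samples
    ratio_db = 10 * np.log10(p / p.mean())             # peak/average per sample
    for x_db in (1.0, 3.0, 5.0):
        pct = 100 * np.mean(ratio_db >= x_db)          # CCDF: fraction above x_db
        print(f"P(peak/average >= {x_db:.0f} dB) = {pct:.3f}%")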

What I learned from this experiment: The universe is more noisy than I expected, and even measurement configurations that look favorable can yield readings with a surprisingly wide variance. I learned that if I really want the truth I should make friends with the most efficient averaging techniques (more in a future post) or take advantage of measurement applications where the averaging techniques and statistical analysis are built in.

Lastly, I should mention a couple of things that will be useful if you try this experiment yourself.  First, the results vary with frequency span, so your comparisons should all use the same span.  Second, any pulsing of signals will dramatically affect CCDF so it’s best to measure continuous signals or to make time-gated measurements.  By the way, the AWGN reference line provided in many CCDF measurements refers to the curve for additive white Gaussian noise, a signal with wider amplitude variations than most digitally modulated signals.

Originally posted Aug 27, 2013

 

Gaining comfort from an intuitive explanation

Multiple-input/multiple-output (MIMO) is one of the most powerful techniques used in RF communications. In real-world environments, it provides a substantial increase in channel capacity, and the added capacity can be traded for other system benefits such as greater range or improved link quality. Increases in processing power coupled with MIMO’s effectiveness have led to its use in 802.11n WLAN solutions, 3GPP Long Term Evolution, WiMAX, and other standards.

Alongside diversity and space-time coding, MIMO is one of many multiple-antenna techniques. However, it’s the only one to promise the magic of a substantial increase in channel throughput—and that’s what makes it so desirable.

MIMO uses matrix math on the signals from multiple receive antennas to separate incoming signals from multiple transmit antennas and thereby establish multiple virtual RF channels over the same frequency range. And there’s the magic: more capacity by sending different signals over the same frequencies at the same time by removing co-channel interference.

Simply knowing that the matrix math works, and the technology is proven, offers cold comfort to many RF engineers. It’s natural for many of us to want an intuitive understanding, both for personal satisfaction and as a guide to better RF testing and interpretation of test results. To help create a more visceral understanding of MIMO, here’s a diagram of a demonstration that vividly illustrates the 2×2 case:

Two signal generators and two receivers produce four possible RF signal paths through “the channel.” The frequencies of the multi-tone signals are interleaved, allowing each receiver channel to determine the transmitter for each tone.

In this example, the signal sources generate multi-tone signals with a spacing of 1 MHz. The tones from one generator are offset by 500 kHz so that those from each generator can be easily separated at the receiver. Of course, frequency spacing is not the only way to separate the tones. As one alternative, wideband modulated signals could be multiplied with orthogonal codes, allowing correlation to be used at the receiver to identify the energy from each signal source.

As shown in the lower diagram, there are four possible signal paths. Each receiver measures signals from both sources as they are added together in the transmission channel. The result is four sets of tones, as shown in the two-channel measurements at the top, from which the frequency response of each path can be interpolated. The scalar spectrum is shown here, but the measurement is actually a vector (mag/phase) one.

Our system knows what signal was transmitted (amplitude/phase or I/Q) and there are four signal paths to measure. We thus have four measurements, one for each path, so we can look at this as four equations and four unknowns and use matrix math to solve.

Here is the critical step in the explanation: Now that we have full vector knowledge of the four paths, we know exactly how signals will combine. After sending known signals to measure the channel, we can send independent data streams from each transmitter. The receiver can apply the channel knowledge to the received signals to reverse the combining (i.e., remove the interference) in the channel and recover the separate transmitted signals and therefore the two independent data streams. More apparent magic: Removing the co-channel interference creates two virtual channels in place of one.
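For the numerically inclined, here’s a tiny Python sketch of the 2×2 case: measure the channel matrix with known training signals, then invert it to separate two simultaneous streams. It’s a deliberately idealized, noise-free illustration with a flat channel; real systems estimate the matrix per subcarrier and in the presence of noise.

    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # unknown channel

    X_train = np.eye(2)                        # known training: each TX sounds alone
    Y_train = H @ X_train                      # what the two receivers observe
    H_est = Y_train @ np.linalg.inv(X_train)   # four equations, four unknowns

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
    X_data = rng.choice(qpsk, size=(2, 100))   # two independent data streams
    Y_data = H @ X_data                        # streams combine in the channel
    X_rec = np.linalg.inv(H_est) @ Y_data      # undo the combining at the receivers
    print(np.allclose(X_rec, X_data))          # True: both streams recovered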

In practical systems, the transmitters send modulated data most of the time and intermittently send known training sequences to allow the channel measurements to be updated as frequently as the channel is likely to change.

A favorite quote from science/science-fiction author and visionary Sir Arthur C. Clarke, often known as his third law, is “Any sufficiently advanced technology is indistinguishable from magic.” However, as engineers we deal in deeper understanding, and hopefully with this explanation even this very advanced and useful technology is fully distinguishable from magic.

Originally posted Sept 6, 2013

 

We wave our hands, and it shows that many of us miss something important

The demonstration described in the previous post is a good way to get an intuitive feel for the processing required for signal separation. That’s essential because signal separation is the fundamental thing that makes MIMO work.

The demo also highlights important aspects of RF transmission that affect not just MIMO operation but most other types of single- or multi-channel signal transmission.

One remarkable thing about the demo is its ability to continuously and simultaneously show the frequency response of the four signal paths, and do so as quickly as the display updates. A video clip from this demonstration is available on YouTube, and an example display from a single transmitter (rather than two) is shown below.

A multitone signal covering approximately 36 MHz and transmitted from one antenna is shown as received by two different antennas. The tone peaks trace out the very different amplitude frequency responses of the two transmission paths.

As shown in the previous post, it’s often easy to visually interpolate between the alternating sets of multitone signal peaks and trace out the amplitude response of each path.

The other remarkable thing is what RF engineers do when they see this live illustration of the transmission environment: They wave their hands. Specifically, they wave their hands in the air directly between the transmit and receive antennas.

The demonstration is often set up with two signal generators at one end of a table and a two-channel RF receiver—in the form of a signal analyzer or an oscilloscope—at the other end. Engineers expect to reflect and absorb some of the signal and they want to see the effects in real time. They move their hands and watch for changes in the measurements, but the modest effects usually leave them feeling a little disappointed.

Their fluttering hands cause small changes in signal levels, and those changes are generally flat over the frequencies involved. At the same time, larger changes in frequency response are also seen and generally seem uncorrelated with all the hand waving. These other changes are often very non-flat with frequency—deep nulls, obvious peaks, ripple—and are much more interesting than the hand-made attenuation. What’s going on?

In most cases, the most dramatic frequency responses don’t come from direct transmission but instead are the result of reflections, multipath or delay spread. The signal at any receive antenna is a combination of all possible paths and, in many cases, no single path is fully dominant, not even the direct one. The path length and resulting phase variations cause constructive and destructive interference or fading, and the results can be narrower in frequency than simple attenuation due to obstructions.
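
A quick way to convince yourself of this is to model a two-ray channel. In the sketch below (the delay, reflection amplitude and span are invented for illustration), a direct path plus one delayed reflection produces deep, narrow, periodic nulls, quite unlike the gentle flat loss of an obstructing hand.

import numpy as np

f = np.linspace(2.4e9, 2.5e9, 1001)          # 100 MHz span, like the demo

# A hand in the direct path: roughly flat loss across the span.
hand_db = -1.0 * np.ones_like(f)             # illustrative 1 dB absorption

# Two-ray channel: direct path plus one reflection with excess delay tau.
tau = 25e-9                                   # illustrative 25 ns excess delay
a = 0.8                                       # illustrative reflection amplitude
h = 1 + a * np.exp(-2j * np.pi * f * tau)
multipath_db = 20 * np.log10(np.abs(h))

# Nulls repeat every 1/tau = 40 MHz and can be deep and narrow,
# unlike the gentle flat attenuation of the hand.
print(multipath_db.min(), multipath_db.max())   # about -14 dB to +5 dB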

What this means for our hand-waving demo is that the observed frequency responses tend to vary widely and respond strongly to environmental elements other than the hands of engineers. Perhaps someone is blocking a reflective surface or reflecting signals themselves. Perhaps a door is being opened or closed. Who knows? The number of possibilities is nearly infinite in office environments and thus this demo produces frequency responses that are constantly changing and seemingly random.

Of course, the full MIMO demo, even if only 2×2, is considerably more complex and it can appear that the four paths are completely unrelated to each other. And that’s a good thing. Highly correlated RF paths will cause the whole MIMO scheme to collapse—and that’s a good topic for another day.

Originally posted Sept 17, 2013

 

As your parents used to say, “It’s for your own good”

Most engineers enjoy a good puzzle and we’ve all encountered a device or technique that makes us scratch our heads. We spot something peculiar, we don’t see any obvious reasons for it and, in a Pavlovian moment, our speculative juices overflow. We wonder what’s behind this puzzle—maybe it has multiple explanations— and we suspect that it’s all due to some hard-won experience that came at great expense to someone else.

One such nice, little puzzle is found on the front panel of many millimeter-frequency signal generators and analyzers. Instead of the usual female connector—Type-N, 3.5 mm, 2.4 mm, etc.—you’ll often find a male connector with a large-diameter knurled flange that enables finger-tightened connections (see below).

While the typical front-panel connector is female, the gender is often reversed for equipment covering millimeter frequencies (30 GHz to 300 GHz). The male connector offers some degree of protection from several types of damage.

Because most cable assemblies use a male connector at each end, there must be at least one good reason for this departure from the norm. Indeed that’s the case, and the reasons can be summarized with the words “connector saver.” Two examples and some common part numbers are shown below.

“Connector savers” are coaxial adapters placed between instrument front-panel connectors and cables or DUTs. They can be easily replaced if they are damaged or become worn.

Two more key reasons are captured with the words “expendable” and “replaceable.” Millimeter connectors are inevitably small and somewhat delicate, and replacing an instrument’s front-panel connector is an expensive operation. What’s more, recalibration is often required, and that can take an instrument out of service for a day, maybe longer.

In situations that require frequent connection changes, it makes sense to keep a sacrificial adapter semi-permanently attached to the front panel of the instrument. When the adapter gets damaged or worn, it can be easily replaced at modest expense, relatively speaking: even the most expensive metrology-grade adapter costs less than an instrument repair and recalibration.

Back to damage and wear. Front-panel connectors are subject to several kinds and the use of the male connector and a connector saver helps in several ways. First, the male connector tends to encourage the use of a connector saver. An operator not trained in the proper use and care of connectors cannot connect most cables (again, male at both ends) directly to the front panel and is therefore less likely to cause accidental damage.

Second, the male connector is generally more physically robust than the female. The most common type of damage is destruction of the female collet by a center conductor that is either misaligned or the wrong size (see the post “Loose nut danger”). A male front-panel connector focuses the damage on the less-expensive part.

As a final point, this gender choice tends to dramatically reduce the number of connection cycles for the instrument connector itself. Even with careful operation, wear and damage are inevitable and it’s better to focus the wear and risk on expendable parts.

Bonus trivia #1: How do you determine the gender of connectors with varying configurations of outer shells? Remember that gender is determined by the center-most conductor of a connector, regardless of the position of the outer nuts, threaded barrels or bayonet hardware.

Bonus trivia #2: The flats machined into the barrels of the connector savers pictured above are not just for torque wrenches. They also allow use of a common open-ended wrench to avoid connector rotation when nuts are tightened. Connector rotation should be avoided because of the wear it causes, especially with delicate millimeter hardware.

Bonus trivia #3: Some instruments use custom adapters that can function as connector savers rather than the standard adapters described above. These custom adapters are a user-replaceable part of a front-panel connector set and may offer a choice of connector types at the user end. This approach may provide a shorter physical connection and some improvement in mechanical robustness due to the compact configuration. However, the special adapter may be a disadvantage if it is damaged in its connector-saver role and a replacement is not readily available.

Originally posted Sept 25, 2013

 

Understanding when the errors will become a problem

In this space, we’ve already talked about how the universe may be noisier than you think and described how noise adds something to all signal measurements. The Grinch’s distaste for noise was understandable, but it’s hardly a strategy for making better measurements—or for handling holiday stress—so now is a good time to quantify the errors associated with measurements made close to noise.

As engineers, we do our share of careful calculations; however, good engineering is also based in part on effective rules of thumb. Is there a minimum signal-to-noise ratio (SNR) for adequate accuracy in power measurements? Stated another way, how close to the noise floor must a signal be before the error from the noise, inevitably measured along with the signal, becomes too great?

In this case, a reasonable rule of thumb requires more than just a single number—but not too much more. For the typical case of measuring a CW signal near broadband noise, this little table should do:

These are the measurement error values—always positive—resulting from signal measurements made near broadband noise. The error is shown versus actual SNR and not apparent SNR.

The first entry in the table makes intuitive sense: if the actual signal power matches the noise level, the analyzer will read the combined power in its resolution bandwidth and the result will be too high by a factor of two, or 3 dB.

As SNR improves, things become more interesting. For example, with SNR at 5 dB the signal will usually be very distinct from the noise. In addition, the measurement may seem reliable even though an extra 1.2 dB of positive error is present.

At 10 dB SNR, things look even better and some engineers (not us, of course!) will accept a marker reading as it appears. However, the 0.41 dB error from the added noise power is more than twice the basic amplitude-accuracy specification of the best modern spectrum or signal analyzers.

As a matter of fact, you need an SNR of 15 dB to reach a measurement condition in which the error from added noise power is comparable to the best accuracy specification of a high-performance signal analyzer. If you want to make measurements that let you reasonably disregard the added power from broadband noise, you’ll need an SNR of 20 dB or better in many cases.
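
The table values follow directly from power addition: the analyzer reads signal-plus-noise power, so the error in dB is 10·log10(1 + 10^(−SNR/10)). A few lines of Python reproduce them:

import numpy as np

# The analyzer reads signal-plus-noise power, so the reading is high by
# 10*log10(1 + N/S), where S/N is the *actual* SNR.
for snr_db in (0, 5, 10, 15, 20):
    error_db = 10 * np.log10(1 + 10 ** (-snr_db / 10))
    print(f"actual SNR {snr_db:2d} dB -> reads high by {error_db:4.2f} dB")

# actual SNR  0 dB -> reads high by 3.01 dB
# actual SNR  5 dB -> reads high by 1.19 dB
# actual SNR 10 dB -> reads high by 0.41 dB
# actual SNR 15 dB -> reads high by 0.14 dB
# actual SNR 20 dB -> reads high by 0.04 dB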

Alternatively, the broadband noise can be separately measured or modeled and then subtracted from the signal measurement. This increases accuracy and effectively improves dynamic range. The process is performed automatically in the Agilent PXA signal analyzer, where it is called Noise Floor Extension (NFE). More on that in an upcoming post.

As we’ve seen before, noise is often more troublesome than we expect, and the amazing accuracy of some of today’s signal analyzers can make it a factor that might not otherwise be noticed.

To wrap this up, here are a few things to remember about these measurements and the table:

  • The signal measurement error due to the power added by broadband noise is always positive. Although noise voltage may cancel signal voltage at some points in time, the noise is uncorrelated with the signal and the powers will add.
  • The table applies to measurements of CW signals and broadband noise. Modulated signals require different measurement approaches.
  • The table doesn’t apply to the traditional spurious measurement setup that uses a peak detector along with significant video averaging (i.e., video bandwidth is much less than resolution bandwidth). This setup reduces the apparent noise level and its effects.
  • The table lists error versus actual SNR rather than apparent SNR. The difference is described in Oh, the noise!

Originally posted Oct 11, 2013

 

Which real-time view is best for you?

By now, you’ve probably heard of real-time spectrum analyzers. These blazing-fast instruments calculate hundreds of thousands of spectra per second, ensuring that all samples are processed and nothing is missed—even when analyzing wide frequency spans and the fast sample rates they require.

As you might imagine, this kind of speed presents a display challenge: How do you represent nearly 300,000 spectral results per second in a way that’s useful to an RF engineer? After all, our goal is not more measurements but better measurements.

For compatibility with human visual systems, signal analyzers update their displays about 30 times per second. With 300,000 spectra per second, each display update must somehow represent approximately 10,000 spectrum calculations. Aside from simple peak or average calculations, the two most common representations of this “big data” are histogram or density displays (the subject of a future post) and spectrograms. Density displays summarize the frequency of occurrence of specific amplitude/frequency combinations, and this is a good way to reveal infrequent signals or those with very low duty cycles.

In contrast (pun intended), spectrograms represent large amounts of spectral data with a focus on timing. As with normal spectrum displays, the X-axis is a frequency axis; however, the Y-axis changes from amplitude to time, and amplitude is represented with color. As a result, the display can show at a glance how the power spectrum changes over time, and a spectrogram of real-time data can represent everything that happens over a wide span and a selectable time interval. Two examples of over-the-air spectrograms from the 2.4 GHz ISM band are shown in the figure below.

This shows two real-time spectrograms of the 100-MHz ISM band centered at 2.45 GHz. WLAN, Bluetooth and cordless telephone signals are shown. The frequency acquisition times set the time represented by each slice or pixel row and thus the time covered by the full spectrogram.

These spectrograms show the same signals and use the same measurement settings except for one: frequency acquisition time. In the top view, each display update (30/s) adds a single pixel row or slice to the spectrogram. The pixel row summarizes about 10,000 spectrum calculations using a peak-hold algorithm that is the default “display detector” type. Other available detectors include average, sample and negative peak.
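
To make the detector’s job concrete, here is a sketch of the reduction step; the sizes and levels are illustrative, not the analyzer’s actual internals. Note how the choice of detector decides whether a single brief burst survives into the displayed row.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative block: 10,000 spectra x 801 trace points, noise near
# -70 dBm, plus one brief -20 dBm burst in a single spectrum.
p_mw = 10 ** (rng.normal(-70, 3, size=(10_000, 801)) / 10)
p_mw[5_000, 400] = 10 ** (-20 / 10)

# Display detectors reduce the block to one pixel row per trace point:
peak_row = 10 * np.log10(p_mw.max(axis=0))    # peak hold: the burst survives
avg_row = 10 * np.log10(p_mw.mean(axis=0))    # average: burst diluted 10,000:1
neg_row = 10 * np.log10(p_mw.min(axis=0))     # negative peak
sample_row = 10 * np.log10(p_mw[-1])          # sample: latest spectrum only

print(peak_row[400], avg_row[400])            # ~-20 dBm vs tens of dB lower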

Because this spectrogram is about 400 rows high, it represents a measurement interval of about 12 seconds and reveals a lot of information: apparently continuous WLAN traffic on 18 MHz channels, Bluetooth frequency-hop patterns (top), bursts from a cordless phone (center) and some WLAN channel scanning or switching (center left). The 12-second interval is long enough to show general signal behavior and catch infrequent events such as cordless phone activity and WLAN scanning and switching. However, the available time resolution is 30 ms and this is too coarse to show how the ISM band is shared by WLAN bursts and the much shorter Bluetooth transmissions.

The bottom spectrogram provides a dramatic increase in time resolution. The level of detail helps explain how a single band can support so many different users if it implements spread-spectrum techniques such as frequency hopping and pulsed or framed OFDM. The frequency acquisition time has been reduced to 1 ms and thus the spectrogram covers just 0.4 seconds. Because the analyzer’s display update rate for the spectrum is unchanged, each pixel row summarizes about 300 spectra, and 30 pixel rows must be added for each update. As a result, the display scrolls by more rapidly.

Although both spectrograms are accurate measurements of activity in the ISM band, they reveal different phenomena and do so with different tradeoffs. For example, deep, scrollable trace buffers and a slice marker (the white horizontal line in each spectrogram) allow the user to review larger blocks of spectrogram data and display individual spectrum results for each pixel row. No matter what frequency acquisition time is selected, the real-time spectrogram can cover the entire band and ensure that no signal is missed.

If needed, much greater time resolution is available using the same signal analyzer platform with a different type of real-time analysis: time capture and playback. That technique, along with its benefits and tradeoffs, will be the subject of another post.

For more information on real-time analysis and the Agilent PXA and MXA X-Series signal analyzers, see this application note.

Originally posted Oct 23, 2013

 

Hello, Lord Rayleigh; Goodbye, Herr Doktor Gauss

Joe Gorin and Michael Dobbert have done some impressive work that addresses the subject of better measurements from an unusual but very important direction. The result of their work is good news for RF and microwave engineers everywhere.

Here’s the short version of the story: for all of us, amplitude accuracy is a fundamental measurement goal that is often uncertain. Going deeper: in many RF and microwave signal measurements, the biggest single element of measurement uncertainty is due to impedance mismatch and its effect on power delivery from the DUT to the analyzer.

Mismatch error is calculated from the reflection coefficient specified by equipment manufacturers. That reflection coefficient is usually specified for frequency ranges in terms of a single magnitude value. The traditional approach to calculating mismatch accuracy using that reflection coefficient overestimates mismatch error by a factor of three to six. This unduly pessimistic approach has been accepted practice for decades.

In the real world of RF/microwave engineering, the benefits of reduced uncertainty can lead to tightened DUT specifications or can be traded for productivity benefits such as increased yield and improved throughput.

Gorin and Dobbert have taken a statistical approach to accurately estimating uncertainty bounds, basing their model on a better understanding of the way the reflection coefficient behaves in analyzers, power sensors and signal generators. They then validated this approach by making a very large number of measurements to see how the measurements fit their model.

The new statistical model of reflection coefficient—and thus mismatch error—is based on two points:

  • Represented in a complex I/Q plane, the real and imaginary parts of the reflection coefficient will have Gaussian distributions
  • Because the real and imaginary parts are each Gaussian-distributed, the probability density of the magnitude of the reflection coefficient will have a Rayleigh shape instead of a Gaussian one

This reflection coefficient behavior and the resulting Rayleigh distribution are shown below, annotated with the 95th percentile measurement uncertainty bounds:

A complex I/Q representation of reflection coefficient with Gaussian distribution in each I/Q part (left) and the resulting Rayleigh distribution of the probability density of the magnitude of the reflection coefficient.

Of course, the test of a good theory or model is experimental evidence. There is room here for only a single example, but the real-world data fit the model well in all tested cases involving signal analyzers, signal generators and power sensors.

Comparing predicted Rayleigh (red) and measured (blue) values of the cumulative distribution function for the Agilent PXA signal analyzer preamplifier from 3.5 to 26.5 GHz. The fit error (green) is very low, validating the assumption of the Rayleigh distribution.

All this discussion of estimated error bounds can obscure the fact that this new work does not change the actual measurement accuracy of instruments or sensors. Mismatch uncertainties have not changed; however, traditional methods have overestimated them by a factor of three to six.
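
A short Monte Carlo sketch shows where the overestimate comes from; the sigma value is arbitrary, chosen only for illustration. Gaussian real and imaginary parts yield a Rayleigh-distributed magnitude, and the 95th-percentile mismatch error lands well below a worst-case bound built from maximum magnitudes.

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sigma = 0.02                    # arbitrary spread, for illustration only

# Gaussian real and imaginary parts -> Rayleigh-distributed magnitude.
g1 = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
g2 = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)

# Empirical 95th percentile of |gamma| vs the analytic Rayleigh value.
print(np.percentile(np.abs(g1), 95))           # ~0.049
print(sigma * np.sqrt(-2 * np.log(0.05)))      # ~0.049

# Mismatch error between two such devices, 20*log10|1 + g1*g2|:
err_db = 20 * np.log10(np.abs(1 + g1 * g2))
stat_95 = np.percentile(np.abs(err_db), 95)

# Traditional worst case built from the maximum magnitudes:
worst = 20 * np.log10(1 + np.abs(g1).max() * np.abs(g2).max())
print(stat_95, worst)   # the worst-case bound is several times larger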

It is also worth noting that while Gorin and Dobbert validated their predictions on Agilent equipment, the predictions should apply broadly to RF and microwave signal generators, analyzers and power sensors.

 

For more information, see the Microwave Journal article “A New Understanding of Mismatch Error” and Agilent application note 1449-3, Fundamentals of RF and Microwave Power Measurements (Part 3): Power Measurement Uncertainty per International Guides. A spreadsheet-based uncertainty calculator corresponding to the calculations in the application note is also available from Agilent.

Originally posted Nov 6, 2013

 

Taking time resolution to the extreme in your measurements

Previous posts explained the fundamentals of real-time analysis and described the difference between real-time analyzers and real-time analysis. The simple concept common to any version of real-time analysis is that results are calculated from every sample of the selected frequency span, with no gaps.

However, as we saw in the recent post on understanding different types of real-time spectrograms, the displays generated from real-time spectra can be quite different and can be used to analyze specific phenomena or problems. In particular, the setting for acquisition time changes the time scale of the spectrogram over a relatively wide range and thus sets the time resolution of the results.

The ability of high-performance analyzers to generate several hundred thousand spectra per second has an important effect on displays such as spectrograms and density. Because neither displays nor the human eye can process spectra at this rate, each spectrogram line must represent many separate spectra. The analyzer’s display-detector function—peak, negative peak, average, and so on—determines how the copious spectra are combined to form a single line. The effect, even for very short acquisition times, is to limit the time resolution of each spectrogram line or slice.

If you need very fine time resolution and greater measurement flexibility, it’s best to switch from real-time spectrum analysis to another family of real-time measurements: gap-free signal capture and post-processing. Time capture and flexible post-processing (playback) are provided by the 89600 VSA software. Its analysis capabilities are available on many hardware platforms including the PXA and MXA X-Series signal analyzers that provide the real-time capabilities we’ve discussed previously.

Let’s look at a spectrogram of the ISM band produced using capture and playback. Compared to the results described in a previous post about real-time analysis, the spectrogram shown below not only has a different appearance but also provides different benefits.

This is a real-time spectrogram from capture/playback of 100 MHz ISM band at 2.45 GHz. WLAN and Bluetooth® signals are prominent, along with leakage from several microwave ovens. Individual hops and frames are shown clearly and the time-slice markers identify the repeat rate of the wandering signals as 1/60 Hz from microwave ovens.

The benefits of flexible post-processing apply to analysis and troubleshooting. For example, playback speed is adjustable over a very wide range by adjusting overlap. A large overlap provides extremely fine time resolution in the spectrogram, and this can help you see exactly how signals change—or relate to others—over time.
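
As a rough sketch of how overlap sets the time advance between spectrogram lines in STFT-style playback (the sample rate and FFT length here are invented, and the 89600 VSA’s internal processing may differ):

import numpy as np

fs = 100e6            # illustrative complex sample rate for a 100 MHz span
n_fft = 1024          # illustrative FFT length

# Time advance between spectrogram lines for a given overlap fraction:
for overlap in (0.0, 0.5, 0.9, 0.99):
    step_s = n_fft * (1 - overlap) / fs
    print(f"overlap {overlap:4.0%} -> {step_s * 1e6:8.2f} us per line")

# overlap   0% ->    10.24 us per line
# overlap  50% ->     5.12 us per line
# overlap  90% ->     1.02 us per line
# overlap  99% ->     0.10 us per line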

In the spectrogram above, fine time resolution reveals a few instances of interference and many instances of spectrum sharing, which is vital to effective use of this band. Note that two WLANs are successfully sharing the center channel (in this timeframe, at least). Also note that most, but not all, Bluetooth hops are clear of the WLAN bursts and microwave oven activity.

The capture/playback approach also provides complete flexibility in changing analysis types and parameters. In the 89600 VSA software, center frequency and span can be changed after the capture, enabling analysis to focus on specific signals and time intervals. The analysis mode can be changed from spectrum to time domain or demodulation, and several different types of demodulation can be done at different center frequencies or spans to evaluate interference.

No single spectrogram method is best for all applications. The continuous spectrogram of a real-time spectrum analyzer is an excellent way to monitor signals or bands over longer periods. The capture/playback approach is a powerful way to examine short-term behavior in detail, especially when paired with magnitude or frequency-mask triggers and pre- or post-trigger delays.

 

For more information on these displays and measurements, please see the application note Real-Time Analysis Techniques for Wireless Measurements.

Originally posted Nov 22, 2013

 

Straightforward advice and a trick question

Well, it’s more of a tricky question than a trick question. The answer is an interesting and useful fact and we’ll get to it in just a moment. First, allow me to be the thousandth person to remind you that connector torque can be important and helpful at high frequencies, and explain a few of the reasons.

Engineering is inherently challenging at microwave and millimeter frequencies. Many things that can be ignored at lower frequencies really begin to matter. Accuracy and dynamic range are already declining as frequency increases, and you don’t want to give up anything you don’t have to in terms of accuracy, repeatability or connection loss. Because the measurement solutions you buy—including analyzers, signal generators and associated equipment—are more expensive, you want to extract all available performance and preserve all the margin you can.

All these factors lead us to the use of torque wrenches. I don’t know about you, but my fingertips just aren’t strong enough or repeatable enough when it’s time to hook up an important measurement or make final system connections. I’ve noticed that I can’t unscrew a properly-torqued connector nut with my fingers, and that suggests appropriate tightening requires a wrench. Applying consistent torque points me specifically to a torque wrench, and avoiding damage means paying attention to its limiting function.

But what does proper torque really do for these connections? One way to summarize is to say that it places a consistent stress (in the mechanical engineering sense) on the nut and the rest of the connector structure. This has three important benefits:

  • Consistent mechanical alignment and axial positioning of connector elements ensures the best connector performance (e.g., return loss) and repeatability.
  • Sufficient strain reduces the possibility that the connection will be loosened by vibration, thermal changes or external mechanical action such as bending or twisting.
  • Proper torquing avoids connector damage from deformation due to excessive tightening.

The first and third benefits are often difficult to discern. Suboptimal connector performance can be a hard problem to isolate and connector distortion or damage from excessive tightening may be invisible to the eye.

The second benefit is something we’re all familiar with. After using our fingers and thinking we’ve got a connector tight enough, we later discover that it’s loose. Maybe the connector was bumped, or maybe something more subtle happened: Thermal cycles or cable motion reduced the strain from fingertip-tightened “too low” to “zero”—and the nut is unlikely to re-tighten itself from there!

Proper connector torque is simple for the vast majority of microwave and millimeter connections we use because there are only two torque values needed to cover SMA through 1.85 mm (70 GHz). SMA connectors should be torqued to 56 Newton-cm or 5 inch-pounds and all the rest—3.5, 2.92, 2.4 and 1.85 mm—should be tightened to 90 N-cm or 8 in-lbs.

And now to the tricky question mentioned above: What if you’re mating an SMA connector with one of the other compatible precision connectors, 3.5 or 2.92 mm? Should you use the SMA torque value or the precision-connector value? The answer is to follow the value for the male connector. Thus, if the SMA connector is male gender, then you should use the SMA torque value.
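
If you like your rules of thumb executable, here is a hypothetical little helper; the names and structure are mine, not from any standard or instrument library. It encodes the two torque values and the follow-the-male rule:

# Hypothetical helper: SMA takes 56 N-cm (5 in-lb); 3.5 mm, 2.92 mm,
# 2.4 mm and 1.85 mm take 90 N-cm (8 in-lb); for a mixed pairing,
# follow the male connector's value.
TORQUE_NCM = {"SMA": 56, "3.5 mm": 90, "2.92 mm": 90,
              "2.4 mm": 90, "1.85 mm": 90}

def mating_torque_ncm(male: str, female: str) -> int:
    """Torque for a mated pair, keyed to the male connector.
    Note: does not check intermateability (e.g., SMA never mates
    with 2.4 mm or 1.85 mm)."""
    if male not in TORQUE_NCM or female not in TORQUE_NCM:
        raise ValueError("unknown connector type")
    return TORQUE_NCM[male]

print(mating_torque_ncm("SMA", "3.5 mm"))    # 56: male SMA -> SMA value
print(mating_torque_ncm("3.5 mm", "SMA"))    # 90: male 3.5 mm -> precision value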

The power of wrenches must be used carefully because you can’t feel forces through them the way you do with your fingers. The potential for over-straining something is also greater when multiple or longer adapters are used or when devices are connected directly without cables that relieve bending stress. The diagram below describes one example: “wrench-lift stress.”

Using the appropriate wrenches to tighten or loosen connectors is good practice but they can cause wrench-lift stress and bend connectors if used the wrong way. Maintaining a small angle between the two wrenches during assembly and disassembly will avoid the problem.

The second wrench is typically a simple open-ended one, but it also has an important job to do. By preventing rotation of the connector mating surfaces while they are in contact, it reduces a major source of connector wear and damage.

Of course, connector torque and rotation aren’t the only sources of problems such as connector damage and poor performance. Other examples are discussed in the posts Loose Nut Danger and Male Front-Panel Connectors on Millimeter-Frequency Instruments: Why?

You can find a summary of torque values for different connectors, along with wrench sizes and part numbers, at http://www.keysight.com/upload/cmc_upload/All/EPSG086190a.htm.

Lastly, for more detail on torque and other coaxial connection issues and practices see the classic application note Principles of Microwave Connector Care (AN-326), Agilent literature number 5954-1566.

Originally posted Dec 10, 2013

 

“No noise” is not always the right goal when you’re generating test signals

Minimizing noise is so often essential to better measurements that it’s easy to assume that maximum SNR is always the best test condition. In this post and one or two to come, I’ll discuss some examples and offer practical advice for situations in which some amount of noise—the precisely correct amount, of course!—will make your design or test task easier and your results more reliable.

As readers of this blog know, I’m generally not a fan of noise. It can represent disorder, entropy, poor performance, some degree of engineering failure, or maybe just bad luck. Excess noise instinctively feels like an affront to RF engineers and most of us would rather see perfectly pure sine waves or a well-constructed digitally-modulated signal, no matter how complex. Of course, many complex digitally-modulated signals look like band-limited noise, but that’s a sign of success.

Let’s focus on desirable noise in the form of controlled phase noise in test signals for orthogonal frequency-division multiplexed (OFDM) systems. OFDM is sensitive to phase noise because phase noise causes the closely spaced subcarriers to interfere with each other, reducing the orthogonality that is essential to the proper functioning of the system.

The obvious way to create OFDM test signals is to define them with near-zero phase noise; however, in the real world of user equipment this is neither realistic nor necessary. It isn’t realistic because creating signals with extremely low phase noise is expensive. It isn’t necessary because OFDM demodulators continuously track known “pilot” subcarriers and symbols in transmitted signals, and this tracking can compensate for some amount of phase noise. How much phase noise is OK? At what carrier offsets? Ah, that’s the signal generation test challenge, and the engineer’s chance to shine. The goal is to understand the maximum amount of allowable phase noise and to optimize design performance—and cost—by generating test signals that operate around this limit.

Thus, there are two major elements in testing to optimize phase noise performance in OFDM systems. First, generating signals with the appropriate amount and distribution of phase noise. Second, understanding how this phase noise will affect modulation quality measures such as error vector magnitude (EVM).

For the generation of OFDM signals with precisely impaired phase noise performance, the Agilent N5182B MXG X-Series signal generators use real-time baseband processing to provide a “phase noise injection” capability. The user need only specify a phase noise pedestal level and the frequency break points for the beginning and end of the pedestal. The figure below shows the relevant signal generator configuration screen with its target curve and the corresponding measurement by an Agilent X-Series signal analyzer with a phase noise measurement application.

The configuration screen for specifying added phase noise in the Agilent N5182B MXG signal generator is shown at left, including the anticipated phase noise curve. The corresponding measurement of actual phase noise by a signal analyzer measurement application is shown at right.

As shown above, the RF signal generator neatly solves the problem of adding realistic phase noise at desired carrier offsets. The other major element of testing and system optimization is to verify the effect of this phase noise on modulation quality as the receiver sees it.

A common linear measurement of signal impairment is EVM, and for the purpose of optimizing phase noise it’s useful to assume the signal impairment is dominated by phase noise. Then it’s straightforward to use the rule-of-thumb that the pilot tracking effectively “tracks out” phase noise at offsets up to about 10 percent of the OFDM subcarrier spacing. That would be 31 kHz for the 312.5 kHz subcarrier spacing in this WLAN example.

EVM can then be estimated by integrating the single-sideband (SSB) phase noise power at offsets greater than 10 percent of the subcarrier spacing and less than the channel bandwidth, and adding 3 dB to convert SSB power to double sideband (DSB) or total power. In the display above, the power integration is performed by band-power markers in the measurement application and the result is -26.35 dBc when 3 dB is added to the -29.35 dBc marker reading.

The “10 percent of subcarrier spacing” rule-of-thumb is thought to be slightly conservative, and later measurement by a vector signal analyzer bore this out, with measured EVM just a fraction of 1 dB better than predicted.

The loop-closing process of generating impaired signals and verifying their impact on receivers is a powerful tool for optimizing OFDM systems and other complex designs. More detail on this approach to understanding phase noise and OFDM is available in an article I wrote last year entitled “Optimize OFDM Via Phase-Noise Injection.” You’ll find it in the October 2012 issue of Microwaves & RF magazine.

Beware, however, that the article contains an error in the formula for calculating EVM from integrated phase noise. While the figures in the article’s examples are correct, my efforts to clarify the formula and its description went awry. I’m sorry about that(!) and here’s a better formula:

EVM (dB) ≈ [integrated SSB phase noise from 10% of subcarrier spacing to channel BW] + 3 (dB)
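
Here is a worked example of that formula; the pedestal level and integration limits are invented for illustration and are not the article’s values. For a flat pedestal, the integration collapses to level + 10·log10(bandwidth):

import numpy as np

subcarrier_spacing = 312.5e3            # Hz, as in the WLAN example
f_start = 0.1 * subcarrier_spacing      # ~31 kHz: pilot tracking handles less
f_stop = 10e6                           # illustrative channel bandwidth, Hz
pedestal_dbc_hz = -100.0                # illustrative flat SSB pedestal level

# Integrating a flat pedestal is just level + 10*log10(bandwidth):
ssb_dbc = pedestal_dbc_hz + 10 * np.log10(f_stop - f_start)
evm_db = ssb_dbc + 3                    # SSB -> DSB (total) phase noise power
evm_pct = 100 * 10 ** (evm_db / 20)

print(ssb_dbc)     # ~ -30 dBc integrated SSB phase noise
print(evm_db)      # ~ -27 dB estimated EVM
print(evm_pct)     # ~ 4.5 %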

Originally posted Dec 20, 2013

 

The power of engineering analysis and creativity

We’ve been fighting the good fight for better measurements all year long, and now the holiday season—perhaps with a work break!—is upon us. It’s a good time to take a slightly wider view of the challenges and opportunities that face us, with an eye toward improvements in the new year.

As engineers, we want to do more than just make the best of limited choices. We seek deeper understanding and chances to add our own insight and skill, aiming to actually improve the available choices. Let me illustrate the difference with a couple of diagrams.

First, let’s take a look at what some call the “designer’s holy triangle.”

This classic triad expresses the tension and interaction between three factors characterizing many projects, products, or problem solutions. (Image from Wikimedia Commons)

The traditional interpretation of this figure is as a Venn diagram with no central overlap and thus a forced choice constrained to any two of the three attributes. It’s a triangle that doesn’t seem too “holy” to me, and I’d argue it may be a perspective-limiting view. It’s useful as a reality check and a reminder to be realistic; however, for me (and maybe you) it fails to inspire innovative solutions.

For a few years I’ve been thinking instead along the lines of the alternative triad below.

This alternative triad uses information rather than performance as a third element in the measurement process. It symbolizes how measurements may be improved by treating information as an input to the measurement process and not only an output.

I may have made this up myself, as I can’t find earlier examples. However, if you can shed any light, please do so in the comments. I believe I started with the well-known example of cost-time-performance tradeoffs and altered one part to better reflect what I’ve learned.

What exactly have I learned? And what’s different about this triangle? Mainly it’s the general concept of added information as applied to the tradeoffs in making measurements.

A straightforward example is a traditional spectrum measurement. Time is measurement time. Information received from the measurement process can be quantified in terms of accuracy, dynamic range, signal/noise ratio, frequency resolution, measurement variance, and so on. Cost contains many items: equipment cost, the cost to make an individual measurement, the cost to measure a DUT, or perhaps the cost of writing the code needed to produce the required result.

Thus a typical approach is to equate information with performance and then improve performance in one of several ways:

  • Improve signal/noise ratio, accuracy, variance, and frequency resolution by using a narrower RBW and consequently longer measurement time.
  • Improve measurement performance by using a more expensive analyzer with inherently better accuracy, dynamic range, sensitivity and (perhaps) measurement speed.
  • Improve (i.e., lower) cost by sacrificing performance for a given measurement time or tolerating a longer measurement time for a given performance level.

These optimizing approaches, however, may miss a fundamental opportunity. If instead you view information as an input to the process rather than only an output, you open the door to an expanded and improved set of tradeoffs. It’s a way to take advantage of your knowledge, insight and creativity to expand the otherwise-constrained envelope of tradeoffs. Here are three examples:

  • Improve measurement time by using segmented sweeps or list sweeps to measure only frequencies of interest.
  • Improve signal/noise ratio and measurement variance by using coherent (time) averaging on repetitive signals (a powerful and under-appreciated technique and subject of a future post).
  • Improve measurement cost by using a passive pre-filter or internal analyzer stage to remove known large signals in the measurement span. Removing large signals can allow use of lower input attenuation and reduce the required analyzer performance (such as sensitivity or dynamic range) and thus solution cost.

In each case, you’re adding information to the measurement process and improving the landscape of tradeoffs you face. Many times, these improvements can be obtained at little or no incremental cost. There are countless potential examples that share the approach of adding information to the process rather than seeing information only as a process output.

You’re probably already doing this instinctively. However, it can be useful to take a more deliberate approach to determining all the different types of information you can add to improve your choices. You can explicitly collect what you know about the measurement or device involved, including your priorities and measurement options.  You might also ask yourself what measurement process outputs you can do without. Each of these factors can point to potential improvements.

Continuing the spectrum analysis example, your analyzers may also be adding information that improves measurement speed and accuracy. For example, some recent-generation analyzers with digital IF filters use precise data about the repeatable dynamics of these filters to sweep them from three to six times faster than equivalent analog filters while maintaining specified accuracy and resolution. Because the dynamic effects of sweeping are well-corrected, information is added to the measurement process to yield faster measurements with no extra costs.

Another example of adding information is the sweep speed enhancement technology recently added to Agilent X-Series signal analyzers such as the PXA. Sophisticated RBW filters are matched to the sweeping LO of the analyzer, enabling sweep speeds up to 50 times faster than before. And because the improvement involves enhanced information processing, it’s simply a software upgrade for some existing analyzers.

 

Like knowledge, information is power you can use to make things better.

Originally posted Dec 30, 2013

 

With a few RF measurements along the way

A holiday break can be a chance to step back a little from the demands of designing and testing the latest technology and instead pursue a deeper appreciation of other impressive technologies. I’d like to take that liberty here and offer a window into another world and another time: the Apollo space program that sent humans beyond Earth orbit for the first and only time, and allowed us to explore the moon in-person.

Well, not all of us. Even among the few blessed with “the right stuff,” only 12 were chosen to experience both the lunar journey and the chance to walk around on another world. For me, even as a kid, Apollo was the most compelling and fascinating technological adventure of my lifetime. Looking up at the moon at night and realizing that, in my direct view a quarter million miles away, there are people working and exploring the surface was awe-inspiring and unforgettable.

In the years since, I’ve wondered what the personal experience would have been, and fortunately there are excellent ways to satisfy this curiosity. The principal key is information now freely available on the Internet, primarily from or through NASA. As an example, I’ll share just one photo from Apollo 17, the last manned lunar mission.

Manned exploration of the lunar surface wasn’t just a matter of getting there, walking around, and scooping up a few samples. With the help of a specialized vehicle the astronauts covered a lot of ground and did a lot of science. Click to expand this photo and note how small the lunar module appears to be, though it’s more than 20 ft. tall! (Image AS17-140-21493 from NASA via apolloarchive.com)

I’ve found, to my great satisfaction, that the cumulative effect of absorbing the pictures, descriptions, chronologies and maps is a passable sense of what it was like to be there. Not the full experience, of course, but more than I expected and very worthwhile and enlightening.

It’s simple to do. Just start with one or two of the resources below and follow your curiosity:

  • The Apollo Lunar Surface Journal is probably the single most useful resource. It includes pictures, detailed captions, radio transcripts and links to related sites.
  • The Apollo Archive is a large repository of digital scans, including the high resolution pictures from the astronauts’ medium-format Hasselblad film cameras.
  • Exploring the Moon by David Harland is a large-format book with full narratives of the surface explorations including detailed maps of the terrain covered by walking and driving. The book also describes the scientific objectives of each mission and the tradeoffs and moment-by-moment decisions involved.
  • The Apollo Flight Journal is a companion to the lunar-surface journal and the great detail provided in the transcripts is an important part of understanding what’s involved in getting there and getting back. It adds a vital element of perspective that isn’t available any other way.
  • Apollo by Charles Murray and Catherine Bly Cox provides comprehensive coverage of the entire program and a wealth of detail (previously published as Apollo: The Race to the Moon).

Some contend that the lunar landing happened 20 to 40 years earlier than it otherwise would have because of the contest between two superpowers and linkage to ICBMs and other high-priority military technology. Indeed, a successful moon landing by the end of the 1960s was predicted by very few science writers and science fiction authors, even in the late 1950s. After exploring the material described here I suspect you’ll agree that this was an achievement and timeframe without precedent.

Of course they couldn’t have gotten near the moon—literally, they had to aim at a lunar orbit insertion window just a few miles wide at a distance of 240,000 miles!—without some amazing RF and microwave technology for measurement and communications. They used wide-bandwidth S-band microwave links, pulse Doppler radars, flexible short range VHF links, and amazingly sophisticated telemetry.

Though most of the resources here point to older material, combining that material with new simulation technology can also be enlightening. A recent article Earthrise: Recreating an Iconic Moment in Space History shows how.

Lastly, some of the books covering this material—including one mentioned here—are out of print. A great source of used and hard-to-find books from small bookstores at great prices is the Advanced Book Exchange.

Happy holidays. I hope you enjoy the virtual journey as much as I did.

Originally posted Jan 10, 2014

 

Cheating the Shannon limit

My career has, happily, been punctuated by opportunities to work with a series of amazing RF technologies. And by “amazing” I mean innovative, impressive and effective. Many of these technologies also seemed, at first glance, to be overly complicated or of doubtful practicality. The name Rube Goldberg comes to mind.

Fractional-N technology in synthesizer phase-locked loops (PLLs) is the first example I remember. This technology came of age in the late 1970s and I first encountered it in 1980. With it, a divider in a PLL could be an arbitrary fraction instead of an integer, allowing very fine frequency resolution over a wide range with a single—albeit complex—PLL. And boy, what complexity! An intricate dance of analog signals and digital controls was updated at the astonishing rate of 100,000 times per second by a proprietary ASIC. Division constants were continuously changing, even for CW signals.

To compensate for the effect of the changing division constants, there were analog phase interpolators, which were switchable current sources if I recall correctly. From there it seemed to me only a short step to something like a flux capacitor* or even an oscillation overthruster.**

However, my skepticism simply demonstrated how shortsighted I could be. I gradually learned to nurture a degree of open-mindedness as a series of new technologies drove the RF world onward. While some of these were indeed fanciful or at least impractical, many worked well and made important contributions to our art. Achieving maturity and profitability took a lot of time and work, but of course as engineers that’s what we’re here for.

Two amazing-at-first-glance technologies of recent years are CDMA and MIMO. Along with space-time coding (STC)—which is perhaps a little less amazing—they share the approach of transmitting multiple RF signals at the same frequency at the same time.

A conceptual diagram of 2×2 MIMO is shown at left and a CDMA code-domain power measurement is shown at right. Both schemes transmit multiple signals at the same frequency and at the same time but the purpose and benefits are different.

Though these techniques all transmit multiple simultaneous signals over the same RF channel, their purposes are different:

  • STC is a diversity scheme, transmitting differently-coded versions of the same signal and using processing at the receive end to increase the likelihood that the signal can be successfully demodulated.
  • CDMA is a multiplexing scheme, providing a way to share spectrum among multiple users by multiplying signals from different users by different codes at the transmit end. A receiver with knowledge of the codes can use signal-processing techniques to separate them.
  • MIMO is a capacity enhancement scheme, transmitting independent signal streams over a single RF channel by using multiple independent transmitters and receivers to create the (partial) equivalent of separate transmit channels.

Thus, two of the three techniques aim to use the available RF channel more effectively or efficiently but don’t change the fundamental carrying capacity of the channel, often referred to as the Shannon limit. MIMO techniques seek to cheat or evade Shannon (and Hartley and Nyquist too!) and effectively create more of the RF spectrum that is so valuable these days.
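
For a sense of the numbers, here is a hedged sketch: the Shannon–Hartley limit for a single channel alongside the standard log-det capacity expression for an N×N MIMO channel. The bandwidth, SNR and random channel matrix are all illustrative.

import numpy as np

B = 20e6                         # illustrative channel bandwidth, Hz
snr = 10 ** (20 / 10)            # 20 dB SNR

# Shannon-Hartley limit for a single channel:
c_siso = B * np.log2(1 + snr)
print(c_siso / 1e6)              # ~133 Mbit/s

# N x N MIMO with the channel known at the receiver and power split
# equally: C = B * log2 det(I + (SNR/N) * H * H^H).
rng = np.random.default_rng(3)
N = 2
H = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
M = np.eye(N) + (snr / N) * (H @ H.conj().T)
c_mimo = B * np.log2(np.real(np.linalg.det(M)))
print(c_mimo / 1e6)              # approaches N times the SISO figure when
                                 # the paths are sufficiently uncorrelated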

Of course, by using coding and multiplexing schemes, the extra capacity of MIMO can be traded for other benefits as needed.

Amazing technologies indeed, and I’m no longer skeptical about them. I just have to keep an open mind about the next new thing.

Finally, if you need a quick break from real, complex and difficult-to-tame technology you might make a two-minute time investment to learn the technology of the Retro Encabulator.

* From a 1985 science fiction movie

** From a 1984 science fiction movie

benz

Sneaky little RF power errors

Posted by benz Sep 16, 2016

Originally posted Jan 20, 2014

Small but significant power anomalies: a case study

After a few somewhat abstract posts, it’s time for something more concrete. I have in mind a signal pair I encountered several years ago, and the signals conspired to hide a subtle but potentially significant problem.

I call the errors “sneaky” because a kind of symmetry combined with a correct average power value to minimize the visibility of the problem. Seeing through these obscuring factors illustrates some important aspects of power measurement and can help you decide which RF power errors—sneaky or not!—are worth fixing.

The signals in question are the two RF transmitter outputs of an 802.11n WLAN unit in mixed-mode operation. A portion of their RF envelopes, focused on the frame preamble, is shown in the traces below.

RF envelope of two 802.11n MIMO transmitter outputs, with band-power markers. Amplitude anomalies in the HT-LTF portion of the preamble are measured with band-power markers. The upper marker is one symbol long and the bottom marker is two symbols long.

The traces show the RF envelope (log magnitude) of the signal during the preamble and the beginning of payload transmission. Symbol length is 4 µs and preamble elements are usually one or two symbols long. This signal pair is 2×2 MIMO, so the high-throughput long training field (HT-LTF) is two symbols long. In the top trace, the band-power marker measures the first symbol of the HT-LTF; in the lower trace, the marker measures both symbols of the HT-LTF.

Though the content of the preamble and subsequent data transmission changes during the frame, causing the power statistics to change, the transmit power level is generally constant. The anomaly is the power variation during the HT-LTF. Interestingly, the variations are symmetrical, with the first symbol of the HT-LTF having higher-than-average power and the second having lower-than-average power for channel 1 and the opposite for channel 2.

The interesting—and slightly tricky—results of this symmetry are small errors in average power that do not appear in most measurement setups:

  • Power of preamble alone or total signal: power reading normal
  • Total power of two-symbol preamble segment (HT-LTF): power reading normal
  • Power of both signals combined, for all or any symbol or portion of preamble: power reading normal
  • Power of one symbol of the two-symbol preamble segment (HT-LTF), measuring one transmitter only: power reading abnormal

Thus, in most of the ways that you would measure signal power, the value looks correct for either signal or both together. Seeing the anomalies clearly requires not just a measurement focused on a specific symbol in the preamble, but on a single transmitter output. Coupling the band-power markers will make it easier to understand time alignment.
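
A toy numerical example (power levels invented) makes the masking effect plain: symmetric per-symbol deviations cancel in the two-symbol average and in the combined-channel average, and only the per-symbol, per-transmitter reading exposes them.

import numpy as np

def avg_dbm(p_dbm):
    """Average in linear power, then convert back to dBm."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(p_dbm) / 10)))

# Invented HT-LTF symbol powers: +/-0.5 dB symmetric deviations
# around a -10 dBm target, mirrored between the two transmitters.
tx1 = [-9.5, -10.5]          # symbol 1 high, symbol 2 low
tx2 = [-10.5, -9.5]          # the mirror image

print(avg_dbm(tx1))          # ~-10.0: two-symbol reading looks normal
print(avg_dbm(tx1 + tx2))    # ~-10.0: both channels together look normal
print(tx1[0], tx2[0])        # -9.5 vs -10.5: only the per-symbol,
                             # per-transmitter reading exposes the anomaly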

In many cases, these specific and revealing measurements will be made because an anomaly appears in an RF envelope measurement, whether made using a VSA or a swept analyzer in zero-span mode. A small anomaly such as this, however, may not be obvious unless the preamble is examined closely.

Why be so concerned about a power error that doesn’t appear in most measurements, doesn’t prevent demodulation, and doesn’t affect total power? The answer: preambles are special because they are the regulating and steering elements of digitally modulated signals (as are mid-ambles and pilot subcarriers). Because preambles guide signal timing, equalization and demodulation, systems are unusually sensitive to problems in this area.

Given the time alignment of these errors, their symmetry and power accuracy overall, it’s logical to suspect that this is an error in the digital baseband, specifically the math for the MIMO channel references.

Another potentially sneaky element comes to mind: The effect of these errors on receivers may vary considerably with signal-to-noise ratio. In good conditions, the channel-estimation processes they drive will be accurate and the separation of MIMO spatial streams will be effective. In poor conditions, the lower amplitudes of HT-LTF symbols may impair channel estimation and thus MIMO stream separation.

Situations like this benefit from a three-step approach to analyzing digitally modulated signals. The first step is spectrum and time-domain or vector measurements to find impairments that may cause problems downstream in demodulation. This is especially important when problems are obscure in demodulation measurements but easier to see in the time or frequency domain, as in this case.

If you’re interested in more detail on the three-step approach, you can view an upcoming webcast live on January 22 or a recording any time afterward. Go to www.keysight.com and search on “Successful Modulation Analysis in 3 Steps Webcast.”

Originally posted Jan 30, 2014

Better spectrum measurements through data reduction

Like most engineers using spectrum and signal analyzers, I make measurements without paying much attention to the “detector” settings. These are visible in the menus and in a little grid near the top of the screen that summarizes which detectors are in use on each trace. Occasionally, I notice that changes to parameters such as RBW and VBW affect the look of the trace. The same is true for choices such as selecting a marker to read noise level, harmonics or band power.

If I’m aware of detectors at all, I trust the analyzer to make appropriate choices—and that’s a good thing. Most modern analyzers make reasonable and safe default detector choices—at least in terms of measurement accuracy—especially when users tell them what sort of measurement is desired by selecting a measurement or marker type.

Of course, the analyzer is not clairvoyant and therefore the default setting and linkages aren’t always optimal. If you want to optimize for better measurements, it helps to know more about what the detector is and does. In this post I’ll explain the function of the detectors and take the first step toward helping you choose the one that’s optimal for your measurements.

But first, I’ll describe which detector it isn’t, as shown in the figure below. The diagram describes a spectrum analyzer from the IF onward in conceptual terms, and many modern analyzers perform these operations with DSP.

This partial, simplified block diagram of a spectrum analyzer shows the two detectors and their locations in the signal path. The IF or video detector converts the IF signal into a magnitude value and has no user controls. The display-detector or detector-mode setting is controlled by the user and determines how changing magnitude values are converted to measurement points.

The IF or video or envelope detector converts the IF output to a DC “video” signal proportional to the magnitude of the IF output. It functions as an AM demodulator and, as the symbol indicates, it was sometimes implemented as a diode circuit. There’s a rich history here, including cat’s-whisker detectors and coherers going back more than a century! Unfortunately, that technology and wild things like spark-gap radios will have to wait for a future post.

For the analyzer user, the real action happens downstream at the display detector. This circuit or algorithm performs one or more types of data reduction, as shown in the figure below.

The function of a display detector is enlarged and shown over an interval of slightly more than one display point or bucket. The display detector reduces many measurements of IF magnitude to a single one for display, here selecting a maximum or minimum or an unbiased sample.

The decisions inherent in the data reduction can be very helpful in focusing the measurement on the intended target. For example, a peak detector can ensure that spurious signals falling between display points are never missed. A sample detector can avoid bias and improve accuracy in measuring noise. An average detector—the subject of a future post—is a powerful tool for reducing variance caused by noise and modulation.
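If you want to experiment with the idea, here is a minimal sketch of display detection as data reduction. It is an illustrative model of the concept, not the algorithm inside any particular analyzer, and the function and mode names are mine.

```python
import numpy as np

def display_detector(if_magnitude, n_buckets, mode="peak"):
    """Reduce many IF magnitude samples to one value per display bucket."""
    buckets = np.array_split(np.asarray(if_magnitude), n_buckets)
    if mode == "peak":      # never miss a narrow signal between points
        return np.array([b.max() for b in buckets])
    if mode == "negpeak":   # trace out the minima of the noise
        return np.array([b.min() for b in buckets])
    if mode == "sample":    # one unbiased pick, good for noise
        return np.array([b[0] for b in buckets])
    if mode == "average":   # mean of the samples (more on this later)
        return np.array([b.mean() for b in buckets])
    raise ValueError(f"unknown detector mode: {mode}")

# 100,000 noise-like magnitude samples reduced to a 1,001-point trace
rng = np.random.default_rng(0)
trace = display_detector(rng.rayleigh(size=100_000), 1001, mode="peak")
```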

As you can see from these examples, the optimum detector will depend on the measurement you’re trying to make. Agilent signal analyzers choose different detectors automatically, matching them to the setup choices you make and any measurement applications you may be using. When you’re interested in multiple measurements of the same signal, Agilent analyzers allow different detectors to be used simultaneously for each trace in a single sweep.

Detectors are both important and useful, affecting measurement speed, accuracy and dynamic range. I’ll have more examples in posts to come, and in the meantime you can learn more in the new, updated version of application note 150 Spectrum Analysis Basics.

benz

Pilot tracking civilizes OFDM

Posted by benz Sep 16, 2016

Originally posted Feb 10, 2014

Balancing RF performance, processing power and throughput

OFDM is a demanding modulation scheme for RF equipment. The numerous subcarriers are so closely spaced that phase noise is a potentially serious problem. In addition, the combination of these independently modulated—but time-aligned—subcarriers results in an RF signal with a high peak-to-average power ratio (PAPR) that challenges amplifier linearity and power efficiency.
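A quick way to get a feel for the PAPR problem is to build one OFDM symbol in a few lines of Python. This uses a rough, 802.11a-like layout chosen for illustration—52 random QPSK subcarriers in a 64-point IFFT—not the exact standard mapping, which also includes pilots.

```python
import numpy as np

rng = np.random.default_rng(0)

# 52 random QPSK subcarriers placed in a 64-point IFFT, roughly like
# 802.11a (the exact standard mapping and pilots are omitted here)
n_fft = 64
qpsk = (rng.choice([-1, 1], 52) + 1j * rng.choice([-1, 1], 52)) / np.sqrt(2)
spectrum = np.zeros(n_fft, dtype=complex)
spectrum[1:27] = qpsk[:26]    # positive-frequency subcarriers
spectrum[-26:] = qpsk[26:]    # negative-frequency subcarriers

# One OFDM symbol in the time domain, and its peak-to-average ratio
x = np.fft.ifft(spectrum)
papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR of this symbol: {papr_db:.1f} dB")  # typically ~8-11 dB
```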

One solution is to simply design or use synthesizers with very low phase noise and power amplifiers with extremely high linearity. But those approaches are expensive, and wireless engineers and system architects are cleverer than that. They recognize not only the growing power and power efficiency of advanced DSP but also its ability to change the balance of . . . um . . . power between RF cost/performance and processing algorithms.

A good example is the use of pilots in OFDM transmissions and pilot-tracking routines in OFDM receivers. Other interesting examples include PAPR-reduction techniques, relevant to OFDM and a good topic for a future post.

In many OFDM systems, pilots are used in the form of dedicated subcarriers that are transmitted continuously during the data portion of frames or subframes. These pilots displace subcarriers that would normally carry payload data and instead transmit a prearranged sequence, typically with low-order modulation such as BPSK or QPSK. Pilots are thus conceptually similar to equalizer training sequences: they compensate for non-ideal RF performance, but at the cost of carrying capacity and additional processing power.

The plot below shows OFDM error versus subcarrier, with the pilot symbols in white.

This error vector spectrum trace plots error versus frequency or subcarrier. Four of the 52 subcarriers, shown in white, are continuous pilots that carry reference information for demodulation rather than payload data. Note the comparatively high error associated with one subcarrier, caused by spurious interference.

The essential thing to understand about pilots and OFDM demodulation is that the signal is demodulated relative to the pilots, which are presumed to generally share the impairments of the rest of the modulated signal. This part of the demodulation process is called pilot tracking and it allows the demodulation process to track signal errors and correct for them on a symbol-by-symbol basis.

Because the receiver knows the intended I/Q or amplitude/phase values of the pilot symbols, it can adjust all other symbols by the same amount required to fix the error it sees in the pilots. In this way, errors of signal amplitude, phase and symbol timing can be removed or “tracked out” as shown in the measurements below.

OFDM demodulation controls in the 89600 VSA software allow pilot tracking types to be individually selected as a way to isolate errors when troubleshooting. This signal has both amplitude droop and phase drift, shared by the pilots (white) and the data symbols (red). Correcting the pilot symbol locations corrects the data symbols as well.

Switching off the amplitude and phase elements of pilot tracking reveals significant errors in this 16QAM signal. Note that the BPSK pilots show the same error as the data symbols, and that both the amplitude and phase portions of the error appear greater at higher signal amplitudes. This suggests that straightforward correction of the data I/Q values, scaled to the pilot values, will correct the data symbols properly.
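Here is a minimal sketch of the core correction, assuming a single common complex error shared by pilots and data. The subcarrier count is real, but the pilot positions and pilot values are illustrative rather than the exact 802.11 definitions.

```python
import numpy as np

def pilot_track(rx, pilot_idx, pilot_ref):
    """Correct one OFDM symbol using the common error seen on its pilots."""
    # Average complex error: how the received pilots differ from the
    # known transmitted pilot values
    err = np.mean(rx[pilot_idx] / pilot_ref)
    # Apply the same correction to every subcarrier in the symbol
    return rx / err

rng = np.random.default_rng(1)
tx = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=52) / np.sqrt(2)
pilot_idx = np.array([5, 19, 32, 46])   # illustrative pilot positions
tx[pilot_idx] = [1, 1, 1, -1]           # known BPSK pilot sequence

# A common amplitude/phase error (e.g., gain droop plus phase drift)
# hits pilots and data alike...
rx = tx * (0.9 * np.exp(1j * np.deg2rad(10)))

# ...so dividing out the error measured on the pilots fixes the data too
corrected = pilot_track(rx, pilot_idx, tx[pilot_idx])
print(np.allclose(corrected, tx))       # True
```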

These selectively configured demodulation options in the 89600 VSA software are obviously useful in determining the source of errors. The pilot error quantities themselves can also be displayed in the form of common pilot error (CPE) measurement traces, in which the demodulation software isolates errors common to all pilots. CPE results quantify the performance of core circuits and algorithms in real-world operation.

For the wireless engineer, these pre- and post-tracking results are also useful in making system optimization choices. Signal impairments that can be readily corrected through pilot tracking—such as close-in phase noise—may not justify the cost or power required to reduce them below a particular magnitude. I discussed an example of pilot tracking and phase noise in a previous post Phase Noise and OFDM: Adding the Right Amount in the Right Place.

There are other interesting aspects of pilot tracking, including pilot schemes that are more complicated than the one shown here. I’ll try to cover them in future posts.

If you’re interested in more demodulation and troubleshooting techniques, you may want a copy of our latest application note Successful Analysis of Modulated Signals in Three Steps.

 

By the way, I should mention that the spurious interference in the first figure above is not much of a concern for demodulation, and that robustness is an important benefit of OFDM. However, if the interference affected one of the pilot subcarriers it would be much more significant!

Originally posted Feb 20, 2014

 

There’s no shame in being average—if you can get there quickly

In RF testing, noise and the measurement variance it causes are sometimes the biggest things standing between you and better measurements. The combined effect can limit amplitude accuracy and effective resolution, and the various approaches to minimizing the effect can severely hinder measurement speed.

In a previous post, I complained about how noisy the universe is—though I understand if you have no sympathy for me!—and promised to discuss efficient averaging techniques. One of the best is a display feature called the average detector.

To explain the average detector and its advantages, here is the block diagram from my recent post Detector decisions: see it your way.

This partial, simplified block diagram of a spectrum analyzer shows the two detectors and their locations in the signal path. The video detector converts the IF signal into a magnitude value and has no user controls. The display detector is controlled by the user and determines how varying magnitude values are converted to measurement points.

A traditional way to reduce measurement variance due to noise is the video bandwidth (VBW) filter. It’s simply a lowpass filter acting on the detected amplitude of the signal in the analyzer’s IF section. The detector output is called a video signal and the VBW filter thus smoothes the trace and performs an averaging function. Very simple and clever, and it has some advantages for measuring CW signals near noise. Unfortunately, these benefits often affect measurement speed, and I’ll discuss the tradeoffs in a future post.

Note, however, that the VBW filter is usually averaging a log-scaled signal. It is performing an average of the log of the signal and, as explained previously, The Average of the Log is Not the Log of the Average unless the signal in question is CW. These days it seems like only a minority of the signals we measure are CW, and we don’t want the amplitude statistics of the signal to foul up our average or accuracy—so what are we to do?

In current-generation spectrum/signal analyzers, the answer is the average detector. These analyzers use digital IF/video detectors that respond to linear signal power (watts, not dBm) and therefore an average detector can calculate the log of the average power. The figure below illustrates the process of averaging all the linear power values between buckets and displaying the result for one display point.

The function of the average detector is enlarged and shown over an interval of slightly more than one display point. The average detector collects many measurements of IF magnitude to calculate one value that will be displayed at each bucket boundary.

The average detector has several important advantages in accuracy and speed. Calculating an average of linear power values and expressing the result in log or dB form produces a result that is accurate independent of the signal statistics. Averaging values between display buckets allows the high sampling rate of the IF detector to be used to best advantage, with no samples thrown out. This maximizes variance reduction in the shortest time.

Finally, using the average detector allows the amount of averaging—and therefore variance reduction—to be easily increased simply by slowing the sweep speed. This is generally a more time-efficient method than trace averaging, in which many successive sweeps are needed to perform a good average (in part because many between-bucket samples may be thrown out).

By the way, the use of an average detector on a power scale is often referred to as RMS averaging or even an “RMS detector.” This usage is a bit confusing because there is no RMS detector per se; however, it indicates that the result is an accurate measurement of the RMS power of a signal, independent of its amplitude dynamics.
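Here is a small numeric demonstration of the difference, using exponentially distributed power samples as a stand-in for the noise seen in one display bucket. The -60 dBm level is arbitrary; the roughly 2.5 dB low bias of the log average for noise is the general result.

```python
import numpy as np

rng = np.random.default_rng(2)
# Exponentially distributed power samples: a stand-in for the noise
# seen in one display bucket, with a true mean power of -60 dBm
watts = rng.exponential(scale=1e-9, size=100_000)

# Average detector: average the linear power, then convert to dB
avg_det_dbm = 10 * np.log10(np.mean(watts) / 1e-3)

# Log (video-style) averaging: convert to dB first, then average
log_avg_dbm = np.mean(10 * np.log10(watts / 1e-3))

print(avg_det_dbm)   # ~-60.0 dBm: the true average power
print(log_avg_dbm)   # ~-62.5 dBm: biased low for noise-like signals
```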

Of course, the average detector is not always the best choice and there is more to say in this area, especially in terms of averaging scales and the averaging processes applied to spectra and signals. Stay tuned.

Originally posted Feb 28, 2014

You may be faster and smarter than you realize

Spectrum and signal analyzers are often used to provide accurate measurements of well-understood signals. Since their earliest days, however, they’ve also been used to answer a more open-ended question: What signals are out there? This question is especially challenging when one is looking for small signals near noise, or transient signals, or signals that are very narrowband and therefore might fall unnoticed between the display bins of a spectrum analyzer.

One logical solution is to use a peak detector as the display detector, as explained in the recent post Detector decisions: see it your way. With the peak detector, no combination of bin spacing, RBW and center frequency will cause a signal to be missed or its amplitude to be incorrectly measured.

However, the peak detector has its disadvantages. In the absence of averaging such as VBW filtering, it can bias the apparent noise level upward and, depending on analyzer settings and signal composition, it can be hard to know how much. The peak detector can also hide small signals that are below the noise peaks.

As a matter of fact—and as you probably know—no single detector is best for all signals and all situations. So, is this one of those times when we must compromise and simply choose the detector with the fewest disadvantages? Perhaps instead we should follow the old engineering motto—“we can do better than that”—and think a bit outside the box.

We can take advantage of the fast information processing and pattern recognition of the human eye-brain system and combine it with our engineering judgment by using a detector choice that is the opposite of data reduction. We can use multiple simultaneous detectors as shown in the figure below.

Using three display detectors simultaneously provides better insight into spectral content. The blue trace is an average detector; the pink trace is a negative peak detector; and the yellow trace is a “normal” detector, which is an intelligent combination of peak and negative-peak functions. Like the peak detector, it prevents signals such as the single yellow spur from being missed.

In this situation, the experienced eye makes use of the extra information to better understand the combination of signals and noise present. It’s easy to detect both the narrow spur and the adjacent channel power of the modulated signal and, at a glance, understand the average power of the modulated signal along with any clues from the negative peaks of the noise.

Agilent X-Series signal analyzers support up to four simultaneous detectors on separate traces, with complementary functions such as max hold and trace averaging. An offset value can be used on individual traces to separate them for easier viewing.

In many situations, a single detector will give you the answers you need, especially when you’re measuring a single type of signal and know what to expect. Just keep in mind that when in uncharted territory it helps to have multiple views to leverage your signal knowledge and intuition.

 

As mentioned in an earlier post, you can learn more about detector choices and tradeoffs in the new, updated version of application note 150, Spectrum Analysis Basics.

Originally posted Mar 10, 2014

Sometimes less is more.  But sometimes more is more.  And better, too.

The term “big data” has certainly been over-hyped but there is something useful in the concept, in a variety of applications. In this post I’ll show a non-faddish example of how a big data view can increase both productivity and insight for RF engineers.

Before I go further, however, I’ll offer my definition of big data in this context: I’m referring to a situation in which you pull together many pieces and types of information, aiming to see them all at once, discover relationships and get actionable insight. The payoff is that you’ll discover phenomena and understand relationships that you might otherwise miss. It’s a way of leveraging your own engineering skills to improve productivity and reduce the risk of missing problems or settling on a suboptimal solution.

Defined this way, big data can be just the thing for design and troubleshooting in wireless systems. These jobs demand multi-measurement views and analysis because of their complex multi-factor nature, with one type of complexity piled on another and another to an amazing degree.

For example, OFDM systems use multiple subcarriers that can independently carry different types of modulation and can change modulation type after every symbol. Power levels can vary between subcarriers and can change at every symbol. Multiple channels can be used and may carry entirely separate data streams or coded versions of the same stream. Multiple users may be supported by beam forming or multiplexing schemes. Multiple frequency allocations may be aggregated, and reallocated on a dynamic basis. Multiple radios may be used at the same time on the same device, competing for power and processor cycles. The list goes on, with each factor multiplying the complexity of the total task.

Fortunately, our eye-brain system is wired to handle this multiplicity, perhaps better than we realize. We can view multiple data sets on the same trace as I described in the post Spectrum analysis and the opposite of data reduction, and we can view many different traces on the same display as shown in the example below.

Sixteen measurements of a single LTE signal are shown on one Agilent 89600 VSA display. Frequency-, time- and modulation-domain results are related to each other in an arrangement that can be completely customized to spot problems and optimize analysis (click on the figure to view at full size).

Large, inexpensive monitors are available for both PCs and benchtop instruments such as Agilent’s X-Series signal analyzers. An ordinary 1920×1080 screen has nearly twice the pixels of the example shown here, and the tools in the 89600 VSA allow you to configure a display that makes effective use of the powerful pattern recognition and system knowledge you already have.

In the past I’ve described how the right view can make an obscure problem obvious and how a different view can do the same for a different problem, demonstrating that some defects that are obscure in one measurement or domain are obvious in another. By using many simultaneous measurements you’ll maximize the chances that a problem will be clearly revealed in one of them.

There is one more benefit to a large, multi-window display—or even two displays—particularly with the 89600 VSA and demodulation: There are many signal and modulation parameters you may adjust, and you can bring up multiple control windows at the same time, easily switching between them to make the changes you want without the need to close one control and open another.

A big display with lots of measurements won’t make you omniscient but it will move you in that direction!

Originally posted Mar 20, 2014

Soothing the sins of the RF channel – and some hardware sins, too

Adaptive equalization has been an important part of wireless for decades and it’s an essential element of modern standards including the various flavors of 802.11 WLAN, LTE and WiMAX. Although the operation of some equalization techniques can be somewhat obscure, they’re usually easy to see in OFDM schemes and simple measurements can be important tools for troubleshooting.

The main idea is to construct a digital filter in a receiver to compensate for frequency-response errors in the channel. The filter can compensate for errors in the transmitter and receiver itself, though the channel errors usually dominate. Here, “adaptive” refers to filters that adjust to the channel frequency response as it changes over time.

Since the modulation is complex (I/Q), these filters perform complex compensation. And because they are digital, the filters are often specified or measured in terms of impulse response rather than frequency response—but the two representations are equivalent.
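As a quick illustration of that equivalence, consider a hypothetical two-ray channel—a direct path plus one echo. The frequency response seen across the subcarriers is simply the FFT of the impulse response, and vice versa.

```python
import numpy as np

# Hypothetical two-ray channel: a direct path plus an echo three
# sample periods later at 30% amplitude (the impulse response)
h = np.zeros(64, dtype=complex)
h[0], h[3] = 1.0, 0.3

# The frequency response across 64 subcarrier bins is just its FFT;
# the periodic ripple is the classic multipath signature
H = np.fft.fft(h)
ripple_db = 20 * np.log10(np.abs(H))
print(ripple_db.max() - ripple_db.min())   # ~5 dB peak-to-peak
```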

Some equalizers use decision feedback, adjusting themselves with algorithms based on the demodulated data. However, most equalizers use some form of training sequence, which is a portion of the transmitted signal known to the receiver. In OFDM signals, training sequences are often part of a preamble, at the beginning of a frame, and also contain synchronizing and control information. WLAN signals are an excellent example, as shown in the figure below. This is the same preamble with Sneaky little RF power errors, as discussed in a previous post.

A gated spectrum measurement (top trace) of the first portion of an 802.11n WLAN frame shows the short training sequence, which is primarily used for synchronization and frequency correction. Only every fourth subcarrier—of the system’s normal 52 subcarriers—is transmitted and this subset of subcarriers should all be the same amplitude. The lower trace shows the RF envelope with gate markers.

The 89600 VSA software is a great tool for understanding signals like this. Its gated spectrum measurements can be configured graphically or by entering the desired gate timing and length. Above, the gate begins at the start of the frame and is 8 µs or two symbols long, matching the signal’s short training sequence. As you can see, the sequence is composed of every fourth subcarrier and the subcarrier amplitudes provide a basic idea of the frequency response.

The OFDM symbols are continuous over the length of a symbol—that is, they are self-windowing. That allows a uniform or rectangular FFT window to be selected for the gated measurement, providing optimum (narrow) frequency resolution.

Resolution isn’t a problem for the measurement above, but the next element of the preamble—the channel-estimation sequence—is another matter. That sequence is the primary training element for the adaptive equalizer and it contains all of the closely spaced OFDM subcarriers as a way for the receiver to measure the channel in fine-grained detail. Moving the gate window 8 µs to the right yields the measurement below.

A gated spectrum measurement of the channel estimation sequence (top) shows how the OFDM equalizer is trained. The individual subcarriers are transmitted at a lower power than the short training sequence but there are more of them, yielding consistent total power as shown in the RF envelope measurement (bottom).

The optimized frequency resolution of the uniform window allows all 52 subcarriers to be resolved and measured, in much the same way a receiver would train its own equalizer.

Note that the total RF power level is constant over the preamble, though there are changes in power statistics due to changing preamble content.

These measurements were made in a vector spectrum mode, and similar ones are available in the 89600 VSA when using demodulation. They’re a useful troubleshooting tool and also powerful for optimizing signal quality because they show how linear errors are corrected. They also allow linear errors to be separated from nonlinear errors, providing insight into which ones are most worth correcting and which ones will be fixed automatically. All sins are not the same!

If you’d like to see what frequency-response errors look like in a real-world environment at short range, see my “hand waving” post and the video link there.

Originally posted Mar 10, 2014

Equalizer math can make a channel “flat” but that may not be what you really want

My previous post on adaptive equalization and OFDM summarized the basics of equalization and the use of a dedicated training sequence in OFDM preambles to configure a receiver’s equalizer. Theoretically, this approach allows the receiver to completely correct for channel frequency response errors in the channel and in the transmit/receive equipment, providing a frequency response that is flat in amplitude and group delay.

Great! Perfectly corrected signals for demodulation! Alas, you know that’s too good to be true, and if the universe worked that way you wouldn’t have much job security. When dealing with a real-world channel and the training sequence recovered at the downstream end, there are good reasons to avoid correcting it to be completely flat.

For example, let’s start with an 802.11a OFDM signal I captured at my desk a while back using a bent paper clip as an antenna, stuck into a BNC adapter.* Using the gating settings I described last time reveals the training sequence as recovered by the receiver.

This gated spectrum measurement of the channel estimation sequence (top) shows the amplitude frequency response of the RF channel at 5.8 GHz in an office environment. The vertical scale is 5 dB/div and the frequency response varies by 12 to 15 dB.

The subcarrier peaks trace out the frequency response of the channel at 5.8 GHz and can be used for corrections in terms of magnitude/phase or I/Q. However, there are limitations in the use of this training sequence, mostly related to noise.

First, note that the channel frequency response varies by about 12 to 15 dB. If the largest subcarriers are used as a reference it means that the smallest ones will be boosted by 12 to 15 dB, and so will the noise associated with those subcarriers. Because these small subcarriers are relatively close to the noise level, the equalizer will give noise power a significant boost. If the effect of multipath were to create a null at some subcarrier frequencies—which it sometimes does—the equalizer would boost the noise at these frequencies even more, effectively replacing modulated subcarriers with noise.
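A simple sketch of the noise-boost problem, using a hypothetical zero-forcing equalizer—the simplest 1/H correction—with made-up subcarrier gains. Real receivers use more sophisticated filters, including the mitigations described below.

```python
import numpy as np

# Made-up channel gains across four subcarriers, one in a deep fade
chan_db = np.array([0.0, -2.0, -6.0, -15.0])
H = 10 ** (chan_db / 20)

# Zero-forcing equalizer: flatten the channel by applying 1/H
eq_gain = 1 / H

# The noise on each subcarrier is boosted by the same factor (in dB)
noise_boost_db = 20 * np.log10(eq_gain)
print(noise_boost_db)   # [ 0.  2.  6. 15.]: the faded subcarrier's
                        # noise comes up 15 dB along with its data
```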

Noise also creates uncertainty in the construction of the equalizer filter. Each received subcarrier from a training sequence represents a very limited amount of energy, acquired over a very short time, and it can be a challenge to construct a good equalizer filter from such limited information. Errors from bad equalization can cause demodulation errors.

There are several ways to mitigate these problems, including intelligent and dynamic limits on the gain of equalizer filters. Filters may be trained progressively over multiple bursts, implementing a kind of averaging function, with a time constant related to how rapidly the channel is expected to change. Clever design of filter construction algorithms may also be effective at sorting subcarrier energy from noise.

Because energy and information are related in this situation, more information may be gathered by training the equalizer on the data signal itself, in addition to the training sequence. This combination of decision feedback and a training sequence will require more processing power, but that is usually not a problem in today’s receivers.

Equalizers, no matter how advanced and well-trained, can’t fix every subcarrier. When modulation fails on one or more subcarriers, coding can come to the rescue by reconstructing missing data from other subcarriers. Systems can even change coding schemes on the fly, to respond to dynamic channel conditions.

Equalization operations can also be useful in troubleshooting. In tools such as the 89600 VSA software, the training method is often selectable for OFDM signals. The equalizer filter can be created from either the training sequence alone or a combination of the training sequence and received data, and the filter can be viewed directly. Switching between these techniques while watching EVM and the equalizer filter can give you insights about channel impairments and system performance.

* Don’t scoff. A paper clip cut to the right length is a pretty good antenna at 5.8 GHz, and using an inexpensive BNC adapter kept me from damaging my analyzer’s expensive input connector.

Originally posted Apr 10, 2014

CW signals, impulsive ones, and our old friend noise

In my mind, the fundamental task of spectrum analysis had always been a simple matter of separating one signal from another. That changed when a diagram from a spectrum-analyzer designer got me thinking a little more broadly and deeply. Here’s what I learned about making better measurements on signals that have different characteristics.

It’s fair to think of a spectrum analyzer sweeping a narrow filter across a selected chunk of the RF spectrum as a way to separate and measure the signals there. Sometimes the signals are closely spaced, sometimes not. Sometimes we’re measuring the separable parts of a single signal, sometimes we’re measuring different signals, and sometimes we aren’t sure.

Traditionally, most spectrum analysis hardware and techniques made an implicit assumption that the signals in question were CW, and therefore both the techniques and the results were valid for this case. What that meant for practical measurements was that we’d choose a resolution bandwidth (RBW) filter narrow enough to separate signals that were as closely spaced as we expected, or one that offered the frequency resolution we needed.

We might also choose a narrow RBW filter as a way to reduce the apparent noise floor of the measurement and thereby improve measurement accuracy. A factor of 10 reduction in the RBW reduces the noise power in the filter by the same amount (10 dB) while not affecting the measurement of the CW signals we’re trying to measure. That improves the signal/noise ratio (SNR) of the measurement.

While we get powerful benefits from narrow RBWs, there are tradeoffs, most obviously in measurement time. Because the maximum sweep rate of traditional filters varies as the square of the RBW, it can be painful to get the benefits of a narrow one when sweep rate is the limiting factor.
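The two scaling rules are easy to capture in code. This sketch assumes the traditional swept-filter behavior described above (modern digital-RBW analyzers do better, as noted later), and the function names are mine.

```python
from math import log10

def noise_floor_change_db(rbw_new_hz, rbw_old_hz):
    """Noise power in the RBW filter scales directly with its bandwidth."""
    return 10 * log10(rbw_new_hz / rbw_old_hz)

def sweep_time_scale(rbw_new_hz, rbw_old_hz):
    """Traditional swept filters: sweep time varies as 1/RBW^2."""
    return (rbw_old_hz / rbw_new_hz) ** 2

# Narrowing RBW from 10 kHz to 1 kHz buys 10 dB of noise floor...
print(noise_floor_change_db(1e3, 10e3))   # -10.0 dB
# ...at the cost of a 100x longer sweep
print(sweep_time_scale(1e3, 10e3))        # 100.0
```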

Of course, today’s RF spectrum is full of non-CW signals—even the spurs may be bursted or modulated—and so I haven’t yet described the whole story. Here’s the diagram that made things clearer for me:

The effective SNR of spectrum measurements varies according to the RBW in different ways for CW and impulsive signals, though noise is unaffected. The relationships are linear in this log-log format.

The diagram shows how SNR varies with RBW for different signal types. It can also be interpreted as showing how effectively different RBW values separate the signals we want to measure from the noise that makes the whole process difficult. It also shows how RBW can be adjusted to better separate one type of discrete signal from another, a different dimension than the frequency-domain separation we usually think of.

On the left, in blue, is the situation we’re all familiar with. Reducing RBW provides a benefit of 10 dB/decade in separating CW signals from noise.

The red curve at the right is an interesting opposite to the CW situation. For impulsive signals, a narrower RBW desensitizes the spectrum analyzer as it tends to block signals when the filter is narrower than the equivalent bandwidth of the signal.

The insight I got from the diagram is that the best RBW is one that is matched to the characteristics of the signal under test. This adjustment of RBW will provide the most effective separation and measurement of signals in an environment of noise and other signals.

Of course, some signals aren’t purely CW or impulsive, and you may need to do some experimentation or calculation to optimize your own measurements for speed, performance or other priorities.

Speaking of measurement speed, the sweep-rate penalty mentioned above for narrow RBWs is much less in some modern analyzers than in previous generations. Advanced digital filters and FFT techniques have made narrow-RBW measurements much less painful, and that’s a good topic for a future post.

Originally posted Apr 22, 2014

Simplicity and synergy—sometimes physics breaks your way

Elegance, like beauty, may be in the eye of the beholder. And I doubt that non-engineers would find the classic spurious measurement setup to be elegant. Nonetheless, I think the traditional approach to measuring CW spurs near noise does qualify as elegant and it’s impressively simple and effective.

One of the main tasks of spectrum analysis has always been to find and measure spurious signals. It can be a difficult job when spurs are close to the noise floor, a situation that causes problems with both measurement accuracy and speed. I summarized the problem graphically in Oh, the noise! Noise! Noise! Noise and described how measurements of both the signal and the signal/noise ratio (SNR) would be affected when SNR was small.

Fortunately, the laws of physics sometimes break our way, and this is one of those times. The practical averaging technique available in early spectrum analyzers had two significant benefits: it accurately represented the CW spurs that engineers were searching for and—wonders!—it reduced the apparent magnitude of the noise that was spoiling the measurement.

How can an averaging technique improve accuracy and effective SNR? The traditional averaging technique for spectrum analyzers was a narrow video bandwidth (VBW) filter, smoothing the video signal that represented the magnitude of the detected signal. Because the video signal was usually log-scaled, the VBW filtering performed an average of the log of the signal magnitude. In The Average of the Log is Not the Log of the Average I described the two approaches and noted that VBW filtering was accurate for CW signals but not for other signals such as noise. In Log Scaling: Useful But Sometimes Tricky I explained the downward bias that comes from video averaging noise and other time-varying signals.

It all comes together in the classic—and elegant—spurious measurement setup where the averaging of a VBW no wider than one-third of the RBW accurately represents the magnitude of CW spurs and reduces apparent noise power by about 2.5 dB, as shown in the figure below.

The results of power averaging (center) and log averaging (right) on a signal with 1 dB SNR are compared to a no-noise measurement (left). Log averaging substantially reduces the effect of the noise and dramatically improves the accuracy of the measurement.

For the 1 dB SNR case shown in the figure, the decrease in measurement error is dramatic, falling from 2.54 dB to 0.63 dB!
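You can reproduce numbers very close to these with a quick Monte Carlo experiment—a CW phasor plus complex Gaussian noise at 1 dB SNR. The exact results depend on the random draw, but they land near the 2.54 dB and 0.63 dB figures above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
snr_db = 1.0

# CW phasor of unit power plus complex Gaussian noise at 1 dB SNR
noise_pow = 10 ** (-snr_db / 10)
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(noise_pow / 2)
env = np.abs(1.0 + noise)          # detected envelope; true level = 0 dB

# Power averaging: the noise power adds directly to the signal
power_avg_err = 10 * np.log10(np.mean(env ** 2))   # ~2.5 dB error

# Log (video-style) averaging: average the dB values instead
log_avg_err = np.mean(20 * np.log10(env))          # ~0.6 dB error

print(power_avg_err, log_avg_err)
```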

It’s a fortunate coincidence for the common and demanding measurement situation in which small signals must be measured near noise: The simple, easily-implemented averaging technique is also the one that better separates the signal from noise and improves measurement accuracy.

Of course, other factors and tradeoffs are always involved. The VBW averaging technique is not accurate for non-CW signals. Also, narrow VBW filters may reduce measurement speed significantly because sweep rates are related to the narrower of VBW and RBW settings.

For a more thorough discussion of this topic, including a quantitative analysis of the errors, see page 24 of the Keysight application note Spectrum and Signal Analyzer Measurements and Noise.

Originally posted May 1, 2014

Get a quick SNR improvement of 10 to 20 dB—when the conditions are right

The topic of averaging comes up a lot in RF measurements and in this blog. I suppose it’s an unavoidable consequence of the fact that the universe is noisy and we engineers are striving instead for a quiet certainty.

Most discussions of averaging focus on smoothing data to reduce the effect of noise and therefore the variance of measurement results. As described previously, it’s important to reduce variance efficiently and without distorting the results.

However, as explained last time, the right type of averaging can modestly improve the signal/noise ratio for certain measurements instead of simply smoothing them. This is a very fortunate situation, albeit limited to CW signals near noise.

Another fortunate situation, and one with a bigger benefit, revolves around signals that repeat consistently. Such signals are common in communications, navigation and imaging, all applications in which improved dynamic range—not just reduced variance—is valuable.

These repeating signals represent additional information, giving us the chance to go beyond the well-known good/fast/cheap tradeoffs.

Information is the third element in this useful triad of measurement tradeoffs. The diagram symbolizes how measurements may be improved by treating information as an input to improve the measurement process and not merely an output.

Just as it’s easy—for me, at least—to underestimate the magnitude of noise, it’s also easy to underestimate the additional information represented by repeating signals. So how do we harness this information to make better measurements?

The answer is time averaging, also referred to as synchronous or vector averaging. As the names imply, this type of averaging operates in the time domain and accounts for magnitude and phase or I and Q, as shown in the figure below.

The time averaging process is shown graphically for repeated samples of a single point in a repeating signal. Vector averaging of the samples quickly converges on a better measurement (red dot) of the signal’s actual value.

The concept is straightforward: A repeating signal is sampled on a time scale precisely synchronized with its repetition interval, and the analyzer’s RF downconversion is phase-stable with respect to the signal. Samples from each repetition of the signal are averaged in I/Q or magnitude/phase to form a time record for any type of analysis, including time, frequency and demodulation. Noise is uncorrelated with the signal and is averaged out, as shown above.

The fundamental thing to understand about time averaging is that the noise is not smoothed and the variance may not be reduced. Instead, most of the noise is effectively removed. That’s the magic of adding information to the measurement!

When seen in operation, it does look a little like magic. In many cases, hundreds of averages can be performed in a second or two, and the measurement noise floor plunges by 10 to 20 dB!
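Here is a sketch of the idea, assuming perfectly time- and phase-aligned captures of a repeating signal. With 100 averages, the noise power drops by 10·log10(100) = 20 dB while the signal is unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
n_avg, n_pts = 100, 4096

# A small repeating complex baseband signal (a simple tone here),
# assumed perfectly time- and phase-aligned from capture to capture
t = np.arange(n_pts)
signal = 0.01 * np.exp(2j * np.pi * 0.05 * t)

# n_avg captures, each with independent complex noise of unit power
noise = (rng.normal(size=(n_avg, n_pts)) +
         1j * rng.normal(size=(n_avg, n_pts))) / np.sqrt(2)
captures = signal + noise

# Vector (time) averaging: average the aligned I/Q records directly;
# the signal adds coherently while the noise averages toward zero
avg_record = captures.mean(axis=0)

noise_before = np.mean(np.abs(captures[0] - signal) ** 2)
noise_after = np.mean(np.abs(avg_record - signal) ** 2)
print(10 * np.log10(noise_before / noise_after))   # ~20 dB for 100 averages
```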

The other part of the magic is that the improvement in dynamic range applies to all kinds of measurements in the time, frequency and modulation domains.

This averaging type is available standalone on Agilent’s X-Series signal analyzers and the PSA spectrum analyzer, and on other measurement platforms through the 89600 VSA software.

In the next post I’ll illustrate the benefits of time averaging using an example or two, and discuss some practical implementations and limitations. This averaging won’t suit every situation, but it’s a powerful way to make the best use of in-hand information and produce better measurements.

Originally posted May 10, 2014

Improve SNR 10 to 20 dB? Absolutely!

Last time, I introduced the technique of time averaging, also known as vector or coherent averaging. When available in a signal analyzer and used on suitable signals, the SNR improvements are impressive, as shown below.

Compare the results of trace averaging (top left) and time averaging (top right) on the same signal with the same amplitude scale of 10 dB/div: Trace averaging reduces the variance of the results, while time averaging substantially reduces noise in the measurement. The pulsed signal’s RF envelope is shown in the lower trace, along with time-gate markers and the level of the magnitude trigger (dashed white line).

In this example a pulsed signal repeats with consistent phase and the IF magnitude trigger of the 89600 VSA software is used to align successive acquisitions of the signal for 100 time averages. The average noise level is reduced by about 20 dB, while the measured signal level is unaffected. Note, however, that the variance is higher in the time-averaged result.

The bottom trace in the figure shows the averaged time record from which the results are calculated. In this example, time gating is used to isolate the spectrum measurement on the pulse only.

Most measurement types can be calculated from the averaged time record, including spectrum, phase, delay and analog demodulation. All these measurements will benefit from the lower effective noise or better SNR of the time averaging.

As described last time, this technique has two big drawbacks: the need for a signal that repeats in a coherent fashion and some way to trigger the averaging process. Fortunately, repeating signals are common in wireless and aerospace/defense applications, especially those that use signals generated from arbitrary waveform generators or similar processes.

In addition, it is not necessary for the entire signal to repeat. Time averaging can be combined with gated or other time-selective measurements focused on the repeating portion of a signal that otherwise changes in some way from cycle to cycle. For example, the preambles of many digitally modulated signals repeat consistently even when payload data varies from frame to frame.

As for triggering requirements, Agilent signal analyzers and VSA software offer several solutions. The IF magnitude trigger, mentioned previously, is often a good choice and can be used with signal captures or recordings. External triggers are also available in many applications, especially when signal generators are used to generate framed signals or when the trigger associated with the repeat of an arbitrary waveform is available.

Another trigger source is the periodic timer or frame trigger function available in some Agilent signal analyzers. This function offers a high degree of precision and flexibility in generating periodic triggers from the analyzer itself rather than the input signal. More on this in a future post.

Lastly, I should mention that time averaging works on CW signals as well as those that are time-varying. Of course, with CW signals one can also reduce RBW to improve sensitivity. With time-varying signals you sometimes can’t do that because narrow RBWs can filter out some of the signal you’re trying to measure.

benz

Teach Yourself Electronics!

Posted by benz Sep 15, 2016

Originally posted May 20, 2014

Self-instruction from the pre-Internet era stands the test of time

I already spend plenty of time on the Web, so a recent urge to refresh my knowledge of vacuum tubes took me back to the place where I first learned how they worked: a “TutorText” book.

Yes, a real book and not a Web page or a YouTube video, helpful though each may be.

Many years ago—maybe in middle school—a TutorText guided me through the basics of electronics, including an introduction to vacuum tubes. I had fond memories of the book and its unique approach and, not being in a hurry, I found a copy online at a used book store in Michigan. A week later I held in my hands the same title that had been on a shelf in our library so long ago. For once, something from my distant past was exactly as I remembered it!

The book was Introduction to Electronics by Hughes & Pipe, published in 1961.

The TutorText concept is simple, though implementing it so effectively must have been a lot of work. The book opens with basic subject information for the reader, followed by a multiple-choice question on that information. Each possible answer directs you to a different page. Correct answers lead to additional explanations and instruction, followed by the next multiple-choice question. Incorrect answers lead to pages where likely errors are explained and the reader may be—not so gently!—admonished to pay attention and try again.

Hughes and Pipe were not just fooling around. The text reveals an instructional attitude that’s a little more direct than what is in vogue today. For example, if you ignore the instructions and turn from page 1 to page 2 you are met with:

“You did not follow instructions. . .”

“Now there was no way you could have arrived at this page if you had followed instructions.”

I found the approach to be refreshing, and the wrong-answer pages to be the most interesting. Here’s one:

And here’s another:

The authors clearly worked hard to make the style personal and motivational—to the extent I almost fear a rap on my knuckles when I turn to the wrong page due to sloppy reasoning or inattention.

For me, the effect of digging back into this book is a joyful recharging of my energy for learning—and maybe that’s another useful lesson from the book. Sometimes learning can be enhanced by changing the method or the vehicle. If you’ve been mired in online articles and video clips, consider getting up out of your chair and bugging an expert. Or go find a book. Even an old one from 1961.

Introduction to Electronics is one of a number of TutorTexts. Others will teach you about the arithmetic of computers, introductory algebra, basic computer programming (c. 1962), how to use a slide rule, advanced bidding, and the meaning of modern poetry. They’re sometimes grouped under the heading of “gamebooks” and are available at used book stores, both online and brick-and-mortar.

 

If you’re interested in a more current—and challenging—topic, you could do worse than a book I had a small role in writing: LTE and the Evolution to 4G Wireless. None of its passages will leave you fearing a rap on the knuckles.

Originally posted May 30, 2014

 

If noise power is part of your problem, less is more

An earlier post on measuring signals near noise described how noise power in a signal measurement adds to the measured signal power and thus creates an error component. The error can be significant, even for signals well above noise, when compared with the inherent accuracy of modern signal analyzers.

Fortunately, this additive error process can be reversed in many measurements, providing an improvement in both measurement accuracy and effective sensitivity. This performance improvement is especially important when small signals must be measured along with larger ones. That is, when sensitivity can’t be improved by reducing attenuation or adding preamplification.

The key to these improvements is knowledge of the amount of added noise power, and in most cases this corresponds to the noise floor of the signal analyzer. To correct the typical power spectrum measurement, the average noise floor power in the analyzer’s RBW filter is subtracted from each point of a power spectrum measurement. An example of that process is shown in the figure below.

Two spectrum measurements of a low power seven-tone signal. Amplitude decreases 3 dB for each tone and the scale is 3 dB/div. The blue trace shows the benefit of subtracting most of the signal analyzer noise using Agilent’s noise floor extension (NFE) technique.

Subtracting the analyzer’s noise power contribution is simple trace math on a power (not dB) scale, but precisely determining that power is not so simple.
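The trace math itself looks like the sketch below, using a hypothetical helper function. The clip value is an arbitrary small floor that keeps the log well-behaved when a trace point falls at or below the modeled noise.

```python
import numpy as np

def subtract_noise_floor(trace_dbm, noise_floor_dbm):
    """Subtract analyzer noise power from a trace, point by point.

    The subtraction happens on a linear power scale (mW), not in dB.
    A small clip keeps the log well-behaved for points at or below
    the modeled noise floor.
    """
    sig_mw = 10 ** (np.asarray(trace_dbm, dtype=float) / 10)
    noise_mw = 10 ** (np.asarray(noise_floor_dbm, dtype=float) / 10)
    return 10 * np.log10(np.clip(sig_mw - noise_mw, 1e-30, None))

# A displayed -90 dBm point that includes a -93 dBm noise contribution
# leaves about -93 dBm of actual signal after subtraction
print(subtract_noise_floor(-90.0, -93.0))   # ~-93.0
```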

The direct approach is to disconnect the signal under test, perform a highly averaged noise floor measurement, reconnect the signal and measure it with the noise subtracted. This approach is accurate and effective but can be very slow. In addition, the noise floor measurement must be re-done if the measurement configuration is changed in any way or if measurement conditions change, especially temperature.

A more sophisticated technique involves accurately modeling the signal analyzer noise floor under all measurement configurations and operating conditions, then using that information to correct signal measurements on the fly. This technique is not quite as effective as individual noise floor measurements but it is much faster and more convenient. In addition, it requires no user interaction, imposes no speed penalty, and in the Agilent PXA signal analyzer it can provide up to 12 dB of improved sensitivity, as shown above.

This noise floor extension (NFE) technique has been available as a standard feature in the PXA signal analyzer for several years, and is now available as an option for the MXA signal analyzer. In the MXA this option is a license key upgrade, available for all existing MXA models and implemented through an automated self-calibration that takes 30 minutes or less.

Over the full frequency range of the MXA, the NFE option produces improvements such as those shown here.

The noise floor of an MXA signal analyzer is shown from 10 MHz to 26 GHz, both with and without the benefit of NFE noise power subtraction. Effective sensitivity is improved over a wide frequency range by approximately 9 dB with no reduction in measurement speed and no need for separate noise floor characterization measurements.

I suppose this is another example of adding information to improve measurement performance. In this case the information is the analyzer noise power, and the resulting improvement is in both sensitivity and accuracy for small signals.

The discussion so far has focused mainly on sensitivity and its consequences. It’s worth noting that this sensitivity enhancement can be traded away for other benefits such as measurement speed. For example, NFE may allow a wider RBW to be used for a given measurement, resulting in significantly faster sweep rates.

Originally posted Jun 10, 2014

Mapping the benefits of noise subtraction to your own priorities

Otto von Bismarck said that “politics is the art of the possible” and he might as well have been speaking about RF engineering, where the art is to get the most possible from our circuits and our measurements.

The previous post on noise subtraction described a couple of ways that RF measurements could be improved by subtracting most of the noise power in a measuring instrument such as a spectrum or signal analyzer. In some instruments this process is now automated and it’s worth exploring the benefits and tradeoffs as a way to understand the limits of what’s possible.

In the last post I briefly mentioned sensitivity and potential speed improvements, and in this post I'd like to discuss one example of how potent a technique noise subtraction can be. One diagram can summarize the benefits and tradeoffs for this example, but its format is unusual and a little complex, so it deserves some explanation.

Accuracy vs. SNR for noise-like signals and a 95% coverage interval. The blue curves show the error bounds for measurements with noise subtraction and the red curves show the bounds without noise subtraction. Using subtraction provides a 9.1 dB improvement in the required SNR for a measurement with 1 dB error.

I didn’t produce this diagram and confess that I didn’t understand it very well at first glance. The 9.1 dB figure annotating the difference between two curves sounds impressive, but just what does it mean for real measurements?

Let me explain: This is a plot of accuracy (y-axis) vs. signal/noise ratio (SNR, x-axis) for a 95% error coverage interval and for noise-like signals. Many digitally modulated signals are good examples of noise-like signals.

The red curves and the yellow fill indicate the error bounds for measurements made without noise subtraction. Achieving 95% confidence that the measurement error will be less than 1 dB requires an SNR of 7.5 dB or better, keeping error below 2 dB requires an SNR of 3.5 dB, and so on. Note that the mean error is always positive and increases rapidly as SNR is degraded.
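
The positive mean error is simply the analyzer reporting signal-plus-noise power instead of signal power. Here is my own back-of-envelope check of that bias term, not the source of the chart; the 95% bounds in the figure sit above these values because they also include measurement variance:

```python
import numpy as np

def mean_bias_db(snr_db):
    """Mean reported error when analyzer noise power N adds to a
    noise-like signal S: the analyzer measures S + N, not S."""
    snr = 10 ** (snr_db / 10)
    return 10 * np.log10(1 + 1 / snr)

for snr_db in (10.0, 7.5, 3.5, 0.0):
    print(f"SNR {snr_db:4.1f} dB -> mean bias {mean_bias_db(snr_db):.2f} dB")
```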

Now look at the blue curves and green fill to see the benefit of noise subtraction. In this example the effectiveness of the noise subtraction is sufficient to reduce noise level by about 8 dB, a conservative estimate of the performance of this technique, whether manual or automatic.

First, you can see that the mean error is now zero, removing a bias from the measurement error. Second, the required SNR for 1 dB error has been reduced to -1.6 dB, a 9.1 dB improvement from the measurement made without noise subtraction.

I have complained in the past about the effects of noise on RF measurements and it’s a frustration that many share. However, this example demonstrates the other side of the situation: Subtracting analyzer noise power, either manually or automatically, with technologies such as noise floor extension (NFE) provides big performance benefits.

What would you do with an extra 9 dB? You might use it to improve accuracy. You could trade some of it away for faster test time, improved manufacturing yields, a little increased attenuation to improve SWR, or perhaps eliminate the cost of a preamplifier. Use it well and pursue your own version of “the art of the possible.”

benz

Comparing Coax and Waveguide

Posted by benz Sep 15, 2016

Originally posted Jun 20, 2014

Making the choice for microwave and millimeter connections

I haven’t used waveguide very much, but it’s been an interesting technology to me for many years. I always enjoyed mechanical engineering and I’ve done my share of plumbing—everything from water to oil to milk—so waveguide engages my curiosity in multiple domains.

Coaxial cables and connectors are now readily available at frequencies to 110 GHz and at first glance they seem so much easier and simpler than waveguide. I wondered why waveguide is still in use at these frequencies, so a couple of years ago, while writing an application note, I spoke to electrical and mechanical engineers to understand the choices and tradeoffs.

It’s perhaps no surprise that there are both electrical and mechanical factors involved in the connection decision. At microwave frequencies, and especially in the millimeter range and above, electrical and mechanical characteristics do an intricate dance. Understanding how they intertwine is essential to making better measurements.

Coaxial connections: flexible and convenient. Direct wiring, in its coaxial incarnation, is the obvious choice wherever it can do the job acceptably well. The advances of the past several decades in connectors, cables and manufacturing techniques have provided a wide range of choices at reasonable cost. Coax is available at different price/performance points from metrology-grade to production-quality, and flexibility varies from extreme to semi-rigid. While the cost is significant, especially for precision coaxial hardware, it is generally less expensive than waveguide.

Coax can be an elegant and efficient solution when device connections require some kind of power or bias, such as probing and component test. A single cable can perform multiple functions, and the technique of frequency multiplexing can allow coax to carry multiple signals, including signals moving in different directions. For example, Agilent’s M1970 waveguide harmonic mixers use a single coaxial connection to carry an LO signal from a signal analyzer to an external mixer and to carry the IF output of the mixer back to the analyzer.

All is not lost for waveguide. Indeed, loss is an important reason waveguide may be chosen over coax.

Waveguide: power and performance. Power considerations, both low and high, are often the reasons engineers trade away the flexibility and convenience of coax. In most cases, the loss in waveguide at microwave and millimeter frequencies is significantly less than that for coax, and the difference increases at higher frequencies.

For signal analysis, this lower loss translates to increased sensitivity and potentially better accuracy. Because analyzer sensitivity generally declines with increasing frequency and increasing band or harmonic numbers, the lower loss of waveguide can make a critical difference in some measurements. Also, because available power is increasingly precious and expensive at higher frequencies, the typical cost increment of waveguide may be lessened.

On the subject of power, the lower loss in waveguide comes with high power-handling capability. As occurs with small signals, the benefit increases with increasing frequency.

As you can see from the summary below, other coax/waveguide tradeoffs may factor in your decision.

Comparing the benefits of coaxial and waveguide connections for microwave and millimeter frequency applications.

Mainstream technologies are extending to significantly higher frequencies and I have already wondered if you can push SMA cables and connectors to millimeter frequencies. In some cases, however, the question may be whether cables of any kind are the best solution, and whether it’s time to switch from wiring to plumbing.

Several application notes are available with information on measurements at high frequencies, including Microwave and Millimeter Signal Measurements: Tools and Best Practices.

Originally posted Jul 1, 2014

Where should your first mixer be when you’re making high-frequency measurements?

In Torque for Microwave & Millimeter Connections, I complained that engineering was inherently more challenging at microwave and millimeter frequencies. One reason: many factors that can be ignored at lower frequencies really begin to matter. Therefore, it’s important to consider all the tools and approaches that can help you optimize measurements at these frequencies, and this includes external mixing.

In my years of working at lower frequencies I knew about external mixing, but I always thought of it as a rather exotic and probably difficult technique. In reality, it’s a straightforward approach that has significant benefits, and modern hardware is making it both better and easier.

I also realized that I had been using external mixing for years, but at home: the low noise block (LNB) downconverter in my satellite dish. Satellite receivers use external mixing for many of the same reasons engineers do.

For satellite receivers and signal analyzers it’s a matter of where you place the first mixer. In analyzing microwave and millimeter signals, the first signal-processing element—other than a preamplifier or attenuator—is generally a mixer that downconverts the signal to a much lower frequency.

There’s no requirement that this mixer be inside the analyzer itself. In some cases there are benefits to moving the mixer outside the analyzer and closer to the signal under test, as shown below.

In external mixing, the analyzer supplies an LO signal output and its harmonics are used by the mixer to downconvert high frequencies from a waveguide input. The result is sent to the analyzer as an IF signal that’s processed by the analyzer’s normal IF section.
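
The frequency arithmetic behind this LO/IF arrangement is compact enough to sketch. The relation below is the standard harmonic-mixing equation; the specific signal, IF, and harmonic values are illustrative assumptions, not the parameters of any particular mixer:

```python
def lo_frequency(f_signal_hz, f_if_hz, harmonic, low_side=True):
    """Solve the harmonic-mixing relation for the required LO frequency.
    Low-side LO:  f_signal = N * f_LO + f_IF
    High-side LO: f_signal = N * f_LO - f_IF"""
    if low_side:
        return (f_signal_hz - f_if_hz) / harmonic
    return (f_signal_hz + f_if_hz) / harmonic

# Example: tune to a 60 GHz signal with a 322.5 MHz IF on the 6th LO harmonic
print(lo_frequency(60e9, 322.5e6, 6) / 1e9, "GHz LO")   # about 9.95 GHz
```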

External mixing has a number of benefits:

  • Flexible, low-loss connection between signal and analyzer. The vital first downconverting element can be placed at the closest and best location to analyze the signal, typically with a waveguide connection. The analyzer can be located for convenience without a loss penalty from sending high frequencies over a distance.
  • Frequency coverage. External mixers are available for frequencies from 10 GHz to the terahertz range, in passive and active configurations.
  • Cost. Signal analysis may be needed over only a limited set of frequencies in the microwave or millimeter range, and a banded external mixer can extend the coverage of an RF signal analyzer to these frequencies.
  • Performance. Measurement sensitivity and phase noise performance can be excellent due to reduced connection loss and the use of high-frequency and high-stability LO outputs from the signal analyzer.

Some recent innovations have made external mixers easier to use and provide improved performance. These “smart” mixers add a USB connection to the signal analyzer to enable automatic configuration and power calibration. The only other connection needed is a combined LO output/IF input connection, as shown below.

Agilent’s M1970 waveguide harmonic mixers are self-configuring and calibrating, requiring only USB and SMA connections to PXA and MXA signal analyzers.

The new mixers enhance ease of use, including automatic download of conversion loss for amplitude correction. Nonetheless, they can’t match the convenience and wide frequency coverage of a one-box internal solution that has direct microwave and millimeter coverage. And because external mixing doesn’t include a preselector filter, some sort of signal-identification function will be necessary to highlight and remove signals generated by a mode—LO harmonic or mixing—other than the one for which the display is calibrated (more on this in a future post).

External mixing is now a supported option in Agilent’s PXA and MXA signal analyzers. This is described in the new version of Application Note 150 Spectrum Analysis Basics and in the application note Microwave and Millimeter Signal Measurements: Tools and Best Practices.

Originally posted Jul 21, 2014

Like an Italian sports car, they combine impressive performance and design challenges

In a 1947 speech, Winston Churchill remarked “…it has been said that democracy is the worst form of government except all those other forms…” Today, I suspect that some microwave engineers feel the same way about YIG spheres as microwave resonator elements. They’re an incredibly useful building block for high-frequency oscillators and filters, but it takes creativity and careful design to tame their sensitive and challenging nature.

The “G” in “YIG” stands for garnet, a material better known in gemstone form. A YIG or yttrium-iron-garnet resonator element is a pinhead-sized single-crystal sphere of iron, oxygen and yttrium. These spheres resonate over a wide range of microwave frequencies, with very high Q, and the resonant frequency is tunable by a magnetic field.

That makes them perfect as tunable elements for microwave oscillators and filters, and in this post I’ll focus on their role in the YIG-tuned filters (YTFs) used as preselectors in microwave and millimeter signal analyzers.

These analyzers typically use an internal version of the external harmonic mixing technique described in the previous post. It’s an efficient way to cover a very wide range of input frequencies using different harmonics of a microwave local oscillator—itself often YIG-tuned!

However, mixers produce a number of different outputs from the same input frequencies, including high-side and low-side products plus many others, typically smaller in magnitude. These undesired mixer products will cause erroneous responses or false signals in the spectrum analyzer display, making wide-span signal analysis very confusing.
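
To make that concrete, here is a small sketch (with illustrative LO and IF values) of how many input frequencies all land on the same IF for a single LO setting:

```python
def apparent_input_frequencies(f_lo_hz, f_if_hz, max_harmonic=4):
    """Input frequencies that all convert to the IF for one LO setting.
    Each LO harmonic n responds at both n*f_LO - f_IF and n*f_LO + f_IF."""
    hits = []
    for n in range(1, max_harmonic + 1):
        hits += [n * f_lo_hz - f_if_hz, n * f_lo_hz + f_if_hz]
    return sorted(f for f in hits if f > 0)

for f in apparent_input_frequencies(5e9, 322.5e6):
    print(f / 1e9, "GHz")   # eight inputs, one calibrated display frequency
```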

One straightforward solution to this problem is a bandpass filter in the signal analyzer that tracks the input frequency. Here’s an example:

The yellow trace is the frequency response of a YIG preselector bandpass filter as it appears at the signal analyzer IF section. The blue trace shows the raw frequency response, with the preselector bypassed.

YIG technology enables the construction of a tunable preselector filter, wider than the widest analyzer RBW, whose center frequency can be synchronously swept with the analyzer’s center frequency. This bandpass filter rejects any other signals that would cause undesirable responses in the analyzer display.

Problem solved! So why the Churchillian perspective on YIGs? It’s a matter of the costs that come with the compelling YIG benefits:

  • Sensitivity is reduced: The preselector’s insertion loss has a direct impact on analyzer sensitivity.
  • Stability and tuning are challenging: The preselector’s wide, magnetic tuning range comes with temperature sensitivity and a degree of hysteresis. It is a challenge to consistently tune it precisely to the desired frequency, requiring careful characterization and compensation.
  • Bandwidth is limited: The preselector passband is wider than the analyzer’s widest RBW filter, but narrower than some wideband signals that would normally be measured using a digitized IF and fixed LO.

Fortunately signal analyzer designers have implemented a number of techniques to optimize preselector performance and mitigate problems, as described in Agilent Application Note 1586 Preselector Tuning for Amplitude Accuracy in Microwave Spectrum Analysis.

An alternative approach is simply to bypass the preselector for wideband measurements and whenever conditions allow. Many measured spans are not wide enough to show the undesirable mixing products, or the unwanted signal responses can be noted and ignored.

So, just as with democracy and its alternatives, YIG preselectors offer compelling benefits that far outweigh their disadvantages.

If you’d like to know more about harmonic mixing and preselection, see Chapter 7 of the new version of Application Note 150 Spectrum Analysis Basics.

Originally posted Jul 21, 2014

I’m not talking about the quality, just the variance

The basic amplitude accuracy of today’s signal analyzers is amazingly good, sometimes significantly better than ±0.2 dB. Combining this accuracy with precise frequency selectivity over a range of bandwidths—from very narrow to very wide—yields good power measurements of simple or complex signals. It’s great for all of us who seek better measurements!

However, if you’re working with time-varying or noisy signals—including almost all measurements made near noise—you’ve probably needed to do some averaging to reduce measurement variance as a way to improve amplitude accuracy.

As a matter of fact, you may already be doing two or more types of averaging at once. Here’s a summary of the four main averaging processes in spectrum/signal analyzers:

  • Video bandwidth filtering is the traditional averaging technique of swept spectrum analyzers. The signal representing the detected magnitude and driving the display’s Y-axis is lowpass filtered.
  • Trace averaging is a newer technique in which the value of each trace point (bin or bucket) is averaged each time a new sweep is made.
  • The average detector is a type of display detector that combines all the individual measurements making up each trace point into an average for that point.
  • Band power averaging combines a specified range of trace points to calculate a single value for a frequency band.

Depending on how you set up a measurement, some or all of these averaging processes may be operating together to produce the results you see.

The use of multiple averaging processes may be desirable and effective, but as I mentioned in The Average of the Log is Not the Log of the Average, different types of averages—different averaging scales—can produce different average values for the same signal.
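
For noise-like signals the difference between scales is not small. Here is a quick Monte Carlo check of the well-known offset between log-scale and power-scale averages of Gaussian noise (my own illustration, not taken from the application note):

```python
import numpy as np

rng = np.random.default_rng(0)
# Complex Gaussian noise with unit mean power, a stand-in for analyzer noise
x = (rng.standard_normal(1_000_000) + 1j * rng.standard_normal(1_000_000)) / np.sqrt(2)
power = np.abs(x) ** 2

log_of_avg = 10 * np.log10(power.mean())      # averaging on the power scale
avg_of_log = (10 * np.log10(power)).mean()    # averaging on the log (dB) scale

print(f"log of average: {log_of_avg:+.2f} dB")   # ~0.00 dB, the true power
print(f"average of log: {avg_of_log:+.2f} dB")   # ~-2.51 dB, biased low
```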

How do you make the best choice for your measurement, and make sure the averaging scales used are consistent? The good news is that in most cases there is nothing you need to do. Keysight signal analyzers will ensure that consistent averaging scales are used, locking the scale to one of three: power, voltage or log-power (dB).

In addition, Keysight analyzers choose the appropriate scale depending on the type of measurement you’re making. Selecting marker types and measurement applications—such as adjacent channel power, spurious or phase noise—gives the analyzer all the information it needs to make an accurate choice.

If you’re making a more general measurement in which the analyzer does not know the characteristics of the signal, there are a couple of choices you can make to ensure accurate results and optimize speed.

When you want to quickly reduce variance and get accurate results—regardless of signal characteristics—use the average detector.

The function of the average detector is enlarged and shown over an interval of slightly more than one display point. The average detector collects many measurements of IF magnitude to calculate one value that will be displayed at each bucket boundary.

Beyond the accuracy it provides for all signal types, the average detector is extremely efficient at quickly reducing variance and is very easy to optimize: If you want more averaging, just select a slower sweep speed. The analyzer will have more time to make individual measurements for each display point and will automatically add them to the average. Simply keep on reducing sweep speed until you get the amount of averaging you want.
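
Conceptually, the average detector is nothing more than a per-bucket power mean, and slowing the sweep simply gives each bucket more samples. A minimal sketch of the idea, not the analyzer's actual detector code:

```python
import numpy as np

def average_detector(if_magnitude, n_buckets):
    """One displayed value per bucket: the power average of all the
    IF magnitude samples that fell into that bucket."""
    per_bucket = len(if_magnitude) // n_buckets
    power = if_magnitude[: n_buckets * per_bucket].reshape(n_buckets, per_bucket) ** 2
    return 10 * np.log10(power.mean(axis=1))

rng = np.random.default_rng(1)
fast = average_detector(rng.rayleigh(1.0, 10_000), 1_000)     # 10 samples/bucket
slow = average_detector(rng.rayleigh(1.0, 1_000_000), 1_000)  # 1000 samples/bucket
print(fast.std(), slow.std())   # trace variance falls as the sweep slows
```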

The exception to this approach is when you’re measuring small CW spurs near noise, and in that case you may want to use a narrower video bandwidth filter for averaging.

With these two approaches you’ll improve the quality of your signal measurements and reduce their variance, with a minimum of effort and no accidental inconsistencies. Once again, a combination of techniques provides the desired results. For more detail, see Chapter 2 of the updated Application Note 150 Spectrum Analysis Basics. You’ll find it in the growing collection on our signal analysis fundamentals page.

Originally posted Aug 4, 2014

A rich, entertaining, and enlightening history

As RF engineers we spend most of our time working on the technologies of the present and the future. It can be a constant footrace, so sometimes it’s refreshing to take a look back and see how far we and our predecessors have come. With a history dating back to the very beginnings of RF, detectors or demodulators are an excellent historical example of innovations and progress.

In my post on signal analyzer detectors I mentioned cat’s whiskers and coherers. Though I had never used one, I was familiar with the cat’s whisker detector as a central element in a crystal radio set. The cat’s whisker is a simple rectifier, made by touching a thin metal wire (the “whisker”) to a semiconductor, typically a raw chunk of natural galena.

A tuned circuit is used in the crystal receiver to select a radio station, and the rectifying function of the cat’s whisker serves to extract the audio signal from the RF carrier. In other words, the cat’s whisker is a detector, acting as a demodulator. It’s essentially the same as the IF or video detector found in a signal analyzer, as shown at left in this block diagram.

This partial, simplified block diagram of a spectrum analyzer shows the two detectors and their locations in the signal path. The first detector demodulates the IF signal, converting it into a magnitude value in the same way a cat’s whisker would perform demodulation in a crystal radio.

Note that in this diagram the envelope detector is symbolized by a diode. Detection is accomplished with the assistance of other components such as a capacitor and resistor, but the critical and most challenging component is the rectifier.
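
For the curious, the whole detection chain fits in a few lines of numerical Python: rectify, then smooth. This is a toy model with arbitrary carrier, modulation, and RC values; the lowpass output tracks the audio shape, scaled by the rectifier's duty factor:

```python
import numpy as np

fs = 1e6                                          # sample rate, Hz
t = np.arange(0, 10e-3, 1 / fs)
audio = 1 + 0.5 * np.cos(2 * np.pi * 1e3 * t)     # 1 kHz program material
am = audio * np.cos(2 * np.pi * 50e3 * t)         # 50 kHz AM carrier

rectified = np.maximum(am, 0.0)                   # the diode (cat's whisker)

# One-pole RC lowpass as the smoothing network (RC = 50 microseconds)
alpha = 1.0 / (1.0 + fs * 50e-6)
envelope = np.zeros_like(rectified)
for i in range(1, len(rectified)):
    envelope[i] = envelope[i - 1] + alpha * (rectified[i] - envelope[i - 1])
# envelope now follows the 1 kHz audio, with the 50 kHz carrier removed
```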

From the standpoint of technical progress it’s interesting to note that a solid-state component—the cat’s whisker—came before vacuum tube solutions. The tube solutions were then superseded by newer solid-state diodes.

History has another curve to throw at us as we move further back in time: the coherer. When I first heard the term I had already been working on HF and RF applications for more than a dozen years and was surprised I hadn’t encountered it before. The coherer is another—even older—solid-state device that performed a kind of RF detection.

Though complicated in theory, the coherer is simple in practice: it’s just a pair of electrodes in an insulating capsule, separated by metal filings. RF energy causes the metal particles to cohere and the resistance between the electrodes drops dramatically. The coherer detects RF energy in a way that’s useful for electrical circuits such as an RF telegraph.

A coherer, an early RF detector. Two metal electrodes are separated by metal particles and the resistance between them drops in the presence of RF energy. (Image from Wikimedia Commons)

The earliest radios were RF telegraphs, using spark-gap transmitters in an on/off configuration. They were entirely broadband, in a way that is shocking to modern RF sensibilities. When I screened video of a large spark gap transmitter to groups of RF application engineers in the mid-1990s it always produced an audible gasp as they intuitively grasped the nature of the emissions.

If you’re interested in the topic, this bit of RF history gives me a chance to recommend one of my favorite TV series on technology, the low-budget, refreshingly British, amazingly enlightening Secret Life of Machines by Tim Hunkin.  These programs from the late 1980s do an incredible job of explaining commonplace technology of the home and office such as refrigerators, vacuum cleaners, TVs, VCRs, word processors, fax machines and radios. I think the episodes on radios and fax machines are some of the very best, and are worth watching for their quirky perspective and humor, in addition to their brilliant explanations.

This all reminds me of my adventures many years ago with spark gaps driving large Tesla coils. Perhaps that’s a topic for a future post—but perhaps some RF interference sins should go unconfessed!

Originally posted Aug 15, 2014

Getting the accuracy you’ve paid for

You’ve probably had this experience while using one of our signal analyzers: The instrument pauses what it’s doing and, for some time—a few seconds, maybe a minute, maybe longer—it seems lost in its own world. Relays click while messages flash on the screen, telling you it’s aligning parts you didn’t know it had. What’s going on? Is it important? Can you somehow avoid this inconvenience?

There’s a short answer: The analyzer decided it was time to measure, adjust and check itself to ensure that you’re getting the promised accuracy.

That seems mostly reasonable. After all, you bought a piece of precision test equipment (thanks!) to get reliable answers, so you can do your real job: using RF/microwave technology to make things happen—important things. The last thing you want is a misleading measurement.

That’s not the whole story. Your time is valuable and it’s useful to understand the importance of these operations and whether you can stop them from interrupting your work.

The second short answer: the automatic operations are sometimes important but not crucial (usually). You can do several things to avoid the inconvenience, but it helps to first understand a few terms:

  • Calibrations are the tests, adjustments and verifications performed on an instrument every one to three years. The box is usually sent to a separate facility where calibrations are performed with the assistance of other test equipment.
  • Alignments are the periodic checks and adjustments that an in situ analyzer performs on itself without other equipment or user intervention. The combination of calibration and alignment ensures that the analyzer meets its warranted specifications.
  • Corrections are mathematical operations the analyzer performs internally on measurement results to compensate for known imperfections. These are quantified by calibration and alignment operations.

Alas, this terminology isn’t universal. For example, if you execute the query “*Cal?” the analyzer will tell you whether it is properly (and recently) aligned, but will say nothing about periodic calibration. Still, the terms are useful guides to getting reliable measurements while avoiding inconvenience.
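
If you drive the instrument remotely, the query is easy to try. Here is a hedged sketch using PyVISA; the VISA address is a placeholder, and exactly what *CAL? does (report alignment status or trigger an alignment) varies by instrument, so check your programming guide:

```python
import pyvisa

rm = pyvisa.ResourceManager()
# Placeholder address -- substitute your analyzer's own VISA resource string
sa = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")
sa.timeout = 120_000          # milliseconds; alignments can take a while

result = sa.query("*CAL?").strip()   # IEEE 488.2 common command; 0 means success
print("alignment OK" if result == "0" else f"*CAL? returned {result}")
```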

As a starting point, you can use the default automatic mode. The designers have decided which circuits need alignment, how often and over what temperature ranges. Unfortunately, this may result in interruptions, and these can be a problem when you’re prevented from observing a signal or behavior you’re trying to understand. It’s especially frustrating when you’re ready to make a measurement and find that the analyzer has shifted into navel-gazing mode.

Switching off the automatic alignments will ensure that the instrument is always ready to measure—and it will notify you when it decides that alignments are needed. You can decide for yourself when it’s convenient to perform them, though this creates a risk that alignments won’t be current when you’re ready to make a critical measurement.

You can schedule alignments on your own, and tell the instrument to remind you once a day or once a week. This is a relatively low-risk approach if the instrument resides in a temperature-stable environment. However, with your best interests in mind, the analyzer will display this stern warning:

Switching off automatic alignment creates a small risk of compromised performance, and produces this popup.

The default setting is governed by time and temperature, and in my experience it’s temperature that makes the biggest difference. I once retrieved an analyzer that had been left in a car overnight in freezing weather and, upon power up, found that for the first half hour it was slewing temperature so fast that alignments occurred almost constantly.

If you want to optimize alignments for your own situation, just check out the built-in help in the X-Series signal analyzers. You can even go online and download the spectrum analyzer mode help file to your PC and run it from there.

Originally posted Aug 31, 2014

Is “recency” really a word?

My spell checker nags me with a jagged red underline, but yes, “recency” is a legitimate word. And it isn’t one of those words newly invented for a marketing campaign: Merriam-Webster traces it back to 1612, and others go back even further.

It’s a good word that means exactly what it sounds like: the quality of being recent. In our world of highly dynamic signals and spectral bands, this quality is becoming ever more useful.

Of course, recency-coded displays have been around for a long time, though more commonly in oscilloscopes than spectrum analyzers. Traditional analog variable-persistence displays naturally highlighted recent phenomena, as the glow from excited phosphors decayed over time. Extending this decay time by an adjustable amount made the displays even more useful.

A current term for this sort of display is “digital persistence” and in the 89600 VSA software it produces this display of oscillator frequency and amplitude settling:

Digital-persistence spectrum of an oscillator as it settles to new frequency and amplitude values. The brighter traces are more recent, though recency could also be indicated by color mapping.

A good complement to recency is “frequency,” which—in this context—is defined as how often specific frequency and amplitude values occur in a spectrum measurement.

Common terms for this sort of display are frequency of occurrence or density or DPX or cumulative history. It’s a kind of historical measure of probability, and for the balance of this post I’ll just use the term density.

Thus, recency is a measure of when something happened, while density is a measure of how often.
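
Building a density display from ordinary spectrum traces is straightforward to sketch: histogram the amplitudes seen in each frequency bin across many traces. The synthetic data here (a steady floor with an intermittent burst) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_traces, n_bins = 2_000, 512

# Stand-in spectra: a noise floor, plus a burst present ~5% of the time
spectra_db = 10 * np.log10(rng.exponential(1.0, (n_traces, n_bins)))
burst_on = rng.random(n_traces) < 0.05
spectra_db[burst_on, 200:260] += 30

# Density: count how often each (frequency, amplitude) cell occurs
amp_edges = np.linspace(-40, 40, 161)
density = np.stack(
    [np.histogram(spectra_db[:, f], bins=amp_edges)[0] for f in range(n_bins)]
)
# Mapping density to color yields the display described above; note it says
# nothing about *when* the bursts happened -- that's the job of recency
```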

In real-time analyzers—and analog persistence displays—the two phenomena are generally combined in some way. However, although related, they indicate different things about the signals we measure.

Because the 89600 VSA provides them independently, as separate traces with separate controls, I’ll use it for another example and discuss combined real-time analyzer displays in a future post. Here’s a frequency or density display of the infamous 2.4 GHz ISM band:

Off-air spectrum density measurement of 2.4 GHz ISM band, including brief Bluetooth hops and longer WLAN transmissions. Signal values of amplitude and frequency that occur more often are represented by red and yellow, while less-frequent values such as those from the Bluetooth hops are shown in blue.

This pure density display represents a great deal of information about the time occupancy of the ISM band, showing the relatively long duration of the WLAN frames and the brevity of the Bluetooth hops. However, it offers nothing about signal timing: how many bursts, whether they overlap or not, or even whether the Bluetooth hops are sequential.

That leads, perhaps, to a suggestion. While both display types present a lot of information at once—and can show very infrequent signals or behavior—they are optimal for different measurement purposes: if you want to know when something happened, with an emphasis on the recent, use persistence; if you want to distinguish signals by how often they appeared, use density.

It’s an over-simplification to say that persistence is best for viewing signals and density is best for viewing spectral bands, but that’s not a bad place to start.

If you’ve used a real-time analyzer you probably noticed that the density displays are usually a kind of hybrid, with an added element of persistence. And you’ve probably heard at least a little about spectrogram displays, which add the time element in a different and very useful way. They’re all excellent tools, and will be good subjects for future posts.

Originally posted Sept 10, 2014

Is technology repeating itself or just rhyming?

“History does not repeat itself, but it rhymes” is one of the most popular quotes attributed to Mark Twain. Though there is no clear evidence that he ever said this, it certainly feels like one of his. It says so much in a few words, and reflects his fascination with history and the behavior of people and institutions.

Recently, history rhymed for me while making some OFDM demodulation measurements and looking at the spectrum of the error vector signal. It brought to mind the first time I looked beyond simple error vector magnitude (EVM) measurements to the full error vector signal and understood the extra insight it could provide in both spectrum and time-domain forms.

The rhyme in the error vector measurements—as residual error or distortion measurements—took me all the way back to the first distortion measurements I made with a simple analog distortion analyzer. Variations on that method are still used today, and the approach is summarized below.

A simple distortion analyzer uses a notch filter to remove the fundamental of a signal and a power meter to measure the rest. This is a measurement of the signal’s residual components, which can also be analyzed in other ways to better understand the distortion.

The basic distortion analyzer approach uses a power meter and a switchable band-reject or notch filter. First, the full signal is measured to provide a power reference, and then the filter is switched in to remove the fundamental.

The signal that remains is a residual, containing distortion and noise, and can be measured with great sensitivity because it’s so much smaller than the full signal. That’s a big benefit of this technique, and why filters—including lowpass and highpass—are still used to improve the sensitivity and accuracy of signal measurements. Those basic distortion analyzers usually had a post-filter output that could be connected to an oscilloscope to see if the distortion could be further characterized.
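
Here is a numeric sketch of the notch idea, with an ideal notch (projecting out the fundamental) standing in for the analog filter; the signal levels are arbitrary:

```python
import numpy as np

fs, f0 = 48_000, 1_000
t = np.arange(0, 1.0, 1 / fs)
# Test signal: unit fundamental, a -40 dB third harmonic, a little noise
x = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)
x += 1e-4 * np.random.default_rng(3).standard_normal(len(t))

total_power = np.mean(x ** 2)                    # the power reference

# Ideal "notch": project out the fundamental's sine and cosine components
s, c = np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)
residual = x - 2 * np.mean(x * s) * s - 2 * np.mean(x * c) * c

print("THD+N:", 10 * np.log10(np.mean(residual ** 2) / total_power), "dB")  # ~ -40 dB
```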

To complete the rhyme, today’s digital demodulation measurements and quality metrics such as EVM or modulation error ratio (MER) are also residual measurements. Signal analyzers and VSAs first demodulate the incoming signal to recover the physical-layer data. They then use this data and fast math to generate a perfect version of the input signal. The perfect or reference signal is subtracted from the input signal to yield a residual, also called the error vector. This subtraction does the job that the notch filter did previously.
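
The modern subtraction step can be sketched just as compactly. This toy example uses additive noise as the stand-in impairment and ideal reference symbols in place of full demodulation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Reference: ideal QPSK constellation points, unit average power
ref = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)

# "Measured" symbols: the reference plus an impairment
meas = ref + 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

error_vector = meas - ref        # the residual -- today's "notch filter" output
evm = np.sqrt(np.mean(np.abs(error_vector) ** 2) / np.mean(np.abs(ref) ** 2))
print(f"EVM = {100 * evm:.2f}% ({20 * np.log10(evm):.1f} dB)")
# An FFT of error_vector gives the error vector spectrum discussed below
```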

The residual can be summarized in simple terms such as EVM or MER. But if you want to understand the nature or cause of a problem and not just its magnitude, you can look at error vector time, spectrum, phase, etc. Here’s an example of measurements on a simple QPSK signal containing a spurious signal with power 36 dB lower.

A QPSK signal in blue contains a spurious signal 36 dB lower. The green trace is error vector spectrum, revealing the spur. A close look at a constellation point (upper left) shows repeating equal-amplitude errors that indicate that the spur is harmonically related to the modulation frequency.

Demodulation and subtraction remove the desirable part of the signal, providing more sensitivity and a tighter focus on distortion or interference. Because all these operations and displays are performed within the signal analyzer application or VSA, you need just one tool to help you understand both the magnitude and cause of problems.

At this point you may also be thinking that demodulation and subtraction could be a way to recover one signal deliberately hidden inside another. They can! I’ve experimented with that very interesting technique, and will explain more in a future post.

 

To make these explanations clearer, I’ve focused here on single-carrier modulation. These approaches to residual analysis work well for OFDM signals too, and you can see examples in my previous posts The Right View Makes an Obscure Problem Obvious and A Different View Makes a Different Problem Obvious.

Originally posted Sept 19, 2014

Boost those electrons at your first opportunity

Preamplifiers are a time-tested way to improve measurement sensitivity and accuracy for small signals, especially those near noise. Some new external preamps, used alone or along with those internal to signal analyzers, may give your tiny signals the right boost in the right place to make better measurements. In the bargain, they’ll simplify small-signal and noise-figure measurements.

Once you’ve switched attenuation to 0 dB in a signal analyzer, the next step toward better sensitivity is some sort of amplifier. Many signal analyzers offer internal preamplifiers as an option, and it’s generally easier than employing an external preamp. The manufacturer can characterize the internal preamp in terms of gain and frequency response, and this can be reflected in the analyzer’s accuracy specifications.

However, internal preamplifiers may not have quite the gain you want over the frequency range you need, and an internal unit can’t be placed as close as possible to the signal under test (SUT). This is important for microwave and millimeter signals because they can’t travel far without significant attenuation, and because they tend to gather unknown and unwanted signals along the way. This is especially troublesome when you’re measuring small signals close to noise.

External preamplifiers are available in a wide range of frequency ranges, gains and noise figures, in both custom and off-the-shelf configurations, and can provide excellent performance. Unfortunately, it can be a challenge to integrate them into an end-to-end measurement. Accurate measurements require correcting for gain versus frequency and, if possible, noise figure, impedance match and temperature coefficients.

That’s where Keysight comes in. It recently introduced several external “smart” preamplifiers that automatically integrate with the measurement system and are compatible with all of the X-Series signal analyzers. They connect directly to the RF input of the signal analyzers, as shown below, and can function as a remote test head, providing amplification closest to the SUT.

An external USB smart preamplifier connected to an X-Series signal analyzer. The preamp can serve as a high-performance remote test head for spectrum and noise-figure measurements. The USB cable connecting the analyzer and preamplifier is not shown.

The U7227A/C/F preamplifiers use a single USB connection to identify themselves to the analyzer and download essential information such as gain versus frequency, noise figure and S-parameters.

As described in a previous post about smart external mixers, the combination of downloaded data and analyzer firmware fully integrates the amplifier into the measurement setup and effectively extends the measurement plane to its input. This allows Keysight to provide a complete measurement solution with very high performance and allows you to focus on critical measurements instead of system integration.

The USB preamplifiers have high gain and very low noise figure, and can be used in combination with the optional internal preamplifiers of the X-Series signal analyzers. The result is a very impressive system noise figure, as shown in the example below.

The displayed average noise level of the Keysight PXA signal analyzer is shown without a preamp (top), with the internal preamp (middle) and with the addition of the external USB preamp (bottom). Note the measured 13 GHz noise density at the bottom of the marker table of -171 dBm/Hz.

The performance and USB connectivity of the external preamps improves and simplifies noise-figure measurements and analyzer sensitivity, giving those few critical electrons a boost just when they need it most.
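
The system-level numbers follow from the familiar Friis cascade formula. Here is a quick estimate in Python; the gain and noise-figure values are illustrative assumptions, not the specifications of these preamplifiers:

```python
import numpy as np

def cascade_nf_db(stages):
    """Friis formula. stages: list of (gain_dB, nf_dB), input to output."""
    total_f, total_gain = 1.0, 1.0
    for gain_db, nf_db in stages:
        f, g = 10 ** (nf_db / 10), 10 ** (gain_db / 10)
        total_f += (f - 1) / total_gain
        total_gain *= g
    return 10 * np.log10(total_f)

# Illustrative only: external preamp (20 dB gain, 3 dB NF) ahead of an
# analyzer front end with a 20 dB noise figure
nf = cascade_nf_db([(20, 3), (0, 20)])
print(f"system NF ~ {nf:.1f} dB, DANL ~ {-174 + nf:.0f} dBm/Hz")
```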

For more detail please see the USB preamplifier technical overview.

benz

OFDM is Ubiquitous. Why?

Posted by benz Sep 14, 2016

Originally posted Oct 1, 2014

 

One transport scheme to rule them all. And I get to use the word ubiquitous!

In the early 1990s, working with the first vector signal analyzers, I had a front row seat as digital modulation schemes came to the fore. Digital modulation wasn’t new, but the advent of second-generation cellular standards such as GSM, NADC, CDMA/IS-95 and PDC put digital modulation in the hands of the masses.

The pace of innovation seemed never to slacken during the decade: broadcast television began to go digital, and third-generation cellular consumed vast amounts of money and brainpower.

Over a period of years, I was amazed at the proliferation of modulation types and transport schemes, and the apparently endless combinations and refinements. These required an equally constant flow of innovations to enable understanding, analysis, optimization and troubleshooting.

With mild exasperation, I asked my expert colleagues: “Are we going to continue to see this constant rollout of different modulation types and transport schemes?” The nearly universal answer was, “Yes, for quite a while.”

They were correct, but an important trend emerged late in the decade. One transport scheme grew from niche to dominance in the following decade and beyond: orthogonal frequency division multiplexing or OFDM.

I’ve mentioned various aspects of OFDM and its analysis in past posts, but haven’t explained the fundamentals and why it has become so widely used. I can only scratch the surface in this blog format, but can summarize the technological and environmental drivers.

The first word in the acronym is key: Orthogonality of a large number of RF subcarriers is the central feature of this transport scheme. As a transport scheme, rather than modulation type, it can employ multiple different modulations, typically simultaneously. The figure below illustrates this RF subcarrier orthogonality.

The spectrum of three overlapping OFDM subcarriers, in which the center of each subcarrier corresponds with spectral nulls for all of the other subcarriers. This non-interfering overlap provides the orthogonality necessary to allow independent modulation of each subcarrier.

In OFDM, orthogonality and carrier independence do not mean that the subcarriers are non-overlapping. Indeed, they are heavily overlapped and the center frequencies are arranged with a specific close spacing that places the main spectral peak of every subcarrier on frequencies where all other subcarriers have nulls.
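
That overlap-with-nulls property is just the orthogonality of complex exponentials spaced 1/T apart over one symbol period T. A quick numeric check, assuming a 64-subcarrier symbol:

```python
import numpy as np

n_fft = 64                       # subcarriers spaced 1/T apart
t = np.arange(n_fft)             # one symbol period, T = n_fft samples

def subcarrier(k):
    return np.exp(2j * np.pi * k * t / n_fft)

# Correlate pairs of subcarriers over exactly one symbol period
for k, m in [(3, 3), (3, 4), (3, 10)]:
    corr = abs(np.vdot(subcarrier(k), subcarrier(m))) / n_fft
    print(f"|<{k},{m}>| = {corr:.6f}")   # 1 when k == m, 0 otherwise
```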

With the independence of its subcarriers, OFDM can be seen as a multiplexing or multiple-access technique, somewhat similar to CDMA. It doesn’t increase theoretical channel capacity, but it has benefits that allow systems to operate closer to their theoretical capacity in real-world environments:

  • A high degree of operational flexibility by allocating subcarriers and symbols as needed, along with signal coding schemes, to accommodate different users with different needs for data rates, latency, priority, and more.
  • Multiple access (OFDMA) to support multiple users (radios) simultaneously using flexible and efficient subcarrier allocations.
  • High symbol and data integrity by transmitting at a relatively slow symbol rate to mitigate multipath effects and reduce the impact of impulsive noise, and by spreading data streams over multiple subcarriers with symbol coding and forward error correction.
  • High data throughput by transmitting on hundreds or thousands of carriers simultaneously and using appropriate signal coding.
  • Robust operation in interference-prone environments due to its spread spectrum structure and tolerance for the loss of a subset of subcarriers.
  • High spectral efficiency by spacing many subcarriers very closely and arranging them to be independent, allowing each subcarrier to be separately modulated.
  • High spatial efficiency through compatibility with spatial multiplexing techniques such as multiple-input/multiple-output (MIMO) transmission.

Potential benefits of OFDM were anticipated for years, but the technique only became practical for wide use as signal processing power became available in high quantity at low cost. As that performance/cost ratio improved, OFDM increased its dominance, and that is a major RF wireless story of the past 15 years or so.

 

You can read more about the technique in a recent OFDM introduction application note, and I’ll discuss some of the implementation and test implications in future posts.

Originally posted Oct 10, 2014

RF and microwave applications get their own benefits from semiconductor advances

Gordon Moore is well known for his 1965 prediction that the number of transistors in high-density digital ICs would double every two years, give or take. While the implications for processors and memory are well understood, perhaps only RF and microwave engineers recall Moore’s other prediction in that same paper: “Integration will not change linear systems as radically as digital systems.”

Though it’s hard to quantify, it seems that the pace of advances in combined digital/analog circuits is somewhere in the middle: slower than that of processors and memory, but faster than purely analog circuits. To many of us, the actual rate means a lot because analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) bring the power, speed and flexibility of digital circuits to real-world challenges in electronic warfare (EW), wireless, radar, and beyond.

Direct digital synthesis (DDS) technologies are an excellent example, and they’re becoming more important and more prominent in demanding RF and microwave applications. The essentials of DDS are straightforward, as shown in the diagram of a signal source below.

In DDS, memory and DSP drive an RF-capable DAC, and its output is filtered to remove harmonics and out-of-band spurs. Other spurs must be kept inherently low by the design of the DAC itself.

Deep memory and high-speed digital processing—using ASICs and FPGAs—have long been used to drive DACs. Unfortunately, most DACs have been too narrowband for frequency synthesis, and wideband units lacked the necessary signal purity. The Holy Grail has been a DAC that can deliver instrument-quality performance over wide RF bandwidths.
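
The DDS core itself is remarkably simple; the subtlety is all in the DAC. A minimal sketch of the classic phase-accumulator architecture, with an illustrative clock rate and accumulator width:

```python
import numpy as np

N_BITS = 32            # phase accumulator width
F_CLK = 1e9            # DAC clock, Hz (illustrative)

def dds(f_out_hz, n_samples):
    """Phase accumulator stepped by a frequency tuning word; in hardware
    the phase would index a sine lookup table feeding the DAC."""
    ftw = round(f_out_hz * 2**N_BITS / F_CLK)          # frequency tuning word
    phase = (ftw * np.arange(n_samples)) % 2**N_BITS   # wraps at 2**N_BITS
    return np.sin(2 * np.pi * phase / 2**N_BITS)

# Frequency resolution is F_CLK / 2**N_BITS (~0.23 Hz here), and a new
# tuning word -- hence a new frequency -- takes effect on the next clock
samples = dds(10e6, 4096)
```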

Engineers at Keysight Technologies (formerly Agilent) realized that new semiconductor technology was intersecting with this need. They used an advanced silicon-germanium BiCMOS process to fabricate a DAC that is the perfect core for a DDS-based RF/microwave signal generator. Signal purity is excellent even at microwave frequencies, as shown in the spectrum measurement below.

A 10 GHz CW signal measured over a 20 GHz span, showing the output purity of DDS technology in the Keysight UXG agile signal generator.

Compared to traditional phase-locked loops (PLLs) and direct analog synthesizers, DDS promises a number of advantages:

  • Frequency and amplitude agility. With no loop-filter settling, new output values can be implemented at the next DAC clock cycle. As a result, the UXG can switch as fast as 250 ns.
  • Multiple, coherent signals can be generated from one source. DDS can generate both CW and complex or composite signals, continuously or in a changing sequence. This enables generation of scenarios instead of just signals, making the technology well-suited to EW or signal-environment simulation.
  • No phase noise pedestal from a PLL, and no need to trade phase noise performance for agility. PLLs provide a wide frequency range, high resolution and good signal quality, but often require tradeoffs between frequency agility and phase noise performance.
  • Signal coherence, phase continuity and signal generator coordination. Multiple signals from a single generator can be aligned in any way desired, and switching can be phase continuous. Triggers and a shared master clock allow multiple DDS generators to produce coordinated outputs easily and with great precision.

With sufficient DAC performance, DDS is clearly a good fit for signal generation in radar and EW, which need agility and wide bandwidth. DDS also can be valuable in signal analyzers and receivers because fast sweeping/tuning and the lack of a phase noise pedestal enables LO designs with better performance and fewer tradeoffs.

DDS implementations are generally more expensive than PLL technologies. However, as Moore predicted, technological evolution creates a dynamic environment in which the optimal solutions change over time. It seems clear that DDS will have an expanding role in better RF measurements, even if it doesn’t happen at the pace of Moore’s law.

 

For more about the use of DDS in a specific implementation, go to www.keysight.com/find/UXG and download a relevant app note.

Originally posted Oct 21, 2014

 

The tiny ones and the giants

Even if you have especially good eyes—and I do not—it can be difficult to identify the various connector types found on the bench of the typical microwave/millimeter engineer. This is because the frequencies are very high and the dimensions are very small! Nonetheless, accuracy and repeatability are expensive and hard-won at these extreme frequencies, so it’s worth the effort to get interconnections right.

Getting things right also helps avoid the cost and inconvenience of connector damage. Connectors are designed with mechanical characteristics to avoid mating operations that would cause gross connector damage, but these measures sometimes fail, subjecting you to hazards such as loose nut danger.

The vast majority of intermating possibilities can be summarized in two sentences:

  • SMA, 3.5 mm and 2.92 mm (“K”) connectors are mechanically compatible for connections.
  • 2.4 mm and 1.85 mm connectors are compatible with each other, but not with the SMA/3.5 mm/2.92 mm.

A good single-page visual summary is available from Keysight. Here’s a portion of it.

This summary of microwave and millimeter connector types uses color to indicate which types can be intermated without physical damage.

Avoiding outright damage is important; however, performance-wise, it’s a pretty low bar for the RF engineer. Our goal is to optimize performance where it counts, and microwave and millimeter frequencies demand particular care.

For example, intermating different connector types, even when they’re physically compatible, has a real cost in impedance match (return loss) and impedance consistency. This has implications for amplitude accuracy and repeatability, with examples described in the March 2007 Microwave Journal article Intermateability of SMA, 3.5 mm and 2.92 mm connectors.

And it isn’t just mating different connector types that will give you fits. Like teenagers, it seems you can’t send millimeter signals anywhere without them suffering some sort of issue. All kinds of connectors, adapters and even continuous cabling will affect signals to some degree, and suboptimal connection performance can be a hard problem to isolate.

Even connector savers, a good practice recommended here, add the effects of one more electrical and mechanical interface. As always, it’s a matter of optimizing tradeoffs, though of course that’s job security for RF engineers.

One approach to mastering the connection tradeoffs is to eliminate some adapters by using cables different from the usual male-at-each-end custom. Cables can take the place of connector savers and streamline test connections, especially when you’ll be removing them infrequently.

While you’re at it, consider cable length and quality carefully. Good cables can be expensive but may be the most cost effective way to improve accuracy and repeatability.

Finally, what about those huge connectors you see on some network analyzers and oscilloscopes? These are the ones that require a 20 mm wrench or a special spanner or both. The threads on some connector parts appear to be missing, though there’s a heck of a lot of metal otherwise. Here are two examples:

Male and female examples of NMD or ruggedized millimeter connectors. The larger outer dimensions provide increased robustness and stability.

The large connectors are actually NMD or ruggedized versions of 2.4 mm and 1.85 mm connectors, providing increased mechanical robustness and stability. They’re designed to mate with regular connectors of the same type, or as a mount for connector savers, typically female-to-female. Test port extension and other cables are also available with these connectors.

I’ve previously discussed the role of torque in these connections. If you’d like something to post near your test equipment, a good summary of the torque values and wrench sizes is available from Keysight.

Originally posted Oct 30, 2014

 

An alternative to PLLs changes the phase noise landscape

Phase-locked loops (PLLs) in radio receivers date back to the first half of the 20th Century, and made their way into test equipment in the second half. In the 1970s, PLLs supplanted direct analog synthesizers in many signal generators and were used as the local oscillator (LO) sections of some spectrum analyzers.

In another example of Moore’s Law, the late 1970s and 1980s saw rapid improvements in PLL technology, driven by the evolution of powerful digital ICs. These controlled increasingly sophisticated PLLs and were the key enabling technology for complex techniques such as fractional-N synthesis.

PLLs are still key to the performance and wide frequency range of all kinds of signal generators and signal analyzers. However, as I mentioned in a recent post, direct digital synthesis (DDS) is coming of age in RF and microwave applications, and signal analyzers are the newest beneficiary.

A good example is the recently introduced Keysight UXA signal analyzer. DDS is used in the LO of this signal analyzer to improve performance in several areas, particularly close-in phase noise. The figure below compares the phase noise of three high-performance signal analyzers at 1 GHz.

The phase noise of the UXA signal analyzer is compared with the performance of the PXA and PSA high-performance signal analyzers. Note the UXA’s lack of a phase noise pedestal and significant improvement at narrow frequency offsets.

Phase noise is a critical specification for signal analyzers, determining the phase noise limits of the signals and devices they can test, and the accuracy of measurements. For example, radar systems need oscillators with very low phase noise to ensure that the returns from small, slow-moving targets are not lost in the phase noise sidebands of those oscillators.

A spectrum/signal analyzer’s close-in phase noise reflects the phase noise of its frequency-conversion circuitry, particularly the local oscillator and frequency reference. The phase noise of PLL-based LOs typically includes a frequency region in which the phase noise is approximately flat with frequency offset. This is called a phase noise pedestal, and its shape and corner frequency are determined in part by the frequency response of the filters in the PLL’s feedback loop(s). The PLL’s loop-filter characteristics are adjusted automatically, and are sometimes user-selectable, as a way to optimize phase noise performance in the offset region that matters most for a given measurement.
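To see where the pedestal comes from, here’s a minimal numerical sketch. It models the composite LO noise as a flat in-loop (reference-dominated) level low-passed by the loop, plus free-running VCO noise high-passed by it; the loop bandwidth and noise levels are assumed for illustration, not taken from any instrument:

```python
import numpy as np

offsets = np.logspace(1, 7, 601)     # offset frequency, 10 Hz to 10 MHz
loop_bw = 100e3                      # assumed PLL loop bandwidth, Hz
ref_floor_dbc = -110.0               # assumed in-loop (pedestal) level, dBc/Hz
vco_1mhz_dbc = -140.0                # assumed free-running VCO noise at 1 MHz offset

# Free-running VCO noise falls about 20 dB/decade with offset
vco_dbc = vco_1mhz_dbc - 20 * np.log10(offsets / 1e6)

# Simple loop shaping: reference noise is low-passed by the loop,
# VCO noise is high-passed (complementary, in power terms)
lp = 1 / (1 + (offsets / loop_bw) ** 2)
total_dbc = 10 * np.log10(lp * 10 ** (ref_floor_dbc / 10) +
                          (1 - lp) * 10 ** (vco_dbc / 10))

# The flat region below loop_bw is the pedestal; its corner tracks the loop filter
for f in (1e3, 30e3, 100e3, 1e6):
    print(f"L({f:>9,.0f} Hz) = {np.interp(f, offsets, total_dbc):6.1f} dBc/Hz")
```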

With the DDS technology in the UXA, the absence of a pedestal means that improved performance is available over a wide range of offsets up to about 1 MHz. For very wide offsets, a PLL is used along with the DDS to get a lower phase noise floor from its YIG-tuned oscillator.

 

Despite its obvious advantages, DDS will not fully replace PLLs any time soon. DDS technology is generally more expensive than PLLs, requiring very high-speed digital-to-analog converters with extremely good spurious performance, and high-speed DSP to drive the DACs. In addition, PLLs still offer the widest frequency range, and therefore most DDS solutions will continue to include PLLs.

Originally posted Nov 10, 2014

 

What condition is your condition in?*

As we know all too well, the RF spectrum is a limited resource in an environment of ever-increasing demand. Engineers have been working hard to increase channel capacity, and one of the most effective techniques is spatial multiplexing via multiple-input, multiple-output (MIMO) transmission.

MIMO allows multiple data streams to be sent over a single frequency band, dramatically increasing channel capacity. For an intuitive approach to understanding the technique, see previous posts here: “MIMO: Distinguishing an Advanced Technology from Magic” and “Hand-Waving Illustrates MIMO Signal Transmission.” And if MIMO sounds a little like CDMA, the difference is explained intuitively here as well.

Intuitive explanations are fine things, but engineering also requires quantitative analysis. Designs must be validated and optimized. Problems and impairments must be isolated, understood and solved. Tradeoffs must be made, costs reduced and yields increased.

Quantitative analysis is a special challenge for MIMO applications. For example, a 4×4 MIMO system using OFDM has 16 transmit paths to measure, with vector results for each subcarrier and each path.

The challenge is multiplied by the fact that successful MIMO operation requires more than an adequate signal/noise ratio (SNR). Specifically, the capacity gain depends on how well receivers can separate the simultaneous transmissions from each other at each antenna. This separation requires that the paths be different from each other, and that SNR be sufficient to allow the receiver to detect the differences. Consider the artificially-generated example channel frequency responses shown below.

A 2 MHz bandpass filter has been inserted into one channel frequency response of a MIMO WiMAX signal with 840 subcarriers. The stopband attenuation of the filter will reduce SNR at the receiver and impair its ability to separate the MIMO signals from each other.

The bandpass filter applied to one channel will impair MIMO operation for the affected OFDM subcarriers in some proportion to the amount of attenuation. The measurement challenge is then to quantify the effect on MIMO operation.

The answer is a measurement called the MIMO condition number. It’s a ratio of the maximum to minimum singular values of the matrix derived from the channel frequency responses and used to separate the transmitted signals. You can find a more thorough explanation in the application note MIMO Performance and Condition Number in LTE Test, but from an RF engineering point of view it’s simply a quantitative measure of how good the MIMO operation is.

Condition number quantifies the two most important problems in MIMO transmission: undesirable signal correlation and noise. I’ll discuss signal correlation in a future post; here I’ll focus on the effects of SNR by showing the condition number resulting from the filtering in the example above.

Condition number is the ratio of singular values and is always a real number greater than or equal to one. It is often expressed in dB form, plotted for each OFDM subcarrier. The ideal value for MIMO is 1:1 or 0 dB, and values below 10 dB are desirable. In this measurement example the only signal impairment is SNR, degraded by a bandpass filter.

Condition-number measurements are an excellent engineering tool for several reasons:

  • They effectively measure MIMO operation by combining the effects of noise and undesirable channel correlation.
  • They are measured directly from channel frequency response, without the need for demodulation or a matrix decoder.
  • They are a frequency- or subcarrier-specific measurement, useful for uncovering frequency response effects.
  • They relate a somewhat abstract matrix characteristic to practical RF signal characteristics such as SNR.

The last point above is especially significant for understanding MIMO operation: With condition number expressed in dB, if the condition number is larger than the SNR of the signal, it’s likely that MIMO separation of the multiple data streams will not work correctly.
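If you’d like to experiment with the computation, here’s a minimal sketch: per subcarrier, take the ratio of the largest to smallest singular value of the channel matrix and express it in dB. The 2×2 channel and the 30 dB “stopband” are invented stand-ins for the filtered example above; a real analyzer derives the matrices from channel estimation:

```python
import numpy as np

n_subcarriers = 840                          # matching the WiMAX example above

# Invented, reasonably well-conditioned 2x2 channel, identical on every subcarrier
base = np.array([[1.0, 0.5], [0.4, 1.1]], dtype=complex)
H = np.tile(base, (n_subcarriers, 1, 1))

# Crude stand-in for the filter stopband: attenuate one path (one row of H)
# by 30 dB over a block of subcarriers
H[300:540, 1, :] *= 10 ** (-30 / 20)

s = np.linalg.svd(H, compute_uv=False)       # singular values, per subcarrier
cond_db = 20 * np.log10(s[:, 0] / s[:, -1])  # max/min ratio in dB

print(f"clean subcarriers     : {cond_db[0]:.1f} dB")
print(f"attenuated subcarriers: {cond_db[400]:.1f} dB")

# Rule of thumb from the text: separation is at risk wherever the condition
# number (in dB) exceeds the available SNR (in dB)
snr_db = 25.0                                # assumed
print("subcarriers at risk:", int(np.sum(cond_db > snr_db)))
```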

*I don’t know about you, but I can’t hear the phrase “condition number” without thinking of the Mickey Newbury song Just Dropped In (To See What Condition My Condition Was In), made famous by Kenny Rogers & the First Edition way back in 1968. Did Newbury anticipate MIMO?
benz

More Tactics to Civilize OFDM

Posted by benz Sep 13, 2016

Originally posted Nov 21, 2014

 

“Tone reservation” is not an agency to book a band for your wedding

I recently explained some of the reasons why OFDM has become ubiquitous in wireless applications, but didn’t say much about the drawbacks or tradeoffs. As an RF engineer, you know there will be many—and they’ll create challenges in measurement and implementation. It’s time to look at one or two.

Closely spaced subcarriers demand good phase noise in frequency conversion, and the wide bandwidth of many signals means that system response and channel frequency response both matter. Fortunately, it’s quite practical to trade away a few of the data subcarriers as reference signals or pilots, and to use them to continuously correct problems including phase noise, flatness and timing errors.
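As a concrete taste of pilot tracking, here’s a minimal sketch of common-phase-error correction for a single OFDM symbol: compare the received pilots with their known transmitted values, average the phase error, and derotate every subcarrier. The pilot positions and values are invented, not taken from any particular standard:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sc = 64
pilots = np.array([11, 25, 39, 53])          # assumed pilot positions

tx = np.exp(1j * rng.uniform(0, 2 * np.pi, n_sc))   # unit-magnitude data symbols
tx[pilots] = 1.0 + 0j                        # pilots carry a known value

cpe = np.exp(1j * 0.3)                       # common phase error from LO phase noise
rx = tx * cpe

# Estimate the common rotation from the pilots, then derotate every subcarrier
est = np.angle(np.mean(rx[pilots] / tx[pilots]))
rx_corrected = rx * np.exp(-1j * est)

print(f"estimated CPE: {est:.3f} rad (true value 0.300)")
print("corrected symbol matches transmitted:", np.allclose(rx_corrected, tx))
```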

However, pilot tracking cannot improve amplifier linearity, which is another characteristic that’s at a premium in OFDM systems. A consequence of the central limit theorem is that the large number of independently modulated OFDM subcarriers will produce a total signal very similar to additive white Gaussian noise (AWGN), a rather unruly beast in the RF world.

The standard measure for the unruliness of RF signals is peak/average power ratio (PAPR or PAR). The PAPR of OFDM signals approaches that of white noise, which is about 10-12 dB. This far exceeds most single-carrier signals, and the cost and reduced power efficiency of the ultra-linear amplifiers needed to cope with it can counter the benefits of OFDM.

A variety of tactics have been used to reduce PAPR and civilize OFDM, and they’re generally called crest factor reduction (CFR). These range from simple peak clipping to selective compression and rescaling, to more computationally intensive approaches such as active constellation extension and tone reservation. The effectiveness of these techniques on PAPR is best seen in a complementary cumulative distribution function (CCDF) display:

The CCDF of an LTE-Advanced signal with 20 MHz bandwidth is shown before and after the operation of a CFR algorithm. Shifting the curve to the left reduces the amount of linearity demanded of the LTE power amplifier.
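You can reproduce the flavor of this display numerically. The sketch below builds a noise-like OFDM signal, applies crude hard clipping as a stand-in for a real CFR algorithm, and tabulates a few CCDF points; the subcarrier counts and clipping threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc, n_sym, nfft = 1200, 500, 2048          # assumed 20 MHz LTE-like occupancy

# Noise-like OFDM: random QPSK on each occupied subcarrier, IFFT per symbol
qpsk = ((rng.integers(0, 2, (n_sym, n_sc)) * 2 - 1) +
        1j * (rng.integers(0, 2, (n_sym, n_sc)) * 2 - 1)) / np.sqrt(2)
grid = np.zeros((n_sym, nfft), complex)
grid[:, 1:n_sc + 1] = qpsk                   # simple mapping, DC left unused
x = np.fft.ifft(grid, axis=1).ravel() * nfft / np.sqrt(n_sc)   # ~unit average power

# Crude CFR: hard-clip at 6 dB above average power. (A production algorithm
# filters the clipping noise or uses tone reservation instead.)
clip = 10 ** (6 / 20)
mag = np.abs(x)
x_cfr = np.where(mag > clip, x * clip / np.maximum(mag, 1e-12), x)

def ccdf(sig, t_db):
    """Fraction of samples whose instantaneous power exceeds average by t_db."""
    p = np.abs(sig) ** 2
    return np.mean(p > p.mean() * 10 ** (t_db / 10))

for t in (4, 6, 8, 10):
    print(f"{t:2d} dB above average: before {ccdf(x, t):.1e}, after {ccdf(x_cfr, t):.1e}")
```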

Peak clipping and compression have not been especially successful because they are nonlinear transformations. Their inherent nonlinearity can cause the same problems we’re trying to fix.

As you’d expect, it’s the more DSP-heavy techniques that provide better PAPR reduction without undue damage to modulation quality or adjacent spectrum users. This is yet another example of using today’s rapid increases in processing power to improve the effective performance of analog circuits that otherwise improve much more slowly on their own.

In the tone reservation technique, a subset of the OFDM data subcarriers is sacrificed or reserved for CFR. The tones are individually modulated, but not with data. Instead, appropriate I/Q values are calculated on the fly to counter the highest I/Q excursions (total RF power peaks) caused by the addition of the other subcarriers.

Since all subcarriers are by definition orthogonal, the reserved ones can be freely manipulated without affecting the pilots or those carrying data. Thus, in theory the cost of CFR is primarily computational power along with the payload capacity lost from the sacrificed subcarriers.
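Here’s a minimal sketch of one simple flavor of tone reservation: iterative clipping, with the clipping noise projected onto the reserved tones only, so the data subcarriers are never disturbed. The tone counts echo the paper quoted below, but everything else is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
nfft, n_iter, target_db = 256, 10, 6.0
reserved = rng.choice(nfft, 11, replace=False)      # 11 of 256 tones, per the paper
data_tones = np.setdiff1d(np.arange(nfft), reserved)

# Random QPSK on the data tones; reserved tones start empty
X = np.zeros(nfft, complex)
X[data_tones] = ((rng.integers(0, 2, data_tones.size) * 2 - 1) +
                 1j * (rng.integers(0, 2, data_tones.size) * 2 - 1)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(nfft)

def papr_db(sig):
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

print(f"PAPR before: {papr_db(x):.1f} dB")

# Iteratively clip, then keep only the part of the clipping noise that falls
# on the reserved tones, so the data subcarriers are never disturbed
clip = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (target_db / 20)
y = x.copy()
for _ in range(n_iter):
    mag = np.abs(y)
    clipped = np.where(mag > clip, y * clip / np.maximum(mag, 1e-12), y)
    noise_f = np.fft.fft(clipped - y) / np.sqrt(nfft)
    correction = np.zeros(nfft, complex)
    correction[reserved] = noise_f[reserved]        # project onto reserved tones
    y = y + np.fft.ifft(correction) * np.sqrt(nfft)

print(f"PAPR after : {papr_db(y):.1f} dB")
print("data tones untouched:",
      np.allclose(np.fft.fft(y)[data_tones] / np.sqrt(nfft), X[data_tones]))
```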

The full nature of the cost/benefit tradeoff is more complicated in practice but one example is discussed in an IEEE paper: “By sacrificing 11 out of 256 OFDM tones (4.3%) for tone reservation, over 3 dB of analog PAR reduction can be obtained for a wireless system.” [1]

That’s a pretty good deal, but not the only one available to RF engineers. I mentioned the active constellation extension technique above, and other approaches include selective mapping and digital predistortion. They all have their pros and cons, and I’ll look at those in future posts.

 

[1] An active-set approach for OFDM PAR reduction via tone reservation, available at IEEE.org.

Originally posted Dec 1, 2014

 

Maybe this time I’ll remember my own explanation

At times, my brain seems to have a non-stick coating when it comes to certain technical details. I usually feel confident in my grasp of things while getting an explanation from an expert, watching a video or looking at a diagram. But a month or two later I’ll struggle to remember some critical element or distinction, and heaven help me if I’m called on to explain it to someone else!

One distinction that has vexed me in this way is the difference between MIMO channels and streams. The literature on MIMO is packed with mentions of both terms, but it’s rare to see them both explained in context. This blog has also been guilty of casual treatment—see the first post on MIMO—and it’s time to explain a little more. A clear, intuitive understanding is worthwhile, since both channels and streams are important, and understanding them together can lead to better RF measurements.

As always, different explanations will gain traction with different readers. Some will gain optimal insight from a mathematical discourse. Others, like me, do better with a visual approach and a diagram of a MIMO transmitter chain is the best place to start:

Transmit chain example for 2×2 MIMO in an IEEE802.11n system. The spatial encoding or mapping block determines how streams become RF channel outputs.

Streams are the easiest element to understand from a digital point of view. In the 2×2 MIMO case, a single payload data stream is scrambled and interleaved, and error correction is added. The stream is then split in two and multiplexed onto the 48 OFDM data subcarriers, so the two streams each independently carry half of the data payload.

The streams then become I/Q values, and if the streams are separately sent to RF transmit chains—bypassing any spatial encoding or mapping—the distinction between streams and RF channels is trivial. This configuration is called direct mapping.

However, there are several reasons why direct mapping is not the best approach for some real-world conditions. I’ll explain more in a future post, but for now imagine a situation in which one RF channel is nearly perfect and the other is badly impaired. The error-correction overhead required to keep the bad channel functioning would be wasted on the good one, and total throughput would be suboptimal.

An elegant way to solve this problem is to convert streams to RF channels using a scheme that’s more complex than direct mapping. For example, the spatial encoder could add the I/Q values of the two streams and send the sum to one RF channel; it could simultaneously subtract one from the other and send that result to the other RF channel.

In this way—and with appropriate encoding and decoding—the effective impairments are averaged between the two RF channels. An efficient amount of error-correction overhead can be chosen for the channel pair, optimizing overall data transmission. Symmetrical decoding and de-mapping at the receiver recovers the streams from the two incoming RF channels.
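A minimal sketch of the sum-and-difference mapping just described, using one common normalized matrix (real standards define their own spatial mapping matrices, generally unitary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                        # a few QPSK symbols per stream
streams = ((rng.integers(0, 2, (2, n)) * 2 - 1) +
           1j * (rng.integers(0, 2, (2, n)) * 2 - 1)) / np.sqrt(2)

# Direct mapping: each stream drives one RF channel unchanged
direct = streams

# Sum/difference spatial mapping as described above
Q = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
channels = Q @ streams                       # what the two RF chains transmit

# The receiver applies the inverse mapping (Q is its own inverse here)
recovered = Q @ channels
print("streams recovered:", np.allclose(recovered, streams))
```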

To cement the distinctions in my mind, I view streams and channels like this: streams are payload data, transformed to I/Q values on the OFDM subcarriers; and channels are the actual transmitted RF signals. For the direct mapped case, the streams become channels using the modulation processing we are familiar with, and that’s a useful mode of operation for RF troubleshooting. However, it’s likely not the common operating mode and it’s important to understand the difference!

For explaining channels and streams, then, this is a start. I haven’t said much about the implications for RF measurements, but will get to that in future posts.

Originally posted Dec 10, 2014

 

What is the role of a physical book in an electronic world?

 

I recently got a copy of the newest edition of Spectrum and Network Measurements by Bob Witte. This is the second edition, and it was a good time for an update. It’s been more than a dozen years since the previous one, and I think an earlier, similar work by Bob first appeared in the early 1990s. Bob has a deep background in measurement technology and was, among other things, a project manager on the first swept analyzer with an all-digital IF section. That was back in the early 1980s!

 

One of the reasons for the update is apparent from the snapshot I took of the cover.

 

The latest edition of Bob Witte’s book on RF measurements, with a real-time spectrum analysis display featured on the cover. Pardon the clutter but the book just didn’t look right without a few essential items.

 

The cover highlights a relatively recent display type, variously referred to as density, cumulative history, digital phosphor, persistence, etc. These displays are a characteristic of real-time spectrum analyzers, and both the analyzers and displays were not in mainstream RF use when the previous edition of the book appeared.

An update to a useful book is great, of course, but why paper? What about a website or a wiki or an eBook of some kind? Digital media types can be easily updated to match the rate of change of new signals, analyzers and displays.

In looking through Bob’s book I’ve been trying to understand and to put into words how useful it feels, in just the form it’s in. It’s different from an app note online, or article, or Wikipedia entry. Not universally better or worse, but different.

Perhaps it’s because while some things have changed in spectrum and network measurements, so many things are timeless and universal. The book is particularly good at providing a full view of the measurement techniques and challenges that have been a part of RF engineering for decades. It’s a reminder that making valid, reliable, repeatable measurements is mostly a matter of understanding the essentials and getting them right every time.

Resources online are an excellent way to focus on a specific signal or measurement, especially new ones. Sometimes that’s just what you need if you’re confident you have the rest of your measurements well in hand.

I guess that’s the rub, and why a comprehensive book like this is both enlightening and reassuring. RF engineering is a challenging discipline and there are many ways, large and small, to get it wrong. This book collects the essentials in one place, with the techniques, equations, explanations and examples that you’ll need to do the whole measurement job.

Of course there are other good books with a role to play in RF measurements. While Bob’s book is comprehensive in terms of spectrum and network measurements, one with a complementary focus on wireless measurements is RF Measurements for Cellular Phones and Wireless Data Systems by Rex Frobenius and Allen Scott. And when you need to focus even tighter on a specific wireless scheme you may need something like LTE and the Evolution to 4G Wireless: Design and Measurement Challenges*, edited by Moray Rumney.

All of these are printed books, with broad coverage including many examples, block diagrams and equations. Together with the resources you’ll find using a good search engine, you’ll have what you need to make better measurements of everything you find in the RF spectrum.

 *Full disclosure: I had a small role in writing the signal analysis section of the first edition of the LTE book. But it turned out well nonetheless!

Originally posted Dec 26, 2014

Streams multiply complexity but they can also add insight

Multiple-input multiple-output (MIMO) techniques are powerful ways to make efficient use of scarce RF spectrum. In a bit of engineering good fortune, MIMO methods are also generally most effective where they’re most needed: crowded, reflective environments.

However, MIMO systems and signals—and the RF environments they occupy—can be difficult to troubleshoot and optimize. The number of signal paths goes up with the square of the number of transmitters, so even “simple” 2×2 MIMO provides the engineer with four paths to examine. 4×4 systems yield 16 paths, and in some systems 8×8 is very much on the table!

All these channels and streams, each with several associated measurements, can provide good hiding places for defects and impairments. One approach for tracking down problems in the thicket of results is to use a large display and view many traces at once, the subject of my Big Data post a while back. Engineers have powerful pattern recognition and this is a good way to use it.

Another way to boil down lots of measurements and produce some insight—measuring condition number—is specific to MIMO. This trace is a single value for every subcarrier, no matter how many channels are used, and it quantifies how well MIMO is working overall. Sometimes not too well, as in this measurement:

This condition number trace is flat across the channel at about 25 dB. The ideal value is 0 dB, and the condition number must be weighed against the signal/noise ratio (SNR): signal separation and demodulation are likely to be very poor unless SNR comfortably exceeds the condition number.

The signal for the measurement above was produced with four linked signal generators, so SNR should not be a problem. However, the fact that the condition number is far above 0 dB certainly indicates that there is a problem somewhere.

Analysis software such as the 89600 VSA provides several other tools to peer into the thicket from a different angle. As mentioned previously, this 4×4 MIMO system has 16 possible signal paths, and they can be overlaid on a single grid. In this instance a dozen of the paths looked good, while four showed a flat loss about 25 dB greater than the others. That is suspiciously close to the 25 dB condition number.

Of course, when engineers see two sets of related parameters they tend to think about using a matrix to get a holistic view of the situation. That’s just what’s provided by MIMO demodulation in the 89600 VSA software as the MIMO channel matrix trace, and in this case it reveals the nature of the problem.

The MIMO channel matrix shows the complex magnitude of the 16 possible channel and stream combinations in a 4×4 MIMO system with spatial expansion. Note that the value of channel 4 is low for all four streams.

This MIMO signal was using spatial expansion or spatial encoding, as I described recently. Four streams are combined in different ways to spread across four RF channels. The complex magnitudes are all different—to facilitate MIMO signal separation—and very much non-zero.

All except for channel 4, where the problem is revealed. The matrix shows that the spatial encoding is working for all four streams, but one channel is weak for every stream. In this case the signal generator producing channel four had a malfunctioning RF attenuator, reducing output by about 25 dB.
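The failure signature is easy to reproduce numerically: scale one row of a unitary 4×4 spatial-expansion matrix by 25 dB and the condition number lands at exactly 25 dB. A hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 4x4 spatial expansion: a random unitary matrix, so all four
# singular values are 1 and the ideal condition number is 0 dB
q, _ = np.linalg.qr(rng.standard_normal((4, 4)) +
                    1j * rng.standard_normal((4, 4)))

H = q.copy()
H[3, :] *= 10 ** (-25 / 20)                  # channel 4 weak for every stream

s = np.linalg.svd(H, compute_uv=False)
print(f"condition number: {20 * np.log10(s[0] / s[-1]):.1f} dB")   # ~25 dB
```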

As is so often the case, the solution comes down to engineers using pattern recognition, deduction and intuition in combination with the right tools. For Keysight, Job 1 is bringing the tools that help you unlock the necessary insights.