
In the previous edition of The Four Ws, I reviewed the fundamentals of adjacent channel power (ACP). This time I’m discussing the WHAT, WHY, WHEN and WHERE of harmonic distortion measurements. Measuring harmonic distortion will help you validate the proper functioning of your device’s components and, in turn, avoid interference with systems operating in other channels.

 

What is harmonic distortion?

From simple continuous wave (CW) tones to complex digitally modulated signals, every real signal has some amount of distortion. One type of distortion to consider is total harmonic distortion (THD). The THD value indicates how much of your device’s signal distortion is due to harmonics: energy created at integer multiples of your signal’s frequency where none previously existed or should exist. This extra energy is frequently caused by nonlinearities in the transfer function of a circuit, component or system. In practical systems, nonlinearities typically arise from gain compression, transistor switching or source-load impedance mismatches.

 


Figure 1: A basic swept measurement made with an X-Series signal analyzer shows an 850-MHz signal with obvious harmonics on both sides.

 

To calculate THD, you determine the ratio of the sum of the powers of all the harmonic components to the power of your device’s fundamental signal:

THD (dBc) = 10 log10 [ (P2 + P3 + … + Pn) / P1 ]

where P1 is the power of the fundamental and P2 through Pn are the powers of the harmonics, all in linear units such as milliwatts. The resulting THD is stated in dBc.
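To make the arithmetic concrete, here is a minimal Python sketch (the power levels are hypothetical, not taken from Figure 1) that converts each measured power from dBm to linear units, sums the harmonics, and expresses the ratio in dBc:

    import math

    # Hypothetical measured powers: the fundamental plus harmonics 2 through 5, in dBm.
    p_fund_dbm = 10.0
    p_harm_dbm = [-35.0, -42.0, -48.0, -55.0]

    def dbm_to_mw(dbm):
        return 10 ** (dbm / 10)

    # Sum the harmonic powers in linear units (mW), then take the
    # ratio to the fundamental and express it in dB.
    harm_sum_mw = sum(dbm_to_mw(p) for p in p_harm_dbm)
    thd_dbc = 10 * math.log10(harm_sum_mw / dbm_to_mw(p_fund_dbm))
    print(f"THD = {thd_dbc:.1f} dBc")  # about -44 dBc for these values

The lower (more negative) the result, the purer the signal.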

 

Why and When to measure THD

THD is typically characterized during design validation and troubleshooting when you are confirming that your signal is behaving as expected. Your THD will indicate if your device’s surrounding harmonics will affect your signal quality or interfere with another device.

 

You want the THD to be as low as possible. A low THD implies that your device has a nearly pure signal, making it unlikely that its harmonics will cause interference. On the other hand, a high THD means that you may need to rework your design, because the distortion could degrade your signal quality or create interference in other channels.

Measuring THD can also be an effective indicator of overall signal performance. In an amplifier, for example, excessive THD indicates issues like clipping, gain compression, switching distortion, or improper transistor biasing or matching.

 

An example of Where distortion shows up and how you measure it

A simple, real-world example of harmonic distortion is found in audio speakers. Let’s say you’re playing a song from your phone and you hook it up to a speaker. If the speaker’s internal components – amplifiers and filters – give you an accurate reproduction of the song, then the speaker has a low amount of distortion. On the other hand, if those components give you a misrepresentation of the song, then it has a high amount of distortion. Therefore, you want your device’s THD value to be as low as possible to maintain good signal quality.

 

Another issue harmonic distortion can cause is interference with other signals. Since harmonic distortion is unwanted energy at the harmonics (integral multiples) of the fundamental frequency, the distortion can interfere with another device that is operating in the same band as the harmonic. Therefore, a low THD value is also a good indicator that interference is less likely to occur.

 

Your signal’s harmonics can be difficult to observe, and measuring them can be quite time-consuming if done manually. You’d have to identify all the harmonic power levels, sum them, and then find the ratio to the power of your device’s fundamental. That is a hassle.

 

However, some signal analyzers provide a built-in measurement that will automatically calculate THD for you. This can shorten your measurement time and ensure an accurate calculation.

 

Figure 2. The built-in harmonics measurement on an X-Series signal analyzer quickly calculates the THD for the same 850-MHz signal seen in Figure 1. In addition to THD, the measurement shows results for up to 10 individual harmonics.

 

Using the harmonics measurement shown in Figure 2, you can obtain the total harmonic distortion, and the results for up to ten harmonics, automatically. All you have to do is set the fundamental frequency, and the measurement takes care of the rest.

At each cycle, the analyzer performs an accurate zero-span measurement of the device’s signal and each of its harmonics. It calculates the level of each harmonic, as well as the total harmonic distortion of the signal, both of which are shown in dBc. The harmonic distortion measurement used in our example supports signals from simple CW to complex multi-carrier communication signals.

 

Wrapping up

Knowing the total harmonic distortion of your signal can help you evaluate whether your device will cause any interference with its own signal or with systems operating in other channels. If you identify troublesome harmonics, you’ll have to rework your design, perhaps adding a filter to remove them.

THD is just one of nine RF power measurements made easy with PowerSuite, a standard feature on the X-Series signal analyzers. If you’d like to learn more about power measurements, check out the PowerSuite page and the Making Fast and Accurate Power Measurements application note.

 

I hope my fourth installment of The Four Ws provided you with some worthwhile information. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful give it a like and, of course, feel free to share.

This week’s post is guest authored by Charlie Slater, Business Development and Operations Manager for Keysight Services.

 

These days, most organizations operate within one of two scenarios: cutting costs while delivering the same topline, or holding costs steady while increasing revenues. The third, less-common scenario is investing more to create a giant leap in output. If you’re in this fortunate group, confidence in future growth usually opens the door to major investments in property, plant and equipment (PP&E)—and the “E” in PP&E includes test equipment. Optimizing the management of test assets can help you create some semblance of order within the chaos.

 

Uncovering some unexpected side effects of rapid growth

Surprising problems can arise when your organization is moving at high speed. Several months ago I met with a manager in a high-growth company. Our purpose was to plan for onsite delivery of calibration services. When creating such a plan, key baseline information includes the location and condition of all in-hand test assets.

 

As we talked, it became clear that he had incomplete data about his company’s installed base of test equipment. Further discussion revealed the unexpected cause: the company’s engineers had extremely high purchasing authority, and pallets of new network analyzers and spectrum analyzers were coming in every day. The manager had virtually no idea what was arriving and limited visibility into what his engineers were actively using, or even whether the equipment was in working order.

 

Gaining control of test assets and getting more from each one

During chaotic growth, sticking to the basics can help contain spending and restore order to an organization. For the company described above, accurate tracking of all new and existing RF equipment helped get its inventory under control. Today, better monitoring enables compliance with internal and external quality standards, and this includes staying up to date with test-asset calibration.

 

The underlying solution is real-time tools that provide centralized visibility. This enhances productivity by letting managers and engineers find and reassign unused instruments rather than waiting for delivery of new ones.

 

For any organization, real-time monitoring can pinpoint instruments that are underused or idle. In many cases, the most cost-effective way to refresh a languishing-but-viable test asset is an update or upgrade—and new functionality may be just a download away. For hardware upgrades that require installation, the turnaround time is usually shorter than the lead-time for a new instrument.

 

Exploring all three scenarios

To learn more, check out our latest resources, including a white paper about how to best enable 4G to 5G migration and a case study about how one company improved the health of their test assets.

 

Please chime in with any and all comments. How have you tried to optimize your situation? What worked best and why?

In a rock band, the drummer keeps the beat steady and the other musicians follow the rhythm. The drummer keeps the entire band in sync. The same concept applies when you integrate multiple instruments into a test system. The individual instruments need to be synchronized, especially when you are making multi-channel RF measurements. Like a drummer, a trigger and a reference clock communicate the “beat” that synchronizes the instruments so they can make precise, time-aligned measurements. Let’s take a closer look at multi-channel measurements and how to achieve an accurate multi-channel test setup.

 

Multi-antenna RF techniques

Most modern wireless systems, whether in commercial applications or aerospace and defense, have adopted some kind of multi-antenna technique, such as MIMO (multiple input, multiple output), beamforming or phased-array radar. These techniques improve:

  • Spectral efficiency (bit/sec/Hz)
  • Signal quality
  • Signal coverage

 

For example, MIMO increases data rates by using two or more streams of data transmitted with multiple antennas. The antennas transmit the data on the same frequency and at the same time without interfering with one another, as shown in Figure 1, improving spectral efficiency within the same bandwidth.

 


Figure 1. A simplified 2x2 MIMO system with two transmitters and two receivers.

 

Keys to synchronize multiple instruments

While MIMO and other technologies deliver increased data rates, they also increase the number of antennas in a device. And, as the number of antennas increases, test complexity increases significantly. For example, the latest IEEE WLAN technology, 802.11ax, uses up to 8x8 MIMO. That means your test setup must have eight transmit channels and eight receive channels! And, it’s crucial that they are synchronized.

 

To synchronize your test system, there are three key elements: the trigger, the sampling clock, and the event-signal effects.

 

An easy method to synchronize multiple instruments is to use a trigger. A trigger is a coordination signal that is sent to each instrument in a test setup. When the trigger signal is detected, each instrument performs its task. Using a trigger signal ensures all your instruments are in sync. However, there are two sources of error that must be addressed:

  1. Sampling clock: Even when all the instruments being triggered are identical, for example your signal generators, the initial phase of each instrument’s sampling clock is random. To align the sampling clock of each instrument, use the same reference frequency for all the instruments.
  2. Event-signal effects: Cabling and external devices can affect how long it takes your trigger signal to reach each instrument. This is called trigger delay. These event-signal effects need to be accounted for so that your instruments still transmit or receive at the same time. Using a channel skew control on your master instrument allows for precise time synchronization between all channels.

 

Figure 2 illustrates two arbitrary waveform generators (AWGs) that are in time alignment. Here’s a quick review of the setup:

  • First, use a common frequency reference to synchronize the timing clocks for all instruments.
  • Second, connect the master's "trigger out" to the slave's "trigger in" connector. The AWG will start generating the signal after a trigger event is detected.
  • Finally, remove the effects of master-to-slave trigger delay to align the two waveforms. The trigger delay can be measured with an oscilloscope or a digitizer. Then, add the delay time to the master AWG.

 

This process also applies to analyzers. You can use one splitter to distribute signals to a multi-channel analyzer and measure the time differences among the channels. The relative delays of each channel can be compensated by application software. Having the timing synchronized between the instruments allows you to build a multi-channel RF test system.
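To illustrate that last step, here is a minimal Python/NumPy sketch of how application software might estimate the relative delay between two captured channels by cross-correlation. The 1 GSa/s rate, the noise test signal, and the 3-sample skew are all hypothetical stand-ins for real captures:

    import numpy as np

    fs = 1.0e9  # hypothetical digitizer sample rate: 1 GSa/s

    # Stand-in for two captures of the same broadband test signal,
    # with channel B lagging channel A by 3 samples (3 ns).
    rng = np.random.default_rng(seed=0)
    ch_a = rng.standard_normal(4096)
    ch_b = np.roll(ch_a, 3)

    # The lag of the cross-correlation peak is the channel-to-channel skew,
    # i.e., the value to dial into the skew (delay) control.
    xcorr = np.correlate(ch_b, ch_a, mode="full")
    lag = np.argmax(xcorr) - (len(ch_a) - 1)
    print(f"Estimated skew: {lag / fs * 1e9:.1f} ns")  # 3.0 ns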

 


 

Figure 2. These two AWGs (master and slave) are configured to generate time-aligned signals. To remove the effects of master-to-slave delay, it is necessary to delay the signal generated by the master.

 

Modular instruments can make implementation easier

As the number of synchronized channels increases, the cabling between instruments becomes much more complicated, and achieving proper time synchronization can take a significant amount of time. Modular instruments are based on standard instrumentation buses such as PXI, AXIe, and VXI, and they can share clocks and trigger signals through a backplane bus. This makes synchronization easier to implement and makes trigger events more repeatable, because the test environment is controlled with minimal cabling.

 

For example, a PXI trigger bus consists of eight trigger lines spanning the backplane connectors. The trigger lines are divided into three trigger bus segments: slots 1-6, 7-12 and 13-18. Figure 3 shows a simple PXI trigger setup with Keysight IO Library software.

 


 

Figure 3. PXI trigger setup using Keysight IO Library software. In this example there are eight trigger lines (0-7) and three bus segments. The trigger routing direction between the segment of each trigger line can also be configured.

 

Figure 4 below shows two PXI chassis being used as a WLAN 802.11ax test solution that fully supports 8x8 MIMO. The PXI backplane bus routes trigger signals to target modules for eight-channel signal generation and acquisition. This system takes advantage of PXI standards that minimize a chassis’ slot-to-slot trigger time and clock skew to hundreds of picoseconds. The result is very accurate timing synchronization, so you don’t need to make any adjustments for MIMO transmitter and receiver testing.

 


Figure 4. WLAN 802.11ax test solution that fully supports 8x8 MIMO configuration in two PXI chassis.

 

Trigger and Time Synchronization Lead to Better Testing

To effectively test today’s multi-channel devices, you must perform tightly synchronized, multi-channel signal generation and analysis. With accurate triggering among the instruments, you ensure that all measurements start at precisely the right time. (If you require carrier phase coherency, you will also need to use a common local oscillator (LO) reference.) To simplify your test synchronization, consider a modular test system that allows easier integration of multiple instruments into a multi-channel test system.

 

If you’d like to know more about instrument interactions, refer to the following application note: Solutions for Design and Test of LTE/LTE-A Higher Order MIMO and Beamforming.

 

If you like this post, give it a like and feel free to share. Thanks for reading.

As you walk through your lab, take a look at each RF bench. How old are your signal and network analyzers? How often are they kludged together to create one-off measurements? How recently have your engineers bugged you about getting new equipment that can actually test your latest RFIC?

 

I’m here to help you make a stronger case when your team’s success depends on timely access to better RF instruments. This post introduces language, concepts and solutions that will help you influence purchase decisions and improve your chances of getting the right tools at the right time. When you apply these ideas, your newfound business sense may surprise—if not impress—your boss or boss-squared.

 

Understanding your current reality

Day to day, you deal with competing objectives: delivering excellent results while staying within stringent constraints. From a high-level business perspective, there are three ways to do this: cut costs and deliver the same topline; hold costs steady and increase revenues; or invest more to create a giant leap in output. These days, most organizations operate within the first two scenarios while fast-growing companies chase the third.

 

Getting the right tools at the right time (and place)

Whichever situation you face, one of your biggest issues is likely to be test equipment. In fluent “manager speak,” “test assets” are often your organization’s most “underutilized assets.” Why? Because it’s difficult to confidently determine two crucial bits of information: the location of every instrument and how much each one is truly being utilized.

 

For you and your team, easy access to the right tools enables everyone to do their best work and stay on schedule. Applying manager-speak once more: for “technical staff,” “highly available” test equipment can be a “high-leverage asset.”

 

Pushing for better decisions in less time

An accurate view of location and utilization is essential to making credible decisions in less time: Do you need to purchase or rent additional equipment? Is it better to redeploy, upgrade, trade in or sell some of your existing gear?

 

A few basic changes can provide three big benefits: better visibility, improved utilization, and reduced expenses (capital and operating). The starting point is a solution that puts real-time information at your fingertips. Relevant information about test-asset location and utilization is essential to greater availability and improved productivity.

 

Taking the next steps

Being able to make quick, thoughtful decisions on how to best equip your engineers with the right tools is the foundation for a successful organization. To learn more, check out our latest resources to better understand how to drive down your total cost of ownership.

 

Please chime in with any and all comments. How difficult has it been to get the test tools your team needs? What techniques have you used to help make it happen?

Prove yourself as an engineer! The Schematic Challenge is the perfect opportunity to test your skills. On March 12, 13, and 14, we will be posting a new schematic or problem-solving challenge. If you, as a community, are able to answer questions 4, 5, and 6 correctly by Thursday, March 15 at 11:59 PM MST, we will add three 1000 X-Series oscilloscopes to the overall Wave 2018 giveaway! Answers should be posted in a comment on the #SchematicChallenge posts on the Keysight Bench or RF Facebook pages. Work with your family, friends, coworkers, or fellow engineers in the Wave community to solve these problems. If you haven’t already, be sure to register for Wave 2018 at wave.keysight.com.


Question 4:

By Ryan Carlino

 

Status: SOLVED! (A=1 and B=2)

 

(Schematic: Wave 2018 Schematic Challenge, Week 2, Question 4)

Given this circuit and assuming an ideal op-amp powered by +/-5V and ideal resistors, calculate the output voltage with respect to the input. Vin will be limited to +/-1V.

Express this transfer function like this:
Vout = A*Vin + B

The answer being posted should be a single number AB. For example, if A=4 and B=7, the answer you should post is 47.

 

Question 5:

By Jonathan Falco and Lukas Mead

 

Status: SOLVED! (90 MHz)

 

At what integer frequency, in MHz, should the LO be set to allow the RF input range to be seen at OUT?

 



Question 6:

By Barrett Poe

 

Status: SOLVED! (4-10-8-8)

 

You are asked to design the front end of a 10 MHz oscilloscope. The “front end” refers to the internals of an oscilloscope between the probe and the analog-to-digital converter (ADC). Your system requires you to take a +/-10V input signal and output a 0-3.3V signal to the ADC input, which is terminated in 50 ohms. Your circuit must scale, offset, and filter the incoming signal, then rescale it to the full range (within 10%) of the ADC’s reference voltage without clipping the sampled signal.

 

Oh no! You also just discovered your supplier has discontinued your favorite ideal operational amplifier (op-amp). Your next two best choices are:

  • An op-amp with 1 pF of capacitance on the inputs
  • An op-amp with 10 pF of capacitance on the inputs

Make sure your design works with both of these back-up options. Note, however, that you will only ever use matching parts together: two 1 pF op-amps or two 10 pF op-amps, never one of each.


Also keep in mind that op-amp output voltage cannot exceed the supply rails.

Output voltage = 0 to 3.3V; ensure Vout is within +0%/-10% of the ADC range for maximum input across the bandwidth.

Frequency = DC to 10 MHz

 

Assign a value to variables a, b, c, and d. The final answer to be posted on Facebook should be expressed as: a-b-c-d. For example, if a = 8, b = 6, c = 12, and d = 10, then the answer should be expressed as: 8-6-12-10.

 

HINTS:

The variable “a” is equal to one of these three options

  • c-1
  • 4
  • 4b

The variable “b” is equal to one of these three options

  • c+1
  • 4
  • 10

The variable “c” is equal to one of these three options

  • b/2
  • 6
  • 2*a

The variable “d” is equal to one of these three options

  • 8
  • (b+2)/3
  • 10

Helping You Achieve Greater Performance and Fast Measurement Speed

At an exhibition demo booth, an engineer complained to me about the measurement speed of a PXI oscilloscope. To make a measurement, he programmed the data acquisition and post-analysis himself, and the test took over a minute to produce each result. I told him that he didn’t have to do all of that; all he needed to do was set up the measurement on the oscilloscope and fetch the measurement result directly. The process should only take a couple of microseconds. An on-board ASIC helps minimize data transfer volumes and speed up analysis!

 

Just as an oscilloscope has on-board digital signal processing, RF signal analysis tools have on-board processing to accelerate measurement speed.

 

RF Measurement Challenges

For RF signal analysis, it’s common to frequency-shift the RF signal to an intermediate frequency (IF) so that you can use a high-resolution digitizer for high-dynamic-range signal acquisition. The digitized data then gets sent to a PC for analysis. However, the complexity of this analysis increases with today’s wireless communication systems, such as 5G technologies and the 802.11ax standard. Measuring these systems can involve complex modulation schemes (e.g., orthogonal frequency-division multiplexing, OFDM), carrier aggregation, or MIMO (multiple input, multiple output) signals.

 

These complications require significant signal processing, which in turn slows the measurement speed. This is a challenge as measurement throughput is critical in most applications, especially in high volume production testing.

In most signal analyzers, a digitizer is an indispensable component. For wider-bandwidth analysis, you need a high-speed digitizer to capture signals. At the heart of a high-speed digitizer is a powerful FPGA or ASIC that processes data in real time. This allows data reduction and storage to be carried out at the digital level, minimizing data transfer volumes and speeding up analysis.

 

A key feature often available on digitizers is real-time digital down conversion (DDC). In frequency-domain applications, DDC allows engineers to focus on a specific part of the signal at higher resolution and transfer only the data of interest to the controller/PC. It works directly on the ADC data, providing frequency translation and decimation, sometimes called “tune” and “zoom.” The block diagram in Figure 1 illustrates the basic concept of DDC.

 

Digital down-converter block diagram

Figure 1. Digital down-converter block diagram

 

How DDC Works

The frequency translation (tune) stage of the DDC generates complex samples by multiplying the digitized stream of samples from the ADC with a digitized cosine (in-phase channel) and a digitized sine (quadrature channel).

The in-phase and quadrature signals can then be filtered to remove unwanted frequency components. Then, you can zoom in on the signal of interest and reduce the sampling rate (decimation).
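Here is a minimal Python sketch of those tune and zoom stages using NumPy and SciPy. The sample rate, IF, and FM test signal are hypothetical, and a real DDC does this in FPGA or ASIC hardware rather than in floating point:

    import numpy as np
    from scipy import signal

    fs = 100e6    # hypothetical digitizer rate: 100 MSa/s
    f_if = 20e6   # IF carrier to tune to
    decim = 10    # zoom factor: keep a 10 MSa/s complex stream

    # Hypothetical digitized IF signal: a 20 MHz carrier with 100 kHz sinusoidal FM.
    t = np.arange(100_000) / fs
    x = np.cos(2 * np.pi * f_if * t + 5 * np.sin(2 * np.pi * 100e3 * t))

    # Tune: multiply by a complex exponential (the digitized cosine and
    # sine of Figure 1) to shift the carrier down to 0 Hz.
    iq = x * np.exp(-2j * np.pi * f_if * t)

    # Zoom: low-pass filter to the reduced bandwidth, then decimate.
    lpf = signal.firwin(129, 0.8 * (fs / decim) / 2, fs=fs)
    iq_out = signal.lfilter(lpf, 1.0, iq)[::decim]
    fs_out = fs / decim  # only this 10 MSa/s I/Q stream moves on for analysis

Discarding the out-of-band noise before decimating is also where the SNR benefit discussed later comes from: roughly 10·log10(decim) of processing gain, about 10 dB in this sketch.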

 

Finally, the on-board processor sends only the data you care about (I/Q data) to the on-board memory for further analysis. Most of Keysight’s digitizers and signal analyzers implement DDC to accelerate measurement speed and demodulation.

 

In addition, you can perform an FFT on the I/Q data in parallel for spectral analysis. Some signal analyzers can do real-time FFT processing (nearly 300,000 FFTs per second) and use comprehensive spectrum displays (density and spectrum) so that you won’t lose any agile signals on the screen, as shown in Figure 2.

 

Figure 2. Real-time spectrum analysis in the 2.4 GHz ISM (industrial, scientific and medical) band

 

Benefits and Limitations of a High-Speed Digitizer with DDC

Using a high-speed digitizer with DDC for your RF testing can be significantly more efficient:

  1. The frequency translation (tune) reduces both on-board memory and data-transfer requirements. The resulting data is in complex form (I + jQ), which can be used directly for demodulation analysis, accelerating measurement speed.
  2. Digital filtering and decimation (zoom) reduce the wideband integrated noise and improve overall SNR.

 

However, there are some limitations with DDC implementation:

  1. The ADC's sampling rate is limited. It's not possible to digitize the high-frequency carrier directly. A common workaround is to use an analog circuit to bring the carrier to an IF so the digitizer can acquire the signal.
  2. The ADC's dynamic range is also limited. In wireless communication systems, you may need to capture both large and small signals at the same time.

 

New generations of high-speed and high-resolution ADC technologies provide excellent resolution and dynamic range into the tens of GHz, which allows you to capture high-resolution wideband waveforms. DDC accelerates measurement speed and increases processing gain to improve performance.

 

Furthermore, the I/Q data can be processed further for advanced real-time signal analysis or sent to a customized FPGA for user-defined signal processing algorithms. These capabilities give you better RF measurement fidelity, signal integrity, and higher measurement throughput.

 

If you’d like to learn more about wideband signal acquisition, refer to the white paper Understanding the Differences Between Oscilloscopes and Digitizers for Wideband Signal Acquisitions to see what you should be using for your application.

For any RF engineer, there comes a time in a product’s design cycle when you need to test your device to make sure it’s behaving as you expect. There are different ways you can view your device’s signal, which brings us to why measuring signals in the time domain and the frequency domain is the same, but not: both convey the same signal, just in different ways.

 

Figure 1. The time domain of a signal on the left, and the frequency domain of the same signal on the right. The time domain displays amplitude versus time, whereas the frequency domain displays amplitude versus frequency.

 

By properly combining a spectrum, or collection of sine waves, you can construct the time-domain view of your signal. It shows your signal’s amplitude versus time and is typically viewed using an oscilloscope. Why would you want to view your signal in the time domain, you ask? Basically, a time-domain graph shows how a signal changes with time, letting you see instances where the amplitude differs.
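As a quick illustration of combining sine waves into a time-domain view, this small Python sketch sums the first few odd harmonics of a 1 kHz tone. The running sum converges toward a square wave in the time domain, while each term of the sum would appear as a single line in the frequency domain:

    import numpy as np

    f0 = 1e3                        # 1 kHz fundamental
    t = np.linspace(0, 2e-3, 2000)  # two periods

    # Fourier series of a square wave: odd harmonics weighted by 1/n.
    square = np.zeros_like(t)
    for n in (1, 3, 5, 7, 9):
        square += np.sin(2 * np.pi * n * f0 * t) / n
    square *= 4 / np.pi             # approximates a +/-1 square wave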

 

Viewing your device’s signal in the time domain doesn’t always provide all the information you need. For example, in the time domain you can tell that a signal of interest is not a pure sinusoid; however, you won’t know why. This is where the frequency domain comes in. The frequency-domain display plots the amplitude of each sine wave in the spectrum versus its frequency. This may help you discern why your signal isn’t the pure sinusoid you were hoping for.

 

Figure 2. Harmonic distortion test of a transmitter, which is most appropriately measured using a spectrum analyzer in the frequency domain.

 

The frequency domain can help answer questions about your signal that you wouldn’t be able to resolve in the time domain. However, this doesn’t mean that you can scrap measuring signals in the time domain altogether. The time domain is still better for many measurements, and some measurements are only possible in the time domain. Examples include pulse rise and fall times, overshoot, and ringing.

 

But just as the time domain has its advantages, so does the frequency domain. For one, the frequency domain is better for determining the harmonic content of a signal (as seen in Figure 2). So, those of you in wireless communications who need to measure spurious emissions are better off using the frequency domain. Yet another example is spectrum monitoring. Government regulatory agencies allocate different frequencies for various services. This spectrum is then monitored because it is critical that each of these services operate at its assigned frequency and stay within its allocated channel bandwidth.

 

While measuring signals in the time domain and frequency domain is similar, it is also very different. Each domain conveys the same signal, but from different perspectives. This enables us engineers to get more insight into how our device is behaving and ultimately develop better products for our customers.

 

To build a stronger foundation in signal analysis that will help you deliver your next breakthrough, check out the Spectrum Analysis Basics app note. Please post any comments - positive, constructive, or otherwise - and let me know what you think. If this post was useful give it a like and, of course, feel free to share.


Passing Along the Magic

Posted by benz Feb 20, 2018

  Demystifying technology, and marking five years of The RF Test Blog

For several years, I co-coached two middle school robotics teams. It was a great experience, and I learned at least as much as I taught—though generally about different subjects!

Some of the kids gravitated toward the robot mechanisms, while others found a natural focus on the programming side. I suppose that’s part of the intent of robotics clubs, mixing hardware and software to increase the chances of inspiring kids to pursue STEM studies and careers.

Ironically, our success with ever-more-complex technology may create some barriers to getting kids interested in it. During a club meeting one afternoon, I was vividly reminded of Arthur C. Clarke’s famous quote, “Any sufficiently advanced technology is indistinguishable from magic.” While the kids were working with robots and laptops, virtually all of them were carrying a magical device that was even more advanced: their mobile phone.

These thin slabs of metal, glass, and plastic, invisibly connected to the rest of the universe, could be expected to do just about anything when equipped with the right app. Seeing something so magical being taken so thoroughly for granted, I understood why some kids weren’t all that captivated by the robots.

That realization left me a bit troubled, and I wondered about other ways to get the kids engaged.

A partial answer came later in the semester. My co-coach had the brilliant idea of devoting one club session to the dismantling of technology. She brought in some older devices, working or not, including an early digital camera, a portable CD player, and a slider-type mobile phone. We gave the kids some small screwdrivers and turned them loose to get a glimpse behind the engineering curtain.

I was amazed at the spike in enthusiasm and engagement, especially from some kids who had previously been marginal participants. Once they reasoned out how to open the devices and free the contents, they then delighted in showing others how they thought the parts actually worked. They got an especially big kick out of the tiny motor and attached eccentric that vibrated the phone. It was the one recognizable part of the device that moved!

My take-away: if we want to pass along our interest in creating the magic of new technologies—and solving the attendant problems—we need to keep our eyes open to new approaches to communicate and share.

That’s what we were thinking five years ago when we started this blog. Since then, it has been a delight to learn about RF technologies and share the results with you. I very much appreciate your indulgence as I’ve wandered from Loose Nut Danger (the first post) to MIMO to the technology of furry hoods.

It’s now time to pass along the writing of this blog to a new generation, with their own perspectives, insights, and peculiar interests.


Meet the new primary writers of Keysight’s Better Measurements: The RF Test Blog, clockwise from upper left: Eric Hsu, Vandana Duff, Nick Ben, and Tit Bin Teo

Nick Ben has already written several guest posts here, and I think this blog will benefit from the new writers’ wider range of interests and experience. I look forward to following where they lead.

As for me, I plan to pursue my interests in a direction that looks more like retirement, with increased opportunities to learn and to teach, coach, and share.

  Fortunately, we can make things better—for your signal analyzer

 

Note from Ben Zarlingo: This guest post comes to us from Bill Scharf, a Keysight engineer with long experience in microwave signal analysis.

 

If you use even a gently aged spectrum analyzer, you may sometimes wonder why its amplitude accuracy above about 4 GHz is slightly worse than when it was new or after it has been freshly calibrated. Personally, I sometimes wonder why I cannot do the things I did when I was 20 years old.

 

In both cases aging is occurring. Although nothing is technically broken, we can make things better without magically locating a certain DeLorean car equipped with a flux capacitor and then driving 88 mph, hoping for a lightning strike, and traveling back in time.

 

What is a preselector, and why would it drift?

If we assume the instrument is a Keysight X-Series signal analyzer, what has probably happened is the preselector, sometimes called a YIG-tuned filter or tracking filter, has drifted a bit—but not enough to cause an out-of-specification situation. In an X-Series analyzer, the preselector is located in the signal path between the input attenuator and the first mixer, and it is used only at tuned frequencies of 3.6 GHz and higher.

 

The filter bandwidth should be wide enough to measure the desired signal, yet narrow enough to reject image frequencies and undesired signals (which may overload the first mixer). Depending on the tuned frequency, the bandwidth of the filter ranges from about 40 MHz to 75 MHz. Filter shape and ripple across the passband also vary with tuned frequency. As the analyzer tunes, the preselector filter tracks the change and provides a “centered” passband at the current frequency, as shown below.


Typical passband response of a YIG-tuned preselector

Instrument software automatically handles most of this preselector tuning; however, careful adjustment of the instrument will help deal with the rest.

 

Ensuring better performance

As the instrument ages, especially its preselector assembly, the filter bandpass will drift. As a result, the signal being measured might fall in an area of passband ripple or on a steeper portion of the filter response. Here are three tips to help ensure the best performance:

  • For the absolute best amplitude accuracy, use the Preselector Center function (accessible via the front panel or SCPI). It uses internal calibration signals to vary the preselector filter tuning in real time and obtain the best possible tuning. Be forewarned that this routine is time-consuming. If you need the very best amplitude accuracy using the preselector, re-center the preselector at each measurement frequency.
  • Every three to six months, run the Characterize Preselector routine. This performs “preselector centering” at various pre-determined frequencies up to the maximum frequency range of your analyzer. The analyzer stores the tuning values and automatically uses them the next time the analyzer is tuned to those frequencies. One advantage: after this routine runs, you may not need to rely on the slower Preselector Center routine (above). No external equipment is required: simply press System, Alignments, Advanced, then select Characterize Preselector.
  • Bypass the preselector filter. If your instrument contains option MPB, microwave preselector bypass, you can select the bypass path and remove the preselector from the signal path. The downside: the instrument is no longer filtering the input signals (i.e., it isn’t “preselected”), so depending on the span setting you may see image frequencies that are not being rejected and that appear at the first mixer. The advantages: the bandwidth is about 800 MHz at the first mixer, preselector drift is no longer an issue, and measurement speed may increase because the instrument is no longer trying to avoid oversweeping the preselector filter.

More detail is available in our preselector tuning application note.

Wrapping up

Three closing comments: First, the “Y” in the YIG-tuned filter, when inverted, is almost the same as the schematic symbol for the flux capacitor. Second, if you are more than 20 years old, use a knee brace when running marathons, thereby avoiding future trips to the hospital. Third, if you have an X-Series analyzer, you can use the Characterize Preselector routine to optimize accuracy between periodic calibrations.

  Adapting to future circumstances instead of expecting to anticipate them

 There’s nothing inherently wrong with trying to predict the future, whether that of technology or any other area. It’s easy to understand why “trying” is an attractive pastime, but expecting consistent success is where engineers and others may run into trouble.

Instead, I suggest that engineers use their super-powers of creative adaptation.

My jaded attitude toward predictions comes from work I did a couple of decades ago, forecasting the future sales (0 to 18 months out) of about two dozen measurement products. I put my analytical skills to work with some modest success, but a little honest self-appraisal left me doubting that I’d added real value. Sometimes I was just lucky, and it was hard to take much satisfaction from that.

Some research into the general landscape of prognostication left me wondering if maybe the universe was actually hostile to the whole enterprise of predicting the future. Or if not actively hostile, then resistant in a passive and maddening way.

Back then, greater minds than mine had repeatedly come to grief in such prediction efforts, including a group of brilliant academics and bureaucrats in Japan. They were economists and mathematical modelers, and despite their dedication and diligence, they were no more successful than I was.

In this situation the obvious question was to ask what kind of approaches, if any, were effective in somehow handling the important unknowns the future held. If you accept that you can’t reliably predict the future, what can you do?

In short, you can adapt as the future arrives. To be more successful than others in your field, you can work to adapt faster and better than they do. In fact, you may be able to speed up the process by pre-adapting using techniques such as scenario planning. In scenario planning, multiple possible futures are considered, and steps are taken in advance to outline carefully considered responses to the ones judged most likely to occur.

Scenario planning is usually thought of as a large-scale strategic activity, but you may already be doing it with a narrower view. For example, your designs may be anticipating a clear price/performance trend in either digital signal processing or analog semiconductors such that your product will be ready for the new leading edge. Tactically, this may mean implementing a modular design that lets you drop in the new elements as soon as they’re available in quantity.

As much as I’ve been disappointed in our collective inability to accurately predict the future, I have been repeatedly impressed by the ability of designers to adapt as technologies and markets evolve. Take wireless networking as one example.


The original designers of Michigan Stadium anticipated that it would need to hold more than 100,000 people, and designed its footings accordingly. However, they had no conception of a future where the vast majority of the fans would be carrying wireless telephones and would expect mobile network or Wi-Fi access. (photo from Wikipedia)

The original Wi-Fi standard, 802.11, emerged more than 20 years ago, in 1997. 3G telecom networks began appearing a few years later. In the years since, growth in all dimensions—users, connected devices, infrastructure, and data rates—has been enormous and continuous. It’s likely to continue at a similar pace for years to come.

While the original standards couldn’t handle these demands, creative engineers were—and still are—constantly working to adapt and expand. They continue to succeed beyond the expectations of many, including me.

They’re also making it clear that predicting the future is less important than creatively responding to shifting demands, expectations, and technologies. That multi-dimensional creativity has included OFDM, OFDMA, MIMO (including multi-user MIMO), beamforming, carrier aggregation, and manufacturing techniques that make microwave and millimeter devices practical and affordable.

Twenty years after my forecasting adventures, the underlying lesson—and my suspicion about nature’s hostility to prediction—remain the same: count on relentless change, and rely on your adaptability and creativity. It’s OK to burn a few mental cycles speculating about what’s over the horizon, but our real power lies in our ability to solve problems and optimize designs when the future becomes the present.

  In praise of the humble power sensor

It’s always nice to get a reality check from a fellow RF engineer, and Keysight’s Eric Breakenridge recently delivered one in the form of an explanation of the capabilities of modern RF power sensors.

I guess I’ve become a bit of a measurement snob, having spent many years working with vector signal analyzers (VSAs). When we developed and introduced these products 25 years ago, we were really enthusiastic about a new tool that would tell us virtually anything about the most complex RF signals.

Measurements with the VSAs weren’t just comprehensive, they also had unprecedented accuracy, including RF power. They were significantly more precise than the spectrum analyzers of the time, especially on time-varying signals or those with complex modulation.

Looking back, however, I remember the RF engineers who were developing VSAs also had power meters and power sensors on their benches, and used them frequently. Those power meters and sensors were the benchmarks for our nascent VSAs, and the new analyzers would never have achieved such exceptional accuracy without them.

Power sensors—whether they’re connected to power meters or to PCs via USB or LAN—are relatively inexpensive and have advantages that ensure the ultimate in power accuracy. For one, you can attach the sensors directly to the DUT, eliminating cabling and adapters. Also, many sensors are designed for specific frequency ranges, letting them cover the frequencies in question with excellent impedance match and accuracy—and that accuracy is highly traceable.

The sensors, as I learned from Eric, can also make great measurements of power versus time. Here’s his example, a measurement of the time to switch the gain state of an amplifier.


Two measurements from the U2042XA X-Series power sensor show the time to switch the gain state of an amplifier. Power is shown in watts (top) and decibels (bottom). The default 10 percent and 90 percent reference points have been adjusted to better reflect the time for the gain to reach its final value.
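For the curious, the 10/90 percent calculation behind that display is simple to sketch in Python. The trace here is made up, and the sensor and its software do the equivalent for you:

    import numpy as np

    dt = 50e-9  # hypothetical trace spacing: one power reading every 50 ns

    # Made-up gain-switch trace in watts: low state, a ramp, high state.
    power_w = np.concatenate([np.full(200, 0.1),
                              np.linspace(0.1, 1.0, 40),
                              np.full(200, 1.0)])

    # Reference levels at 10% and 90% of the low-to-high step.
    lo, hi = power_w[:100].mean(), power_w[-100:].mean()
    p10 = lo + 0.1 * (hi - lo)
    p90 = lo + 0.9 * (hi - lo)

    # The first samples to cross each reference level bound the switching time.
    t_switch = (np.argmax(power_w >= p90) - np.argmax(power_w >= p10)) * dt
    print(f"10-90% switching time: {t_switch * 1e9:.0f} ns")

Adjusting the reference points, as in the measurement above, just means changing the 0.1 and 0.9 factors.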

The USB and LAN power sensors can be connected to PCs and used with the power meter application in Keysight's BenchVue software. That application provides both graphical results and compiled tabular data such as this pulse analysis table.


When connected to an X-Series power sensor, the Power Meter Application assembles a series of measurements and creates a complete summary of pulse characteristics.

In addition to benchtop configurations, the LAN models (via power-over-Ethernet) are useful for remote monitoring, placing the sensor right at the DUT. Multiple sensors can be used with a single PC.

The power sensors have a wide dynamic range of –70 dBm to +26 dBm and sample intervals as short as 50 ns. While they can’t match the speed, sensitivity, or selectivity of signal analyzers, their performance is a good fit for many applications, and the combination of low cost and measurement accuracy can help you make better use of the more-expensive signal analyzers in your lab. A power sensor demonstration guide shows some example measurements and configurations.

I don’t suppose anything will dull my esteem for VSAs, but my recent exposure to power sensors and the sophisticated power tools in my previous post have made me a little less of a measurement elitist. Whatever gets the job done the best!

  Understand, anticipate, and respect your power limitations

Are IoT and other smart/connected devices the biggest wireless opportunity right now, or the biggest source of hype? I suppose they can be both at the same time, and it’s clear that lots of devices will be designed and sold before we know the true magnitude of this wireless segment.

A crucial element of many wireless devices is operation on battery power. In recent years, this has often meant lithium-ion batteries that are recharged once every day or two. These days, however, lots of design effort is shifting to devices that use primary batteries, ranging from traditional alkaline cells to button and lithium coin cells. These power sources are expected to last months, if not a year or more, despite their small size.

Meeting these power demands will require careful engineering—both RF power and DC power—and a holistic approach, to give you confidence that you’ll get the needed combination of performance and real-world functionality. This is a field with lots of investment and competition, meaning you may not have a second chance to fix a design failure or a development delay.

Exceptional power efficiency doesn’t happen by accident, and it isn’t a result of some tuning or tweaking at the end of the design process. Instead, it starts when devices are being designed, and overall success stems from a sustained process of measuring and optimizing. Two aspects of test & measurement are worth special note in designing for very low power: 

  • Using a power source with realistic limitations
  • Precisely measuring power consumption in all modes of operation, and during transitions

When powering a device or circuit, using a benchtop power supply can actually hide problems from you. Primary cells, especially when they’re very gradually going flat, can be highly imperfect power sources, and their imperfections can change with aging and temperature. Some precision power sources are now available to emulate real-world cells.

 


Keysight’s B2961A/62A low-noise power sources can emulate the DC voltage/current output characteristics of many different power sources, providing insight into real-world behavior in limited power conditions.

These advanced power sources can give you early warning of DC power problems while there’s still time and flexibility to design around them. They can also emulate power sources such as solar cells, with their very non-battery characteristics.

As always, if you’re going to optimize something, you have to measure it. On the power measurement side, extended battery life may require the ability to measure small currents, and perhaps a form of power scheduling to avoid excessive demand from simultaneous digital and RF activities. Whether you’re using a real battery or an emulator, instruments such as a DC power analyzer can tell you how much power is being used, and just when it’s needed.


On the Keysight N6705C, the dynamics of current consumption are shown over 30 ms in scope view (left) and over 30 seconds in data logger view (right). Measurements such as these provide a more complete understanding of the real-world power demands of a device or subsystem.

The use of periodic quiescent states is one proven technique for extended battery operation, and it presents its own measurement challenges. Extremely tiny currents must be assessed to understand cumulative consumption, and recent products, such as the Keysight CX3300A device current waveform analyzer, are meant for just that. These analyzers have both analog and digital inputs, and the ability to time-align measurements of both.

In this post I’ve drifted away from my usual focus on RF measurements but, of course, the core concern for us in these DC power issues is to ensure that RF matters are proceeding as they should, no matter the state of DC power. Fortunately, there are ways to use the new power analyzers to trigger RF signal analyzers and thus correlate DC power with RF power and modulation, and that’s a subject for a future post.

  Are you a good spectral citizen?

 

Note from Ben Zarlingo: This is the third in our series of guest posts by Nick Ben, a Keysight engineer. In this post he provides an overview of adjacent channel power, a measure of how well your products play with others.

 

In the previous edition of The Four Ws, I discussed the fundamentals of noise figure. This time I’m discussing the WHAT, WHY, WHEN and WHERE of adjacent channel power (ACP) measurements so that you can ensure your device is only transmitting within its assigned channel and doesn’t interfere with signals in undesignated adjacent channels.

 

What

A key requirement for every wireless transmitting device is that it should only transmit within its assigned channel. To make sure of this, adjacent channel power (ACP) measurements determine the average power or interference a transmitting device generates in the adjacent channels, compared to the average power in its assigned channel. This ratio is known as the adjacent channel power ratio (ACPR). ACP measurements use a reference level of 0.00 dBm, and you want the ACP to be as low as possible. A poor ACPR is an indication of spectral spreading or switching transients in the device under test (DUT), which are a big no-no.


A look at generated power in a transmitting device’s adjacent channels. Adjacent channels are located above and below the generated power in the transmitting device’s designated channel. The ratio of the two gives you your ACPR.

 

Why and When

ACP is key in ensuring we avoid interference with other signals in adjacent channels where your device has not been licensed (from a regulatory body or agency like the Federal Communications Commission) to transmit.

 

The measurement is made on digital traffic channels. However, it is especially important to make the ACP measurement when there may be more stringent requirements beyond regulatory licensing. For example, Bluetooth, LTE and W-CDMA have ACP as part of their physical layer requirements.

 

Where (& How)

When using a spectrum analyzer, the results of an ACP measurement are displayed as a bar graph, as spectrum data, or as a combination of the two, with data at specified offsets. They can also be displayed as a table that includes the actual power of the adjacent channels and of your transmitting device’s channel in dBm, along with the power relative to the carrier in dBc for both the upper and lower sidebands.

 

As described earlier, to determine ACPR you have to integrate the power in your assigned channel and the power in the adjacent channels, and find the ratios between the integrals.  Keysight’s ACP PowerSuite measurement simplifies this process to give you fast results without manual calculations. All you do is set the channel frequency, bandwidth, and channel offsets for your signal’s specifications. The ACP PowerSuite measurement on X-Series signal analyzers takes care of everything else.
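If you did want to do the calculation yourself, it would look something like this Python sketch. The sample rate, channel plan, and noise-shaped stand-in for the transmitter output are all hypothetical; PowerSuite performs the equivalent integration for you:

    import numpy as np
    from scipy import signal

    fs = 50e6               # hypothetical complex I/Q capture rate
    bw, offset = 5e6, 5e6   # channel bandwidth and adjacent-channel offset

    # Stand-in for the captured transmitter output: noise shaped into
    # a roughly 5 MHz-wide "carrier" centered at 0 Hz.
    rng = np.random.default_rng(seed=1)
    iq = rng.standard_normal(2**18) + 1j * rng.standard_normal(2**18)
    iq = signal.lfilter(signal.firwin(255, bw / 2, fs=fs), 1.0, iq)

    # Estimate the power spectral density, then integrate it channel by channel.
    freqs, psd = signal.welch(iq, fs=fs, nperseg=4096, return_onesided=False)
    freqs, psd = np.fft.fftshift(freqs), np.fft.fftshift(psd)
    df = freqs[1] - freqs[0]

    def channel_power_db(f_center):
        mask = np.abs(freqs - f_center) <= bw / 2
        return 10 * np.log10(psd[mask].sum() * df)

    carrier = channel_power_db(0.0)
    print(f"Upper ACPR: {channel_power_db(+offset) - carrier:.1f} dBc")
    print(f"Lower ACPR: {channel_power_db(-offset) - carrier:.1f} dBc")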


Keysight Technologies EXA Signal Analyzer displaying a transmitter output using the ACP measurement, which is one of nine power measurements in the X-Series PowerSuite. Ideally a good signal should not go outside the transmitted channel (purple) into adjacent channels (green). Channel power ratios are shown in the table below the spectrum/channel bar display.

 

Wrapping Up

If you’d like to learn more about fundamental measurements and spectrum analysis concepts that ensure your transmitting device is behaving as you’d expect, refer to the application note Spectrum Analysis Basics. It collects the fundamental knowledge you need to keep developing a great product. I hope my third installment of The Four Ws has provided some worthwhile information that you can use. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful, give it a like and, of course, feel free to share.

  Spectrum crowding is bad, but I think interference is worse

He’s guilty of some degree of hyperbole, but Lou Frenzel highlights a fundamental issue in his recent column Spectrum Apocalypse: The Coming Death of Wireless. We’re all aware of the limited nature of our shared spectrum resource, and Lou extrapolates to a future in which crowding renders wireless practically unworkable. Fortunately, there are many ways for RF engineers to stave off that dismal future, and Lou summarizes them: modulation schemes and other RF techniques, protocol enhancements, regulatory steps, and even RF alternatives such as optical.

However, in terms of the day-to-day issues that vex us, I’d argue that interference is more of a headache, and it’s getting worse. As Lou did, I’d like to take the opportunity to summarize the ways to deal with the problem.

In the early days of the vector signal analyzer, we delighted in our newfound ability to completely understand and measure transient signals, especially those that had interference potential because they ventured outside their channel. The VSA’s combination of signal capture+playback, frequency-domain triggering (with pre-trigger delay), and simultaneous multi-domain analysis let us see whatever we wanted, on any time scale. Colorful spectrogram displays made it all clear, as with this marine handheld radio.


A spectrogram display details the behavior of a handheld radio as the transmit button is pressed. The carrier and its sidebands gradually stabilize (top to bottom) over several hundred milliseconds.

The spectrogram above was generated by playing back a gap-free RF capture, and we used the little radio for demonstrations, marveling at how it wandered around in a leisurely but purposeful fashion on its way to its programmed frequency. Unfortunately for other users of the band, this wandering crossed a dozen other channels, creating clicks or pops and sometimes PLL unlocks in other receivers every time the transmit button was pressed.
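The processing behind such a display is conceptually simple: slice the gap-free record into short, overlapping segments and FFT each one. Here is a minimal Python/SciPy sketch, with a made-up capture of a drifting carrier standing in for the radio:

    import numpy as np
    from scipy import signal

    fs = 1e6  # hypothetical I/Q capture rate

    # Made-up capture: a carrier that starts 200 kHz high and settles
    # onto its final frequency over a few hundred milliseconds.
    t = np.arange(int(0.5 * fs)) / fs
    drift_hz = 200e3 * np.exp(-t / 0.1)
    iq = np.exp(2j * np.pi * np.cumsum(drift_hz) / fs)

    # Each short FFT becomes one line of the spectrogram.
    f, seg_t, sxx = signal.spectrogram(iq, fs=fs, nperseg=1024,
                                       return_onesided=False)
    sxx_db = 10 * np.log10(np.fft.fftshift(sxx, axes=0) + 1e-20)  # floor avoids log(0)
    # Rows are frequency and columns are time; plotted as an image,
    # this gives a display like the one above.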

Fortunately, this interference was an annoyance and not a serious problem, due to the nature of the signal traffic and the low geographic density of transmitters. As far as I can tell, this sort of thing, when it was understood at all, was often just tolerated back then.

These days, the spectrum is infinitely more crowded, and many signals are bursted or otherwise time-varying, so transient interference is a much bigger problem. As a wireless engineer—whether you’re a potential interferer or interferee—you need to reliably detect and accurately measure elusive signals.

As Lou did, I’d like to summarize the techniques, and suggest a sequence:

Assess what you know about the possible interference: If you know its amplitude or frequency or timing, you can proceed to signal capture through VSA software, perhaps with a frequency-specific trigger, negative trigger delay, or both, to catch the beginning of a transient.

Apply real-time spectrum analysis (RTSA): If you suspect interference but know little or nothing about it (or want to ensure you find it if it exists), it’s time for RTSA. This is a scalar-only spectrum measurement and does not provide timing, but will ensure that you spot signals even if you know nothing about them. If timing specifics are important, consider the time-qualified trigger.

Use playback or post-processing: Once the signal in question is in the capture memory, use playback or post-processing in the VSA. Deep capture is available with graphical tools to select only the portion you want for analysis, all relative to timing established by IF magnitude, frequency mask, or time qualification.

Explore the signal during playback: You can easily change the center frequency and span to focus on the frequencies of interest. You can change parameters freely, without the need to repeat the capture, as long as the signal was somewhere in the original capture bandwidth. Repeat as necessary.

Change the analysis and display types: This is the first step to fully understanding the interference. You may want to use time and frequency markers, band-power calculations, or even demodulation to identify the signal and its critical characteristics. Spectrograms and density or persistence displays may reveal important signal behavior—and its relationship to desired signals—at a glance.

All these steps can be performed with signal analyzers that have the processing power and vector architecture to do real-time and vector signal analysis.

No matter what tools and steps are involved, the goal is to know the signal. After all, if you can understand the problem and fix it, you can keep the wireless world running until that spectrum apocalypse hits.

  While some are timeless, others evolve

This post will be brief, partly a consequence of the wildfires that have affected the Santa Rosa headquarters of Keysight and modified all our schedules, at least a little. Without getting too metaphysical, the fires and their aftermath are a powerful reminder that things are always changing.

This is certainly true in technology and test equipment. The need for better measurements never abates, and I’d like to say a thing or two about our cooperative efforts to keep pushing forward in applications (yours) and measurements (ours).

I’ve been reminded of the changing nature of measurement fundamentals in the context of my previous post on the innovations of the vector signal analyzer and Keysight’s RF test engineering webcast series on measurement fundamentals.

While some things are timeless—such as resolution bandwidth and power addition—others begin as advanced concepts and gradually become mainstream and even fundamental. Examples include ACPR/ACLR and error vector magnitude (EVM). Many of us can remember the first time someone explained channel power integration or vector error quantities to us, and yet eventually these measurement concepts are taken for granted in the context of more complex figures of merit. How about cumulative ACLR for non-contiguous aggregated carriers?

A similar phenomenon is evident in the core signal analyzer architecture described in the previous post. Vector signal analyzers began as a new type of analyzer, with a different architecture to preserve calibrated vector signal information. Eventually, this advanced architecture became common in lab-grade signal analyzers, and vector signal analysis transitioned to be an embedded software application for the analyzer platform.

The old/new duality is evident in the RF test webcast series mentioned above. The next installment deals with fundamentals.


Dave Engelder’s webcast will cover fundamentals, informed by his considerable involvement in new-generation analyzer architectures.

It’s a little ironic that Dave, the presenter for the fundamentals webcast, has spent a great deal of his time in recent years on product definition and planning for the newest generation of signal analyzers and their advanced capabilities.

Some fundamentals are timeless and unchanging, and others eventually give way to superior techniques and technologies. I suspect that dealing with an evolving mix of the timeless and the temporary is our fate as long as we work in RF.