
As you walk through your lab, take a look at each RF bench. How old are your signal and network analyzers? How often are they kludged together to create one-off measurements? How recently have your engineers bugged you about getting new equipment that can actually test your latest RFIC?


I’m here to help you make a stronger case when your team’s success depends on timely access to better RF instruments. This post introduces language, concepts and solutions that will help you influence purchase decisions and improve your chances of getting the right tools at the right time. When you apply these ideas, your newfound business sense may surprise—if not impress—your boss or boss-squared.


Understanding your current reality

Day to day, you deal with competing objectives: delivering excellent results while staying within stringent constraints. From a high-level business perspective, there are three ways to do this: cut costs and deliver the same topline; hold costs steady and increase revenues; or invest more to create a giant leap in output. These days, most organizations operate within the first two scenarios while fast-growing companies chase the third.


Getting the right tools at the right time (and place)

Whichever situation you face, one of your biggest issues is likely to be test equipment. In fluent “manager speak,” “test assets” are often your organization’s most “underutilized assets.” Why? Because it’s difficult to confidently determine two crucial bits of information: the location of every instrument and how much each one is truly being utilized.


For you and your team, easy access to the right tools enables everyone to do their best work and stay on schedule. Applying manager-speak once more: for “technical staff,” “highly available” test equipment can be a “high-leverage asset.”


Pushing for better decisions in less time

An accurate view of location and utilization is essential to making credible decisions in less time: Do you need to purchase or rent additional equipment? Is it better to redeploy, upgrade, trade in or sell some of your existing gear?


A few basic changes can provide three big benefits: better visibility, improved utilization, and reduced expenses (capital and operating). The starting point is a solution that puts real-time information at your fingertips. Relevant information about test-asset location and utilization is essential to greater availability and improved productivity.


Taking the next steps

Being able to make quick, thoughtful decisions on how to best equip your engineers with the right tools is the foundation for a successful organization. To learn more, check out our latest resources to better understand how to drive down your total cost of ownership.


Please chime in with any and all comments. How difficult has it been to get the test tools your team needs? What techniques have you used to help make it happen?

Prove yourself as an engineer! The Schematic Challenge is the perfect opportunity to test your skills. On March 12, 13, and 14, we will be posting a new schematic or problem-solving challenge. If you, as a community, are able to answer questions 4, 5, and 6 correctly by Thursday, March 15 at 11:59 PM MST, we will add three 1000 X-Series oscilloscopes to the overall Wave 2018 giveaway! Answers should be posted in a comment on the #SchematicChallenge posts on the Keysight Bench or RF Facebook pages. Work with your family, friends, coworkers, or fellow engineers in the Wave community to solve these problems. If you haven’t already, be sure to register for Wave 2018 at

Question 4:

By Ryan Carlino


Status: SOLVED! (A=1 and B=2)


Week 2, Question 4 schematic (Schematic Challenge, Wave 2018)

Given this circuit and assuming an ideal op-amp powered by +/-5V and ideal resistors, calculate the output voltage with respect to the input. Vin will be limited to +/-1V.

Express this transfer function like this:
Vout = A*Vin + B

The answer you post should be a single number, AB. For example, if A=4 and B=7, the answer you should post is 47.


Question 5:

By Jonathan Falco and Lukas Mead


Status: SOLVED! (90 MHz)


To what integer frequency (in MHz) should the LO be set so that the RF input range can be seen at OUT?



Question 6:

By Barrett Poe


Status: SOLVED! (4-10-8-8)


You are asked to design the front end of a 10 MHz oscilloscope. The “front end” refers to the internals of an oscilloscope between the probe and the analog-to-digital converter (ADC). Your system requires you to take a +/-10 V input signal and output a 0 to 3.3 V signal to the ADC input, which is terminated in 50 ohms. Your circuit must scale, offset, and filter the incoming signal, then rescale it to the full range (within 10%) of the ADC’s reference voltage without clipping the sampled signal.


Oh no! You also just discovered your supplier has discontinued your favorite ideal operational amplifier (opamp). Your next two best choices are:

  • Opamp with 1 pF of capacitance on the inputs
  • Opamp with 10 pF of capacitance on the inputs

Make sure your design works with both of these back-up options. Note, however, that you will only ever use matching parts together: two 1 pF opamps or two 10 pF opamps, never one of each.

Also keep in mind – opamp output voltage cannot exceed the supply rails.

Output Voltage = 0 to 3.3V; Ensure Vout is +0%/-10% of ADC range for max input across bandwidth

Frequency = DC to 10 MHz


Assign a value to variables a, b, c, and d. The final answer to be posted on Facebook should be expressed as: a-b-c-d. For example, if a = 8, b = 6, c = 12, and d = 10, then the answer should be expressed as: 8-6-12-10.



The variable “a” is equal to one of these three options

  • c-1
  • 4
  • 4b

The variable “b” is equal to one of these three options

  • c+1
  • 4
  • 10

The variable “c” is equal to one of these three options

  • b/2
  • 6
  • 2*a

The variable “d” is equal to one of these three options

  • 8
  • (b+2)/3
  • 10

Helping You Achieve Greater Performance and Fast Measurement Speed

At an exhibition demo booth, an engineer complained to me about the measurement speed of a PXI oscilloscope. To make a measurement, he programmed the data acquisition and post-analysis himself, and it took him over a minute to get each result. I told him that he didn’t have to do all of that; all he needed to do was set up the measurement on the oscilloscope and fetch the measurement result directly. The process should only take a couple of microseconds. An on-board ASIC helps minimize data transfer volumes and speed up analysis!
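
Here is a minimal sketch (not the engineer's actual code) of the difference between the two approaches, using PyVISA and generic SCPI-style queries. The resource address and command names are assumptions (many PXI modules expose a driver API rather than SCPI), so treat it only as an illustration of fetching the instrument's own measurement result instead of the raw record.

    import pyvisa

    rm = pyvisa.ResourceManager()
    scope = rm.open_resource("PXI0::15::INSTR")      # hypothetical resource address

    # Slow path: transfer the full waveform record and post-process it on the PC.
    # raw = scope.query_binary_values(":WAVeform:DATA?", datatype="h")
    # result = my_own_analysis(raw)                  # user-written post-analysis

    # Fast path: let the instrument's measurement engine do the work and fetch
    # only the scalar result (command name assumed; check your instrument manual).
    vpp = float(scope.query(":MEASure:VPP?"))
    print(f"Peak-to-peak voltage: {vpp:.3f} V")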


Just as an oscilloscope has on-board digital signal processing, RF signal analysis tools also have on-board processing to accelerate measurement speed.


RF Measurement Challenges

For RF signal analysis, it's common to frequency-shift the RF signal to an intermediate frequency (IF) so that you can use a high-resolution digitizer for high-dynamic-range signal acquisition. The digitized data then gets sent to a PC for analysis. However, the complexity of this analysis increases with today's wireless communication systems, such as 5G technologies and the 802.11ax standard. Measuring these systems can involve complex modulation schemes (e.g., orthogonal frequency-division multiplexing, OFDM), carrier aggregation, or MIMO (multiple-input multiple-output) signals.


These complications require significant signal processing, which in turn slows the measurement. This is a challenge because measurement throughput is critical in most applications, especially high-volume production testing.

In most signal analyzers, a digitizer is an indispensable component. For wider bandwidth analysis, you need a high-speed digitizer to capture signals. At the heart of a high-speed digitizer is a powerful FPGA or ASIC that processes data in real time. This allows data reduction and storage to be carried out at the digital level, minimizing data transfer volumes and speeding up analysis.


A key feature often available on digitizers is real-time digital down conversion (DDC). In frequency-domain applications, DDC allows engineers to focus on a specific part of the signal at higher resolution and transfer only the data of interest to the controller/PC. It works directly on the ADC data, providing frequency translation and decimation, sometimes called "tune" and "zoom." The block diagram in Figure 1 illustrates the basic concept of DDC.



Figure 1. Digital down-converter block diagram


How DDC Works

The frequency translation (tune) stage of the DDC generates complex samples by multiplying the digitized stream of samples from the ADC with a digitized cosine (in-phase channel) and a digitized sine (quadrature channel).

The in-phase and quadrature signals can then be filtered to remove unwanted frequency components. Then, you can zoom in on the signal of interest and reduce the sampling rate (decimation).


Finally, the on-board processor sends only the data you care about (the I/Q data) to the on-board memory for further analysis. Most of Keysight's digitizers and signal analyzers implement DDC to accelerate measurement and demodulation.
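
To make the tune, filter, and zoom steps concrete, here is a minimal NumPy/SciPy sketch of a software DDC operating on a block of real ADC samples. It only illustrates the signal math; an on-board FPGA or ASIC performs the same steps on streaming data in hardware. The sample rates, tuning frequency, and decimation factor are made-up values.

    import numpy as np
    from scipy import signal

    def ddc(adc_samples, fs, f_tune, decimation):
        """Software model of a digital down-converter: tune, filter, zoom."""
        n = np.arange(len(adc_samples))
        # Tune: multiply the ADC samples by a digitized cosine (I channel) and
        # a digitized -sine (Q channel) to translate the band of interest to 0 Hz.
        i = adc_samples * np.cos(2 * np.pi * f_tune / fs * n)
        q = adc_samples * -np.sin(2 * np.pi * f_tune / fs * n)
        # Filter + zoom: low-pass filter each channel to remove unwanted
        # components, then reduce the sample rate (decimate applies an
        # anti-alias FIR filter before discarding samples).
        i = signal.decimate(i, decimation, ftype="fir")
        q = signal.decimate(q, decimation, ftype="fir")
        return i + 1j * q, fs / decimation

    # Example: a tone near a 70 MHz IF, sampled at 250 MSa/s, zoomed to 2.5 MSa/s I/Q.
    fs = 250e6
    t = np.arange(100_000) / fs
    adc = np.cos(2 * np.pi * 70.1e6 * t)             # stand-in for digitized IF data
    iq, fs_iq = ddc(adc, fs, f_tune=70e6, decimation=100)

    # The reduced-rate I/Q data can be analyzed directly, e.g., a narrowband
    # spectrum via FFT, or passed to demodulation routines.
    spectrum_db = 20 * np.log10(np.abs(np.fft.fftshift(np.fft.fft(iq))) + 1e-12)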


In addition, you can perform an FFT on the I/Q data in parallel for spectral analysis. Some signal analyzers can do real-time FFT processing (nearly 300,000 FFTs per second) and use comprehensive spectrum displays (density and spectrum) so that you won't lose any agile signals on the screen, as shown in Figure 2.


Figure 2. Real-time spectrum analysis at 2.4 GHz ISM (industrial, scientific and medical) band


Benefits and Limitations of a High-Speed Digitizer with DDC

Using a high-speed digitizer with DDC can make your RF testing significantly more efficient:

  1. The frequency translation (tune) reduces both on-board memory and data transfer requirements. The resulting data is in complex form (I+jQ), which can be used directly for demodulation analysis and accelerates measurement speed.
  2. Digital filtering and decimation (zoom) reduce the wideband integrated noise and improve the overall SNR (a short illustrative calculation follows this list).
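
As a rough illustration of item 2: for white noise and an ideal decimation filter, the SNR improvement (processing gain) is about 10*log10 of the decimation factor. The sample rates below are arbitrary examples.

    import math

    # Approximate processing gain from filtering + decimation, assuming the noise
    # is white across the original bandwidth and the anti-alias filter is ideal.
    fs_in = 250e6     # digitizer sample rate
    fs_out = 2.5e6    # I/Q rate after decimating by 100
    gain_db = 10 * math.log10(fs_in / fs_out)
    print(f"SNR improvement ~ {gain_db:.0f} dB")      # about 20 dB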


However, there are some limitations with DDC implementation:

  1. The ADC's sampling rate is limited. It's not possible to digitize the high-frequency carrier directly. A common workaround is to use an analog circuit to bring the carrier to an IF so the digitizer can acquire the signal.
  2. The ADC's dynamic range is also limited. In wireless communication systems, you may need to capture both large and small signals at the same time.


New generations of high-speed and high-resolution ADC technologies provide excellent resolution and dynamic range into the tens of GHz, which allows you to capture high-resolution wideband waveforms. DDC accelerates measurement speed and increases processing gain to improve performance.


In addition, the I/Q data can be processed further for advanced real-time signal analysis or sent to a customized FPGA for user-defined signal processing algorithms. These capabilities provide better RF measurement fidelity, signal integrity, and higher measurement throughput.


If you’d like to learn more about wideband signal acquisitions, you can refer to the following white paper Understanding the Differences Between Oscilloscopes and Digitizers for Wideband Signal Acquisitions to understand what you should be using for your application.

As any RF engineer knows, there comes a time in your product’s design cycle when you need to test your device to make sure it’s behaving as you expect. There are different ways to view your device’s signal, which brings us to why measuring signals in the time domain and the frequency domain is the same, but not: both domains convey the same signal, just in different ways.


Figure 1. The time domain of a signal on the left, and the frequency domain of the same signal on the right. The time domain displays a signal as amplitude vs. time, whereas the frequency domain displays amplitude vs. frequency.


A signal’s spectrum is a collection of sine waves; properly combined, they reproduce the time-domain view of your signal, which shows the signal’s amplitude versus time. This view is typically obtained with an oscilloscope. Why would you want to view your signal in the time domain, you ask? Basically, a time-domain graph shows how a signal changes with time, letting you see or visualize instances where the amplitude differs.


Viewing your device’s signal in the time domain doesn’t always provide all the information you need. For example, in the time domain you can tell that a signal of interest is not a pure sinusoid, but you won’t know why. This is where the frequency domain comes in. The frequency-domain display plots the amplitude versus the frequency of each sine wave in the spectrum. This may help you discern why your signal isn’t the pure sinusoid you were hoping for.
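
To illustrate with a made-up signal: a sine wave with a small third harmonic looks almost perfect on a time-domain trace, but the harmonic is obvious in the frequency domain. A minimal NumPy sketch (all values are arbitrary examples):

    import numpy as np

    fs = 1_000_000                      # sample rate, Hz
    t = np.arange(4096) / fs
    f0 = 10_000                         # fundamental, Hz
    # A sine wave with a -40 dB third harmonic: nearly invisible in the time domain.
    x = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)

    # Frequency-domain view: amplitude versus the frequency of each sine wave.
    window = np.hanning(len(x))
    spectrum = np.abs(np.fft.rfft(x * window))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mag_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)

    for f in (f0, 3 * f0):
        idx = np.argmin(np.abs(freqs - f))
        print(f"{f/1000:.0f} kHz component: {mag_db[idx]:.1f} dBc")
    # The 30 kHz harmonic shows up roughly 40 dB below the fundamental:
    # hard to see on a scope trace, obvious on a spectrum display.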


Figure 2. Harmonic distortion test of a transmitter, which is most appropriately measured using a spectrum analyzer in the frequency domain.


The frequency domain can help answer questions about your signal that you wouldn’t be able to resolve in the time domain. However, this doesn’t mean that you can just scrap measuring signals in the time domain altogether. The time domain is still better for many measurements, and some measurements are only possible in the time domain. Examples include pulse rise and fall times, overshoot, and ringing.


But just as the time domain has its advantages, so does the frequency domain. For one, the frequency domain is better for determining the harmonic content of a signal (as seen in Figure 2). So, those of you in wireless communications who need to measure spurious emissions are better off using the frequency domain. Yet another example is spectrum monitoring: government regulatory agencies allocate different frequencies to various services, and the spectrum is then monitored because it is critical that each of these services operate at its assigned frequency and stay within its allocated channel bandwidth.


While measuring signals in the time domain and frequency domain is similar, it is also very different. Each domain conveys the same signal, but from different perspectives. This enables us engineers to get more insight into how our device is behaving and ultimately develop better products for our customers.


To build a stronger foundation in signal analysis that will help you deliver your next breakthrough, check out the Spectrum Analysis Basics app note. Please post any comments - positive, constructive, or otherwise - and let me know what you think. If this post was useful give it a like and, of course, feel free to share.


Passing Along the Magic

Posted by benz Feb 20, 2018

  Demystifying technology, and marking five years of The RF Test Blog

For several years, I co-coached two middle school robotics teams. It was a great experience, and I learned at least as much as I taught—though generally about different subjects!

Some of the kids gravitated toward the robot mechanisms, while others found a natural focus on the programming side. I suppose that’s part of the intent of robotics clubs, mixing hardware and software to increase the chances of inspiring kids to pursue STEM studies and careers.

Ironically, our success with ever-more-complex technology may create some barriers to getting kids interested in it. During a club meeting one afternoon, I was vividly reminded of Arthur C. Clarke’s famous quote, “Any sufficiently advanced technology is indistinguishable from magic.” While the kids were working with robots and laptops, virtually all of them were carrying a magical device that was even more advanced: their mobile phone.

These thin slabs of metal, glass, and plastic, invisibly connected to the rest of the universe, could be expected to do just about anything when equipped with the right app. Seeing something so magical being taken so thoroughly for granted, I understood why some kids weren’t all that captivated by the robots.

That realization left me a bit troubled, and I wondered about other ways to get the kids engaged.

A partial answer came later in the semester. My co-coach had the brilliant idea of devoting one club session to the dismantling of technology. She brought in some older devices, working or not, including an early digital camera, a portable CD player, and a slider-type mobile phone. We gave the kids some small screwdrivers and turned them loose to get a glimpse behind the engineering curtain.

I was amazed at the spike in enthusiasm and engagement, especially from some kids who had previously been marginal participants. Once they reasoned out how to open the devices and free the contents, they then delighted in showing others how they thought the parts actually worked. They got an especially big kick out of the tiny motor and attached eccentric that vibrated the phone. It was the one recognizable part of the device that moved!

My take-away: if we want to pass along our interest in creating the magic of new technologies—and solving the attendant problems—we need to keep our eyes open to new approaches to communicate and share.

That’s what we were thinking five years ago when we started this blog. Since then, it has been a delight to learn about RF technologies and share the results with you. I very much appreciate your indulgence as I’ve wandered from Loose Nut Danger (the first post) to MIMO to the technology of furry hoods.

It’s now time to pass along the writing of this blog to a new generation, with their own perspectives, insights, and peculiar interests.


Meet the new primary writers of Keysight’s Better Measurements: The RF Test Blog, clockwise from upper left: Eric Hsu, Vandana Duff, Nick Ben, and Tit Bin Teo

Nick Ben has already written several guest posts here, and I think this blog will benefit from the new writers’ wider range of interests and experience. I look forward to following where they lead.

As for me, I plan to pursue my interests in a direction that looks more like retirement, with increased opportunities to learn and to teach, coach, and share.

  Fortunately, we can make things better—for your signal analyzer


Note from Ben Zarlingo: This guest post comes to us from Bill Scharf, a Keysight engineer with long experience in microwave signal analysis.


If you use even a gently aged spectrum analyzer, you may sometimes wonder why its amplitude accuracy above about 4 GHz is slightly worse than when it was new or after it has been freshly calibrated. Personally, I sometimes wonder why I cannot do the things I did when I was 20 years old.


In both cases aging is occurring. Although nothing is technically broken, we can make things better without magically locating a certain DeLorean car equipped with a flux capacitor and then driving 88 mph, hoping for a lightning strike, and traveling back in time.


What is a preselector, and why would it drift?

If we assume the instrument is a Keysight X-Series signal analyzer, what has probably happened is the preselector, sometimes called a YIG-tuned filter or tracking filter, has drifted a bit—but not enough to cause an out-of-specification situation. In an X-Series analyzer, the preselector is located in the signal path between the input attenuator and the first mixer, and it is used only at tuned frequencies of 3.6 GHz and higher.


The filter bandwidth should be wide enough to measure the desired signal, yet narrow enough to reject image frequencies and undesired signals (which may overload the first mixer). Depending on the tuned frequency, the bandwidth of the filter ranges from about 40 MHz to 75 MHz. Filter shape and ripple across the passband also vary with tuned frequency. As the analyzer tunes, the preselector filter tracks the change and provides a “centered” passband at the current frequency, as shown below.


Typical passband response of a YIG-tuned preselector

Instrument software automatically handles most of this preselector tuning; however, careful adjustment of the instrument will help deal with the rest.


Ensuring better performance

As the instrument ages, especially its preselector assembly, the filter bandpass will drift. As a result, the signal being measured might fall in an area of passband ripple or on a steeper portion of the filter response. Here are three tips to help ensure the best performance:

  •  For the absolute best amplitude accuracy, the Preselector Center function (accessible via the front panel or SCPI) uses internal calibration signals to vary the preselector filter tuning in real time and obtains the best possible tuning. Be forewarned that this routine is time-consuming. If you need the very best amplitude accuracy using the preselector, then re-center the preselector at each measurement frequency.
  • Every three to six months, apply the Characterize Preselector routine. This performs “preselector centering” at various pre-determined frequencies up to the maximum frequency range of your analyzer. The analyzer stores the tuning values and automatically uses them the next time the analyzer is tuned to those frequencies. One advantage: after this routine runs, you may not need to rely on the slower Preselector Center routine (above). No external equipment is required: simply press System, Alignments and Advanced then select Characterize Preselector.
  • Bypass the preselector filter. If your instrument contains option MPB, microwave preselector bypass, you can select the bypass path and remove the preselector from the signal path. The downside: the instrument is no longer filtering the input signals (i.e., it isn’t “preselected”). Depending on the span setting, you may see image frequencies that are not being rejected by the preselector and so appear at the first mixer. The advantage: the bandwidth is about 800 MHz at the first mixer, preselector drift is no longer an issue, and measurement speed may increase because the instrument is no longer trying to avoid oversweeping the preselector filter.

More detail is available in our preselector tuning application note.

Wrapping up

Three closing comments: The “Y” in the YIG-tuned filter, when inverted, is almost the same as the schematic symbol for the flux capacitor. If you are more than 20 years old, use a knee brace when running marathons, thereby avoiding future trips to the hospital. And those of you who have an X-Series analyzer can use the Characterize Preselector routine to optimize accuracy between periodic calibrations.

  Adapting to future circumstances instead of expecting to anticipate them

 There’s nothing inherently wrong with trying to predict the future, whether that of technology or any other area. It’s easy to understand why “trying” is an attractive pastime, but expecting consistent success is where engineers and others may run into trouble.

Instead, I suggest that engineers use their super-powers of creative adaptation.

My jaded attitude toward predictions comes from work I did a couple of decades ago, forecasting the future sales (0 to 18 months out) of about two dozen measurement products. I put my analytical skills to work with some modest success, but a little honest self-appraisal left me doubting that I’d added real value. Sometimes I was just lucky, and it was hard to take much satisfaction from that.

Some research into the general landscape of prognostication left me wondering if maybe the universe was actually hostile to the whole enterprise of predicting the future. Or if not actively hostile, then resistant in a passive and maddening way.

Back then, greater minds than mine had repeatedly come to grief in such prediction efforts, including a group of brilliant academics and bureaucrats in Japan. They were economists and mathematical modelers, and despite their dedication and diligence, they were no more successful than I was.

In this situation the obvious question was to ask what kind of approaches, if any, were effective in somehow handling the important unknowns the future held. If you accept that you can’t reliably predict the future, what can you do?

In short, you can adapt as the future arrives. To be more successful than others in your field, you can work to adapt faster and better than they do. In fact, you may be able to speed up the process by pre-adapting using techniques such as scenario planning. In scenario planning, multiple possible futures are considered, and steps are taken in advance to outline carefully considered responses to the ones judged most likely to occur.

Scenario planning is usually thought of as a large-scale strategic activity, but you may already be doing it with a narrower view. For example, your designs may be anticipating a clear price/performance trend in either digital signal processing or analog semiconductors such that your product will be ready for the new leading edge. Tactically, this may mean implementing a modular design that lets you drop in the new elements as soon as they’re available in quantity.

As much as I’ve been disappointed in our collective inability to accurately predict the future, I have been repeatedly impressed by the ability of designers to adapt as technologies and markets evolve. Take wireless networking as one example.


The original designers of Michigan Stadium anticipated that it would need to hold more than 100,000 people, and designed its footings accordingly. However, they had no conception of a future where the vast majority of the fans would be carrying wireless telephones and would expect mobile network or Wi-Fi access. (photo from Wikipedia)

The original Wi-Fi standard, 802.11, emerged more than 20 years ago, in 1997; the definitive 802.11b followed in 1999, and 3G telecom networks began appearing a few years later. In the years since, growth in all dimensions—users, connected devices, infrastructure, and data rates—has been enormous and continuous. It’s likely to continue at a similar pace for years to come.

While the original standards couldn’t handle these demands, creative engineers were—and still are—constantly working to adapt and expand. They continue to succeed beyond the expectations of many, including me.

They’re also making it clear that predicting the future is less important than creatively responding to shifting demands, expectations, and technologies. That multi-dimensional creativity has included OFDM, OFDMA, MIMO (including multi-user MIMO), beamforming, carrier aggregation, and manufacturing techniques that make microwave and millimeter devices practical and affordable.

Twenty years after my forecasting adventures, the underlying lesson—and my suspicion about nature’s hostility to prediction—remain the same: count on relentless change, and rely on your adaptability and creativity. It’s OK to burn a few mental cycles speculating about what’s over the horizon, but our real power lies in our ability to solve problems and optimize designs when the future becomes the present.

  In praise of the humble power sensor

It’s always nice to get a reality check from a fellow RF engineer, and Keysight’s Eric Breakenridge recently delivered one in the form of an explanation of the capabilities of modern RF power sensors.

I guess I’ve become a bit of a measurement snob, having spent many years working with vector signal analyzers (VSAs). When we developed and introduced these products 25 years ago, we were really enthusiastic about a new tool that would tell us virtually anything about the most complex RF signals.

Measurements with the VSAs weren’t just comprehensive; they also had unprecedented accuracy, including for RF power. They were significantly more precise than the spectrum analyzers of the time, especially on time-varying signals or those with complex modulation.

Looking back, however, I remember the RF engineers who were developing VSAs also had power meters and power sensors on their benches, and used them frequently. Those power meters and sensors were the benchmarks for our nascent VSAs, and the new analyzers would never have achieved such exceptional accuracy without them.

Power sensors—whether they’re connected to power meters or to PCs via USB or LAN—are relatively inexpensive and have advantages that ensure the ultimate in power accuracy. For one, you can attach the sensors directly to the DUT, eliminating cabling and adapters. Also, many sensors are designed for specific frequency ranges, letting them cover the frequencies in question with excellent impedance match and accuracy—and that accuracy is highly traceable.

The sensors, as I learned from Eric, can also make great measurements of power versus time. Here’s his example, a measurement of the time to switch the gain state of an amplifier.


Two measurements from the U2042XA X-Series power sensor show the time to switch the gain state of an amplifier. Power is shown in watts (top) and decibels (bottom). The default 10 percent and 90 percent reference points have been adjusted to better reflect the time for the gain to reach its final value.
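
For readers curious how such a switching-time number is derived from the trace, here is a minimal sketch of the reference-point calculation on a power-versus-time record. The synthetic step and the function are illustrations only, not the sensor's or the power meter application's actual API.

    import numpy as np

    def transition_time(t, p_watts, low_frac=0.10, high_frac=0.90):
        """Time for power to rise between two reference points of its total swing.

        t       : sample times in seconds
        p_watts : measured power in watts (linear units, not dBm)
        The default 10 %/90 % reference points can be adjusted, as in the
        figure above, to better reflect when the gain reaches its final value.
        """
        p_start, p_end = p_watts[0], p_watts[-1]
        low = p_start + low_frac * (p_end - p_start)
        high = p_start + high_frac * (p_end - p_start)
        t_low = t[np.argmax(p_watts >= low)]     # first sample at/above low level
        t_high = t[np.argmax(p_watts >= high)]   # first sample at/above high level
        return t_high - t_low

    # Example with a synthetic gain-switching step (exponential settling):
    t = np.linspace(0, 50e-6, 5001)                       # 50 us record
    p = 1e-3 + 9e-3 * (1 - np.exp(-t / 5e-6))             # ~0 dBm to ~10 dBm step
    print(f"10%-90% switching time: {transition_time(t, p) * 1e6:.1f} us")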

The USB and LAN power sensors can be connected to PCs and used with the power meter application in Keysight's BenchVue software. That application provides both graphical results and compiled tabular data such as this pulse analysis table.


When connected to an X-Series power sensor, the Power Meter Application assembles a series of measurements and creates a complete summary of pulse characteristics.

In addition to benchtop configurations, the LAN models (via power-over-Ethernet) are useful for remote monitoring, placing the sensor right at the DUT. Multiple sensors can be used with a single PC.

The power sensors have a wide dynamic range of –70 dBm to +26 dBm and sample intervals as short as 50 ns. While they can’t match the speed, sensitivity, or selectivity of signal analyzers, their performance is a good fit for many applications, and the combination of low cost and measurement accuracy can help you make better use of the more-expensive signal analyzers in your lab. A power sensor demonstration guide shows some example measurements and configurations.

I don’t suppose anything will dull my esteem for VSAs, but my recent exposure to power sensors and the sophisticated power tools in my previous post have made me a little less of a measurement elitist. Whatever gets the job done the best!

  Understand, anticipate, and respect your power limitations

Are IoT and other smart/connected devices the biggest wireless opportunity right now, or the biggest source of hype? I suppose they can be both at the same time, and it’s clear that lots of devices will be designed and sold before we know the true magnitude of this wireless segment.

A crucial element of many wireless devices is operation on battery power. In recent years, this has often meant lithium-ion batteries that are recharged once every day or two. These days, however, lots of design effort is transitioning to devices that use primary batteries, ranging from traditional alkaline cells to button cells and lithium coin cells. These power sources are expected to last months, if not a year or more, despite their small size.

Meeting these power demands will require careful engineering—both RF power and DC power—and a holistic approach, to give you confidence that you’ll get the needed combination of performance and real-world functionality. This is a field with lots of investment and competition, meaning you may not have a second chance to fix a design failure or a development delay.

Exceptional power efficiency doesn’t happen by accident, and it isn’t a result of some tuning or tweaking at the end of the design process. Instead, it starts when devices are being designed, and overall success stems from a sustained process of measuring and optimizing. Two aspects of test & measurement are worth special note in designing for very low power: 

  • Using a power source with realistic limitations
  • Precisely measuring power consumption in all modes of operation, and during transitions

When powering a device or circuit, using a benchtop power supply can actually hide problems from you. Primary cells, especially when they’re very gradually going flat, can be highly imperfect power sources, and their imperfections can change with aging and temperature. Some precision power sources are now available to emulate real-world cells.



Keysight’s B2961A/62A low-noise power sources can emulate the DC voltage/current output characteristics of many different power sources, providing insight into real-world behavior in limited power conditions.

These advanced power sources can give you early warning of DC power problems while there’s still time and flexibility to design around them. They can also emulate power sources such as solar cells, with their very non-battery characteristics.

As always, if you’re going to optimize something, you have to measure it. On the power measurement side, extended battery life may require the ability to measure small currents, and perhaps a form of power scheduling to avoid excessive demand from simultaneous digital and RF activities. Whether you’re using a real battery or an emulator, instruments such as a DC power analyzer can tell you how much power is being used, and just when it’s needed.
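
As a back-of-the-envelope illustration of why those small currents matter, the sketch below turns a synthetic logged current profile into an average current and a rough battery-life estimate. All of the numbers are hypothetical, not measurements from the instruments shown here.

    import numpy as np

    dt = 1e-3                              # logger sample interval, s
    sleep_uA, active_mA = 5.0, 20.0        # quiescent vs. active (e.g., transmit) current
    period_s, active_s = 10.0, 0.050       # wake up for 50 ms every 10 s

    t = np.arange(0, period_s, dt)
    i_amps = np.where(t < active_s, active_mA * 1e-3, sleep_uA * 1e-6)

    avg_ma = np.trapz(i_amps, dx=dt) / period_s * 1e3     # average current, mA
    battery_mah = 220.0                                   # nominal coin-cell capacity
    print(f"Average current: {avg_ma * 1000:.0f} uA")
    print(f"Estimated life:  {battery_mah / avg_ma / 24:.0f} days")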


On the Keysight N6705C, the dynamics of current consumption are shown over 30 ms in scope view (left) and over 30 seconds in data logger view (right). Measurements such as these provide a more complete understanding of the real-world power demands of a device or subsystem.

The use of periodic quiescent states is one proven technique for extended battery operation, and it presents its own measurement challenges. Extremely tiny currents must be assessed to understand cumulative consumption, and recent products, such as the Keysight CX3300A device current waveform analyzer, are meant for just that. These analyzers have both analog and digital inputs, and the ability to time-align measurements of both.

In this post I’ve drifted away from my usual focus on RF measurements but, of course, the core concern for us in these DC power issues is to ensure that RF matters are proceeding as they should, no matter the state of DC power. Fortunately, there are ways to use the new power analyzers to trigger RF signal analyzers and thus correlate DC power with RF power and modulation, and that’s a subject for a future post.

  Are you a good spectral citizen?


Note from Ben Zarlingo: This is the third in our series of guest posts by Nick Ben, a Keysight engineer. In this post he provides an overview of adjacent channel power, a measure of how well your products play with others.


In the previous edition of The Four Ws, I discussed the fundamentals of noise figure. This time I’m discussing the WHAT, WHY, WHEN and WHERE of adjacent channel power (ACP) measurements so that you can ensure your device is only transmitting within its assigned channel and doesn’t interfere with signals in undesignated adjacent channels.



What

A key requirement for every wireless transmitting device is that it should transmit only within its assigned channel. To verify this, adjacent channel power (ACP) measurements determine the average power, or interference, a transmitting device generates in the adjacent channels compared to the average power in its assigned channel. This ratio is known as the adjacent channel power ratio (ACPR). ACP measurements use a reference level of 0.00 dBm, and the goal is to have the ACP be as low as possible. A poor ACPR is an indication of spectral spreading or switching transients from the device under test (DUT), which are a big no-no.


A look at generated power in a transmitting device’s adjacent channels. Adjacent channels are located above and below the generated power in the transmitting device’s designated channel. The ratio of the two gives you your ACPR.


Why and When

ACP is key in ensuring we avoid interference with other signals in adjacent channels where your device has not been licensed (from a regulatory body or agency like the Federal Communications Commission) to transmit.


The measurement is made on digital traffic channels. However, it is especially important to make the ACP measurement when there may be more stringent requirements beyond regulatory licensing. For example, Bluetooth, LTE and W-CDMA have ACP as part of their physical layer requirements.


Where (& How)

When using a spectrum analyzer, the results of an ACP measurement are displayed as a bar graph or as spectrum data (or a combination), with data at specified offsets. They can also be displayed as a table that includes the actual power of the adjacent channels and your transmitting device’s channel in dBm. This is done in addition to the power relative to the carrier in dBc for both the upper and lower sidebands.


As described earlier, to determine ACPR you have to integrate the power in your assigned channel and the power in the adjacent channels, and find the ratios between the integrals.  Keysight’s ACP PowerSuite measurement simplifies this process to give you fast results without manual calculations. All you do is set the channel frequency, bandwidth, and channel offsets for your signal’s specifications. The ACP PowerSuite measurement on X-Series signal analyzers takes care of everything else.
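
For the curious, here is a minimal sketch of that integrate-and-ratio step, operating on exported spectrum trace data. The function names and the W-CDMA-like channel numbers are assumptions for illustration; the PowerSuite measurement handles the equivalent bookkeeping internally.

    import numpy as np

    def band_power_dbm(freqs_hz, trace_dbm, f_center, bandwidth):
        """Integrate the trace points falling inside one channel; return dBm."""
        mask = np.abs(freqs_hz - f_center) <= bandwidth / 2
        return 10 * np.log10(np.sum(10 ** (trace_dbm[mask] / 10)))

    def acpr_dbc(freqs_hz, trace_dbm, f_carrier, ch_bw, offset):
        """ACPR in dBc for the lower and upper adjacent channels."""
        main = band_power_dbm(freqs_hz, trace_dbm, f_carrier, ch_bw)
        lower = band_power_dbm(freqs_hz, trace_dbm, f_carrier - offset, ch_bw)
        upper = band_power_dbm(freqs_hz, trace_dbm, f_carrier + offset, ch_bw)
        return lower - main, upper - main

    # Hypothetical usage with trace data exported from a signal analyzer:
    # low_dbc, up_dbc = acpr_dbc(freqs, trace_dbm, f_carrier=1.95e9,
    #                            ch_bw=3.84e6, offset=5e6)   # W-CDMA-like numbers
    # (A real channel-power calculation also normalizes for the resolution bandwidth.)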


Keysight Technologies EXA Signal Analyzer displaying a transmitter output using the ACP measurement, which is one of nine power measurements in the X-Series PowerSuite. Ideally a good signal should not go outside the transmitted channel (purple) into adjacent channels (green). Channel power ratios are shown in the table below the spectrum/channel bar display.


Wrapping Up

If you’d like to learn more about making fundamental measurements and about spectrum analysis concepts that help ensure your transmitting device is behaving as you expect, refer to the application note Spectrum Analysis Basics. It collects the fundamental knowledge needed for your continued development of a great product. I hope my third installment of The Four Ws of X has provided some worthwhile information that you can use. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful, give it a like and, of course, feel free to share.

  Spectrum crowding is bad, but I think interference is worse

He’s guilty of some degree of hyperbole, but Lou Frenzel highlights a fundamental issue in his recent column Spectrum Apocalypse: The Coming Death of Wireless. We’re all aware of the limited nature of our shared spectrum resource, and Lou extrapolates to a future in which crowding renders wireless practically unworkable. Fortunately, there are many ways for RF engineers to stave off that dismal future, and Lou summarizes them: modulation schemes and other RF techniques, protocol enhancements, regulatory steps, and even RF alternatives such as optical.

However, in terms of the day-to-day issues that vex us, I’d argue that interference is more of a headache, and it’s getting worse. As Lou did, I’d like to take the opportunity to summarize the ways to deal with the problem.

In the early days of the vector signal analyzer, we delighted in our newfound ability to completely understand and measure transient signals, especially those that had interference potential because they ventured outside their channel. The VSA’s combination of signal capture+playback, frequency-domain triggering (with pre-trigger delay), and simultaneous multi-domain analysis let us see whatever we wanted, on any time scale. Colorful spectrogram displays made it all clear, as with this marine handheld radio.


A spectrogram display details the behavior of a handheld radio as the transmit button is pressed. The carrier and its sidebands gradually stabilize (top to bottom) over several hundred milliseconds.

The spectrogram above was generated by playing back a gap-free RF capture, and we used the little radio for demonstrations, marveling at how it wandered around in a leisurely but purposeful fashion on its way to its programmed frequency. Unfortunately for other users of the band, this wandering crossed a dozen other channels, creating clicks or pops and sometimes PLL unlocks in other receivers every time the transmit button was pressed.

Fortunately, this interference was an annoyance and not a serious problem, due to the nature of the signal traffic and the low geographic density of transmitters. As far as I can tell, this sort of thing, when it was understood at all, was often just tolerated back then.

These days, the spectrum is infinitely more crowded, and many signals are bursted or otherwise time-varying, so transient interference is a much bigger problem. As a wireless engineer—whether you’re a potential interferer or interferee—you need to reliably detect and accurately measure elusive signals.

As Lou did, I’d like to summarize the techniques, and suggest a sequence:

Assess what you know about the possible interference: If you know its amplitude or frequency or timing, you can proceed to signal capture through VSA software, perhaps with a frequency-specific trigger, negative trigger delay, or both, to catch the beginning of a transient.

Apply real-time spectrum analysis (RTSA): If you suspect interference but know little or nothing about it (or want to ensure you find it if it exists), it’s time for RTSA. This is a scalar-only spectrum measurement and does not provide timing, but will ensure that you spot signals even if you know nothing about them. If timing specifics are important, consider the time-qualified trigger.

Use playback or post-processing: Once the signal in question is in the capture memory, use playback or post-processing in the VSA. Deep capture is available with graphical tools to select only the portion you want for analysis, all relative to timing established by IF magnitude, frequency mask, or time qualification.

Explore the signal during playback: You can easily change the center frequency and span to focus on the frequencies of interest. You can change parameters freely, without the need to repeat the capture, as long as the signal was somewhere in the original capture bandwidth. Repeat as necessary.

Change the analysis and display types: This is the first step to fully understanding the interference. You may want to use time and frequency markers, band-power calculations, or even demodulation to identify the signal and its critical characteristics. Spectrograms and density or persistence displays may reveal important signal behavior—and its relationship to desired signals—at a glance.
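
As a sketch of the post-processing idea, the fragment below turns a gap-free I/Q capture into a spectrogram like the marine-radio example above. The file name, sample rate, and center frequency are placeholders; real recordings come from the analyzer or VSA software in their own formats.

    import numpy as np
    from scipy import signal
    import matplotlib.pyplot as plt

    fs = 10e6                                    # I/Q sample rate of the capture
    f_center = 156.8e6                           # tuned center frequency (placeholder)
    iq = np.fromfile("capture.iq", dtype=np.complex64)   # hypothetical recording

    # Spectrum versus time: short FFTs over overlapping segments of the capture.
    f, t, sxx = signal.spectrogram(iq, fs=fs, nperseg=1024, noverlap=768,
                                   return_onesided=False)
    f = np.fft.fftshift(f) + f_center
    sxx_db = 10 * np.log10(np.fft.fftshift(sxx, axes=0) + 1e-20)

    plt.pcolormesh(t, f / 1e6, sxx_db, shading="auto")
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (MHz)")
    plt.title("Spectrogram of a captured transient")
    plt.show()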

All these steps can be performed with signal analyzers that have the processing power and vector architecture to do real-time and vector signal analysis.

No matter what tools and steps are involved, the goal is to know the signal. After all, if you can understand the problem and fix it, you can keep the wireless world running until that spectrum apocalypse hits.

  While some are timeless, others evolve

This post will be brief, partly a consequence of the wildfires that have affected the Santa Rosa headquarters of Keysight and modified all our schedules, at least a little. Without getting too metaphysical, the fires and their aftermath are a powerful reminder that things are always changing.

This is certainly true in technology and test equipment. The need for better measurements never abates, and I’d like to say a thing or two about our cooperative efforts to keep pushing forward in applications (yours) and measurements (ours).

I’ve been reminded of the changing nature of measurement fundamentals in the context of my previous post on the innovations of the vector signal analyzer and Keysight’s RF test engineering webcast series on measurement fundamentals.

While some things are timeless—such as resolution bandwidth and power addition—others begin as advanced concepts and gradually become mainstream and even fundamental. Examples include ACPR/ACLR and error vector magnitude (EVM). Many of us can remember the first time someone explained channel power integration or vector error quantities to us, and yet eventually these measurement concepts are taken for granted in the context of more complex figures of merit. How about cumulative ACLR for non-contiguous aggregated carriers?

A similar phenomenon is evident in the core signal analyzer architecture described in the previous post. Vector signal analyzers began as a new type of analyzer, with a different architecture to preserve calibrated vector signal information. Eventually, this advanced architecture became common in lab-grade signal analyzers, and vector signal analysis transitioned to be an embedded software application for the analyzer platform.

The old/new duality is evident in the RF test webcast series mentioned above. The next installment deals with fundamentals.


Dave Engelder’s webcast will cover fundamentals, informed by his considerable involvement in new-generation analyzer architectures.

It’s a little ironic that Dave, the presenter for the fundamentals webcast, has spent a great deal of his time in recent years on product definition and planning for the newest generation of signal analyzers and their advanced capabilities.

Some fundamentals are timeless and unchanging, and others eventually give way to superior techniques and technologies. I suspect that dealing with an evolving mix of the timeless and the temporary is our fate as long as we work in RF.

  A new RF tool leveraged two evolving technologies

These days, vector signal analyzers (VSAs) are in broad use, especially in wireless, aerospace, and defense applications. They’re essential for designing the complex modulated and time-varying signals that have become ubiquitous.

However, VSAs haven’t been around nearly as long as spectrum or network analyzers, and I can remember the process—including an informal contest—that yielded the name “vector signal analyzer.” This month marks the 25th anniversary of the first VSA, and I’d like to take a brief look back.

Lots of technical forces were coming together as the 1980s came to a close, both in terms of signals and the equipment to test them. Mobile phones were entering a period of what would become explosive growth, and the transition from analog (1G) to digital modulation (2G) was the way to handle the increase in traffic.

In test equipment, signal processing and digitizers were improving rapidly, and some low-frequency signal analyzers had switched from analog filters to digital ones in their intermediate frequency (IF) stages. Technology and demand were both in place.

These forces converged in the single test and measurement division that had a deep background in both swept and FFT-based products. HP’s Lake Stevens Instrument Division had already produced the first low-frequency swept network and spectrum analyzers (e.g., up to 200 MHz) with digital IF sections. That put the division in a unique position to combine the classic superheterodyne architecture with high performance ADCs and DSP.

Resolution bandwidth filters could be all digital, with better speed, accuracy, and selectivity. Virtually any resolution bandwidth could be produced, from sub-hertz to several megahertz. Perhaps most significant, the entire signal chain could preserve signal phase and therefore vector content.

Processing a signal’s complete vector information was important for obvious and less-obvious reasons. Vector processing allows for accurate, selective analog demodulation, fully separating amplitude from phase or frequency modulation. It also provides complete pulse analysis, and the potential for digital demodulation.

A key decision in this area, driven by a need for accurate pulse analysis, was to perform continuous time-domain vector calibration across the analyzer signal chain. This improvement on frequency-domain calibration was later to be essential to precise digital demodulation of all kinds in the VSA.

Over a span of several years, all of this evolving and improving technology was subjected to extensive discussions and trials with potential customers. Their feedback was crucial to the definition and implementation of the first VSA and, in many ways, they taught us what a VSA should really be. Many changes and refinements were made along the way, and in October 1992 we introduced the first RF vector signal analyzer, the HP 89440A.


This image from the HP test and measurement catalog shows the first RF vector signal analyzer, the 89440A. The bottom section contains the RF receiver and companion source. The two-channel top section was also available as the (baseband) 89410A VSA.

Vector Signal Analyzer—I suppose the name seems obvious in retrospect, but it wasn’t so clear at the time. We were aware that it was a new type of analyzer, one we expected would be an enduring category, and one we wanted to get right. I can’t recall the other candidate names, but remember voting for VSA. After all, it was a signal analyzer—not just a spectrum analyzer—that provided vector results.

Wireless and aerospace/defense test engineers quickly grasped the possibilities. Shortly after introduction, we took the analyzer to its first trade show: engineers were lining up for demos. The signal views and insights provided by the frequency+time+modulation combination were compelling, and we were able to show waterfall and spectrogram displays, along with complete signal capture and playback.

Within the year we added digital demodulation, and the wireless revolution picked up steam. VSAs helped enable the new transmission schemes, from CDMA to high-order QAM, multi-carrier signals, OFDM, and MIMO. Software enhancements allowed the VSA to track the emerging technologies and standards, giving engineers a reliable test solution early in the design process.

Though the name and measurements would continue, the VSA as a separate analyzer type gradually yielded to newer “signal analyzers” with digital vector processing. These analyzers started with swept scalar spectrum analysis, and VSA capability became an option to the base hardware.

Now available both as standalone software and embedded in signal analyzers, Keysight’s 89600 VSA software continues the tradition of supporting the leading edge of wireless technology. The latest example: a new VSA software release supports pre-5G modulation analysis, and will evolve along with the standard.

It’s been a busy quarter century for all of us, and I expect VSAs will be just as useful for the next one.

  Even great minds fail sometimes, but we can revise our mental models

Though this blog is created by a company that makes hardware and software, the core of our work—and yours—is problem solving and optimization. In the past, I’ve said that success in this work demands creativity, judgement, and intuition. I’ve also written (several times) about my fascination with failures of intuition and ways we might understand and correct such failures.

One way to look at intuition in engineering is to see it as the result of mental models of a phenomenon or situation. Failures, then, can be corrected by finding the errors in these mental models.

We learn from those who go before us, and a great example of a failure (and its correction) by a famous engineer/scientist prompted me to correct one of my own mental models. It was an error many of us share, and though I realized it was wrong a long time ago, it’s only now that I can clearly explain the defect.

The trailblazer is the famous rocket pioneer Robert Goddard. He was also a pioneer in our own field, more than a century ago inventing and patenting one of the first practical vacuum tubes for amplifying signals. If he hadn’t focused most of his energy on rockets, Goddard would probably be as famous in electronics as his contemporary Lee de Forest (but that’s a story for another day).

Goddard knew that stability would be a problem for his rockets, and he had no directional control system for them. To compensate, he used a cumbersome design that placed the combustion chamber and exhaust of his first liquid-fuel rocket at the top, with the heavy fuel and oxidizer tanks at the bottom. Having a rocket engine pointed at its own fuel is clearly inviting problems, but Goddard thought it was worth it.

He was mistaken, falling for what would later be described as the pendulum rocket fallacy. He was a brilliant engineer and experimenter, however, and corrected this erroneous mental model in rockets built just a couple of months later.

My own error in this area involved helicopters and their stability. Decades ago, I was surprised to learn that they are inherently very unstable. A friend—an engineer who flew model ’copters—gave a simplified explanation: The main mass of the helicopter may hang from the rotor disk, but when that disk tilts, its thrust vector tilts in the same direction. The lateral component of that vector causes more tilt and, unfortunately, also reduces the lift component. The process quickly runs away, and the helicopter seems to want to dive to the ground.

It’s similar to an inverted pendulum, and just the opposite of what my intuition would have predicted. It explains why pilots of non-stabilized helicopters describe the experience as balancing on the top of a pole: they must sense small accelerations and correct before motions build up.

While the explanation corrected and improved my mental model, my intuition was woefully unable to handle the claim that the aircraft below was relatively stable and easy to fly.


The Hiller “Pawnee” flying platform of the 1950s used ducted fans beneath the engine and payload/pilot. Despite its appearance, it is easier to fly than a helicopter. (public domain image via Wikipedia)

This aircraft certainly does look like an inverted pendulum, though that perception is actually another fallacious mental model. 

One explanation comes from the powerful engineering analysis technique of imagining small increments of time and motion. If the flying platform tilts, the thrust vector of the fans tilts in the same direction, just as with the helicopter. However, in this instance the thrust is applied below the center of mass rather than above. The tendency to cause a tilt is applied in the opposite direction and is therefore not self-reinforcing.

I don’t believe the flying platform arrangement is inherently stable, but it is much less unstable than a helicopter. I once flew a flying platform simulator in a museum, and it was indeed straightforward to control.

So, let us acknowledge the power of our engineering intuition and salute our mental models as immensely useful guides. But let’s also remain vigilant in remembering that even the best engineers sometimes get things wrong, and it’s essential to be willing to make corrections.

  Avoiding extra functionality in your circuits

A common saying among electrical engineers is that if you want an oscillator, try building an amplifier. This is all too often accurate, and unwanted oscillations in amplifiers can be a problem for designs at almost any frequency, from very low to high. Of course, if you design an oscillator it probably won’t, at least not at first, but that’s a topic for another day.

Although not part of the saying, it’s important to realize that if you do get an unintentional oscillator, it’s likely to be a terrible one, both unstable and unpredictable. That has consequences for RF and baseband measurements, and we’ll get to that in a bit.

My first taste of these parasitic oscillations outside of a college lab was working with a manufacturing engineer to troubleshoot a 13 MHz function generator. Some samples of the generators were found to be oscillating at frequencies over 80 MHz. The oscillations were found almost by accident, because none of us expected significant output at over six times the generator’s highest frequency!

In today’s wireless environment, unintended oscillations can be a potent source of interference. They’re also more consequential because the spectrum is so crowded, and vital or high-profile services are more likely to be affected. And, as with that function generator, undesirable output signals may be overlooked for a while because they are so far outside of the normal operating frequency range.

Years after the function generator adventure, I heard from a Keysight (then Agilent) systems engineer about a much more serious case of parasitic oscillations. Engineers in the Moss Landing area on Monterey Bay had been using an HP/Agilent signal analyzer, a handheld radio, and a directional antenna to track down signals that were intermittently but persistently jamming GPS signals. The jamming extended well outside the harbor entrance, so it was a clear hazard to navigation.

The interference was maddeningly inconsistent and the directional antenna was of limited help due to strong reflections. As described in an article by the investigators, they eventually tracked the problem to parasitic oscillations in an active TV antenna on a boat at the marina.

Surprisingly, eliminating the first interferer did not completely fix the problem, and instead revealed two more accidental jammers. At least two of the three used the same amplifier board, where a design change had provoked the oscillations.

The instability that made these jammers so hard to find provides some lessons for RF engineers. The principal one is that the oscillations may not be there when you happen to be looking for them, so to ensure their absence you’ll need to explore a wider-than-usual range of frequencies and operating conditions.

Though self-exciting, the oscillators at Moss Landing were sensitive to a variety of factors, some predictable and others capricious: temperature; power supply voltage and its fluctuations; fluorescent lights; antenna configuration; building wiring; and nearby metal objects. Even the motion of the hand of a researcher 10 feet away could alter the frequency by several megahertz. Tellingly, the handheld radio could demodulate the fluctuating output to reveal the distinctive sound of a bilge pump!

It’s clearly essential to test your designs with real-world variations, and these days you have the added challenge that desirable signals and interference are both time-varying. Fortunately, you can take advantage of the signal processing and display capabilities available in signal analyzers to catch virtually everything. For example, spectrograms and real-time spectrum processing can digest and display vast amounts of data, highlighting even the briefest or most agile signals.


Intermittent, low duty cycle interference in satellite channels is clearly revealed in a real-time spectrum display (top), and the time-varying nature of the interference is shown by the spectrogram display (bottom).

Signal analyzers such as Keysight’s X-Series can also be equipped with VSA software for in-depth analysis and demodulation, and gap-free signal recordings can be made to allow flexible post-processing of transient events.

Of course, amplifiers aren’t the only source of spurious and troublesome oscillations. I once built a FET speed control for a remote-controlled car, and its interference disabled the car’s own radio. The switching frequency was only 1 kHz, but during each brief transition, the FETs oscillated wildly, with powerful harmonics reaching nearly 1 GHz. If only my intent had been to produce an unstable, high-power comb generator!