
The Four Ws of Noise Figure

Posted by benz May 30, 2017

If you're going to optimize it, you have to quantify it

 

Note from Ben Zarlingo: This is the second in our series of guest posts by Nick Ben, a Keysight engineer. Here he turns his attention to the fundamentals of noise figure.

 

In the previous edition of The Four Ws, I discussed the fundamentals of spurious emissions. This time I'm discussing the WHAT, WHY, WHEN, and WHERE of characterizing your system's noise figure (NF) so you can process low-level signals and improve your product designs.

What

Noise figure, the decibel expression of noise factor, can be defined as the degradation of (or decrease in) the signal-to-noise ratio (SNR) as a signal passes through a network. In our case the "network" is a spectrum or signal analyzer (SA).

Basically, a low noise figure means the network adds very little noise (good) and a high noise figure means it adds a lot of noise (bad). The concept applies only to networks that process signals and have at least one input and one output port.

Figure 1, below, provides the fundamental expression for noise figure.

Equation describing noise figure as a power ratio, comparing signal/noise at input and output of two-port device

Figure 1. Noise figure is the ratio of the respective signal-to-noise power ratios at the input and output when the input source temperature is 290 K.

Additionally, noise figure is usually expressed in decibels:

             NF (in dB) = 10 log (F) = 10 log (Si/Ni) – 10 log (So/No)
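To make that arithmetic concrete, here's a minimal Python sketch; the function and the example values are mine, for illustration, not from a Keysight app note. It computes noise figure from the signal and noise powers at a two-port's input and output:

```python
import math

def noise_figure_db(si, ni, so, no):
    """Noise factor F = (Si/Ni) / (So/No); noise figure NF = 10*log10(F).
    All four powers must be in the same linear units (e.g., mW)."""
    f = (si / ni) / (so / no)
    return 10 * math.log10(f)

# A stage whose input SNR is 40 dB and whose output SNR is 30 dB:
print(noise_figure_db(si=1.0, ni=1e-4, so=100.0, no=0.1))  # -> 10.0 dB
```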

Why and When

Noise figure is a key system parameter when handling small signals, and it lets us make comparisons by quantifying the added noise. Knowing the noise figure, we can calculate a system's sensitivity from its bandwidth.
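That sensitivity calculation is a simple sum in decibels. Here's a minimal sketch, assuming the usual kTB noise floor at 290 K; the function name and example values are mine, not from the post:

```python
import math

def sensitivity_dbm(nf_db, bandwidth_hz, required_snr_db=0.0):
    """Minimum detectable signal: the kTB noise floor at 290 K
    (-174 dBm/Hz) plus bandwidth, noise figure, and any SNR the
    detector needs on top of the noise."""
    ktb_dbm_per_hz = -174.0  # 10*log10(k * 290 K / 1 mW)
    return ktb_dbm_per_hz + 10 * math.log10(bandwidth_hz) + nf_db + required_snr_db

# A receiver with a 10 dB noise figure and a 1 MHz bandwidth:
print(sensitivity_dbm(nf_db=10, bandwidth_hz=1e6))  # -> -104.0 dBm
```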

 It’s important to remember that a system’s noise figure is separate and distinct from its gain. Once noise is added to the signal, subsequent gain stages amplify signal and noise by the same amount and this does not change the SNR.

Figure 2a, below, shows the input to an amplifier, with a signal peak 40 dB above the noise floor; Figure 2b shows the resulting output signal. The gain stage has boosted signal and noise levels by 20 dB and added noise of its own, so the peak of the output signal is now only 30 dB above the noise floor. Because the SNR has degraded by 10 dB, the amplifier has a 10 dB noise figure.

Relative signal and noise levels compared at the input and output of an amplifier. The noise level increases more than the signal level, due to the noise added by the amplifier.

Figure 2. Example of a signal at an amplifier's input (a) and its output (b). Note that the noise level rises more than the signal level due to noise added by the amplifier circuits. This change in SNR is the amplifier's noise figure.

Where (& How)

The open question: where are the system noise sources that affect noise figure? Most noise consists of spontaneous fluctuations caused by ordinary phenomena in electrical equipment, and this noise is generally flat. These fluctuations are what we measure to characterize noise figure, and they fit into two main categories: thermal noise and shot noise.
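For the thermal (Johnson-Nyquist) component, the available noise power is kTB, which is where the familiar -174 dBm/Hz floor used above comes from. A quick sketch, mine rather than the post's, for illustration:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(bandwidth_hz, temp_k=290.0):
    """Available thermal noise power kTB, expressed in dBm."""
    p_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3)

print(thermal_noise_dbm(1.0))  # ~ -174 dBm in a 1 Hz bandwidth
print(thermal_noise_dbm(1e6))  # ~ -114 dBm in a 1 MHz bandwidth
```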

One more note: it's important to remember that some of the power measured in a noise figure measurement may be interference rather than noise. It's critical to guard against this by performing measurements in shielded rooms, so we're seeing only the spontaneous noise we want to measure.

 Wrapping Up

If you’d like to learn more about characterizing noise figure and improving your product designs, recent application notes titled Three Hints for Better Noise Figure Measurements and Noise and Noise Figure: Improving and Simplifying Measurements include great explanations and how-to techniques as well as pointers to additional resources. You’ll find both in the growing collection on our signal analysis fundamentals page.

 I hope my second installment of The Four Ws of X provided some information you can use. Please post any comments – positive, constructive, or otherwise – and let me know what you think. If this post was useful, please give it a like and, of course, feel free to share.

  Handling the frequency and bandwidth challenges of 5G, radar, and more

5G technologies and markets are much in the news these days, and for good reasons. The economic potential is large, the opportunities are relatively near-term, and the technological challenges are just the kind of thing RF engineers can get excited about. Whether your focus is on design or test, there is plenty of difficult work ahead in the pursuit of practical ways to fully utilize the potential capacity of the centimeter and millimeter bands.

Unfortunately, much of the analysis and commentary focuses on economic factors and broad-brush coverage of technological challenges. A good overview of a complex subject is essential for resource planning, but it isn’t deep enough for us to see the specific measurement challenges and how we might handle them.

Some measurement experts have a “just right” combination of broad technical knowledge and specific measurement insight to make a contribution here, and I can heartily recommend Keysight’s Pete Cain. He has not only the expertise but also an impressive ability to explain the technical factors and tradeoffs.

Pete recently produced a webcast on millimeter-wave challenges, and it’s a good fit for the needs of the RF/microwave engineer or technical manager who will be dealing with these extreme frequencies and bandwidths. It’s available on-demand now, and I wanted to share a few highlights.

His presentation begins with a discussion of general technology drivers such as the high value of lower-frequency spectrum and the public benefit of shifting existing traffic to higher frequencies to free it up whenever possible. That’s an important issue, and perhaps a matter of future regulation to avoid a tragedy of the commons.

Next, Pete explains the increased noise that goes along with the wider bandwidths and higher data rates of the microwave and millimeter bands. This noise reduces SNR and eventually blunts channel-capacity gains, as shown here.

Comparing the maximum spectral efficiency of different channel bandwidths. The S-shaped curve shows how spectral capacity and channel efficiency returns diminish as bandwidths get very wide.

The wide bandwidths available at millimeter-wave frequencies promise dramatic gains in channel capacity. Unfortunately, these bandwidths gather up more noise, and that limits real-world capacity and spectral efficiency.
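To see why the curve flattens, consider the Shannon capacity C = B·log2(1 + S/(N0·B)): with fixed received power, noise power grows with bandwidth, so SNR and spectral efficiency fall as B widens. The sketch below illustrates this with made-up power and noise-density values, not numbers from Pete's webcast:

```python
import math

def capacity_bps(bandwidth_hz, signal_w, n0_w_per_hz):
    """Shannon capacity C = B * log2(1 + S / (N0 * B)).
    Noise power N0*B grows with bandwidth, so SNR falls as B widens."""
    snr = signal_w / (n0_w_per_hz * bandwidth_hz)
    return bandwidth_hz * math.log2(1 + snr)

s, n0 = 1e-9, 4e-21  # 1 nW received power over a roughly kT0 noise density
for b in (1e8, 1e9, 1e10, 1e11):  # 100 MHz up to 100 GHz
    eff = capacity_bps(b, s, n0) / b
    print(f"{b:.0e} Hz: {capacity_bps(b, s, n0)/1e9:6.1f} Gb/s, {eff:.2f} b/s/Hz")
# Capacity saturates near S/(N0*ln 2), about 360 Gb/s here, however wide B gets
```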

As shown in the diagram, Pete also discusses spectral efficiency and shows where existing services operate. This is where RF engineers have already turned theory into practical reality, and it maps the landscape of tradeoffs they'll optimize as millimeter-wave technologies become widespread.

To further inspire the technically inclined, Pete dives deeper into the essentials of high-frequency testing, including the issues of loss and frequency response at microwave and millimeter-wave frequencies. As is often the case, high quality measurements require a combination of hardware, software, and careful measurement technique. In particular, he describes the value of establishing source and analyzer calibration planes right at the DUT, thereby minimizing measurement error.

Diagram shows measurement of a millimeter-wave DUT where the calibration planes of the input and output have been moved to the edges of the DUT itself, for better accuracy.

To optimize accuracy, it’s necessary to move the calibration planes of measurements from the instrument front panels to the DUT signal ports. Software such as the K3101A Signal Optimizer can make this much easier.

Moving the calibration planes to the DUT ports grows more important as frequencies increase. Loss is an issue, of course, but in many cases the thorniest problems are frequency-response effects such as ripple and non-repeatability. Ripple is especially troublesome for very-wideband signals, while repeatability can be compromised by sensitivity to cable movement and routing as well as connector torque and wear.

In the webcast, Pete also compares signal-connection methods, including coax, waveguide, probes, and antennas.

That’s just a quick overview of an enlightening presentation. To see the whole thing, check out the “Millimeter wave Challenges” on-demand webcast—and good luck in the land of very short waves.

  As technologists, we need ways to tell the difference

Over the past few months I’ve been hearing more about a propulsion technology called an EM Drive or EMDrive or, more descriptively, a “resonant cavity thruster.” As a technology that uses high-power microwaves, this sort of thing should be right in line with our expertise. However, that doesn’t seem to be the case because the potential validity of this technique may be more in the domain of physicists—or mystics!

Before powering ahead, let me state my premise: What interests me most is how one might approach claims such as this, especially when a conclusion does not seem clear. In this case, our knowledge of microwave energy and associated phenomena does not seem to be much help, so we’ll have to look to other guides.

First, let's consider the EM drive. Briefly, it consists of an enclosed conductive cavity in the form of a truncated cone (i.e., a frustum). Microwave energy is fed into the cavity, and some claim a net thrust is produced. It's only a very small amount of thrust, but it's claimed to be produced without a reaction mass. This is very different from established technology such as ion thrusters, which use electric energy to accelerate particles. The diagram below shows the basics.

Diagram of EM drive, showing mechanical configuration, magnetron input to the chamber, and supposed forces that result in a net thrust

This general arrangement of an EM drive mechanism indicates net radiation force and the resulting thrust from the action of microwaves in an enclosed, conductive truncated cone. (image from Wikipedia)

The diagram is clear enough, plus it’s the first time I’ve had the chance to use the word “frustum.” Unfortunately, one thing the diagram and associated explanations seem to lack is a model—quantitative or qualitative, scientific or engineering—that clearly explains how this technology actually works. Some propose the action of “quantum vacuum virtual particles” as an explanation, but that seems pretty hand-wavy to me.

Plenty of arguments, pro and con, are articulated online, and I won’t go into them here. Physicists and experimentalists far smarter than me weigh in, and they are not all in agreement. For example, a paper from experimenters at NASA’s Johnson Space Center has been peer-reviewed and published. Dig in if you’re interested, and make up your own mind.

I’m among those who, after reading about the EM drive, immediately thought “extraordinary claims require extraordinary evidence.” (Carl Sagan made that dictum famous, but I was delighted to learn that it dates back 200 years to our old friend Laplace.) While it may work better as a guideline than a rigid belief, it’s an excellent starting point when drama is high. The evidence in this case is hardly extraordinary, with a measured thrust of only about a micronewton per watt. It’s devilishly hard to reduce experimental uncertainties enough to reliably measure something that small.

I’m also not the first to suspect that this runs afoul of Newton’s third law and the conservation of momentum. A good way to evaluate an astonishing claim is to test it against fundamental principles such as this, and a reaction mass is conspicuously absent. Those who used this fundamentalist approach to question faster-than-light neutrinos were vindicated in good time.

It’s tempting to dismiss the whole thing, but there is still that NASA paper, and the laws of another prominent scientific thinker, Arthur C. Clarke. I’ve previously quoted his third law: “Any sufficiently advanced technology is indistinguishable from magic.” One could certainly claim that this microwave thruster is just technology more advanced than I can understand. Maybe.

Perhaps Clarke's first law is more relevant, and more sobering: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

I’m neither distinguished nor a scientist, but I am a technologist, and perhaps a bit long in the tooth. I should follow Clarke’s excellent example and maintain a mind that is both open and skeptical.