
The objective of calibration is to remove the largest contributor to measurement uncertainty: systematic errors. As you start working at mmWave frequencies, the objective is unchanged, but the process for achieving that calibration is quite different.

 

The mmWave frequency band, spanning 30 to 300 GHz, is enabling technologies such as 5G and radar. But as we move into these higher frequencies, wavelengths become smaller and margins for error become tighter. The opportunity at mmWave frequencies is substantial, but you can’t forget to account for the unique measurement challenges that come with this band. Properly calibrating your measurement setup is critical if you want accurate and repeatable measurements.

Figure 1: Network analyzer with a test set controller and frequency extenders

 

Your Measurement is Only as Good as Your Calibration

If you regularly work with a VNA, you’re probably familiar with the necessity of calibration. VNAs are designed to be incredibly precise measurement tools, but without proper calibration you’re leaving that precision on the table. To maximize the precision of your VNA, you need to calibrate it using a mathematical technique called vector error correction. This type of calibration accounts for measurement errors in the VNA itself, plus all the test cables, adapters, fixtures, and probes hooked up between your analyzer and the DUT. But calibration at mmWave isn’t this simple.
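
To make the idea concrete, here’s a minimal sketch of the math behind a one-port vector error correction, assuming the classic three-term error model (directivity, source match, and reflection tracking). The function names and numeric values below are illustrative, not from any instrument API:

```python
import numpy as np

def solve_error_terms(gamma_actual, gamma_measured):
    """Solve the one-port three-term error model for directivity (e00),
    source match (e11), and the determinant term (delta_e), given three
    standards with known and measured reflection coefficients."""
    # Rearranging gm = (e00 - delta_e*ga) / (1 - e11*ga) gives the
    # linear system gm = e00 + (ga*gm)*e11 - ga*delta_e.
    A = np.array([[1.0, ga * gm, -ga]
                  for ga, gm in zip(gamma_actual, gamma_measured)])
    return np.linalg.solve(A, np.array(gamma_measured))

def correct(gm_dut, e00, e11, delta_e):
    """Strip the systematic errors from a raw DUT reflection measurement."""
    return (gm_dut - e00) / (gm_dut * e11 - delta_e)

# Self-check with made-up error terms and ideal standards at one frequency.
e00, e11, tracking = 0.05 + 0.02j, 0.10 - 0.03j, 0.90 + 0.10j
delta_e = e00 * e11 - tracking
raw = lambda ga: (e00 - delta_e * ga) / (1 - e11 * ga)   # simulated VNA
standards = [-1.0, 1.0, 0.0]                             # short, open, load
terms = solve_error_terms(standards, [raw(g) for g in standards])
assert np.isclose(correct(raw(0.3 - 0.4j), *terms), 0.3 - 0.4j)
```

The correction itself is simple; the hard part at mmWave, as the next sections show, is obtaining trustworthy known values for the standards across the whole band.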

 

New Calibration Challenges

The main calibration challenge that comes with working at mmWave frequencies is that you now need a broadband calibration over a very wide frequency range, often from 500 MHz up to 125 GHz or higher. Most calibration techniques aren’t designed to achieve a calibration over such a wide range. What you’re really looking for is a load that offers this broadband frequency coverage: a well-designed broadband load can give reasonable accuracy, but a sliding load isn’t a good fit for mmWave. So, what other options do you have?

 

The Old Model: Polynomial Calibration

Well, you might first consider using a polynomial model. This is a common model at low frequencies. With this model, you’d need three bands: low, high, and a type of broadband sliding load. This usually works fine at frequencies below 30 GHz, but as you get into the mmWave frequency range, issues start to appear.
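
For context, a polynomial cal-kit model describes each standard with a handful of coefficients rather than measured data. Below is a minimal sketch of the common cubic fringing-capacitance model for an open standard behind an offset line, assuming a lossless line; the coefficient values are illustrative, not taken from any real kit:

```python
import numpy as np

Z0 = 50.0  # system impedance in ohms

def open_standard_gamma(f_hz, c_coeffs, offset_delay_s=0.0):
    """Reflection coefficient of an 'open' under the polynomial model:
    fringing capacitance C(f) = C0 + C1*f + C2*f^2 + C3*f^3 terminating
    a lossless offset line with one-way delay offset_delay_s."""
    c0, c1, c2, c3 = c_coeffs
    C = c0 + c1 * f_hz + c2 * f_hz**2 + c3 * f_hz**3
    zc = 1.0 / (1j * 2 * np.pi * f_hz * C)      # capacitor impedance
    gamma_term = (zc - Z0) / (zc + Z0)          # reflection at the termination
    # A lossless offset line only rotates the phase by the round-trip delay.
    return gamma_term * np.exp(-1j * 4 * np.pi * f_hz * offset_delay_s)

# Illustrative coefficients, roughly the magnitudes quoted for coaxial kits.
f = np.linspace(0.5e9, 70e9, 1001)
gamma_open = open_standard_gamma(f, (50e-15, 1.0e-27, 0.0, 0.0), 30e-12)
```

The model is compact, but it is only as good as the frequency range the polynomial was fitted over, which is exactly where the trouble in Figure 2 comes from.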

 

Figure 2 shows a short measured with three different polynomial models: low band, high band, and broadband. The x-axis is frequency in GHz and the y-axis is the error ratio, so lower numbers are better here. The red trace uses a low band model, one optimized for low band performance: it has a good load but potentially limited shorts. Around 40 GHz, the model breaks down and the error starts to expand.

 

The blue trace uses offset shorts without any low band load. The multiple shorts keep the error bounded at 40 GHz and above, but without a load, performance at the lower frequencies suffers.

 

However, if you combine a broadband model that takes advantage of the low band load behind the red trace with the high band offset-short corrections behind the blue trace, you get something like the green trace.

 

Figure 2: Low vs high vs broadband load models across a frequency range of 0-70 GHz

 

This demonstrates the new challenge of working at mmWave frequencies. It’s no longer possible to find a single load that covers the full frequency range you are testing, so at these broadband frequencies the load must be eliminated in favor of multiple shorts. But you can’t simply combine multiple shorts under the old polynomial model either. A new solution is required.

 

The New Model: Database Calibration

So, we know we need to use multiple shorts to cover this broad frequency range. But how? You need a calibration kit that eliminates the need for a broadband load and instead implements multiple shorts to cover the entire frequency range you’re working in, something like the Keysight calibration kit in Figure 3. This mechanical, coaxial calibration kit:

  • Has a low band load, four shorts, and an open,
  • Covers the low frequencies up to 50 GHz with the load, and
  • Uses the offset shorts to provide states on the Smith Chart that represent different impedance conditions.

 

Figure 3: Mechanical calibration kit

 

This calibration kit uses a database model, which is a good fit for mmWave testing. Rather than describing each standard with a polynomial, the database model characterizes each device using a specified dataset: known reflection data for the standard, plotted on the Smith chart, across the relevant frequency range.

 

For example, for a source-match type of measurement on a high-reflect device, we can ask, “What represents a good short at this frequency?” We plot that out and use it as our database calibration model. You can do the same for any type of measurement you are working with: plot out the ideal conditions and use that as a model. This dataset then allows us to calibrate our system.
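
As a rough illustration, you can think of the database model as a lookup table of characterized reflection data for each standard, interpolated onto the measurement grid and then fed into the same error-term solve sketched earlier. The data points and function below are hypothetical:

```python
import numpy as np

def database_standard(freqs_hz, gammas):
    """Wrap a standard's characterized (frequency, complex reflection)
    dataset in a function that interpolates onto any measurement grid.
    Real and imaginary parts are interpolated separately."""
    freqs_hz = np.asarray(freqs_hz)
    gammas = np.asarray(gammas)
    def gamma_at(f):
        re = np.interp(f, freqs_hz, gammas.real)
        im = np.interp(f, freqs_hz, gammas.imag)
        return re + 1j * im
    return gamma_at

# Hypothetical characterization data for one offset short.
short_model = database_standard(
    [1e9, 10e9, 50e9, 110e9],
    [-1.00 + 0.00j, -0.97 + 0.22j, -0.41 + 0.90j, 0.72 + 0.66j])

# At each measurement frequency, the characterized value (not an ideal -1)
# is what the calibration uses as the standard's "known" reflection.
print(short_model(28e9))
```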

 

The Keysight calibration kit in Figure 3 uses these techniques and allows us to effectively calibrate our system for mmWave testing. It’s important to realize that calibration kits and methods that work at lower frequencies simply do not work at these broadband frequencies. Consider selecting a new set of calibration tools that will optimize the accuracy of your mmWave test setup.

 

Conclusion

Tight margins at mmWave frequencies require new, more precise calibration techniques. You need to make accurate, repeatable measurements or risk design failures and missed deadlines.

 

Proper calibration across the broad frequency range is the first step toward a reliable test setup. Consider re-evaluating your test setup, calibration tools, and techniques. What changes do you need to make to work in the mmWave frequency range? How can you ensure you’re getting the most reliable measurements and avoiding costly test errors?

 

Get more mmWave resources!

The 5G vision set forth by IMT-2020 is an amazing thing. It opens up so many possibilities for consumers, the environment, health and safety, and humanity. Virtually every industry will be transformed, and new ones will emerge. The three defined use cases are foundational to setting the 5G specifications: enhanced mobile broadband (eMBB) to support extreme data rates, ultra-reliable low-latency communications (URLLC) for near-instant communications, and massive machine-type communications (mMTC) for massive interconnects.

 

The 3GPP is developing standards for radio interface technologies to be submitted to the ITU for compliance with the IMT-2020 (International Mobile Telecommunications-2020) requirements. While these standards are in some ways an extension of existing 4G standards, they really are radically different from what’s in use today. And if the standards are radically different, it’s not a stretch that the tests required to verify 5G product designs are also radically different.

 

The initial 5G New Radio (NR) Release 15 specification was introduced in December 2017, and the full release is targeted for June 2018. Release 15 focuses on specifying standards for the eMBB and URLLC use cases; standards for mMTC will be addressed in future releases. New releases of the standard will continue to roll out over many years. No previous standard has attempted to cover such a broad range of bandwidths, data rates, coverage, and energy efficiency.

 

Some key differences in 5G NR release 15 include:

 

  • Flexible numerology enables scalability – Where subcarrier spacing was fixed at 15 kHz in 4G LTE, it now scales to higher spacings (see the sketch after this list). Wider subcarrier spacing shortens the symbol period, enabling higher data rates and lower latency for URLLC use cases. In contrast, narrower subcarrier spacing and longer symbol periods allow lower data rates and better energy efficiency for IoT, the mMTC use case.

 

  • mmWave frequencies open up more bandwidth – LTE supports up to six channel bandwidths, from 1.4 MHz to 20 MHz, which can be combined through carrier aggregation for a maximum bandwidth of 100 MHz. The initial 5G NR Release 15 specifies frequencies up to 52.6 GHz with aggregated channel bandwidths up to 800 MHz; the initial target frequency bands are 28 GHz and 39 GHz. To put this in perspective, these mmWave bands alone can encompass the entire spectrum of the current 3G and 4G mobile communications systems. This additional spectrum is essential to enabling eMBB’s extreme data rates.

 

  • Massive MIMO to increase capacity – MIMO in LTE uses multiple antennas to send multiple, independent streams of data through the same frequency and time space, and it has been shown to increase data rates by making better use of the spectrum. With massive MIMO, the number of antenna elements on the base station is considerably greater than the number on the device. Implementing multiple antennas on base stations and devices will be essential to increasing capacity and achieving the throughput envisioned in eMBB use cases.
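
As a quick worked example of the flexible numerology described in the first bullet: subcarrier spacing in NR is 15 kHz × 2^µ, and the useful OFDM symbol period shrinks in proportion (cyclic prefix ignored here for simplicity):

```python
# 5G NR numerology: SCS = 15 kHz * 2^mu; useful symbol period = 1 / SCS.
for mu in range(5):
    scs_khz = 15 * 2**mu               # 15, 30, 60, 120, 240 kHz
    symbol_us = 1e3 / scs_khz          # useful symbol period in microseconds
    print(f"mu={mu}: SCS={scs_khz:>3} kHz, symbol period ≈ {symbol_us:6.2f} µs")
```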

  

New Test Challenges

These new standards will introduce new challenges in test. 

 

Flexible numerology complicates the development of 5G NR waveforms and introduces many new use cases that need to be tested. It also introduces new levels of coexistence testing with 4G and potentially Wi-Fi.

 

mmWave frequencies with more bandwidth change all assumptions about conducted tests. Due to the higher frequencies and the use of highly integrated multi-antenna arrays, tests will now be performed over-the-air (OTA).

 

Massive MIMO increases the number of antennas, and consequently the number of beams coming out of base stations and devices. These beam patterns, whether at sub-6 GHz or mmWave, need to be characterized and validated in an OTA test environment.

 

Viewing a 256 QAM waveform with antenna pattern

 

Radically different? Absolutely. Test solutions must be flexible and scalable so that they cover the range of use cases, frequencies, and bandwidths, as well as OTA validation. The test solutions must also evolve as the standards evolve. Check out this article series by Moray Rumney to understand more about how test will change as we move into the next stage of 5G development: The Problems of Testing 5G Part 1.

Late last year, technical thought leaders from academia and commercial organizations assembled in San Francisco to exchange insights on 5G NR, phased array antennas, and over-the-air (OTA) testing. Roger Nichols, Keysight’s 5G Program Manager, hosted the 5G Tech Connect event, which was timed to align with the publication of the first 3GPP 5G NR specifications. I was there to capture his insights on 5G along with other thought leaders’ reflections on technology challenges, and I’ve collected their remarks into soundbites for you.

Roger, with his 33 years of engineering and management experience in wireless test and measurement, talked about the many challenges the industry is facing as we move toward the 5G NR standard. He made the point that the proliferation of frequency bands will make it necessary for devices to work across a wide range of fragmented bands, leading to more complex designs and possible interaction issues. The elimination of cables and connectors is also creating the need to measure and conduct testing over the air, which can be both costly and complex.

 

Another well-known and experienced industry expert, Moray Rumney, who leads Keysight’s strategic presence in standards organizations such as 3GPP, expanded on the implications of 5G NR. He remarked that mmWave has much to offer in terms of wider bandwidths but will also lead to challenges related to beamforming, where narrow beams propagate in three-dimensional space. He claimed that such environments will require 3-D spatial test systems and simulation tools to enable equipment manufacturers to validate the performance of their designs. Moray further developed these ideas in his presentation “PRACH me if you can,” where he cheekily claimed that “there is no meaning to the power of a signal if you are looking in the wrong direction.”

Professor Mischa Dohler of King’s College London, one of the many industry and academic thought leaders present at the event, talked about some of the challenges 5G technology will bring, such as delay. Since human response time is around 10 ms, round-trip delay (latency) must be less than that. One way to reduce the delay is by adopting what he calls “model-mediated AI,” which is already used by the gaming industry to predict hundreds of milliseconds ahead in time and create a real-time experience. Mischa also said that the expected explosion of traffic will inevitably lead to the need for much more bandwidth to allow networks to address expectations on both data rates and latency.

 

I had the chance to sit down with Mischa to talk about some of the ideas he shared in his presentation. In this mini-interview, he summarized that since 5G will deliver data rates of at least 10 Gbps, enable eNodeBs to support 50,000 UEs, and bring latencies of less than 10 ms, the technology will be good enough for a wide range of exciting industry applications. He also mentioned that virtualization is driven by the need for flexibility, which will require a software-based architecture.

 

Another industry thought leader, Maryam Rofougaran – Co-founder and Co-CEO of Movandi Corporation – explained that the move to mmWave frequencies implies new designs and innovations to create efficient integrated systems. Movandi uses Keysight’s solutions for modulation characterization and beamforming testing to verify their system.

To address some of these challenges, Keysight introduced the world’s first 5G NR network emulator at the event. Lucas Hansen, Senior Director, 5G & Chipset Business Segment at Keysight Technologies, explained how it enables users to prototype and develop 5G NR chipsets and devices.

 

Watch more videos from 5G Tech Connect on Keysight’s YouTube channel.