All Places > Keysight Blogs > Next Generation Wireless Communications > Blog

Ask any R&D engineer these days if they are working on devices with wireless connectivity and they will likely say yes. Ask them again if they consider themselves Internet of Things (IoT) engineers and it’s a safe bet the answer will be no.

It’s easy to understand the disconnect. R&D engineers come out of college and enter the workplace ready to design widgets of all shapes, sizes and functionality. Until more recently, those widgets likely didn’t have to communicate wirelessly with other things and that was just fine, because for most, IoT wasn’t on the educational menu. In fact, it’s only recently that universities have begun incorporating IoT concepts and design practices into their curriculum.

Here’s where things get interesting. Everything is becoming wirelessly connected these days. By 2020 alone, there will be an estimated 50 billion connected devices around the world. That means one thing: It’s no longer a matter of if R&D engineers will have to work on IoT devices, but when.

What happens when these engineers, who have previously developed countless widgets, none of which ever had to communicate wirelessly, find themselves designing a widget with a radio? It needs to send and receive information, and it must work in an environment with lots of other widgets sending and receiving information at the same time, potentially interfering with its communication. What once was a relatively straightforward design project suddenly becomes a complicated mess.

So, while IoT may seem to some engineers like nothing more than an overhyped buzzword, it will soon be impacting every aspect of their design work—if it doesn’t already. And let’s be clear. There is a big difference between modifying a widget to communicate wirelessly and designing an IoT device to succeed in the real world.

Designers working on an IoT device design

Creating an IoT device that stands the test of time and the onslaught of competing products is quite tricky. Making devices “smart” requires advanced technologies, and that introduces new design and test challenges. The device may have to work unattended for long periods of time and in harsh environments, making long battery life and reliability absolutely essential. It may have to work in networks with lots of other devices and sources of interference, necessitating extensive coexistence testing. It must comply with industry standards and regulations. And it must be secure. How the device will be used in the real world also has to be considered so its design can be properly optimized.

Some of these concerns are commonplace for today’s R&D engineers, but many are not. Succeeding in this environment will require engineers to look beyond their job titles and come face to face with what it really means to design for the IoT.

For some, learning new IoT design skills will be in order. For others, it will simply be a matter of fine-tuning their existing skillset. Either way, make no mistake: designing IoT devices is a difficult task. It would be a serious misstep for any engineer to think otherwise and to assume that because they don’t consider themselves IoT engineers, they don’t have to deal with IoT issues. Nothing could be further from the truth. Engineers will have to work hard to create designs that succeed in the IoT, and that hard work starts with building a strong IoT skillset supported by the right tools.

For more information on designing for the IoT and understanding the challenges you face, and to help you down the right path to improving your IoT design skills, check out these free webcasts: Maximizing IoT Device Battery Life, Overcoming IoT Device Wireless Design and Test Challenges, and Analyzing IoT Device Power Integrity.

 A significant milestone in the final sprint toward 5G commercialization


Last month the 3GPP (Third Generation Partnership Project) approved 5G New Radio (NR) Release-15 standalone (SA) specification, paving the way for commercial introductions later this year.


Leading telecom operators, internet companies, and chipset, device, and equipment vendors contributed to this historic moment in 5G. Release-15 introduces a new end-to-end network architecture enabling many new high-throughput and low-latency use cases that open the door to new business models and an era of everything connected.


Does this mean 5G specifications are done? Not quite.  3GPP is working on additions to the current release and has already started work on phase II with Release-16 expected in 2019. The specifications add new capabilities and technologies that require leading-edge innovations in new designs. I am personally excited about the technology growth and advancements we will see in the next three to five years as the specifications evolve through Release-16 and beyond.

3GPP Release 15 photo by Keysight representative Moray Rumney.


The December 2017 announcement enabled operators to deploy 5G non-standalone mode (NSA) using existing LTE evolved packet core (EPC) for the control plane.  With the June 2018 Release-15, now 5G can operate in standalone (SA) mode using the completely new 5G RAN (radio access network) and core. This is a significant step forward because now 5G can support the many different use cases envisioned by IMT-2020, including high throughput on the mobile internet and low latency applications.  Enabling technologies such as flexible numerologies, massive MIMO and beam steering, and the use of mmWave spectrum dramatically changes designs of devices and network infrastructure.


Operators and network equipment manufacturers are already conducting 5G trials and plan for initial mmWave fixed wireless introductions in select cities later this year, and mobile smartphone services in 2019. Keysight has been working closely with these companies. Satish Dhanasekaran, senior vice president of Keysight Technologies, and president of the Communications Solutions Group (CSG) shared his perspective on the achievement: “We are excited to enable the industry at a threshold of 5G acceleration and commercialization. The completion of the standalone (SA) 5G new radio (NR) specification marks a distinct milestone and offers a playbook for a connected ecosystem to move forward, in making 5G a reality and unlocking huge potential for society. Keysight is engaged with market leaders, contributing to the 3GPP standardization development, and providing scalable 5G test and measurement solutions from L1 all the way to L7.”


What’s next? After a brief self-congratulatory pause, the focus quickly turns to work on a late drop of Release-15 planned for later in 2018 to fix some known issues. There is also the lengthy “to-do” list for Release-16 to address some critical challenges including reducing device power consumption, addressing network interference management, enhancing reliability in IoT use cases, and furthering the integration of licensed, unlicensed and shared spectrum into 5G deployments. There’s a long road ahead of us for 5G, and it’s not going to be an easy one. To stay up-to-date on 5G New Radio, check back on this blog or go to the 5G NR webpage to access white papers, webinars, and other informative content on 5G NR.


You can get more information about Keysight Technologies’ 5G solutions here.

Network of lights

With 5G NR Release-15 approved in June 2018, how soon can you expect to see 5G devices operating at mmWave frequencies? The current buzz is sooner than you expect.


At the recent IMS 5G Summit, I learned about some timelines. Initial mmWave releases are expected to be point-to-point, or point-to-multi-point, but not fully 5G NR compliant. But soon after, in the first half of 2019, operators and equipment makers are planning to introduce 5G devices with mmWave radios in select cities. This poses some pretty significant challenges for designers to produce a mmWave mobile device that meets expected quality of service while traveling through the network. 


mmWave isn’t new for wireless communications, but it is new for cellular communications. 5G NR specifies frequencies up to 52.6 GHz and new operating bands that open up almost 10 GHz of new spectrum.


  • Frequency Range 1 (FR1): 400 MHz to 6 GHz adds 1.5 GHz of new spectrum in the frequency bands 3.3–4.2 GHz, 3.3–3.8 GHz, and 4.4–5 GHz.


  • Frequency Range 2 (FR2): 24.25 to 52.6 GHz adds 8.25 GHz of new spectrum in the frequency bands 26.5–29.5 GHz, 24.25–27.5 GHz, and 37–40 GHz. Initial mmWave targets are 28 GHz and 39 GHz in Japan and the US.
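As a sanity check on those spectrum totals, note that the listed bands overlap (24.25–27.5 GHz and 26.5–29.5 GHz share a gigahertz), so the new spectrum is the width of the union of the bands, not the sum of their individual widths. A quick Python sketch, using the band edges listed above:

```python
def total_new_spectrum(bands_ghz):
    """Return the width in GHz of the union of (start, end) intervals."""
    total = 0.0
    current_start, current_end = None, None
    for start, end in sorted(bands_ghz):
        if current_end is None or start > current_end:
            # Disjoint from the running interval: bank it and start a new one
            if current_end is not None:
                total += current_end - current_start
            current_start, current_end = start, end
        else:
            # Overlapping band: extend the running interval
            current_end = max(current_end, end)
    if current_end is not None:
        total += current_end - current_start
    return total

fr1 = [(3.3, 4.2), (3.3, 3.8), (4.4, 5.0)]
fr2 = [(26.5, 29.5), (24.25, 27.5), (37.0, 40.0)]
print(total_new_spectrum(fr1))  # 1.5 GHz
print(total_new_spectrum(fr2))  # 8.25 GHz
```

Running this reproduces the 1.5 GHz (FR1) and 8.25 GHz (FR2) figures quoted above.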


mmWave, with its greater modulation bandwidth, is essential to meeting the extreme data throughput envisioned for 5G mobile broadband. However, establishing a mmWave communication link and tracking a mmWave device through the mobile network will be a challenge. mmWave signals simply don’t behave the same as signals below 6 GHz.


5G NR will use technologies like beam steering and new initial access procedures to enable a mmWave communication link, but transmitters and receivers must also be able to produce and demodulate high-quality signals in the device and base station. IQ impairments, phase noise, gain compression (AM-to-AM), amplitude-to-phase conversion (AM-to-PM), and frequency error can all cause distortion in the modulated signal. Phase noise is one of the most challenging factors in mmWave OFDM systems. Too much phase noise in a design causes each subcarrier to interfere with the others, impairing demodulation performance. These issues become even more problematic at mmWave frequencies with wider bandwidths.


Evaluating a signal’s modulation properties provides one of the most useful indicators of signal quality. Viewing the IQ constellation helps to determine and troubleshoot distortion errors. A key indicator of a signal’s modulation quality is a numeric error vector magnitude (EVM) measurement that provides an overall indication of waveform distortion. As modulation density increases, so too does the requirement for better EVM.  Shown here is the 3GPP (Third Generation Partnership Project) TS 38.101-1 EVM requirement for 5G UE (user equipment).



Modulation scheme for PDSCH    Required EVM
QPSK                           17.5%
16QAM                          12.5%
64QAM                          8%
256QAM                         3.5%
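As a rough illustration of what an EVM number means, here is a minimal numeric sketch (not the full 3GPP measurement procedure, which additionally specifies equalization and averaging windows): RMS error-vector power relative to the reference constellation.

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS EVM in percent: error-vector power over reference power."""
    error = measured - reference
    return 100 * np.sqrt(np.mean(np.abs(error) ** 2)
                         / np.mean(np.abs(reference) ** 2))

# Ideal QPSK symbols plus a small amount of additive noise (illustrative)
rng = np.random.default_rng(0)
ref = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
meas = ref + 0.05 * (rng.standard_normal(1000)
                     + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(meas, ref):.1f}%")  # roughly 7% at this noise level
```

For this noise level the signal would meet the 64QAM requirement but fail the 256QAM one, which is the sense in which denser modulation demands better EVM.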
Measurements of the overall spectrum are also used to validate the signal’s RF performance.  


Test solutions don’t simply migrate up from sub-6 GHz. The test equipment needs to operate at the higher mmWave frequencies with wider modulation bandwidths, and it must have better specifications than the device under test (DUT). When designing test solutions, you now need to be even more concerned about issues like adapters and cables, switching, over-the-air test, and system-level calibration. The measurement system needs to perform better than the DUT’s design goals, and a proper system-level calibration helps eliminate uncertainties due to test fixtures and is especially valuable for very wide bandwidth signals.


To find out more about overcoming the challenges of mmWave device design and test, check out this white paper series, which looks at many of the challenges you can expect with 5G NR, including the new flexible numerology, mmWave design considerations, MIMO and beamforming, and over-the-air testing.

The higher in frequency you go, the harder it is for a connector to find a mate.


The key to a successful connection is finding a good mate. As it turns out, finding a mate may be more difficult at millimeter-wave frequencies.


Before we talk about connections, let’s consider the block diagram of a transceiver operating at millimeter-wave frequencies. The physics of implementation calls for a different approach, at least for now, because the capacitance of a tiny metal plate or the inductance of a bond lead has an impact that grows at least linearly with frequency. It is important to keep this in mind as a design consideration.


Figure 1. Significant elements in a block diagram of high-frequency, wide-bandwidth radios.


In the red box on the left-hand side, we've got a phased array antenna. Since we are running at higher frequencies, we are very likely to be using beam-steered arrays. Beams are formed by shifting the phase of the signal emitted from each radiating element to provide constructive or destructive interference, which steers the beams in the desired direction.
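The phase progression that steers such a beam is straightforward to compute. For a uniform linear array with element spacing d (in wavelengths), steering the main lobe to angle θ calls for a per-element phase of −2πnd·sin θ. A sketch, with illustrative array geometry:

```python
import numpy as np

def element_phases_deg(n_elements, spacing_wavelengths, steer_deg):
    """Per-element phase shifts (degrees) steering a uniform linear array."""
    n = np.arange(n_elements)
    return np.degrees(-2 * np.pi * n * spacing_wavelengths
                      * np.sin(np.radians(steer_deg)))

# 8 elements at half-wavelength spacing, steered 30 degrees off boresight:
# the phase steps by about -90 degrees per element
print(element_phases_deg(8, 0.5, 30))
```

Applying these phase offsets at each radiating element produces the constructive interference in the steered direction described above.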


Next is the transmit/receive portion of the block diagram with up and down converters.


Moving to the right, in the red block in the IF/IQ section, we enter the world of quadrature mixing, where frequency conversion takes place. Converting frequency with this 90-degree phase difference gives you a single-sideband mixer, so you directly get image suppression on the IF side. Due to this benefit, it’s a commonly used technique.
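The image-suppression benefit can be sketched numerically (an illustrative baseband simulation with made-up frequencies, not product code): downconverting with a real LO folds a tone at the image frequency onto the wanted IF, while a complex (I/Q) LO shifts the spectrum in one direction only, leaving the image at −f_IF where it can be filtered off.

```python
import numpy as np

fs = 1000.0               # sample rate, Hz (illustrative values)
f_lo, f_if = 250.0, 50.0  # LO and IF frequencies
t = np.arange(1000) / fs  # one-second record: integer-Hz tones fall on bins

image = np.cos(2 * np.pi * (f_lo - f_if) * t)  # tone at the image frequency
freqs = np.fft.fftfreq(len(t), 1 / fs)
i_pos = np.argmin(np.abs(freqs - f_if))        # index of the +f_if bin

# Real LO: the image folds onto +f_if, on top of any wanted signal there
img_real = np.abs(np.fft.fft(image * np.cos(2 * np.pi * f_lo * t)))
# Complex (quadrature) LO: one-directional shift leaves the image at -f_if
img_quad = np.abs(np.fft.fft(image * np.exp(-2j * np.pi * f_lo * t)))

print(img_real[i_pos])  # ~250: image energy corrupts the wanted IF
print(img_quad[i_pos])  # ~0:   image suppressed at +f_if
```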


Figure 1 is representative of both backhaul and user equipment, because even on the user equipment side, devices are likely to try to make use of diversity reception.


Let’s come back to the challenges of capacitance and inductance mentioned above. As old-school as it may sound, impedance matching in these circuits is critical. To get designs at these frequencies working well, you must pay close attention to capacitive and inductive tuning. This is some of the hard work required to make wide-bandwidth, high-frequency radios operate. While the components may be “hidden” inside the RFIC, we see ever higher levels of integration to accommodate the extremely small dimensions these frequencies require.


Depending on the level of integration, when we look at those phased array antennas there’s an increasing chance we’re not going to find connectors anymore, because the extremely small size of the components makes the notion of a “connector” geometrically impractical. The higher in frequency we go, the smaller the dimensions and the more likely that we won’t find a connector to mate with. The growth of this connector-less interface is the heart of over-the-air (OTA) test.


This is yet another example of the ways radio development at millimeter-wave frequencies requires extra care and attention.


Get more mmWave resources!

The Internet of Things (IoT) is changing everything and it’s not just the interaction between humans and machines.


As with any new or emerging technology, the changes it creates often ripple through society in multiple ways. It may transform the way businesses operate and even the goods and services they deliver. It may transform existing industries with new use cases and dramatically improved processes. And it will undoubtedly transform workforces and the skillsets required of them. It’s happening today in the IoT.


The IoT comprises a vast worldwide ecosystem of devices, wireless communications, networks, and infrastructure. It spans many industries and vertical markets, including automotive, industrial, medical, smart city, smart home, and wearables. Each market has its own requirements and challenges, as do the IoT applications within each of these markets.


Successfully navigating this dynamic ecosystem is no easy task. It requires designers, manufacturers, network operators, and service providers with skills finely tuned for the IoT.


They must know what challenges lie ahead, regardless of whether they are designing the next great wearable device or putting a network in place to support the many IoT devices within a hospital environment. They must understand the nuances of the environments in which their devices and networks will operate. And just as critically, they must learn how to use the resources at their disposal to overcome any challenges and meet any requirements to develop IoT devices, wireless communications and networks that can thrive in the real world.


With such a large expanse of diversity in the IoT, learning all this critical information can be difficult, and at times, overwhelming. Many designers, manufacturers, network operators, and service providers simply don’t know where to go for the information they need.


If you are faced with this problem, attending formal training courses and seminars, or reading up on the latest IoT-related articles, books, and eBooks, is always a good place to start. It’s also smart to reach out to your trusted solutions vendors to get their insight on challenges and requirements in the IoT. In addition, these vendors can help you learn what solutions are at your disposal for addressing those challenges.


However, if what you are really after is more specialized information on how to do specific IoT tasks, then the IoT Education Hub may be just what you need. The IoT Education Hub gives you free online access to the latest educational resources on IoT device testing, wireless communications test, and network and system test. You'll gain access to valuable “how-to” information like how to:

  • Measure current drain
  • Deal with the interference between medical IoT devices
  • Deliver consistent measurements for IoT designs
  • Build resilient security into your network
  • Unmask network and data evasions
  • And much more...


Check out the IoT Education Hub: Device Test, IoT Education Hub: Wireless Communications, and IoT Education Hub: Network and System today to jumpstart your development efforts. A little knowledge can go a long way in helping you more quickly realize the promise of the IoT.

I recently attended the Brooklyn 5G Summit 2018 with luminaries such as Thomas Marzetta, Arun Ghosh, and Marty Cooper. Over the two-day conference, these innovative minds and their peers impressed with insightful talks about the latest technology that will make 5G successful. I was much amused by the fact that by the end of the conference, only one brave soul admitted to actually knowing what 5G was… and a significant number of attendees were already heralding 6G!


Using 5G to improve human existence

However we choose to define 5G, the conference reminded me that there are many different drivers that play a role in making 5G a reality. Some of these are commercial in nature and others revolve around the satisfaction of solving complex technical problems. Marty Cooper – the inventor of the first portable handset and the first person to make a call on a portable cell phone in April 1973 – shared his vision, which is that we will use the technology to improve human existence and solve problems associated with poverty, healthcare and education. He pointed out that thanks to the mobile industry with all its talented and driven engineers, 1 billion people in Africa have moved out of poverty in the past 20 years.


Creative minds love to solve tough challenges

But not everyone draws their energy from knowing that the technology they’re helping develop will improve people’s lives. Creative minds love to solve tough challenges, and 5G will definitely serve up some interesting technical challenges around mmWave, Massive MIMO, beamforming, and more.


Karolina Eklund with Thomas Marzetta, the originator of Massive MIMO

Thomas Marzetta, Distinguished Industry Professor of Electrical and Computer Engineering at the NYU Tandon School of Engineering, elaborated on this topic: ‘Massive MIMO, in its ultimate embodiment, is most likely to take place in the hugely valuable sub-5 GHz spectrum, driven by the staggering throughput and latency requirements of ubiquitous VR/AR. Although a complete reduction of Massive MIMO to commercial practice has not yet happened, already a fundamental question has emerged: Will future developments in wireless physical layer technology be limited to incremental improvements upon Massive MIMO, or are genuine breakthroughs still possible through some as-yet undiscovered principles of operation?’


Several speakers, including Melissa Arnoldi, President for Technology & Operations at AT&T were excited about offering vertical industries (e.g. Autonomous Vehicles) and consumers (e.g. AR/VR) use cases only achievable with ultra low latencies. These will be delivered by deploying ‘secure, flexible networks that operate at the edge’ by virtualizing the network and by deploying cloud technology and mobile edge computing (MEC). “In the short term, MEC is the key technology that will enable low latencies and in many cases also security. This will also bring machine learning based intelligence to wireless networks, which in our opinion will be the next big thing in mobile technologies,” commented Matti Latva-aho, Academy Professor at University of Oulu.  


New business opportunities in vertical industries

But you may not be that excited about the prospect of applying your insights, education and grey matter to solve complex technical challenges. Instead, the idea of creating consumer value through new use cases such as VR, AR, 5K video and the commercial possibilities these will bring to your business might get your juices going! Marcus Weldon, CTO & President of Nokia Bell Labs, discussed some of the commercial drivers that will make 5G a success. He claimed that 5G technology will increase productivity, lower costs and create new business opportunities in a wide range of vertical industries. Other speakers chimed in – vertical industries will be the ‘killer app’ that will make 5G successful!


Weldon pointed out that we are reaching the limits of TCO due to operational complexity. However, 5G technology will deliver fully automated network slicing across the RAN and core to address a diverse set of requirements, which will lower the cost of delivering high speed data and in turn open up new applications and drive new use cases – many of which are not yet known. Mikael Hook, Research Area Director for Radio-research within Ericsson Research commented: ‘The first version of 5G is now materializing - a result of quite some years of research, standardization and trials of both Mobile Broadband use cases and proof-points from several vertical segments. Guided by insights from these activities, the first 5G release can support a wide range of industrial use-cases besides MBB. The future-proof design of NR and expected continuous evolution of 5G mean that we will be able to meet also future requirements emerging from use cases no one has thought of yet.’


With the launch of the 3GPP 5G NR specifications, we’re on the cusp of unleashing the full potential of 5G and the benefits this technology will deliver: improving productivity, opening up new business models and opportunities, improving human existence, and increasing GDP in countries that choose to invest in the technology. I left the conference feeling invigorated – whatever is driving 5G development and deployment, it has a true chance to be a force for good and that is an exciting prospect to be a part of!

Here’s a question for you: How do you define mission-critical in the Internet of Things (IoT)? Some might say that a mission-critical IoT device or application is one with the potential to impact life or death. A power plant, water infrastructure, refinery, or medical devices might fall into this category. If a city’s smart water system shuts down, the health of its citizens is impacted. If a medical monitoring device fails to deliver a critical alert to a healthcare professional, a patient may die. You get the idea.


I personally think that the mission-critical IoT is much broader than this definition. In fact, I would argue that many IoT devices and applications once thought of as luxuries have today become “mission critical.” They are an integral part of our everyday life and we depend on them to work right, every time—even if their failure to do so doesn’t necessarily have a dire consequence.  


GPS is a prime example. It’s a technology virtually everyone is familiar with these days. But that wasn’t always the case. When GPS first entered the marketplace, it was in the form of large bulky devices that could be carried with you as you walked, or attached to your car window to guide you to your destination. The devices were expensive and required constant software updates. They were a luxury item. 


Over time, those devices got smaller and more accurate. More importantly, GPS made its way into two very popular IoT devices: smart phones and smart watches. Today, that combination enables some very mission-critical applications.

When driving, GPS-based mapping applications in smart phones guide us safely to our destinations. If the wireless connectivity in the smart phone cuts out or the application fails to work as expected, accidentally sending you in the wrong direction or into an unsafe area, you could easily find yourself driving into a ditch or potentially, the victim of a crime. 


In smart watches, GPS helps to keep children safe via a geofence that establishes a virtual perimeter or barrier around a physical geographical area. When a child wearing a smart watch goes beyond that perimeter, a notification is sent to their parent or guardian. 
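Under the hood, a geofence check of this kind reduces to a distance test against the perimeter. A minimal sketch (the coordinates and radius are made-up values; a production implementation would also handle GPS error margins and hysteresis):

```python
import math

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """Haversine distance check: is (lat, lon) within radius_m of center?"""
    r_earth = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(center_lat), math.radians(lat)
    dphi = math.radians(lat - center_lat)
    dlam = math.radians(lon - center_lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    distance = 2 * r_earth * math.asin(math.sqrt(a))
    return distance <= radius_m

# Fence center at (40.7128, -74.0060); a point ~1.1 km east is outside
# a 500 m perimeter, which would trigger the parent notification
print(inside_geofence(40.7128, -74.0060, 40.7128, -74.0060, 500))  # True
print(inside_geofence(40.7128, -73.9930, 40.7128, -74.0060, 500))  # False
```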


These examples underscore the ongoing transition of luxury, consumer-based IoT devices into the mission-critical arena, a trend that will only accelerate as the IoT proliferates. In the process, it is opening many new opportunities for designers and IoT device manufacturers traditionally developing products for the consumer market, and for network operators and service providers as well. By taking a broader view of the needs of individuals and society around them, they can begin to identify innovative ways to tweak their products for use in the mission-critical IoT. A prime example might be a wearable device adapted to alert patients and healthcare professionals to health irregularities and to predict potentially significant incidents before they have the chance to occur.


For IoT device designers, manufacturers, network operators, and service providers looking to expand into the mission critical IoT, there will undoubtedly be challenges ahead. Requirements will need to be understood and best design practices adopted. The right design, test and monitoring solutions will also need to be selected. 


Two other important factors that will need to be taken into consideration are reliability and security. IoT devices that people count on to work right, simply can’t fail. That means IoT devices, wireless communications and networks have to be ultra-reliable. And because those devices capture immense amounts of data, all of which is vulnerable to attack, security is imperative, especially in the medical arena.


Security is crucial for IoT medical devices.


Security involves not just the IoT endpoint device, but the networks on which the data is transmitted. Any potential vulnerability in the chain could be catastrophic. A hacker gaining access to a smart infusion pump, for example, might change the timing or amount of medication dispensed to a patient, causing a life-threatening emergency. Think it can’t happen? In 2015, the FDA issued an alert, warning of just that possibility with the Symbiq Infusion pump.


Appropriate visibility and test solutions designed to validate the security posture of networks can go a long way in ensuring any potential vulnerabilities are identified and dealt with quickly. Likewise, solutions that allow the performance of IoT devices to be evaluated under real-world conditions can be quite valuable for identifying reliability issues before they can result in costly product redesigns or even a recall.


While many unknowns lie ahead for designers, manufacturers, network operators, and service providers on the road to the mission-critical IoT, it’s clear that the size of the opportunity in this rapidly evolving space will only continue to grow. For many, that makes it a journey well worth undertaking. If you’re interested in finding out more about the evolving mission-critical IoT and what you can do to leverage its growing opportunities, check out the white paper Key Technologies Needed to Advance Mission-Critical IoT at the Keysight Mission-Critical IoT webpage.


The millimeter-wave (mmwave) frequencies of 30-300 GHz offer incredible opportunities for innovation. As we reach the bandwidth limits of lower frequencies, engineers are looking towards mmwave frequencies to help accommodate the explosive growth of wireless devices. One of the goals of 5G is to decentralize mobile phone transmission from big cellular towers to numerous small hot spots, also known as cells. A single cellular tower can only support so many devices, so increasing the number of cells will relieve mobile traffic as more devices demand bandwidth. Millimeter-wave signals attenuate quickly in the air so multiple cells can use the same frequency without interfering with each other.


This is just one example of how the industry is moving towards higher frequencies. Gigabit WiFi devices, automotive radar, and secure military and aerospace radar will all depend on mmwave frequencies. Vector network analyzers typically have maximum frequencies of only 67 GHz, so frequency extenders are required to test at these higher mmwave frequencies.


Frequency extender modules allow vector network analyzers to characterize devices at frequencies up into the hundreds of GHz. But there are more considerations than just your instrument’s frequency capability when making mmwave measurements. How do you validate that your measurements are accurate and reliable? You need to minimize uncertainty by:

  • Minimizing cable loss
  • Stabilizing temperature
  • DC biasing


Minimizing Cable Loss

Cable loss increases significantly with frequency, as seen in Figure 1. Even a very good cable will lose more than 1.1 dB over 8 cm at 110 GHz and higher, which has a strong impact on measurements. To put that in perspective, a standard 0.5 m cable could lose over 9 dB.
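Because cable loss in dB adds linearly with length at a fixed frequency, the arithmetic is easy to sketch (the 1.1 dB per 8 cm figure is the example value from above; connector and adapter losses come on top of the cable itself):

```python
def surviving_power_fraction(loss_db):
    """Fraction of input power that survives a given loss in dB."""
    return 10 ** (-loss_db / 10)

# Loss in dB scales linearly with cable length at a fixed frequency
loss_8cm = 1.1                       # dB per 8 cm (example figure from above)
loss_50cm = loss_8cm * (0.5 / 0.08)  # ~6.9 dB from the cable alone
print(surviving_power_fraction(loss_50cm))  # ~0.21 -> ~79% of power lost
print(surviving_power_fraction(9.0))        # ~0.13 -> ~87% of power lost
```

A 9 dB total loss means barely an eighth of the transmitted power reaches the other end, which is why placing the extender next to the DUT matters so much for low-power devices.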



Figure 1: Cable Loss and Frequency


This amount of loss is unacceptable when testing low-power devices. External frequency extenders can be placed right next to the DUT to minimize cable loss at high frequencies. This means that you only need to account for high frequency loss between the extenders and the DUT rather than the entire length between the DUT and the network analyzer.



Figure 2: Frequency extenders placed right next to the DUT, a wafer


Stabilizing Temperature

Frequency extension with poor temperature stability will bring down the performance of your entire system, regardless of how good your network analyzer is. When measuring wafers with thousands of devices, your measurement equipment will heat up. Smaller instruments like frequency extenders are more susceptible to ambient temperature changes than full-sized vector network analyzers. Higher temperatures agitate charge carriers and create thermal noise. We can see temperature’s direct contribution to the noise power in the following formula:

P = kTB

Where k is the Boltzmann constant in J/K, T is temperature in K, and B is the measurement bandwidth in Hz. In addition to the thermal noise, mechanical effects of heat can introduce measurement errors. Metal connectors expand and can shift when heated. This can lead to impedance mismatch and phase errors, especially at high frequencies. At a wavelength of 2.7 mm it only takes 1.35 mm of movement in the measurement plane for a 180° phase shift.
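Both effects are easy to quantify from the relationships above. A quick sketch:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(temp_k, bandwidth_hz):
    """Noise power P = kTB, expressed in dBm."""
    p_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3)

print(thermal_noise_dbm(290, 1))     # ~ -174 dBm in a 1 Hz bandwidth
print(thermal_noise_dbm(290, 1e9))   # ~ -84 dBm in a 1 GHz bandwidth

# Mechanical drift: phase error from movement of the measurement plane
wavelength_mm = 2.7
shift_mm = 1.35
print(360 * shift_mm / wavelength_mm)  # 180 degrees of phase shift
```

The familiar −174 dBm/Hz floor at room temperature rises directly with both temperature and the wide measurement bandwidths used at mmWave.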


A measurement setup is only as strong as its weakest link, so frequency extenders ideally have temperature stability on par with high-end VNAs. Rugged, high quality connectors and convection cooling help mitigate thermal errors. Figure 3 shows how a measurement setup featuring frequency extenders with good temperature stability (blue) compares to a setup with more temperature drift (red) after 8 hours of measurement.



Figure 3: Drift Impact on Calibrated Measurements


DC Biasing

Many RF devices are active, meaning they require a DC bias to operate. Some frequency extenders have an internal bias tee, which adds a DC signal to the mmWave test signal. The equivalent circuit in Figure 4 shows how this works. The inductor passes DC but blocks the RF signal, while the capacitor passes RF but blocks DC, so the two inputs cannot interfere with each other. The output contains both the DC bias and the RF test signal.

Figure 4: Bias Tee Equivalent Circuit
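The frequency-dependent impedances of the two components explain why the bias tee works. A small sketch, using hypothetical component values (not from any real bias tee), shows the inductor looking like a short at DC and an open at mmWave, while the capacitor does the opposite:

```python
import math

# |Z_L| = 2*pi*f*L for the inductor, |Z_C| = 1/(2*pi*f*C) for the
# capacitor. Component values below are illustrative only.

def inductor_z(freq_hz: float, l_henry: float) -> float:
    return 2 * math.pi * freq_hz * l_henry

def capacitor_z(freq_hz: float, c_farad: float) -> float:
    if freq_hz == 0:
        return float("inf")  # capacitor is an open circuit at DC
    return 1 / (2 * math.pi * freq_hz * c_farad)

L, C = 100e-9, 100e-12  # 100 nH, 100 pF (hypothetical values)
for f in (0.0, 100e9):
    print(f"{f/1e9:5.0f} GHz: |Z_L| = {inductor_z(f, L):.3g} ohm, "
          f"|Z_C| = {capacitor_z(f, C):.3g} ohm")
```

At DC the inductor is transparent and the capacitor blocks; at 100 GHz the roles reverse, so DC bias and RF test signal combine cleanly at the output.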


Variations in bias current can lead to variations in measurements. The DC bias is susceptible to small leakage currents and the farther the bias is from the DUT, the more current will leak. Placing the bias tee within the frequency extender brings it as close to the DUT as possible, limiting current leaks to provide a consistent bias.


In Figure 5 we see several measurements of drain-source current vs gate-source voltage on a FET. Zooming in on the measurement reveals that there are actually different I-V curves. This is due to ground current leakage varying between measurements. Keeping the bias close to the DUT helps minimize errors like this.



Figure 5: FET I-V Curves



As we’ve seen, frequency extender modules help your mmWave vector network analysis in several ways.

The small size of the modules allows you to bring the measurement to the device, reducing cable loss by minimizing the length of cable the mmWave signals must travel between your DUT and your instrument. The modules also minimize temperature-drift errors with precision hardware and temperature regulation. Finally, the modules can bias active devices with a built-in bias tee.


The N5295AX03 frequency extender module uses all of these advantages to make accurate continuous sweeps from 900 Hz to 120 GHz. 


Get more mmWave resources!

The objective of calibration is to remove the largest contributor to measurement uncertainty: systematic errors. As you start working in mmWave frequencies, the objective is unchanged, but the actual process for achieving the calibration is quite different.


The mmWave frequency band from 30-300 GHz is enabling technologies such as 5G and radar. But as we move into these higher frequencies, wavelengths become smaller and margins for error become tighter. The opportunity at mmWave frequencies is substantial, but you can't forget to account for the unique measurement challenges that come with this band. Properly calibrating your measurement setup is critical if you want accurate and repeatable measurements.





Figure 1: Network analyzer with a test set controller and frequency extenders


Your Measurement is Only as Good as Your Calibration

If you regularly work with a VNA, you’re probably familiar with the necessity of calibration. VNAs are designed to be incredibly precise measurement tools, but without proper calibration, you’re leaving that precision on the table. To maximize the precision of your VNA, you need to calibrate it using a mathematical technique called vector error correction. This type of calibration accounts for measurement errors in the VNA itself, plus all the test cables, adapters, fixtures, and probes you have hooked up between your analyzer and the DUT. But calibration at mmWave isn’t this simple.
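As a concrete flavor of what vector error correction does, here is a minimal sketch of the standard one-port error model, in which calibration determines three systematic error terms (directivity, source match, and reflection tracking) that are then removed from every raw measurement. The values below are invented for illustration:

```python
# One-port vector error correction sketch. Calibration yields the error
# terms e_d (directivity), e_s (source match), e_r (reflection tracking);
# each raw reflection measurement gamma_m is then corrected.

def correct_one_port(gamma_m: complex, e_d: complex,
                     e_s: complex, e_r: complex) -> complex:
    """Recover the actual reflection coefficient from a raw measurement."""
    return (gamma_m - e_d) / (e_s * (gamma_m - e_d) + e_r)

# With an ideal (error-free) system, raw and corrected values agree:
print(correct_one_port(0.5 + 0.1j, e_d=0, e_s=0, e_r=1))
```

The forward model is gamma_m = e_d + e_r * gamma_a / (1 - e_s * gamma_a); the function above is simply its algebraic inverse, applied per frequency point.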


New Calibration Challenges

The main calibration challenge at mmWave frequencies is that you now need a broadband calibration over a very wide frequency range, often from 500 MHz up to 125 GHz or higher. Most calibration techniques aren't designed to cover such a wide range. What you're really looking for is a load that offers this broadband frequency coverage. A well-designed broadband load gives reasonable accuracy, but a sliding load isn't a good fit for mmWave. So, what other options do you have?


The Old Model: Polynomial Calibration

Well, you might first consider using a polynomial model, a common model at low frequencies. With this model, you'd need three bands: low, high, and a type of broadband sliding load. This usually works fine at frequencies below 30 GHz, but as you get into the mmWave frequency range, you'll notice some issues.


Figure 2 shows a short with three different polynomial models: low, high, and broadband. The x-axis is frequency in GHz and the y-axis is the error ratio (so lower numbers are better here). The red trace uses a low band model, one optimized for low band performance. It has a good load but potentially limited shorts. Around 40 GHz, the model breaks down and the error starts to expand.


The blue trace uses offset shorts without any low band load. In this case, the multiple shorts limit performance below 40 GHz.


However, if you can combine a broadband model that takes advantage of the lower band load of the red trace and the high band offset short corrections of the blue trace, your result would be something like the green trace.



Figure 2: Low vs high vs broadband load models across a frequency range of 0-70 GHz


This demonstrates the new challenge of working at mmWave frequencies. As we move into these broadband frequency ranges, we need to eliminate the broadband load. No single load covers the full frequency range you are testing, so multiple shorts must cover it instead. But simply combining multiple shorts isn't enough either. A new solution is required.


The New Model: Database Calibration

So, we know we need to use multiple shorts to cover this broad frequency range. But how? You need a calibration kit that eliminates the need for a broadband load and implements multiple shorts to cover the entire frequency range you're working in, something like the Keysight calibration kit in Figure 3. This mechanical, coaxial calibration kit:

  • Has a low band load, four shorts, and an open,
  • Covers the low frequencies up to 50 GHz with the load, and
  • Uses the offset shorts to provide states on the Smith Chart that represent different impedance conditions.




Figure 3: Mechanical calibration kit


This calibration kit uses a database model, which is a good fit for mmWave testing. Instead of a polynomial fit, each calibration standard is characterized by a measured dataset: known reflection data for the various components across the frequency range, represented on the Smith Chart.


For example, for a source match type measurement, if we’re measuring a high reflect device, we can ask “what represents a good short at this frequency?” We plot that out, and we use this as our database calibration model. You can do that for any type of measurement you are working with: plot out the ideal conditions and use that as a model.

This dataset then allows us to calibrate our system.


The Keysight calibration kit in Figure 3 uses these techniques and allows us to effectively calibrate our system for mmWave testing. It's important to realize that calibration kits and methods that work at lower frequencies simply do not work across these broadband frequency ranges. You need to consider selecting a new set of calibration tools that will optimize the accuracy of your mmWave test setup.



Tight margins at mmWave frequencies require new, more precise calibration techniques. You need to be able to make accurate, repeatable measurements or else risk design failures and missed deadlines.


Proper calibration across the broad frequency range is the first step to a reliable test setup. Consider re-evaluating your test setup, calibration tools, and techniques. What changes do you need to make for working in the mmWave frequency range? How can you ensure you're getting the most reliable measurements and avoiding costly test errors?


Get more mmWave resources!

The 5G vision set forth by IMT-2020 is an amazing thing. It opens up so many possibilities for consumers, the environment, health and safety, and humanity. Virtually every industry will be transformed, and new ones will emerge. The three defined use cases, enhanced mobile broadband (eMBB) to support extreme data rates, ultra-reliable low latency communications (URLLC) for near-instant communications, and massive machine type communications (mMTC) for massive interconnects, are foundational to setting the 5G specifications.


The 3GPP is developing standards for radio interface technologies to be submitted to the ITU for compliance with the IMT-2020 (International Mobile Telecommunications-2020) requirements. While these standards are in some ways an extension of existing 4G standards, they really are radically different from what's in use today. And if the standards are radically different, it's not a stretch that the tests required to verify 5G product designs are also radically different.


The initial 5G New Radio (NR) release 15 was introduced in December 2017, and the full release is targeted for June 2018.  Release 15 focuses on specifying standards for the eMBB and URLLC use cases. Standards for the mMTC will be addressed in future standards releases. New releases of the standard will continue to roll out over many years. No previous standard has attempted to cover such a broad range of bandwidths, data rates, coverage, and energy efficiency.


Some key differences in 5G NR release 15 include:


  • Flexible numerology enables scalability – Where subcarrier spacing was fixed at 15 kHz in 4G LTE, it now scales to higher spacings. Wider subcarrier spacing shortens the symbol period, enabling higher data rates and lower latency for URLLC use cases. In contrast, narrower subcarrier spacing and longer symbol periods allow for lower data rates and better energy efficiency for IoT, or the mMTC use case.


  • mmWave frequencies open up more bandwidth – LTE supports six channel bandwidths, from 1.4 MHz to 20 MHz, which can be combined through carrier aggregation for a maximum bandwidth of 100 MHz. The initial 5G NR release 15 specifies frequencies up to 52.6 GHz with aggregated channel bandwidths up to 800 MHz. Initial target frequency bands are 28 GHz and 39 GHz. To put this in perspective, these mmWave bands alone can encompass the entire spectrum of the current 3G and 4G mobile communications systems. This additional spectrum is essential to enabling eMBB extreme data rates.


  • Massive MIMO to increase capacity – MIMO in LTE uses multiple antennas to send multiple, independent streams of data through the same frequency and time space. MIMO has been shown to increase data rates by making better utilization of the spectrum. With Massive MIMO, the number of antenna elements on the base station is considerably greater than the number on the device. Implementing multiple antennas on base stations and devices will be essential to increasing capacity and achieving the throughput envisioned in eMBB use cases. 


New Test Challenges

These new standards will introduce new challenges in test. 


Flexible numerology complicates the development of 5G NR waveforms and introduces many new use cases that need to be tested. It also introduces new levels of coexistence testing with 4G and potentially Wi-Fi.


mmWave frequencies with more bandwidth change all assumptions about conducted tests. Due to the higher frequencies and the use of highly integrated multi-antenna arrays, tests will now be performed over-the-air (OTA).


Massive MIMO increases the number of antennas, and consequently the number of beams coming out of base stations and devices. These beam patterns, whether at sub-6 GHz or mmWave, need to be characterized and validated in an OTA test environment.


Viewing a 256 QAM waveform with antenna pattern


Radically different? Absolutely. Test solutions must be flexible and scalable so that they cover the number of use cases, frequencies, and bandwidths, as well as OTA validation. The test solutions must also evolve as the standards evolve. Check out this article series by Moray Rumney to understand more about how test will change as we move into the next stage of 5G development: The Problems of Testing 5G Part 1.

Late last year, technical thought leaders from academia and commercial organizations assembled in San Francisco to exchange insights on 5G NR, phased array antennas, and Over-the-Air (OTA) testing. Roger Nichols, Keysight’s 5G Program Manager, hosted the 5G Tech Connect event, which was timed to align with the publication of the first 3GPP 5G NR specifications. I was there to capture his insights on 5G along with other thought leaders’ reflections on technology challenges and I’ve collected their remarks into soundbites for you.

Roger, with his 33 years of engineering and management experience in wireless test and measurement,  talked about the many challenges the industry is facing as we move towards the 5G NR standard. He made the point that the proliferation of frequency bands will make it necessary for devices to work across a wide range of fragmented bands leading to more complex designs and possible interaction issues. Also, the elimination of cables and connectors is leading to the need to measure and conduct testing Over-the-Air, which can be both costly and complex.  


Another well-known and experienced industry expert, Moray Rumney, who leads Keysight’s strategic presence in standards organizations such as 3GPP, expanded on the implications of 5G NR. He remarked that mmWave has much to offer in terms of wider bandwidths, but will also bring challenges related to beamforming, where narrow signals propagate in three-dimensional space. He claimed that such environments will require 3-D spatial test systems and simulation tools to enable equipment manufacturers to validate the performance of their designs. Moray developed these ideas further in his presentation ‘PRACH me if you can’, where he cheekily claimed that ”there is no meaning to the power of a signal if you are looking in the wrong direction.”

Professor Mischa Dohler of King’s College London, one of the many industry and academic thought leaders present at the event, talked about some of the challenges 5G technology will bring, such as delay. Since human response time is around 10 ms, round-trip delay (latency) must be less than that. One way to reduce the delay is by adopting what he calls ’model-mediated AI,” already used by the gaming industry to predict hundreds of milliseconds ahead in time to create a real-time experience. Mischa also said that the expected explosion of traffic will inevitably require much more bandwidth if networks are to meet expectations on both data rates and latency.


I had the chance to sit down with Mischa to talk about some of the ideas he shared in his presentation. In this mini-interview, he summarized that since 5G will deliver at least 10 Gbps data rates, enable eNodeBs to support 50,000 UEs, and bring latencies below 10 ms, the technology will be good enough for a wide range of exciting industry applications. He also mentioned that virtualization is driven by the need for flexibility, which will require a software-based architecture.


Another industry thought leader, Maryam Rofougaran – Co-founder and Co-CEO of Movandi Corporation – explained that the move to mmWave frequencies implies new designs and innovations to create efficient integrated systems. Movandi uses Keysight’s solutions for modulation characterization and beamforming testing to verify their system.

To address some of these challenges, Keysight introduced the world’s first 5G NR network emulator at the event. Lucas Hansen, Senior Director, 5G & Chipset Business Segment at Keysight Technologies, explained how it enables users to prototype and develop 5G NR chipsets and devices.


Watch more videos from 5G Tech Connect on Keysight’s YouTube channel.

The world’s older population is growing dramatically; 8.5% of people worldwide (617 million) are aged 65 and over, and this number is projected to jump to nearly 17% of the world’s population by 2050 (1.6 billion). In addition, global life expectancy at birth is projected to increase by almost eight years, climbing from 68.6 years in 2015 to 76.2 years in 2050¹. Chronic diseases and conditions are on the rise, which will push current healthcare systems beyond their limits and capabilities. Societies have rising expectations for robust healthcare services, and healthcare facilities face many new and serious challenges balancing those expectations against the available resources. Luckily, continuous technological developments are helping to improve some medical processes, ease the workflow of healthcare practitioners, and ultimately improve the situation in an overloaded hospital.


Digital transformation of healthcare
Internet of Things (IoT), or to be more specific, the Internet of Medical Things (IoMT), is revolutionizing the healthcare industry. The number of connected medical devices is expected to increase from 10 billion to 50 billion over the next decade2. Cisco estimates that by 2021, the total amount of data created by IoT devices will reach 847 Zettabytes (ZB) per year3. At some point, IoT will become the biggest source of data on Earth. Imagine the possibilities if human-oriented data, like medical history, allergies to medication, laboratory test results, and personal statistics, were digitized as part of electronic health initiatives. How will healthcare practitioners interpret and leverage this flood of big data from connected systems to make informed patient care decisions and to understand and predict current and future health trends? The answer: machine learning (ML).


Machine learning helping to propel healthcare IoT
ML is an approach to achieving artificial intelligence (AI): algorithms analyze data, learn from it, identify patterns, and then make decisions with minimal human intervention. Healthcare providers and device makers are integrating AI and IoT to create advanced medical applications and devices that provide person-centric care for individuals, from initial diagnosis to ongoing treatment options, while solving a variety of problems for patients, hospitals, and the healthcare industry. At the same time, these AI-enabled medical IoT devices will make healthcare treatments proactive rather than merely reactive.


An autonomous “nurse” is one example of an AI-enabled medical IoT application. It will be able to answer patients’ questions because it is connected via the internet to a large range of data from previous health records. By integrating facial recognition, the robotic “nurse” will be able to recognize a patient’s mood and adapt its behavior and reactions accordingly. It will also be able to remind patients to take medication and of their doctors’ appointments. Now, imagine if a hospital were to “hire” robotic “nurses” that can reason, make choices, learn, communicate, and move, and that are connected to the hospital’s network and to each other. They could help the human nurses with tasks like administering medication, maintaining records, communicating with doctors, and educating patients on disease management, to name a few. This would be a welcome solution where nurses are pushed to handle more than they reasonably can.


Soon, ML will bring a set of bots to the healthcare industry, with billions of “dumb” machines transformed into smart machines. This change will transform the way patients are assessed and treated, and healthcare professionals will be able to provide a better quality of care tailored to each patient.


A bright future for telehealth
Another new development the healthcare industry is experiencing today is a general shift from in-office visits to remote health monitoring, or telehealth. Many patients agree that home is the best place for healthcare, since patients remain in their normal everyday environment. A survey conducted in 2016 concluded that 94-99 percent of 3,000 patients were very satisfied with telehealth, while one-third of the respondents preferred the telehealth experience to an in-office doctor visit4.


With IoT, remote health monitoring, or telehealth, is feasible, especially for patients living in remote areas. Another reason remote health monitoring is gaining popularity is the vast variety of biosensors and medical wearables readily available in the market today. So, what’s in it for healthcare practitioners? Data coming from their remote patients will help them detect patterns and gain new insights into health trends. That’s what IoT, big data, and analytics software can help achieve.



The healthcare landscape has changed and is still changing. Patients are starting to embrace the change, using medical IoT devices to manage their health requirements. Healthcare providers are starting to incorporate connected healthcare to drive excellence, stay competitive, and improve treatment outcomes, giving patients a better healthcare experience, while medical device makers are developing solutions that are more accurate, intelligent, and personalized. Ultimately, leveraging technologies to improve treatment outcomes, the management of drugs and diseases, and the patient experience will lead to a more efficient hospital.








The use of connected medical devices and medical equipment in hospitals has increased with the expansion of wireless technologies and advancements in medical device design. As a result, the number of connected devices in a large hospital or healthcare facility can reach 85,000 at any given time1. As the density of connected devices increases, so does the density of the electromagnetic environment, and there are concerns about the impact from sources producing radio frequency interference (RFI). Advances in the technology of medical devices and many general consumer products are significantly affecting efforts to maintain the required operations and interoperability between products in a hospital or healthcare facility.


Some RFI sources in a hospital environment are natural or ambient electromagnetic energy, like lightning, television transmissions, or AM, FM, or satellite radio. Other sources are medical equipment and consumer products like ventilators, cardiac defibrillators, infusion pumps, motorized wheelchairs, MRI systems, cellphones, tablets, and laptops. RFI can cause many serious problems, some of which can lead to a patient’s death. There have been many reported cases: sleep-apnea monitors failed to sound an alarm when babies stopped breathing, power wheelchairs started rolling after their brakes released because of certain field strengths, and anesthetic gas monitors stopped working when influenced by interference from electrosurgery units2.


Hospital administrative staff, product makers, patients, and the public all share a huge responsibility and challenge to keep pace with the efforts needed to maintain the required level of electromagnetic compatibility (EMC) in a hospital environment. Here are some examples of how everyone can play their part.


Hospital administrative control:

1. RF shielding

There are two reasons why certain medical equipment needs to be shielded. As an example, let’s take an MRI machine. The MRI machine is usually placed in a shielded room, first to prevent extraneous electromagnetic radiation from distorting the MR signal, and second to prevent electromagnetic radiation generated by the MR scanner from interfering with nearby medical devices. RF shielding must encircle the entire room: walls, ceiling, and floor. Hospital management could also consider less expensive materials that increase shielding capacity, such as electrically conductive paint, wallpaper, and cloth, for less critical areas of the hospital.


2. Restrict RF sources from certain areas in the hospital

Controlling the number of devices that could potentially contribute to radio frequency interference (RFI) in a specific area helps lower the risk of interfering with the operation of clinical and other electronic equipment. Hospital management should take measures to control the electromagnetic environment by restricting cellphones and other potential RF sources from sensitive areas, such as the intensive care unit, the neonatal intensive care unit, and the operating theater, where critical care medical equipment is in use.


Public and patients:

3. Implementation of control techniques

There has been an increase in the use of electronically controlled medical devices outside the clinical environment; they are often used at home, attached to or implanted in a patient. RFI problems also affect these patients, especially those with cardiac pacemaker implants. Though the chances that EMI from a cellphone could produce a life-threatening situation are small, certain steps are advisable to minimize any risk. Government bodies have issued cautions and recommendations to wearers3.

  • Using a cellphone very close to the pacemaker may cause the pacemaker to malfunction.
  • It is advisable to avoid carrying a cellphone in the breast pocket directly over the pacemaker because an incoming call will switch the phone to its transmission mode and may cause interference.
  • When using a cellphone, it is advisable to hold it to the ear farthest from the pacemaker.


Medical device makers/developers:

4. RF immunity of medical devices

For many years now, military, aircraft, and automotive electronic systems have been required to meet strict RFI immunity requirements. The technology has been developed and can easily be deployed for medical devices, and most techniques are not costly if they are incorporated into the electronics system design at an early stage. The international standard for RF immunity of medical devices, IEC 60601-1-2, requires a minimum immunity level of 3 V/m in the 26-1000 MHz frequency range4. Medical device developers and makers need to incorporate RF immunity techniques like shielding, grounding, and filtering to ensure their devices conform to the standard and are robust against RFI.
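To see why a 3 V/m immunity floor matters, a back-of-the-envelope far-field estimate of the field a transmitter produces at distance d is E = sqrt(30 · P · G) / d in free space. This is only a sketch; real hospital environments add reflections and shielding, and the handset power and gain below are illustrative assumptions:

```python
import math

# Far-field, free-space estimate of electric field strength at a
# distance from a transmitter: E = sqrt(30 * P * G) / d.

IMMUNITY_LIMIT_V_PER_M = 3.0  # minimum immunity level in IEC 60601-1-2

def field_strength(power_w: float, gain: float, distance_m: float) -> float:
    """Estimated field strength in V/m at distance_m from the antenna."""
    return math.sqrt(30 * power_w * gain) / distance_m

# A 2 W handset with modest antenna gain, 1 m from a medical device:
e = field_strength(2.0, 1.64, 1.0)
print(f"{e:.1f} V/m -> exceeds 3 V/m limit: {e > IMMUNITY_LIMIT_V_PER_M}")
```

Even this rough estimate lands well above 3 V/m at one meter, which is why designing devices to work well above the minimum immunity level (and keeping RF sources away from critical equipment) both matter.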


5. Incorporate RFI immunity into product design

Modern medical devices are getting smaller, combining low-power integrated circuitry that is more sensitive and susceptible to RFI. A medical device maker can incorporate RFI immunity into the product design, making sure the device is robust and able to work well above the defined minimum immunity level. One way to do this is by testing the product in a real-world setting to ensure the device can withstand high RF field strengths.


As you can see, there are many ways to deal with electromagnetic interference in a healthcare facility. However, the field strength to which a medical device may be exposed depends on many conditions and is beyond what the device makers or developers can control. It is up to hospital administrative staff to impose and enforce guidelines that maintain a safe environment for hospital patients.


Here is a related webinar that covers RF coexistence challenges, what RF coexistence testing is, and how it is performed. And if you are looking for solutions to combat your design challenges, please go to for more information.





5G makes so many promises that it’s difficult to tell how they can all be achieved: extreme data download speeds, self-driving automobiles, IoT devices monitoring for many years. One of the key enablers is the flexible numerology recently defined in 3GPP Release 15. I find this part of 5G fascinating and think it will be key to supporting a wide range of frequencies and scheduling for many diverse services.


The top five key features of 5G flexible numerology are:


1. Subcarrier spacing is no longer fixed to 15 kHz. Instead, the subcarrier spacing scales as 2^µ x 15 kHz to cover different services: QoS, latency requirements, and frequency ranges. 15, 30, and 60 kHz subcarrier spacings are used for the lower frequency bands, and 60, 120, and 240 kHz subcarrier spacings are used for the higher frequency bands.


2. The number of slots increases as numerology (µ) increases. As in LTE, each frame is 10 ms and each subframe is 1 ms, with ten subframes to a frame. In normal CP, each slot has 14 symbols. As numerology increases, the number of slots in a subframe increases, raising the number of symbols sent in a given time. As shown in Figure 1, more slots per subframe means a shorter slot duration.


Slot length scales with the subcarrier spacing: slot length = 1 ms / 2^µ

Subcarrier spacing    # slots per subframe    Slot length
15 kHz (µ=0)          1                       1 ms / 2^0 = 1 ms
30 kHz (µ=1)          2                       1 ms / 2^1 = 500 µs
60 kHz (µ=2)          4                       1 ms / 2^2 = 250 µs
120 kHz (µ=3)         8                       1 ms / 2^3 = 125 µs
Figure 1. Slots within a subframe and the associated slot duration time.
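The scaling rules above can be sketched in a few lines of code: subcarrier spacing, slots per subframe, and slot length all follow directly from the numerology µ.

```python
# 5G NR numerology scaling: subcarrier spacing = 15 kHz * 2^mu,
# slots per subframe = 2^mu, slot length = 1 ms / 2^mu.

def numerology(mu: int) -> dict:
    return {
        "subcarrier_spacing_khz": 15 * 2**mu,
        "slots_per_subframe": 2**mu,
        "slot_length_ms": 1.0 / 2**mu,
    }

for mu in range(4):
    n = numerology(mu)
    print(f"mu={mu}: {n['subcarrier_spacing_khz']:>3} kHz, "
          f"{n['slots_per_subframe']:>2} slots/subframe, "
          f"{n['slot_length_ms']} ms slot")
```

Doubling the subcarrier spacing halves the slot length, which is exactly the data-rate/latency trade-off the flexible numerology is designed to exploit.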


3. Mini-slots for low latency applications.  A standard slot has 14 OFDM symbols.  In contrast, mini-slots can contain 7, 4, or 2 OFDM symbols.  Mini-slots can also start immediately without needing to wait for slot boundaries, enabling quick delivery of low-latency payloads. Mini-slots are not only useful for low-latency applications, but they also play an important role in LTE-NR coexistence and beamforming.


4. Slots can be DL, UL, or flexible. NR slot structure allows for dynamic assignment of the link direction in each OFDM symbol within the slot. With this, the network can dynamically balance UL and DL traffic. This can be used to optimize traffic for different service types.


Figure 2. Link direction can be dynamically assigned.



5. Multiplexing of different numerologies. Different numerologies can be transmitted on the same carrier frequency with a new feature called bandwidth parts, which are multiplexed in the frequency domain. While this provides the flexibility for diverse services to share the same carrier, mixing numerologies on a carrier can cause interference between the subcarriers of the different numerologies, introducing new challenges.


Why should you care?  I see it like a multi-lane super highway with lots of control.  These lanes represent the different types of services offered in 5G. You have the fast lanes that are very speedy and can handle a lot of cars.  You have the slow lanes where traffic may be at turtle’s pace. And now, throw in a motorbike that can speed in and out of lanes at any time.  Now you need to be concerned about traffic and possible collisions.


Flexible numerology in 5G is much different from numerology found in 4G.  It enables a lot of flexibility, but it also introduces new challenges with the way waveforms are built and managed.  Now you need to consider subcarrier spacing, UL, DL configurations, and bandwidth parts.  The number of test cases explodes, and device designers will need to create and analyze waveforms in the frequency-, time-, and modulation domains, as well as verify the device’s performance on the network with many different numerologies. 


If you are interested in learning more about 5G Numerology, I’d highly recommend watching the webinar: Understanding the 5G NR Physical Layer by Javier Campos. It provides lots of information on the new standards and goes into details on 5G numerology, waveforms, and new access procedures.

Within the last year I have been a victim, twice. The first time, a thief stole two catalytic converters off my car parked in my driveway. The second time, a thief stole a package from my mailbox. Okay, in the grand scheme of things, these probably aren’t the worst crimes that could have occurred; but they still got me thinking. Barring installing an expensive security system, welding rebar over my new catalytic converters, or picking up my mail directly from the post office, was there anything I could do to potentially prevent these crimes from happening again in the future? As it turns out, there may be, and it will likely come in the form of a low-power wide area network (LPWAN) technology known as Narrowband-IoT (NB-IoT).


NB-IoT is one of the Cellular IoT (CIoT) technologies defined under the 3GPP umbrella to enable IoT connectivity in licensed spectrum and to co-exist with legacy cellular broadband technologies like LTE, UMTS, and GSM. By reusing the cellular infrastructure, it enables devices to connect directly to operator networks, providing access to nationwide coverage with value-added services like mobility, roaming, security, and authentication. The target for NB-IoT is to provide sufficient coverage for smart meters and other IoT appliances typically located in basements and similarly deep in-building locations.


                                 Smart meter


That makes NB-IoT suitable for commercial applications like home lighting, security control, and maybe even keeping tabs on my catalytic converters and mail. It also opens the door to new opportunities for industrial IoT (IIoT) applications like energy and utility management (e.g., a smart grid), asset tracking, and machine-to-machine communication. After all, NB-IoT can provide robust coverage and is scalable to very large numbers of devices—two hallmarks of the IIoT.


But, succeeding in the IIoT with NB-IoT devices and systems will require critical attention to three key challenges.


Battery Life. NB-IoT targets a battery life of 10 or more years; however, to avoid costly maintenance, the battery should last for the entire lifecycle of the device. Unfortunately, battery life is heavily impacted by coverage.


In low coverage, more repetitions are needed to transfer data. The more repetitions, the longer the duty cycle of the IoT modem and the higher the power consumption. Excess repetitions due to network misconfiguration or a poor network implementation have a similar impact. In a deep in-building location there may be a difference of tens of dB in coverage between operators, and that can take years off the battery life of an IoT device deployed there.


To ensure a long battery life, manufacturers will need to characterize the current consumption of the device in active, idle, standby, and sleep modes. Device vendors will need to recreate operating conditions to understand how much current is drawn in each scenario (e.g., a remote software update versus a device that is unable to connect to a server).
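As a rough illustration of why this per-mode characterization matters, battery life can be estimated to first order from the current drawn in each mode and its daily duty cycle. All the currents, durations, and the battery capacity below are placeholder assumptions for a hypothetical sleep-dominated sensor, not measured values for any real device.

```python
# Hedged sketch: first-order battery-life estimate for a hypothetical NB-IoT sensor.
# Every number here is an illustrative assumption, not a measured value.

BATTERY_MAH = 5000  # e.g., a primary lithium cell pack (assumption)

# (current_mA, seconds_per_day) for each operating mode -- placeholder figures
modes = {
    "transmit": (200.0, 10),            # a few short uplink reports per day
    "receive":  (50.0, 20),
    "idle":     (1.0, 570),
    "sleep":    (0.005, 86400 - 600),   # deep sleep (e.g., PSM) the rest of the day
}

mah_per_day = sum(i_ma * sec / 3600 for i_ma, sec in modes.values())
years = BATTERY_MAH / mah_per_day / 365
print(f"~{mah_per_day:.2f} mAh/day -> ~{years:.1f} years")
```

Note how the estimate is dominated by the transmit burst: doubling the repetitions (and thus the transmit seconds) in a poor-coverage cell roughly halves the headroom above the 10-year target, which is exactly why the mode-by-mode current measurements described above matter.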


Coverage. NB-IoT is expected to enable a coverage gain of up to 23 dB over regular LTE. The real gain may be less, depending on the deployment method and configuration. The challenge with NB-IoT coverage is that it depends heavily on the field performance of the commercial network equipment and IoT devices, the interoperability between them, and the network design and configuration.


To ensure extended coverage, manufacturers will need to simulate different RF environments (e.g., remote locations, basement installations, hidden installations, behind concrete walls, and industrial environments). They will also need to perform transmitter and receiver characterization to understand device performance under these RF conditions. Once the IoT network is live, the service provider will need to perform field measurements to verify that the simulated tests match real-life conditions.
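The coupling between coverage and battery life can also be sketched numerically: with ideal coherent combining, N repetitions of a transmission buy at most 10·log10(N) dB of gain while multiplying transmit energy by N. The helper below is an illustrative upper bound under that idealized assumption, not a field prediction.

```python
import math

# Sketch: ideal repetition (combining) gain vs. airtime/energy cost.
# Real-world gain is lower; 10*log10(N) assumes perfectly coherent combining.

def repetition_gain_db(n_repetitions: int) -> float:
    """Upper-bound coverage gain (dB) from repeating a transmission n times."""
    return 10 * math.log10(n_repetitions)

for n in (1, 2, 16, 128):
    print(f"{n:4d} repetitions: up to +{repetition_gain_db(n):4.1f} dB, "
          f"{n}x transmit energy")
```

Even at this upper bound, deep coverage is expensive: roughly 128 repetitions are needed for about 21 dB of gain, at 128 times the transmit energy, which is why a few dB of difference in real coverage between operators can translate into years of battery life.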


Low Cost. The target price for an NB-IoT module is below $5.00. Initially, the cost is expected to be comparable to that of GSM/GPRS modules. However, the underlying NB-IoT technology is much simpler, and its cost will likely decrease as demand increases. An unreliable NB-IoT device adds to the effective module cost through service and recall expenses, as does the cost of test during the device's development and production.


To achieve a low NB-IoT device cost, manufacturers can use lower priced components or simplify the hardware design, but the performance of the device must be properly characterized to ensure these cost-cutting measures don’t compromise device reliability. Manufacturers must also carefully select the right test equipment to reduce the cost of test. An integrated solution that can cover the whole product lifecycle, from design to manufacturing to conformance test, can help minimize test equipment capital cost.


A Final Thought

Without a doubt, NB-IoT holds great promise for the rapidly expanding IIoT. For those who can overcome its challenges, opportunities abound. Does that mean a nifty way of keeping track of my mail is around the corner? Perhaps. Telia, a Swedish mobile operator, recently teamed up with the Finnish postal service Posti to develop smart mailboxes for just that purpose. And, Borgs Technologies now offers an NB-IoT tracker for pets. Maybe a tracker for my catalytic converters will be next? In the meantime, I'll be sure to set my car alarm and keep my driveway lights on!


If you are interested in finding out how Keysight’s solutions can help address your NB-IoT device and system challenges, check out the following links:

·       For battery drain analysis, check out the N6705C DC Power Analyzer and N6781A 2-Quadrant Source/Measure Unit

·       To monitor current drawn at sub-circuits with much higher bandwidth and dynamic range for the most demanding applications, check out the CX3300 Device Current Waveform Analyzer

·       To validate coverage with field measurement, check out Nemo Outdoor and Nemo Analyze

·       To perform parallel testing of multiple devices under test to get the maximum throughput for the production line, check out the EXM Wireless Test Set

internetofthings iot

industrialiot iiot iotdevicetest cellulariot nb-iot narrowbandiot