All Places > Keysight Blogs > Next Generation Wireless Communications > Blog > 2018 > April

The objective of calibration is to remove the largest contributor to measurement uncertainty: systematic errors. As you start working in mmWave frequencies, the objective is unchanged, but the actual process for achieving the calibration is quite different.

 

The mmWave frequency band from 30-300 GHz is enabling technologies such as 5G and radar. But as we move into these higher frequencies, wavelengths become smaller and margins for error become tighter. The opportunity at mmWave frequencies is substantial, but you can't forget to account for the unique measurement challenges that come with this frequency band. Properly calibrating your measurement setup is critical if you want accurate and repeatable measurements.

 

Figure 1: Network analyzer with a test set controller and frequency extenders

 

Your Measurement is Only as Good as Your Calibration

If you regularly work with a VNA, you're probably familiar with the necessity of calibration. VNAs are designed to be incredibly precise measurement tools, but without proper calibration, you're leaving that precision on the table. To maximize the precision of your VNA, you need to calibrate it using a mathematical technique called vector error correction. This type of calibration accounts for measurement errors in the VNA itself, plus all the test cables, adapters, fixtures, and probes hooked up between your analyzer and the DUT. But calibration at mmWave isn't this simple.
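As a rough illustration of what vector error correction does, the one-port case can be sketched with the classic three-term error model (directivity e00, source match e11, and reflection tracking e10·e01). The solver below and its made-up error terms are illustrative assumptions, not any instrument's actual algorithm:

```python
import numpy as np

def measure(gamma_a, e00, e11, delta):
    """Forward model: what an uncorrected VNA reports for a true gamma_a."""
    return e00 + delta * gamma_a / (1 - e11 * gamma_a)

def solve_error_terms(standards, readings):
    """Solve e00, e11, delta from three standards with known gamma_a.

    Linearization: gamma_m = e00 + (gamma_a*gamma_m)*e11 + gamma_a*(delta - e00*e11)
    """
    A = np.array([[1.0, ga * gm, ga] for ga, gm in zip(standards, readings)],
                 dtype=complex)
    b = np.array(readings, dtype=complex)
    e00, e11, x3 = np.linalg.solve(A, b)
    return e00, e11, x3 + e00 * e11  # delta = x3 + e00*e11

def correct(gamma_m, e00, e11, delta):
    """Apply vector error correction to a raw reflection reading."""
    return (gamma_m - e00) / (delta + e11 * (gamma_m - e00))

# Simulate open (+1), short (-1), load (0) seen through an imperfect test set.
e00_true, e11_true, delta_true = 0.05 + 0.02j, 0.10 - 0.03j, 0.9 + 0.1j
standards = [1.0, -1.0, 0.0]
readings = [measure(g, e00_true, e11_true, delta_true) for g in standards]

e00, e11, delta = solve_error_terms(standards, readings)
dut_true = 0.3 + 0.4j                  # a hypothetical DUT reflection
dut_raw = measure(dut_true, e00_true, e11_true, delta_true)
print(abs(correct(dut_raw, e00, e11, delta) - dut_true))  # ~0
```

Once the error terms are solved from the standards, every subsequent raw reading can be corrected; the whole scheme only works if the standards' true reflection coefficients are accurately known, which is exactly where mmWave gets hard.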

 

New Calibration Challenges

The main calibration challenge at mmWave frequencies is that you now need a broadband calibration over a very wide frequency range, often from 500 MHz up to 125 GHz or higher. Most calibration techniques aren't designed to achieve a cal over such a wide range. What you're really looking for is a load that offers this broadband frequency coverage. You can get reasonable accuracy from a well-designed broadband load, but a sliding load isn't a good fit for mmWave. So, what other options do you have?

 

The Old Model: Polynomial Calibration

Well, you might first consider using a polynomial model, a common choice at low frequencies. With this model, you'd need three variants: a low-band model, a high-band model, and a type of broadband sliding load. This usually works fine at frequencies below 30 GHz, but as you get into the mmWave range, you'll notice some issues.

 

Figure 2 shows a short measured with three different polynomial models: low band, high band, and broadband. The x-axis is frequency in GHz and the y-axis is the error ratio (so lower numbers are better). The red trace uses a low-band model, one optimized for low-band performance. It has a good load but potentially limited shorts. Around 40 GHz, this model breaks down and the error starts to expand.

 

The blue trace uses shorts without any low-band load. In this case, the multiple shorts limit performance below 40 GHz.

 

However, if you combine them into a broadband model that takes advantage of the low-band load of the red trace and the high-band offset-short corrections of the blue trace, you get something like the green trace.

 

 

Figure 2: Low vs high vs broadband load models across a frequency range of 0-70 GHz

 

This demonstrates the new challenge of working at mmWave frequencies. It's no longer possible to find a single load that covers the full frequency range you are testing, so as you move into these broadband frequencies, you need to eliminate the broadband load and instead use multiple shorts to cover the range. But you can't simply combine multiple shorts either; a new solution is required.

 

The New Model: Database Calibration

So, we know we need multiple shorts to cover this broad frequency range. But how? You need a calibration kit that eliminates the need for a broadband load and instead implements multiple shorts to cover the entire frequency range you're working in, something like the Keysight calibration kit in Figure 3. This mechanical, coaxial calibration kit:

  • Has a low band load, four shorts, and an open,
  • Covers the low frequencies up to 50 GHz with the load, and
  • Uses the offset shorts to provide states on the Smith Chart that represent different impedance conditions.
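To see why offset shorts can stand in for a load at high frequencies but not at low ones, consider the ideal lossless offset short, whose reflection coefficient rotates around the Smith chart with frequency. The offset lengths below are made-up values for illustration, not the kit's actual specifications:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s (air-dielectric line assumed)

def offset_short_gamma(freq_hz, offset_m):
    """Ideal lossless offset short: gamma = -exp(-2j*beta*l), beta = 2*pi*f/c."""
    beta = 2 * np.pi * freq_hz / C
    return -np.exp(-2j * beta * offset_m)

# Hypothetical offsets -- real kits specify exact lengths per connector type.
offsets_mm = [0.0, 0.6, 1.2, 1.8]
for f_ghz in (1, 30, 60):
    phases = [np.angle(offset_short_gamma(f_ghz * 1e9, o * 1e-3), deg=True)
              for o in offsets_mm]
    print(f"{f_ghz:>3} GHz:", [f"{p:7.1f}" for p in phases])
```

At 1 GHz all four shorts cluster near 180° and are nearly indistinguishable, which is why a low-band load is still needed, while at 60 GHz their phases spread well apart around the chart, giving the distinct impedance states the calibration needs.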

Figure 3: Mechanical calibration kit

 

This calibration kit uses a database model, which is a good fit for mmWave testing. Instead of a polynomial, each standard is characterized by a specified dataset: known reflection data for the various components across the relevant frequency range, which can be plotted on a Smith chart.

 

For example, for a source match type measurement, if we’re measuring a high reflect device, we can ask “what represents a good short at this frequency?” We plot that out, and we use this as our database calibration model. You can do that for any type of measurement you are working with: plot out the ideal conditions and use that as a model.

This dataset then allows us to calibrate our system.
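A minimal sketch of the database idea, with made-up characterization data (real kits ship with densely sampled, traceable data per standard), is to interpolate each standard's characterized reflection onto the measurement frequency grid:

```python
import numpy as np

# Hypothetical characterized data for a short: frequency points and the
# standard's known complex reflection coefficient at each point.
char_freqs_ghz = np.array([10.0, 30.0, 50.0, 70.0])
char_gamma = np.array([-0.99 + 0.02j, -0.95 + 0.12j,
                       -0.88 + 0.25j, -0.80 + 0.35j])

def standard_gamma(freq_ghz):
    """Look up the standard's defined reflection on the measurement grid.

    Real and imaginary parts are interpolated separately (np.interp is real).
    """
    re = np.interp(freq_ghz, char_freqs_ghz, char_gamma.real)
    im = np.interp(freq_ghz, char_freqs_ghz, char_gamma.imag)
    return re + 1j * im

grid = np.array([20.0, 40.0, 60.0])  # the VNA's measurement frequencies
print(standard_gamma(grid))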

 

The Keysight calibration kit in Figure 3 uses these techniques and allows us to effectively calibrate our system for mmWave testing. It's important to realize that calibration kits and methods that work at lower frequencies simply do not work at these broadband frequencies. You need to consider a new set of calibration tools that will optimize the accuracy of your mmWave test setup.

 

Conclusion

Tight margins at mmWave frequencies require new, more precise calibration techniques. You need to be able to make accurate, repeatable measurements or else risk design failures and missed deadlines.

 

Proper calibration across the broad frequency range is the first step to a reliable test setup. Consider re-evaluating your test setup, calibration tools, and techniques. What changes do you need to make for working in the mmWave frequency range? How can you ensure you're getting the most reliable measurements and avoiding costly test errors?

 

Get more mmWave resources!

The 5G vision set forth by IMT-2020 is an amazing thing. It opens up so many possibilities for consumers, the environment, health and safety, and humanity. Virtually every industry will be transformed, and new ones will emerge. The three defined use cases are foundational to setting the 5G specifications: enhanced mobile broadband (eMBB) to support extreme data rates, ultra-reliable low latency communications (URLLC) for near-instant communications, and massive machine type communications (mMTC) for massive interconnects.

 

The 3GPP is developing standards for radio interface technologies to be submitted to the ITU for compliance with the IMT-2020 (International Mobile Telecommunications 2020) requirements. While these standards are in some ways an extension of existing 4G standards, they really are radically different from what's in use today. And if the standards are radically different, then it's not a stretch that the tests required to verify 5G product designs are also radically different.

 

The initial 5G New Radio (NR) Release 15 was introduced in December 2017, and the full release is targeted for June 2018. Release 15 focuses on specifying standards for the eMBB and URLLC use cases; standards for mMTC will be addressed in future releases. New releases of the standard will continue to roll out over many years. No previous standard has attempted to cover such a broad range of bandwidths, data rates, coverage, and energy efficiency.

 

Some key differences in 5G NR release 15 include:

 

  • Flexible numerology enables scalability – Where subcarrier spacing was fixed at 15 kHz in 4G LTE, it now scales to higher spacings. Wider-spaced subcarriers shorten the symbol period, which enables higher data rates and lower latency for URLLC use cases. In contrast, narrower subcarrier spacing and the resulting longer symbol periods allow for lower data rates and better energy efficiency for IoT, the mMTC use case.

 

  • mmWave frequencies open up more bandwidth – LTE supports up to six channel bandwidths, from 1.4 MHz to 20 MHz. These can be combined through carrier aggregation for a maximum bandwidth of 100 MHz. The initial 5G NR Release 15 specifies frequencies up to 52.6 GHz with aggregated channel bandwidths up to 800 MHz. Initial target frequency bands are 28 GHz and 39 GHz. To put this in perspective, these mmWave bands alone can encompass the entire spectrum of the current 3G and 4G mobile communications systems. This additional spectrum is essential to enabling eMBB's extreme data rates.

 

  • Massive MIMO to increase capacity – MIMO in LTE uses multiple antennas to send multiple, independent streams of data through the same frequency and time space. MIMO has been shown to increase data rates by making better utilization of the spectrum. With Massive MIMO, the number of antenna elements on the base station is considerably greater than the number on the device. Implementing multiple antennas on base stations and devices will be essential to increasing capacity and achieving the throughput envisioned in eMBB use cases. 

  

New Test Challenges

These new standards will introduce new challenges in test. 

 

Flexible numerology complicates the development of 5G NR waveforms and introduces many new use cases that need to be tested. It also introduces new levels of coexistence testing with 4G and potentially Wi-Fi.

 

mmWave frequencies with more bandwidth change all assumptions about conducted tests. Due to the higher frequencies and the use of highly integrated multi-antenna arrays, tests will now be performed over-the-air (OTA).

 

Massive MIMO increases the number of antennas, and subsequently the number of beams coming out of base stations and devices.   These beam patterns, whether at sub-6 GHz or mmWave, need to be characterized and validated in an OTA test environment.

 

Viewing a 256 QAM waveform with antenna pattern

 

Radically different? Absolutely. Test solutions must be flexible and scalable so that they cover the full range of use cases, frequencies, and bandwidths, as well as OTA validation. The test solutions must also evolve as the standards evolve. Check out this article series by Moray Rumney to understand more about how test will change as we move into the next stage of 5G development: The Problems of Testing 5G Part 1.

Late last year, technical thought leaders from academia and commercial organizations assembled in San Francisco to exchange insights on 5G NR, phased array antennas, and Over-the-Air (OTA) testing. Roger Nichols, Keysight’s 5G Program Manager, hosted the 5G Tech Connect event, which was timed to align with the publication of the first 3GPP 5G NR specifications. I was there to capture his insights on 5G along with other thought leaders’ reflections on technology challenges and I’ve collected their remarks into soundbites for you.

Roger, with his 33 years of engineering and management experience in wireless test and measurement,  talked about the many challenges the industry is facing as we move towards the 5G NR standard. He made the point that the proliferation of frequency bands will make it necessary for devices to work across a wide range of fragmented bands leading to more complex designs and possible interaction issues. Also, the elimination of cables and connectors is leading to the need to measure and conduct testing Over-the-Air, which can be both costly and complex.  

 

Another well-known and experienced industry expert, Moray Rumney, who leads Keysight's strategic presence in standards organizations such as 3GPP, expanded on the implications of 5G NR. He remarked that mmWave has much to offer in terms of wider bandwidths, but will also bring challenges related to beamforming, where narrow signals propagate in three-dimensional space. He claimed that such environments will require 3-D spatial test systems and simulation tools to enable equipment manufacturers to validate the performance of their designs. Moray developed these ideas further in his presentation 'PRACH me if you can', where he cheekily claimed that "there is no meaning to the power of a signal if you are looking in the wrong direction."

Professor Mischa Dohler of King's College London, one of the many industry and academic thought leaders present at the event, talked about some of the challenges 5G technology will bring, such as delay. Since human response time is around 10 ms, round-trip delay (latency) must be less than that. One way to reduce the delay is by adopting what he calls "model-mediated AI," which is already used by the gaming industry to predict hundreds of milliseconds ahead in time to create a real-time experience. Mischa also said that the expected explosion of traffic will inevitably lead to the need for much more bandwidth to allow networks to meet expectations on both data rates and latency.

 

I had the chance to sit down with Mischa to talk about some of the ideas he shared in his presentation. In this mini-interview, he summarized that since 5G will deliver data rates of at least 10 Gbps, enable eNodeBs to support 50,000 UEs, and achieve latencies of less than 10 ms, the technology will be good enough for a wide range of exciting industry applications. He also mentioned that virtualization is driven by the need for flexibility, which will require a software-based architecture.

 

Another industry thought leader, Maryam Rofougaran – Co-founder and Co-CEO of Movandi Corporation – explained that the move to mmWave frequencies implies new designs and innovations to create efficient integrated systems. Movandi uses Keysight’s solutions for modulation characterization and beamforming testing to verify their system.

To address some of these challenges, Keysight introduced at the event the world’s first 5G NR network emulator. Lucas Hansen, Senior Director, 5G & Chipset Business Segment at Keysight Technologies, explains how it enables users to prototype and develop 5G NR chipsets and devices.

 

Watch more videos from 5G Tech Connect on Keysight’s YouTube channel.

The world's older population is growing dramatically; 8.5% of people worldwide (617 million) are aged 65 and over, and this number is projected to jump to nearly 17% of the world's population (1.6 billion) by 2050. In addition, global life expectancy at birth is projected to increase by almost eight years, climbing from 68.6 years in 2015 to 76.2 years in 2050 [1]. Chronic diseases and conditions are on the rise, which will push current healthcare systems beyond their limits and capabilities. Societies have rising expectations for robust healthcare services, and healthcare facilities face many new and serious challenges balancing those expectations against the available resources. Luckily, continuous technological developments are helping to improve some medical processes, ease the workflow of healthcare practitioners, and, ultimately, improve the situation in an overloaded hospital.

 

Digital transformation of healthcare
Internet of Things (IoT), or more specifically the Internet of Medical Things (IoMT), is revolutionizing the healthcare industry. The number of connected medical devices is expected to increase from 10 billion to 50 billion over the next decade [2]. Cisco estimates that by 2021, the total amount of data created by IoT devices will reach 847 Zettabytes (ZB) per year [3]. At some point, IoT will become the biggest source of data on Earth. Imagine the possibilities if human-oriented data, like medical history, allergies to medication, laboratory test results, and personal statistics, were digitized as part of electronic health initiatives. Healthcare practitioners would be able to interpret and leverage this plethora of big data from connected systems to make informed patient care decisions and to understand and predict current and future health trends. How? With machine learning (ML).

 

Machine learning helping to propel healthcare IoT
ML is an approach to achieving artificial intelligence (AI): algorithms analyze data, learn from it, identify patterns, and then make decisions with minimal human intervention. Healthcare providers and device makers are integrating AI and IoT to create advanced medical applications and devices that provide person-centric care for individuals, from initial diagnosis to ongoing treatment options, while solving a variety of problems for patients, hospitals, and the healthcare industry. At the same time, these AI-enabled medical IoT devices will make healthcare treatments proactive rather than reactive.

 

An autonomous "nurse" is one example of an AI-enabled medical IoT application. It will be able to answer patients' questions because it is connected via the internet to a large range of data from previous health records. By integrating facial recognition, the robotic "nurse" will be able to recognize a patient's mood and adapt its behavior accordingly. It will also be able to remind patients to take medication and to keep their doctors' appointments. Now, imagine if a hospital were to "hire" robotic "nurses" that can reason, make choices, learn, communicate, and move, and that are connected to the hospital's network and to each other. They would be able to help the human nurses with tasks like administering medication, maintaining records, communicating with doctors, and educating patients on disease management, to name a few. This would be a welcome relief in situations where nurses are pushed to handle more than they are capable of.

 

Soon, ML will bring a set of bots to the healthcare industry, with billions of "dumb" machines transformed into smart machines. This change is going to transform the way patients are assessed and treated, and healthcare professionals will be able to provide a better quality of care that is tailored to each patient.

 

A bright future for telehealth
Another new development the healthcare industry is experiencing today is a general shift from in-office visits to remote health monitoring, or telehealth. Many patients agree that home is the best place for healthcare, since patients remain in their normal everyday environment. A survey of 3,000 patients conducted in 2016 found that 94-99 percent were very satisfied with telehealth, while one-third of the respondents preferred the telehealth experience to an in-office doctor visit [4].

 

With IoT, remote health monitoring is feasible, especially for patients living in remote areas. Another reason remote health monitoring is gaining popularity is the vast variety of biosensors and medical wearables readily available in the market today. So, what's in it for healthcare practitioners? The data coming from their remote patients helps them detect patterns and gain new insights into health trends. That's what IoT, big data, and analytics software can help achieve.

 

Conclusion

The healthcare landscape has changed and is still changing. Patients are starting to embrace the change, using medical IoT devices to manage their health requirements. Healthcare providers are starting to incorporate connected healthcare to drive excellence, stay competitive, and improve treatment outcomes to give patients a better healthcare experience, while medical device makers are developing solutions that are more accurate, intelligent, and personalized. Ultimately, leveraging these technologies to improve treatment outcomes, the management of drugs and diseases, and the patient experience will lead to a more efficient hospital.

 

References:

1. https://www.census.gov/content/dam/Census/library/publications/2016/demo/p95-16-1.pdf 

2. https://www.medicaldesignandoutsourcing.com/connected-medical-devices-5-things-need-know/

3. https://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.html

4. https://www.fshealth.com/blog/29-statistics-about-telemedicine-healthcare

 

The use of connected medical devices and equipment in hospitals has increased with the expansion of wireless technologies and advances in medical device design. As a result, the number of connected devices in a large hospital or healthcare facility can reach 85,000 at any given time [1]. As the density of connected devices increases, so does the density of the electromagnetic environment, and there are concerns about the impact from sources producing radio frequency interference (RFI). Advances in medical device technology and in many general consumer products are significantly affecting the efforts aimed at maintaining the required operation of, and interoperability between, the products in a hospital or healthcare facility.

 

Some RFI sources in a hospital environment are natural or ambient electromagnetic energy, such as lightning, television transmissions, or AM, FM, and satellite radio. Others are medical equipment and consumer products: ventilators, cardiac defibrillators, infusion pumps, motorized wheelchairs, MRI systems, cellphones, tablets, and laptops. RFI can cause many serious problems, some of which can lead to a patient's death. There have been many reported cases: sleep-apnea monitors failed to sound an alarm when babies stopped breathing, power wheelchairs started rolling after their brakes released at certain field strengths, and anesthetic gas monitors stopped working under interference from electrosurgery units [2].

 

Hospital administrative staff, product makers, patients, and the public all share a huge responsibility and challenge in keeping pace with the efforts needed to maintain the required level of electromagnetic compatibility (EMC) in a hospital environment. Here are some examples of how everyone can play their part.

 

Hospital administrative control:

1. RF shielding

There are two reasons why certain medical equipment needs to be shielded. Take an MRI machine as an example. The MRI machine is usually placed in a shielded room, first to prevent extraneous electromagnetic radiation from distorting the MR signal, and second to prevent electromagnetic radiation generated by the MR scanner from interfering with nearby medical devices. RF shielding must encircle the entire room: walls, ceiling, and floor. Hospital management could also consider less expensive materials that increase shielding capacity, such as electrically conductive paint, wallpaper, and cloth, for less critical areas of the hospital.

 

2. Restrict RF sources from certain areas in the hospital

Controlling the number of devices that could contribute to radio frequency interference (RFI) in a specific area helps lower the risk of interfering with the operation of clinical and other electronic equipment. Hospital management should take measures to control the electromagnetic environment by restricting cellphones and other potential RF sources from sensitive areas of the hospital, such as the intensive care unit, the neonatal intensive care unit, and the operating theater, where critical care medical equipment is in use.

 

Public and patients:

3. Implementation of control techniques

There has been an increase in the use of electronically controlled medical devices outside the clinical environment; they are often used at home, attached to or implanted in a patient. RFI problems also affect these patients, especially those with cardiac pacemaker implants. Though the chances that EMI from a cellphone could produce a life-threatening situation are small, certain steps are advisable to minimize any risk. Government bodies have issued cautions and recommendations to wearers [3].

  • Using a cellphone very close to the pacemaker may cause the pacemaker to malfunction.
  • It is advisable to avoid carrying a cellphone in the breast pocket directly over the pacemaker because an incoming call will switch the phone to its transmission mode and may cause interference.
  • When using a cellphone, it is advisable to hold it to the ear farthest from the pacemaker.

 

Medical device makers/developers:

4. RF immunity of medical devices

For many years now, military, aircraft, and automotive electronic systems have been required to meet strict RFI immunity requirements. The technology has been developed and can easily be deployed for medical devices, and most techniques are not costly if they are incorporated early in the electronics system design. The international standard for RF immunity of medical devices, IEC 60601-1-2, requires a minimum immunity level of 3 V/m in the 26-1000 MHz frequency range [4]. Medical device developers and makers need to incorporate RF immunity techniques like shielding, grounding, and filtering to ensure their devices conform to the standard and are robust against RFI.
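To put the 3 V/m immunity floor in perspective, the common free-space far-field estimate E ≈ sqrt(30·P·G)/d gives a rough field strength at distance d from a transmitter; the 1 W power and unity gain below are illustrative assumptions, not measured cellphone values:

```python
import math

def field_strength_v_per_m(power_w, gain_linear, distance_m):
    """Free-space far-field estimate: E = sqrt(30 * P * G) / d."""
    return math.sqrt(30.0 * power_w * gain_linear) / distance_m

# Hypothetical case: a cellphone radiating 1 W through a unity-gain antenna.
for d in (0.1, 0.3, 1.0, 3.0):
    print(f"{d:4.1f} m: {field_strength_v_per_m(1.0, 1.0, d):6.2f} V/m")
```

Under these idealized assumptions the field only drops below 3 V/m beyond roughly 1.8 m, which is why a device built just to the minimum immunity level can still be upset by a phone held nearby.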

 

5. Incorporate RFI immunity into product design

Modern medical devices are getting smaller, combining low-power integrated circuitry that is more sensitive and susceptible to RFI. A medical device maker can incorporate RFI immunity into the product design, making sure the device is robust and able to work well above the defined minimum immunity level. One way to do this is to test the product in a real-world setting to ensure it can withstand high RF field strengths.

 

As you can see, there are many ways to deal with electromagnetic interference in a healthcare facility. However, the field strength to which a medical device may be exposed depends on many conditions, and is beyond what the device makers or developers can control. It is up to hospital administrative staff to impose and enforce guidelines that maintain a safe environment for hospital patients.

 

Here is a related webinar that talks about RF coexistence challenges, what RF coexistence testing is, and how it is performed. And if you are looking for solutions to combat your design challenges, please go to www.keysight.com/find/IoT for more information.

 

 

References:

  1. https://hitinfrastructure.com/news/edge-computing-uses-iot-devices-for-fast-health-it-analytics
  2. http://www.nlc-bnc.ca/eppp-archive/100/201/300/cdn_medical_association/cmaj/vol-154/0373e.htm
  3. https://www.fda.gov/Radiation-EmittingProducts/RadiationEmittingProductsandProcedures/HomeBusinessandEntertainment/CellPhones/ucm116311.htm
  4. https://www.ncbi.nlm.nih.gov/pubmed/9604711

5G makes so many promises that it's difficult to tell how they can all be achieved: extreme data download speeds, self-driving automobiles, IoT devices operating for many years. One of the key enablers is the flexible numerology recently defined in 3GPP Release 15. I find this part of 5G fascinating and think it will be key to supporting a wide range of frequencies and scheduling for many diverse services.

 

The top five key features of 5G flexible numerology are:

 

1. Subcarrier spacing is no longer fixed to 15 kHz. Instead, the subcarrier spacing scales as 2^µ × 15 kHz to cover different services, QoS and latency requirements, and frequency ranges. 15, 30, and 60 kHz subcarrier spacings are used for the lower frequency bands, and 60, 120, and 240 kHz subcarrier spacings are used for the higher frequency bands.

 

2. Number of slots increases as numerology (µ) increases. As in LTE, each frame is 10 ms and each subframe is 1 ms, with ten subframes per frame. With a normal cyclic prefix, each slot has 14 symbols. As numerology increases, the number of slots in a subframe increases, raising the number of symbols sent in a given time. As shown in Figure 1, more slots per subframe means a shorter slot duration.

 

Slot length scales with the subcarrier spacing: slot length = 1 ms / 2^µ

 

Numerology (µ) | Subcarrier spacing | # slots per subframe | Slot length
0              | 15 kHz             | 1                    | 1 ms / 2^0 = 1 ms
1              | 30 kHz             | 2                    | 1 ms / 2^1 = 500 µs
2              | 60 kHz             | 4                    | 1 ms / 2^2 = 250 µs
3              | 120 kHz            | 8                    | 1 ms / 2^3 = 125 µs

 

 

 

Figure 1. Slots within a subframe and the associated slot duration time.
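The relations above (spacing = 2^µ × 15 kHz, slots per subframe = 2^µ, slot length = 1 ms / 2^µ) can be tabulated in a few lines; a quick sketch:

```python
def numerology(mu):
    """5G NR scalable numerology (normal CP, 14 OFDM symbols per slot)."""
    scs_khz = 15 * 2 ** mu            # subcarrier spacing = 2**mu * 15 kHz
    slots_per_subframe = 2 ** mu      # subframe is fixed at 1 ms
    slot_len_us = 1000 / 2 ** mu      # slot length = 1 ms / 2**mu
    symbol_us = slot_len_us / 14      # symbol period shrinks with wider spacing
    return scs_khz, slots_per_subframe, slot_len_us, symbol_us

for mu in range(4):
    scs, slots, slot_us, sym_us = numerology(mu)
    print(f"mu={mu}: {scs:>3} kHz, {slots} slot(s)/subframe, "
          f"{slot_us:6.1f} us/slot, {sym_us:5.2f} us/symbol")
```

The symbol-duration column makes the scalability trade-off concrete: wider spacing shortens every symbol, which is what buys the lower latency.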

 

3. Mini-slots for low latency applications.  A standard slot has 14 OFDM symbols.  In contrast, mini-slots can contain 7, 4, or 2 OFDM symbols.  Mini-slots can also start immediately without needing to wait for slot boundaries, enabling quick delivery of low-latency payloads. Mini-slots are not only useful for low-latency applications, but they also play an important role in LTE-NR coexistence and beamforming.
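The latency benefit of starting mid-slot can be sketched with a simple worst-case model: slot-aligned scheduling may wait up to a full 14-symbol slot for the next boundary, while a mini-slot waits at most one symbol. This is an illustrative bound, not a full scheduler model:

```python
def symbol_duration_us(mu):
    """OFDM symbol duration (normal CP): 1 ms / 2**mu per slot, 14 symbols."""
    return 1000.0 / 2 ** mu / 14

def worst_case_delivery_us(mu, payload_symbols, slot_aligned):
    """Worst-case wait for the next start boundary plus the payload's air time.

    Slot-aligned scheduling waits up to one full 14-symbol slot; a mini-slot
    can start at the next symbol boundary, so it waits at most one symbol.
    """
    sym = symbol_duration_us(mu)
    wait = 14 * sym if slot_aligned else sym
    return wait + payload_symbols * sym

mu = 3  # 120 kHz subcarrier spacing
print(f"slot-aligned : {worst_case_delivery_us(mu, 2, True):6.2f} us")
print(f"mini-slot    : {worst_case_delivery_us(mu, 2, False):6.2f} us")
```

For a 2-symbol payload at µ=3, the mini-slot's worst case is roughly a fifth of the slot-aligned one, which is the point of non-slot-based scheduling for URLLC.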

 

4. Slots can be DL, UL, or flexible. NR slot structure allows for dynamic assignment of the link direction in each OFDM symbol within the slot. With this, the network can dynamically balance UL and DL traffic. This can be used to optimize traffic for different service types.

 

Figure 2. Link direction can be dynamically assigned.

 

 

5. Multiplexing of different numerologies. Different numerologies can be transmitted on the same carrier frequency using a new feature called bandwidth parts, which are multiplexed in the frequency domain. While this provides the flexibility for diverse services to share a carrier, mixing numerologies can cause interference between the subcarriers of the different services, introducing new challenges.

 

Why should you care? I see it like a multi-lane superhighway with lots of control. The lanes represent the different types of services offered in 5G. You have the fast lanes that are very speedy and can handle a lot of cars. You have the slow lanes where traffic may move at a turtle's pace. And now, throw in a motorbike that can weave in and out of lanes at any time. Suddenly you need to be concerned about traffic and possible collisions.

 

Flexible numerology in 5G is much different from the fixed numerology found in 4G. It enables a lot of flexibility, but it also introduces new challenges in how waveforms are built and managed. Now you need to consider subcarrier spacing, UL/DL configurations, and bandwidth parts. The number of test cases explodes, and device designers will need to create and analyze waveforms in the frequency, time, and modulation domains, as well as verify the device's performance on the network with many different numerologies.

 

If you are interested in learning more about 5G Numerology, I’d highly recommend watching the webinar: Understanding the 5G NR Physical Layer by Javier Campos. It provides lots of information on the new standards and goes into details on 5G numerology, waveforms, and new access procedures.

Within the last year I have been a victim, twice. The first time, a thief stole two catalytic converters off my car parked in my driveway. The second time, a thief stole a package from my mailbox. Okay, in the grand scheme of things, these probably aren’t the worst crimes that could have occurred; but they still got me thinking. Barring installing an expensive security system, welding rebar over my new catalytic converters, or picking up my mail directly from the post office, was there anything I could do to potentially prevent these crimes from happening again in the future? As it turns out, there may be, and it will likely come in the form of a low-power wide area network (LPWAN) technology known as Narrowband-IoT (NB-IoT).

 

NB-IoT is one of the Cellular IoT (CIoT) technologies defined under the 3GPP umbrella to enable IoT connectivity using licensed frequencies and to coexist with legacy cellular broadband technologies like LTE, UMTS, and GSM. By reusing the cellular infrastructure, it enables devices to connect directly to operator networks, providing access to improved nationwide coverage with value-added services like mobility, roaming, security, and authentication. The target for NB-IoT is to provide sufficient coverage for smart meters and other IoT appliances typically located in basements and similarly deep in-building locations.

 

Figure: Smart meter

 

That makes NB-IoT suitable for commercial applications like home lighting, security control, and maybe even keeping tabs on my catalytic converters and mail. It also opens the door to new opportunities for industrial IoT (IIoT) applications like energy and utility management (e.g., a smart grid), asset tracking, and machine-to-machine communication. After all, NB-IoT can provide robust coverage and is scalable to very large numbers of devices—two hallmarks for the IIoT.

 

But, succeeding in the IIoT with NB-IoT devices and systems will require critical attention to three key challenges.

 

Battery Life. In NB-IoT, battery life is expected to reach 10+ years; however, to avoid costly maintenance, the battery should last for the entire lifecycle of the device. Unfortunately, battery life is heavily impacted by coverage.

 

In low coverage, more repetitions are needed to transfer data. The more repetitions, the longer the duty cycles of the IoT modems and the higher the power consumption. Excess repetitions caused by network misconfiguration or implementation have a similar impact. In a deep in-building location, there may be a difference of tens of dBs in coverage between operators, and that can take years off the battery life of an IoT device deployed in that location.

 

To ensure a long battery life, manufacturers will need to characterize the current consumption of the device under active, idle, standby, and sleep modes. Device vendors will need to recreate operating conditions to better understand how much current is drawn in each scenario (e.g., a remote software update versus a device that is unable to connect to a server).
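To see why coverage and repetitions dominate the battery budget, consider a back-of-the-envelope estimate like the sketch below. All mode currents, durations, the repetition factor, and the battery capacity are illustrative assumptions, not values from any datasheet or standard; the point is only the shape of the calculation.

```python
# Hypothetical battery-life estimate for an NB-IoT sensor that reports
# once per day. Mode currents/durations and battery capacity are
# assumed values for illustration only.

MODES = {                  # mode: (current in mA, seconds per daily cycle)
    "transmit": (120.0, 2.0),     # active uplink, including repetitions
    "receive":  (40.0, 1.0),      # downlink/paging monitoring
    "idle":     (3.0, 27.0),      # connected but inactive
    "sleep":    (0.005, 86370.0), # PSM deep sleep for the rest of the day
}

BATTERY_MAH = 1000.0       # assumed primary-cell capacity

def battery_life_years(modes, battery_mah, repetition_factor=1.0):
    """Estimate battery life in years; repetition_factor scales transmit
    time to model poor coverage (more repetitions per message)."""
    total_mas = 0.0        # milliamp-seconds consumed per cycle
    cycle_s = 0.0
    for mode, (ma, seconds) in modes.items():
        if mode == "transmit":
            seconds *= repetition_factor
        total_mas += ma * seconds
        cycle_s += seconds
    avg_ma = total_mas / cycle_s
    hours = battery_mah / avg_ma
    return hours / (24 * 365)

good = battery_life_years(MODES, BATTERY_MAH)
poor = battery_life_years(MODES, BATTERY_MAH, repetition_factor=8.0)
print(f"good coverage: {good:.1f} years, poor coverage: {poor:.1f} years")
```

Even with these made-up numbers, an 8x increase in transmit time (from repetitions in poor coverage) cuts the estimated battery life by roughly a factor of three, which is exactly why characterizing current draw per mode matters.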

 

Coverage. NB-IoT is expected to enable a coverage gain of up to 23 dB over regular LTE. The real gain may be less, depending on the deployment method and configuration. The challenge with NB-IoT coverage is that it is heavily dependent on the field performance of the commercial network equipment and IoT devices, interoperability between the equipment, and the network design and configuration.

 

To ensure extended coverage, manufacturers will need to simulate different RF environments (e.g., remote locations, basement installations, hidden installations, behind concrete walls, and industrial environments). And, they will need to perform transmitter and receiver characterization to understand device performance under these different RF conditions. Once the IoT network is live, the service provider will need to perform field measurements to ensure the simulated tests match real-life conditions.
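To get an intuition for what a link-budget gain buys you, a simple log-distance path-loss model is enough. The sketch below assumes a 20 dB net gain and illustrative path-loss exponents; the actual gain and environment-specific exponents would have to come from the kind of field measurements described above.

```python
# Rough illustration of how extra link budget translates into range,
# assuming log-distance path loss: PL grows as 10*n*log10(d).
# The 20 dB figure and the exponents are illustrative assumptions.

def range_factor(extra_db, path_loss_exponent):
    """Multiplier on cell range gained from extra_db of link budget."""
    return 10 ** (extra_db / (10 * path_loss_exponent))

extra_db = 20.0                            # assumed net NB-IoT gain over LTE
free_space = range_factor(extra_db, 2.0)   # ideal open-area exponent
urban = range_factor(extra_db, 3.5)        # assumed cluttered-urban exponent

print(f"free space: {free_space:.1f}x range, urban: {urban:.1f}x range")
```

In other words, the same decibels of link budget stretch much further in open terrain than through urban clutter or concrete walls, which is why simulating the target RF environment matters more than the headline gain figure.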

 

Low Cost. The NB-IoT module target price is below $5.00. Initially the cost is expected to be comparable to that of GSM/GPRS. However, the underlying NB-IoT technology is much simpler, and its cost will likely decrease as demand increases. An unreliable NB-IoT device can effectively add to the module price through its associated service and recall costs, as can the cost of test during the device’s development and production.

 

To achieve a low NB-IoT device cost, manufacturers can use lower priced components or simplify the hardware design, but the performance of the device must be properly characterized to ensure these cost-cutting measures don’t compromise device reliability. Manufacturers must also carefully select the right test equipment to reduce the cost of test. An integrated solution that can cover the whole product lifecycle, from design to manufacturing to conformance test, can help minimize test equipment capital cost.

 

A Final Thought

Without a doubt, NB-IoT holds great promise for the rapidly expanding IIoT. For those who can overcome its challenges, opportunities abound. Does that mean a nifty way of keeping track of my mail is around the corner? Perhaps. Telia, a Swedish mobile operator, recently teamed up with the Finnish postal service Posti to develop smart mailboxes for just that purpose. And, Borgs Technologies now offers an NB-IoT tracker for pets. Maybe a tracker for my catalytic converters will be next? In the meantime, I’ll be sure to set my car alarm and keep my driveway lights on!

 

If you are interested in finding out how Keysight’s solutions can help address your NB-IoT device and system challenges, check out the following links:

·       For battery drain analysis, check out the N6705C DC Power Analyzer and N6781A 2-Quadrant Source/Measure Unit

·       To monitor current drawn at sub-circuits with much higher bandwidth and dynamic range for the most demanding applications, check out the CX3300 Device Current Waveform Analyzer

·       To validate coverage with field measurement, check out Nemo Outdoor and Nemo Analyze

·       To perform parallel testing of multiple devices under test to get the maximum throughput for the production line, check out the EXM Wireless Test Set


Here’s a question for you. Is your connected home safe? I’m not talking about whether you remembered to lock your doors and turn on your security system. What I’m more interested in is if the wireless data being transmitted from one connected home device to another, and to the internet, is secure. It’s a potentially scary proposition and one you will likely have to confront in the very near future, if not already.

 

Gartner estimates that the number of connected things will keep growing rapidly year over year; the latest forecast is up 31 percent from the year prior. By 2020, the number of connected things will top 21 billion. What that means is that sooner or later, you will have a growing number of consumer products and smart home devices connected in your home. From televisions, refrigerators, audio speakers, and home alarm systems to door locks, these devices will soon be able to talk to one another. Your home alarm system will be activated as soon as your door is locked. And, when the room temperature in your living room reaches a certain limit, it will automatically turn on the air conditioning. One day soon, this type of activity will become the norm in every household.

 

And that brings me back to my original point. Have you ever wondered how well these devices are talking to one another, or how safe your home will be when these devices start to send information wirelessly? These devices aren’t necessarily designed to perform across a wide range of real-world conditions, and when they aren’t, their performance can fall off. That performance might be affected by the device’s distance from the nearest wireless access point, the density of wireless signals in the same area, interference from other radio-frequency noise sources, and of course, data interoperability. To make matters worse, the task of securing these smart devices is like trying to protect endangered wild species in a sub-Saharan desert or the Amazon forest.

 

Making sure new devices establish robust, reliable, and secure connections across the intended range of environments cannot be left to chance; it must be guaranteed. To do that, product makers, consumers, operators and cloud providers alike will need to implement different strategies, coupled with the right test tools, to ensure connected homes stay both connected and safe. 

 

Product Makers

With many connected things packed into a living area, product makers will need to test device performance in the presence of several wireless access points, and to ensure robust enterprise-grade security. They should also check for performance in the presence of other wireless emitters, such as microwave ovens operating in the same frequency band.

 

Consumers

For consumers, the best strategy is to raise public awareness about the dangers of buying hardware that connects insecurely to the internet. Without the proper protections in place, hackers could easily take over a home’s automation and collect sensitive personal information without the owner’s consent. Perhaps what’s needed to protect or help consumers is a public safety warning on every IoT device, much like the safety warnings found on alcohol bottles.

 

Operators and Cloud providers

Most IoT devices and applications deployed in a cellular or cloud provider environment require low latency. Because of that, operators tend to move functionality and content to the edge of networks (edge computing) so they can respond to IoT devices almost instantaneously. However, sensitive data should be moved away from the edge to the cloud and secured with encryption. Cloud providers should do their part in data security by providing their customers services such as local encryption and digital certificates to authenticate third-party applications trying to communicate with the cloud service.
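One small piece of that security story is making sure the cloud can tell an authentic device payload from a tampered one. The sketch below shows message authentication with an HMAC using only the Python standard library; the shared key and payload fields are hypothetical, and a real deployment would additionally encrypt the payload and use TLS with certificates rather than a bare pre-shared key.

```python
import hashlib
import hmac
import json

# Minimal sketch: authenticate an IoT payload before it leaves the edge
# for the cloud. Key and field names are hypothetical examples; this
# demonstrates integrity/authenticity only, not confidentiality.

SHARED_KEY = b"demo-key-not-for-production"

def sign(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a JSON-serializable payload."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify(message: dict) -> bool:
    """Recompute the tag; compare in constant time to resist timing attacks."""
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign({"sensor": "thermostat-01", "temp_c": 22.5})
authentic = verify(msg)                          # untouched message passes
msg["body"] = msg["body"].replace("22.5", "99.9")
tampered_ok = verify(msg)                        # altered message fails
print(authentic, tampered_ok)
```

Even this toy example shows why the provider-side services mentioned above matter: without some form of authentication, an attacker who can intercept traffic can silently rewrite what your devices report.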

 

Granted, there is not much we can do to stop cyber criminals from trying to hack the smart devices in our connected homes. But, we can work together to make that task harder, if not impossible. Test and measurement vendors can play a critical role in this process by providing the solutions needed to perform end-to-end testing on devices before they hit the market. At least that way, consumers can be more assured that the devices themselves are secure. And, if consumers do their part by implementing their own security strategies, such as using strong passwords that are routinely changed, we can together ensure our connected homes are indeed safer and more secure.

 

For more information on solutions for ensuring smart device security, check out the following links: IoT Testing, Monitoring and Validation and BreakingPoint, an all-in-one applications and network security testing platform.