
The NB-IoT LPWAN radio technology is a great solution for many IoT applications because it leverages long-proven cellular radio technology and infrastructure that is supported by numerous cellular providers worldwide. Furthermore, NB-IoT has good security, and it is an LTE specification from 3GPP, which gives it substantial technical momentum for evolution today and in the future. Learn more about NB-IoT design and test challenges.

 

However, NB-IoT is not the right solution for every IoT application, and you should consider the information below to determine whether other LPWAN technologies might better suit your application context and objectives. Note, however, that a technology that has an advantage over NB-IoT in one area may have a significant disadvantage in another. Selecting an IoT radio technology involves a complex set of tradeoffs.

 

Coverage area: One reason that NB-IoT may not be the best choice is that your application is in an area with no or poor LTE cellular coverage; perhaps the local cellular technology is GSM or CDMA, which is incompatible with NB-IoT. One LPWAN alternative, long-range WiFi, has been proven to work over distances of more than 350 km in certain cases. To be fair, very long range WiFi is not common, but it is relatively straightforward to achieve distances over 20 km with inexpensive, readily available hardware.

 

Even if you are in an area with good LTE coverage for cellular IoT, you may find that a solution specifically designed for IoT is already readily available. One example is Sigfox, which is widely available in Europe. Sigfox has an established presence for IoT connectivity, and its radio modules are less expensive than those used for NB-IoT.

 

Customizability and Control: Another reason that you might prefer an LPWAN solution other than NB-IoT is customizability and control. You may want or require the flexibility that comes from keeping all configuration and capacity expansion decisions in house, rather than being constrained by the NB-IoT standard and operators.

 

Perhaps you are limited in funds, but you have a technical staff with the capabilities to design and maintain custom software or hardware optimized for your particular application challenges. A university or research consortium with a substantial pool of graduate students would likely fall into this category. The cost of the technical staff may be less than the ongoing wireless data expense of NB-IoT.

 

Finally, you may want or need to use a vendor-provided API to create tailored software for your application. Companies such as Telensa offer LPWAN solutions with this sort of flexibility.

 

In short, NB-IoT is a powerful and robust LPWAN solution that takes advantage of existing infrastructure and technical momentum. In some situations, however, it may not be the best choice. We will consider this topic further in the next blog post.

 

Follow our Next Generation Wireless Communications blog and connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

The emergence of 5G mobile communications is set to revolutionize everything we know about the design, testing, and operation of cellular systems. The industry's goal of deploying 5G in the 2020 timeframe demands that mmWave over-the-air (OTA) test solutions and requirements be developed in little more than half the time it took to develop the basic 4G MIMO OTA test methods we have today.

 

If you take away one thing from this blog post, let it be this:

 

We are going to have to move all of our testing to radiated, not just some of it like we do today, and that's a big deal.

 

 

First, a bit of background on the move from cabled to radiated testing, and then I’ll discuss the three main areas of testing that we're going to have to deal with: RF test, demodulation test, and radio resource management.

 

Millimeter-wave devices with massive antenna arrays cannot be tested using cables because there will be no possibility to add connectors for every antenna element. The dynamic (active) nature of antenna arrays means it isn’t possible to extrapolate end-to-end performance from measurements of individual antenna elements. So yes, for testing 5G, it really is time to throw away the cables…whether we want to or not!

 

A new radio design starts with the reality of the deployment environment, in this case a mmWave one. How this behaves isn’t a committee decision, it’s just the laws of physics. Next, we model the radio channel and once we have a model, we can design a new radio specification to fit the model. Then, we design products to meet the new radio specifications, and finally we test those products against our starting assumptions in the model.

 

If we have got it right—in other words, if the model overlaps sufficiently with reality—then products that pass the tests should work when they are deployed in the real environment. That's the theory.

 

This process works well at low frequencies. For mmWave, however, there is a big step up as the difference in the propagation conditions is enormous.

 

Now let’s look at the categories of radio requirements that we're going to have to measure—that is, what we measure and the environments we measure them in.

 

For RF, it's about what is already familiar—power, signal quality, sensitivity—and those are all measured in an ideal line-of-sight channel.

 

With regards to demodulation, throughput tests will be done in non-ideal (faded) conditions as was the case for LTE MIMO OTA. There we had 2D spatial channels, but for mmWave, the requirement will be 3D spatial channels because the 2D assumptions at low frequencies are no longer accurate enough.

 

Radio resource management (RRM) requirements are about signal acquisition and channel-state information (CSI) reporting, signal tracking, handover, etc. That environment is even more complicated because now we’ll have a dynamic multi-signal 3D environment unlike the static geometry we have for the demodulation tests.

 

Opportunities and Challenges

 

The benefits of 5G and mmWave have been well publicized. There's a lot of spectrum that will allow higher network capacity and data rates, and we can exploit the spatial domain and get better efficiencies. However, testing all of this has to be done over the air and that presents a number of challenges that we have to solve if we're going to have satisfied 5G customers.

 

 

 

We know that we're going to have to use active antennas in both devices and base stations, and those are hard to deal with.

 

We know that spatial tests are slower than cabled ones, so you can expect long test times.

 

We've got the whole issue of head, hand, and body blocking on devices—it's something that isn't being considered for Release 15 within 3GPP but will still impact customer experience.

 

We know that OTA testing requires large chambers and is expensive.

 

We know OTA accuracy is not as good as cabled testing—we're going to have to get used to that.

 

Channel models for demodulation and RRM tests haven’t been agreed upon yet, which is impacting agreement on baseline test methods for demodulation and RRM.

 

Takeaways

 

There's a paradigm shift going on because of mmWave. We used to work below 6 GHz and the question we asked at < 6 GHz frequencies was, "How good is my signal?" That question led to the development of non-spatial conducted requirements. The question now for mmWave is, "Where is my signal?" That's going to lead to the development of 3D spatial requirements and OTA testing. This is a fundamental shift in the industry.

 

It’s going to be a tall order…testing 5G mmWave devices.

 

Keysight is committed to getting our customers on the fastest path to 5G. Stay tuned as Keysight continues to roll out 5G testing methodologies and system solutions. Meanwhile, explore the 5G resources currently available.

 


My experience with broadcast TV in 1960’s Colorado was fraught with “ghost” images that distorted our television screen during episodes of my favorite TV programs. My parents’ explanation of this being an “echo” from the mountains was very confusing. How could light have an echo? It was in my early days at Hewlett-Packard that I learned the physics of multipath interference; and it was much later that I encountered the technology that would take advantage of these physics rather than fight them.

 

Multiple Input, Multiple Output: This act of adding an advantageous term to the Shannon-Hartley theorem to squeeze a few more bits per second from our precious spectrum is enjoying its highest popularity ever. And while it sounds like a new concept, the careful reader will note a reference in Dr. Thomas Marzetta's 2010 seminal paper on Massive MIMO to a fascinating paper dating from 1919 (Alexanderson, Ernst F. W., "Trans-Oceanic Radio Communication," Transactions of the American Institute of Electrical Engineers, Volume XXXVIII, Issue 2, July 1919). Considerations that smell a lot like MIMO appear to date from a time when the founders of radio communications were but recently in their graves.

 

Most descriptions of Massive MIMO are either opaque with multi-dimensional calculus, or full of simple brightly colored cartoon diagrams of antennas with lightning bolts. Novices like myself struggle with the topic and there is even significant debate amongst the experts.

 

MIMO is the use of multiple independent transmit and receive chains each connected to its own antenna to take advantage of the different and independent paths that radio waves follow in a reflective environment. Sophisticated baseband systems split and reassemble signals to and from these different paths to create multiple useful radio communications channels out of what used to be just one. This enables any of the following:

  1. Using more than one path to decrease the error rate of a single set of data
  2. Using more than one path to carry different sets of data
  3. Manipulating the inherent nature of multipath interference to either cancel or emphasize the signal at any physical location in the radio channel
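The Shannon-Hartley connection can be made concrete with a minimal sketch. The Python/NumPy snippet below compares one random channel realization of a single-antenna link with a 4x4 link; the Rayleigh channel model, the 10 dB SNR, and the equal power split are illustrative assumptions, not anyone's measured data.

```python
import numpy as np

def capacity_bps_per_hz(h, snr_linear):
    """Shannon capacity (bits/s/Hz) of one MIMO channel realization H (Nr x Nt),
    with the transmit power split equally across the transmit antennas."""
    nr, nt = h.shape
    m = np.eye(nr) + (snr_linear / nt) * (h @ h.conj().T)
    return float(np.log2(np.linalg.det(m).real))

rng = np.random.default_rng(1)
snr = 10 ** (10 / 10)  # 10 dB, an assumed operating point

def rayleigh(nr, nt):
    # Independent complex Gaussian paths: the "reflective environment" described above
    return (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

print("1x1 capacity:", round(capacity_bps_per_hz(rayleigh(1, 1), snr), 2), "bits/s/Hz")
print("4x4 capacity:", round(capacity_bps_per_hz(rayleigh(4, 4), snr), 2), "bits/s/Hz")
```

Averaged over many realizations, the 4x4 capacity grows roughly in proportion to the number of independent spatial paths, which is the extra term squeezed out of the same spectrum.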

 

#3 is the essence of what is now called “Massive MIMO”. But based on the heated discussions at industry and academic symposia it is clear there is disagreement about “Massive MIMO” in 5G. A few of the more hotly-debated topics:

Is "Massive MIMO" the same as "Beamforming"? No. As described above, MIMO can take advantage of beamforming, and indeed FD-MIMO has two modes that are strictly referred to as "beamforming" modes, but beamforming is also done in many non-MIMO applications.

 

How many antennas does it take to be "Massive"? Dr. Marzetta stipulates that "Massive" means not only "many antennas" (many more base station antennas than users—and more is always better) but also that each is part of an independent transceiver chain. The economics of the technology available in the 5G time frame, however, probably mean something less than 600 antennas.

 

Is FD-MIMO "Massive"? 3GPP's FD-MIMO, introduced in Release 13, has a 64-antenna element count. Hence, many deem it "massive". But 64 antennas is "much greater than" what? Probably not much more than 10 UEs. Is FD-MIMO really about servicing only 10 UEs in any one cell? Probably not.

 

Can you do Massive MIMO in FDD systems?  At least one implementation of FD-MIMO in the R13 standard is for FDD scenarios. If one accepts that FD-MIMO is “massive”, the answer to this question is “yes”. But due to the lack of scalability, I do recall Dr. Marzetta stating flatly (and I quote): “FDD is a disaster. End of story.” 

 

Will we get Massive MIMO that will improve capacity, energy efficiency, and spectral efficiency for 5G systems?  Yes.  MWC 2017 was host to impressive Massive MIMO demonstrations. And the promise of using new digital technologies to take full advantage of a rich radio channel continues to drive innovation. I look forward to it just like I look forward to the next related heated discussion--which perhaps will be a result of this very post.

 

Learn the latest information on Massive MIMO in 5G and other 5G testing

 


Near field communication (NFC) is the radio technology that allows you to pay for things by holding a credit card or cell phone up to a sensor. NFC technology is rapidly being adopted in transportation, retail, health care, entertainment, and other applications. Narrowband Internet of Things (NB-IoT) is a radio communications technology that uses existing cellular infrastructure to communicate with and collect data from remote sensors, possibly over a distance of several miles. At first glance, it may seem that these two technologies have nothing in common, but in fact they have several commonalities.

 

Commonalities of NFC and NB-IoT

 

The first commonalities are obvious: both NFC and NB-IoT are wireless communications technologies, and both will have a large impact on the world economy. Most new cell phones come standard with NFC, and the number of devices using each technology will likely soon outnumber people. Furthermore, both NFC and NB-IoT have significant requirements for data security, as both are tempting targets.

 

From a technical perspective, both NFC and NB-IoT transmit small data packets intermittently and infrequently. Both use very inexpensive transmitters, and both transmit with very small amounts of power. Therefore, both NFC and NB-IoT devices require extensive testing to make sure that they operate robustly and securely, even in noisy electromagnetic environments. This includes tests in the component development, system integration, certification, and operator acceptance phases of product development.

 

Testing NFC Devices

 

To guarantee a successful rollout, compliance and interoperability testing are required. Some industry organizations, such as the NFC Forum, have already developed technical and test specifications to enable developers to successfully test and certify their NFC devices. Keysight has multiple solutions for NFC testing, including the one-box, standalone RIDER Test System (shown below). The system can execute NFC RF and protocol measurements throughout the product development cycle, and it can flexibly evolve along with NFC standards.

 

RIDER Test System

 

The RIDER Test System can emulate NFC readers and tags; generate, acquire, and analyze signals; code and decode protocols; and perform conformance testing. It also includes an optional Automatic Positioning Robot (shown below) for the accurate, automatic positioning of test interfaces that NFC Forum specifications require.

 

Keysight InfiniiVision

Keysight InfiniiVision 3000T and 4000A oscilloscopes also include an NFC trigger option. It includes hardware NFC triggering, PC-based automated tests for Poller and Listener modes, and resonant frequency test capabilities. This automated NFC test system is ideal for manufacturing and design validation test.

 

Testing NB-IoT Devices

 

Like NFC, NB-IoT devices require sophisticated test solutions. In addition to RF testing and coverage enhancement testing, battery runtime testing is essential for NB-IoT devices. Because IoT devices transition quickly from deep sleep modes to short RF transmission modes, a large dynamic range is critical. Power analyzers with seamless ranging are therefore very popular for these applications.

 

The Keysight E7515A UXM Wireless Test Set is a robust test solution for NB-IoT. It acts as a base station and network emulator to connect and control the NB-IoT devices into different operating modes. You can also synchronize it with the N6705C DC Power Analyzer to perform battery drain analysis on NB-IoT devices. See this application note for more details. The UXM Wireless Test Set makes Keysight the first company with end-to-end simulation, design verification test and production/manufacturing test solutions for NB-IoT. Follow this blog for more information about IoT testing.

With the 5G technology evolution just on the horizon, you can feel the momentum, excitement, and even tension building in the wireless industry. 3GPP is accelerating the New Radio (NR) standard and mobile network operators are fast-tracking their deployment plans. Mobile data demand continues to grow at a rapid pace and new device categories, such as VR/AR headsets, connected/autonomous cars, and cloud-based AI-enabled home assistants, are gaining traction and poised to take advantage of new 5G infrastructure. Although massive machine type communications (mMTC) is taking a backseat to enhanced mobile broadband (eMBB) in the initial 5G standard, operators are deploying new cellular IoT technologies, such as NB-IoT and CAT-M, yielding new business models for more industry verticals.


Opportunities abound and so do challenges. To unleash the 5G business potential, chipset and device engineers must overcome significant challenges in the technology evolution and revolution:  mm-wave propagation and channel modeling, wideband calibration, antenna complexity, beamforming, over-the-air measurements, protocol optimization for peak data throughput, digital interface capacity, battery life; the list goes on. Those of us in the test and measurement world are addressing the same challenges. We have the extra requirement of ensuring tools are ready in time (i.e. before!) for the designers of 5G systems. But Keysight engineers have also taken full advantage of our deep experience in aerospace & defense mm-wave and wideband applications and from serving the previous four generations of wireless communications.

 

Collaborating Globally to Realize 5G Technologies

 

But that list of technical issues is long, and the answers require teamwork (Roger Nichols' last blog described the benefits of early and broad engagement in these generational changes: Getting Better in 5G—with a Little Help from Your Friends). Operator trials around the world are demonstrating promising results and continue to progress towards commercialization. The path to 5G is forming through the 4G evolution and pre-5G activities. While there is the inevitable concern about the "killer 5G use-case," we see multiple ideas evolving over time: from stand-alone fixed wireless access for the home to non-standalone mobile access with interworking between traditional LTE networks and the new 5G standard. In addition, regulators are opening new spectrum between 3 and 5 GHz for mobile wireless applications. Industry-leading chipset vendors have publicly announced 5G modem solutions that will support NR at both sub-6 GHz and mm-wave as well as legacy cellular formats. That breadth of new technology, the addition of legacy technology, the mix of use cases, and the range of carrier frequencies and bandwidths is a huge map of complexity. Collaboration is necessary for success, and we were excited to announce another example just last week: Keysight 5G Test Solutions have been selected by Qualcomm to help accelerate their commercialization strategy, and we are collaborating to test their 5G chipsets. Keysight Technologies Selected by Qualcomm for 5G Test Solutions

 

Enabling the Fastest Path to 5G

 

So what does it take to gain the confidence of key players in the industry? Like making an excellent 5-course meal, the recipe is simple but the execution is rather involved.

  1. Be there on time with necessary tools. Our most recent example is this month's announcement of Keysight's 5G Protocol R&D Toolset: Keysight Enables Prototyping of Next-Generation Mobile Devices with the Industry's First 5G Protocol R&D Toolset. Implementing and validating the new capabilities of NR will be particularly challenging given features like flexible numerology, channel codes, and management of active antenna systems (AAS). Keysight 5G Network Emulation Solutions. 5G Protocol R&D Toolset
  2. Work very closely with market leaders to understand the nuances of their design needs.
  3. Add to and update the tools; even make real-time adjustments to plans. Like our first 5G SystemVue solution launched in 2015, the solution described above is only the first in a series of (in this case) network emulation solutions to address the 5G device workflow.

 

What Will We See Next?

 

The tone of recent public 5G events is a glorious mix of hype, amazing technology, and anxiety about how best to succeed in this industry. We remain convinced that the business model concerns will be addressed by the same kinds of smart people who developed everything from smart-phones to the most popular social media applications. It is up to us on the technology side to be ready for those innovative business ideas that will use the 5G network to the hilt. It is very exciting to be focused on enabling time to market and cost efficiencies by delivering solutions that stitch together the wireless device development workflow.

Explore new signals, scenarios and topologies for 5G mobile communication

 


5G promises substantial improvements in wireless communications, including higher throughput, lower latency, improved reliability, and better efficiency. Achieving these goals requires a variety of new technologies and techniques: higher frequencies, wider bandwidths, new modulation schemes, massive MIMO, phased-array antennas, and more.

 

These bring new challenges in validating device performance. One of the key measurements is error vector magnitude (EVM), which is an indicator of the quality of modulated signals as they pass through a device under test (DUT). In many cases, the EVM value must remain below a specific threshold—and getting an accurate measurement requires that the test system itself be very clean (i.e., have a low EVM itself). This includes all fixtures, cables, adaptors, couplers, filters, pre-amplifiers, splitters, and switches between the DUT and the measurement system.
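As a reminder of what the number represents, here is a minimal RMS-EVM sketch in Python/NumPy. Normalization conventions vary between standards; RMS reference power is assumed here, and the noisy QPSK test signal is purely illustrative.

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS error vector magnitude, as a percentage of RMS reference power."""
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))

# Illustrative example: ideal QPSK symbols plus a small amount of additive noise
rng = np.random.default_rng(0)
ref = (rng.choice([-1.0, 1.0], 1000) + 1j * rng.choice([-1.0, 1.0], 1000)) / np.sqrt(2)
meas = ref + 0.02 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(meas, ref):.2f} %")
```

Anything between the signal source and the analyzer that distorts the symbols (fixtures, cables, frequency response) inflates this number, which is why the residual EVM of the test system itself matters so much.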

 

At 5G bandwidths and frequencies, the test fixture can impose a significant channel frequency response on the test system and adversely affect EVM results. Hence, the measurement now includes the characteristics of the test fixture and the DUT—and this makes it difficult, if not impossible, to determine the true performance of the DUT.

 

Calibration can move the test plane from the test instrument connector to the input connector of the DUT (Figures 1, 2). Keysight has created a solution that uses a NIST-traceable reference comb generator to enable complete channel characterization of the test fixture on both sides of a transceiver (or any other component or device).

Figure 1. This uncalibrated test system has unknown signal quality at the input to the DUT (A1’). A common mistake is to simply use equalization in the analyzer (A2), but this occurs after the DUT and it also removes some of the imperfect device performance we’re trying to characterize.

Figure 2. In this calibrated test system, the system and fixturing responses have been removed, enabling a known-quality signal to be incident to the DUT (B1). The analyzer errors can also be removed (B2).

 

Figure 3 shows the uncalibrated test fixture equalizer response for a 900 MHz BW signal at 28 GHz. The upper trace shows the amplitude response with a significant roll off at the upper end of the bandwidth. The lower trace shows the phase response, which also has considerable variation over the bandwidth. These imperfections would limit EVM to being no better than about 5 percent.

Figure 3. These OFDM frequency response corrections for an uncalibrated system show variations of nearly 7 dB in raw amplitude and 45 degrees of phase across a 900 MHz bandwidth at 28 GHz.

 

 

Figure 4. Here is the same OFDM response for a calibrated system, showing variations of only 0.2 dB and 2 degrees. The resulting signal EVM dropped to less than 1 percent from more than 5 percent.

 

Figure 5 shows the demodulation results after calibration for a single-carrier 16QAM signal nearly 1 GHz wide. The upper-left trace shows a very clean constellation diagram. The lower-left trace shows the spectrum with a bandwidth of approximately 1 GHz. The upper-right trace shows the equalizer response in both magnitude and phase: both are nearly flat, indicating the equalizer is not compensating for any residual channel response in the test fixture. The lower-middle trace shows the error summary: EVM is approximately 0.7 percent, which is a very good result. This system would be ideal for determining a device's characteristics.

 

Figure 5. Calibration enabled the signal generation of a 1 GHz wide signal with an EVM of less than 0.7 percent at 28 GHz. This EVM occurs at the input plane of the DUT.

 

In pursuit of tremendous improvements in cellular network capability, 5G is using new technologies that pose many challenges to testing. Fortunately, calibration will help ensure that we’re measuring the true performance of the DUT without the effects of the test fixture.

 

We can help you learn more about the testing of 5G wireless technologies.

 


Once you have taken the steps described in the Simple Steps to Optimize Battery Runtime blog, you still have opportunities to reduce power consumption in your battery-powered device. Be sure to measure the actual current consumption before and after each change, and try to understand why the results are as observed. The more understanding you develop, the better you will be at predicting the effects of future changes. This will help you get future products to market faster with optimized battery runtime.

 

Hardware optimizations

Consider using a simple analog comparator instead of an analog/digital converter (ADC) to trigger certain functions. The ADC is likely to be more accurate and faster than the comparator, but it has longer startup time and consumes more current. The comparator continuously compares signals against a threshold, and for some tasks, this may be sufficient. For cases where you need the accuracy and versatility of the ADC, turn off internal voltage references on the ADC and use Vcc as the reference voltage if possible.

 

Use two-speed startup procedures that rely on relatively slow RC timers to clock basic bootup tasks while the microcontroller unit (MCU) waits for the crystal oscillator to stabilize. Be sure to calibrate these internal RC timers or buy factory-trimmed parts.

 

Firmware optimizations

Use event-driven code to control program flow and wake up the otherwise-idle MCU only as necessary. Plan MCU wakeups to combine several functions into one wakeup cycle. Avoid frequent subroutine and function calls to limit program overhead, and use computed branches with fast table lookups instead of flag polling and long software calculations. Use single-cycle CPU registers for long software routines whenever possible.

 

Implement decimation, averaging, and other data reduction techniques appropriately to reduce the amount of data transmitted wirelessly. Also, make sure to thoroughly test various wireless handshaking options in an actual usage environment to strike the ideal balance between wasting time on unsuccessful communication attempts and performing excessive retries.
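As a simple illustration of the data-reduction idea, here is a sketch only; the block size and sensor values are assumed for the example.

```python
import numpy as np

def average_blocks(samples, block):
    """Replace every `block` raw readings with their mean before transmission."""
    samples = np.asarray(samples, dtype=float)
    usable = len(samples) - len(samples) % block   # drop any ragged tail
    return samples[:usable].reshape(-1, block).mean(axis=1)

raw = 20.0 + 0.1 * np.random.default_rng(2).standard_normal(1000)  # e.g. temperature readings
reduced = average_blocks(raw, block=50)
print(len(raw), "->", len(reduced), "values to transmit")           # 1000 -> 20
```

A 50:1 reduction like this shortens radio on-time by roughly the same factor, which usually dwarfs the energy cost of the averaging itself.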

 

Your oscilloscope will probably be useful in obtaining quick measurements of these current waveforms, and depending on the communication protocol, an oscilloscope may be the only instrument with the necessary bandwidth to make such measurements. However, once you know the bandwidth of your signal, you may be able to use a DC power analyzer or device current waveform analyzer to make these measurements. These devices will make measurements with better precision and provide more detailed analysis, such as automatic current profiles.

 

By implementing these strategies and measuring current consumption throughout your development process, you will quickly optimize battery runtime and drive success in IoT and other battery-driven applications for you and your customers.

 

Learn more about maximizing battery life of IoT smart devices by downloading helpful applications notes and webcasts from Keysight.

 


After you have selected your microcontroller unit (MCU), there are several simple steps you can take to optimize battery runtime. Correctly configuring and testing your hardware and firmware can help you to develop the optimal IoT device configuration.

 

Power budget

Begin by creating a theoretical power budget for your device. Using the MCU’s data sheet and manual, consider a complete cycle of events, such as waking, collecting data, processing data, turning on the radio, transmitting data, turning off the radio, and returning to sleep. Multiply the current by the duration of each step, and add the values to obtain a projected total for a typical operational cycle. Be sure to include the current consumed while the device is in its longest sleep mode; even nanoamps add up over long periods of time. Your MCU vendor should have software that helps you estimate current drain associated with various operational parameters, and you can use a DC power analyzer, digital multimeter (DMM), or device current waveform analyzer to fine tune the estimated values.
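A power budget of this kind is easy to keep in a small script so you can update it as measured values replace data-sheet numbers. The sketch below uses purely illustrative currents, durations, and battery capacity, not figures for any particular MCU.

```python
# (phase, current_mA, duration_s) for one operational cycle -- illustrative values only
cycle = [
    ("wake",       2.0, 0.005),
    ("sense",      1.5, 0.010),
    ("process",    3.0, 0.020),
    ("radio on",   6.0, 0.002),
    ("transmit",  18.0, 0.050),
    ("radio off",  2.0, 0.002),
]
sleep_ua, sleep_s = 1.5, 60.0            # deep-sleep current and time between cycles

active_mC = sum(i_ma * t_s for _, i_ma, t_s in cycle)   # charge per active burst, mC
sleep_mC  = (sleep_ua / 1000.0) * sleep_s
period_s  = sum(t_s for _, _, t_s in cycle) + sleep_s
avg_mA    = (active_mC + sleep_mC) / period_s

battery_mAh = 220.0                       # assume a coin-cell-class battery
print(f"average current: {avg_mA * 1000:.1f} uA")
print(f"estimated runtime: {battery_mAh / avg_mA / 24:.0f} days")
```

Swapping in currents measured with the DC power analyzer or SMU then turns the same script into a running sanity check on your battery-life target.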

 

Hardware configuration

Begin by optimizing the clock speed at which the MCU runs. The current consumption for many MCUs is specified in units of µA / MHz, which means that a processor with a slow clock consumes less current than a processor with a faster clock. However, a processor working at 100% capacity will consume the same amount of energy at 10 MHz as at 20 MHz, because the 20-MHz processor will consume twice the current for half as long. The conclusion is that for code segments where the processor is largely idle, you can save current by running the MCU more slowly.
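The µA/MHz reasoning is easy to check with a few lines of arithmetic; the 200 µA/MHz figure below is an assumption for illustration, not any specific part's specification.

```python
UA_PER_MHZ = 200.0      # assumed active-current specification

def task_charge_uC(freq_mhz, busy_cycles, wait_s):
    """Charge for a task made of CPU-bound work (fixed cycle count) plus a wait
    whose wall-clock length is set by external events, not by the clock."""
    i_ua   = UA_PER_MHZ * freq_mhz            # run current scales with clock speed
    busy_s = busy_cycles / (freq_mhz * 1e6)   # busy time shrinks as the clock speeds up
    return i_ua * (busy_s + wait_s)

for f in (10, 20):
    print(f"{f} MHz: {task_charge_uC(f, busy_cycles=1_000_000, wait_s=0.010):.0f} uC")
```

The CPU-bound portion costs the same charge at either clock speed; only the externally fixed wait penalizes the faster clock, which is exactly the largely idle case described above.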

 

Next, optimize the settings associated with data sampling. These settings include the frequency with which the sensor wakes up to collect data, the number of samples taken, and the ADC sampling rate. There is often a tradeoff between measurement accuracy and these sampling parameters, so set the sampling parameters to minimize current drain while delivering acceptable accuracy. Similarly, you may be able to change the rate at which the MCU updates the device display, requests data from sensors, flashes LEDs, or turns on the radio.

 

Finally, carefully examine the various idle, snooze, sleep, and hibernation modes available on your MCU. For example, some MCUs have sleep modes that disable the real-time clock (RTC), and disabling the RTC may reduce your sleep current consumption by a factor of six or more. Of course, if you do this, you will likely need some mechanism to recover the date and time, perhaps through a base station.

 

Firmware options

Design your program to finish each task quickly and return the MCU to sleep. Cycle power on sensors and other peripherals so that they are on only when needed. When you cycle sensor power, remember power-on stabilization time to avoid affecting measurement accuracy. For ultra-low-power modes, consider using a precision source/measure unit (SMU) to make very accurate current measurements, especially when you have the option to power the MCU at different voltage levels.

 

Consider using relatively low-power integrated peripheral modules to replace software functions that would otherwise be executed by the MCU. For example, timer peripherals may be able to automatically generate pulse-width modulation (PWM) and receive external timing signals.

 

Use good programming practices, such as setting constants outside of loops, avoiding declaring unnecessary variables, unrolling small loops, and shifting bits to replace certain integer math operations. Also, use code analysis tools and turn on all compiler optimizations.

 

Test and learn

Finally, use your instruments’ software tools to analyze the actual current consumption frequently as you develop the MCU code. These tools may include a complementary cumulative distribution function (CCDF) or automatic current profile, and they will give you information to refine your power budget. Observe and document how your coding decisions affect current consumption to optimize the present program and give you a head start on subsequent projects.

 

Learn more about maximizing battery life of IoT smart devices by downloading helpful applications notes and webcasts from Keysight.

 


The number of IoT devices seems to be at an all-time high, from industrial sensors, to IoT clothespins (really!), to smart water dispensers for cats. For mainstream IoT applications, battery life is often a huge differentiator in buying decisions, so designing a device with an energy-frugal microcontroller unit (MCU) is a critical success factor. Once you have selected the MCU, however, your job has just begun. Your MCU firmware programming decisions can have effects ranging from fractions of a percent to an order of magnitude or more.

 

Architecture
Carefully select the MCU hardware architecture to match your application. For example, some MCUs include efficient DC-DC buck converters that allow you to specify voltage levels for the MCU and peripherals that can operate across a range of voltages. A transceiver IC that can operate at voltages ranging from 2.2 to 3.6 V will have substantially different power consumption, output power, attenuation, and blocking characteristics at different input voltages, so flexible DC-DC conversion is a plus. Also, an MCU may integrate RF radio capabilities, sensors, and other peripherals. Greater integration may decrease current draw by offering greater control over options that can reduce current.

 

Accelerators and peripherals
Some MCUs have hardware accelerators for rapid CRC computation, cryptography, random number generation, or other math-intensive operations. The high speed of these peripherals lets you put the MCU to sleep faster, but there may be a tradeoff between speed and current consumption. Other MCUs use a wake-up peripheral that saves time by using a low-power RC circuit to clock the MCU while the main crystal oscillator powers up and stabilizes, again saving MCU runtime. Some MCUs have sensors with buffers that accumulate multiple samples for efficient batch reading later – customers probably do not need their IoT aquarium thermometer to send data to the cloud every millisecond. Some of these peripherals may also improve the security of your device because they include hardware security features.

 

Memory
Every IoT device needs some memory to run its programs, but unused RAM simply wastes current. Some, but not all, MCUs allow you to turn off power to unused RAM. Also, the various memory technologies (EEPROM, SRAM, FRAM, flash, and so on) consume different amounts of power. Some MCUs have one type of memory technology for program storage, and small caches of SRAM that perform most program operations with low-power memory.

 

Low power states
The key to long battery life is to increase the time spent in low-power sleep states, but the sleep states in different MCU architectures vary dramatically. Furthermore, the names used for these states – sleep, hibernate, idle, deep sleep, standby, light sleep, snooze – lack consistency. Review the MCU’s low power modes carefully, along with the consequences of entering and exiting them, such as data loss, time to enter and exit the state, and requirements to reboot various levels of functionality.

 

Clocks
The variety of timers and clocks available on the MCU may also affect battery runtime. At a minimum, most MCUs have two clocks – a relatively fast master clock for fast reactions and signal processing and another, slower clock for keeping the real-time clock alive in sleep states. Other MCUs may combine a small current sink, a comparator, and an RC circuit to form a low-power wake-up source whose period depends on the circuit’s resistance and capacitance. This is less precise than a crystal, but it saves power, and it may be sufficient for some applications.

In conclusion, there are many things to consider when you select your MCU. Future articles will describe what to do after you have made that selection.

The never-ending drive to increase IoT battery life is great for customers, but it poses extraordinary challenges for design engineers. As sleep mode currents edge ever-closer to zero, the challenge of making measurements across a wide dynamic range becomes increasingly difficult.

 

Consider wireless transceiver chips, illustrated in the chart below. Each line represents one transceiver, and the lines go from a given device’s lowest sleep current to its highest operating current. The dynamic range, of course, is the ratio of the two current levels, and the base two logarithm of that ratio indicates how many bits are required to represent the dynamic range.

Merely representing the dynamic range, however, is not sufficient for accurate measurements. For example, if your measurement instrument uses 18 bits to represent a 250,000:1 dynamic range that spans 25 mA (middle of the chart), then your measurement resolution is approximately 100 nA. When you measure relatively large currents in the mA range, this is fine, but when you measure the 100 nA sleep current, your accuracy is ±100 percent – a rather coarse value.

 

For 25% accuracy, you need two additional bits, because the four possible values of two bits divide your resolution accordingly. Similarly, for 10, 5, and 1 percent accuracy, you need 4, 5, and 7 additional bits, as summarized in the following table, which uses non-integer base two logarithms to reduce the number of bits in some cases.
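The arithmetic behind those bit counts is straightforward; the sketch below simply reproduces the numbers above, using the 25 mA span and 18-bit example as given.

```python
import math

# Example from above: 18 bits spanning 25 mA
span_A, bits = 25e-3, 18
resolution_A = span_A / 2 ** bits
print(f"resolution ~ {resolution_A * 1e9:.0f} nA")        # ~95 nA, i.e. roughly 100 nA

def extra_bits(accuracy_pct):
    """Additional bits needed to resolve the lowest level to a given relative accuracy."""
    return math.ceil(math.log2(100.0 / accuracy_pct))

for pct in (25, 10, 5, 1):
    print(f"{pct:>2}% accuracy at the bottom of the range -> {extra_bits(pct)} extra bits")
```

With 20 or more total bits implied for even modest accuracy at the sleep-current end of the range, the appeal of seamless ranging described below is clear.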

It is, of course, difficult to find instruments that provide accurate current measurements with 20 or more bits of resolution in a given measurement range. The best solution is to use an instrument with seamless ranging that avoids glitching during range changes, or to use a dual range current probe with an instrument that has the intelligence to use the appropriate range as the current level changes.

Learn more about IoT device design and test and explore the available Keysight IoT test solutions

Moray

The London 5G debates

Posted by Moray, Nov 18, 2016

Following on the theme from Roger’s recent post Blessings and Curses: Firsthand Commentary on the State of 5G Wireless Conclaves, I was invited to take part in one of the most enlightening 5G events I’ve encountered. It was the first of two 5G debates in London organized by Cambridge Wireless, a UK-based community bringing the mobile wireless community together to solve business problems.

 

The debate took place in the prestigious "Shard" building in central London and was chaired by Prof. William Webb, last year's president of the Institution of Engineering and Technology. One of the things that distinguished this debate from so many others is that it was a stand-alone event, attracting a diverse audience not typically seen at industry conferences (e.g., the ones in which debates are often curtailed just when they get interesting). The three other panelists alongside myself were Howard Benn, Samsung's head of standards and industrial affairs; Paul Ceely, head of mobile strategy at British Telecom (which recently bought operator EE); and Joe Butler, director of technology at UK regulator Ofcom and, for this debate, representing the UK's National Infrastructure Commission, which is tasked with planning the UK's critical infrastructure.

 

The theme of the first debate was "What's left for 5G now that 4G can do IoT and Gbits/s speeds?" while the second had a business focus: "Will operators see increased ARPU from 5G?" A short video of the first debate and a full transcript are available here, and the second debate is here.

 

Each panelist gave a short opening statement. Given the recent political environment in the UK, I led with the good news that "5G will be much easier than Brexit," and this raised the first of many laughs in what was a good-natured but insightful debate. I gave my reasoning that we have engineers who actually understand 5G, whereas the world of politics and economics is populated with those who get by with subjective opinion. That said, I pointed out there is a lot of noise in this 5G space, so it is important to know the credentials of those giving advice: are they based on commercial self-interest and hype, or are they based on observation of reality backed by physics? After all, at the end of the day, 5G has to work before it can be commercially successful.

 

The debate covered a number of areas in sub-6 GHz territory, through to millimeter-wave developments, IoT and network evolution with NFV and SDN. But the key moment for me started when Joe Butler described his frustration with current infrastructure: “If I get on the train from Brighton to London, which I do on a very, very regular basis, I would dearly love to be able to make a phone call that lasted longer than 30 seconds!” After the laughter died down, the chairman used the opportunity to conduct one of many quick polls of the audience. In this one he asked for a vote in favour of 10 Mbps ubiquitous connectivity vs. delivering blindingly fast, 100 megabits (or even gigabits) a second in pockets of places and also some super low-latency services. The answer to the first question was spontaneously unanimous as can be seen in the picture below captured from the video.

So this means the second debate on the 5G business case will not be short of opinions.

 

The next opportunity for me to interact with the wider 5G community will be at an upcoming IWPC workshop in San Jose hosted by Verizon and focused on the role millimeter-waves will have in 5G. On this occasion I will be delivering a high-level technical paper called “Modelling what matters” that will ask important questions about the focus of current research into 5G. In particular, what concerns me is whether there is sufficient research targeting the design and test of 5G “new radio” to mitigate the spatial dynamics of millimeter-wave radio propagation. More on that later…

 

 

Roger Nichols

Break Away & Score in 5G

Posted by Roger Nichols, Nov 17, 2016

Soccer and 5G have at least 12 things in common. Check out the parallels in our new infographic.

 

Take the Lead in 5G Wireless Technology and learn more HERE

 

Don't forget to follow our blog to connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you. 

As engineers, sometimes the most useful way to imagine forward is to pause and look at how far we’ve come over the past several decades. These days, many of us are doing that with 5G.

 

Why am I so excited about the fifth generation of wireless communication? As history reveals, the hallmark of any great technology is the way it improves the human condition and revolutionizes our understanding of the world.

New technology builds on existing technology, and then takes it farther—sometimes in unexpected ways. The humble wheel led to carts and chariots, and it also led to cogwheels and bicycles and trains and automobiles. It turned into waterwheels and turbines, and it begot astrolabes and clocks and hard disk drives.

 

Evolution and revolution have also occurred in human communication, driven by a need to connect beyond shouting distance. In the beginning, we tried smoke signals and the beating of drums. As the human mind improved, our ancestors came to realize there might be better, more efficient, more nuanced ways to communicate over long distances.

 

Semaphore flags and newspapers found their place in the timeline of communications. But the Cambrian explosion of communications came about in the 19th and 20th centuries, with pioneers introducing all kinds of new systems: rotary printing presses, electric telegraphs, telephones, and radios. Thank you, Mr. Edison and Professor Maxwell, and danke schön, Herr Doktor Hertz.

 

Science fiction has also played its part, stirring our imaginations and inspiring innovation. The biggest names in science fiction have been renowned for being ahead of their time in terms of their ideas: the first geosynchronous communications satellite was launched less than two decades after Arthur C. Clarke wrote his article on the topic.

In 1966, the original Star Trek put wireless portable communication on our TV screens and firmly into our collective mind. Of course, the first “mobile phones” were so large the radios had to be mounted in car trunks or carried in a briefcase. And they weren’t cellular: they could connect into the local public telephone network, but they didn’t use base station sites to communicate.

 

That technology took root in the 1980s. Over the last 30 years we have been accelerating from 1G analog technology to the alphabet soup of digital modulation and multiple-access schemes: 2G had GSM, GPRS, cdmaOne and EDGE; 3G, which is still widely used, has W-CDMA, HSPA, HSPA+, cdma2000, and more; and 4G has OFDMA, SC-FDMA, CDMA, MC-CDMA, and more.

 

Innovations give us reason to be excited about 4G LTE and the future vision that may become reality in 5G. Big steps forward include spatial processing techniques such as multi-antenna schemes and multi-user MIMO, and these will give way to experimentation with massive MIMO, millimeter-wave frequencies and multi-gigahertz bandwidths.

 

It has already been a long journey from Maxwell’s equations to the too-large-for-my-pocket smartphone. Moving forward, 5G is expected to enable possibilities like an Internet of Things that may contain tens of billions of connected devices, enabling another technology revolution.

 

Although the 5G standards are yet to be finalized, a sizable workforce around the world is doing the difficult groundwork, once again turning science fiction into hard science. Here at Keysight, we’re doing our part to support those efforts. And we'll keep writing about it.

How much does a $1.00 battery cost? That may seem to be self-answering, along the lines of, “Who is in Grant’s Tomb?” or, “When was the War of 1812?” However, a single $1.00 battery may actually cost you hundreds or thousands of dollars when you consider all of the costs associated with its failure. Given the proliferation of batteries in the Internet of Things (IoT), it is especially important to understand the costs involved.

 

Before the Battery Fails

Before the battery fails, you have the transaction costs associated with ordering, receiving, accounting for, and stocking the battery. If you fail to stock replacement batteries, you may need to have an employee make a special trip to purchase the battery or to pay for a delivery service, either one of which might easily cost you many times the price of the battery.

 

For some applications, such as implanted medical devices or remote security devices, you need to consider the cost of actively monitoring the remaining battery level. These sorts of runtime-critical applications may also require you to test and verify the replacement battery’s capacity and charge level.

 

When the Battery Fails

The minute the battery fails, additional costs accrue due to the loss of device functionality. Perhaps a dead battery is just a minor inconvenience, such as loss of remote control for a projector, but perhaps a dead battery delays a production process or customer engagement. In the extreme, a dead battery could endanger lives, as in military, outdoor adventure, or medical applications.

 

Once you have identified the need to replace the battery, there are costs associated with the person who replaces the battery. Depending on the application, the replacement may be performed by an entry-level employee, a skilled technician, or even a cardiologist or thoracic surgeon.

 

In addition to the employee costs, there may be disruption costs associated with the replacement process. For example, consider a telemetry device that transmits patient data to a nursing station. A battery change disrupts other activities of the nurse aide, and the patient often wakes up when the nurse aide changes the telemetry device battery. If the battery is inside the patient, as in an implantable defibrillator, the surgery, anesthesia, hospital stay, and follow-up care can total $50,000.

 

A battery replacement procedure may also include transportation costs when special equipment is required. A consumer may be able to replace a battery on an inexpensive watch, but a high-end or waterproof watch may require special disassembly or resealing equipment.

 

Finally, there are opportunity costs associated with battery replacement. Every minute and every dollar devoted to battery replacement is a resource that cannot be devoted to other activities.

 

After the Battery Replacement

Once the battery has been replaced, there are additional costs to be considered. There is the waste management cost borne by the company, and in some cases there is an additional environmental cost borne by the larger community. If the battery is rechargeable, there are costs associated with the equipment, power, and people involved in the recharging process.

 

A short battery runtime may negatively affect the user’s view of the product, and if two similar devices have similar features, battery life may be the deciding factor in customers’ purchase decisions. In the extreme, there may even be product recall costs or legal liability associated with failed batteries, especially in the medical field.

 

Conclusion

In summary, a $1 battery can end up costing users far more than the basic purchase price. There are costs before, during, and after replacement, and in extreme situations, battery runtime can even be a life safety issue. Design engineers who focus on improving the battery runtime of their devices can substantially improve their customers’ bottom lines, and in so doing they may generate improved sales and customer loyalty.

The last few weeks have been a whirlwind—literally and figuratively.

 

On Tuesday, October 4, the European Microwave Week exhibition opened in London. We at Keysight—with some fanfare—pulled the fancy red drape off our new 110 GHz signal analyzer, the N9041B UXA. This thing is a total game-changer: it covers 3 Hz to 110 GHz in one unbanded sweep, has sensitivity 25 dB better than the alternatives, and provides up to 5 GHz of analysis bandwidth at high frequencies. Similar to other devices we all use, the UXA also has a large pinch/swipe multi-touch display.

Keysight N9041B UXA 110 GHz Signal Analyzer showing a 3 Hz to 110 GHz sweep

 

Did I mention it goes all the way to 110 GHz? That’s like the volume going to 11 on a guitar amp! I mean, this is seriously kinda cool.

 

While my colleagues were uncorking champagne, chatting up journalists, nibbling on tiny cheese-and-cracker appetizers, and showing off our super analyzer to all comers, I skipped the party and did what every good sales and marketing manager does: jumped on a plane and headed the other direction, visiting Japan, Korea, and China in a whirlwind two-week trip.


Arriving in Tokyo ahead of the storm – everything looks calm

 

One small whirlwind problem: Super Typhoon Chaba. He started as a grumpy little storm, but during my long flight west, Chaba bulked up and turned into a typhoon (a hurricane to us Westerners) with a truly bad attitude. Monday morning, before the launch in London, Chaba was preparing to make landfall in Japan’s western islands while I was to the east near Tokyo, meeting with some of our backhaul customers.

 

Backhaul is a tricky business. You need to push lots of Pokémon GO and cat video data to the cell tower so it can be beamed to all those cellphones. If you’re like me, you always imagined this happening with a big fat optical pipe (fiber). But it turns out those cell towers aren’t easy to plumb and some just happen to be moving–like, say, on a high-speed train.

 

Point-to-point wireless costs less than digging up the neighborhood and laying pipe, and carriers can use high-frequency signals and high-gain antennas to solve their last-mile problem. These wireless solutions are also a good fit for seismically active areas (i.e., Japan). Thus, many network equipment manufacturers (NEMs) are investing in high-capacity backhaul to enable the new big-bandwidth requirements of 4.5G and 5G.

 

In backhaul, each pair of high-gain antennas better not be spewing signals at the wrong frequencies and in the wrong direction. Validating this requires out-of-band (OOB) spurious emissions testing, which my colleague Ben Zarlingo refers to as "compromising emanations." In a curious coincidence for our new 110 GHz Signal Analyzer, the Japanese government requires emissions testing all the way out to 110 GHz.

Watching the storm coming down on Japan

 

While Chaba continued to grow into an official Super Typhoon, my meetings were comparatively calm. At one key lab, a well-known Ph.D. thought we were joking when we described the analyzer's performance and capabilities. My Japanese doesn't go much beyond yakitori and Asahi, but I learned how the word "super" sounds when two separate hosts used the adjective to describe the new UXA: "suu-pah!"

 

As it turns out, the alternative to a single-sweep instrument that can measure from 3 Hz to 110 GHz involves a harmonic mixer, which has in-band imaging issues and can limit the analysis bandwidth due to IF complications. Mixer-based solutions are proving to be a big thorn in the side of the R&D teams responsible for some seriously complicated millimeter-wave testing. Thus, a 110 GHz Super Signal Analyzer is exactly what backhaul designers are looking for—and that made it a real pleasure to show these backhaul customers Keysight's newest UXA.

 

As Super Typhoon Chaba moved north of Japan, I flew around the storm to South Korea. That’s where I met with some customers who are developing 5G wireless capability—and I’ll write about that next time.