
The previous blog post discussed NB-IoT and explained why, although it is a great IoT wireless solution that leverages massive infrastructure and technical momentum, it may not be the best choice in certain locations or in applications that demand great flexibility.

 

There are three more situations where other LPWAN technologies may be superior, involving considerations of security, cost, and application requirements.

 

Security: Because it uses existing cellular infrastructure, NB-IoT comes with security built in. However, as a TCP/IP-based technology, it is susceptible to denial-of-service attacks; sensors that communicate with their gateway without using IP addresses are not susceptible to these attacks. Furthermore, NB-IoT is inappropriate for applications where data must be kept in-house on physically secured servers, because storing information on remote servers adds an additional layer of risk. Finally, you may want extra encryption for your data, especially if it is highly proprietary or associated with national defense or critical infrastructure. LoRaWAN has three layers of keyed encryption, which makes it exceptionally secure.

 

Cost: A second area where NB-IoT may not be ideal is cost. Depending on the cellular service provider, you are likely to encounter monthly device costs, SIM card costs, and data costs. For example, Verizon charges $2 per month for up to 200 kB of data. Prices do decrease dramatically for large data users; for example, Verizon charges less than $10 per GB. Where the volume of data is large, NB-IoT shines; for applications with many sensors that generate small amounts of data, NB-IoT can be a very expensive proposition.
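To make that tradeoff concrete, here is a quick back-of-the-envelope calculation using the example rates quoted above; the fleet size, and the assumption that each device actually uses its full 200 kB allotment, are purely illustrative:

```python
def monthly_cost_small_devices(n_devices, per_device_fee=2.00):
    """Many sensors, each sending little data: cost scales with device count."""
    return n_devices * per_device_fee

def effective_cost_per_mb(per_device_fee=2.00, kb_included=200):
    """Effective rate per MB if a device uses its full small-plan allotment."""
    return per_device_fee / (kb_included / 1024)

fleet_cost = monthly_cost_small_devices(1000)   # 1,000 sensors: $2,000 every month
small_rate = effective_cost_per_mb()            # ~$10.24 per MB on the small plan
bulk_rate = 10.00 / 1024                        # ~$0.01 per MB at the quoted bulk rate
```

The per-megabyte gap of roughly three orders of magnitude is why NB-IoT pricing favors high-volume users over large fleets of chatty-but-frugal sensors.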

 

In addition to the monthly costs, you may have costs associated with the IoT device development process, including the expenses of PTCRB (PCS Type Certification Review Board) certification and of carrier certification (tens of thousands of dollars per carrier). These certifications are critical to the success of NB-IoT; flawed devices on the cellular network could cause several types of problems, including slowing down the overall flow of data.

 

Application requirements: Finally, you may have specific requirements for your application that make NB-IoT either impossible or impractical. For example, what if you have a massive array of sensors? A WAVIoT solution can handle more than 2 million nodes per gateway, far more than NB-IoT. Perhaps you need a link budget larger than 164 dB, or you need data rates of up to 1 Mbps, such as LTE Cat-M1 provides.

 

Perhaps you have a high-mobility application that must be able to move from one cell to another in milliseconds when requested by the network (again, LTE Cat-M1 is a good choice here). Or perhaps your application is very specific, and you want to take advantage of work others have already done. One example would be to use Wireless M-Bus for smart utility metering. The fact that the system is optimized for a single application makes it very efficient and robust for its intended purpose, although it does lack many features that would be common in a general-purpose IoT LPWAN solution. Another example would be a solution where you want to take advantage of an artificial intelligence (AI) solution such as IBM's Watson, using the IBM Watson IoT Platform.

 

Although there are situations where NB-IoT may not be the best choice, it is a powerful solution for many IoT applications. However, it is important to consider your application’s location, requirements, security needs, and budget before selecting an LPWAN solution for the IoT.

The NB-IoT LPWAN radio technology is a great solution for many IoT applications because it leverages long-proven cellular radio technology and infrastructure that is supported by numerous cellular providers worldwide. Furthermore, NB-IoT has good security, and it is an LTE specification from 3GPP, which gives it substantial technical momentum for evolution today and in the future. Learn more about NB-IoT design and test challenges.

 

However, NB-IoT is not the right solution for every IoT application, and you should consider the information below to determine whether other LPWAN technologies might better suit your application context and objectives. Note, however, that a technology with an advantage over NB-IoT in one area may have a significant disadvantage in another. Selecting an IoT radio technology involves a complex set of tradeoffs.

 

Coverage area: One reason that NB-IoT may not be the best choice is that your application is in an area with no or poor LTE cellular coverage; perhaps the local cellular technology is GSM or CDMA, which are incompatible with NB-IoT. One LPWAN alternative, long-range WiFi, has been proven to work at over 350 km in certain cases. To be fair, very long range WiFi is not common, but it is relatively straightforward to achieve distances over 20 km with inexpensive, readily available hardware.
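As a rough sanity check on those distances, the standard free-space path loss formula shows why a 20 km WiFi link is plausible with directional antennas on both ends; the frequency and distance below are illustrative values, not figures from the original post:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 20 km link at 2.4 GHz has roughly 126 dB of free-space path loss,
# which a pair of high-gain dish or panel antennas can realistically close.
loss_20km = fspl_db(20, 2400)
```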

 

Even if you are in an area with good LTE coverage for cellular IoT, you may find that a solution specifically designed for IoT is already readily available. One example is Sigfox, which is widely available in Europe. Sigfox has an established presence for IoT connectivity, and its radio modules are less expensive than those used for NB-IoT.

 

Customizability and Control: Another reason that you might prefer an LPWAN solution other than NB-IoT is customizability and control. You may want or require the flexibility that comes from keeping all configuration and capacity expansion decisions in house, rather than being constrained by the NB-IoT standard and operators.

 

Perhaps you are limited in funds, but you have a technical staff with the capabilities to design and maintain custom software or hardware optimized for your particular application challenges. A university or research consortium with a substantial pool of graduate students would likely fall into this category. The cost of the technical staff may be less than the ongoing wireless data expense of NB-IoT.

 

Finally, you may want or need to use a vendor-provided API to create tailored software for your application. Companies such as Telensa offer LPWAN solutions with this sort of flexibility.

 

In short, NB-IoT is a powerful and robust LPWAN solution that takes advantage of existing infrastructure and technical momentum. In some situations, however, it may not be the best choice. We will consider this topic further in the next blog post.

 

Follow our Next Generation Wireless Communications blog and connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

Near field communication (NFC) is the radio technology that allows you to pay for things by holding a credit card or cell phone up to a sensor. NFC technology is rapidly being adopted in transportation, retail, health care, entertainment, and other applications. Narrowband Internet of Things (NB-IoT) is a radio communications technology that uses existing cellular infrastructure to communicate with and collect data from remote sensors, possibly over a distance of several miles. At first glance, it may seem that these two technologies have nothing in common, but in fact they have several commonalities.

 

Commonalities of NFC and NB-IoT

 

The first commonalities are obvious: both NFC and NB-IoT are wireless communications technologies, and both will have a large impact on the world economy. Most new cell phones come standard with NFC, and the number of devices using each technology will likely soon outnumber people. Furthermore, both NFC and NB-IoT have significant requirements for data security, as both are tempting targets for attackers.

 

From a technical perspective, both NFC and NB-IoT transmit small data packets intermittently and infrequently. Both use very inexpensive transmitters, and both transmit with very small amounts of power. Therefore, both NFC and NB-IoT devices require extensive testing to make sure that they operate robustly and securely, even in noisy electromagnetic environments. This includes tests in the component development, system integration, certification, and operator acceptance phases of product development.

 

Testing NFC Devices

 

To guarantee a successful rollout, compliance and interoperability testing are required. Some industry organizations, such as the NFC Forum, have already developed technical and test specifications to enable developers to successfully test and certify their NFC devices. Keysight has multiple solutions for NFC testing, including the one-box, standalone RIDER Test System (shown below). The system can execute NFC RF and protocol measurements throughout the product development cycle, and it can flexibly evolve along with NFC standards.

 

RIDER Test System

 

The RIDER Test System can emulate NFC readers and tags; generate, acquire, and analyze signals; code and decode protocols; and perform conformance testing. It also includes an optional Automatic Positioning Robot (shown below) for the accurate, automatic positioning of test interfaces that NFC Forum specifications require.

 

Keysight InfiniiVision

Keysight InfiniiVision 3000T and 4000A oscilloscopes also include an NFC trigger option, which provides hardware NFC triggering, PC-based automated tests for Poller and Listener modes, and resonant frequency test capabilities. This automated NFC test system is ideal for manufacturing and design validation test.

 

Testing NB-IoT Devices

 

Like NFC devices, NB-IoT devices require sophisticated test solutions. In addition to RF testing and coverage enhancement testing, battery runtime testing is essential for NB-IoT devices. Because IoT devices transition quickly from deep sleep modes into short RF transmission modes, a large dynamic range is critical. Power analyzers with seamless ranging are therefore very popular for these applications.

 

The Keysight E7515A UXM Wireless Test Set is a robust test solution for NB-IoT. It acts as a base station and network emulator to connect to NB-IoT devices and control them in different operating modes. You can also synchronize it with the N6705C DC Power Analyzer to perform battery drain analysis on NB-IoT devices. See this application note for more details. The UXM Wireless Test Set makes Keysight the first company with end-to-end simulation, design verification test, and production/manufacturing test solutions for NB-IoT. Follow this blog for more information about IoT testing.

Once you have taken the steps described in the Simple Steps to Optimize Battery Runtime blog, you still have opportunities to reduce power consumption in your battery-powered device. Be sure to measure the actual current consumption before and after each change, and try to understand why the results are as observed. The more understanding you develop, the better you will be at predicting the effects of future changes. This will help you get future products to market faster with optimized battery runtime.

 

Hardware optimizations

Consider using a simple analog comparator instead of an analog-to-digital converter (ADC) to trigger certain functions. The ADC is likely to be more accurate and faster than the comparator, but it has a longer startup time and consumes more current. The comparator continuously compares signals against a threshold, and for some tasks, this may be sufficient. For cases where you need the accuracy and versatility of the ADC, turn off internal voltage references on the ADC and use Vcc as the reference voltage if possible.

 

Use two-speed startup procedures that rely on relatively slow RC timers to clock basic bootup tasks while the microcontroller unit (MCU) waits for the crystal oscillator to stabilize. Be sure to calibrate these internal RC timers or buy factory-trimmed parts.

 

Firmware optimizations

Use event-driven code to control program flow and wake up the otherwise-idle MCU only as necessary. Plan MCU wakeups to combine several functions into one wakeup cycle. Avoid frequent subroutine and function calls to limit program overhead, and use computed branches with fast table lookups instead of flag polling and long software calculations. Use single-cycle CPU registers for long software routines whenever possible.
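The computed-branch idea can be sketched as follows (shown in Python for readability; real firmware would be C on the MCU, and the event codes and handler names here are hypothetical). One table lookup dispatches each wakeup directly to its handler instead of polling a chain of flags:

```python
# Hypothetical event handlers for one wakeup cycle.
def read_sensor(event):
    return "sensor-read"

def transmit(event):
    return "radio-tx"

def housekeeping(event):
    return "housekeeping"

# Dispatch table: one fast lookup replaces a series of flag checks.
HANDLERS = {0x01: read_sensor, 0x02: transmit, 0x04: housekeeping}

def on_wakeup(event_code):
    """Wake, run exactly the work this event needs, then return to sleep."""
    handler = HANDLERS.get(event_code, housekeeping)
    return handler(event_code)
```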

 

Implement decimation, averaging, and other data reduction techniques appropriately to reduce the amount of data transmitted wirelessly. Also, make sure to thoroughly test various wireless handshaking options in an actual usage environment to strike the ideal balance between wasting time on unsuccessful communication attempts and performing excessive retries.
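A minimal sketch of the averaging approach, assuming block-wise reduction of buffered sensor readings (the sample values are illustrative):

```python
def decimate_by_mean(samples, block_size):
    """Average each full block of block_size readings into one value,
    so the radio transmits one number per block instead of every sample."""
    return [
        sum(samples[i:i + block_size]) / block_size
        for i in range(0, len(samples) - block_size + 1, block_size)
    ]

raw = [20.1, 20.3, 20.2, 20.4, 20.0, 20.2]   # six buffered temperature readings
reduced = decimate_by_mean(raw, 3)            # only two values are transmitted
```

A 3:1 reduction like this directly cuts radio on-time, which usually dominates the power budget.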

 

Your oscilloscope will probably be useful in obtaining quick measurements of these current waveforms, and depending on the communication protocol, an oscilloscope may be the only instrument with the necessary bandwidth to make such measurements. However, once you know the bandwidth of your signal, you may be able to use a DC power analyzer or device current waveform analyzer instead. These devices make measurements with better precision and provide more detailed analysis, such as automatic current profiles.

 

By implementing these strategies and measuring current consumption throughout your development process, you will quickly optimize battery runtime and drive success in IoT and other battery-driven applications for you and your customers.

 

Learn more about maximizing battery life of IoT smart devices by downloading helpful applications notes and webcasts from Keysight.

 


After you have selected your microcontroller unit (MCU), there are several simple steps you can take to optimize battery runtime. Correctly configuring and testing your hardware and firmware can help you to develop the optimal IoT device configuration.

 

Power budget

Begin by creating a theoretical power budget for your device. Using the MCU’s data sheet and manual, consider a complete cycle of events, such as waking, collecting data, processing data, turning on the radio, transmitting data, turning off the radio, and returning to sleep. Multiply the current by the duration of each step, and add the values to obtain a projected total for a typical operational cycle. Be sure to include the current consumed while the device is in its longest sleep mode; even nanoamps add up over long periods of time. Your MCU vendor should have software that helps you estimate current drain associated with various operational parameters, and you can use a DC power analyzer, digital multimeter (DMM), or device current waveform analyzer to fine tune the estimated values.
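The arithmetic above can be sketched as follows; every step, current, and duration here is a hypothetical example for illustration, not data from any specific MCU data sheet:

```python
# Hypothetical power budget for one operational cycle.
cycle_steps = [
    # (step, current_mA, duration_ms)
    ("wake",         2.0,  5),
    ("collect data", 3.5, 20),
    ("process data", 4.0, 10),
    ("radio on",    18.0,  2),
    ("transmit",    25.0, 15),
    ("radio off",    1.0,  2),
]
sleep_current_mA = 0.002   # 2 uA deep-sleep current; nanoamps add up
cycle_period_s = 60.0      # one full cycle per minute

def average_current_mA(steps, sleep_mA, period_s):
    """Charge per cycle (mA*s), including sleep, divided by the period."""
    active_s = sum(ms / 1000 for _, _, ms in steps)
    charge = sum(mA * ms / 1000 for _, mA, ms in steps)
    charge += sleep_mA * (period_s - active_s)
    return charge / period_s

avg = average_current_mA(cycle_steps, sleep_current_mA, cycle_period_s)
battery_life_h = 220.0 / avg   # e.g. a 220 mAh coin cell (ignoring derating)
```

With these example numbers the average drain works out to roughly 11 uA, which is why the long sleep intervals, not the brief transmit bursts, dominate the budget.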

 

Hardware configuration

Begin by optimizing the clock speed at which the MCU runs. The current consumption for many MCUs is specified in units of µA / MHz, which means that a processor with a slow clock consumes less current than a processor with a faster clock. However, a processor working at 100% capacity will consume the same amount of energy at 10 MHz as at 20 MHz, because the 20-MHz processor will consume twice the current for half as long. The conclusion is that for code segments where the processor is largely idle, you can save current by running the MCU more slowly.
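A quick calculation illustrates the uA/MHz point; the 150 uA/MHz active-current figure is an assumed spec for illustration:

```python
UA_PER_MHZ = 150   # assumed active-current spec, in uA per MHz

def charge_uc(task_cycles, clock_mhz):
    """Charge (microcoulombs) to execute task_cycles at a given clock:
    current scales up with clock, runtime scales down, so they cancel."""
    current_ua = UA_PER_MHZ * clock_mhz
    seconds = task_cycles / (clock_mhz * 1e6)
    return current_ua * seconds

# Same fixed workload at 100% CPU: identical charge at 10 MHz and 20 MHz.
busy_10 = charge_uc(1_000_000, 10)
busy_20 = charge_uc(1_000_000, 20)
```

The savings appear only when the processor is partly idle: at a lower clock the idle portions burn proportionally less current, which is the conclusion the paragraph above draws.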

 

Next, optimize the settings associated with data sampling. These settings include the frequency with which the sensor wakes up to collect data, the number of samples taken, and the ADC sampling rate. There is often a tradeoff between measurement accuracy and these sampling parameters, so set the sampling parameters to minimize current drain while delivering acceptable accuracy. Similarly, you may be able to change the rate at which the MCU updates the device display, requests data from sensors, flashes LEDs, or turns on the radio.

 

Finally, carefully examine the various idle, snooze, sleep, and hibernation modes available on your MCU. For example, some MCUs have sleep modes that disable the real-time clock (RTC), and disabling the RTC may reduce your sleep current consumption by a factor of six or more. Of course, if you do this, you will likely need some mechanism to recover the date and time, perhaps through a base station.

 

Firmware options

Design your program to finish each task quickly and return the MCU to sleep. Cycle power on sensors and other peripherals so that they are on only when needed. When you cycle sensor power, remember power-on stabilization time to avoid affecting measurement accuracy. For ultra-low-power modes, consider using a precision source/measure unit (SMU) to make very accurate current measurements, especially when you have the option to power the MCU at different voltage levels.

 

Consider using relatively low-power integrated peripheral modules to replace software functions that would otherwise be executed by the MCU. For example, timer peripherals may be able to automatically generate pulse-width modulation (PWM) and receive external timing signals.

 

Use good programming practices, such as setting constants outside of loops, avoiding declaring unnecessary variables, unrolling small loops, and shifting bits to replace certain integer math operations. Also, use code analysis tools and turn on all compiler optimizations.
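Two of these practices can be sketched as follows (in Python for brevity; the payoff of hoisting loop-invariant work and replacing divides with shifts is much larger in compiled C on an MCU):

```python
def scale_naive(samples, gain=3):
    """Before: loop-invariant work recomputed on every pass."""
    out = []
    for s in samples:
        divisor = 2 ** 4            # same value every iteration
        out.append((s * gain) // divisor)
    return out

def scale_tuned(samples, gain=3):
    """After: constant hoisted out; shift replaces integer divide by 16."""
    shift = 4
    return [(s * gain) >> shift for s in samples]
```

Both functions produce identical results for non-negative integer samples; only the work per iteration differs.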

 

Test and learn

Finally, use your instruments’ software tools to analyze the actual current consumption frequently as you develop the MCU code. These tools may include a complementary cumulative distribution function (CCDF) or automatic current profile, and they will give you information to refine your power budget. Observe and document how your coding decisions affect current consumption to optimize the present program and give you a head start on subsequent projects.

 


 


The number of IoT devices seems to be at an all-time high, from industrial sensors, to IoT clothespins (really!), to smart water dispensers for cats. For mainstream IoT applications, battery life is often a huge differentiator in buying decisions, so designing a device with an energy-frugal microcontroller unit (MCU) is a critical success factor. Once you have selected the MCU, however, your job has just begun. Your MCU firmware programming decisions can have effects ranging from fractions of a percent to an order of magnitude or more.

 

Architecture
Carefully select the MCU hardware architecture to match your application. For example, some MCUs include efficient DC-DC buck converters that allow you to specify voltage levels for the MCU and peripherals that can operate across a range of voltages. A transceiver IC that can operate at voltages ranging from 2.2 to 3.6 V will have substantially different power consumption, output power, attenuation, and blocking characteristics at different input voltages, so flexible DC-DC conversion is a plus. Also, an MCU may integrate RF radio capabilities, sensors, and other peripherals. Greater integration may decrease current draw by offering greater control over options that can reduce current.

 

Accelerators and peripherals
Some MCUs have hardware accelerators for rapid CRC computation, cryptography, random number generation, or other math-intensive operations. The high speed of these peripherals lets you put the MCU to sleep faster, but there may be a tradeoff between speed and current consumption. Other MCUs use a wake-up peripheral that saves time by using a low-power RC circuit to clock the MCU while the main crystal oscillator powers up and stabilizes, again saving MCU runtime. Some MCUs have sensors with buffers that accumulate multiple samples for efficient batch reading later – customers probably do not need their IoT aquarium thermometer to send data to the cloud every millisecond. Some of these peripherals may also improve the security of your device because they include hardware security features.

 

Memory
Every IoT device needs some memory to run its programs, but unused RAM simply wastes current. Some, but not all, MCUs allow you to turn off power to unused RAM. Also, the various memory technologies (EEPROM, SRAM, FRAM, flash, and so on) consume different amounts of power. Some MCUs use one memory technology for program storage and small SRAM caches for most program operations, so that the bulk of execution happens in low-power memory.

 

Low power states
The key to long battery life is to increase the time spent in low-power sleep states, but the sleep states in different MCU architectures vary dramatically. Furthermore, the names used for these states – sleep, hibernate, idle, deep sleep, standby, light sleep, snooze – lack consistency. Review the MCU’s low power modes carefully, along with the consequences of entering and exiting them, such as data loss, time to enter and exit the state, and requirements to reboot various levels of functionality.

 

Clocks
The variety of timers and clocks available on the MCU may also affect battery runtime. At a minimum, most MCUs have two clocks – a relatively fast master clock for fast reactions and signal processing and another, slower clock for keeping the real-time clock alive in sleep states. Other MCUs may combine a small current sink, a comparator, and an RC circuit to form a low-power wake-up source whose period depends on the circuit’s resistance and capacitance. This is less precise than a crystal, but it saves power, and it may be sufficient for some applications.

In conclusion, there are many things to consider when you select your MCU. Future articles will describe what to do after you have selected it.

The never-ending drive to increase IoT battery life is great for customers, but it poses extraordinary challenges for design engineers. As sleep mode currents edge ever-closer to zero, the challenge of making measurements across a wide dynamic range becomes increasingly difficult.

 

Consider wireless transceiver chips, illustrated in the chart below. Each line represents one transceiver, and the lines go from a given device’s lowest sleep current to its highest operating current. The dynamic range, of course, is the ratio of the two current levels, and the base two logarithm of that ratio indicates how many bits are required to represent the dynamic range.

Merely representing the dynamic range, however, is not sufficient for accurate measurements. For example, if your measurement instrument uses 18 bits to represent a 250,000:1 dynamic range that spans 25 mA (middle of the chart), then your measurement resolution is approximately 100 nA. When you measure relatively large currents in the mA range, this is fine, but when you measure the 100 nA sleep current, your accuracy is ±100 percent – a rather coarse value.

 

For 25% accuracy, you need two additional bits, because two extra bits divide your resolution by a factor of four. Similarly, for 10, 5, and 1 percent accuracy, you need 4, 5, and 7 additional bits, as summarized in the following table, which uses non-integer base two logarithms to reduce the number of bits in some cases.

Target accuracy    Additional bits
25%                2
10%                4
5%                 5
1%                 7
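The bit arithmetic above is easy to reproduce; this sketch uses the 250,000:1 dynamic range from the example in the chart discussion:

```python
import math

def bits_for_dynamic_range(ratio):
    """Non-integer base-2 log of the dynamic range ratio."""
    return math.log2(ratio)

def extra_bits_for_accuracy(accuracy_fraction):
    """Extra bits needed so one LSB is accuracy_fraction of the
    smallest signal (e.g. 0.25 for 25% accuracy)."""
    return math.ceil(math.log2(1 / accuracy_fraction))

base_bits = bits_for_dynamic_range(250_000)   # ~17.9, i.e. 18 bits to represent
extra_25 = extra_bits_for_accuracy(0.25)      # 2 extra bits for 25% accuracy
extra_10 = extra_bits_for_accuracy(0.10)      # 4 extra bits for 10% accuracy
extra_01 = extra_bits_for_accuracy(0.01)      # 7 extra bits for 1% accuracy
```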

It is, of course, difficult to find instruments that provide accurate current measurements with 20 or more bits of resolution in a given measurement range. The best solution is to use an instrument with seamless ranging that avoids glitching during range changes, or to use a dual range current probe with an instrument that has the intelligence to use the appropriate range as the current level changes.

Learn more about IoT device design and test and explore the available Keysight IoT test solutions.

How much does a $1.00 battery cost? That may seem to be self-answering, along the lines of, “Who is in Grant’s Tomb?” or, “When was the War of 1812?” However, a single $1.00 battery may actually cost you hundreds or thousands of dollars when you consider all of the costs associated with its failure. Given the proliferation of batteries in the Internet of Things (IoT), it is especially important to understand the costs involved.

 

Before the Battery Fails

Before the battery fails, you have the transaction costs associated with ordering, receiving, accounting for, and stocking the battery. If you fail to stock replacement batteries, you may need to have an employee make a special trip to purchase the battery or to pay for a delivery service, either one of which might easily cost you many times the price of the battery.

 

For some applications, such as implanted medical devices or remote security devices, you need to consider the cost of actively monitoring the remaining battery level. These sorts of runtime-critical applications may also require you to test and verify the replacement battery’s capacity and charge level.

 

When the Battery Fails

The minute the battery fails, additional costs accrue due to the loss of device functionality. Perhaps a dead battery is just a minor inconvenience, such as loss of remote control for a projector, but perhaps a dead battery delays a production process or customer engagement. In the extreme, a dead battery could endanger lives, as in military, outdoor adventure, or medical applications.

 

Once you have identified the need to replace the battery, there are costs associated with the person who replaces the battery. Depending on the application, the replacement may be performed by an entry-level employee, a skilled technician, or even a cardiologist or thoracic surgeon.

 

In addition to the employee costs, there may be disruption costs associated with the replacement process. For example, consider a telemetry device that transmits patient data to a nursing station. A battery change disrupts other activities of the nurse aide, and the patient often wakes up when the nurse aide changes the telemetry device battery. If the battery is inside the patient, as in an implantable defibrillator, the total cost of the surgery, anesthesia, hospital stay, and follow-up care can reach $50,000.

 

A battery replacement procedure may also include transportation costs when special equipment is required. A consumer may be able to replace a battery on an inexpensive watch, but a high-end or waterproof watch may require special disassembly or resealing equipment.

 

Finally, there are opportunity costs associated with battery replacement. Every minute and every dollar devoted to battery replacement is a resource that cannot be devoted to other activities.

 

After the Battery Replacement

Once the battery has been replaced, there are additional costs to be considered. There is the waste management cost borne by the company, and in some cases there is an additional environmental cost borne by the larger community. If the battery is rechargeable, there are costs associated with the equipment, power, and people involved in the recharging process.

 

A short battery runtime may negatively affect the user’s view of the product, and if two similar devices have similar features, battery life may be the deciding factor in customers’ purchase decisions. In the extreme, there may even be product recall costs or legal liability associated with failed batteries, especially in the medical field.

 

Conclusion

In summary, a $1 battery can end up costing users far more than the basic purchase price. There are costs before, during, and after replacement, and in extreme situations, battery runtime can even be a life safety issue. Design engineers who focus on improving the battery runtime of their devices can substantially improve their customers’ bottom lines, and in so doing they may generate improved sales and customer loyalty.