
Here’s a question for you. Is your connected home safe? I’m not talking about whether you remembered to lock your doors and turn on your security system. What I’m more interested in is if the wireless data being transmitted from one connected home device to another, and to the internet, is secure. It’s a potentially scary proposition and one you will likely have to confront in the very near future, if not already.

 

Gartner estimates that the number of connected things is growing by roughly a third each year; the latest forecast is up 31 percent from the year prior. By 2020, the number of connected things will top 21 billion. What that means is that sooner or later, you will have a growing number of consumer products and smart home devices connected in your home. From televisions, refrigerators, audio speakers, and home alarm systems to door locks, these devices will soon be able to talk to one another. Your home alarm system will be activated as soon as your door is locked. And when the temperature in your living room reaches a certain limit, your thermostat will automatically turn on the air conditioning. One day soon, this type of activity will become the norm in every household.

 

And that brings me back to my original point. Have you ever wondered how well these devices are talking to one another, or how safe your home will be when these devices start to send information wirelessly? These devices are not necessarily designed to perform across the full range of real-world conditions, and when they are not, their performance can fall off. That performance might be affected by the device’s distance from the nearest wireless access point, the density of wireless signals in the same area, interference from other radio-frequency noise sources, and of course, data interoperability. To make matters worse, the task of securing these smart devices is like trying to protect endangered species across the sub-Saharan desert or the Amazon forest.

 

Making sure new devices establish robust, reliable, and secure connections across the intended range of environments cannot be left to chance; it must be guaranteed. To do that, product makers, consumers, operators, and cloud providers alike will need to implement different strategies, coupled with the right test tools, to ensure connected homes stay both connected and safe.

 

Product Makers

With many connected things packed into a living area, product makers will need to test device performance in the presence of several wireless access points, and to ensure robust enterprise-grade security. They should also check for performance in the presence of other wireless emitters, such as microwave ovens operating in the same frequency band.

 

Consumers

For consumers, the best strategy is to raise public awareness about the dangers of buying hardware that connects to the unsecured internet. Without the proper protections in place, hackers could easily take over a home’s automation and collect sensitive personal information without the owner’s consent. Perhaps what’s needed to protect or help consumers is a public safety warning on every IoT device, much like the safety warnings found on alcohol bottles.

 

Operators and Cloud Providers

Most IoT devices and applications deployed in a cellular or cloud provider environment require low latency. Because of that, operators tend to move functionality and content to the edge of the network (edge computing) so they can respond to IoT devices almost instantaneously. However, sensitive data should be moved away from the edge to the cloud and secured with encryption. Cloud providers should do their part in data security by offering their customers services such as local encryption and digital certificates that authenticate third-party applications trying to communicate with the cloud service.

 

Granted, there is not much we can do to stop cyber criminals from trying to hack the smart devices in our connected homes. But, we can work together to make that task harder, if not impossible. Test and measurement vendors can play a critical role in this process by providing the solutions needed to perform end-to-end testing on devices before they hit the market. At least that way, consumers can be more assured that the devices themselves are secure. And, if consumers do their part by implementing their own security strategies, such as using strong passwords that are routinely changed, we can together ensure our connected homes are indeed safer and more secure.

 

For more information on solutions for ensuring smart device security, check out the following links: IoT Testing, Monitoring and Validation and BreakingPoint, an all-in-one applications and network security testing platform.

The Internet of Things (IoT) is changing EVERYTHING. There are literally billions of IoT devices around us today, with hundreds more coming online every second. By 2020, there will be roughly 50 billion connected devices. By 2028, the IoT may become so pervasive that we won’t even need to refer to devices as being part of it anymore. It will simply be a given that they are connected and interoperable.

As part of that evolution, the IoT will evolve from a focus on consumer-based applications like smart appliances for the home, connected clothing, and wearable fitness gadgets, to mission-critical applications for virtually every vertical IoT market there is. Mission-critical IoT devices will be used to automate energy distribution in smart grids, to enable remote machinery and remote surgery, and in autonomous vehicles for functions like automatic emergency detection and accident prevention. It’s happening already.

As these applications proliferate, what will emerge is a mission-critical ecosystem designed and hardened to withstand the rigors of the real world. It will be able to deliver new functionality and new efficiencies, and it will bring with it many new opportunities for IoT designers and manufacturing engineers alike.

Here are three important tips to help you realize success in the mission-critical IoT.

1. Understand your requirements

Unlike consumer-based IoT devices, mission-critical devices must work right every time, without fail. A failure in a pacemaker, after all, could result in a patient’s death. That’s why mission-critical IoT devices have specialized requirements dictated by the industry in which they will work. Most require rock-solid security, unfailing reliability—even in harsh environments and remote locations—and the ability to operate with little or no human intervention. They also must abide by any applicable industry or government regulations.

Making sure you fully understand the requirements of the product you are designing is the quickest and easiest way to avoid costly missteps during its development. It also improves your confidence that the product will perform as intended in the real world.

2. Don’t overlook these design considerations


Designing any product is hard. Designing a mission-critical IoT product is even harder! That’s because there are just so many things you have to consider. Here are a few of the considerations you should not overlook.

  • Battery Life. Many mission-critical IoT devices are not connected to power and often operate using a single battery for several years without maintenance or battery replacement. To ensure a long battery life, make sure your product’s battery and power management circuit have been optimized.

 

  • Signal and Power Integrity Issues. Interference and crosstalk between a product’s functional blocks can degrade performance. Ripple, noise, and transients riding on your circuit’s low-voltage rails can do the same. Be sure to identify and eliminate these issues.

 

  • EMI/EMC. Electromagnetic interference (EMI) can be problematic in scenarios where large numbers of IoT devices operate simultaneously in close proximity to one another. Be sure to weed out such problems early in the design process, when they are easier and cheaper to fix.

 

  • Wireless Connectivity. Mission-critical IoT devices have to perform in the presence of multiple users, with different wireless technologies, in the same spectrum. Verifying that your device can handle this load is critical to ensuring robust wireless connectivity.

 

  • Co-Existence and Interference. With lots of mission-critical IoT devices entering the market, the chance of interference between devices goes up, and that can impede the ability of your product to peacefully co-exist with others. This can be especially problematic in hospitals, where medical monitoring devices have to share the 2.4-GHz ISM band with the likes of cordless phones, wireless video cameras, and microwave ovens. Making sure your product can operate as anticipated in this type of environment is crucial.

 

3. Choose your tools wisely

While your creativity and skill are essential to a successful product, if it is not built on a solid foundation, it could all come crumbling down. To ensure your product’s foundation is solid enough to survive the real world, you have to choose the right tools for the job, and those tools must be accurate, high-performance, and flexible. There is no universal Swiss Army knife when it comes to designing for the mission-critical IoT.

 

One of the tools you should consider utilizing is battery drain analysis. It can help you accurately determine your device’s current use and the duration of each of its operating modes, which is critical information when trying to optimize battery life. Signal integrity and power integrity tools can be used to evaluate high-speed serial interconnect and analyze how effectively power is converted and delivered from the source to the load within a system. An accurate EMI simulation and modeling tool will allow you to estimate emission levels before your hardware is developed. And, to ensure your product can communicate effectively, wireless connectivity and co-existence testing are essential.
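As a concrete illustration of how battery drain analysis feeds into runtime estimates, here is a minimal sketch that computes average current and runtime from a duty-cycle profile. The mode currents, durations, and battery capacity below are illustrative assumptions, not measured values.

```python
# Minimal sketch: estimate average current and battery life from a duty-cycle
# profile. All numbers below are illustrative assumptions, not measurements.

modes = {
    # mode: (current_mA, seconds_per_cycle)
    "sleep":    (0.005, 59.0),   # deep sleep between reports
    "wake_up":  (2.0,    0.2),   # MCU and radio start-up
    "measure":  (5.0,    0.3),   # sensor read and processing
    "transmit": (120.0,  0.5),   # radio transmission burst
}

cycle_s = sum(t for _, t in modes.values())
charge_mAs = sum(i * t for i, t in modes.values())
avg_current_mA = charge_mAs / cycle_s

battery_mAh = 2400.0  # assumed primary cell capacity
life_hours = battery_mAh / avg_current_mA
print(f"Average current: {avg_current_mA:.3f} mA")
print(f"Estimated runtime: {life_hours / 24:.0f} days")
```

Measured current profiles from a battery drain analysis tool can simply replace the assumed mode table, which is what makes the optimization loop described above so effective.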

 

There is no denying that the mission-critical IoT is ripe with opportunity and will continue to be for the foreseeable future. But whether or not you succeed in this arena will depend heavily on the choices you make about your design’s requirements, your design considerations, and the test and measurement tools you use. If you are looking for more information on the choices you face, please go to www.keysight.com/find/IoT. And don’t forget to keep checking the Keysight Community Page for future blogs on other related IoT topics.

 

 

I was recently reminded of my over-the-air (OTA) experience with 5G channel sounding. It seemed like black magic at the time and, as it turns out, is vitally important for the success of 5G.

 

At Keysight, we realized early on that making measurements at millimeter-wave (mmWave) frequencies would be difficult. What we didn’t realize was that there would be so little information in the standards regarding how to test this far along in the development of 5G. The first 5G New Radio (NR) draft specification was released in December 2017. It documents the 3GPP physical layer, but the specifications for the mmWave test environment are absent. That means the chances of getting lost in your 5G OTA measurements go up dramatically. And that’s where I want to help.

 

Let’s review some of the things you should know about for 5G OTA measurements.

 

Are OTA tests really required?

 

Here’s a question you are probably asking yourself right about now: is OTA testing really needed? And here’s my answer: ABSOLUTELY!

 

Current sub-6 GHz RF performance tests are mostly done using cables. That changes when you move to massive MIMO at sub-6 GHz or mmWave frequencies. At mmWave frequencies, beamforming antenna technologies are used to overcome higher path loss and signal propagation issues and to take advantage of spatial selectivity by using narrow signal beams. Phased-array antennas, such as those shown in figure 1, are typically highly integrated devices, with antenna elements bonded directly to ICs, making them difficult, if not impossible, to connect and probe. OTA enables test, but it introduces a more challenging ‘air interface’ between the component or device and the base station, where the imperfections in the air interface need to be accounted for during test.

 


Figure 1. Example mmWave antenna arrays

 

You Have To Test OTA, But Now What?

 

Okay, so you need to test OTA, but what kind of tests? The types of measurements for 5G products vary throughout the development lifecycle and differ for a UE versus a base station. During design and development, RF parametric tests such as transmitted power, transmit signal quality, and spurious emissions are done as radiated transmitter tests. Base stations add tests such as occupied bandwidth and adjacent channel leakage ratio (ACLR), to name a few. Beam-pattern measurements in 2D and 3D and beamsteering or null-steering performance tests are also done during R&D. Conformance testing is also done to ensure the device meets 3GPP minimum requirements. These tests can be grouped into RF, demodulation, radio resource management (RRM), and signaling tests. This white paper provides an explanation of these tests: OTA Test for Millimeter-Wave 5G NR Devices and Systems.

 

OTA tests are typically conducted in the radiated near-field or radiated far-field region of the antenna system under test. Measurements in the far-field are conceptually the simplest type of OTA measurement and an approved method identified by 3GPP. A typical far-field anechoic chamber is shown in figure 2. With the appropriate probing and test equipment, 2D and 3D beam patterns and RF parametric tests can be performed. The challenge is selecting a reasonable chamber that won’t take up your entire lab space. The length of a far-field chamber is roughly determined by 2D²/λ, where D is the diameter of the device being tested and λ is the wavelength. With this in mind, a 15-cm device at 28 GHz would require a 4.2-m chamber, as shown in figure 3. These chambers will be large and quite expensive.

 

Figure 2. Far-field measurement

 

D (cm)    Frequency (GHz)    Near/far boundary (m)
 5        28                  0.5
10        28                  1.9
15        28                  4.2
20        28                  7.5
25        28                 11.7
30        28                 16.8

Figure 3. How far is far-field?
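For a quick sanity check of these numbers, here is a minimal sketch that reproduces the boundary distances in figure 3 using the 2D²/λ rule of thumb quoted above; the device sizes and 28 GHz frequency are simply the values from the table.

```python
# Minimal sketch: far-field boundary 2*D^2/lambda for several device sizes at 28 GHz.

C = 3e8  # speed of light, m/s

def far_field_boundary_m(diameter_m, freq_hz):
    wavelength = C / freq_hz
    return 2 * diameter_m**2 / wavelength

for d_cm in (5, 10, 15, 20, 25, 30):
    r = far_field_boundary_m(d_cm / 100.0, 28e9)
    print(f"D = {d_cm:2d} cm at 28 GHz -> far-field boundary ~ {r:.1f} m")
```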

 

An alternative for 5G RF tests that is being used by market leaders, and is now being considered by 3GPP, is the compact antenna test range (CATR). As shown in figure 4, a CATR uses a reflector to collimate the beam so that the signal appears to arrive from far away. This looks like a very promising direction for 5G OTA testing, and 5G market leaders see it as a comprehensive and accurate test method. However, that’s not the case for RRM tests, where there is no clear solution yet because of the many open issues posed by a dynamic, multi-signal 3D environment with signal tracking and handovers.

 

    Figure 4. Compact antenna test range (CATR)

 

If We Could Only Tell the Future

 

Yes, I know life would be a whole lot easier if we knew what to expect, but unfortunately the jury is still out on this one. What I can tell you about OTA is that progress is being made. 

 

There are many really smart PhDs working on solutions, but it’s going to take time to get these testing methods into the standards. In the meantime, I see solutions coming from market leaders working directly with test vendors to enable OTA tests of the first 5G devices and base stations. These OTA test solutions are the ones to watch, as they will pave the way for the standards.

 

If you are looking for more information on 5G OTA testing, then I highly recommend watching Malcolm Robertson’s video, Testing 5G: OTA and the Connectorless World, or reading the OTA Test for Millimeter-Wave 5G NR Devices and Systems white paper. Looking forward, I’ll keep you posted on important 5G topics and developments in future blogs.

Analyzing all the best tech conferences and meetings helped us throw a great 5G party

 

The tables were turned on this well-documented “5G Symposium Critic” last month. It all began last spring, when I am sure I visibly flinched as our group president said, “Let’s have a 5G Summit!”

 

Despite the risks of having YA5GE (Yet Another 5G Event), I was fortunate to host a very successful inaugural Keysight 5G Tech Connect event in which we drew on the best practices of the industry, and inserted a few of our own novel ideas.

 Roger Nichols kicking off the Keysight Tech Connect event

 

This post is a tribute to events and speakers who inspired us to throw an excellent technical party. We drew on many best practices, and here are just a few that are noteworthy:

 

Bookend With Charisma and Competence

 (Inspiration: 5G North American Workshop, hosted by Ericsson and Qualcomm, San Jose, Summer 2016)

 

Innovation happens when the unconstrained mind confronts the over-constrained problem. Making 5G real will require significant innovation, and the keynote speakers highlighted innovative thinking. Maryam Rofougaran, co-founder of Movandi, opened the pre-event dinner by describing how her organizations have managed these processes: through unprecedented mixed-signal IC integration in a previous role at Innovent and later at Broadcom, and now through new phased-array antenna technology for 5G.

 

Scene from Tech Connect

 

Peter Rabbeni of GlobalFoundries further underscored the potential of silicon technologies, even in our new millimeter-wave (mmW) world, during his opening keynote the next morning. And Dr. Mischa Dohler of King’s College London closed the event with an optimistic and energetic talk on the inevitability of 5G, enabled and driven by profound changes coming to networks—changes that will disrupt that business so it ultimately will not look at all like it does today.

 

Stay Technical

 (Inspiration: IWPC, pretty much any event Tom Watson and team do)

 

Recall my criticism of overtly or thinly veiled commercial presentations. 5G Tech Connect avoided this by focusing not just on technology, but on technology for measurement. Professor Gabriel Rebeiz (UCSD), Dr. YiHong Qi (GTS), and Emil Olbrich (Signals Research) introduced and led discussions on phased-array antennas, over-the-air measurement, and 5G NR device validation, respectively.

 

Notwithstanding a few pleasant (and unsolicited) plugs for Keysight by Gabriel and YiHong, the discussions remained focused on key challenges in the technology. Here are some of my insights:

  • Reinforcement of my prediction of mobile commercial mmWave coming only after 2022;
  • Renewed confidence in silicon technologies making headway in 5G mmWave; 
  • The inevitability of the uncomfortable marriage of licensed and unlicensed spectrum—starting in Licensed Assisted Access (LAA), but moving full-force in 5G.

 

Others reached additional insights, which means there was technical fodder for all involved.

 

Provide Fascinating Toys for Engineers to Play With

 (Inspiration: Brooklyn 5G Summit, 2017)

 

One cannot host a proper 5G event without the “show floor/demo room.” It is on this real estate that the “overt commercial” behavior often becomes crushing. So, we adopted three rules:

  • Keep our demonstrations constrained to very newly released and cutting-edge technology, or even capabilities that have yet to see commercial exposure;
  • Only have our deepest technical experts available to discuss these technologies; 
  • No lead sheets within 50 miles of the venue. We ran the risk of tipping our hand too soon on some of this capability, but the animated discussions in the crowded demo room were evidence that this recipe worked.

 

Demo at Tech Connect 

 

I walked away from that initial discussion on hosting a “5G Summit” with a feeling of dread. Those of you who have managed such things know the work involved—the planning, finding participants and speakers, last-minute changes, panic, elation, terror, and anger. And finally, relief—relief followed by pride in managing a good use of time for all involved. But pride has again been unseated by dread. We had not yet opened the post-event cocktail bar when the group president shook my hand, thanked me for an excellent experience, and said, “Let’s do one of these in Asia!”

Here are a few video segments from the Keysight Tech Connect event.

The previous blog post discussed NB-IoT, and why, although it is a great IoT wireless solution that leverages massive infrastructure and technical momentum, it may not be the best choice in certain locations or in applications that demand great flexibility.

 

There are three more situations where other LPWAN technologies may be superior. These situations involve considerations in the areas of security, cost, and application requirements.

 

Security: Because it uses existing cellular infrastructure, NB-IoT comes with security built in. However, as a TCP/IP-based technology, it is susceptible to denial-of-service attacks. Sensors that communicate with their gateway without using IP addresses are not susceptible to these attacks. Furthermore, NB-IoT is inappropriate for applications where data must be kept in-house, on physically secured servers; storing information on remote servers adds an additional layer of risk. Finally, you may want extra encryption for your data, especially if it is highly proprietary or associated with national defense or critical infrastructure. LoRaWAN has three layers of keyed encryption, which makes it exceptionally secure.

 

Cost: A second area where NB-IoT may not be ideal is cost. Depending on the cellular service provider, you are likely to encounter monthly device costs, SIM card costs, and data costs. For example, Verizon charges $2 per month for up to 200 kB of data. Prices do decrease dramatically for large data users; for example, Verizon charges less than $10 per GB. Where the volume of data per device is large, NB-IoT shines; for applications with many sensors that each generate small amounts of data, NB-IoT can be a very expensive proposition.
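To see why, here is a minimal back-of-the-envelope sketch using the illustrative rates quoted above. Real carrier pricing and plan structures vary, so the per-device rate, per-GB rate, device count, and data volume are all assumptions.

```python
# Minimal sketch: fleet connectivity cost with per-device plans vs. bulk data pricing.
# Rates ($2/device/month, ~$10/GB) are the illustrative figures quoted in this post.

def per_device_plan_cost(num_devices, cost_per_device=2.0):
    return num_devices * cost_per_device

def bulk_data_cost(total_gb, cost_per_gb=10.0):
    return total_gb * cost_per_gb

devices = 1000
kb_per_device = 50  # each sensor sends only 50 kB per month (assumed)

print("Per-device plans:        $%.0f / month" % per_device_plan_cost(devices))
print("Same data billed in bulk: $%.2f / month"
      % bulk_data_cost(devices * kb_per_device / 1e6))
```

For a thousand small-data sensors, the per-device plans dominate the bill even though the total data volume is tiny, which is exactly the scenario where NB-IoT becomes expensive.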

 

In addition to the monthly costs, you may have costs associated with the IoT device development process. These include the expenses associated with PTCRB (PCS Type Certification Review Board) certification and with carrier certification (tens of thousands of dollars per carrier). These certifications are critical to the success of NB-IoT; flawed devices on the cellular network could generate several types of problems, including slowing down the overall flow of data.

 

Finally, you may have specific requirements for your application that make NB-IoT either impossible or impractical. For example, what if you have a massive array of sensors? A WAVIoT solution can handle more than 2 million nodes per gateway, far more than NB-IoT. Perhaps you need a link budget larger than 164 dB, or you need data rates of up to 1 Mbps, such as LTE Cat-M1 provides.

 

Application requirements: Perhaps you have a high-mobility application that must be able to move from one cell to another in milliseconds when requested by the network (again, LTE Cat-M1 is a good choice here). Or perhaps your application is very specific, and you want to take advantage of work others have already done. One example would be using Wireless M-Bus for smart utility metering: because the system is optimized for a single application, it is very efficient and robust for its intended purpose, although it lacks many features that are common in general-purpose IoT LPWAN solutions. Another example would be a solution where you want to take advantage of an artificial intelligence (AI) offering such as IBM's Watson, using the IBM Watson IoT Platform.

 

Although there are situations where NB-IoT may not be the best choice, it is a powerful solution for many IoT applications. However, it is important to consider your application’s location, requirements, security needs, and budget before selecting an LPWAN solution for the IoT.

As 3GPP moves from LTE to LTE-Advanced Pro to New Radio, MIMO continues to increase in strategic importance, adding implementation complexity and, more specifically, channel count. MIMO has moved from 2 streams, to 8 streams with a small one-dimensional antenna array, to “full-dimension” MIMO (FD-MIMO, 8 streams radiated through a 64-element array with beamforming), and soon to Massive MIMO with hundreds of channels and array elements. Are you ready to test 64-, 128-, or 256-channel devices?

 

The promise of higher-order MIMO and beamforming is clear: they enable a superior end-user experience, providing higher data rates and a robust quality of service over a larger geographic area, without the need to add more highly regulated frequency spectrum. According to the Shannon-Hartley theorem, the three ways to increase data capacity are to increase the bandwidth, increase the number of channels, or increase the signal-to-noise ratio.
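As a rough illustration of those three levers, here is a minimal sketch based on the familiar C = B·log2(1 + SNR) capacity per independent channel; the bandwidths, SNRs, and stream counts are illustrative assumptions, not figures from any particular system.

```python
# Minimal sketch: the three Shannon-Hartley levers (bandwidth, channels, SNR).
import math

def capacity_bps(bandwidth_hz, snr_linear, streams=1):
    # Independent spatial streams scale the single-channel capacity.
    return streams * bandwidth_hz * math.log2(1 + snr_linear)

base         = capacity_bps(20e6,  10**(20 / 10))             # 20 MHz, 20 dB SNR
wider        = capacity_bps(100e6, 10**(20 / 10))             # 5x bandwidth
more_streams = capacity_bps(20e6,  10**(20 / 10), streams=4)  # 4 MIMO streams
better_snr   = capacity_bps(20e6,  10**(30 / 10))             # +10 dB SNR

for name, c in [("baseline", base), ("5x bandwidth", wider),
                ("4 streams", more_streams), ("+10 dB SNR", better_snr)]:
    print(f"{name:14s}: {c / 1e6:7.1f} Mb/s")
```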

 

 

If public policy constrains you to a fixed bandwidth, then taking advantage of multipath propagation to deliver more channels of data to a user (MIMO) and sending signals in the same time/frequency resource block in two different directions (beamforming) are techniques to maximize the other two Shannon factors.  

 

There is always a price to be paid, however: namely, the expense of adding more physical baseband and RF channels, or antenna array elements, or both, and therefore greater system implementation complexity and cost. More subtly, there is also an increase in validation and test complexity. How will you quantify how well these 64-, 128-, or 256-channel radios work?

 

Reduced Channel Count for Test

 

It seems self-evident that if you have a 64-channel FD-MIMO system, then you need to buy a 64-channel tester.  However, is that really true for testing purposes?  Or can you design a test architecture that also takes into account cost and physical floor space?  

 

RF and millimeter-wave test equipment certainly makes you think twice about these trade-offs. Regardless of vendor, the cost per channel and the larger form factors of millimeter-wave test equipment make adding wideband measurement channels a strategic decision, particularly for higher-volume testing. Moreover, one significant technical challenge is the calibration and alignment of 64 measurement channels to remove the system response of the measurement system itself.

 

It turns out that if you have control of multiple channels of live signals and are able to synchronize and trigger them repeatably, then you can use fewer live acquisition channels and algorithmically stitch the captures together into a larger, synthetic measurement of the true channel count. For example, 64 RF or baseband channels for an FD-MIMO base station can be captured sequentially using a smaller array of 2, 4, or 8 measurement channels. At least 2 channels are required for differential comparisons and alignments, but banks of 8 channels make a better trade-off of measurement speed versus equipment cost. These architectures require precise switching, control, calibration, and triggering, and they also introduce secondary considerations including post-processing and automation software, noise, flatness, timing skew, and other issues.
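To make the sequential-capture idea concrete, here is a minimal sketch of the control flow, assuming a repeatable stimulus and trigger. The functions switch_to_bank and arm_and_capture are hypothetical placeholders, not a real instrument API.

```python
# Minimal sketch: capture 64 channels with an 8-channel digitizer by switching
# banks and re-triggering a repeatable stimulus, then stitching the captures.
import numpy as np

NUM_CHANNELS = 64
BANK_SIZE = 8
SAMPLES = 4096

def switch_to_bank(bank_index):
    """Route DUT channels [bank*8 .. bank*8+7] to the 8 physical inputs (placeholder)."""
    ...

def arm_and_capture():
    """Arm the digitizer, wait for the repeatable trigger, return (8, N) complex samples (placeholder)."""
    return np.zeros((BANK_SIZE, SAMPLES), dtype=complex)

def capture_all_channels():
    data = np.empty((NUM_CHANNELS, SAMPLES), dtype=complex)
    for bank in range(NUM_CHANNELS // BANK_SIZE):
        switch_to_bank(bank)
        # Each bank sees the same repeated stimulus, so the sequential captures
        # can be assembled into one synthetic 64-channel measurement.
        data[bank * BANK_SIZE:(bank + 1) * BANK_SIZE, :] = arm_and_capture()
    return data
```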

 

Keysight has delivered systems like this and calls the subsystem a “MIMO channel expander” that virtualizes a higher-channel-count measurement reference system. It is available as part of the S5020A MIMO reference solution. The result is that a reduced 8-channel system costs a fraction of a 64-channel system, fits in a single rack, can support tomorrow’s gigahertz bandwidths and 5G frequency bands, and successfully trades slower acquisition time for huge savings in system cost and complexity.

 

Calibrating Test System Flatness and Time Alignment

 

Multi-channel MIMO and beamforming systems have tight alignment and timing requirements to form and steer beams in different directions.  The primary sources of measurement uncertainty in these systems include the channel-to-channel variations in magnitude and phase (flatness), as well as time alignment (timing skew) due to cabling and instrument-to-instrument triggering (Figures 1 and 2).  

 

The S5020A MIMO reference solution includes software that addresses this challenge with calibration routines for the full channel count (covering the paths in the switch matrix) as well as for each physical measurement channel in the test equipment. Timing skew (primarily from cabling and signal distribution) is reduced to low picoseconds per channel; amplitude and phase variations are reduced to fractions of a dB and tenths of a degree across a full 1 GHz bandwidth at 28 GHz.

Figure 1 – Magnitude and phase of 8 raw measurement channels, before calibration.

 

Figure 2 – Magnitude and phase flatness of 8 corrected measurement channels.
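Conceptually, the correction applied to each channel looks something like the following minimal sketch, which removes a measured complex frequency response and undoes a residual timing skew in the frequency domain. The function name, data shapes, and sign conventions are assumptions for illustration, not the actual S5020A software.

```python
# Minimal sketch: per-channel flatness and timing-skew correction in the frequency domain.
import numpy as np

def correct_channel(samples, sample_rate_hz, cal_response, skew_s):
    """samples: 1-D complex baseband capture for one measurement channel.
    cal_response: measured complex frequency response of that channel
    (same length as the FFT). skew_s: residual timing delay in seconds."""
    spectrum = np.fft.fft(samples)
    freqs = np.fft.fftfreq(samples.size, d=1.0 / sample_rate_hz)
    # Remove the channel's own magnitude/phase response...
    spectrum /= cal_response
    # ...and undo the timing skew as a frequency-proportional phase shift.
    spectrum *= np.exp(2j * np.pi * freqs * skew_s)
    return np.fft.ifft(spectrum)
```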

 

From this core set of conducted measurements, external radiation patterns can often be calculated or inferred from post-processing, reducing the need for chambers or nearfield probes (Figure 3). 

 


Figure 3 – Calculated radiation patterns post-processed from cabled, multi-channel measurements.  Note the low residual sidelobe levels; this represents excellent measurement performance.

 

3D visualization and beamwidth predictions of the full 64-channel system are included in the MIMO software. With regards to these 3D beam visualizations, one practical observation is that uncorrected physical measurement system errors (such as gain, phase, and timing skew) increase an effective sidelobe “noise level” around the intended beam direction.  This lower “sidelobe dynamic range” masks the true sidelobe performance of the array-under-test.  When these residual channel-to-channel measurement errors are removed using calibration, the residual sidelobe “noise” can be reduced to 10-20 dB below the true sidelobes of the array-under-test (Figure 4).  Having sufficient baseband digitizer bandwidth and dynamic range, clean RF paths with low spurs and noise, and repeatable triggering/time alignment are critical to achieving this result.

 

Figure 4 – Visualized 64-element beamforming and sidelobes, based on conducted measurements, before and after calibration of the channel-to-channel flatness and RMS timing skew.

 

Summary

 

The engineering effort required to establish the calibration and alignment of the measurement system turned out to be significant, relative to the lower complexity of previous generation single-channel and low-order MIMO systems at lower frequencies. Bundling this capability together with phase-coherent source and analyzer hardware delivered significant value to some of Keysight’s most advanced MIMO and beamforming customers.


There will always be measurement trade-offs of how many channels and what performance is necessary at specific phases of the product lifecycle: from early R&D prototyping to design validation vs. conformance testing vs. volume manufacturing.  In this case study, the key insight was that a user-controlled, repetitive stimulus and excellent raw performance allowed the same measurement to be iterated over a large channel count, with acceptable measurement results and dramatically lower cost.

 

Are you ready for 5G?  Keysight can help you handle the complexities of testing MIMO and beamforming systems.

 

Further info

The NB-IoT LPWAN radio technology is a great solution for many IoT applications because it leverages long-proven cellular radio technology and infrastructure that is supported by numerous cellular providers worldwide. Furthermore, NB-IoT has good security, and it is an LTE specification from 3GPP, which gives it substantial technical momentum for evolution today and in the future. Learn more about NB-IoT design and test challenges.

 

However, NB-IoT is not the right solution for every IoT application, and you should consider the information below to determine whether other LPWAN technologies  might better suit your application context and objectives. Note, however, that a technology that has an advantage over NB-IoT in one area may have a significant disadvantage in another area. Selecting an IoT radio technology involves a complex set of tradeoffs.

 

Coverage area: One reason that NB-IoT may not be the best choice is that your application is in an area with no or poor LTE cellular coverage; perhaps the local cellular technology is GSM or CDMA, which is incompatible with NB-IoT. One LPWAN alternative, long-range WiFi, has been proven to work at over 350 km in certain cases. To be fair, very long-range WiFi is not common, but it is relatively straightforward to achieve distances over 20 km with inexpensive, readily available hardware.

 

Even if you are in an area with good LTE coverage for cellular IoT, you may find that a solution specifically designed for IoT is already readily available. One example is Sigfox, which is widely available in Europe. Sigfox has an established presence for IoT connectivity, and its radio modules are less expensive than those used for NB-IoT.

 

Customizability and Control: Another reason that you might prefer an LPWAN solution other than NB-IoT is customizability and control. You may want or require the flexibility that comes from keeping all configuration and capacity expansion decisions in house, rather than being constrained by the NB-IoT standard and operators.

 

Perhaps you are limited in funds, but you have a technical staff with the capabilities to design and maintain custom software or hardware optimized for your particular application challenges. A university or research consortium with a substantial pool of graduate students would likely fall into this category. The cost of the technical staff may be less than the ongoing wireless data expense of NB-IoT.

 

Finally, you may want or need to use a vendor-provided API to create tailored software for your application. Companies such as Telensa offer LPWAN solutions with this sort of flexibility.

 

In short, NB-IoT is a powerful and robust LPWAN solution that takes advantage of existing infrastructure and technical momentum. In some situations, however, it may not be the best choice. We will consider this topic further in the next blog post.

 

Follow our Next Generation Wireless Communications blog and connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

The emergence of 5G mobile communications is set to revolutionize everything we know about the design, testing, and operation of cellular systems. The industry goal to deploy 5G in the 2020 timeframe demands mmWave over-the-air (OTA) test solutions and requirements in little more than half the time taken to develop the basic 4G MIMO OTA test methods we have today.

 

If you highlight anything from this blog post, know this:

 

We are going to have to move all of our testing to radiated, not just some of it like we do today, and that's a big deal.

 

 

First, a bit of background on the move from cabled to radiated testing, and then I’ll discuss the three main areas of testing that we're going to have to deal with: RF test, demodulation test, and radio resource management.

 

Millimeter-wave devices with massive antenna arrays cannot be tested using cables because there will be no possibility to add connectors for every antenna element. The dynamic (active) nature of antenna arrays means it isn’t possible to extrapolate end-to-end performance from measurements of individual antenna elements. So yes, for testing 5G, it really is time to throw away the cables…whether we want to or not!

 

A new radio design starts with the reality of the deployment environment, in this case a mmWave one. How this behaves isn’t a committee decision, it’s just the laws of physics. Next, we model the radio channel and once we have a model, we can design a new radio specification to fit the model. Then, we design products to meet the new radio specifications, and finally we test those products against our starting assumptions in the model.

 

If we have got it right—in other words, if the model is sufficiently overlapped with reality—then products that pass the tests should work when they are deployed in the real environment. That's the theory.

 

This process works well at low frequencies. For mmWave, however, there is a big step up as the difference in the propagation conditions is enormous.

 

Now let’s look at the categories of radio requirements that we're going to have to measure—that is, what we measure and the environments we measure them in.

 

For RF, it’s about what is already familiar—power, signal quality, sensitivity—and those are all measured in an ideal line-of-sight channel.

 

With regards to demodulation, throughput tests will be done in non-ideal (faded) conditions as was the case for LTE MIMO OTA. There we had 2D spatial channels, but for mmWave, the requirement will be 3D spatial channels because the 2D assumptions at low frequencies are no longer accurate enough.

 

Radio resource management (RRM) requirements are about signal acquisition and channel-state information (CSI) reporting, signal tracking, handover, etc. That environment is even more complicated because now we’ll have a dynamic multi-signal 3D environment unlike the static geometry we have for the demodulation tests.

 

Opportunities and Challenges

 

The benefits of 5G and mmWave have been well publicized. There's a lot of spectrum that will allow higher network capacity and data rates, and we can exploit the spatial domain and get better efficiencies. However, testing all of this has to be done over the air and that presents a number of challenges that we have to solve if we're going to have satisfied 5G customers.

 

 

 

We know that we're going to have to use active antennas on devices and base stations, and those are hard to deal with.

 

We know that spatial tests are slower than cabled, so you can expect long test times.

 

We've got the whole issue of head, hand, and body blocking on devices—it's something that isn’t being considered for Release 15 within 3GPP but will still impact customer experience.

 

We know that OTA testing requires large chambers and is expensive.

 

We know OTA accuracy is not as good as cabled testing—we're going to have to get used to that.

 

Channel models for demodulation and RRM tests haven’t been agreed upon yet, which is impacting agreement on baseline test methods for demodulation and RRM.

 

Takeaways

 

There's a paradigm shift going on because of mmWave. We used to work below 6 GHz and the question we asked at < 6 GHz frequencies was, "How good is my signal?" That question led to the development of non-spatial conducted requirements. The question now for mmWave is, "Where is my signal?" That's going to lead to the development of 3D spatial requirements and OTA testing. This is a fundamental shift in the industry.

 

It’s going to be a tall order…testing 5G mmWave devices.

 

Keysight is committed to getting our customers on the fastest path to 5G. Stay tuned as Keysight continues to roll out 5G testing methodologies and system solutions. Meanwhile, explore the 5G resources currently available.

 

Follow our Next Generation Wireless Communications blog and connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

My experience with broadcast TV in 1960s Colorado was fraught with “ghost” images that distorted our television screen during episodes of my favorite TV programs. My parents’ explanation of this being an “echo” from the mountains was very confusing. How could light have an echo? It was in my early days at Hewlett-Packard that I learned the physics of multipath interference; and it was much later that I encountered the technology that would take advantage of these physics rather than fight them.

 

Multiple Input, Multiple Output: this act of adding an advantageous term to the Shannon-Hartley theorem to squeeze a few more bits per second from our precious spectrum is enjoying its highest popularity ever. And while it sounds like a new concept, the careful reader will note a reference in Dr. Thomas Marzetta’s seminal 2010 paper on Massive MIMO to a fascinating paper dating from 1919 (Alexanderson, Ernst F. W., “Trans-Oceanic Radio Communication,” Transactions of the American Institute of Electrical Engineers, Volume XXXVIII, Issue 2, July 1919). Considerations that smell a lot like MIMO appear to date from a time when the founders of radio communications were but recently in their graves.

 

Most descriptions of Massive MIMO are either opaque with multi-dimensional calculus, or full of simple brightly colored cartoon diagrams of antennas with lightning bolts. Novices like myself struggle with the topic and there is even significant debate amongst the experts.

 

MIMO is the use of multiple independent transmit and receive chains, each connected to its own antenna, to take advantage of the different and independent paths that radio waves follow in a reflective environment. Sophisticated baseband systems split and reassemble signals to and from these different paths to create multiple useful radio communications channels out of what used to be just one. This enables any of the following:

  1. Use of more than one path to decrease the error rate of a single set of data
  2. Use of more than one path for different sets of data
  3. Manipulation of the inherent nature of multipath interference to either cancel or emphasize the signal at any physical location in the radio channel

 

#3 is the essence of what is now called “Massive MIMO.” But based on the heated discussions at industry and academic symposia, it is clear there is disagreement about “Massive MIMO” in 5G. A few of the more hotly debated topics:

Is “Massive MIMO” the same as “Beamforming”? No—as above. MIMO can take advantage of beamforming and indeed FD MIMO has two modes that are strictly referred to as “beamforming” modes. But beamforming is done in many non-MIMO applications.

 

How many antennas does it take to be “Massive”? Dr. Marzetta stipulates that “Massive” means not only “many antennas” (many more base station antennas than users—and more is always better) but also that each antenna is part of an independent transceiver chain. Given the economics of the technology available in the 5G time frame, though, that probably means something fewer than 600.

 

Is FD-MIMO “Massive”? 3GPP’s FD-MIMO, introduced in Release 13, has a 64-antenna-element count, so many deem it “massive.” But 64 antennas is “much greater than” what? Probably not much more than 10 UEs. Is FD-MIMO really about servicing only 10 UEs in any one cell? Probably not.

 

Can you do Massive MIMO in FDD systems? At least one implementation of FD-MIMO in the Release 13 standard is for FDD scenarios. If one accepts that FD-MIMO is “massive,” the answer to this question is “yes.” But given the lack of scalability, I do recall Dr. Marzetta stating flatly (and I quote): “FDD is a disaster. End of story.”

 

Will we get Massive MIMO that improves capacity, energy efficiency, and spectral efficiency for 5G systems? Yes. MWC 2017 hosted impressive Massive MIMO demonstrations, and the promise of using new digital technologies to take full advantage of a rich radio channel continues to drive innovation. I look forward to it, just as I look forward to the next related heated discussion, which perhaps will be a result of this very post.

 

Learn the latest information on Massive MIMO in 5G and other 5G testing

 

Follow our Next Generation Wireless Communications blog and connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

Near field communication (NFC) is the radio technology that allows you to pay for things by holding a credit card or cell phone up to a sensor. NFC technology is rapidly being adopted in transportation, retail, health care, entertainment, and other applications. Narrowband Internet of Things (NB-IoT) is a radio communications technology that uses existing cellular infrastructure to communicate with and collect data from remote sensors, possibly over a distance of several miles. At first glance, it may seem that these two technologies have nothing in common, but in fact they have several commonalities.

 

Commonalities of NFC and NB-IoT

 

The first commonalities are obvious: both NFC and NB-IoT are wireless communications technologies, and both will have a large impact on the world economy. Most new cell phones come standard with NFC, and the number of devices using each technology will likely soon outnumber people. Furthermore, both NFC and NB-IoT have significant requirements for data security, as both are tempting targets.

 

From a technical perspective, both NFC and NB-IoT transmit small data packets intermittently and infrequently. Both use very inexpensive transmitters, and both transmit with very small amounts of power. Therefore, both NFC and NB-IoT devices require extensive testing to make sure that they operate robustly and securely, even in noisy electromagnetic environments. This includes tests in the component development, system integration, certification, and operator acceptance phases of product development.

 

Testing NFC Devices

 

To guarantee a successful roll out, compliance and interoperability testing are required. Some industry organizations, such as the NFC Forum, have already developed technical and test specifications to enable developers to successfully test and certify their NFC devices. Keysight has multiple solutions for NFC testing, including the one-box, standalone RIDER Test System (shown below). The system can execute NFC RF and protocol measurements throughout the product development cycle, and it can flexibly evolve along with NFC standards.

 

RIDER Test System

 

The RIDER Test System can emulate NFC readers and tags; generate, acquire, and analyze signals; encode and decode protocols; and perform conformance testing. It also includes an optional Automatic Positioning Robot (shown below) for the accurate, automatic positioning of test interfaces that NFC Forum specifications require.

 

Keysight InfiniiVision

Keysight InfiniiVision 3000T and 4000A oscilloscopes also include an NFC trigger option. It includes hardware NFC triggering, PC-based automated tests for Poller and Listener modes and resonant frequency test capabilities. This automated NFC test system is ideal for manufacturing and design validation test.

 

Testing NB-IoT Devices

 

Like NFC, NB-IoT devices require sophisticated test solutions. In addition to RF testing and coverage enhancement testing, battery runtime testing is essential for NB-IoT devices. Because IoT devices switch quickly from deep sleep modes into short RF transmission bursts, a large dynamic range is critical. Power analyzers with seamless ranging are therefore very popular for these applications.

 

The Keysight E7515A UXM Wireless Test Set is a robust test solution for NB-IoT. It acts as a base station and network emulator to connect and control the NB-IoT devices into different operating modes. You can also synchronize it with the N6705C DC Power Analyzer to perform battery drain analysis on NB-IoT devices. See this application note for more details. The UXM Wireless Test Set makes Keysight the first company with end-to-end simulation, design verification test and production/manufacturing test solutions for NB-IoT. Follow this blog for more information about IoT testing.

With the 5G technology evolution just on the horizon, you can feel the momentum, excitement, and even tension building in the wireless industry. 3GPP is accelerating the New Radio (NR) standard and mobile network operators are fast-tracking their deployment plans. Mobile data demand continues to grow at a rapid pace, and new device categories, such as VR/AR headsets, connected/autonomous cars, and cloud-based AI-enabled home assistants, are gaining traction and poised to take advantage of new 5G infrastructure. Although massive machine-type communications (mMTC) is taking a backseat to enhanced mobile broadband (eMBB) in the initial 5G standard, operators are deploying new cellular IoT technologies, such as NB-IoT and Cat-M, yielding new business models in more industry verticals.

5G

Opportunities abound and so do challenges. To unleash the 5G business potential, chipset and device engineers must overcome significant challenges in the technology evolution and revolution:  mm-wave propagation and channel modeling, wideband calibration, antenna complexity, beamforming, over-the-air measurements, protocol optimization for peak data throughput, digital interface capacity, battery life; the list goes on. Those of us in the test and measurement world are addressing the same challenges. We have the extra requirement of ensuring tools are ready in time (i.e. before!) for the designers of 5G systems. But Keysight engineers have also taken full advantage of our deep experience in aerospace & defense mm-wave and wideband applications and from serving the previous four generations of wireless communications.

 

Collaborating Globally to Realize 5G Technologies

 

But that list of technical issues is long, and the answers require teamwork. (Roger Nichols’ last blog described the benefits of early and broad engagement in these generational changes: Getting Better in 5G—with a Little Help from Your Friends.) Operator trials around the world are demonstrating promising results and continue to progress toward commercialization. The path to 5G is forming through the 4G evolution and pre-5G activities. While there is the inevitable concern about the “killer 5G use case,” we see multiple ideas evolving over time: from stand-alone fixed wireless access for the home to non-standalone mobile access with interworking between traditional LTE networks and the new 5G standard. In addition, regulators are opening new spectrum between 3 and 5 GHz for mobile wireless applications. Industry-leading chipset vendors have publicly announced 5G modem solutions that will support NR at both sub-6 GHz and mm-wave frequencies as well as legacy cellular formats. That breadth of new technology, the addition of legacy technology, the mix of use cases, and the range of carrier frequencies and bandwidths add up to a huge map of complexity. Collaboration is necessary for success, and we were excited to announce another example just last week: Keysight 5G test solutions have been selected by Qualcomm to help accelerate their commercialization strategy, and we are collaborating to test their 5G chipsets. Keysight Technologies Selected by Qualcomm for 5G Test Solutions

 

Enabling the Fastest Path to 5G

 

So what does it take to gain the confidence of key players in the industry? Like making an excellent 5-course meal, the recipe is simple but the execution is rather involved.

  1. Be there on time with the necessary tools. Our most recent example is this month’s announcement of Keysight’s 5G Protocol R&D Toolset: Keysight Enables Prototyping of Next-Generation Mobile Devices with the Industry's First 5G Protocol R&D Toolset. Implementing and validating NR will be particularly challenging given new capabilities like flexible numerology, channel codes, and management of active antenna systems (AAS). See Keysight 5G Network Emulation Solutions and the 5G Protocol R&D Toolset.
  2. Work very closely with market leaders to understand the nuances of their design needs.
  3. Add to and update the tools; even make real-time adjustments to plans. Like our first 5G SystemVue solution launched in 2015, the solution described above is only first in a series of (in this case) network emulation solutions to address the 5G device workflow.

 

What Will we See Next?

 

The tone of recent public 5G events is a glorious mix of hype, amazing technology, and anxiety about how best to succeed in this industry. We remain convinced that the business model concerns will be addressed by the same kinds of smart people who developed everything from smartphones to the most popular social media applications. It is up to us on the technology side to be ready for those innovative business ideas that will use the 5G network to the hilt. It is very exciting to be focused on enabling time to market and cost efficiencies by delivering solutions that stitch together the wireless device development workflow.

Explore new signals, scenarios and topologies for 5G mobile communication

 

Follow our Next Generation Wireless Communications blog and connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

5G promises substantial improvements in wireless communications, including higher throughput, lower latency, improved reliability, and better efficiency. Achieving these goals requires a variety of new technologies and techniques: higher frequencies, wider bandwidths, new modulation schemes, massive MIMO, phased-array antennas, and more.

 

These bring new challenges in the validation of device performance. One of the key measurements is error vector magnitude (EVM), which is an indicator of quality for modulated signals as they pass through a device under test (DUT). In many cases, the EVM value must remain below a specific threshold—and getting an accurate measurement requires that the test system itself be very clean (i.e., have a low EVM of its own). This includes all fixtures, cables, adaptors, couplers, filters, pre-amplifiers, splitters, and switches between the DUT and the measurement system.
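For readers who have not computed it before, EVM is simply the RMS error vector relative to the RMS reference, usually expressed in percent. Here is a minimal sketch with synthetic, illustrative data; real test software also handles synchronization, equalization, and normalization before this step.

```python
# Minimal sketch: EVM as RMS error vector power over RMS reference power, in percent.
import numpy as np

def evm_percent(measured, reference):
    error = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(error) ** 2)
                           / np.mean(np.abs(reference) ** 2))

# Example: ideal QPSK symbols with a little additive noise (assumed values).
rng = np.random.default_rng(0)
ref = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
meas = ref + 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(meas, ref):.2f} %")
```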

 

At 5G bandwidths and frequencies, the test fixture can impose a significant channel frequency response on the test system and adversely affect EVM results. Hence, the measurement now includes the characteristics of the test fixture and the DUT—and this makes it difficult, if not impossible, to determine the true performance of the DUT.

 

Calibration can move the test plane from the test instrument connector to the input connector of the DUT (Figures 1, 2). Keysight has created a solution that uses a NIST-traceable reference comb generator to enable complete channel characterization of the test fixture on both sides of a transceiver (or any other component or device).

Figure 1. This uncalibrated test system has unknown signal quality at the input to the DUT (A1’). A common mistake is to simply use equalization in the analyzer (A2), but this occurs after the DUT and it also removes some of the imperfect device performance we’re trying to characterize.

Figure 2. In this calibrated test system, the system and fixturing responses have been removed, enabling a known-quality signal to be incident to the DUT (B1). The analyzer errors can also be removed (B2).

 

Figure 3 shows the uncalibrated test fixture equalizer response for a 900 MHz bandwidth signal at 28 GHz. The upper trace shows the amplitude response, with significant roll-off at the upper end of the bandwidth. The lower trace shows the phase response, which also has considerable variation over the bandwidth. These imperfections would limit EVM to no better than about 5 percent.

Figure 3. These OFDM frequency response corrections for an uncalibrated system show variations of nearly 7 dB in raw amplitude and 45 degrees of phase across a 900 MHz bandwidth at 28 GHz

 

 

Figure 4. Here is the same OFDM response for a calibrated system, showing variations of only 0.2 dB and 2 degrees. The resulting signal EVM dropped to less than 1 percent from more than 5 percent.

 

Figure 5 shows the demodulation results after calibration for a single-carrier 16QAM signal nearly 1 GHz wide. The upper-left trace shows a very clean constellation diagram. The lower-left trace shows the spectrum, with a bandwidth of approximately 1 GHz. The equalizer response trace shows both magnitude and phase: both are nearly flat, indicating the equalizer is not compensating for any residual channel response in the test fixture. The lower-middle trace shows the error summary: EVM is approximately 0.7 percent, which is a very good result. This system would be ideal for determining a device’s characteristics.

 

Figure 5. Calibration enabled generation of a 1-GHz-wide signal with an EVM of less than 0.7 percent at 28 GHz. This EVM is achieved at the input plane of the DUT.
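
As a rough illustration of how an RMS EVM figure like this is computed from demodulated data (a minimal sketch, not Keysight’s implementation), the following C routine evaluates EVM from arrays of measured and ideal I/Q symbols. It normalizes to the average reference power; real analyzers follow the normalization defined by the relevant standard.

#include <math.h>
#include <stddef.h>

/* Minimal sketch: RMS EVM (in percent) from measured vs. ideal I/Q symbols.
   Normalization uses the average reference power; standards may specify otherwise. */
double evm_rms_percent(const double *i_meas, const double *q_meas,
                       const double *i_ref,  const double *q_ref, size_t n)
{
    double err_pwr = 0.0, ref_pwr = 0.0;
    for (size_t k = 0; k < n; ++k) {
        double ei = i_meas[k] - i_ref[k];                  /* error vector, I part */
        double eq = q_meas[k] - q_ref[k];                  /* error vector, Q part */
        err_pwr += ei * ei + eq * eq;
        ref_pwr += i_ref[k] * i_ref[k] + q_ref[k] * q_ref[k];
    }
    return 100.0 * sqrt(err_pwr / ref_pwr);                /* the 1/N factors cancel */
}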

 

In pursuit of tremendous improvements in cellular network capability, 5G is using new technologies that pose many challenges to testing. Fortunately, calibration will help ensure that we’re measuring the true performance of the DUT without the effects of the test fixture.

 

We can help you learn more about the testing of 5G wireless technologies.

 


Once you have taken the steps described in the Simple Steps to Optimize Battery Runtime blog, you still have opportunities to reduce power consumption in your battery-powered device. Be sure to measure the actual current consumption before and after each change, and try to understand why the results are as observed. The more understanding you develop, the better you will be at predicting the effects of future changes. This will help you get future products to market faster with optimized battery runtime.

 

Hardware optimizations

Consider using a simple analog comparator instead of an analog-to-digital converter (ADC) to trigger certain functions. The ADC is likely to be more accurate and faster than the comparator, but it has a longer startup time and consumes more current. The comparator continuously compares a signal against a threshold, and for some tasks this may be sufficient. For cases where you need the accuracy and versatility of the ADC, turn off the ADC’s internal voltage references and use Vcc as the reference voltage if possible.
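
As a sketch of the comparator-based approach, the routine below arms a comparator interrupt as the wake-up source and leaves the ADC off. The HAL call names and options are hypothetical placeholders rather than any particular vendor’s API; consult your MCU’s documentation for the real ones.

/* Hypothetical vendor HAL calls, for illustration only; real names and
   options are MCU-specific. */
void COMP_SetReference(int ref);
void COMP_EnableInterrupt(int edge);
void COMP_Enable(void);
void ADC_Disable(void);
void enter_low_power_mode(void);

#define COMP_REF_VCC_DIV2  0
#define COMP_EDGE_RISING   1

void configure_threshold_wakeup(void)
{
    COMP_SetReference(COMP_REF_VCC_DIV2);    /* compare the input against Vcc/2        */
    COMP_EnableInterrupt(COMP_EDGE_RISING);  /* wake the MCU only on a rising crossing  */
    COMP_Enable();
    ADC_Disable();                           /* keep the ADC and its internal reference
                                                off until real accuracy is needed       */
    enter_low_power_mode();                  /* sleep; the comparator interrupt wakes us */
}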

 

Use two-speed startup procedures that rely on relatively slow RC timers to clock basic bootup tasks while the microcontroller unit (MCU) waits for the crystal oscillator to stabilize. Be sure to calibrate these internal RC timers or buy factory-trimmed parts.

 

Firmware optimizations

Use event-driven code to control program flow and wake up the otherwise-idle MCU only as necessary. Plan MCU wakeups to combine several functions into one wakeup cycle. Avoid frequent subroutine and function calls to limit program overhead, and use computed branches with fast table lookups instead of flag polling and long software calculations. Use single-cycle CPU registers for long software routines whenever possible.
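
A minimal sketch of this pattern is shown below: the MCU sleeps until an interrupt posts an event, several pending events are handled in a single wakeup, and a function table replaces a long chain of flag tests. The event names and the sleep helper are illustrative placeholders.

#include <stdint.h>

typedef enum { EVT_TIMER = 0, EVT_SENSOR, EVT_RADIO, EVT_COUNT } event_t;

static void on_timer(void)  { /* e.g., combine housekeeping with the next sample */ }
static void on_sensor(void) { /* read and buffer a measurement */ }
static void on_radio(void)  { /* service the transceiver */ }

/* Computed branch: one table lookup instead of a long chain of flag tests. */
static void (*const handlers[EVT_COUNT])(void) = { on_timer, on_sensor, on_radio };

static volatile uint8_t pending;  /* one bit per event, set from interrupt handlers (not shown) */

static void sleep_until_interrupt(void) { /* stand-in for the MCU's wait-for-interrupt instruction */ }

int main(void)
{
    for (;;) {
        while (pending == 0)
            sleep_until_interrupt();             /* MCU stays asleep between events      */
        for (int e = 0; e < EVT_COUNT; ++e)
            if (pending & (1u << e)) {
                pending &= (uint8_t)~(1u << e);  /* on real hardware, briefly disable
                                                    interrupts while clearing the flag   */
                handlers[e]();                   /* handle several events per wakeup     */
            }
    }
}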

 

Implement decimation, averaging, and other data reduction techniques appropriately to reduce the amount of data transmitted wirelessly. Also, make sure to thoroughly test various wireless handshaking options in an actual usage environment to strike the ideal balance between wasting time on unsuccessful communication attempts and performing excessive retries.
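
For example, a node might average a block of readings locally and transmit only the result. The sketch below illustrates the idea; read_sensor() and radio_send() are placeholder stand-ins for your sensor and radio drivers.

#include <stdint.h>

#define BLOCK 16   /* number of readings combined into a single transmission */

/* Placeholder stand-ins for the sensor read and the radio driver. */
static uint16_t read_sensor(void)      { return 512; /* placeholder reading      */ }
static void     radio_send(uint16_t v) { (void)v;    /* placeholder transmission */ }

void sample_and_report(void)
{
    uint32_t sum = 0;
    for (int i = 0; i < BLOCK; ++i)
        sum += read_sensor();                /* accumulate locally; radio stays off */
    radio_send((uint16_t)(sum / BLOCK));     /* one transmission instead of sixteen */
}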

 

Your oscilloscope will probably be useful for obtaining quick measurements of these current waveforms, and depending on the communication protocol, an oscilloscope may be the only instrument with enough bandwidth to make them. However, once you know the bandwidth of your signal, you may be able to use a DC power analyzer or device current waveform analyzer instead. These instruments offer better precision and provide more detailed analysis, such as automatic current profiles.

 

By implementing these strategies and measuring current consumption throughout your development process, you will quickly optimize battery runtime and drive success in IoT and other battery-driven applications for you and your customers.

 

Learn more about maximizing the battery life of IoT smart devices by downloading helpful application notes and webcasts from Keysight.

 


After you have selected your microcontroller unit (MCU), there are several simple steps you can take to optimize battery runtime. Correctly configuring and testing your hardware and firmware can help you to develop the optimal IoT device configuration.

 

Power budget

Begin by creating a theoretical power budget for your device. Using the MCU’s data sheet and manual, consider a complete cycle of events, such as waking, collecting data, processing data, turning on the radio, transmitting data, turning off the radio, and returning to sleep. Multiply the current drawn in each step by its duration, and add the values to obtain a projected total for a typical operational cycle. Be sure to include the current consumed while the device is in its longest sleep mode; even nanoamps add up over long periods of time. Your MCU vendor should have software that helps you estimate current drain associated with various operational parameters, and you can use a DC power analyzer, digital multimeter (DMM), or device current waveform analyzer to fine-tune the estimated values.
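
This calculation is simple enough to script. The sketch below sums the charge consumed per cycle and projects battery runtime; every number in it is made up purely for illustration, so substitute values from your own data sheet and measurements.

#include <stdio.h>

/* Illustrative power budget: current (mA) and duration (ms) for each step of
   one operational cycle. The numbers are invented; use your MCU's data sheet. */
struct step { const char *name; double current_ma; double duration_ms; };

int main(void)
{
    const struct step cycle[] = {
        { "wake + measure", 2.0,    5.0    },
        { "process",        3.0,    2.0    },
        { "radio TX",      18.0,    4.0    },
        { "sleep",          0.002, 9989.0 },   /* 2 uA sleep for the rest of a 10 s cycle */
    };
    double charge_mc = 0.0;                    /* millicoulombs consumed per cycle */
    for (unsigned i = 0; i < sizeof cycle / sizeof cycle[0]; ++i)
        charge_mc += cycle[i].current_ma * cycle[i].duration_ms / 1000.0;

    double cycle_s  = 10.0;                    /* one cycle every 10 seconds              */
    double avg_ma   = charge_mc / cycle_s;     /* average current over the cycle          */
    double batt_mah = 220.0;                   /* e.g., roughly a CR2032 coin cell        */
    printf("average current: %.4f mA, runtime: %.0f hours\n",
           avg_ma, batt_mah / avg_ma);
    return 0;
}

With these made-up values, the program reports an average current of roughly 11 µA and a projected runtime of about 20,000 hours, a bit over two years.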

 

Hardware configuration

Begin by optimizing the clock speed at which the MCU runs. The current consumption for many MCUs is specified in units of µA / MHz, which means that a processor with a slow clock consumes less current than a processor with a faster clock. However, a processor working at 100% capacity will consume the same amount of energy at 10 MHz as at 20 MHz, because the 20-MHz processor will consume twice the current for half as long. The conclusion is that for code segments where the processor is largely idle, you can save current by running the MCU more slowly.

 

Next, optimize the settings associated with data sampling. These settings include the frequency with which the sensor wakes up to collect data, the number of samples taken, and the ADC sampling rate. There is often a tradeoff between measurement accuracy and these sampling parameters, so set the sampling parameters to minimize current drain while delivering acceptable accuracy. Similarly, you may be able to change the rate at which the MCU updates the device display, requests data from sensors, flashes LEDs, or turns on the radio.

 

Finally, carefully examine the various idle, snooze, sleep, and hibernation modes available on your MCU. For example, some MCUs have sleep modes that disable the real-time clock (RTC), and disabling the RTC may reduce your sleep current consumption by a factor of six or more. Of course, if you do this, you will likely need some mechanism to recover the date and time, perhaps through a base station.

 

Firmware options

Design your program to finish each task quickly and return the MCU to sleep. Cycle power on sensors and other peripherals so that they are on only when needed. When you cycle sensor power, remember to allow for power-on stabilization time so that measurement accuracy is not affected. For ultra-low-power modes, consider using a precision source/measure unit (SMU) to make very accurate current measurements, especially when you have the option to power the MCU at different voltage levels.
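
A sketch of this power-cycling pattern is shown below; the GPIO, delay, and ADC helpers are placeholder stubs standing in for your MCU’s real drivers, and the settling time should come from the sensor’s data sheet.

#include <stdint.h>

/* Placeholder helpers; replace with your MCU's GPIO, delay, and ADC calls. */
static void     gpio_set(int pin, int level) { (void)pin; (void)level; }
static void     delay_ms(unsigned ms)        { (void)ms; }
static uint16_t adc_read(int channel)        { (void)channel; return 0; }

#define SENSOR_PWR_PIN    4
#define SENSOR_ADC_CH     1
#define SENSOR_SETTLE_MS  3   /* power-on stabilization time from the sensor's data sheet */

uint16_t read_sensor_powered(void)
{
    gpio_set(SENSOR_PWR_PIN, 1);      /* power the sensor only when needed            */
    delay_ms(SENSOR_SETTLE_MS);       /* wait out power-on settling before sampling    */
    uint16_t value = adc_read(SENSOR_ADC_CH);
    gpio_set(SENSOR_PWR_PIN, 0);      /* remove power immediately after the reading    */
    return value;
}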

 

Consider using relatively low-power integrated peripheral modules to replace software functions that would otherwise be executed by the MCU. For example, timer peripherals may be able to automatically generate pulse-width modulation (PWM) and receive external timing signals.
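
As an illustration, once a timer peripheral is configured for PWM, the hardware drives the output pin with no further CPU involvement. The register names and addresses below are invented for the sketch; real MCUs expose equivalent period, compare, and control registers under vendor-specific names.

/* Hypothetical timer registers; actual names, addresses, and fields are vendor-specific. */
#define TIMER_PERIOD   (*(volatile unsigned *)0x40001000)
#define TIMER_COMPARE  (*(volatile unsigned *)0x40001004)
#define TIMER_CTRL     (*(volatile unsigned *)0x40001008)
#define TIMER_CTRL_PWM_ENABLE  0x01u

void start_status_led_pwm(void)
{
    TIMER_PERIOD  = 1000;                   /* PWM period in timer ticks              */
    TIMER_COMPARE = 100;                    /* 10% duty cycle for a dim status LED     */
    TIMER_CTRL   |= TIMER_CTRL_PWM_ENABLE;  /* hardware toggles the pin from here on;
                                               the CPU is free to go back to sleep     */
}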

 

Use good programming practices, such as setting constants outside of loops, avoiding declaring unnecessary variables, unrolling small loops, and shifting bits to replace certain integer math operations. Also, use code analysis tools and turn on all compiler optimizations.
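
A small before-and-after sketch of two of these practices (hoisting a loop-invariant constant and replacing a divide-by-eight with a right shift) is shown below. Optimizing compilers often make these transformations automatically, so always measure rather than assume.

#include <stdint.h>

#define N 64

/* Before: recomputes the invariant scale factor and divides on every iteration. */
void scale_slow(uint16_t *buf, uint16_t gain)
{
    for (int i = 0; i < N; ++i)
        buf[i] = (uint16_t)(((uint32_t)buf[i] * (gain + 1u)) / 8u);
}

/* After: the constant is computed once, and the divide by 8 becomes a shift by 3. */
void scale_fast(uint16_t *buf, uint16_t gain)
{
    const uint32_t k = gain + 1u;            /* loop-invariant, hoisted out of the loop */
    for (int i = 0; i < N; ++i)
        buf[i] = (uint16_t)(((uint32_t)buf[i] * k) >> 3);
}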

 

Test and learn

Finally, use your instruments’ software tools to analyze the actual current consumption frequently as you develop the MCU code. These tools may include a complementary cumulative distribution function (CCDF) or automatic current profile, and they will give you information to refine your power budget. Observe and document how your coding decisions affect current consumption to optimize the present program and give you a head start on subsequent projects.

 

Learn more about maximizing the battery life of IoT smart devices by downloading helpful application notes and webcasts from Keysight.

 


The number of IoT devices seems to be at an all-time high, from industrial sensors, to IoT clothespins (really!), to smart water dispensers for cats. For mainstream IoT applications, battery life is often a huge differentiator in buying decisions, so designing a device with an energy-frugal microcontroller unit (MCU) is a critical success factor. Once you have selected the MCU, however, your job has just begun. Your MCU firmware programming decisions can change current consumption by anywhere from fractions of a percent to an order of magnitude or more.

 

Architecture
Carefully select the MCU hardware architecture to match your application. For example, some MCUs include efficient DC-DC buck converters that allow you to specify voltage levels for the MCU and for peripherals that can operate across a range of voltages. A transceiver IC that can operate at voltages ranging from 2.2 to 3.6 V will have substantially different power consumption, output power, attenuation, and blocking characteristics at different input voltages, so flexible DC-DC conversion is a plus. Also, an MCU may integrate RF radio capabilities, sensors, and other peripherals. Greater integration may decrease current draw by giving you finer control over which blocks are powered and how they are configured.

 

Accelerators and peripherals
Some MCUs have hardware accelerators for rapid CRC computation, cryptography, random number generation, and other math-intensive operations. The high speed of these peripherals lets you put the MCU back to sleep sooner, but there may be a tradeoff between speed and current consumption. Other MCUs have a wake-up peripheral that uses a low-power RC circuit to clock the MCU while the main crystal oscillator powers up and stabilizes, again reducing active MCU runtime. Some MCUs have sensors with buffers that accumulate multiple samples for efficient batch reading later – customers probably do not need their IoT aquarium thermometer to send data to the cloud every millisecond. Some of these peripherals may also improve the security of your device because they include hardware security features.
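
To make the CRC example concrete, the sketch below contrasts a bit-by-bit software CRC-16/CCITT with handing the same bytes to a hardware CRC peripheral so the CPU can return to sleep sooner. The peripheral registers are invented for illustration; real accelerators have vendor-specific interfaces.

#include <stdint.h>
#include <stddef.h>

/* Software CRC-16/CCITT (poly 0x1021, init 0xFFFF), bit by bit:
   simple, but keeps the CPU awake longer. */
uint16_t crc16_sw(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= (uint16_t)((uint16_t)data[i] << 8);
        for (int b = 0; b < 8; ++b)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Hypothetical hardware CRC peripheral: feed bytes, read the result. */
#define CRC_DATA   (*(volatile uint8_t  *)0x40002000)
#define CRC_RESULT (*(volatile uint16_t *)0x40002004)

uint16_t crc16_hw(const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        CRC_DATA = data[i];        /* the peripheral updates the CRC in hardware */
    return CRC_RESULT;
}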

 

Memory
Every IoT device needs some memory to run its programs, but unused RAM simply wastes current. Some, but not all, MCUs allow you to turn off power to unused RAM. Also, the various memory technologies (EEPROM, SRAM, FRAM, flash, and so on) consume different amounts of power. Some MCUs pair one memory technology for program storage with small caches of SRAM, so that most program operations run from low-power memory.

 

Low power states
The key to long battery life is to increase the time spent in low-power sleep states, but the sleep states in different MCU architectures vary dramatically. Furthermore, the names used for these states – sleep, hibernate, idle, deep sleep, standby, light sleep, snooze – lack consistency. Review the MCU’s low power modes carefully, along with the consequences of entering and exiting them, such as data loss, time to enter and exit the state, and requirements to reboot various levels of functionality.

 

Clocks
The variety of timers and clocks available on the MCU may also affect battery runtime. At a minimum, most MCUs have two clocks – a relatively fast master clock for fast reactions and signal processing and another, slower clock for keeping the real-time clock alive in sleep states. Other MCUs may combine a small current sink, a comparator, and an RC circuit to form a low-power wake-up source whose period depends on the circuit’s resistance and capacitance. This is less precise than a crystal, but it saves power, and it may be sufficient for some applications.

In conclusion, there are many things to consider when you select your MCU. Future articles will describe what to do after you have made your selection.