
The never-ending drive to increase IoT battery life is great for customers, but it poses extraordinary challenges for design engineers. As sleep mode currents edge ever closer to zero, making accurate measurements across a wide dynamic range becomes increasingly difficult.

 

Consider wireless transceiver chips, illustrated in the chart below. Each line represents one transceiver, and the lines go from a given device’s lowest sleep current to its highest operating current. The dynamic range, of course, is the ratio of the two current levels, and the base two logarithm of that ratio indicates how many bits are required to represent the dynamic range.

Merely representing the dynamic range, however, is not sufficient for accurate measurements. For example, if your measurement instrument uses 18 bits to represent a 250,000:1 dynamic range that spans 25 mA (middle of the chart), then your measurement resolution is approximately 100 nA. When you measure relatively large currents in the mA range, this is fine, but when you measure the 100 nA sleep current, your accuracy is ±100 percent – a rather coarse value.

 

For 25 percent accuracy, you need two additional bits: the four possible values of two extra bits subdivide each resolution step, cutting the uncertainty to ±25 percent. Similarly, for 10, 5, and 1 percent accuracy you need 4, 5, and 7 additional bits, as summarized in the following table (the bit counts are the base-two logarithm of the inverse accuracy, rounded up; using the non-integer logarithm keeps the number of extra bits as low as possible).

Desired accuracy    Additional bits required
±25%                2
±10%                4
±5%                 5
±1%                 7
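Here is a minimal sketch of this arithmetic in Python (the function name bits_required is my own; the 25 mA and 100 nA figures echo the example above):

```python
import math

def bits_required(i_max, i_min, accuracy):
    """Bits needed to measure i_min to the given fractional accuracy
    on a single range whose full scale is i_max."""
    range_bits = math.log2(i_max / i_min)   # bits just to span the dynamic range
    extra_bits = math.log2(1.0 / accuracy)  # bits to resolve i_min to +/- accuracy
    return math.ceil(range_bits + extra_bits)

# 25 mA full scale, 100 nA sleep current, +/-5 % accuracy at the low end
print(bits_required(25e-3, 100e-9, 0.05))  # -> 23 bits
```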

It is, of course, difficult to find instruments that provide accurate current measurements with 20 or more bits of resolution in a given measurement range. The best solution is to use an instrument with seamless ranging that avoids glitching during range changes, or to use a dual-range current probe with an instrument that has the intelligence to select the appropriate range as the current level changes.
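To make that range-switching intelligence concrete, here is a toy sketch of dual-range selection with hysteresis (Python; the thresholds and names are made-up illustration values, not any instrument's actual behavior):

```python
# Toy dual-range selection with hysteresis. All thresholds are made-up
# illustration values; a real low range might cover up to ~1 mA.
SWITCH_UP = 0.9e-3    # leave the low (sensitive) range above 0.9 mA
SWITCH_DOWN = 0.5e-3  # return to the low range below 0.5 mA

def next_range(current_a, on_low_range):
    """Pick the range for the next sample; the gap between the two
    thresholds keeps the instrument from oscillating near the boundary."""
    if on_low_range and current_a > SWITCH_UP:
        return False  # switch up before the low range clips
    if not on_low_range and current_a < SWITCH_DOWN:
        return True   # drop back for sleep-current resolution
    return on_low_range

# Sleep -> transmit burst -> sleep
on_low = True
for i in (100e-9, 20e-3, 100e-9):
    on_low = next_range(i, on_low)
    print(f"{i:.1e} A -> {'low' if on_low else 'high'} range")
```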

Learn more about IoT device design and test and explore the available Keysight IoT test solutions

Moray

The London 5G debates

Posted by Moray, Nov 18, 2016

Following on the theme from Roger’s recent post Blessings and Curses: Firsthand Commentary on the State of 5G Wireless Conclaves, I was invited to take part in one of the most enlightening 5G events I’ve encountered. It was the first of two 5G debates in London organized by Cambridge Wireless, a UK-based organization that brings the mobile wireless community together to solve business problems.

 

The debate took place in the prestigious “Shard” building in central London and was chaired by Prof. William Webb, last year’s president of the Institution of Engineering and Technology. One of the things that distinguished this debate from so many others is that it was a stand-alone event, attracting a diverse audience not typically seen at industry conferences (e.g., the ones in which debates are often curtailed just when they get interesting). The three other panelists alongside me were Howard Benn, Samsung’s head of standards and industrial affairs; Paul Ceely, head of mobile strategy at British Telecom (which recently bought operator EE); and Joe Butler, director of technology at UK regulator Ofcom and, for this debate, representing the UK’s National Infrastructure Commission, which is tasked with planning the UK’s critical infrastructure.

 

The theme of the first debate was “What’s left for 5G now that 4G can do IoT and Gbit/s speeds?” while the second had a business focus: “Will operators see increased ARPU from 5G?” A short video and a full transcript of the first debate are available here, and the second debate is here.

 

Each panelist gave a short opening statement. Given the recent political environment in the UK, I led with the good news that “5G will be much easier than Brexit,” and this raised the first of many laughs in what was a good-natured but insightful debate. My reasoning: we have engineers who actually understand 5G, whereas the world of politics and economics is populated with those who get by on subjective opinion. That said, I pointed out that there is a lot of noise in the 5G space, so it is important to know the credentials of those giving advice: are they based on commercial self-interest and hype, or on observation of reality backed by physics? After all, at the end of the day, 5G has to work before it can be commercially successful.

 

The debate covered a number of areas, from sub-6 GHz territory through millimeter-wave developments to IoT and network evolution with NFV and SDN. But the key moment for me started when Joe Butler described his frustration with current infrastructure: “If I get on the train from Brighton to London, which I do on a very, very regular basis, I would dearly love to be able to make a phone call that lasted longer than 30 seconds!” After the laughter died down, the chairman used the opportunity to conduct one of many quick polls of the audience. In this one he asked for a vote between two options: ubiquitous 10 Mbps connectivity, or blindingly fast 100 megabit (or even gigabit) per second speeds in pockets of places, plus some super-low-latency services. The vote for the first option was spontaneously unanimous, as can be seen in the picture below, captured from the video.

So this means the second debate on the 5G business case will not be short of opinions.

 

The next opportunity for me to interact with the wider 5G community will be at an upcoming IWPC workshop in San Jose, hosted by Verizon and focused on the role millimeter waves will play in 5G. On this occasion I will be delivering a high-level technical paper called “Modelling what matters” that will ask important questions about the focus of current research into 5G. In particular, what concerns me is whether there is sufficient research targeting the design and test of 5G “new radio” to mitigate the spatial dynamics of millimeter-wave radio propagation. More on that later…

 

 

Roger Nichols

Break Away & Score in 5G

Posted by Roger Nichols, Nov 17, 2016

Soccer and 5G have at least 12 things in common. Check out the parallels in our new infographic.

 

Take the Lead in 5G Wireless Technology and learn more HERE

 

Don't forget to follow our blog to connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you. 

As engineers, sometimes the most useful way to imagine forward is to pause and look at how far we’ve come over the past several decades. These days, many of us are doing that with 5G.

 

Why am I so excited about the fifth generation of wireless communication? As history reveals, the hallmark of any great technology is the way it improves the human condition and revolutionizes our understanding of the world.

New technology builds on existing technology, and then takes it further—sometimes in unexpected ways. The humble wheel led to carts and chariots, and it also led to cogwheels and bicycles and trains and automobiles. It turned into waterwheels and turbines, and it begot astrolabes and clocks and hard disk drives.

 

Evolution and revolution have also occurred in human communication, driven by a need to connect beyond shouting distance. In the beginning, we tried smoke signals and the beating of drums. As human ingenuity grew, our ancestors came to realize there might be better, more efficient, more nuanced ways to communicate over long distances.

 

Semaphore flags and newspapers both found their place in the timeline of communications. But the Cambrian explosion of communications came about in the 19th and 20th centuries, with pioneers introducing all kinds of new systems: rotary printing presses, electric telegraphs, telephones, and radios. Thank you, Mr. Edison and Professor Maxwell, and danke schön, Herr Doktor Hertz.

 

Science fiction has also played its part, stirring our imaginations and inspiring innovation. The biggest names in science fiction have been renowned for being ahead of their time in terms of their ideas: the first geosynchronous communications satellite was launched less than two decades after Arthur C. Clarke wrote his article on the topic.

In 1966, the original Star Trek put wireless portable communication on our TV screens and firmly into our collective mind. Of course, the first “mobile phones” were so large the radios had to be mounted in car trunks or carried in a briefcase. And they weren’t cellular: they could connect into the local public telephone network, but they relied on a single high-power transmitter site rather than a network of cells with handoff between them.

 

That technology took root in the 1980s. Over the last 30 years we have been accelerating from 1G analog technology to the alphabet soup of digital modulation and multiple-access schemes: 2G had GSM, GPRS, cdmaOne and EDGE; 3G, which is still widely used, has W-CDMA, HSPA, HSPA+, cdma2000, and more; and 4G has OFDMA, SC-FDMA, CDMA, MC-CDMA, and more.

 

Innovations give us reason to be excited about 4G LTE and the future vision that may become reality in 5G. Big steps forward include spatial processing techniques such as multi-antenna schemes and multi-user MIMO, and these will give way to experimentation with massive MIMO, millimeter-wave frequencies and multi-gigahertz bandwidths.

 

It has already been a long journey from Maxwell’s equations to the too-large-for-my-pocket smartphone. Moving forward, 5G is expected to enable possibilities like an Internet of Things that may contain tens of billions of connected devices, enabling another technology revolution.

 

Although the 5G standards are yet to be finalized, a sizable workforce around the world is doing the difficult groundwork, once again turning science fiction into hard science. Here at Keysight, we’re doing our part to support those efforts. And we'll keep writing about it.

How much does a $1.00 battery cost? That may seem to be self-answering, along the lines of, “Who is in Grant’s Tomb?” or, “When was the War of 1812?” However, a single $1.00 battery may actually cost you hundreds or thousands of dollars when you consider all of the costs associated with its failure. Given the proliferation of batteries in the Internet of Things (IoT), it is especially important to understand the costs involved.

 

Before the Battery Fails

Before the battery fails, you have the transaction costs associated with ordering, receiving, accounting for, and stocking the battery. If you fail to stock replacement batteries, you may need to have an employee make a special trip to purchase the battery or to pay for a delivery service, either one of which might easily cost you many times the price of the battery.

 

For some applications, such as implanted medical devices or remote security devices, you need to consider the cost of actively monitoring the remaining battery level. These sorts of runtime-critical applications may also require you to test and verify the replacement battery’s capacity and charge level.

 

When the Battery Fails

The minute the battery fails, additional costs accrue due to the loss of device functionality. Perhaps a dead battery is just a minor inconvenience, such as loss of remote control for a projector, but perhaps a dead battery delays a production process or customer engagement. In the extreme, a dead battery could endanger lives, as in military, outdoor adventure, or medical applications.

 

Once you have identified the need to replace the battery, there are costs associated with the person who replaces the battery. Depending on the application, the replacement may be performed by an entry-level employee, a skilled technician, or even a cardiologist or thoracic surgeon.

 

In addition to the employee costs, there may be disruption costs associated with the replacement process. For example, consider a telemetry device that transmits patient data to a nursing station. A battery change disrupts the nurse aide’s other activities, and the patient often wakes up when the nurse aide changes the telemetry device battery. If the battery is inside the patient, as in an implantable defibrillator, the surgery, anesthesia, hospital stay, and follow-up care can total $50,000.

 

A battery replacement procedure may also include transportation costs when special equipment is required. A consumer may be able to replace a battery on an inexpensive watch, but a high-end or waterproof watch may require special disassembly or resealing equipment.

 

Finally, there are opportunity costs associated with battery replacement. Every minute and every dollar devoted to battery replacement is a resource that cannot be devoted to other activities.

 

After the Battery Replacement

Once the battery has been replaced, there are additional costs to be considered. There is the waste management cost borne by the company, and in some cases there is an additional environmental cost borne by the larger community. If the battery is rechargeable, there are costs associated with the equipment, power, and people involved in the recharging process.

 

A short battery runtime may negatively affect the user’s view of the product, and if two similar devices have similar features, battery life may be the deciding factor in customers’ purchase decisions. In the extreme, there may even be product recall costs or legal liability associated with failed batteries, especially in the medical field.

 

Conclusion

In summary, a $1 battery can end up costing users far more than the basic purchase price. There are costs before, during, and after replacement, and in extreme situations, battery runtime can even be a life safety issue. Design engineers who focus on improving the battery runtime of their devices can substantially improve their customers’ bottom lines, and in so doing they may generate improved sales and customer loyalty.
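To make the arithmetic concrete, here is a minimal cost-roll-up sketch in Python. Every figure and category name below is a made-up placeholder for illustration, not data from this post:

```python
# Hypothetical lifetime cost of one "$1" battery (all figures are
# illustrative placeholders, not measured data).
costs = {
    "purchase":           1.00,   # the battery itself
    "ordering_stocking":  3.00,   # transaction and inventory overhead
    "monitoring":         2.00,   # tracking remaining charge
    "downtime":          25.00,   # lost functionality while dead
    "labor_to_replace":  15.00,   # technician time and disruption
    "disposal_recharge":  1.50,   # waste handling or recharging
}

total = sum(costs.values())
print(f"total cost of a $1 battery: ${total:.2f}")
print(f"multiple of purchase price: {total / costs['purchase']:.0f}x")
```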

The last few weeks have been a whirlwind—literally and figuratively.

 

On Tuesday, October 4, the European Microwave Week exhibition opened in London. We at Keysight—with some fanfare—pulled the fancy red drape off our new 110 GHz signal analyzer, the N9041B UXA. This thing is a total game-changer: it covers 3 Hz to 110 GHz in one unbanded sweep, has sensitivity 25 dB better than the alternatives, and provides up to 5 GHz of analysis bandwidth at high frequencies. Similar to other devices we all use, the UXA also has a large pinch/swipe multi-touch display.

Keysight N9041B UXA 110 GHz signal analyzer showing a 3 Hz to 110 GHz sweep

 

Did I mention it goes all the way to 110 GHz? That’s like the volume going to 11 on a guitar amp! I mean, this is seriously kinda cool.

 

While my colleagues were uncorking champagne, chatting up journalists, nibbling on tiny cheese-and-cracker appetizers, and showing off our super analyzer to all comers, I skipped the party and did what every good sales and marketing manager does: jumped on a plane and headed the other direction, visiting Japan, Korea, and China in a whirlwind two-week trip.


Arriving in Tokyo ahead of the storm – everything looks calm

 

One small whirlwind problem: Super Typhoon Chaba. He started as a grumpy little storm, but during my long flight west, Chaba bulked up and turned into a typhoon (a hurricane to us Westerners) with a truly bad attitude. Monday morning, before the launch in London, Chaba was preparing to make landfall in Japan’s western islands while I was to the east near Tokyo, meeting with some of our backhaul customers.

 

Backhaul is a tricky business. You need to push lots of Pokémon GO and cat video data to the cell tower so it can be beamed to all those cellphones. If you’re like me, you always imagined this happening with a big fat optical pipe (fiber). But it turns out those cell towers aren’t easy to plumb, and some just happen to be moving, like, say, on a high-speed train.

 

Point-to-point wireless costs less than digging up the neighborhood and laying pipe, and carriers can use high-frequency signals and high-gain antennas to solve their last-mile problem. These wireless solutions are also a good fit for seismically active areas (e.g., Japan). Thus, many network equipment manufacturers (NEMs) are investing in high-capacity backhaul to enable the new big-bandwidth requirements of 4.5G and 5G.
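As a rough illustration of why high-gain antennas make this work, here is a minimal link-budget sketch using the standard Friis free-space path loss formula (Python; the transmit power, gains, frequency, and distance are made-up example values, not figures from this post or any product):

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss (Friis), in dB."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, freq_hz, dist_m):
    """Received power for a clear line-of-sight point-to-point hop."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(freq_hz, dist_m)

# Hypothetical 2 km backhaul hop at 80 GHz (E-band) with 43 dBi dishes:
# ~136.5 dB of path loss, largely recovered by the two antennas.
print(rx_power_dbm(tx_dbm=20, tx_gain_dbi=43, rx_gain_dbi=43,
                   freq_hz=80e9, dist_m=2000))  # -> about -30.5 dBm
```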

 

In backhaul, each pair of high-gain antennas had better not be spewing signals at the wrong frequencies or in the wrong direction. Validating this requires out-of-band (OOB) spurious emissions testing, which my colleague Ben Zarlingo refers to as “compromising emanations.” In a curious coincidence for our new 110 GHz signal analyzer, the Japanese government requires emissions testing all the way out to 110 GHz.

Watching the storm coming down on Japan

 

While Chaba continued to grow into an official Super Typhoon, my meetings were comparatively calm. At one key lab, a well-known Ph.D. thought we were joking when we described the analyzer’s performance and capabilities. My Japanese doesn’t go much beyond yakitori and Asahi, but I learned how the word “super” sounds when two separate hosts used the adjective to describe the new UXA: “suu-pah!”

 

As it turns out, the alternative to a single-sweep instrument that can measure from 3 Hz to 110 GHz involves a harmonic mixer, which has in-band imaging issues and can limit the analysis bandwidth due to IF complications. Mixer-based solutions are proving to be a big thorn in the side of the R&D teams responsible for some seriously complicated millimeter-wave testing. Thus, a 110 GHz Super Signal Analyzer is exactly what backhaul designers are looking for—and that made it a real pleasure to show these backhaul customers Keysight’s newest UXA.
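To see where those images come from, here is a small sketch of the harmonic-mixer frequency arithmetic (Python; the LO and IF values are illustrative assumptions): any input near N·f_LO ± f_IF converts to the same IF, so multiple RF frequencies land on top of one another unless the instrument filters or flags them.

```python
def image_frequencies(f_lo_hz, f_if_hz, max_harmonic=8):
    """RF inputs that all mix down to the same IF via N*f_lo +/- f_if."""
    responses = []
    for n in range(1, max_harmonic + 1):
        for sign in (+1, -1):
            f_rf = n * f_lo_hz + sign * f_if_hz
            if f_rf > 0:
                responses.append((n, f_rf))
    return responses

# Hypothetical example: 10 GHz LO, 322.5 MHz IF
for n, f_rf in image_frequencies(10e9, 322.5e6):
    print(f"harmonic {n}: responds at {f_rf / 1e9:.4f} GHz")
```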

 

As Super Typhoon Chaba moved north of Japan, I flew around the storm to South Korea. That’s where I met with some customers who are developing 5G wireless capability—and I’ll write about that next time.