
I hate trade shows. I love Barcelona. I recently found myself once again in this juxtaposition. And thus, I will expound upon “the state of the industry as seen through the eyes of the MWC attendee” and force you to read yet another review of 2017’s Mobile World Congress (MWC).


Scale foretells opportunity

According to Ericsson’s Mobility Report, humankind consumes eight exabytes (8×10¹⁸ bytes!) of mobile data every month—so the massive scale of GSMA’s flagship event should not be a surprise. But even we regular attendees of the show are overwhelmed by the 100,000-plus attendee list and the breadth of business represented by the 2,300 exhibitors. And I do mean “breadth.” Even though the industry manufactures between two and three billion mobile phones every year (nearly 100 every second), the show is hardly dominated by those showing off new mobile devices or infrastructure.


According to Chetan Sharma, over-the-top (OTT) revenues surpassed access revenues in 2014. This business opportunity in applications and content was immediately obvious in Barcelona, with a heavy focus on connected cars and gaming. Most of the major automotive companies and their technology supply chains were front and center, showing 5G applications relating to connected vehicles and automated or semi-autonomous driving.


New and virtualized architectures impress

Virtualized and flexible mobile networks will enable these new applications and new business models. I saw very impressive demonstrations of network slicing enabled by new software architecture combined with scalable high-speed general-purpose processing hardware configurations. From my earliest days in 5G, it was obvious that the biggest part of the 5G revolution will be the new and virtualized network architectures. Even without the hype this was evident at MWC this year.


But there is policy-related tension on this topic. A dramatic departure from the previous US administration’s approach to net neutrality became evident during incoming FCC Chairman Ajit Pai’s comments: “The private sector has spent $1.5 trillion since 1996 to deploy broadband infrastructure…We would not have seen such innovation if, during the 1990s, the government had treated broadband like a railroad or a water utility.”


It is not clear how this will become manifest in the US or whether other countries will follow suit; however, the implications are that operators and even their supply chains—those who will make the largest capital investments in 5G technology—could end up on better footing if they can avoid being regulated as utilities.


Analytic software and machine learning step forward

This emphasis on 5G “wireless” solutions based in software is growing in two other areas. There were significant demonstrations of analytic software and machine learning to be applied to the mountains of data generated by IoT. The scope of this ecosystem, made clear in the first paragraphs above, indicates ripe opportunity to leverage all that information for everything from business optimization to entirely new business models. Coupled with this was significant attention to security issues relating to protection of privacy, prevention of threats to system performance and availability, and preemption of misuse of wireless systems.


The (innovation) game is afoot

In a future blog post I will review William Webb’s rather skeptical book, The 5G Myth: When Vision Decoupled From Reality. At the risk of revealing my position, I will close with this: In a panel discussion involving mobile operators and their network suppliers, the CTO of one large operator was asked which one of the network equipment manufacturers would most benefit from 5G technology. He stated that such a company was not on the stage. Rather, it was a startup somewhere on the exhibit floor—one that was innovating and iterating at a pace he had not seen in prior deployment generations.


Virtualized network architectures and OTT business models requiring large technology investments: where will the money come from? You may recall my comments about Pokémon GO. Snap, the maker of an application used primarily by teenagers to share photos, is now public and valued at over $25 billion. Did you see that coming five years ago?


The scope of business models enabled by a faster, more flexible network and empowered by very sophisticated devices means opportunity for the innovative that can leverage the massive scale we see manifest at the world’s biggest show of its kind.


Whether you were in the throng or followed from afar, what stood out to you? Where do you think we will be when MWC 2018 rolls around?


The electronics industry is buzzing with the Internet of Things (IoT). One engineer described IoT as a vast conspiracy to put network connections on every object in the world—completely independent of whether it makes sense or not.


When we start connecting up all of these “things,” new system requirements creep in that may not be obvious. Your “thing” is no longer happily independent: it is part of a larger system. With connectivity comes new responsibility—in product design and on into the product’s useful life as a network citizen.


From a developer’s point of view, many IoT devices are “embedded designs.” This means they use microprocessors and software to implement functionality, often with an eye towards small compute footprint, just-enough processing power, lean software design and demanding cost targets. This is familiar territory for many design engineers, perhaps with the added requirement of providing a robust network connection. This is often going to be a wireless connection, be it Wi-Fi or some other common standard.


It’s unlikely that you are designing, defining or controlling that entire system. That raises a few key questions, starting with the issue of interoperability: How do you know that your device is going to be compatible with others on the network? Your device may also present a security risk to the network: How much protection can you build in? The presumably wireless connection is both a source and receiver of RF emissions: How do you make sure it behaves properly in both roles?


Building on these topics, here are a few things to consider as you try to avoid the ills of this brave new world of everything connected.


Ensuring interoperability

Start by understanding the requirements of the larger system as they apply to your product. What other devices, compute resources, servers, etc., must you interoperate with? Because these systems are inherently complex and multivendor, all IoT devices need to “play well with others,” as described in Systems Computing Challenges in the Internet of Things. Creating a robust test strategy will help ensure that you have this covered. My prescription: Think like a systems engineer.


Managing potential security risks

Recent hacks have provided a much-needed wake-up call concerning IoT security. Low-cost devices may not have much data to protect, but they can be a giant security hole that tempts the bad guys with access to the rest of the network. Distributed denial-of-service (DDoS) attacks have been launched using simple devices such as netcams, and even lightbulbs have been hacked. (For some interesting thoughts on this topic, see IoT Problems Are about Psychology, Not Technology.)


Manufacturers that leave gaping security holes may face legal consequences: in early January, the Federal Trade Commission filed a complaint against D-Link for inadequate security measures in certain of its wireless routers and internet cameras.


You can start by plugging known holes in operating systems and other platforms. And, of course, don’t hardcode default passwords. Think through better ways to test for security problems: no, it isn’t easy—but it’s starting to feel mandatory. My prescription: Think like an IT engineer... or a hacker.
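To make the “no hardcoded default passwords” point concrete, here is a minimal Python sketch (a hypothetical manufacturing-line helper, not drawn from any particular product) that generates a unique factory credential per unit using the standard library’s `secrets` module:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def per_device_password(length=16):
    """Generate a unique random factory credential for a single unit.

    The idea is to print this on each device's label at manufacturing time,
    instead of shipping every unit with the same hardcoded default.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each device coming off the line gets its own credential:
print(per_device_password())
```

With per-device credentials, one leaked password compromises one unit rather than the whole installed base.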


Preparing for interference

Your device probably uses one of many wireless connections. First, cover the basics of electromagnetic compatibility (EMC). Be sure the device passes relevant radiated-emission standards so it isn’t spewing RF that interferes with other devices. Also, ensure that your device isn’t susceptible to emissions from other sources.


It’s also important to consider the RF environment your device will live in—especially if unlicensed spectrum is being used, as in the case of Wi-Fi. The great thing about the unlicensed airwaves is that they’re free to everyone to use. The really bad thing about unlicensed spectrum is that everyone is using it and it’s a free-for-all. Thus, you’ll likely have to contend with other emitters in the same frequency span. See Chris Kelly’s post Preparing Your IoT Designs for the Interference of Things for some additional thoughts. My prescription: Think like an EMC engineer.


Wrapping up

My key point: IoT devices may seem familiar and comfortable to engineers who work with embedded systems—but this is a trap. Avoiding this trap starts with a shift in perspective, viewing IoT devices as citizens on a network that has critical requirements for systems, security and RF behavior. Without this shift, we will expose our products—and ultimately our companies—to customer dissatisfaction, customer returns and product liability.

You have been tasked with leading a team to go after this 5G business. Your strategic imperatives include success and leadership in 5G and you are an essential part of making it happen. Your management, your team, your C-suite, your board of directors, and perhaps even your family, are all counting on you to deliver.


No pressure. Just do it, right?


Before drawing up your battle plans and starting the assault, press pause and ask a few crucial questions. After almost 32 years in high tech, mostly in engineering management, I have found that one’s team often has a clear sense of the success factors, enablers, and roadblocks on the road ahead. Check in and get their input. Even if you are two years down that road, there is still time to make course corrections.


Do you have the right background and expertise?

For many, 5G applications imply new technologies. This might mean designing for carrier frequencies and bandwidths that seem like black magic to most of your engineers. Have you considered all of the things they will have to learn or acquire? How many lessons will be acquired the hard way—by making mistakes and discovering, too late, that there will be a dreadful impact on your schedule or costs? How quickly can your team pivot when it needs to gain new or different expertise?


Fifth-generation wireless technology may also require new business models. This could entail a foray into open-source software (or hardware) when your organization has never bothered with this sort of thing in the past. It might point you toward selling services or upgrades rather than making net-30 sales. How might you adjust your design team’s skills to suit this new paradigm?


Do you have the right tools?

Some of those new technical areas will require new tools for your team. Is it new hardware? Is it RF chambers of a different size? A new computer system focused on fast, efficient handling of much more data? Could you use EDA tools to reduce hardware turns? Which ones? And, with a nod to the preceding entry, how long will it take for your team to use them efficiently and effectively?


Are you closely connected to your key customers?

I am a fan of the agile software manifesto, especially the notion of putting designers close to their customers and providing rapid and frequent updates while embracing regular and rapid (if not capricious) changes in requirements.


In an industry driven by consumer fad, these elements are critical to all of your development activities—software, hardware and business. Without close ties to your most important customers (the ones with the most money, not the loudest voices) you will be too slow to address needs, anticipate changes, and respond.


Be sure you are talking to the right people in your key accounts. I once watched a major project fail because the team was getting its guidance from the wrong individuals. After significant investment on both sides, the ones who were really in charge appeared and unceremoniously shut down the entire program.


Is your timing consistent with theirs?

The recipe for a fabulously successful project is probably familiar to you: your project is synchronized with your two or three best lead customers and perfectly aligned with their project timing, and market demand is simultaneously growing. This magical combination is, to some extent, the result of luck—but we often make our own luck since the good lady “prefers the prepared mind.”


Do you have the support you need from your organization?

This is hardly unique to 5G, but we all need this reminder. I have occasionally been frustrated by subordinate managers who, after producing inadequate results, have said to me, “If only we would have had A or B…” to which my response was, “Why didn’t you ask?” I am embarrassed to admit I have made the same mistake myself.


Talk to your team and find out what they think would make the difference. They may be keenly aware of some bit of organizational support that will improve their chances of success; sometimes, it will be easy to get that help. Even if you can’t secure everything they want, the process will likely clear enough roadblocks to speed things along. (Keysight’s Sigi Gross offers a related perspective in point #3 of his 7/13/16 post.)


Where to go from here?

No pressure: just ask the right questions and act on the most actionable and efficacious answers. There is no substitute for the innovation associated with a well-prepared, customer-connected, and motivated team to overcome the other challenges.


Any other cautions, questions or stories you would care to share?

Future projections for the Internet of things (IoT) are staggering: billions of devices around the world, with 500 residing in a typical home. That’s why the wireless designers I know are preparing their devices for the Interference of Things—and they’re doing it early in the development process. This is especially important for IoT widgets intended for mission-critical applications.


A recent customer visit drove this point home. They had released a new healthcare-related IoT device that uses Wi-Fi to send information to a centralized database. Even though testing in the lab went well, the device was consistently losing connections due to interference from myriad onsite devices—literally 913 devices on Wi-Fi alone in one hospital. When we met, our customer had an urgent need to resolve this problem and recover from the figurative black eye it had received from hospital administrators.


Troubleshooting with 20/20 hindsight

After the fact, avoidable problems in receivers, transmitters and the RF environment may suddenly seem obvious. In a receiver, blocking relates to its ability to tolerate strong signals on adjacent channels that use the same standard or protocol. When an adjacent signal is too strong, the receiver becomes so desensitized that it can’t recognize an on-channel signal.


An ill-behaved transmitter has a poor adjacent-channel power ratio (ACPR) when too much of its signal spills into neighboring channels. That’s one reason why we see ACPR in virtually every wireless standard. In a noisy wireless environment like a hospital, better ACPR helps improve overall system behavior.
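To make the ACPR idea concrete, here is a minimal Python sketch (an illustrative calculation only, not any instrument’s or standard’s actual algorithm) that integrates a power spectral density trace over the main and upper adjacent channels:

```python
import numpy as np

def acpr_db(psd, freqs, f_center, ch_bw, ch_spacing):
    """Adjacent-channel power ratio from a power spectral density trace.

    psd:    linear power spectral density samples (W/Hz) on a uniform grid
    freqs:  frequency axis for psd (Hz)
    Returns the upper adjacent channel's power relative to the main
    channel's power, in dB (more negative = cleaner transmitter).
    """
    df = freqs[1] - freqs[0]  # uniform frequency spacing assumed

    def band_power(f_lo, f_hi):
        mask = (freqs >= f_lo) & (freqs < f_hi)
        return psd[mask].sum() * df  # rectangle-rule integration

    main = band_power(f_center - ch_bw / 2, f_center + ch_bw / 2)
    adjacent = band_power(f_center + ch_spacing - ch_bw / 2,
                          f_center + ch_spacing + ch_bw / 2)
    return 10 * np.log10(adjacent / main)
```

Real standards specify both upper and lower offsets, often multiple alternate channels, and the measurement filter shapes; this sketch shows only the core arithmetic.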


In today’s airwaves, one issue keeps cropping up: the overabundance of standards- and non-standards-based devices clogging up the industrial, scientific and medical (ISM) band at 2.4 GHz. Devices that use different standards simply see each other as interference and thus don’t play well together.


Using foresight to design for interference

Long before a wireless device enters field trials, developers can explore potential issues using the latest design, simulation and test tools. These tools enable such extensive wringing-out of problems—ranging from the obvious to the insidious—that a manufacturer can send its prototype into the field with much greater confidence that it will perform as intended.


Of course, there’s an important reality check: Where does your IoT device reside on the continuum between mission-critical and throwaway? The economics change significantly if your device trends towards either end of that spectrum.


The IoT device in my story would have fared much better in the Interference of Things if it had been debugged early on using a more thorough approach to design, simulation, emulation, test and analysis. An integrated IoT solution is a crucial step toward overcoming likely IoT challenges—and setting up your IoT end users for success.


What approaches have you used to set yourself up for success in IoT and other wireless products?

Nearly every manufacturing executive I talk to is looking for ways to improve product quality and reliability, and they often think that drastic measures are required. They assume they’ll have to reinvent their engineering design processes, implement expensive upgrades on the manufacturing floor, or restructure their supply chain. They’re often surprised to learn that the key to improving product quality is already in place in their organization, and it can be summed up in one word:  Accountability.   


For most large manufacturers, the responsibility for designing and building a product often rests with one part of the organization while the responsibility for providing warranty support lands elsewhere. Each area has its own budget and reporting structure, so costs incurred in warranty support are rarely traced back to the design/build process where defects often originate. By creating a culture of shared accountability between upstream and downstream teams, you can dramatically improve product quality using the people and processes you have in place today.


The concept is simple, intuitive, and nearly free to implement. But putting it in place takes three key steps.



1. Identify exactly where your costs are.

When a product fails under warranty, you might repair it, replace it, or even provide a discount on a future purchase to secure your customer’s business. Who covers those costs? Many manufacturers have corporate accounts that are funded annually to cover warranty service—that was Keysight’s approach up until a few years ago. Other manufacturers track warranty expenses at the business-unit level or the product level. The problem is, those costs are often hidden from view. In fact, many of the executives I work with have only a vague idea of who actually pays for warranty service.

The first thing I encourage them to do is follow the money: Trace warranty expenses to the payer so you know exactly who’s footing the bill and what it’s really costing the company. The next thing I have them do is look upstream and identify the source of the problems that are costing them money. Typically, when a product is returned under warranty, the issue can be traced to a design flaw, a manufacturing process, or a perceived flaw—meaning the product is functioning exactly as intended but does not meet the expectations of the buyer. Each scenario is expensive for manufacturers.

By looking upstream and downstream, you get a clear understanding of where problems come from and who pays, so you know where to focus your efforts in the next two steps.


2. Empower your teams to fix the problem. 

Product teams are under dire pressure to meet schedule and cost targets. They’re probably also thinking about sales targets, competitive threats, supply chain logistics, environmental issues, you name it. It’s no surprise that product reliability sometimes falls off their radar, especially if a product can pass inspection and ship on time. But here’s the thing: when product teams are held financially responsible for warranty repairs that occur a few years down the road, product quality becomes a priority.

Give them the power to solve the problem. Let them decide how to allocate budgets and resources, and make the process changes and business tradeoffs that need to be made. Ultimately, their goal is not to ship a product on time or on budget but to make the company more profitable and successful. In that sense, empowering product teams is a lot like parenting teenagers: instead of telling them exactly what to do, tell them what the goal is, make sure they understand the consequences, then let them figure it out.


3. Extend P&L accountability across the product lifecycle.

In many OEM environments, R&D designs a product, manufacturing builds it, the support team fixes it, and everyone’s responsible for their own budget. If you’re serious about improving quality, that model needs to change. Have all teams operate under a single P&L structure that spans the entire product lifecycle, from design and build through end of warranty. With a unified accounting structure, everyone shares the cost of repairing a faulty product, and everyone gains when quality is improved.


This isn’t theoretical. At Keysight we implemented a companywide accountability program in the mid-2000s. Most divisions were able to make the transition in less than a year, producing a 50 percent reduction in failure rates across the company. In fact, warranty repairs declined so dramatically that we were able to extend our standard warranty coverage from one year to three years, creating a powerful competitive advantage at zero cost to our company.


It’s never easy to change corporate culture. It takes vision and fortitude in the C-suite and buy-in down the line. But changing your company’s culture of accountability is one of the best competitive moves you can make, and it’s available to any manufacturer.


Duane Lowenstein is a Test Strategy Analysis Manager for Keysight Technologies. Read his bio.

Only a few years ago, the spectrum below six gigahertz was plenty to carry Facebook, WeChat, and YouTube. But mobile wireless owns only a fraction of those six billion hertz, so we are driving to frequencies far beyond what many of us consider our radio comfort zone.

I have seen multiple radio engineering labs coming to grips with these new frequencies. As 5G mmWave goes from obscure to elite to mainstream, the number of engineers doing component, subsystem, and radio design in these rarefied wavelengths will skyrocket.


Many will have little experience with wavelengths no wider than their thumbs, and with bandwidths that sound like carrier frequencies. How will you set up your lab to ensure success? I offer the following questions and suggestions to those braving this territory.


1.  Which frequency bands are you targeting?

While 3GPP’s new radio (NR) development is aimed at carriers up to 100 GHz, I do not see a 5G wireless future in which this entire range will be used for access. So you must not only anticipate which bands the policy groups will stipulate, but also speculate on which will be used for your target application spaces.


I also have doubts about 5G mobile multi-user access above 45 GHz. 802.11ad/ay will occupy the current 60 GHz band (and possibly the FCC’s extension of this band to 71 GHz). Point-to-point for backhaul, distributed antenna systems, and fronthaul will be implemented above 45 GHz. There is also early work in high-speed train communications up to 100 GHz (for on-board Wi-Fi “backhaul”).


Do you need to cover this entire range? Only part of it? Consider carefully because the tools and accessories become more expensive as you get closer to triple-digit gigahertz.


2.  What bandwidth do you need to support?

While there is talk about information bandwidths of 2 GHz, consider the following for frequencies below 45 GHz:

  • Licensed bands will be divided between at least two licensees.
  • The widest channel in the FCC’s recent announcement is 425 MHz (in the 28 GHz band).
  • The new air-interface access designs are aimed at aggregating carriers modulated to no more than 200 MHz.


Notwithstanding your potential need to manage aggregated carriers and perhaps do work above 45 GHz, consider how wide—and thus how complex and expensive—you will want your lab to go.
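As a rough back-of-the-envelope illustration of why that choice matters, wider analysis bandwidth translates directly into digitizer sample rate and raw data rate. The oversampling factor and bit depth below are my own illustrative assumptions, not vendor specifications:

```python
def digitizer_budget(analysis_bw_hz, oversample=1.25, bits_per_sample=12,
                     complex_iq=True):
    """Rough sample-rate and raw-data-rate budget for a given analysis bandwidth.

    With complex I/Q capture, the sample rate can equal the bandwidth times a
    modest oversampling guard; a real-valued capture needs at least 2x (Nyquist).
    """
    min_rate = analysis_bw_hz * (1 if complex_iq else 2)
    sample_rate = min_rate * oversample
    channels = 2 if complex_iq else 1  # I and Q streams for complex capture
    bytes_per_s = sample_rate * channels * bits_per_sample / 8
    return sample_rate, bytes_per_s

# Compare one 200 MHz component carrier with a 2 GHz mmWave capture:
for bw in (200e6, 2e9):
    rate, data = digitizer_budget(bw)
    print(f"{bw/1e6:.0f} MHz BW -> {rate/1e6:.0f} MS/s, {data/1e9:.2f} GB/s")
# 200 MHz BW -> 250 MS/s, 0.75 GB/s
# 2000 MHz BW -> 2500 MS/s, 7.50 GB/s
```

A tenfold increase in bandwidth is a tenfold increase in data rate, before you even consider the cost curve of the front-end hardware.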


3.  How will you connect to your device-under-test?

I have yet to see a serious design of a commercial mmWave transceiver system that includes a connector between the antenna and the amplifier. Thus, the labs I have seen all include anechoic chambers (often open on one side) equipped with directional antennas and varying styles of positioners. Smaller wavelengths and antenna apertures, highly directional propagation, and the lower likelihood of interfering signals allow for a different approach. But the requirement to make calibrated measurements in free space without violating regulations means a mix of enclosure, positioner, antenna, measurement equipment, and the necessary software.


4.  What software tools will your team need?

Your software arsenal ought to include six items: EDA; system design and simulation; EM simulation, measurement and analysis; device and test-equipment control; data manipulation and management; and mathematics tools. While the associated learning curve for your engineers is substantial, the productivity gains of working in the virtual world, particularly in the uncharted seas of small waves, quickly repay this investment. Then, the software-enabled power to generate test stimuli for radio components and systems, and analyze measurement and sample results, will give your designers new insights in time for your target release date.


5.  How do you future-proof your investments?

The world of commercial wireless is not for the faint of heart, and the foray into millimeter wave technology is an expensive step deeper into fraught territory. Short wavelengths mean exotic materials; tighter mechanical tolerances; and bandwidth, sampling rates, and digital speeds requiring significant CAPEX for a productive lab. CAPEX also implies these purchases must hold their value throughout and even after your depreciation period.


Successful organizations will future-proof these investments. Things to look for include capabilities to serve needs that arise during your depreciation period; modular software and hardware with an upgrade path; and proven vendors with technology-upgrade programs and expert support staff.


Wrapping up

Lastly, stay close to what is going on in the market. Your view of the considerations listed above will clarify as new policy emerges, 3GPP standardizes, and innovators make new and more capable technology available for your own building blocks.


Get more mmWave resources!

I hate trade shows. But this element of my role is both a curse and a blessing. Whether you call it a symposium or a circus, a convention or a carnival, events such as Globecom, EDI CON, European Microwave Week (EuMW), and Mobile World Congress mean long flights, jetlag, sore feet, and being subjected to the requisite barrage of wireless hype. But they are also an important and engaging part of staying in touch with the communications industry and the fascinating personalities therein. In the past few weeks I have added two of these events to my diary: Microwave Journal’s EDI CON in Boston, and the Next Generation Mobile Networks (NGMN) Industry Conference and Exhibition in Frankfurt. With a focus on 5G wireless, here are a few observations and comments.


At this point, we really do know what 5G is.

Most 5G presentations still start with “nobody knows what 5G will really be” followed by the ubiquitous “the vision for 5G” summary. Consecutive speakers (me too) cannot resist the urge to show and describe that “5G Vision Slide.” There is beauty in consistency. There is also boredom. I am sure there are people in the world who have not seen such presentations, but by now that group is constrained to dairy farmers and rat-poison chemists.


We are gaining clarity on new “vertical businesses” enabled through 5G applications.

My reference to gaming in the now-famous Pokémon GO blog post was further clarified during the NGMN event. High data-rates, ubiquitous coverage, and low latency will enable opportunities for gaming and other emerging entertainment industries (and likely some that are not quite so innocuous).


There also seem to be more and more compelling arguments for the automotive industry to fully embrace wireless communications. While I still believe that a relatively conservative and highly regulated industry will take its time steering in this direction, the first clear lanes ahead will be navigation aids and mobile entertainment.


Open source is changing the game.

The NGMN event featured compelling sessions that examined open-source software. One was in the context of new business models, and another was a rather heated exchange around different approaches to intellectual property. As open source infiltrates network virtualization, the business models evolving around it will drive significant change in the industry. Giving away the code your software gurus struggle to generate while guzzling gallons of Mountain Dew and Rockstar may have once seemed anathema—in some circles, it now appears to be a requirement.


The industry is dead serious about mmWave.

EDI CON and NGMN featured plenty of discussions and exhibits regarding 5G mmWave in mobile communications. “Gee Roger, what about the other 25 5G shows so far this year that also had plenty of mmWave?” OK, I admit that this is nothing new, but the innovation featured at EDI CON (and immediately thereafter at EuMW) was then underscored by the MNO-focused NGMN event in which AT&T, SK Telecom, KT, and of course Verizon, highlighted their mmWave trials and plans.


My earliest posts stated that I do not see mobile multiple-access mmWave being commercial before 2022 or so. While I still believe this to be the case, these MNOs, the entities that matter most in determining whether a new air-interface technology will be commercialized, are “all in”—and when they are successful, others will rapidly follow.


Wrapping up and looking ahead

One final comment: panel discussions are most useful when panelists disagree. Now, I am not advocating the circus of the recent U.S. Presidential debates, but folks (especially you moderators), we do not learn much when everyone smiles and nods in these discussions. More interesting to me have been a recent academic-versus-commercial showdown on massive MIMO and a dustup around the respective merits of open-source and royalty-based business models.


Here’s hoping there will be more such lively discussion at IWPC’s pair of meetings in November—featuring automotive wireless and (ahem!) mmWave in 5G—and then I’m off to Globecom. I look forward to providing some serious updates regarding our favorite 5G themes, and likely some flippant remarks about still more “5G Vision” and “5G KPI” slides as well as a few more “Kumbaya” panel discussions.


Industry gatherings: Love them? Dread them? Why?

The ability to accurately measure and quantify a digital design is essential to actually knowing what’s going on. A fellow named William Thomson, better known as Lord Kelvin, captured this concept in one of my favorite quotes:


When you can measure what you are speaking about, and express it in numbers, you know something about it.


This was simple back in the good old days. To measure a digital waveform, we would just connect an oscilloscope to the right node and take a look at the waveform. Oh, and we’d be sure the scope had enough bandwidth and the probe wasn’t loading the circuit or introducing distortion. We rarely, if ever, compared the results to a simulation. Mostly, we just checked to make sure the waveform looked “about right.”


Changing tactics in design and test

Today, the world’s insatiable demand for bandwidth continues to drive the need for ever-faster high-speed digital interfaces. As designers try to push more bits through the channel, they’re pushing the limits of what’s possible using the latest equalization and signaling techniques—decision feedback equalization (DFE), continuous-time linear equalization (CTLE), feed-forward equalization (FFE), PAM-4 (four-level logic), and more.
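To illustrate one of these techniques, here is a toy Python sketch of PAM-4 signaling with feed-forward equalization. The two-tap channel model and FFE taps are deliberately simple illustrative assumptions, nothing like a real 56 Gb/s link, but they show why the equalizer matters:

```python
import numpy as np

LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])  # the four PAM-4 amplitude levels

def pam4_encode(bits):
    """Map each pair of bits onto one of the four PAM-4 levels."""
    pairs = bits.reshape(-1, 2)
    return LEVELS[pairs[:, 0] * 2 + pairs[:, 1]]

def ffe(signal, taps):
    """Feed-forward equalization: an FIR filter over the received samples."""
    return np.convolve(signal, taps)[: len(signal)]

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 2000)
tx = pam4_encode(bits)

# Toy channel: 30% of each symbol smears into the next one (post-cursor ISI).
channel = np.array([0.7, 0.3])
rx = np.convolve(tx, channel)[: len(tx)]

# FFE taps: a truncated inverse of the toy channel, (1/0.7) * (-3/7)^k.
taps = (1 / 0.7) * (-3 / 7) ** np.arange(4)
eq = ffe(rx, taps)

# Slice the equalized samples back to the nearest PAM-4 level.
decided = LEVELS[np.abs(eq[:, None] - LEVELS).argmin(axis=1)]
```

Without the FFE, the 30% post-cursor alone is enough to cause symbol errors on this toy link; with it, the residual ISI sits far inside the PAM-4 decision thresholds and every symbol is recovered.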


When characterizing the results, test equipment must often emulate those same techniques. For example, when physical transmitters and receivers are not yet available, an instrument has to mimic their respective behaviors at the input or output of the device under test (DUT). Even when the transmitter or receiver is available, it’s likely to be embedded on a chip. That makes it difficult to probe and measure—and, once again, the instrument must emulate either or both devices.


Addressing the problem: a real-world example

Creating accurate, realistic models is an iterative process. To make the models increasingly accurate, the latest measured results must be fed back into the simulation system.


Although this process has many challenges, possible solutions are spelled out in a recent DesignCon paper on measuring PAM-4 signals at 56 Gb/s: PAM-4 Simulation to Measurement Validation with Commercially Available Software and Hardware. The DUT was a 3 m Quad Small Form-factor Pluggable Plus (QSFP+) cable, driven by an arbitrary waveform generator (AWG) and measured using a high-bandwidth sampling oscilloscope (Figure 1).


Figure 1. Measurement of the DUT resides within a larger process that also includes simulation.


The channel configuration was first simulated in software using IBIS-AMI models for the transmitter and receiver. In this case, the transmitter was not available and the designer utilized an AWG to replicate in hardware the same transmitter waveform the simulator used. The simulator-provided transmitter waveform also included the FFE correction needed to open the eye at the receiver for CDR and data acquisition. [Aside: During early-stage development, you can use an AWG to emulate the absent transmitter using an industry-standard model.]


Similarly, to accurately measure the received signal, the oscilloscope executed a model of the not-yet-available receiver that included clock data recovery (CDR), CTLE, and DFE. As above, the team used the same receiver model in the design simulation.


Creating a new ecosystem—in your lab

IBIS-AMI models were developed and standardized by the electronic design automation (EDA) industry, but they have also made their way into the measurement world. As described in the PAM-4 paper, connecting the physical and digital worlds creates a measurement/simulation ecosystem. As this ecosystem comes into alignment, simulated and measured results become increasingly well correlated (Figure 2).


Figure 2. A tighter connection between simulation and measurement ensures closer correlation of results.


Mastering both realms together results in fewer design cycles and better predictability of design quality. In the PAM-4 example, appropriate application of the models yields a useful picture of the waveform at the output of the DUT and, from that, better insight into how the receiver will decode it.


The age-old alternative to this beneficial ecosystem is the time-consuming “cut and try” approach that may never yield a reliable design. Worse than that, engineers are left to iterate their designs based on limited knowledge of system performance.


Going beyond “measure then know”

In reality, most teams include some engineers who are highly proficient with simulation tools and others who are deep into measurement tools. For the ecosystem to work, engineers must be able to apply tools from both worlds in a coherent manner. As teams learn, they feed new information back into the models and make them more accurate. Portions of those same, improved models can then be used to perform useful measurements.


This measurement/simulation ecosystem becomes a must-have if you are doing leading-edge digital design. Within this symbiotic ecosystem, Kelvin's idea of "measure then know" expands to become "model, measure, and know." And that's when breakthroughs become more predictable.

A tectonic shift is underway in how electronic products are being designed and tested. The change is being driven by customer requirements for product performance and the communications technologies needed to meet those requirements. To understand how dramatic this change is, it helps to have a little perspective. Here’s how I explained it in my keynote address at EDICON 2016.


Early in my career, in the 1980s, I was a field engineer in Florida. My best customer was Motorola, and their business in pagers and SMR technology was exploding. RF design was just moving from MS-DOS to Unix. Unprecedented processor speed, memory, and display systems supported the move from netlists to schematics, from linear s-parameter simulation to nonlinear design. I was lugging 50-pound monitors and 30-pound CPUs through the Florida heat to work onsite with Motorola engineers, and man, was it exciting. We were using new tools to design voltage-controlled oscillators (VCOs) that worked the first time. It was a huge breakthrough in productivity. We were on the cutting edge of a booming industry, building breakthrough products faster and at lower cost than ever before.


The design flow back then was linear: Measure components and create models; then use the models to design and simulate a circuit; then test the prototype. Each stage was separate and distinct. A transistor or inductor model was developed once and used in multiple types of designs. Simulators combined the models and predicted response. Verification was simple: did the VCO sweep over the right range, output the right power, and have good phase noise?


What amazes me is that today, three decades later, design flows are fundamentally unchanged.


Instead of pagers, we have complex handheld computers in our pockets. Our cell phones have more functionality, parts are packed together more closely, and more bands are supported, leading to complex signal routing. But design flows in 2016 have not fundamentally changed since we were developing pagers.


True, today’s components are measured and modeled with better fixturing and probing. Circuits are designed and simulated on vastly faster computers with better algorithms and solvers. EM simulation has become a standard part of the design flow, and thermal tools are emerging to optimize performance based on temperature. In prototype testing, more sophisticated measurements are made on complex OFDM waveforms, and these measurements are made faster and more accurately than ever before. But the workflow is the same: independent steps with limited interaction between each step.


That world is changing in front of our eyes due to four key drivers.

  • Channel complexity. Instead of single channels, or even simple MIMO configurations, the industry is moving to massive phased-array systems with hundreds or even thousands of elements. We see it in 5G and modern radar systems. To deal with such complex systems, measurements are moving much closer to design because even the best compute farms cannot fully model today's complex systems. Instead, rapid-prototyping systems are starting to be deployed by manufacturers worldwide.
  • Bandwidths. Instead of a few tens of MHz, proposed standards call for GHz bandwidths, creating even more processing challenges. On top of that, carrier frequencies have moved from RF and low microwave well into mmWave, creating new challenges in fixturing and even connector-less measurements.
  • Big data. Not long ago, 40 Gb/s links failed for lack of market demand. Now we are seeing systems deployed at 100 Gb/s with a mad rush to 400 Gb/s. Recently I was speaking to a researcher who needed to move 10 petabytes of data across the country for big-data analysis. It's an incredibly difficult problem to solve. This requirement for data naturally leads to larger, more complex networking equipment. Boards are more complex, too: it's not uncommon to see boards that are 24 × 18 inches with dozens of layers. Simulation is rapidly improving, but compute farms don't have the capacity for those designs.
  • IoT. I recently met with a base-station design team that needs to evaluate radio performance with 50,000 devices communicating simultaneously. Since combining 50,000 separate signal generators is impractical, new methods are emerging that use design tools to generate signals from a single generator that appear to be coming from hundreds or possibly thousands of devices. In this case, the tables have turned: instead of simulation capacity being limited, it is the measurement capacity that is limited.
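The researcher's 10-petabyte problem above can be put in perspective with a back-of-the-envelope sketch. It assumes a single, fully utilized link and ignores protocol overhead, so a real transfer would take longer:

```python
# Back-of-the-envelope: time to move 10 petabytes over one fully
# utilized link. Ignores protocol overhead and assumes decimal
# petabytes — both simplifications for illustration.

def transfer_days(petabytes, link_gbps):
    """Days needed to push the given volume through one link."""
    bits = petabytes * 1e15 * 8          # petabytes -> bits
    seconds = bits / (link_gbps * 1e9)   # link rate given in Gb/s
    return seconds / 86400               # seconds -> days

print(f"at 100 Gb/s: {transfer_days(10, 100):.1f} days")  # ~9.3 days
print(f"at 400 Gb/s: {transfer_days(10, 400):.1f} days")  # ~2.3 days
```

Even a dedicated 400 Gb/s link stays busy for more than two days, which is why such requirements drive both larger networking equipment and new measurement challenges.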


These drivers require an inextricable connection between design and test. The old ways simply don't work. Only by combining the best of design simulation and test can amazing new products and communications systems be created. Take massive MIMO systems as an example. Instead of designing the entire system in simulation, a better workflow is:

  1. Architect the overall system at a behavioral level.
  2. Design a single channel, with perhaps near channel coupling included.
  3. Prototype the complete system.
  4. Measure it.


Using the measured results, the system-level design can be refined and improved. Prototyping will occur much earlier in the design cycle. Instead of measurements being done for validation, they become an integral part of the design process. This is more than theory. 5G advanced research labs around the world are using this exact approach.


Consider another example—designing a high-speed network card. These cards can have dozens of serial and parallel busses to transport data. Each channel is individually simulated and compliance tests verify performance against specs. Then the overall card is prototyped. The board is probed and measured and the exact same compliance test software used in simulation verifies performance against spec. Just as with the MIMO system, the overall high-speed system will be refined in simulation. Again, measurements are done earlier in the workflow and become an integral part of the design process.


I’ve been in this business a long time, and the changes we’re seeing now in communications systems will challenge our industry as never before. The way our industry designs and verifies new systems is changing quickly and for the better. It takes a new level of integration, a new way of thinking, and a realization that we’re entering a whole new era. I’m thrilled to be part of it.


Todd Cutler is Vice President of Design & Test Software for Keysight Technologies. Read his bio or view his keynote address at EDICON 2016.

Is your life ruled by electronics? Mine is. It seems that everything I do—from opening a garage door to adjusting the thermostat to tracking my heartbeats during a workout—relies on electronic sensors that are embedded not just in my devices but in my life. That’s why I’m always surprised to hear people refer to electronic measurement as a “mature” industry. Nothing could be further from the truth.


The electronics industry that I work in is a hotbed of innovation. It has evolved from the early days when we used sensors to measure the physical and operational nature of devices to confirm that they were working correctly. Today, we tie these sensors into distributed networks, and we connect them to centralized computing engines where state-of-the-art data analysis software makes sense of the data and puts it to work.  So we’re not only collecting vast amounts of measurement data but we are also analyzing it, communicating it, and acting on it.  In this way, the convergence of electronic measurement, centralized data analysis software and communication networks is enabling manufacturers to introduce innovations that were unimaginable a few years ago.  

In my role at Keysight, I see these five areas evolving fast, creating new business opportunities for entrepreneurs in electronics manufacturing—and interesting new challenges for companies like mine.

Transportation: Rolling to autonomy

A car is a rolling measurement system that collects vast amounts of data. Add data analysis and control and it becomes a proactive safety system that sees traffic slowing and applies the brakes, detects ice on the road and reduces power, and alerts you to a car in your blind spot, a deer in the road, or a bicycle behind you. The next evolutionary wave in transportation involves communications: Cars are communicating not just with the driver but with other cars and with regional transportation networks that control traffic flows through our busiest cities. Connected cars will reduce traffic jams, accidents, traffic-related injuries, emergency response times, car insurance rates, our use of fossil fuels, greenhouse gas emissions—and even the amount of time we spend commuting to work.


Healthcare: Giant steps towards personalized insight

You may already own a wristband or watch that can measure your steps, your heart rate, or your respiration while you sleep. Want to take it a step further? Add network-based data analysis and communications, and that same data-collection technology can literally save lives—for example, by dialing 9-1-1 if a heart patient has an event or providing an accurate health history to an emergency responder. It may even save you money on your health insurance: if you agree to share your data with your health insurance provider, you might be eligible for preferred rates by proving that you run for 30 minutes twice a week or get seven hours of sleep per night. In the near future, healthcare delivery and insurance rates will be customized to you and you alone—like everything else in your digital life.


Aging: There’s no place like home

By 2030, one in five Americans will be over the age of 65. An AARP study says that 87 percent of them want to age in place—meaning they hope to live in their homes, not in nursing homes. Sensors and actuators make it possible; centralized data analysis and communications make it feasible. In-home systems can monitor real-time health status, detect emergency situations such as falls, and notify healthcare providers if there's a problem. This has significant ramifications for the cost of healthcare since home-based care is a fraction of the cost of institutional care.


Retail: Delivery takes off

Five years ago, drone delivery was science fiction. Today, Amazon sounds a lot like NASA, using terms like ConOps (concept of operations) to evaluate low-altitude airspace, vehicle-to-vehicle (V2V) communications, command and control networks, and sense-and-avoid (SAA) technology. Nokia has already demonstrated a solution that maps a three-dimensional road system for Munich. The app identifies no-fly zones and landmarks, and ties into land- and space-based communication networks to allow retail drones to accurately deliver packages to their intended destination. It’s a great example of how measurement systems, data analysis software, and communications are creating not just new income streams but entirely new business models for established companies.


Workplace productivity: Making virtual a reality

In my career, the workplace has changed drastically and for the better. High-fidelity audio and video make it easy to have web meetings with people who are a thousand miles away. Camera-tracking technology automatically directs web video to the person who's speaking, just as though we were sitting across the table. Soon holographic meetings will be mainstream, so virtual teams will literally be in the same room. Think of the impacts on family life, travel budgets, the environment, and product design cycles alone.


The convergence of electronics measurement and control, data analysis, and communications will continue to present new business models and opportunities to innovators and entrepreneurs. The transformation is happening so fast that I can barely imagine the digital world my children will inhabit. Whatever their world looks like, I’m proud to say that electronics design and test companies like Keysight will be in the mix, helping to solve these challenges so entrepreneurial companies can continue to amaze us with their innovations.


What does your crystal ball tell you? Leave a comment or send me an email, and share your list of the hottest opportunities you see on the horizon.


Siegfried Gross is vice president and general manager of Automotive and Energy Solutions for Keysight Technologies. Read his bio.

I recall detailed conversations during the 2G-to-3G and 3G-to-4G transitions about the elusive “killer app” that would drive the ROI for a better mobile phone system. Our transition to 5G technology is no different, and while the term killer app does not pervade the vernacular like it once did, the concept lives on. And it lives on in an environment of significant doubt. Even comments on my first post indicate significant skepticism about the justification of such a large technical push: “Why would anyone ever need <insert your favorite 5G KPI here>[i] on their mobile phone?”


Perhaps the most skeptical are my own family, who are often subjected to dry runs of my 5G presentations. On July 25, however, I had the good fortune to receive a very telling, if somewhat flippant, email from my engineering-grad-student daughter (nope, not electrical engineering). I present an insightful excerpt:


Hi Daddy!


In the past two weeks, Pokémon GO has netted at least $35 million in revenue. While I haven't been able to find any analyses of how much total mobile data it has used, most online sources agree that one person spends about 20 MB of mobile data per hour of gameplay. This number is reduced significantly if you're in a Wi-Fi-rich area (e.g., on a college campus), and can be increased significantly if you use other data-chewing apps at the same time while you wait for the Pokémon GO servers to come back up.


Also, I have tried the game, and it is ridiculously fun, especially when half your lab group decides to walk to lunch and catch Pokémon together. For a smartphone game, it can be surprisingly social. Also, extremely nerdy. But we're Ph.D. students, so that ship sailed ages ago.


If 5G wireless can make my Pokémon hunting and battling more efficient and reliable, I'd be pretty happy. It is rife with possibility for showing off 5G capabilities. Now, I don't know much about these capabilities aside from what I've learned from you making me sit through the dry runs of your keynote talks, but in case you get stuck presenting the need for 5G to a room full of people my age, you could consider arguing the merits of Keysight's test equipment solutions on the basis of:


  • faster communication with the app's servers, so that the millions of users playing the game can crash them that much more effectively
  • more accurate location services inside buildings and outside, so that the game populates the area around a player with Pokémon more quickly
  • device-to-device communications, enabling live Pokémon battles or trades between users
  • working well in a crowd, so that when someone is caught up in a mass of hundreds of people all trying to catch the same **** Vaporeon, nobody gets trampled and (more importantly) nobody's connection slows down or drops


I argue that this millennial—raised on a certain video game, television, and trading-card phenomenon—has highlighted, in at least one market segment, key indicators of what 5G needs to address. The Pokémon GO phenomenon hardly requires 5G technology to drive the faddish behavior that created massive financial ecosystems around smartphones. But my lovely daughter, through the context of a relatively simple augmented reality (AR) game, highlights what really happens in our industry.


I ask my readers to simply look at the four points listed above—all of which, with significant improvement, will enable AR games that will make Pokémon GO look like Pong. I suggest that this is the shape of things to come. And I am not alone.


And if you think that trivial applications such as video games are perhaps not the best drivers for our industry, then take a hard look at the amount of money that entertainment generates in mobile communications. Some of us technical people will fret about the mountains of research, patents, prototypes, cell sites, networks, antennas, and software all apparently dedicated to little more than catching Mewtwo. Hey folks, it pays the bills.


[i] Choose from: 1 ms latency, 10 Gbps, device-to-device communications, 500 kmph mobility, etc., etc. (and see the NGMN White Paper for more).

Most of the students who take my Executive MBA class are working for companies that have an ocean and several time zones between their R&D and manufacturing teams. They’ll spend their careers looking for ways to close the gap in their supply chain so their companies can move faster, work more efficiently, and be more competitive.


As I mentioned in a previous post, the old concept of throwing a design “over the wall” to manufacturing is not feasible in a world where most companies are moving at warp speed, and introducing new products at a furious pace to meet global demand. Design and manufacturing teams need to be joined at the hip to keep up. The key is to make it easy for upstream teams to address downstream requirements in the earliest phases of design. Design teams know that it’s the right thing to do, they just need good tools. Here are four that have had a major impact on supply chain optimization at Keysight.


  1. Design guidelines


A while back, we developed a process at Keysight that automates the wire-bonding of microcircuits. The process boosts production throughput—but only if boards are laid out correctly. If a bonding pad is off by even a couple of millimeters, then the automated arm can't access the pad, and production is delayed while the design is reworked. That kind of scenario is why design guidelines are needed. Design guidelines provide detailed specifications so designers know their layouts are compatible with downstream processes. These are living documents: when an unforeseen event triggers a rework cycle, the guidelines are updated to eliminate the problem from future production runs. Design guidelines have the added advantage of getting designers and production engineers talking. Both sides have an equal voice in updating the guidelines, so if anyone sees an opportunity to improve efficiency or save steps, their input can be incorporated into the workflow. That kind of cross-discipline dialog expands institutional knowledge and ultimately reduces time, rework, and headaches on both ends.


  2. Preferred parts database


A preferred parts database (PPD) helps R&D teams quickly select parts that will work in their designs. The database should include known-good parts that are reliable, available, and can be purchased cost effectively. It saves design time by streamlining the decision-making process, and improves design quality by allowing R&D teams to focus on design refinements rather than part selection. Like the design guidelines, this also is a living document. The new product introduction (NPI) engineering team is responsible for keeping it up to date, and that can be a challenge. A preferred part can become a non-preferred part overnight if, for example, the supplier discontinues the part. Keeping the list current in real time saves design cycles later in the process.


  3. Common components


When I started working at Hewlett-Packard in the early ‘80s, we had hundreds of 100-ohm resistors in our inventory.  Keysight has a handful. By using a common set of components across multiple product lines and platforms, we save time and money in much the same way that automotive manufacturers gain efficiencies by sharing parts across models. It can be a hard sell with R&D teams. Designers often have visibility into a range of parts for a given function, so they may know of a cheaper part than the one being recommended in the PPD or common component library. In some cases, the savings may look significant. My advice:  Hold the line and trust your database. Introducing a new part into inventory means setting up, managing, and supporting a new part number. If it’s a one-off part, it will have low usage and no economies from buying in volume. And adding a part affects how other parts are used, so you may reduce volume purchasing power and increase your unit cost on other parts. If and when you do add components, make sure they have application across multiple product lines and thoroughly evaluate the business case and related costs.
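The hold-the-line argument can be sketched with a toy cost model. Every number here is hypothetical, chosen only to show how a fixed part-number overhead and a lost volume discount can swamp a lower unit price:

```python
# Toy cost model: reuse a preferred part vs. introduce a one-off part.
# All prices, volumes, and overheads are hypothetical illustrations.

def total_cost(units, unit_price, part_number_overhead=0.0,
               volume_discount=0.0):
    """Total cost = parts (after any volume discount) + the fixed cost
    of setting up, managing, and supporting a part number."""
    return units * unit_price * (1 - volume_discount) + part_number_overhead

# Reuse the preferred part: higher list price, but consolidated volume
# earns a discount and no new part-number overhead is incurred.
reuse = total_cost(units=1000, unit_price=0.50, volume_discount=0.15)

# The "cheaper" one-off part: lower list price, but a fixed cost to
# carry a new part number in inventory.
one_off = total_cost(units=1000, unit_price=0.40,
                     part_number_overhead=5000.0)

print(f"reuse preferred part: ${reuse:.2f}")   # $425.00
print(f"one-off part:         ${one_off:.2f}")  # $5400.00
```

Under these assumptions the "cheaper" part costs more than ten times as much once overhead is counted. The crossover point obviously moves with volume, which is why the business case must be evaluated explicitly before adding a part.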


  4. Rapid prototyping process


When you’re developing a new product, R&D will have ideas they want to try, and will turn to manufacturing to build prototypes for testing. If R&D and manufacturing are in different time zones, you might burn a couple of weeks on each prototype to learn that a design is not quite right or an idea is not feasible; co-located teams can often find out in hours or minutes. Colocation not only speeds new designs into volume production but also improves the quality of your products since you can evaluate more ideas in the limited time you have. 


Integrating design and manufacturing has far-reaching cultural and operational implications. It requires a shift in thinking at the management level and a change in workflow for design and manufacturing teams. That may sound like a heavy lift, but as I tell my MBA students, the right tools make it easy. In any case, it’s a fact of life in our global economy. The business benefits of integrating design and manufacturing are undeniable, which is why nearly every company I talk to is heading in that direction.



Pat Harper is Vice President at Keysight Technologies and an adjunct professor teaching global supply chain management in the Executive MBA program at Sonoma State University and project management in the Executive MBA program at the University of San Francisco.

Now to finish my series on predictions, let us turn to one of the more exciting concepts in 5G: the mobile and tactile wireless Internet. The terrible triumvirate of technology, policy, and business model is once again aligned against this one for 2020.


Roger’s claim: Wireless tactile Internet will not be commercial in 2020.


The objective of 1 millisecond end-to-end latency for virtual reality, automated or semi-autonomous vehicles, and even "remote surgery" (every policy-maker's favorite) has the trappings of science fiction. While I believe the applications will someday reward the investment, the combination of low latency and very high reliability presents significant technical challenges at all levels of network and UE implementation: air interface, network protocols, fronthaul, and backhaul technologies all require complete redesign.


The industry has actually moved away from such KPIs. I do not know if anyone has noticed, but when it comes to voice, the quality of service (QoS) and quality of experience (QoE) of all of our telecommunications systems is significantly worse than it was even five years ago. The mix of latency problems and reliability problems is obvious to those of us who regularly use our employers’ systems for teleconferencing with our mobile phones. And recall my comment about VoLTE in my introductory post.


One could argue that the automotive industry will aggressively drive these needs as it moves to autonomous vehicles. But automakers adopt new communications technology at a deliberately slow pace, braked by a huge installed base and justifiably heavy regulation.


If anyone wants to see how fast the automotive industry will move to 5G, just take a quick look at two topics. First, recall the protracted delay in shutting down the AMPS cellular system in the US: launched for voice in 1983 and ultimately used also for in-vehicle roadside-assistance services, it held on like grim death until 2008—the year the first LTE standard was adopted. Second, consider Dedicated Short Range Communications (DSRC): today, the auto industry is slowly adopting DSRC, a technology based on circa-1999 standards. As 2016 slides past us, the US DOT is only now considering passing a rule to mandate DSRC. A compelling presentation by NXP at the recent Brooklyn 5G Summit suggested even DSRC would not be mainstream in automotive until well after 2020.


I can also foresee enormous challenges with business model and policy issues once companies want to take full advantage of high-reliability and low-latency mobile communications. Investments will be significant and they will likely come in unexpected ways (foreshadowing a future post regarding entertainment as a significant driver). And I cannot wait to watch the circus that will evolve just in my own country around the Affordable Care Act and the insurance lobby when our government starts to wrangle the issues around remote surgery. As with mobile, multiple-access mmWave systems, the wireless tactile internet will come, but is going to be very much paced by that difficult triumvirate of technology, policy, and business model.


The quote "Predictions are difficult, especially about the future" has been attributed to at least two famous people, and I cannot wrap up this note without using these words as a disclaimer. In at least some of the cases I have described, I hope I am proven wrong. My intent is not to criticize the brilliant people working to move these technologies from vision to commercial reality, but to take a run at the hype generated by the equally brilliant people who have to build outbound marketing programs in the interim. Your mileage may vary…

Continuing to expand on my first post, let us now turn to another overused term in the communications industry: the Internet of Things (IoT). The challenges once again reside in the intertwined evolution of technology, policy, and business model. I may be accused of unbounded skepticism, but read on…


Roger’s claim: 5G wireless IoT will not be commercial in 2020.


Wireless IoT is upon us. We see it every day in the various widgets—fitness trackers, wireless cameras, and so on—that often consume more of our time than we spend on their associated activities.


So why do I say it will not be commercial in 2020? As with my examination of massive MIMO, it comes down to definition. The 5G vision is for "ubiquitous things communicating." 3GPP is well on the way to standardizing that vision, with LTE Machine-to-Machine (LTE-M) and narrowband IoT (NB-IoT) already released in 2016 standards. We can expect to see both commercialized in 2017.


There are also myriad (well, my last count exceeded 80) non-3GPP standards under development and in deployment by smaller consortia for various low-power wide-area or personal-area networks. But none of these is 5G, and a new air interface, proprietary or otherwise, is not enough for the tens of billions of connected devices coming in the next ten years (some claim trillions, but I will slay that myth in a future post).


To achieve massive connectivity, 5G developers must address two more technical challenges, and both are in the protocol stack. One is managing a new media access control (MAC) scheme that enables communications with limited or no use of the "ACK" (acknowledge) concept. This is required for effective management of device power and interference, and it is among the ideas being studied in the new 5G radio-access technology ("New RAT" or NR). Not only is the technology needed to enable such a "grant-less" system still emerging from research groups, but 3GPP is also focusing its Release 14 and 15 NR efforts on eMBB and URLLC rather than mMTC (and hence not yet on 5G IoT).


The second technical challenge involves messaging above OSI's layers 2 and 3. Addressing unique identifiers for huge numbers of devices using today's protocol standards would create networking overhead that consumes far more resources than the payloads themselves, burdening the mobile packet core (MPC) to the point of significantly affecting quality of service (QoS).
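The overhead concern is easy to quantify. The header sizes below are the fixed sizes defined for IPv6 and UDP; the payload sizes are illustrative assumptions:

```python
# Back-of-the-envelope: protocol overhead for tiny IoT payloads.
# The 40-byte IPv6 header and 8-byte UDP header are the sizes fixed by
# those standards; the payload sizes are illustrative assumptions.

IPV6_HEADER = 40   # bytes
UDP_HEADER = 8     # bytes

def overhead_ratio(payload_bytes):
    """Fraction of each packet consumed by headers rather than data."""
    headers = IPV6_HEADER + UDP_HEADER
    return headers / (headers + payload_bytes)

# A small sensor report: the headers dwarf the data.
print(f"12-byte payload:   {overhead_ratio(12):.0%} overhead")    # 80% overhead

# A bulk transfer: the same headers are negligible.
print(f"1400-byte payload: {overhead_ratio(1400):.0%} overhead")
```

At sensor-scale payloads, four of every five bytes carry addressing and transport bookkeeping rather than data—exactly the kind of overhead the new higher-layer 5G work must attack.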


But even if this new standard is complete by late 2018—in time for my rule-of-thumb gestation period of 18 months between “dry ink on a standard” and “commercialization”—the industry will have to recoup the investments currently being made in NB-IoT and LTE-M. This will almost certainly push 5G NR mMTC commercialization beyond the 2020 timeframe.


Adding another new IoT-ready air interface so soon after these efforts risks repeating past standards work that stagnated without commercial deployment. Thus, 2020 will see plenty of wireless IoT, but the 5G part—the part defined by the new radio interface and the associated higher layers of the network protocol—will have to wait for a future release.

In my two previous posts, I’ve discussed the factors affecting commercialization and claimed that some much-touted 5G technologies (e.g., millimeter-wave) will not be commercialized by 2020. Massive MIMO, on the other hand, starts with two factors heavily in its favor: implementation will require less policy change, and it has a potentially large benefit to mobile network operator (MNO) business models. Thus, developers can focus their energy and attention on the technical challenges.


Roger’s claim: Massive MIMO will be commercial in 2020.


At an IWPC meeting in the spring, representatives from China Mobile clearly stated that massive MIMO is implemented and running in its network. Some of the argument about the timing of massive MIMO depends upon one’s definition of the term.


Since Dr. Thomas Marzetta’s seminal paper in 2010, the term has come to mean just about anything with more than four antennas. What I will call the “academic definition” was clearly outlined in an excellent panel discussion (featuring Marzetta) at IEEE Globecom 2015 in San Diego. Under that definition, massive MIMO has the following attributes:

    • Is based on TDD (although Marzetta recently suggested that FDD may be possible)[i]
    • Uses only uplink pilots for the determination of channel state information (CSI)
    • Provides significant gain in performance even in a non-scattered and 100 percent line-of-sight (LOS) environment
    • Requires the number of antenna ports to greatly exceed the eight defined in the current 3GPP multi-user, full-dimension MIMO (MU FD-MIMO) standard


On the fourth point, I have seen some definitions of massive MIMO that focus mostly on getting the antenna count to at least eight; most of the other criteria included above appear to be less important. I simply do not consider eight to be “massive.”
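Whatever antenna count one accepts as “massive,” the payoff of going well beyond eight is easy to see numerically. Below is my own toy simulation of the “favorable propagation” effect with i.i.d. Rayleigh channels (not a claim about any particular deployment): as the antenna count M grows well past the user count K, different users’ channel vectors become nearly orthogonal, which is what lets simple linear processing separate them.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_cross_correlation(M, K=8, trials=200):
    """Average normalized correlation between distinct users' channel
    vectors for a base station with M antennas and K users."""
    acc = 0.0
    for _ in range(trials):
        # i.i.d. Rayleigh channels: M antennas x K single-antenna users
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        G = np.abs(H.conj().T @ H) / M       # K x K normalized Gram matrix
        off_diag = G[~np.eye(K, dtype=bool)]  # inter-user terms only
        acc += off_diag.mean()
    return acc / trials

# Inter-user correlation shrinks roughly as 1/sqrt(M):
for M in (8, 64, 256):
    print(M, round(mean_cross_correlation(M), 3))
```

At M = 8 the users’ channels still overlap substantially; by a few hundred antennas they are close to orthogonal, which is the mathematical heart of the academic definition.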


But my key point is that spatial multiplexing within cells—specifically to improve capacity, throughput, and especially energy efficiency—is mandatory for the 5G vision to become real. At the recent IWPC meeting, multiple MNOs agreed that their number-one OPEX item beyond depreciation was paying for electrical power. Massive MIMO’s approach of directing radio energy only where it is needed is a huge step—if the industry can manage the technical challenges and increased power demands of the incremental baseband processing and more-complex antenna schemes. The innovation that I have seen from those researching and prototyping this concept is very impressive.


Implementing the academic definition of massive MIMO puts relatively small demands on the user equipment (UE) design (i.e., fewer technical challenges), requires fewer policy changes, and has a potentially large benefit to MNO business models. China Mobile’s focus in this area is driven by annual energy consumption significantly north of 14 TWh, and that is motivation enough to make this technology work ASAP.[ii]
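For a sense of scale, a quick unit conversion (the 14 TWh annual figure is the one cited above; the rest is arithmetic):

```python
# Convert an annual energy figure to average continuous power draw.
annual_energy_twh = 14
hours_per_year = 365 * 24
avg_power_gw = annual_energy_twh * 1e12 / hours_per_year / 1e9
print(round(avg_power_gw, 2))  # roughly 1.6 GW of continuous draw
```

That is on the order of a large power plant running around the clock just for one operator’s network, so even modest percentage gains from beamformed, spatially targeted transmission translate into serious money.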


I have recently read statements from some MNOs suggesting that 5G “phase 2” (i.e., 2022 or later) will include massive MIMO, which would refute my claim. After all, MNOs are the ones who will dictate the timing for commercialization of any of these technologies. But given China Mobile’s clear statement and the lower technological, policy, and business-model hurdles, I think massive MIMO will see commercial reality by 2020.


Will users care? Probably—this is at least one facet of creating “great service in a crowd.” What do you think?


[i] See “Massive MIMO: ten myths and one critical question” in the February 2016 issue of IEEE Communications Magazine.

[ii] See “Toward green and soft: a 5G perspective” in the February 2014 issue of IEEE Communications Magazine.