All Places > Keysight Blogs > Insights Unlocked > Blog > 2016 > August

Continuing to expand on my first post, let us now turn to another overused term in the communications industry: Internet of Things (IoT). The challenges once again reside in the intertwined evolution of technology, policy and business model. I may be accused of unbounded skepticism but read on…

 

Roger’s claim: 5G wireless IoT will not be commercial in 2020.

 

Wireless IoT is upon us. We see it every day in the various widgets—fitness trackers, wireless cameras, and so on—that often consume more of our time than we spend on their associated activities.


So why do I say it will not be commercial in 2020? Similar to my examination of massive MIMO, it comes down to definition. The 5G vision is for “ubiquitous things communicating.” 3GPP is well on the way to standardizing that vision with LTE Machine-to-Machine (LTE-M) and narrowband IoT (NB-IoT) already released in standards in 2016. We can expect to see both of these commercialized in 2017.

 

There are also myriad (well, my last count exceeded 80) non-3GPP standards under development and in deployment by smaller consortia for various low-power wide-area or personal-area networks. But none of these are 5G and a new air-interface, proprietary or otherwise, is not enough for the tens of billions of connected devices coming in the next ten years (some claim trillions, but I will slay that myth in a future post).


To achieve massive connectivity, 5G developers must address two more technical challenges, both in the protocol stack. The first is a new media access control (MAC) scheme that enables communications with limited or no use of the “ACK” (acknowledge) concept; this is required for effective management of device power and interference, and is among the ideas being studied for the new 5G radio-access technology (“New RAT” or NR). Not only is the technology needed to enable such a “grant-less” system still emerging from research groups, but 3GPP is also focusing its Release 14 and Release 15 NR efforts on eMBB and URLLC rather than mMTC (and hence is not yet focused on 5G IoT).
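To see why dropping ACKs matters at scale, here is a minimal, hypothetical sketch of slotted random access without grants, where collided transmissions are simply lost. The device counts, slot counts, and transmit probability are illustrative assumptions, not figures from any 3GPP study:

```python
import random

def grantless_collision_rate(n_devices, n_slots, p_tx, trials=2000):
    """Estimate the fraction of transmissions lost to collisions when
    devices pick slots at random with no grants and no ACK-based retry."""
    lost = sent = 0
    for _ in range(trials):
        slots = [0] * n_slots
        for _ in range(n_devices):
            if random.random() < p_tx:  # device wakes up and transmits
                slots[random.randrange(n_slots)] += 1
                sent += 1
        # every transmission in a slot holding more than one is lost
        lost += sum(c for c in slots if c > 1)
    return lost / sent if sent else 0.0

# Loss climbs quickly as device density grows:
for n in (10, 50, 200):
    print(n, round(grantless_collision_rate(n, 50, 0.1), 2))
```

The trend, not the exact numbers, is the point: without acknowledgements, a grant-less MAC must either tolerate these losses or code around them.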

 

The second technical challenge involves messaging above OSI layers 2 and 3. Assigning and routing unique identifiers for huge numbers of devices using today’s protocol standards will create networking overhead that consumes far more resources than the payloads themselves, burdening the mobile packet core (MPC) to a point that significantly affects quality of service (QoS).
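Back-of-the-envelope arithmetic shows the imbalance. The 40-byte figure below is just a bare IPv6 header, an illustrative assumption that ignores the transport and application layers real stacks add on top:

```python
def overhead_ratio(payload_bytes, header_bytes=40):
    """Ratio of addressing/header bytes to payload bytes for one report.
    40 bytes approximates a bare IPv6 header; real stacks add more."""
    return header_bytes / payload_bytes

# A sensor reporting a 4-byte reading spends 10x more bytes on
# addressing than on the measurement itself:
print(overhead_ratio(4))  # 10.0
```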

 

But even if this new standard is complete by late 2018—in time for my rule-of-thumb gestation period of 18 months between “dry ink on a standard” and “commercialization”—the industry will still have to recoup the investments currently being made in NB-IoT and LTE-M. This will almost certainly push 5G NR mMTC commercialization beyond the 2020 timeframe.
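The timeline arithmetic behind that rule of thumb is straightforward; the function below simply adds the gestation lag to a hypothetical standard-freeze date:

```python
from datetime import date

def commercialization_estimate(ink_dry, gestation_months=18):
    """Earliest plausible commercialization, applying the rule-of-thumb
    lag between 'dry ink on a standard' and first deployment."""
    month = ink_dry.month - 1 + gestation_months
    return date(ink_dry.year + month // 12, month % 12 + 1, 1)

# A standard frozen in late 2018 lands mid-2020 at the earliest,
# before any NB-IoT / LTE-M investment recovery is factored in:
print(commercialization_estimate(date(2018, 12, 1)))  # 2020-06-01
```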

 

Adding another new IoT-ready air interface so closely on the heels of these efforts risks repeating past standards work that stagnated in the specification without ever reaching commercial deployment. Thus, 2020 will see plenty of wireless IoT, but the 5G part—the part defined by the new radio interface and the associated higher-layer network protocols—will have to wait for a future release.

In my two previous posts, I’ve discussed the factors affecting commercialization and laid claim that some much-touted 5G technologies (e.g., millimeter-wave) will not be commercialized by 2020. Massive MIMO, on the other hand, starts with two factors heavily in its favor: implementation will require less policy change, and it has a potentially large benefit to mobile network operator (MNO) business models. Thus, developers can focus their energy and attention on the technical challenges.

 

Roger’s claim: Massive MIMO will be commercial in 2020.

 

At an IWPC meeting in the spring, representatives from China Mobile clearly stated that massive MIMO is implemented and running in its network. Some of the argument about the timing of massive MIMO depends upon one’s definition of the term.


Since Dr. Thomas Marzetta’s seminal paper in 2010, the term has come to mean just about anything with more than four antennas. What I will call the “academic definition” was clearly outlined in an excellent panel discussion (featuring Marzetta) at IEEE Globecom 2015 in San Diego. This refers to something that has the following attributes:

    • Is based on TDD (although Marzetta recently suggested that FDD may be possible)[i]
    • Uses only uplink pilots for the determination of channel state information (CSI)
    • Provides significant gain in performance even in a non-scattered and 100 percent line-of-sight (LOS) environment
    • Requires the number of antenna ports to greatly exceed the eight defined in the current 3GPP MU FD MIMO standard
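The second bullet, uplink-pilot-only CSI, works because of “channel hardening”: as the antenna count grows, the per-user channel gain stops fluctuating. A small NumPy sketch (i.i.d. Rayleigh fading is an assumption for illustration) shows the effect:

```python
import numpy as np

def gain_fluctuation(num_antennas, trials=5000, seed=1):
    """Standard deviation of the normalized channel gain ||h||^2 / M
    for an i.i.d. Rayleigh channel with M antennas."""
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((trials, num_antennas)) +
         1j * rng.standard_normal((trials, num_antennas))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1) / num_antennas
    return float(np.std(gain))

# Eight antennas still fluctuate heavily; 128 barely do:
for m in (8, 64, 128):
    print(m, round(gain_fluctuation(m), 3))
```

The fluctuation shrinks like 1/sqrt(M), which is one reason eight antennas do not deliver the behavior the academic definition relies on.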

 

On the fourth point, I have seen some definitions of massive MIMO that focus mostly on getting the antenna count to at least eight; most of the other criteria included above appear to be less important. I simply do not consider eight to be “massive.”

 

But, my key point is that spatial multiplexing within cells, specifically to improve capacity, throughput, and especially energy efficiency, is mandatory for the 5G vision to become real. At the recent IWPC meeting, multiple MNOs agreed that their number-one OPEX beyond depreciation was paying for electrical power. Massive MIMO’s approach to limiting radio energy only to where it is needed is a huge step—if the industry can manage the technical challenges and increased power demands of the incremental baseband processing and more-complex antenna schemes. The innovation that I have seen from those researching and prototyping this concept is very impressive.

 

Implementing the academic definition of massive MIMO puts relatively small demands on the user equipment (UE) design (i.e., fewer technical challenges), requires fewer policy changes, and has a potentially large benefit to MNO business models. China Mobile’s focus in this area is driven by annual energy consumption significantly north of 14 TWh, and that is motivation enough to make this technology work ASAP.[ii]

 

I have recently read statements from some MNOs suggesting that 5G “phase 2” (i.e., 2022 or later) will include massive MIMO, thereby refuting my claim. After all, MNOs are the ones who will dictate the timing for commercialization of any of these technologies. But given China Mobile’s clear statement and the lower technological, policy, and business-model hurdles, I think massive MIMO will see commercial reality in 2020.

 

Will users care? Probably—this is at least one facet of creating “great service in a crowd.” What do you think?

 


[i] See “Massive MIMO: ten myths and one critical question,” IEEE Communications Magazine, February 2016.

[ii] See “Toward green and soft: a 5G perspective,” IEEE Communications Magazine, February 2014.

I saw a recent LinkedIn study that says most millennials who graduate from college this year will work for four different companies in the first decade of their career. I’m guessing my MBA students are amused when I tell them that early in my career, many engineers not only worked for a single employer for ten years, we sometimes worked on a single product for ten years.

 

How times have changed. Today, manufacturers like Keysight go from product idea to first shipment in a matter of months. It’s the “and/and” world we all live in: today’s products have to meet strict time-to-market windows and be produced at the lowest possible cost and be extremely high quality. The students who ace my class know that the answers are found in the supply chain: Tear down the wall between design and manufacturing, and you can create a single integrated supply chain that allows you to move at warp speed. Here are three keys to making that happen.

 

1. Make it easy for R&D to do the right thing.

When I started my engineering career, it was common practice for R&D teams to complete a new product design, then “throw it over the wall” to manufacturing so the product could be built. Many companies continue to use that same siloed business model because they think that introducing manufacturing requirements into the design process slows things down. I’ve found the opposite to be true. When manufacturing issues are addressed early, delivery schedules are accelerated. Deep down, R&D teams know that designing for manufacturing is the right way to go, and they’re happy to do it—as long as there are tools in place that make it easy to create manufacturing-friendly designs. At Keysight, we found that the ideal toolset includes:

  • Design guidelines to reduce rework in manufacturing
  • A preferred parts database that allows known-good parts to be procured in volume
  • Common components that are shared across platforms, reducing integration time and maintenance costs
  • Rapid prototyping models that allow design ideas to be validated or abandoned quickly
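As a hypothetical illustration of the preferred-parts idea (the part numbers and the tiny in-memory “database” here are invented, not Keysight’s actual tooling), the gate can be as simple as a set lookup run before design release:

```python
# Invented example data -- a real preferred-parts database would live
# in PLM/ERP tooling, not a Python set.
PREFERRED_PARTS = {"RES-0402-10K", "CAP-0603-100N", "IC-LDO-3V3"}

def check_bom(bom):
    """Return parts in a design's bill of materials that are not on the
    preferred list and therefore need review or substitution."""
    return sorted(set(bom) - PREFERRED_PARTS)

bom = ["RES-0402-10K", "CAP-0603-100N", "XTAL-CUSTOM-32K"]
print(check_bom(bom))  # ['XTAL-CUSTOM-32K']
```

The point is less the code than the workflow: the check runs while the design is still fluid, not after it is thrown over the wall.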

 


2. Co-locate design and manufacturing.

I know what you’re thinking. Too expensive, right? What I’ve learned in my career and what we’ve proven at Keysight is that keeping design and manufacturing separate is actually more expensive than co-location. With separate teams in different time zones, small delays in communication can have a big impact downstream. The takeaway is that if you’re planning to design and introduce a major new product in Spain, it pays to have a New Product Introduction (NPI) manufacturing team co-located with the design team. Direct contact improves communication, so minor tweaks and course corrections can be discussed and implemented in real time. It keeps launch schedules on track, reduces overall product costs, and accelerates time to revenue.

 

3. Get senior management on board.

When I talk with customers about supply-chain optimization and the far-reaching cultural and operational shifts that come with it, I always get the same question: How do you get senior-management buy-in for something like this? It’s a fair question because of the misperceptions that exist around design for manufacturing (“slows us down”) and co-location (“too expensive”). The reality is that senior decision makers in every area of the company—whether R&D, manufacturing, test, or procurement—are focused on time, cost, and quality. Supply-chain optimization touches all three. To prove it will work for your organization, identify a small project, find sponsors in R&D and manufacturing who are willing to do a test case, and track your results. When senior management sees the business case, they’ll get on board.

 

The business case at Keysight is compelling. Over the past five years, our integrated teams have cut annualized product failure rates by 50 percent, raised on-time arrival of new products to over 93 percent, achieved greater than 90 percent scheduling accuracy, and increased annual profit margins by reducing scrap, rework, and inventory. It’s a case study for how modern manufacturing can keep up with the disruptive pace of today’s technologies. It also makes a pretty good curriculum for the next crop of management executives who will soon occupy the C-suite.

 

Pat Harper is vice president and general manager at Keysight Technologies and an adjunct professor teaching global supply chain management in the Executive MBA program at Sonoma State University and Project Management in the Executive MBA program at the University of San Francisco. Read his bio.

As noted in my introductory post, the advent of 5G will be paced by three market forces: technology, policy, and business model. My last post referenced the past, but our topic is predictive, so let us cover the future of perhaps the most visible of all 5G enabling technologies:

 

Roger’s claim: Millimeter-wave for eMBB will not be commercial in 2020.

 

First, a couple of clarifications: my casual reference to “millimeter-wave” (mmWave) means the use of any carrier above 6 GHz, and “eMBB” implies “mobile” and “multiple-access.” “Mobile” (mobility) means tolerance to group delay at greater than walking speeds, combined with handovers and quality-of-service (QoS) management from the network. “Multiple access” means managing many diverse users and use models simultaneously. This implies contiguous bandwidths of perhaps as much as 1 GHz and associated peak data rates on the order of 10 Gbps.
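Those last two numbers set a demanding target. A quick Shannon-bound calculation (theoretical limit only, ignoring MIMO gains and implementation loss) shows the SNR a single 1 GHz stream would need:

```python
import math

def required_snr_db(rate_bps, bandwidth_hz):
    """Minimum SNR in dB (Shannon capacity bound) for a single stream
    to carry rate_bps within bandwidth_hz."""
    spectral_eff = rate_bps / bandwidth_hz  # bits/s/Hz
    return 10 * math.log10(2 ** spectral_eff - 1)

# 10 Gbps in 1 GHz means 10 bits/s/Hz -- about 30 dB SNR even at the
# theoretical limit, before fading, blockage, or hardware impairments:
print(round(required_snr_db(10e9, 1e9), 1))  # 30.1
```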

 

I have tested my prediction with network equipment manufacturers (NEMs) and mobile network operators (MNOs), and the initial responses have ranged from “You are right” to—are you ready for this?—“You are wrong.” Digging deeper yields greater clarity around the three big drivers, and all are stacked against mmWave.

 

Technology will not be ready. In spite of some impressive demonstrations of high-speed links—even mobile ones—in the rarified mmWave bands, the technology still has far to go. Although 802.11ad provides affordable mmWave communications with a truly elegant implementation, it is neither mobile nor multiple-access. Just a few examples that highlight the technical challenges include random-access, tracking, fading and blocking, and transceiver front-end design. Random-access and tracking alone are daunting: Where do you point your antenna first? How do you keep it pointed the right direction? How do you manage a directed and directional handover? All of these are getting serious attention so by 2020 we will likely have answers but probably not commercial deployment.
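To put rough numbers on the pointing problem, consider an exhaustive sweep over transmit and receive beam pairs; the beam counts and dwell time below are illustrative assumptions, not values from any standard:

```python
def beam_sweep_ms(num_tx_beams, num_rx_beams, dwell_us=10.0):
    """Worst-case time (ms) to exhaustively test every TX/RX beam pair,
    dwelling dwell_us microseconds on each combination."""
    return num_tx_beams * num_rx_beams * dwell_us / 1000.0

# 64 TX x 16 RX beams at 10 us each is ~10 ms just to (re)acquire a
# link -- painful if blockage forces frequent re-acquisition at speed:
print(beam_sweep_ms(64, 16))  # 10.24
```

Hierarchical or context-aided search can cut this dramatically, which is exactly why random-access and tracking are active research topics.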

 

Policy will not be in place for licensed bands above 6 GHz. Policy always lags technology, and spectrum policy is no exception. Five simple assumptions reinforce this position:

  • Policymakers must declare which bands will be licensed for mobile.
  • Licensing structure must be determined.
  • Licenses must be allocated to the licensees, typically through auction.
  • Incumbent users must be relocated so the spectrum can be re-farmed.
  • The legal fallout must be resolved.

 

Even if we overcome the plodding precedents of the past, doing so across each facet in the next three-plus years seems virtually impossible.

 

“But,” I hear astute readers exclaim, “on July 14, 2016, the FCC paved the way to aggressively license mmWave for 5G!” This is indeed a positive step, and arrived sooner than I expected. Arguably, the FCC also covered the second bullet—but getting through the others, and especially the last one, will take time.

 

Cost and business model will not be ready for associated applications. Mobility using mmWave will require a much greater density of base stations due to issues with signal propagation. Each new site will require backhaul (and perhaps fronthaul) of unprecedented capacity and speed. User equipment (UE) will require multiple antennas and perhaps support for multiple mmWave bands.
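Free-space path loss alone (Friis, ignoring the blockage and oxygen/rain absorption that make mmWave harder still) quantifies why site density must rise:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis equation)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Over the same 200 m link, 28 GHz suffers ~23 dB more loss than
# 2 GHz before any blockage is considered -- hence many more sites:
print(round(fspl_db(200, 28e9) - fspl_db(200, 2e9), 1))  # 22.9
```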

 


All of this new technology will require investment by MNOs and users. Killer apps will have to go way beyond 4K YouTube cat videos to drive the average revenue per user (ARPU) necessary to justify these investments. Many of the envisioned applications involve augmented reality (AR) or virtual reality (VR), implying costly UE devices. I do believe this is coming—demand for higher data rates is inexorable—but resolving this host of issues in less than four years is a longshot at best.

 

One more counterargument is Verizon’s claim it will commercialize mmWave 5G capability in 2017. But they have also explicitly stated that this is for fixed wireless only, at least at first. This is an admirable goal and I have no doubt it will be accomplished, but it does not achieve the “mobile” part of eMBB.

 

Will we have commercial mmWave systems in 2020? We already do with 802.11ad. It is possible that even policy will move fast enough for MNOs to implement fixed-wireless capabilities in licensed bands. But for mobile, multiple-access services that let us experience our 8K YouTube cat videos through VR goggles, I say we have a few more years to wait. I actually hope someone proves me wrong—I welcome all comers!