Keysight Blogs > Insights Unlocked > Blog > 2016 > September

I recall detailed conversations during the 2G-to-3G and 3G-to-4G transitions about the elusive “killer app” that would drive the ROI for a better mobile phone system. Our transition to 5G technology is no different, and while the term killer app does not pervade the vernacular like it once did, the concept lives on. And it lives on in an environment of significant doubt. Even comments on my first post indicate significant skepticism about the justification of such a large technical push: “Why would anyone ever need <insert your favorite 5G KPI here>[i] on their mobile phone?”

 

Perhaps the most skeptical are my own family, who are often subjected to dry runs of my 5G presentations. On July 25, however, I had the good fortune to receive a very telling, if somewhat flippant, email from my engineering-grad-student daughter (nope, not electrical engineering). I present an insightful excerpt:

 

Hi Daddy!

 

In the past two weeks, Pokémon GO has netted at least $35 million in revenue. While I haven't been able to find any analyses of how much total mobile data it has used, most online sources agree that one person spends about 20 MB of mobile data per hour of gameplay. This number is reduced significantly if you're in a Wi-Fi-rich area (e.g., on a college campus), and can be increased significantly if you use other data-chewing apps while you wait for the Pokémon GO servers to come back up.

 

Also, I have tried the game, and it is ridiculously fun, especially when half your lab group decides to walk to lunch and catch Pokémon together. For a smartphone game, it can be surprisingly social. Also, extremely nerdy. But we're Ph.D. students, so that ship sailed ages ago.

 

If 5G wireless can make my Pokémon hunting and battling more efficient and reliable, I'd be pretty happy. It is rife with possibility for showing off 5G capabilities. Now, I don't know much about these capabilities aside from what I've learned from you making me sit through the dry runs of your keynote talks, but in case you get stuck presenting the need for 5G to a room full of people my age, you could consider arguing the merits of Keysight's test equipment solutions on the basis of:

 

  • faster communication with the app's servers, so that the millions of users playing the game can crash them that much more effectively
  • more accurate location services inside buildings and outside, so that the game populates the area around a player with Pokémon more quickly
  • device-to-device communications, enabling live Pokémon battles or trades between users
  • working well in a crowd, so that when someone is caught up in a mass of hundreds of people all trying to catch the same **** Vaporeon, nobody gets trampled and (more importantly) nobody's connection slows down or drops

 

I argue that this millennial—raised on a certain video game, television, and trading-card phenomenon—has highlighted, in at least one market segment, key indicators of what 5G needs to address. The Pokémon GO phenomenon hardly requires 5G technology to drive the faddish behavior that created massive financial ecosystems around smartphones. But my lovely daughter, through the context of a relatively simple augmented reality (AR) game, highlights what really happens in our industry.

 

I ask my readers to simply look at the four points listed above—all of which, with significant improvement, will enable AR games that will make Pokémon GO look like Pong. I suggest that this is the shape of things to come. And I am not alone.

 

And if you think that trivial applications like video games are perhaps not the best things to drive our industry, then take a hard look at how much of the money generated in mobile communications comes from entertainment. Some of us technical people will fret about the mountains of research, patents, prototypes, cell sites, networks, antennas, and software all apparently dedicated to little more than catching Mewtwo. Hey folks, it pays the bills.

 

[i] Choose from: 1 ms latency, 10 Gbps, device-to-device communications, 500 km/h mobility, etc., etc. (and see the NGMN White Paper for more).

Most of the students who take my Executive MBA class are working for companies that have an ocean and several time zones between their R&D and manufacturing teams. They’ll spend their careers looking for ways to close the gap in their supply chain so their companies can move faster, work more efficiently, and be more competitive.

 

As I mentioned in a previous post, the old concept of throwing a design “over the wall” to manufacturing is not feasible in a world where most companies are moving at warp speed, and introducing new products at a furious pace to meet global demand. Design and manufacturing teams need to be joined at the hip to keep up. The key is to make it easy for upstream teams to address downstream requirements in the earliest phases of design. Design teams know that it’s the right thing to do, they just need good tools. Here are four that have had a major impact on supply chain optimization at Keysight.

 

  1. Design guidelines

 

A while back, we developed a process at Keysight that automates the wire-bonding of microcircuits. The process boosts production throughput—but only if boards are laid out correctly. If a bonding pad is off by even a couple of millimeters, the automated arm can't access the pad, and production is delayed while the design is reworked. That kind of scenario is why design guidelines are needed. Design guidelines provide detailed specifications so designers know their layouts are compatible with downstream processes. These are living documents: when an unforeseen event triggers a rework cycle, the guidelines are updated to eliminate the problem from future production runs. Design guidelines have the added advantage of getting designers and production engineers talking. Both sides have an equal voice in updating the guidelines, so if anyone sees an opportunity to improve efficiency or save steps, their input can be incorporated into the workflow. That kind of cross-discipline dialog expands institutional knowledge and ultimately reduces time, rework, and headaches on both ends.
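The bonding-pad scenario is exactly the kind of rule a guideline document can encode and a script can enforce before a board ever reaches the line. Here is a minimal, hypothetical sketch of such a check; the `Pad` fields, the 0.5 mm tolerance, and the pad names are all invented for illustration, not Keysight's actual guideline values:

```python
from dataclasses import dataclass

@dataclass
class Pad:
    name: str
    x_mm: float           # pad position on the submitted layout
    y_mm: float
    nominal_x_mm: float   # position the bonder program expects
    nominal_y_mm: float

def check_pads(pads, tolerance_mm=0.5):
    """Return (name, worst offset) for pads placed beyond tolerance."""
    violations = []
    for pad in pads:
        dx = abs(pad.x_mm - pad.nominal_x_mm)
        dy = abs(pad.y_mm - pad.nominal_y_mm)
        worst = max(dx, dy)
        if worst > tolerance_mm:
            violations.append((pad.name, round(worst, 3)))
    return violations

pads = [
    Pad("U1-A", 10.0, 5.0, 10.0, 5.0),   # on target
    Pad("U1-B", 12.8, 5.0, 10.0, 5.0),   # off by 2.8 mm -> rework
]
print(check_pads(pads))  # [('U1-B', 2.8)]
```

Running a check like this at layout submission turns the guideline from a document designers must remember into a gate the workflow enforces automatically.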

 

  2. Preferred parts database

 

A preferred parts database (PPD) helps R&D teams quickly select parts that will work in their designs. The database should include known-good parts that are reliable, available, and can be purchased cost effectively. It saves design time by streamlining the decision-making process, and improves design quality by allowing R&D teams to focus on design refinements rather than part selection. Like the design guidelines, this too is a living document. The new product introduction (NPI) engineering team is responsible for keeping it up to date, and that can be a challenge. A preferred part can become a non-preferred part overnight if, for example, the supplier discontinues the part. Keeping the list current in real time saves design cycles later in the process.
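As a toy illustration of the selection logic such a database enables, consider this hypothetical sketch; the part numbers, fields, and prices are invented, and a real PPD would of course live in a managed database rather than a list of dicts:

```python
# Invented example records: one preferred and stocked, one cheaper but
# non-preferred, one preferred but discontinued by the supplier.
parts = [
    {"pn": "R-0100-A", "value": "100 ohm", "preferred": True,  "in_stock": True,  "unit_cost": 0.012},
    {"pn": "R-0100-B", "value": "100 ohm", "preferred": False, "in_stock": True,  "unit_cost": 0.009},
    {"pn": "R-0100-C", "value": "100 ohm", "preferred": True,  "in_stock": False, "unit_cost": 0.010},
]

def select_part(parts, value):
    """Return the cheapest preferred, available part for a value, or None."""
    candidates = [p for p in parts
                  if p["value"] == value and p["preferred"] and p["in_stock"]]
    return min(candidates, key=lambda p: p["unit_cost"], default=None)

choice = select_part(parts, "100 ohm")
print(choice["pn"])  # R-0100-A: cheaper non-preferred parts are filtered out
```

The point of the filter is that the designer never has to weigh the cheaper non-preferred option at all; the database has already made that call.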

 

  3. Common components

 

When I started working at Hewlett-Packard in the early '80s, we had hundreds of 100-ohm resistors in our inventory. Keysight has a handful. By using a common set of components across multiple product lines and platforms, we save time and money in much the same way that automotive manufacturers gain efficiencies by sharing parts across models. It can be a hard sell with R&D teams. Designers often have visibility into a range of parts for a given function, so they may know of a cheaper part than the one being recommended in the PPD or common component library. In some cases, the savings may look significant. My advice: hold the line and trust your database. Introducing a new part into inventory means setting up, managing, and supporting a new part number. If it's a one-off part, it will have low usage and no economies from buying in volume. And adding a part affects how other parts are used, so you may reduce volume purchasing power and increase your unit cost on other parts. If and when you do add components, make sure they have application across multiple product lines and thoroughly evaluate the business case and related costs.
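To see why a cheaper one-off part can still lose the business case, a back-of-the-envelope comparison helps. Every number below is invented for illustration; the structure of the argument, not the figures, is the point:

```python
# Hypothetical annual comparison: stay with the common part, or
# introduce a cheaper one-off part and absorb the side effects.
ANNUAL_UNITS     = 5_000
common_unit_cost = 0.012   # current common-part price
oneoff_unit_cost = 0.009   # the "cheaper" alternative's price
new_pn_overhead  = 400.0   # setting up, managing, stocking a new part number
lost_discount    = 0.001   # per-unit price rise as common-part volume drops

common_total = ANNUAL_UNITS * common_unit_cost
oneoff_total = (ANNUAL_UNITS * oneoff_unit_cost
                + new_pn_overhead
                + ANNUAL_UNITS * lost_discount)

# common: 60.00, one-off: 450.00 -- the per-unit "savings" vanish
print(f"common {common_total:.2f}  one-off {oneoff_total:.2f}")
```

With these (invented) numbers, the fixed overhead of a new part number dwarfs the per-unit saving at low volume, which is exactly the "hold the line" argument above.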

 

  4. Rapid prototyping process

 

When you’re developing a new product, R&D will have ideas they want to try, and will turn to manufacturing to build prototypes for testing. If R&D and manufacturing are in different time zones, you might burn a couple of weeks on each prototype to learn that a design is not quite right or an idea is not feasible; co-located teams can often find out in hours or minutes. Colocation not only speeds new designs into volume production but also improves the quality of your products since you can evaluate more ideas in the limited time you have. 

 

Integrating design and manufacturing has far-reaching cultural and operational implications. It requires a shift in thinking at the management level and a change in workflow for design and manufacturing teams. That may sound like a heavy lift, but as I tell my MBA students, the right tools make it easy. In any case, it’s a fact of life in our global economy. The business benefits of integrating design and manufacturing are undeniable, which is why nearly every company I talk to is heading in that direction.


--

Pat Harper is Vice President at Keysight Technologies and an adjunct professor teaching global supply chain management in the Executive MBA program at Sonoma State University and project management in the Executive MBA program at the University of San Francisco.

Now, to finish my series on predictions, let us turn to one of the more exciting concepts in 5G: the mobile and tactile wireless Internet. The terrible triumvirate of technology, policy, and business model is once again aligned against this one for 2020.

 

Roger’s claim: Wireless tactile Internet will not be commercial in 2020.

 

The objective of 1 millisecond end-to-end latency for virtual reality, automated and semi-autonomous vehicles, and even “remote surgery” (every policy-maker’s favorite) has the trappings of science fiction. While I believe the applications will someday reward the investment, the combination of low latency and very high reliability presents significant technical challenges at all levels of network and UE implementation: the air interface, network protocols, fronthaul, and backhaul technologies all require complete redesign.

 

The industry has actually moved away from such KPIs. I do not know if anyone has noticed, but when it comes to voice, the quality of service (QoS) and quality of experience (QoE) of all of our telecommunications systems is significantly worse than it was even five years ago. The mix of latency problems and reliability problems is obvious to those of us who regularly use our employers’ systems for teleconferencing with our mobile phones. And recall my comment about VoLTE in my introductory post.

 

One could argue that the automotive industry will aggressively drive these needs as it moves to autonomous vehicles. But automakers adopt new communications technology at a deliberately slow pace, braked by a huge installed base and justifiably heavy regulation.

 

If anyone wants to see how fast the automotive industry will move to 5G, just take a quick look at two topics. First, recall the protracted delay in shutting down the AMPS cellular system in the US: launched for voice in 1983 and ultimately used also for in-vehicle roadside-assistance services, it hung on like grim death until 2008—the year the first LTE standard was adopted. Second, consider Dedicated Short Range Communications (DSRC): today, the auto industry is slowly adopting DSRC, a technology based on circa-1999 standards. As 2016 slides past us, the US DOT is only now considering a rule to mandate DSRC. A compelling presentation by NXP at the recent Brooklyn 5G Summit suggested even DSRC would not be mainstream in automotive until well after 2020.

 

I can also foresee enormous challenges with business model and policy issues once companies want to take full advantage of high-reliability and low-latency mobile communications. Investments will be significant and they will likely come in unexpected ways (foreshadowing a future post regarding entertainment as a significant driver). And I cannot wait to watch the circus that will evolve just in my own country around the Affordable Care Act and the insurance lobby when our government starts to wrangle the issues around remote surgery. As with mobile multiple-access mmWave systems, the wireless tactile Internet will come, but it is going to be very much paced by that difficult triumvirate of technology, policy, and business model.

 

The quote “Predictions are difficult, especially about the future” has been attributed to at least two famous people, and I cannot wrap up this note without using these words as a disclaimer. In at least some of the cases I have described, I hope I am proven wrong. My intent is not to criticize the brilliant people working to move these technologies from vision to commercial reality, but to take a run at the hype generated by the equally brilliant people who have to build outbound marketing programs in the interim. Your mileage may vary…