Insights Unlocked

I saw a recent LinkedIn study that says most millennials who graduate from college this year will work for four different companies in the first decade of their career. I’m guessing my MBA students are amused when I tell them that early in my career, many of us engineers not only worked for a single employer for ten years, we sometimes worked on a single product for ten years.

 

How times have changed. Today, manufacturers like Keysight go from product idea to first shipment in a matter of months. It’s the “and/and” world we all live in: today’s products have to meet strict time-to-market windows and be produced at the lowest possible cost and be extremely high quality. The students who ace my class know that the answers are found in the supply chain: Tear down the wall between design and manufacturing, and you can create a single integrated supply chain that allows you to move at warp speed. Here are three keys to making that happen.

 

1. Make it easy for R&D to do the right thing.

When I started my engineering career, it was common practice for R&D teams to complete a new product design, then “throw it over the wall” to manufacturing so the product could be built. Many companies continue to use that same siloed business model because they think that introducing manufacturing requirements into the design process slows things down. I’ve found the opposite to be true. When manufacturing issues are addressed early, delivery schedules are accelerated. Deep down, R&D teams know that designing for manufacturing is the right way to go, and they’re happy to do it—as long as there are tools in place that make it easy to create manufacturing-friendly designs. At Keysight, we found that the ideal toolset includes:

  • Design guidelines to reduce rework in manufacturing
  • A preferred parts database that allows known-good parts to be procured in volume
  • Common components that are shared across platforms, reducing integration time and maintenance costs
  • Rapid prototyping models that allow design ideas to be validated or abandoned quickly

 


2. Co-locate design and manufacturing.

I know what you’re thinking. Too expensive, right? What I’ve learned in my career and what we’ve proven at Keysight is that keeping design and manufacturing separate is actually more expensive than co-location. With separate teams in different time zones, small delays in communication can have a big impact downstream. The takeaway is that if you’re planning to design and introduce a major new product in Spain, it pays to have a New Product Introduction (NPI) manufacturing team co-located with the design team. Direct contact improves communication, so minor tweaks and course corrections can be discussed and implemented in real time. It keeps launch schedules on track, reduces overall product costs, and accelerates time to revenue.

 

3. Get senior management on board.

When I talk with customers about supply-chain optimization and the far-reaching cultural and operational shifts that come with it, I always get the same question: How do you get senior management to buy in to something like this? It’s a fair question because of the misperceptions that exist around design for manufacturing (“slows us down”) and co-location (“too expensive”). The reality is that senior decision makers in every area of the company—whether R&D, manufacturing, test, or procurement—are focused on time, cost, and quality. Supply chain optimization touches all three. To prove it will work for your organization, identify a small project, find sponsors in R&D and manufacturing who are willing to do a test case, and track your results. When senior management sees the business case, they’ll get on board.

 

The business case at Keysight is compelling. Over the past five years, our integrated teams have cut annualized product failure rates by 50 percent, raised on-time arrival of new products to over 93 percent, achieved greater than 90 percent scheduling accuracy, and increased annual profit margins by reducing scrap, rework, and inventory. It’s a case study for how modern manufacturing can keep up with the disruptive pace of today’s technologies. It also makes a pretty good curriculum for the next crop of management executives who will soon occupy the C-suite.

 

Pat Harper is vice president and general manager at Keysight Technologies and an adjunct professor teaching global supply chain management in the Executive MBA program at Sonoma State University and Project Management in the Executive MBA program at the University of San Francisco. Read his bio.

As noted in my introductory post, the advent of 5G will be paced by three market forces: technology, policy, and business model. My last post referenced the past but our topic is predictive, so let us cover the future of perhaps the most visible of all 5G enabling technologies:

 

Roger’s claim: Millimeter-wave for eMBB will not be commercial in 2020.

 

First, a couple of clarifications: My casual reference to “millimeter-wave” (mmWave) means the use of any carrier above 6 GHz; and “eMBB” implies “mobile” and “multiple-access.” “Mobile” (mobility) means tolerance to group-delay at greater than walking speeds combined with handovers and Quality-of-Service (QoS) management from the network. “Multiple access” means managing many diverse users and use-models simultaneously. This means contiguous bandwidths of perhaps as much as 1 GHz and associated peak data-rates on the order of 10 Gbps.
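
As a rough sanity check on what those numbers imply—my own back-of-the-envelope figures, not from any standard—10 Gbps delivered in 1 GHz of spectrum works out to roughly 10 bits/s/Hz:

```python
# Back-of-the-envelope check (illustrative assumptions, not from any 5G specification):
peak_rate_bps = 10e9      # assumed eMBB peak data rate (~10 Gbps)
bandwidth_hz = 1e9        # assumed contiguous channel bandwidth (~1 GHz)
spectral_efficiency = peak_rate_bps / bandwidth_hz
print(f"Implied spectral efficiency: {spectral_efficiency:.0f} bits/s/Hz")
# For comparison, 256-QAM carries 8 bits per symbol before coding overhead,
# so sustaining ~10 bits/s/Hz in practice also implies spatial multiplexing (MIMO).
```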

 

I have tested my prediction with network equipment manufacturers (NEMs) and mobile network operators (MNOs), and the initial responses have ranged from “You are right” to—are you ready for this?—“You are wrong.” Digging deeper yields greater clarity around the three big drivers, and all are stacked against mmWave.

 

Technology will not be ready. In spite of some impressive demonstrations of high-speed links—even mobile ones—in the rarefied mmWave bands, the technology still has far to go. Although 802.11ad provides affordable mmWave communications with a truly elegant implementation, it is neither mobile nor multiple-access. Just a few examples that highlight the technical challenges include random access, tracking, fading and blocking, and transceiver front-end design. Random access and tracking alone are daunting: Where do you point your antenna first? How do you keep it pointed in the right direction? How do you manage a directed and directional handover? All of these are getting serious attention, so by 2020 we will likely have answers but probably not commercial deployment.

 

Policy will not be in place for licensed bands above 6 GHz. Policy always lags technology, and spectrum policy is no exception. Five simple assumptions reinforce this position:

  • Policymakers must declare which bands will be licensed for mobile.
  • Licensing structure must be determined.
  • Licenses must be allocated to the licensees, typically through auction.
  • The incumbents must be re-farmed.
  • The legal fallout must be resolved.

 

Even if we overcome the plodding precedents of the past, doing so across each facet in the next three-plus years seems virtually impossible.

 

“But,” I hear astute readers exclaim, “on July 14, 2016, the FCC paved the way to aggressively license mmWave for 5G!” This is indeed a positive step, and arrived sooner than I expected. Arguably, the FCC also covered the second bullet—but getting through the others, and especially the last one, will take time.

 

Cost and business model will not be ready for associated applications. Mobility using mmWave will require a much greater density of base stations due to issues with signal propagation. Each new site will require backhaul (and perhaps fronthaul) of unprecedented capacity and speed. User equipment (UE) will require multiple antennas and perhaps multiple mmWave bands.

 


All of this new technology will require investment by MNOs and users. Killer apps will have to go way beyond 4K YouTube cat videos to drive the average revenue per user (ARPU) necessary to justify these investments. Many of the envisioned applications involve augmented reality (AR) or virtual reality (VR), implying costly UE devices. I do believe this is coming—demand for higher data rates is inexorable—but resolving this host of issues in less than four years is a longshot at best.

 

One more counterargument is Verizon’s claim it will commercialize mmWave 5G capability in 2017. But they have also explicitly stated that this is for fixed wireless only, at least at first. This is an admirable goal and I have no doubt it will be accomplished, but it does not achieve the “mobile” part of eMBB.

 

Will we have commercial mmWave systems in 2020? We already do with 802.11ad. It is possible that even policy will move fast enough for MNOs to implement fixed-wireless capabilities in licensed bands. But for mobile, multiple-access services so we can experience our 8K YouTube cat-videos with VR goggles, I say we have a few more years to wait. I actually hope someone proves me wrong—I welcome all comers!

Earlier this month, the Federal Communications Commission (FCC) decided to allocate nearly 11 GHz of spectrum for 5G mobile broadband use. If you need some good bedtime reading, try the 278-page document; for a concise summary, see “FCC OKs sweeping Spectrum Frontiers rules to open up nearly 11 GHz of spectrum.”

 

The FCC made this bold move to get out in front of the coming 5G technology wave, and its decision will help the rest of us focus our energies on the crucial innovations that will enable 5G.

 

The commissioners wisely chose to include 3.85 GHz of licensed spectrum and 7 GHz of unlicensed spectrum, supporting both types of business innovation. The newly allocated spectrum sits at 28 GHz, 37 GHz, 39 GHz and 64-71 GHz, and the FCC will seek additional comment on the bands above 95 GHz. The new unlicensed band (64 to 71 GHz) is adjacent to the existing 57 to 64 GHz ISM band, creating a 14 GHz band of contiguous unlicensed spectrum (57 to 71 GHz).

 

I am struck by the huge amount of high-frequency spectrum that has been allocated for future wideband mobile use. For an interesting comparison, look back at the spectrum that launched the first analog cellular systems in the US: the Advanced Mobile Phone System (AMPS) used 824-849 MHz and 869-894 MHz, a total of 50 MHz of spectrum. The FCC’s 5G spectrum decision allocates more than 200 times that amount, underlining the kind of bandwidth required to meet the aggressive goals of 5G.
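
The comparison is easy to verify; here is a quick tally of the figures quoted in the last two paragraphs (just arithmetic on the numbers above, nothing new):

```python
# Tallying the spectrum figures quoted above.
licensed_ghz = 3.85                        # licensed mmWave spectrum in the FCC order
unlicensed_ghz = 7.0                       # new unlicensed spectrum (64-71 GHz)
total_ghz = licensed_ghz + unlicensed_ghz  # ~10.85 GHz, i.e. "nearly 11 GHz"

amps_mhz = (849 - 824) + (894 - 869)       # AMPS: two 25 MHz blocks = 50 MHz
ratio = (total_ghz * 1000) / amps_mhz      # ~217x, i.e. "more than 200 times"

contiguous_unlicensed_ghz = 71 - 57        # existing 57-64 GHz plus new 64-71 GHz = 14 GHz

print(f"New 5G allocation:        {total_ghz:.2f} GHz")
print(f"Original AMPS allocation: {amps_mhz} MHz")
print(f"Ratio:                    {ratio:.0f}x")
print(f"Contiguous unlicensed:    {contiguous_unlicensed_ghz} GHz")
```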

 

FCC Chairman Tom Wheeler was very clear about how the FCC is approaching the 5G opportunity. In a recent speech, he said, “With today’s Order, we are repeating the proven formula that made the United States the world leader in 4G: one, make spectrum available quickly and in sufficient amounts; two, encourage and protect innovation-driving competition; and three, stay out of the way of market-driven, private sector technological development.”

 

To open up wide chunks of spectrum, the FCC had to reach for higher frequencies, which bring with them plenty of technical challenges. Millimeter-wave (mmWave) frequencies have higher path loss and undergo different effects from scattering, diffraction and material penetration. Also, mmWave components and subsystems are harder to design due to significant tradeoffs between energy efficiency and maximum power level. Compounding the difficulty, frequency bands below 6 GHz will also be critical for 5G deployment, working in concert with the mmWave bands. From this perspective, I see three areas that will require significant innovation on the path to 5G:

 

New channel models: Today, millimeter frequencies are often used for fixed terrestrial communication links and satellite communications. These tend to be stationary point-to-point links that don’t have to deal with radio mobility. At Keysight, we have been working with communications researchers at higher frequencies to develop channel models that are appropriate for mobile broadband use at mmWave. The higher-frequency, wideband nature of the channel and the dynamics of the mobile environment require more robust modeling approaches than those used for lower frequencies.


 

Beamforming needs to work: The remedy for higher signal loss is to increase the antenna gain and make it steerable, a technique commonly known as beamforming. This method focuses radio signals from an array of multiple antenna elements into narrow beams that can be pointed for maximum overall system performance. Wireless LAN at 60 GHz (802.11ad) offers 7 Gbps connectivity for short-distance or “in room” applications—and 802.11ad does implement beamforming to optimize signal strength. While some of this work will leverage into 5G, 802.11ad is neither mobile nor multiple access (handling multiple diverse users simultaneously). There’s more work to be done here.
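
To put rough numbers on the loss-versus-gain tradeoff, here is a minimal, idealized sketch—free-space loss only, with a distance I chose purely for illustration; real channels add the scattering, blockage, and penetration effects noted above:

```python
import math

# Idealized sketch: free-space path loss vs. antenna-array gain
# (illustrative distance; real channels add scattering, blockage, and
#  material-penetration losses on top of this).

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 200.0                                          # hypothetical link distance in meters
extra_loss = fspl_db(d, 28e9) - fspl_db(d, 2e9)    # ~23 dB, i.e. 20*log10(28/2)
print(f"Extra free-space loss at 28 GHz vs. 2 GHz: {extra_loss:.1f} dB")

# An idealized N-element array provides roughly 10*log10(N) dB of steerable gain.
for n in (16, 64, 256):
    print(f"{n:4d} elements -> ~{10 * math.log10(n):.0f} dB array gain")
```

The rough point: the extra 20-odd dB of free-space loss at 28 GHz is on the same scale as the gain a few hundred well-steered array elements can supply, which is why beamforming has to work for mmWave mobile broadband to be viable.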

 

New air interface: Not to be overlooked is the need for a new air interface to take advantage of wide spectrum (when available). This interface must be scalable by design so that it can deliver unprecedented high bandwidth while still performing well for lower-bandwidth applications. The aggressive goals for 5G also include improved spectral efficiency, low battery drain for mobile devices and low latency for IoT devices.

 

We’ve been here before: you may recall the difficult list of challenges associated with LTE (4G) technology. Just like 5G, LTE was an aggressive technology development pursued by the wireless industry. Somehow we got it done. Challenges like this drive innovation in electronic communications.

Upgrading consumer technology is often advertised as being “easy,” and sometimes it is. But it’s the things they don’t tell you that can drive you crazy. That new cell phone looks sleek and perfect—until you realize you now have a drawer full of charging cords you can’t use. That high-resolution, eight-inch display in the new car is beautiful—except now you have to pull over to the side of the road to change the radio station.

 

Refreshing test technology can be like that, too. There are advantages and pitfalls, but a basic truth is this: No matter how much better your test floor will run after a technology deployment, getting there requires some disruption. The key is to minimize it. In test environments, that means addressing at least the following four things before you sign the P.O.


 

Measurement capability

In my experience, customers who purchase new test equipment usually fall into one of three categories. Some buy too much functionality and end up overpaying for their particular needs. Others buy too little functionality and then need to allocate more funds to either upgrade their system or trade it in on a more powerful instrument. The third group—those who buy exactly the right amount of functionality and measurement accuracy for their unique needs—are in the sweet spot. To make sure you fall into that third category, take a fresh look at your products and development plan. Understand exactly what you need to be able to test today, and try to anticipate your test needs 18 to 24 months down the road. Chances are, your new test system will be faster, smaller, and maybe even simpler to operate, but if it can’t measure what you need it to measure today and a year from now, with the accuracy you need, nothing else matters.

 

Code compatibility

I’ve been told that the cost of creating new code for test sets can be up to three times the cost of the instrument. That’s a sobering thought. What’s more, the speed with which code is deployed affects how quickly you can get equipment up and running. Although many instrument suppliers claim to have code-compatible equipment, be careful and read the fine print. Many companies simply transfer existing commands to the new equipment, so your old programs run the same way on new equipment that they did on your old systems. Even though you purchased state-of-the-art equipment, performance stays the same. It’s like buying a new hybrid car and disabling the EV battery, so you’re getting the same gas mileage that you got with your old car. An even worse outcome is if performance degrades. With old programs running on the new equipment, measurements might not perform the same way, giving you inaccurate or misleading results.

 

Physical envelope

Does a new instrument need to fit into an existing rack? Is floor space limited or being reallocated? This is a case where a little planning goes a long way. Just as you measure the doorways at home before buying a new couch or a high-tech workout machine, make sure your new equipment fits the physical space you’ve allocated.

 

User interface

Does your new equipment have the right connections on the front and back panels? If not, there’s usually a workaround, so it’s not a showstopper. But like the new cell phone that has a completely different charger interface, it’s a good idea to avoid surprises—and deployment delays—whenever you can.

 

The good news is, there will always be newer technology. That’s the challenge, too. But with a little planning, your next technology refresh can be smooth and even transformative for your test floor. I won’t describe it as “easy,” but done right, it can be surprise-free.

 

What about you? What’s the most interesting surprise you’ve had when updating equipment in your test environment? I’m always interested to know what my fellow technologists are up to.

 

Duane Lowenstein is a Test Strategy Analysis Manager for Keysight Technologies. Read his bio.

Late seventeenth-century Europe saw the publication of Newton’s Philosophiae Naturalis Principia Mathematica, perhaps the preeminent document of the Enlightenment. While this and Newton’s other key works place him on the highest dais in the Pantheon of the scientific revolution, he was also devoted to experimentation with the transformation of base metals into gold. With our twenty-first-century perspective, it is difficult to reconcile Newton’s unparalleled understanding of nature with the relative absurdity of this faux science. During his lifetime, however, it was inappropriate to call this dichotomy into question.

 

As risky as it once was to challenge Newton’s work with alchemy, I have found it equally perilous to take a stance on separating myth (or hype) from reality in 5G technology. Because I am a self-taught industry analyst from the test and measurement business, you may question what I can surmise about what the giants of communications can and will do. But since we no longer burn witches, I will explore at least one facet of this topic in suggesting which parts of the 5G vision we can expect to be commercialized in 2020.


 

Now completed, the METIS 2020 Project envisioned 5G as “Amazingly Fast, Great Service in a Crowd, Best Service Follows You, Super Real-Time and Reliable Communications, and Ubiquitous Things Communicating.” In pursuit of this vision, their work-products show insightful thought on how we can measure progress in these areas. But on January 1, 2020, nobody will throw a “5G switch” causing this vision to explode into reality. As with every other generational change, 5G communications will grow slowly from subset functionality deployed in a few second-tier city-centers; and its growth will be fraught with the same kinds of challenges we saw with every previous generation—plus a few new ones.

 

There. I made my first prediction. The sentences above suggest some more significant underpinnings of what we can expect from those developing and implementing 5G.

 

My logic follows from the challenges that hindered previous generations. Does anyone remember voice-codec battles in 2G? (Male vs. female voices? Codec data-rates? Language and phoneme problems?) What about the devastating financial impact of the 3G spectrum auctions, especially in Europe? How about WiMAX vs. LTE? Or 3GPP vs. 3GPP2?

 

We can also see the continuing 3G-to-4G rollout challenges in front of us even now: the drastically different levels of maturity in the various carriers’ networks around the world; Europe’s latest legal implementation of flat roaming rates for mobile wireless; the list goes on. And if any of my dear readers believe that problems with managing voice are now behind us, have a conversation with anyone involved in the deployment (and, frustratingly, the use) of VoLTE.

 

All of the challenges in wireless—past, present and future—fit into a common framework: the intertwined evolution of technology, policy and business model. Thus far, the success of the industry is testimony to its ability to overcome the challenges posed by that daunting triumvirate.

 

Untangling the inherent dynamics can lead to a clearer understanding of how and when the challenges can be overcome. That clarity provides a foundation for industry participants and observers alike to sketch promising business plans—and this is part of my role inside of Keysight, looking for opportunities to create 5G test solutions that will help the industry drive forward to its future vision.

 

These themes continue in my next two posts as I dig deeper on four specific topics floating inside my crystal ball. Two of these are enabling technologies: millimeter-wave for enhanced mobile broadband and massive MIMO. The other two are potential applications of 5G mobile: wireless IoT and tactile wireless Internet. As a teaser, I state here that just one of these will be commercial by 2020. What’s your take?

Early in my career, I was leading a small R&D team when we found ourselves challenged to meet a key customer’s needs and timelines. A proposal was made for a “minimum-viable” product that could be delivered within two years. I asked my team: Should we spend the next two years of our lives, not to mention our intellect and energy, developing something that has already been done or is barely acceptable? Their answer was “No,” and what followed was a proposal for a highly differentiated product—the world’s first protocol exerciser for that technology, the descendants of which are still in use decades later in validation labs worldwide.

 

That formative experience turned my belief in innovation into an obsession. It may be an overused term, but innovation—call it disruption if you prefer—is the most important aspect of a technology company, yet it’s often sacrificed for expediency. It doesn’t have to be that way. Over the past three decades, I’ve learned that innovation is repeatable and sustainable if you do five things.

 

1. Refuse to be a fast follower.

The mistake we almost made on that early project was focusing on our competitor. If you set your strategy based on what your competitor is doing, then by definition, your competitor is the leader and you’re the follower. True, it’s important to keep tabs on competitors, but make sure your customers’ needs are driving strategy. What do they need to succeed? What problems are not being addressed? What new challenges are coming? You have to understand your customer’s business almost better than they do to identify challenges and opportunities. But that level of understanding puts you in a unique position. Rather than being a seller of products or services, you’re a value creator. You’ll find yourself creating products that don’t currently exist to solve problems your customers haven’t even identified. Think of it as a Declaration of Innovation: commit to doing something that hasn’t been done.


2. Replace your products.

 

I truly believe that resting on success is one of the biggest failures of most companies and teams. Yet it’s what large companies often do. That’s why disruption tends to come from new, nimble startups. Large companies make a big investment in their products, and if things go well, they see a steady income stream from the investment. It’s hard to walk away, but that’s exactly what needs to happen. We’re lucky in the electronics test business because each generation of new technology renders the last generation of test equipment obsolete. You can’t do a good job testing 5G technology, for example, with 4G equipment. Last quarter’s big breakthrough? It’s in the past. The best innovators accept it and move on.

 

3. Focus on opportunities, not obstacles.

When learning to ride a motorcycle, instructors usually teach you  to deal with emergencies by focusing on the path  you want to take, not the obstacle  you’re trying to avoid. The same is true for driving a company forward. Fix your gaze on where you want to go—on solving your customer’s Big Problem with a Big Idea. To be sure, obstacles will be placed in your path. You’ll hear from your internal teams that there’s not enough budget, not enough time, or too few resources to deliver what’s being asked. Don’t believe it. Convince your team that the opportunity is non-negotiable, and change the assumptions instead. Get resources reassigned to the project. Reallocate funds. Buy time by outsourcing. Revisit the timeline. Do whatever it takes to bring your Big Idea to life. Because while meeting schedules and staying on budget are important, that’s not what the customer will remember. Solve their problem, make them successful, and you and your team will be the stuff of legend.

 

4. Get specific.

Often I hear from teams, “If we want to do this, we’ll need 2X our current budget.” My response is almost always:  Tell me exactly how much you need, why you need it, and how it will be used. Only with specifics can you get to the root cause of a problem, and that’s where innovation is born. I learned that lesson years ago with a Japanese customer that asked my team to reprogram a pulse generator in a way that really wasn’t feasible. Rather than telling the customer they couldn’t have what they wanted, we got more details. We visited their site, dug into their business, and discovered they were trying to solve an entirely different problem around productivity. So we developed a new product that met their exact needs, became a best seller for us, and opened a major new revenue stream. 

 

5. Let your team lead.

Bill Hewlett and Dave Packard were technical geniuses, but I think one of their best inventions was on the management side of the business. They pioneered the concept of management by objective, or MBO, and it was the key to decades of innovation at Hewlett Packard. The concept behind MBO is that you define an objective, get agreement from the team that it’s the right objective, then let your team decide how to get there. And agreement means really agreeing, not dictating. It’s a conversation, an honest exploration of restrictions and ideas to build consensus. Once you have agreement, give your team the resources they need, then step aside and let them race forward. As long as the objective is clear, they’ll get there.

 

What are your keys to making innovation happen? How do you make it sustainable and repeatable? Leave a comment here and let’s keep the dream of innovation alive for a new generation of engineers.

 

Siegfried Gross is vice president and general manager of Automotive and Energy Solutions for Keysight Technologies. Read his bio.

Technology breakthroughs often have early beginnings in academia. Take, for example, Bill Hewlett and Dave Packard, who founded the electronics business that is now Keysight Technologies. It was during Hewlett’s master’s project at Stanford University that he developed the innovative technology for the audio oscillator, Hewlett-Packard’s first successful commercial instrument.

 

What makes some partnerships, specifically those with academia, effective and others not?

 


Based on my experience facilitating hundreds of such partnerships over the years, I repeatedly come back to the following four best practices for maximizing successful corporate collaboration:

 

  1. Cast a wide net for ideas. Involve your entire R&D team to brainstorm and source ideas for projects, professors, and universities with the leading-edge expertise your company needs. This is invaluable for identifying university research that is relevant to your company and that your people are genuinely interested in investing in. One approach is to put out a call to the broader R&D community soliciting proposals and ideas for possible partnerships that align with corporate interests.
  2. Align goals. Expect to have a lot of conversations before you find a good fit with an institution or a professor. Creating alignment between the corporate researcher and the academic researcher may be an iterative process.
  3. Consider only those opportunities that have an owner and advocate. Making sure there is someone back in your business who will drive and lead the relationship with the academic institution ensures continued relevancy and progress toward mutual success. It will also naturally limit which ideas make it over the threshold for investment. Take, for example, Keysight engineer Bernd Nebendahl, who was pivotal in forging an industry-academia partnership that resulted in a new communications modulation scheme. The impetus for this research collaboration came from industry concerns about the “capacity crunch” in worldwide data traffic and the need for next-generation optical communication tools.
  4. Enable face-to-face engagement. Preferably, match your key contact with an institution that is close enough geographically for regular face-to-face interactions. Even though ‘virtual’ collaboration is becoming increasingly common, there is nothing as valuable as meeting in person. In one research collaboration between Keysight Labs in Santa Clara, CA, and neighboring Stanford University, the close proximity facilitated frequent visits and the easy exchange of equipment needed for the experimental setups and data-analysis methods for measuring parameters of interest.

 

Companies that engage universities get access to research in areas they may not be able to fund for themselves, not to mention in areas that the fast-paced, results-oriented business world doesn’t have the patience to nurture.  They may also get access to specialized infrastructure or capabilities (such as test bed environments for experimental research).   Another often-overlooked benefit is that company employees involved in the collaboration are able to deepen or broaden their knowledge without the impracticalities of long sabbaticals to go back to school.

 

For academic institutions, partnering with an industry leader provides in-depth, current, end-user insight they may not have access to otherwise.  If the relationship is set up well, institutions get access to industry-savvy people who lend a practical bent to their research.  The collaboration may also add credibility to grant proposals, provide additional funding, or furnish them with equipment they may not otherwise afford.

 

As we look ahead, it won’t be too long before the next waves of technology arrive in our lives, whether in devices that control our homes, drive our vehicles, or make us even more connected. Corporate collaboration with academia, which in Keysight’s case means bringing test and measurement into education, is driving faster and more relevant technological advances.

 

Where have you participated in an industry-academia partnership to stimulate innovation? Do you have additional best practices to share?

 

Kent Carey is the Director of University Relations and Research Services at Keysight Technologies.

During our annual spring-cleaning, my wife commented that we have not made any upgrades to our home since we bought it nearly ten years ago. I mentioned that we had the exterior painted, had the driveway sealed, replaced appliances, and got new floor coverings. Those count, right? Sure, she said, but those are maintenance items. She’s ready to renovate: New kitchen, new bathroom, new fixtures, new floors. She was looking at our house through a 10- to 20-year lens, and making a long-term investment in a house we like and a neighborhood we love. I was looking just a few years out, thinking we might downsize to a new home in two or three years when we’re empty nesters. Both are valid points of view whether you’re talking about renovating a home or a test environment.

 

At work the following day, I was discussing options with my team for dealing with aging test stands and test equipment that was approaching obsolescence. Should we spend our budget extending the life of the current technology, migrate incrementally to newer technology, or start from scratch with a fully modern test floor? Each strategy is viable, each has its pros and cons, but like the home discussion I had with my wife, there are considerations that go beyond purchase price.

 

Short view? Extend.

How long do you need to use the equipment you have? If you just need two or three more years out of it, then extending the life of your equipment is the easiest way to go. You just need to keep the equipment running—there’s no need to research new technology, write new software, or re-validate and requalify measurements. There are drawbacks, of course. As I mentioned in a previous post, it gets harder to find replacement parts for older equipment, downtime tends to increase, and the speed and accuracy of older equipment tends to lag behind newer equipment. Some companies use what I call the “eBay strategy” to cope: they stockpile older instruments for spare parts. It can work for a few years, but after that, the test systems and stockpiles often return to eBay.

 

Long view? Modernize.

New technology is faster, more accurate, and can revitalize your test program with more capability, more features, and equipment that’s covered under warranty. It comes at a cost, though, and not just hardware. You’ll also need to address software, measurement verification, racking and stacking, training, code compatibility (if needed), and new processes on the test floor. Modernization works great when there is an inflection point in the technology you need to test on your product. It’s also a compelling strategy if there’s a breakthrough in test technology that will make you more competitive, or if you have long-term production needs that require an upgrade in throughput or capacity. Like renovating a house, modernization can be daunting. But many times I’ve seen companies achieve a 100 percent return on investment in as little as six months, and the competitive advantages can be dramatic.

 

Mid-range view? Migrate.

With an incremental migration, you replace only the underperforming assets—the oldest, slowest, least accurate, or least reliable instruments. This allows you to minimize hardware and software changes, and incrementally increase reliability, reduce test time, increase accuracy, and minimize downtime. You’ll have some capital costs for new equipment, and you may have to address code compatibility, requalify some of your products, and continue to deal with occasional unplanned downtime. But for production lines that need to keep running without interruption for four to ten years or more—typical of the aerospace and defense industry, for example—this can be a good strategy.

 

There’s no one-size-fits-all answer to building the right test environment, but there is a common set of questions that should be asked. How well is your current environment working? How long do you need it to work? What are your competitors doing? What products are coming? What do your budgets look like today versus next year? What’s your total cost for testing your products? How much could you improve test results by replacing one, two, or three instruments? The key is to look at your test investment from all angles, move forward with a plan, and be willing to re-evaluate as conditions change.

 

Duane Lowenstein is a Test Strategy Analysis Manager for Keysight Technologies.

 


 

I have a friend who owns a classic old BMW. He loves that car. It’s fast, reliable, and fits him like a glove. It has plenty of bells and whistles—nothing like the newer models, of course, which are rolling data centers. But that’s part of the appeal. There are fewer parts to break. Except that now, when they do break, it’s an odyssey. Replacement parts are getting harder to find and can be shockingly expensive. And it’s harder to find technicians who know the model and can diagnose problems. The numbers just aren’t what they used to be. A car that for years was inexpensive to operate and easy to own is suddenly throwing big, unbudgeted expenses at its owner.


 

Test systems are a lot like that old BMW. With steady maintenance, they can run flawlessly and improve your bottom line—until they don’t. At some point, even the best-maintained systems give way to repairs that cost a little more, take a little longer, and leave you with unplanned downtime. As you begin to see increases in maintenance costs and downtime, you’ll know it’s time for a technology refresh. The good news is, you can expect to see four bottom-line impacts almost immediately when you upgrade your test systems.

 

1. Higher throughput

Technology keeps advancing, so newer test systems are not only faster but also more accurate. The cost savings here are compelling. If you currently need, say, nine test stations for your product line, and new test systems are 33 percent faster, you can get the same throughput with six test stations that you’re currently getting with nine.
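
The station-count arithmetic is easy to check; here is a quick sketch, assuming “33 percent faster” means each test takes one-third less time (my reading of the figure, not a measured result):

```python
# Quick check of the station-count arithmetic above.
# Assumption (mine): "33 percent faster" means each test takes one-third less time,
# so per-station throughput rises by a factor of 1 / (1 - 1/3) = 1.5.
old_stations = 9
throughput_gain = 1.0 / (1.0 - 1.0 / 3.0)    # ~1.5x per station
new_stations = old_stations / throughput_gain
print(f"Per-station throughput gain: {throughput_gain:.2f}x")       # 1.50x
print(f"Stations needed for the same output: {new_stations:.0f}")   # 6
```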

 

2. Lower OPEX and maintenance

To carry the example further, some would argue that it costs less to own nine fully depreciated test stations than to purchase six new test stations. I would argue the opposite. Nine test stations require nine operators whereas six test stations require six—a recurring savings that repeats year after year. Six test stations also mean you’re consuming one-third less floor space, one-third less power for heating and cooling, and one-third less electricity to run the systems.

 

What’s more, older systems typically need to be calibrated every year whereas newer models have a two- or three-year calibration cycle. So you’re reducing not only the number of systems you’re maintaining but also the amount of maintenance required for each.

 

3. Less downtime

Unplanned downtime is a special kind of misery and an expense that’s hard to control. It upends manufacturing schedules and deadlines, interrupts nights and weekends, jeopardizes commitments, and bleeds profit. Some companies never recover. Newer equipment tends to be more reliable, and repairs are covered under warranty. In some cases—as with Keysight instruments, for example—a standard three-year warranty can be extended up to ten years, so you can know exactly what your maintenance costs will be for the life of the equipment.

 

4. Lower CAPEX

New equipment triggers a near-term increase in capital expenses, but after the initial bump, CAPEX trends lower over the life of the equipment. And remember that a spike in CAPEX is offset almost immediately with lower OPEX, overhead, and tax deductions. Many of the companies I work with see 100 percent payback on a modernization investment within a few years, then ride a downward slope on CAPEX for the rest of the time they own the equipment.
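
To see how that offset plays out, here is a deliberately simplified payback sketch; every figure in it is hypothetical, so substitute your own CAPEX and OPEX numbers:

```python
# Deliberately simplified payback sketch; every figure here is hypothetical.
capex = 600_000                 # hypothetical cost of the new test stations
annual_opex_savings = 250_000   # hypothetical savings: fewer operators, less floor
                                # space and power, longer calibration cycles
payback_years = capex / annual_opex_savings
print(f"Simple payback: {payback_years:.1f} years")   # 2.4 years, i.e. "within a few years"
```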

 

It’s a fact that every machine reaches end-of-life at some point. Having consulted with over 100 companies on major technology upgrades, I can attest to a simple truth: Planned upgrades cost less and provide better outcomes than emergency fixes. Your mileage may vary, but in general, newer test systems are faster, more reliable, more accurate, and have a lower total cost of ownership. And let’s face it, that new-car smell is pretty nice once in a while.

 

What business benefits could a technology refresh have for your company? What is stopping you from starting a modernization initiative?

 

Duane Lowenstein is a Test Strategy Analysis Manager for Keysight Technologies.