
If I close my eyes, lean back in my chair and take a breath, I can almost hear the echoes of kids’ voices. Tri-fold cardboard displays and colorful ribbons are splayed across rows of folding tables. I’m at my 4th grade science fair and I’ve just won first place for my solar-powered hot dog cooker.


Solar Energy School Project
A lot of things have changed since then, but the scientific method used for my science project still stands: make an observation, form a hypothesis, conduct an experiment, capture and analyze the data and draw conclusions. Sounds simple enough, right?

 

In my previous blog, I talked about not being afraid to take your first step in data analytics (DA) for test and measurement (T&M), and I outlined tips on getting started. But then what?


If you want that coveted blue ribbon, read on.

Our Ambitions May Be Tripping Us Up

From where I sit in data analytics for T&M, I often see my customers – brilliant scientists, engineers, architects, project managers – get hung up towards the end of the scientific method process. After observing, hypothesizing and conducting the test, they struggle with managing and analyzing the data and drawing confident conclusions.

 

It’s easy to understand why: we’re all so focused on getting to market fast – on getting to market first – that we lose the rigor we had when we started engineering.

 

Some steps are taken to collect and analyze data, but they are often misguided: engineers reach for complex Excel spreadsheets, pivot tables, custom MS Access databases and other home-grown tools. These tools look good on the surface, but they ultimately don’t deliver – they come up short, and most can be used only once, and usually poorly.

 

Despite the best intentions, most don’t have the resources, time or expertise to deal with data or data analysis in an effective way. And it’s not just about dealing with data from one step in the product lifecycle. DA for T&M happens throughout the lifecycle:

Design -> Validation and characterization -> Release to manufacturing -> Full-scale production

So how do you find the right tools to help with your messy data?

 

More Data, More Confidence

It’s a basic concept: the more accurate measurements you make (and the more of them), the more confident you’ll be in your conclusions.

 

I wrote in my previous blog about how you must have your Design of Experiments (DOE) planned before you do anything. What questions are you trying to answer?

 

If you build a car, you’re not just going to ask, “Does it turn on?” You’ll want to know, what is its maximum speed? How will we measure speed? What kind of gas mileage does it get? How will we measure gas mileage? Will people survive in a fender bender? How many measurements do we need to ensure confidence in our eventual conclusions?

 

Your DOE can’t be set in stone. It needs to be fluid and you need a tool that allows you to change your DOE on the fly and not spend days – or even weeks – going back and forth with your own personal (and expensive!) IT database architecture team to get things just right. And you certainly don’t want your Team Lead spending weeks on end buried in her cubicle designing a system that only she understands or that blows up with one little change in your DOE plan.

 

Figure out what questions you have (and how you’ll get answers), and understand you’ll probably have more down the road.

 

Get Your Hands Off the Data

Imagine: you walk into a bakery and it smells of fresh cinnamon rolls. The baker rushes out to help you. Mouth watering, you point through the glass case at the squishy, doughy roll. The baker takes your $2 with his flour-dusted hands and returns a grimy quarter. He hands you your roll with his bare hands and rushes back to continue his work.

Is something wrong with this picture? Yes! The baker should hire a clerk – let’s call her a retail expert – to focus on helping customers. And the baker should focus on what he does best: bake.

 

It’s not much different in our world. If your designer is responsible for building cars, she shouldn’t be graphing data from her T&M. She needs to focus on what she does best and not waste valuable time formatting and processing data.

 

Keep your experts doing what they do expertly.

 

Make a Decision, but Make It with Confidence!

Analyzing big data
Eventually, you need to make a decision about your product.

 

Is it time to build a set of prototypes? 

 

Can you launch to market? 

 

What will be your production yield? 

 

You can’t make a decision with confidence unless you have good data – and the insight it provides.

 

There are hundreds of information sources to help you determine how to achieve 99% confidence based on measured data. Online articles, data entry tools and tables help you aggregate data and determine how much of it you need to reach that desired level of confidence (a quick sample-size sketch follows the list below). But first...

  1. Figure out the questions you want answered (remember your DOE!) before you begin collecting and analyzing data, and...
  2. Don’t overlook your testing environment. Would my solar-powered hot dog cooker from 4th grade perform the same on a cloudy day as it would on a sunny one? Of course not. Your results won’t line up if you’re not accounting for what’s going on in your testing environment.
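
Here is a minimal sketch of that sample-size math. It assumes roughly normal measurement noise with a standard deviation you can estimate from a pilot run; the function name and the numbers are made up for illustration, not taken from any particular tool.

```python
# How many measurements do I need so that the estimated mean lands within
# +/- `margin` of the true value at the requested confidence level?
# Assumes approximately normal measurement noise with a known (or
# pilot-estimated) standard deviation -- an illustrative assumption.
from math import ceil
from scipy.stats import norm

def samples_needed(sigma, margin, confidence=0.99):
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value
    return ceil((z * sigma / margin) ** 2)

# Example: 0.5 dB of measurement noise, want the mean known to within 0.1 dB.
print(samples_needed(sigma=0.5, margin=0.1, confidence=0.99))  # -> 166
```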

Once you have your questions figured out and your testing environment set, it’s time to find your data analytics tool. 

 

A good tool will easily adapt with you. 

A poor tool will cause headaches, delays in your design and, even worse, cost a lot of money.

 

Product Lifecycles are Global and Your Data Should Be, Too

You have a globally distributed team, so your data analytics tool can’t be located on Some Guy’s laptop. And your data can’t be understood/analyzed/charted by just Some Guy who might be spending half of his time on LinkedIn looking for that next big engineering gig, when he should be designing driverless cars. It’s even worse if Some Guy is your top designer and he’s spending 50% of his time wrangling data (possibly poorly).

 

Data and decisions need to be communicated across your distributed organization throughout the product’s lifecycle. You need a team collaboration tool that is understood by multiple people, across your organization, and that extends throughout the entire lifecycle.

 

Here’s a real-life scenario that might make you sweat. Susan, a VP of validation engineering, wants to fabricate a prototype of an important new design. Tom, her top engineer, has been managing the vast amount of test data and analysis with his own system. She asks him if they’re ready to fabricate the design as he’s the only person on this highly skilled team who has the ability to analyze the data, and he says he sees no concerns. Long story short? Tom is an expert engineer, but not an expert at all facets of the design – and the others on the team were not involved in the analysis. This mistake cost the company $1 million and three months as they built something that didn’t fully work.

 

Imagine if all your company’s presentations had to be created using Adobe Photoshop. The hardship this would cause in terms of time, productivity, and expertise! Your data analysis tool needs to be as intuitive and easy to use as PowerPoint is when putting together a presentation.

 

Don’t Reinvent the Wheel

If you have butterflies in your gut when talking to your manager about a conclusion you’ve come to, it’s time to look into your data analytics tool. Explore what’s out there and test it with a small project. Report back and let me know how it goes.

 

In my next blog, we’ll dive a little deeper into decision-making with data analytics and figure out how to move from validation to manufacturing.

Corporate Social Responsibility (CSR) – also referred to as corporate citizenship or sustainability – has morphed from the right thing to do several years ago to a business imperative today, requiring detailed monitoring, reporting, and accountability to support various stakeholder requirements. This shift, and the ongoing global transformation, requires diligence to ensure corporate CSR strategies are aligned to industry progress in this area while supporting global communities and business commitments.

 

As an individual who grew up with, and continues to have, a strong sense of community and volunteerism across the socio-economic spectrum, I am thrilled with these developments in the business environment. And as a business leader, I have seen first-hand how social responsibility results in a multi-faceted win for communities, individuals and corporations alike. Just one example is the response to the recent devastating wildfires in Santa Rosa, California, where Keysight is headquartered.

 

Keysight Strong sticker rolls during the Santa Rosa fire crisis
Supported by our CEO, Ron Nersesian, our executive staff, and existing CSR-related programs, Keysight was able to quickly offer affected employees emotional support through our employee assistance program, facilitate site emergency response and health and safety information, and provide funding of $1,000 to displaced employees and $10,000 to those who lost homes to help in their personal recovery. Donations of clothing, necessities and funds from our global employee community were funneled to a Santa Rosa relief center we had set up to distribute them not only to Keysight employees and their extended families, but to anyone in the local community who needed assistance. It was the right thing to do, we had the programs in place to support it, and I was proud to be a part of Keysight’s contribution. If you are interested in learning more, Keysight’s Brand Manager recently published details about the company’s response to the Tubbs fire crisis in another post.

 

As a CSR executive sponsor, however, I have experienced the challenges many corporations face to address the ever-evolving standards and industry guidance in this broad, cross-functional space. Instituting an effective CSR program model that supports the company and global community while ensuring it continues to meet today’s, and tomorrow’s, stakeholder requirements takes significant commitment to balance company resources, programs and obligations.

 

The Right Thing to Do is Now Required 

For more than 75 years – as part of Hewlett-Packard, then Agilent, and now a separate company – Keysight has acknowledged its responsibility to help address global social and environmental challenges. Beyond just being the right thing to do, it is part of our company DNA going back to HP founders Bill Hewlett and Dave Packard.

 

Long before companies were required to monitor, track and report on such activities, Bill and Dave built a strong brand culture – referred to as “The HP Way” – that included company sponsored philanthropic, education, community and sustainability programs. They didn’t do this because they had to, they did it because they knew it made good business sense! In the words of Bill Hewlett, the HP Way “is a core ideology … which includes a deep respect for the individual, a dedication to affordable quality and reliability, a commitment to community responsibility, and a view that the company exists to make technical contributions for the advancement and welfare of humanity.”

 

Flash forward more than 75 years, and this core value continues in our company DNA today. However, the broader CSR landscape has changed dramatically. While the main tenets remain intact, supporting programs in this area are now business success imperatives. Here's why:

  • I regularly review environmental, social and governance (ESG) industry ratings and reports to gauge Keysight’s effectiveness. Our investors are doing the same. As a Barclays report stated, “many responsible investors believe that ESG criteria are material to future business success and, ultimately, to financial performance.”1
  • Customers need us, as their supplier, to help meet their own CSR commitments and growth strategies. According to Accenture, corporations work with suppliers "to turn supply chain sustainability into a driver of competitive advantage."2
  • Beyond that, today’s highly desirable, skilled workforce wants to work for sustainable and ethical companies. The 2016 Cone Communications Millennial Employee Engagement Study noted that 76 percent of millennials consider a company’s social and environmental commitments when deciding where to work, and 64 percent won’t take a job if a company doesn’t have strong CSR values.3

No corporation, Keysight included, can be successful without positive relationships with its investors, customers and workforce. Thus, the CSR space has become a business imperative that creates sustainable business value while driving competitive differentiation. Again, the recent Santa Rosa wildfire response is a great example.

 

From a business perspective, the unintended but positive consequence of the company’s response was our employees recognizing that Keysight has supported them and their communities in times of need – and will again. As a result, Keysight employees worldwide responded by doubling down on their day-to-day jobs, and in some cases taking on more responsibility in the short term, to make sure the company recovered as quickly as possible and continued to meet customer and market commitments. Such events illustrate how critical CSR programs are to the success of local communities, employees and companies. It just makes business sense.

 

As with any business-critical program, periodic reviews, rightsizing and evolution of corporate CSR programs are necessary to navigate emerging industry standards and guidance in this space. At Keysight, we used our company formation as the impetus to embark on a 6-step journey to evolve our CSR program model, aligning it with industry trends and with our new brand and business strategy. More on this journey in my upcoming posts.

 

In the meantime, feel free to share how CSR has shaped and supported your business success.

 

Sources: 

1 "Sustainable investing and bond returns," Barclays Report

2 "Why a sustainable supply chain is good business," Accenture

3 “2016 Cone Communications Millennial Employee Engagement Study,” Cone Communications

The emergence of 5G mobile communications is set to revolutionize everything we know about the design, testing, and operation of cellular systems. The industry goal to deploy 5G in the 2020 timeframe demands mmWave over-the-air (OTA) requirements and test solutions in little more than half the time taken to develop the basic 4G MIMO OTA test methods we have today.

 

If you remember anything from this blog post, know this:

"At mmWave frequencies, we are going to have to do all of our testing radiated, and not just some of it like we do today for LTE, and that's a BIG deal."

 

 

First, a bit of background on the move from cabled to radiated testing, and then I’ll discuss the three main areas of testing that we're going to have to deal with: RF test, demodulation test, and radio resource management (RRM).

 

Millimeter-wave devices with massive antenna arrays cannot be tested using cables because it will not be possible to add a connector for every antenna element. The dynamic (active) nature of antenna arrays also means it isn’t possible to extrapolate end-to-end performance from measurements of individual antenna elements. So yes, for testing 5G, it really is time to throw away the cables…whether we want to or not!

Keep calm because we are going over the air

 

Correctly Modelling the mmWave Channel is the Key to Designing a 5G System That Actually Works

5G millimeter wave new radio design modelling

 

A new radio design starts with the reality of the deployment environment, in this case a mmWave one. How this behaves isn’t a committee decision; it’s governed by the laws of physics, which are not up for debate! Next, we model the radio channel, and once we have a model, we can design a new radio specification to fit it. Then we design products to meet the new radio specifications, and finally we test those products against our starting assumptions in the model. If we have got it right—in other words, if the model sufficiently overlaps with reality—then products that pass the tests should work when they are deployed in the real environment. That's the theory. We know how to run this process at low frequencies, but for mmWave there is a big step up: the difference in propagation conditions is enormous, and our understanding of it is still growing.
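
To put a rough number on that step up, here is a back-of-the-envelope sketch of my own (not from the 5G specification work) using the standard Friis free-space path-loss formula; the frequencies and the 100 m distance are arbitrary examples.

```python
# Free-space path loss (Friis) in dB: a first-order view of why mmWave
# propagation is such a change -- loss grows with 20*log10(frequency),
# before blockage and reduced diffraction at mmWave are even considered.
from math import log10, pi

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    return 20 * log10(distance_m) + 20 * log10(freq_hz) + 20 * log10(4 * pi / C)

for f_ghz in (2, 28, 39):
    print(f"{f_ghz} GHz at 100 m: {fspl_db(100, f_ghz * 1e9):.1f} dB")
# 2 GHz  ->  ~78.5 dB
# 28 GHz -> ~101.4 dB (about 23 dB more than at 2 GHz)
# 39 GHz -> ~104.3 dB
```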

 

Now let’s look at the scope of radio requirements that we're going to have to validate—that is, what we have to measure, and critically, the environment or channel in which we measure them.

 

The Scope of 5G mmWave OTA Testing

5G millimeter wave OTA test requirements

For RF, it’s about what is already familiar—power, signal quality, sensitivity—and those are all measured in an ideal line-of-sight channel. With regards to demodulation, throughput tests will be done in non-ideal (faded) conditions, as was the case for LTE MIMO OTA. There we had 2D spatial channels, but for mmWave the requirement will be 3D spatial channels because the 2D assumptions made at low frequencies are no longer accurate enough. In addition, we need to include spatial interference, since the omnidirectional interference assumed for LTE is no longer realistic at mmWave due to narrow beamwidths. Radio resource management (RRM) requirements cover signal acquisition and channel-state information (CSI) reporting, signal tracking, handover, and so on. That environment is even more complicated because now we’ll have a dynamic, multi-signal 3D environment, unlike the static geometry used for the majority of demodulation tests.

 

Balancing Out 5G mmWave Opportunities and Challenges

5G millimeter wave OTA test opportunities and challenges

 

The benefits of 5G and mmWave have been well publicized. There's a lot of spectrum that will allow higher network capacity and data rates, and we can exploit the spatial domain and get better efficiencies. However, testing all of this has to be done over the air and that presents a number of challenges that we have to solve if we're going to have satisfied 5G customers.

  • We know that we're going to have to use active antennas with narrow beams in user devices and base stations, and those are much harder to deal with than fixed antennas with wide beams.
  • We know that spatial tests are slower than cabled, so you can expect long test times.
  • We've got the whole issue of head, hand and body blocking on user devices—it isn't being treated as a priority for Release 15 within 3GPP but will nevertheless impact customer experience.
  • We know that OTA testing requires large chambers and is expensive.
  • We know OTA accuracy is not as good as cabled testing—we're going to have to get used to that, particularly at mmWave frequencies where provisional uncertainties are above 6 dB.
  • Channel models for demodulation and RRM tests haven't been agreed upon yet, which is impacting agreement on baseline test methods for demodulation and RRM.

 

Takeaways

With 5G mobile communications, there's a paradigm shift going on because of mmWave. We used to work below 6 GHz and the question we asked was, "How good is my signal?" That question led to the development of non-spatial conducted requirements. The question now for mmWave is, "Where is my signal?" That's going to lead to the development of 3D spatial requirements which can only be validated using OTA testing. This is a fundamental shift in the industry that will impact the entire design and test flow.

 

It’s going to be a tall order testing 5G mmWave devices. In spite of the unknowns, Keysight is committed to getting our customers on the fastest path to 5G. Stay tuned as Keysight continues to roll out 5G testing methodologies and system solutions. Explore the 5G resources currently available.

 

This article is an adaptation of Moray's original post published in the Next Generation Wireless Communications Blog, where you can connect with our industry and solution experts as they share their experiences, opinions and measurement tips on a number of cellular and wireless design and test topics that matter to you.

Tubbs fire aftermath, Santa Rosa
When the Tubbs fire, dubbed the most destructive and costly in California history, swept through Santa Rosa, we at Keysight found ourselves at ground zero of the mandatory evacuation zone, rushing to assess the impact on our headquarters and our 1,500 employees and their families. It tested our crisis management and leadership skills beyond what we could have prepared for, exposed the true nature of our values, and changed us in indelible ways. It also left us with a monumental challenge: how, and how fast, to rebuild. This is what we learned.

 

Preparedness is essential but only takes you so far.

A vetted and practiced crisis playbook proved indispensable; so did local and global crisis response teams pre-assigned to critical roles and ready to spring into action. But every crisis is unique, and this one was massive. Families were evacuated in the middle of the night. Tens of thousands were displaced, and nearly five thousand homes were destroyed. We were left to balance the established process with the unexpected and dynamic nature of the fire, the blinding speed at which it unfolded, and the myriad related crises it created.

 

A strong leadership shadow drives action.

CEO Ron Nersesian at the helm during the Santa Rosa fire crisis
The fire broke out at 10 p.m. and, fanned by 50 mile-per-hour winds, reached Santa Rosa by 1:30 a.m. With CEO Ron Nersesian out of the country, the rest of the executive team had to deploy the crisis response in the middle of the night, making on-the-spot decisions for the first 13 hours while Nersesian got on the first flight back. The team set up a command center away from the fires, directed immediate action and decided on employee aid and compensation. Our ability to take the helm during the crisis was enabled by the strong leadership shadow Nersesian had cast in his 3-year tenure as CEO.

 

 

It really does take a village.

#KeysightStrong banner put up to unite and encourage the community during the Santa Rosa crisis

We expected the rest of our 145 sites around the world to focus on business continuity. The crisis, however, proved just too big for the executive and crisis teams to handle by themselves. With unreliable cell and internet coverage, employees outside Santa Rosa organized phone trees and deployed multiple forms of communication to reach impacted colleagues. A software team in Atlanta had an SMS text solution working within hours, as well as a public website for matching requests with aid. The social media team in Colorado used its channels to help employees keep track of one another. Another team set up a charitable fund. Critical to the effort was our ability to use these extended teams across the company to solve problems that could be addressed remotely.

 

Business continuity is not business-as-usual.

Keysight customers are innovators who win or lose in the market based on being first and best and whose timelines don’t leave room for equipment delays. We had to put mitigation actions in place, from special customer outreach, to loaner equipment, to addressing predatory actions from competitors, keeping our customer-facing teams around the world on alert.

 

No task too small or far-fetched.

By day 3, the crisis team had hardly slept. There was no time to eat. The CMO and CFO shopped for supplies for the command center, including a pillow and two dog beds for a makeshift bed for the crisis commander – who hadn’t slept in 36 hours. Another leader offered to come to the crisis response lead’s home to “sit there and get food and water.” An R&D director researched and set up an external call center within 2 hours to take calls from the mounting number of employees needing assistance. Whatever was needed, whatever it took… we set aside roles and titles to do the right thing.

 

Rebuilding in the aftermath takes more than we think

...and longer than we expect.

Rebuilding the Keysight community after the Tubbs fire aftermath

Employees who had minutes to evacuate left their homes with only their families in tow and the clothes on their backs. Rebuilding from a large-scale crisis takes time well beyond when the last embers of the fires are extinguished, and requires much more than re-opening the company’s doors.

 

Address physical and emotional needs. The makeshift relief center sponsored by the company addressed basics like phone chargers, underwear and bottled water. But it also became a place people could come to connect, help, get support, and receive counseling – from professionals as well as colleagues who were themselves crisis survivors. We learned to start with the most basic, then build from there as we understood other needs.

 

Acknowledge the heart. We had to remember these were people’s homes, families, and friends affected. CEO Nersesian’s messages to employees emphasized people first, response and resources second. We learned we couldn’t take a fact-based, checklist-driven approach when people’s lives were intertwined with the crisis.

 

Offer respite. When personal life is in flux, work can become something to hold on to, a source of stability. Setting up temporary work spaces while the site was being cleaned up turned out to be a source of healing, as did photos of the beloved campus when its power was first restored – and of its hundreds of centuries-old trees that had survived.

 

Don’t rush. The reality is that we are still navigating the crisis, and may be for some time. While our culture is intact, our community must rebuild. As we move into this next phase, we’re figuring out day by day what that rebuilding looks like. It’s new homes for employees, new lives as children enroll in different schools, community gatherings to get stronger together. Now, as the fires come under control, we’re filtering decisions based on what our people, and the business, are ready for. Re-starting critical operations is a source of stability; delaying optional ones gives us breathing room to find our new normal.

 

We are not the same company or the same people we were before the fires. But we ARE stronger, more resilient, and more resourceful as a result of having lived through them. #KeysightStrong

Rebuilding Santa Rosa after the fire

There are two types of business assets: tangible and intangible. Your largest set of tangible assets is a major line item in your financials: property, plant and equipment (PP&E), which includes test equipment. These days, the abundant data coming from that test equipment is among your largest intangible assets.

 

Such data delivers tremendous value because it can tell you what’s actually happening inside your operations—if you choose to listen to it. A few example use cases will illustrate what your data can tell you when data analytics (DA) converts it into actionable insights around key performance indicators (KPIs) such as yield, quality, throughput, utilization, and cost.

 

Exploring four specific examples

Many of our customers are in the early stages of applying data analytics for test and measurement (DA for T&M) to their operational data. As Bob Witte pointed out in his post, understanding your existing data helps improve overall knowledge of your business—and this enables you to define the key questions operational data can help you answer.

 

Among the Keysight customers who are actively climbing the maturity curve, many are applying DA for T&M in manufacturing. A majority of these projects align with one of four KPIs: warranty returns, mean time between failures (MTBF), test throughput, or quality and yield.

 

Reduce warranty returns

Let’s suppose one of your product lines is passing a finely tuned battery of tests in manufacturing; however, many units are failing in the field and coming back as warranty returns. I suggest that you use DA for T&M to analyze data from every test in each step of your production process. Any outliers or problematic trends in the data will become visible, and you can correlate measurement results to every device by serial number. This will enable you to capture “walking wounded” devices—those that marginally pass or have been poorly reworked—before they can disappoint your customers.
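
As a rough illustration of that screen – the column names, limits and guard band below are invented, not an actual Keysight schema – a few lines of pandas can flag units that pass a test but sit uncomfortably close to a limit:

```python
import pandas as pd

# Hypothetical per-unit test results keyed by serial number.
results = pd.DataFrame({
    "serial":    ["A001", "A002", "A003", "A004"],
    "test_name": ["tx_power_dbm"] * 4,
    "value":     [23.0, 21.1, 22.4, 21.05],
    "lower":     [21.0] * 4,
    "upper":     [24.0] * 4,
})

# Treat anything that passes but lies within 10% of the spec window of a
# limit as a marginal, "walking wounded" candidate.
band = 0.10 * (results["upper"] - results["lower"])
passed = (results["value"] >= results["lower"]) & (results["value"] <= results["upper"])
near_edge = ((results["value"] - results["lower"]) < band) | ((results["upper"] - results["value"]) < band)
print(results[passed & near_edge][["serial", "test_name", "value"]])  # A002 and A004
```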

 

Improve MTBF

As a regular practice, your staff may perform routine maintenance on crucial test equipment according to a fixed schedule. The upside: greater peace of mind. The downside: hours of instrument downtime, even though it is scheduled; and lost time for the people who are working on equipment that doesn’t actually need attention.

 

I would suggest a more efficient approach: Applying DA for T&M lets you shift from routine maintenance to preventive maintenance based on statistical, data-driven predictions of emerging issues or pending failures. This extends the mean time between failures and reduces downtime. It also leads to greater asset efficiency and utilization.
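
A minimal sketch of that shift, using made-up drift data and an assumed alarm threshold: fit a simple trend to an instrument health metric and project when it will cross the limit, then schedule maintenance accordingly.

```python
import numpy as np

days  = np.array([0, 7, 14, 21, 28, 35], dtype=float)    # days since last calibration
drift = np.array([0.02, 0.05, 0.09, 0.12, 0.16, 0.19])   # reference drift, dB (illustrative)
limit = 0.30                                              # assumed alarm threshold, dB

slope, intercept = np.polyfit(days, drift, 1)             # simple linear trend
days_to_limit = (limit - intercept) / slope
print(f"Projected to reach {limit} dB of drift in about {days_to_limit:.0f} days; "
      "schedule maintenance just before then rather than on a fixed calendar.")
```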

 

Accelerate test throughput

Looking across multiple lines that are manufacturing the same product, you may see significant variability in test times. Using DA for T&M, you can isolate those variations down to the exact test or measurement. This reveals actionable information about differences in test programs, and you can recommend changes that will optimize and accelerate specific tests or procedures. Taking this idea even further, one of our most advanced customers is improving throughput by applying basic machine-learning techniques to real-time data and making on-the-fly adjustments to test programs.
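
Here is a small, hypothetical example of that isolation step – the line names, test steps and timings are invented – comparing per-step durations across two lines building the same product:

```python
import pandas as pd

log = pd.DataFrame({
    "line":      ["L1", "L1", "L1", "L2", "L2", "L2"],
    "test_step": ["boot", "rf_cal", "audio", "boot", "rf_cal", "audio"],
    "seconds":   [4.1, 18.2, 6.0, 4.0, 31.5, 6.2],
})

# Mean duration of each test step on each line, and the gap between lines.
by_step = log.pivot_table(index="test_step", columns="line", values="seconds", aggfunc="mean")
by_step["delta"] = by_step["L2"] - by_step["L1"]
print(by_step.sort_values("delta", ascending=False))  # rf_cal on L2 is the outlier
```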

 

Improve quality and yield

Outsourced manufacturing adds complexity to many of your processes. DA for T&M opens the door to real-time process monitoring and control. For example, it can provide alerts based on variations in measurement data from specific components. This may reveal issues such as dual sourcing of components or accidental (or unauthorized) changes to test limits.
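
A minimal SPC-style sketch of that kind of alert, with illustrative baseline measurements and conventional 3-sigma control limits (the data and component name are made up):

```python
import numpy as np

baseline = np.array([2.01, 1.98, 2.02, 2.00, 1.99, 2.03, 1.97, 2.01])  # known-good lot
mu, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma   # upper/lower control limits

def check(measurement, component=""):
    # Flag readings outside the control limits learned from the baseline.
    if not (lcl <= measurement <= ucl):
        print(f"ALERT: {component} reading {measurement} outside [{lcl:.3f}, {ucl:.3f}]")

check(2.02, "C42")  # within limits, no alert
check(2.12, "C42")  # fires: possible second-source part or changed test limit
```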

 

Taking the next step

Operating without DA for T&M is like driving in an unfamiliar city without a map app: you’ll eventually reach your destination, but you could have gotten there faster and with less frustration. Automated tools, dashboards and reports can guide you along the entire product lifecycle—and this applies to virtually every function, department and team within your operation.

 

Let’s discuss: What tools are you using? Which KPIs are you tracking? What sorts of improvements have you been able to achieve?