One common trait of many engineers is that we are naturally curious and enjoy exercising our brains. Take math, for example: I always found solving math problems on par with solving a puzzle, and might even go so far as to say 'math is fun.' This was in direct contrast to my younger sister, who loathed everything math-related and went on to become a university professor of English literature. Mastering language was her idea of fun.

 

As my career has transitioned from highly technical systems engineering for optical networks 20 years ago to marketing data center solutions at Keysight today, I have gained a profound new respect for language. Surrounded by colleagues with PhDs in physics and engineering, I have come to realize that they sometimes speak a language of their own, filled with abbreviations and acronyms that not everyone understands.

 

Abbreviations and acronyms have been around since the beginning of written language. In our efforts to communicate more quickly with one another, we instinctively use abbreviations and acronyms out of convenience. It seems that every industry has its own language filled with commonly used terms. As I write white papers and content for Data Center Infrastructure solutions, I make a point of spelling out the first instance of any acronym and defining it wherever possible. Over time, I have compiled 14 pages of data center infrastructure terms and their definitions. No matter how technically savvy you are, I think you will find this document handy. Download your copy and let me know if I missed any important terms.

 

For those of you who still enjoy puzzles, here is a fun word search of a dozen common terms I use almost every day related to data center infrastructure solutions. See if you can find them in the puzzle below and look them up in the Data Center Infrastructure Glossary of Terms if you want to learn how we define them here at Keysight.

 

400GE, DCI, FEC, NFV, PAM, QSFP, BERT, DDR, IEEE, NRZ, PCIe, SDN

 

As discussed in recent blogs, 5G’s momentum is unstoppable, with trials by commercial operators due to be in place in several cities globally this year. But despite the technology’s rapid advances and the success of early deployments, there are still some major challenges that need to be overcome. One of the most significant of these is in delivering high-frequency 5G signals to users’ mobile devices reliably, in typical urban environments.

 

As you may have heard, 5G adds new spectrum both below 6 GHz and at mmWave frequencies above 24 GHz. More spectrum is key to delivering the multi-gigabit speeds promised by 5G. The new sub-6 GHz frequencies behave similarly to existing LTE spectrum, but the new mmWave frequencies are notorious for high propagation loss, directivity, and sensitivity to blockage.
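To put some numbers on that propagation loss, here is a minimal sketch using the standard Friis free-space path loss formula, comparing an illustrative sub-6 GHz carrier with an illustrative 28 GHz mmWave carrier over the same 100 m. The frequencies and distance are example values, not figures from any specific deployment:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3.0e8  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative comparison: sub-6 GHz vs. mmWave at the same 100 m distance
for freq_ghz in (2.4, 28.0):
    loss_db = free_space_path_loss_db(100, freq_ghz * 1e9)
    print(f"{freq_ghz:5.1f} GHz over 100 m: {loss_db:.1f} dB free-space loss")
```

At the same distance, the higher carrier frequency alone adds roughly 20·log10(28/2.4) ≈ 21 dB of path loss, before any blockage or absorption is even considered.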

 

mmWave frequencies don’t travel well through solid objects, such as buildings, car bodywork, or even our own bodies. In practice, this means that a user could potentially lose a 5G mmWave signal simply by holding or using their device in the ‘wrong’ way.

 

People problems

You may recall the 'Antennagate' issue that Apple faced back in 2010, in which its iPhone 4 would lose signal when held by the lower-left corner. This forced Apple to give away free bumper cases so that users' hands wouldn't touch the edge of the phone, where the antenna was positioned. It's a problem that can affect any handheld device, because skin and bone are very effective absorbers of radio waves. However, the mobile industry can't afford to have similar issues affect an entire generation of 5G phones and tablets.

 

To compound this issue, it’s also hard to predict what will happen to 5G mmWave signals when the receiver, the transmitter, or obstacles between them are moving relative to each other, such as in a busy city street. Earlier this year, Keysight and NTT DOCOMO cooperated on a channel sounding study at mmWave frequencies, investigating signal propagation at 67 GHz in urban areas with crowds of people.

 

The research found that the radar cross section of human bodies varies randomly over a range of roughly 20 dB – a significant variance. It also concluded that ‘the effects of shadowing and scattering of radio waves by human bodies on propagation channels cannot be ignored.’
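For context, decibels are a logarithmic power ratio, so a 20 dB spread in radar cross section corresponds to a factor of 100 between the weakest and strongest scattering cases. A one-line check:

```python
def db_to_linear(db: float) -> float:
    """Convert a decibel power ratio to a linear power ratio."""
    return 10 ** (db / 10)

print(db_to_linear(20))  # 100.0 -- a 20 dB spread spans a 100x range in scattered power
```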

 

Sounding out

Given this, it's essential to conduct channel sounding tests in real-world environments rather than just in the lab, simply because the complexities and constant changes of real-world usage cannot easily be replicated. For example, indoor channels behave differently from outdoor channels. Even factors such as the number of people in a room, or whether its windows are single-paned or double-paned, will influence signal behavior.

 

Outdoor environments add a vast number of unpredictable complications. People, vehicles, foliage and even rain or snow will affect 5G mmWave signals, introducing free-space path losses, reflection and diffraction, Doppler shifts, and more.

 

Further variables include the base station's antenna gain, pattern, and direction; the behavior of the channel itself; and the mobile device's antenna gain, pattern, and direction. Once the base station's and user device's antenna beams are aligned, they need to maintain that connection as the device moves through space or changes orientation. The user device may also need to switch to another base station, repeating the beam-forming and beam-steering cycle.
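As a rough illustration of how these variables combine, the sketch below adds transmit power and the two antenna gains and subtracts path loss to estimate received power. The numbers are purely illustrative placeholders, not values from the DOCOMO study or any Keysight specification:

```python
def received_power_dbm(tx_power_dbm: float, tx_gain_dbi: float,
                       rx_gain_dbi: float, path_loss_db: float) -> float:
    """Simple link budget: Pr = Pt + Gt + Gr - PL (all terms in dB units)."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db

# Illustrative mmWave numbers: 30 dBm transmit power, 25 dBi base station beam,
# 10 dBi device beam, and roughly 110 dB of path loss on an urban street.
print(received_power_dbm(30, 25, 10, 110))  # -45 (dBm) at the receiver
```

Lose either beam's gain, for example when a hand or a passer-by blocks the device's antenna array, and the received power drops by tens of dB, which is why beam tracking and base station switching matter so much.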

 

The result of all this is clear: exhaustive real-world testing of mmWave 5G base stations and user devices is critical to 5G’s commercial success. And at Keysight, we’re accelerating our testing capabilities to help the wider mobile ecosystem gain insights and advance their innovations. Find out more about our 5G testing methodologies and system solutions.

Next-generation optical transceivers are expected to use less power per gigabit, cost less per gigabit, and operate at four times the speed of 100GE transceivers. It is rather puzzling how 400GE transceivers will meet all these expectations. In fact, the move from 100GE to 400GE in the data center is revolutionary, not evolutionary.

 

It took 15 years for data centers to evolve from 10GE to 100GE. Data centers began implementing 100GE in 2014, yet full build-outs only became cost-effective over the last couple of years thanks to the availability of more affordable optical transceiver modules. Emerging technologies enabled by fifth generation wireless (5G), such as artificial intelligence (AI), virtual reality (VR), Internet of Things (IoT), and autonomous vehicles, will generate explosive amounts of data in the network. Current 100GE speeds available in data centers will not be able to support the speeds and data processing requirements needed by these technologies. As a result, data center operators are looking to evolve their networks from 100GE to 400GE.

 

There are three key challenges that need to be addressed to make the transition from 100GE to 400GE as smooth as possible:

 

Challenge 1: Increase Channel Capacity

According to the Shannon-Hartley theorem, there is a theoretical maximum rate at which error-free data can be carried over a channel of a given bandwidth in the presence of noise. Therefore, to reach 400GE speeds, either the channel bandwidth or the number of signal levels must be increased to raise the channel capacity.
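For reference, the Shannon-Hartley limit is C = B·log2(1 + S/N), where B is the channel bandwidth and S/N the signal-to-noise ratio. The sketch below shows how the two levers just mentioned, wider bandwidth and a higher usable signal-to-noise ratio (which is what more signal levels demand), raise the achievable rate. The bandwidth and SNR figures are purely illustrative, not 400GE specifications:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)

# Purely illustrative numbers: widening the channel or raising SNR both lift capacity.
print(shannon_capacity_gbps(30, 15))  # ~151 Gb/s
print(shannon_capacity_gbps(60, 15))  # ~302 Gb/s with double the bandwidth
print(shannon_capacity_gbps(30, 25))  # ~249 Gb/s with 10 dB better SNR
```

In practice, 400GE designs pull both levers: wider, faster electrical and optical lanes, and PAM4 signaling that carries two bits per symbol where NRZ carries one.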

 

Challenge 2: Guarantee Quality & Interoperability

As new 400GE transceiver designs transition from simulation to first prototype hardware, engineers face the challenging task of developing a thorough, yet efficient, test plan. Once deployed in data centers, marginally performing transceivers can bring down a network link, lowering the overall efficiency of the data center as switches and routers re-route traffic around the faulty link. The cost associated with transceivers that fail after deployment in the data center is enormous. Since large hyperscale data centers can house more than 100,000 transceivers, even a small failure rate of one-tenth of one percent would equate to 100 faulty links.
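The arithmetic behind that last estimate is worth making explicit, using the figures from the paragraph above:

```python
transceivers_deployed = 100_000  # large hyperscale data center, figure from above
failure_rate = 0.001             # one-tenth of one percent
faulty_links = transceivers_deployed * failure_rate
print(int(faulty_links))         # 100 faulty links to find and route around
```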

 

Challenge 3: Reduce Test Time, Reduce Cost

Keeping the cost of optical transceivers low is a high priority for data center operators. To be competitive, transceiver manufacturers must find ways to drive down production costs. As with most new technologies, the price of next-generation optical transceivers tends to drop sharply after introduction to the market, and development costs amortize as volume ramps. Test time contributes significantly to overall transceiver cost. Therefore, more efficient testing across the broad range of transceiver data rates accelerates innovation and lowers cost.

 

The Next Test Challenge

Many data center operators are moving to virtualized networks using software-defined networking (SDN) and network functions virtualization (NFV). They need full network test of Layers 2-7, including SDN/NFV validation and traffic loading, to ensure that traffic flows through the virtualized network as expected. This is the next challenge data center operators will need to overcome.

 

400GE in data centers will soon become a reality.  Find the solutions to address these 400GE transceiver test challenges here.

A good idea is never lost, as Thomas Edison said. At the turn of the 20th century, nearly 30% of all cars manufactured in the US were electric-powered, but by the 1920s they had effectively disappeared, thanks to cheap gas and mass production of conventional cars. But fast-forward 100 years and electric cars are back with a vengeance, in response to rising fuel prices and Government-mandated emissions controls.

 

The number of pure electric and plug-in hybrid cars on the world’s roads passed the 3-million mark in 2017, and will accelerate to over 5 million by the end of 2018, according to analyst EV-Volumes. This rapid growth in production volumes is being matched by equally rapid advances in powertrains, power electronics and battery technologies.

 

These advances also highlight the increasingly critical role played by a previously unsung vehicle component: the DC/DC converter. Irrespective of whether a car is a pure EV or a hybrid, DC/DC voltage conversion is at the heart of its power electronics systems. It manages the energy exchange between the vehicle's high-voltage (HV) bus, which serves the main HV battery and the electric traction system that drives the motors and generators, and the traditional 12 V power bus, from which most of the car's electrical systems (in-car entertainment, navigation, heating, lights, and more) are powered.
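To see why the HV bus exists and why the converter that bridges it to the 12 V bus is so critical, consider the currents involved. For a given power level, current scales inversely with bus voltage (I = P / V). The sketch below uses illustrative numbers only, not figures from any specific vehicle:

```python
def bus_current_amps(power_w: float, bus_voltage_v: float) -> float:
    """Current drawn from a DC bus at a given power level: I = P / V."""
    return power_w / bus_voltage_v

# Illustrative: delivering 2.4 kW of electrical load
print(bus_current_amps(2400, 400))  # 6 A on a 400 V HV bus
print(bus_current_amps(2400, 12))   # 200 A on the traditional 12 V bus
```

The converter has to bridge these two voltage domains continuously, and in both directions, which is why its efficiency and transient behavior receive so much design and test attention.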

 

The converter is fundamental to the overall efficiency and performance of electric vehicles (EVs) and hybrid vehicles, for several reasons:

  • The flow of power between the HV and 12 V buses is bi-directional and can change in milliseconds according to the demands of the vehicle's systems and driver input (for example, when transitioning from acceleration to regeneration), so large power flows have to be converted seamlessly and safely for smooth, predictable vehicle operation.
  • Because of the high electrical loads they carry, most DC/DC converters are water-cooled, which adds weight and cost to the vehicle. There is strong pressure to simplify this, to minimize cooling requirements, cut weight, and improve efficiency.
  • The converter must be robust enough to continue to operate efficiently across a wide range of environmental and driving conditions.

 

Given the pace of development of vehicles, and the constant pressure to keep costs down across the test and development lifecycle for converters, efficient simulation, design, validation, and manufacturing test of the converters is essential. But this, in turn, presents its own challenges.


Converter testing challenges

The design and test obstacles result from many factors:

  • Bi-directional test: Testing bi-directional power flow demands equipment that can both source power to and sink power from the converter. Conventional test methods using external circuits and multiple instruments typically don't allow smooth transitions between sourcing and sinking, and give inaccurate simulations of operating conditions. They also lead to excess heat build-up in the test environment.
  • New power semiconductor technology: Designers are starting to use wide-bandgap (WBG) devices. While these offer better power efficiency and the ability to handle higher voltages and temperatures than conventional silicon devices, their use complicates the simulation and design of DC/DC converters. Designers need to evaluate each device to determine whether it will work in their designs.
  • Reliability and safety concerns: Using new semiconductors means extra validation and reliability testing is needed to ensure converters will last in harsh operating conditions. Also, given the power levels involved, test operators must be protected; this requires special safety mechanisms in manufacturing, including redundant systems that do not expose the operator to high voltages if a failure occurs.
  • Maximizing efficiency: Because of the many operational and environmental influences on efficiency, it is difficult to simulate all of them to evaluate the real-world, whole-system operation of the converter. Also, measuring small percentage changes in efficiency demands instruments with high dynamic range.


Converting challenges into solutions

To address these design and test issues, Keysight has developed and introduced new, innovative approaches that help manufacturers accelerate their programs. For example, high-frequency-enabled simulators can accurately simulate the behavior of new WBG semiconductors, helping to improve the efficiency of DC/DC converter designs.

 

New, fully integrated source/sink solutions enable more accurate simulation and testing of DC/DC converters' transitions in power flow direction during acceleration and regeneration. These test solutions can also return clean power to the test lab's AC mains, reducing regenerative heat and dramatically cutting the need for costly HVAC equipment in the test system environment.

 

Find out more about how Keysight enables more accurate and efficient testing of the systems at the heart of the electric and hybrid vehicle revolution here.

As a technology corporation, we have many tables to consider claiming a seat at. Economic policy boards, 3GPP standards bodies, other corporate boards, chambers of commerce – the list is endless. But there is one table that I've been at for some time and that I think every company should pull up a seat to: the energy conservation table.


In May 2018, I had the opportunity to participate in a panel at the ASPIRE Global Forum in Mountain View, California, on how the global business community can work together to address the energy challenges that exist in the world today. With the U.S. Energy Information Administration’s (EIA’s) projection that energy consumption will increase by 28% by 2040, there is no better time than now for global corporations to influence policies and take steps to address the impact of global energy and natural resource needs on the planet. At Keysight, we expect to recognize $2 million in cost avoidance, 10% energy conservation and 15% water conservation by the end of fiscal year 2020 (see our latest 2017 CSR Report here).

 

At the forum, I was fortunate enough to share the stage with high-profile speakers including Jeff Immelt, former CEO of General Electric, and General Colin Powell, former Secretary of State. I spoke about microgrids and their importance for large corporations, especially as they relate to energy redundancy. We are currently in the process of installing a 4.3-megawatt fuel cell at Keysight's corporate headquarters in Santa Rosa, which runs on natural gas and produces almost zero emissions.

 

This isn’t the first time I’ve spoken about Keysight’s investment in sustainability and energy efficiency. Keysight has a strong commitment to minimizing our carbon footprint and we have taken actions for several years to conserve natural resources and improve efficiency.

 

In 2008, we installed a 1-megawatt solar array that shifts with the movement of the sun – at the time, it was the largest solar array in the Northern California Bay Area region. We were also early adopters in providing free electric vehicle charging stations (also powered by the sun) for our employees. We have an integrated water reclamation system that uses recycled water in our landscaping. And in 2017, we installed one million square feet of energy-efficient LED lighting with smart sensors driven by a software backbone at our Santa Rosa, Hachioji, Penang, and Colorado Springs campuses.

 

Sure, we do this because we have a strong belief that large businesses like Keysight have a role in pushing initiatives like these forward. But it also makes good business sense. Our suppliers, our competitors, and our communities all take notice when we lead.

 

But even more importantly – our customers, investors, and employees take notice. Leading in intelligent sustainability practices is not only the right thing to do, it’s a business imperative as Hamish Gray, corporate services vice president, recently noted. Companies throughout the world are setting standards and when we illustrate our leadership not just in enabling technology but in sustainability, we win business. By demonstrating leading-edge sustainability practices – whether through our processes, tools or systems – we gain credibility and, ultimately, market share.

 

Energy and natural resource conservation is a journey: Where to begin?

The investments we've made in sustainability and efficiency were spread out over time. It would have been overwhelming and cost-prohibitive to do all of these things at once. Companies that are looking to address these opportunities may wonder where to begin.

 

Companies should start by evaluating where they are today and aim for some quick wins. Setting big, lofty goals is great, but simple policy changes can really inspire people and begin compounding quickly. Consider turning down the thermostat on the weekends, or start evaluating your own suppliers to see whether their practices fit with your goals to be environmentally friendly.

 

I also suggest that you celebrate your small wins and make them visible. Touting your sustainability metrics can help build on that momentum and gain some positive attention, and may even lead to partnerships with suppliers that ultimately make capital investments down the line easier.

 

Also, consider the role of energy and natural resource conservation as part of your overall corporate citizenship efforts. At Keysight, we implemented a six-step journey to an evolved CSR program model that helped gain traction not only in the energy and water conservation space but across our citizenship efforts worldwide.

 

It’s never too late to pull up a chair for energy and natural resource conservation. Leading in sustainability practices not only looks good, but it feels good knowing we’re on the right side of history.

It has never been more important for businesses to consider the impact their operations have on the environment. For example, according to the U.S. Environmental Protection Agency, 30% of total U.S. greenhouse gas emissions came from the industrial sector in 2016 – the most of any sector.

 

As a global business, we need to take a holistic approach to sustainability. We must not only consider carefully the direct environmental impact of the materials and processes we use; we must also manage our operations in the most energy-efficient and sustainable ways possible. We need to look outwards, considering how we use and replace natural resources, as well as inwards to minimize the emissions and waste we produce. Our own operations, and those of our suppliers and partners, are linked in a continuous chain of environmental responsibility.

 

During 2017, we worked to clarify what it means to build and maintain a better planet through our CSR programs, and developed a set of key environmental impact goals to achieve by the end of 2020. We aimed to recognize $2 million in cost avoidance, 10% energy conservation, and 15% water conservation compared to our fiscal year 2015 baseline.

 

Accelerating sustainability

To meet these goals, we have put in place multiple systems and initiatives across our business. A key example is the one megawatt, three-acre solar electricity system at our headquarters in Santa Rosa, which reduces our carbon footprint by using renewable energy. It not only provides 5% of our site’s electrical needs, but also powers more than 30 vehicle charging stations, so our employees can charge their electric vehicles while they are at work.

 

We are proud to run an ISO 14001:2015-certified Environmental Management System which continuously reduces adverse environmental impacts from our operations and drives ongoing improvements in our environmental performance. This compliance framework applies to the entire product development lifecycle, from initial design and development, throughout production and delivery, to refurbishment and support.

 

We also use the General Specification for the Environment (GSE) directive, which restricts hazardous substances in the materials and components we use in our products. To address the environmental impact of products at end of life, we have developed a remarketing solutions business. This operation recovers and repurposes older instruments for resale, and helps us to reduce the number of products that end up in landfills. Further program options help customers safely dispose of or recycle used instrumentation.

 

These efforts are already bearing fruit

Since November 2014, we have achieved energy and water conservation of 4.69% and 12.44%, respectively. This has resulted in approximately $850,000 in cost avoidance. We also don't believe in standing still: our excellent results in water conservation last year led us to increase our water conservation goal from the original 10% target to 15%. Crucially, these efforts have had no material negative impact on our profit and loss, or on institutional investment levels.

 

It has always been important to us that our activities in running a successful, profitable and innovative business go hand-in-hand with sustainability. As we help people and organizations globally to solve problems by accelerating innovation, we’re ensuring that this has the minimum possible impact on the planet’s ecosystem. Find out more about our progress towards our 2020 environmental CSR goals by downloading our 2017 CSR Report.

The business maxim “If you can’t measure it, you can’t improve it” is also a pretty good definition of the value of test when developing new products and technologies. That’s why for many companies, test and measurement equipment is often one of the biggest capital expenditures on their balance sheets.

However, this means that test departments are themselves increasingly subject to scrutiny and measurement. They’re being pressured to accelerate their processes and deliver results faster to speed up development cycles and meet market demand. At the same time, the business is squeezing test budgets to minimize capital and operating expenditure and maximize the return on existing test investments.

 

To meet these conflicting demands, test departments need in-depth visibility into what’s happening with the test equipment on their benches. It’s no longer sustainable for instrument utilization to be tracked and recorded manually by staff using paper or spreadsheet-based processes.

 

Who’s using my equipment?

Departments need to know where their critical assets are, who’s using them, and how often. Is costly equipment sitting idle under benches, or only used infrequently? It’s also critical to understand the health of instruments: do they need recalibrating, or are they operating in environments that could affect their accuracy or lead to premature failure?

 

Without these insights, test departments can’t measure and improve the efficiency of their own processes – and can’t make the best decisions when scoping out the resourcing and equipment needs for upcoming test projects. So, what’s needed to give departments the visibility they need and enable them to gain full control of all their test assets, to maximize their productivity and ROI? There are three fundamental processes involved:

 

1. Asset tracking and control

It’s essential to know what test equipment is available to teams, where it is located, and who is currently controlling its usage. This makes it easier to locate instruments when they are needed for a test, or for calibration or maintenance. The benefits of asset tracking include time savings during audit processes, an updated equipment inventory and fewer lost assets. Standard asset tracking tools can provide access to this data.

 

2. Assessing instrument utilization and health

As well as knowing an instrument's location, it's important to have specific details on how it is being utilized. This means not just whether it's switched on, but having access to detailed real-time application logging that shows precisely what it is being used for. This telemetry also shows the health of the asset (such as operating voltage and temperature), which can indicate the early signs of a problem, or when maintenance is due – helping to avoid potentially costly downtime from premature failure. It also helps to identify assets that are not in regular use, and those which may be surplus to requirements because they are no longer adding productive value to the test department. As a result, decisions can be made to trade in or sell under-utilized equipment.
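As a purely hypothetical sketch of the kind of record and threshold checks described above (this is not Keysight's actual telemetry format or any real product API), an asset snapshot and a simple flagging rule might look like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AssetTelemetry:
    """Hypothetical snapshot of one instrument's location, usage, and health."""
    asset_id: str
    location: str
    hours_used_last_90_days: float
    operating_temp_c: float
    next_calibration_due: date

def flag_asset(t: AssetTelemetry, max_temp_c: float = 40.0,
               min_hours: float = 20.0) -> list[str]:
    """Return simple flags: harsh environment, under-utilization, or overdue calibration."""
    flags = []
    if t.operating_temp_c > max_temp_c:
        flags.append("check operating environment")
    if t.hours_used_last_90_days < min_hours:
        flags.append("candidate for loan pool or trade-in")
    if t.next_calibration_due <= date.today():
        flags.append("calibration overdue")
    return flags

# Example: an oscilloscope that has barely been used this quarter
scope = AssetTelemetry(
    asset_id="SCOPE-0042",
    location="Lab 3, Bench 7",
    hours_used_last_90_days=8.0,
    operating_temp_c=23.5,
    next_calibration_due=date.today() + timedelta(days=180),
)
print(flag_asset(scope))  # ['candidate for loan pool or trade-in']
```

Even a simple rule set like this makes it obvious which assets are healthy, which are at risk, and which are candidates for the shared loan pool discussed next.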

 

3. Optimizing asset use

When the location and utilization of assets is being managed effectively, a loan pool of instruments can be created to enable scheduled sharing across groups of users. This cuts costs by avoiding unnecessary new equipment purchases, and helps to maximize usage of existing assets.

 

Making the most of your test assets

When applied using an integrated approach, these management processes enable organizations to do and achieve more with their existing test assets, while saving on future CapEx and OpEx investments. And to help organizations put these processes in place, we recently introduced our Test Asset Optimization Services, the industry’s first integrated solution to address the complete asset management needs for all test equipment used in R&D and manufacturing.

 

With our integrated suite of services, organizations’ test departments can:

  1. See all their test assets from multiple equipment vendors, track them across multiple labs, locations, and users, and manage their compliance. This reduces the time spent on physical inventory counts and improves the productivity of engineering teams by giving fast access to the right assets.
  2. Know the physical condition and true utilization of test equipment through monitoring, to increase asset usage, decrease cost of test, and identify unhealthy instruments before a bigger problem occurs.
  3. Optimize use of existing equipment across the organization with a central loan pool. This assists with smarter procurement decisions such as the need to purchase or rent new instruments, and helps customers to realize the residual market value of older equipment through trade-ins or upgrades.


Integrated Test Asset Optimization Services ensure that teams always have access to the test equipment they need, at the right time. They also enable organizations to unlock powerful, actionable insights into asset usage and ROI that they’ve never previously had access to, helping to boost test efficiency and agility. Find out more about the services here.