
Insights Unlocked


It’s official: Keysight Technologies is now certified as a Great Place to Work by the leading global authority on high-trust, high-performance workplace cultures. Great Place to Work® surveyed our employees, who said that they enjoy a great atmosphere, great challenges, and great rewards. As a result, 92% are proud to tell others they work at Keysight, which helps explain why over half of our employees have been with us for more than 10 years. So what is it that makes Keysight a great place to work?

 

Put simply, it’s a combination of fantastic people and the innovative programs that Keysight runs. This combination gives all our employees a powerful sense of being part of a culture that is truly unique and special. The roots of this culture go back 80 years, to when Bill Hewlett and David Packard founded their company in a Palo Alto garage on business principles that became known as the “HP Way.” We strive to honor that culture by maintaining a best-in-class work environment that fosters respect for individuals, their ideas, and contributions.

 

Getting on board

This starts even before a new employee joins Keysight. Instead of using external recruiters, we staff our college recruiting teams with employees who do the jobs that candidates are applying for. This gives candidates the opportunity to talk with people they can relate to. During the interview process, we bring candidates on-site to meet with team members for a realistic job preview, including site tours, lunch in the cafeteria, and lab visits that give a clear sense of the work environment and culture.

 

And when new employees join Keysight, we ensure they have a front row seat to see our culture in action. Leaders can be seen and heard helping employees directly. These everyday interactions bring to life Keysight's policy and practice of treating all employees with dignity, courtesy, and respect. They also create an environment where ideas are shared freely at all levels, helping to instill our culture through positive influence.  

 

Inspiration and learning

The positive influences extend through initiatives such as mentoring and “buddy” programs, training, stretch assignments, and special projects. Managers explore the interests, passions, and strengths of new employees and connect them to a team to best utilize those strengths. We also believe deeply in personal and professional development, which is why we offer resources such as Keysight University, which enables timely and personalized employee-driven education based on individual needs and interests.

 

Real help in times of real need

In early October 2017, the devastating wildfires in Santa Rosa, California impacted our headquarters and the homes of more than 1,500 employees. Led by our CEO, Ron Nersesian, we immediately put in place a comprehensive crisis management plan. We contacted each employee to check on them and their families, offer support, and assure them they would receive full pay while they began to pull their lives back together. We set up a disaster recovery center and gave direct financial assistance to those who lost their homes or were displaced.

 

To help employees get back to their normal lives, we set up an Employee Relief Fund, to which Keysight employees around the world have donated more than $1 million, and a shared vacation bank that allowed employees to donate their time off to those impacted by the fires. We also offered professional support and counseling to affected employees in the aftermath of the fires.

 

Supporting diversity

Keysight is committed to diversity and has set up programs to attract the most diverse pool of candidates, along with outreach efforts to empower under-represented groups. We are truly an equal opportunity employer, and our personnel policies and practices are built on the principle that all employees are treated with dignity and respect. A particular focus is investing in the success and advancement of female employees, helping to develop the next generation of engineers. That’s why we foster long-standing relationships with organizations that empower and inspire women — such as the Society of Women Engineers (SWE). We also run programs in schools and universities to introduce women to engineering and technology subjects.

 

Driving positive change

At Keysight, we believe the purpose of corporate social responsibility is to do good, not just to look good. We have a long tradition of outstanding corporate citizenship, and we’re proud of the role we’ve played to enrich the many communities where we operate. We continuously assess our impact on the environment, reducing our emissions, and conserving more energy and water every year, as outlined in our recent blog. A key part of our CSR programs is also actively supporting employees in working on philanthropic and environmental projects, and education-based initiatives, to help them reach their full potential both inside and outside of the workplace.

 

These are just a few of the many reasons why Keysight is a great place to work. If you’re interested in finding out more about becoming part of our valued team, why not start your journey here?

As a millennial, I want to work for a company that meets three top criteria:
1. Offers innovative work opportunities
2. Has an ethical and prestigious business reputation
3. Maintains realistic, environmentally and socially responsible practices.

 

In searching for an internship, I reviewed several companies’ 10-Ks, websites, and blogs to find the right company that aligned with my values. Eventually, I landed my summer position at Keysight.

 

What first attracted me to Keysight was its business involvement with cutting-edge technologies like 5G, Internet of Things (IoT), and radar. And while I was not familiar with the Keysight name, I did recognize its heritage from the well-reputed HP and Agilent. Check off two of my top three criteria right there!

 

Then, in talking with Keysight about a specific internship opportunity on its corporate social responsibility (CSR) team, I knew that Keysight had a vested interest in the practices I valued – check off my third criterion! My choice to accept Keysight’s internship offer was easy.

 

Fast forward three months – as I enter the final week of my internship, I have been reflecting upon my time with Keysight, its CSR program, and the valuable experiences I’ve had. There has not been a dull moment!

 

Keysight employees exemplify the company’s values

Tasked with researching and developing global response materials for the environmental and social responsibility area, I was able to engage with Keysight employees across several departments and around the world. My research ranged from document investigations to individual conversations with employees, including the company’s senior executive staff. It was a unique experience that showed me the diversity of the company, as well as how strongly Keysight employees and their work practices embody the company’s values and CSR vision.

 

The company walks the talk on CSR investments

Additionally, I found Keysight’s environmental and sustainability practices impressive. Locations across the globe actively invest in low-emission building amenities, including some leading-edge installations, to offer employees a comfortable, beautiful work environment. The installations are both practical and innovative. My favorite Keysight initiatives have been the switch to natural gas for generating electricity, goats that maintain fire breaks in the landscape, and native grasses planted to replace varieties that consume more energy and water.

 

Specifically related to CSR, I’ve been impressed with Keysight’s strong support of its local communities around the world, especially in STEM education at all levels and in crisis support and recovery. I had first-hand experience, being located at Keysight’s headquarters, where last year’s wildfires damaged the facility. In the face of this crisis, Keysight has remained stable, illustrating the resilience of its employees and business support infrastructure, all while supporting the local community.

 

Taking a piece of CSR learnings with me

As I leave my internship this week and head back to complete my MBA at Penn State University, I’ll take with me a new perspective that I learned at Keysight. Work-life balance is important not only for morale, but also for local communities. Rewards come from building relationships within those communities: collaboration grows the local economy and produces innovative solutions. Not to mention, it’s enjoyable to step away from the office! Being a person of action, I plan to organize a CSR-related event when I return to school. After all, my fellow millennial classmates will welcome the opportunity to give back.

 

Diana at her internship at Keysight

 

See the Keysight CSR web page for more information. Diana returns to Penn State University in mid-August to complete her MBA. If you would like to contact her, please message her directly on LinkedIn.

One Step Away from 5G

Posted by Marie Hattar, Aug 10, 2018

One of the challenges with introducing a revolutionary new technology is that people want to be able to use it right away. They don’t want to have to wait to enjoy its promised benefits. 5G is a good example: it’s been publicly showcased at scale, and initial rollouts are under way. But it will be 18 months or more before services start on the path to ubiquity. And in the meantime, customers are demanding ever-faster mobile data connections.

 

Ericsson’s Mobility Report from June 2018 stated that monthly data traffic per mobile device in the U.S. will increase by nearly 7x, from 7.2 GB today to 49 GB in 2023. This upward trend has been steady since 2011, representing a 43% compound annual growth rate in traffic per smartphone subscriber. Further, cellular networks are becoming the central platform for connecting IoT devices and enabling M2M communications. The result is that service providers have to satisfy these growing, seemingly insatiable demands now, while they continue to build, test, and deploy their 5G networks.
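
As a quick back-of-the-envelope check of those numbers (a rough sketch; the 43% CAGR cited above is Ericsson’s historical figure since 2011, which is separate from this forecast window):

```python
# Sanity-check the Ericsson forecast cited above: 7.2 GB/month today (2018)
# growing to 49 GB/month in 2023.
today_gb, forecast_gb, years = 7.2, 49.0, 5

growth_factor = forecast_gb / today_gb           # ~6.8x, i.e. "nearly 7x"
implied_cagr = growth_factor ** (1 / years) - 1  # ~47% per year over the forecast window

print(f"growth factor: {growth_factor:.1f}x")    # 6.8x
print(f"implied CAGR:  {implied_cagr:.0%}")      # 47%
```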

 

The good news is that implementing new capabilities of 4G LTE can help to meet those demands. 4G LTE-Advanced Pro (also known as 4.9G) can satisfy subscribers’ appetite for data and connectivity, as it offers incremental improvements to existing 4G networks. Put simply, 4.9G supercharges conventional 4G LTE with carrier aggregation and large antenna arrays, meaning that 4.9G-enabled sites can deliver greater capacity and much faster performance on compatible devices. This offers a stepping-stone to full 5G services and gives providers a testing ground for full 5G applications and business models, without the huge upfront investments.

 

However, evolving an existing LTE network to 4.9G does present challenges that service providers need to overcome to maximize their capabilities. These challenges fall into five areas:

 

Making networks gigabit-ready

4.9G should offer gigabit speeds, to help bridge the performance gap to 5G. To take full advantage of 4.9G technology, service providers should focus on optimizing spectrum usage and network capacity by using carrier aggregation and rolling out pre-5G FD-MIMO. They should also work with network equipment manufacturers (NEMs) on an upgrade path that supports these areas, and invest in platforms with software-defined capabilities to future-proof network upgrades.

 

Network virtualization

Migrating to a cloud-based, virtual evolved packet core (vEPC) platform is an initial step that allows more efficient deployment of network resources. Many service providers are doing this by implementing network functions virtualization (NFV) to enable a more flexible and adaptable 5G-ready platform. It’s also important to evolve the RAN to maximize 4.9G performance. By implementing LTE-A and LTE-A Pro features such as FD-MIMO and NB-IoT, 4G LTE networks can offer more capacity, lower latency, more connections, and a more flexible architecture.

 

Both the RAN and the core will need to be extensively tested to determine how the infrastructure handles the massive amounts of data driven by 4.9G technologies, with minimal latency. This is especially important in networks supporting real-time traffic and mission-critical applications.

 

Gaining experience in new business models

5G will support a broad range of use cases, with the focus on enhanced mobile broadband, ultra-reliable low-latency communications, and massive M2M connectivity. Implementing 4.9G will take service providers a step closer to 5G, using current 4G technologies such as Cat-M1 and NB-IoT. These offer experience in operating the type of network that can support wide-area, low-bandwidth, low-power services.

 

This is an opportunity to build a customer base, explore potential use cases in vertical industries, and engage in large scale IoT initiatives such as smart grid/smart city prior to the release of 5G standards and technologies. 4.9G means providers can pilot business models and develop a 5G-like ecosystem ahead of the curve.

 

Efficient, flexible spectrum use

Given the huge costs of spectrum, a 4.9G strategy must be planned out, including 5G coexistence. Service providers’ spectrum holdings should be used now to gain experience with technologies such as MIMO, carrier aggregation, unlicensed shared spectrum using license-assisted access (LAA), and small cells. This gives more options for repurposing existing spectrum resources and helps to develop a long-term plan that supports coexistence of 4.9G and 5G.

 

Investing in future-proof infrastructure

4.9G shares many performance attributes with 5G and requires similar infrastructure hardware upgrades when considering sub-6 GHz frequency bands. Upgrade considerations include antenna modernization and cell densification, as well as deploying new cell sites. Underneath all of this is the requirement for a reliable, secure backhaul network, which means building out fiber networks. Given the costs of doing this, from securing leases and regulatory approval, through to construction and commissioning, an end-to-end test strategy to validate the performance of the network as a whole is essential.

 

Evolving to 4.9G by implementing LTE-Advanced Pro will help to meet customers’ data demands, as technology and standards advance towards 5G. The service providers who invest the time and resources in understanding the implementation challenges now, and in comprehensive testing of how their systems perform as they evolve, will be well positioned to lead the market.

 

Find out more by reading our new white paper on maximizing 4G LTE networks.

 

In my short time with Keysight at Santa Rosa, I’ve been amazed by its attention to sustainable environmental practices. Signs dot the lawn indicating “Recycled (non-potable) water used here.” Solar panels cover a huge portion of the employee parking: three acres, to be exact. Vegetable gardens located near the office buildings are cultivated by employees. Compost is collected in the employee break areas.

 

During an employee 5K walk on the 200-acre property, I discovered that half of the landscape is covered with native trees and grasses on rolling hills in their natural habitat. And, Keysight has a recycling center on its property!

 

With all these sustainable amenities in place, I should not have been surprised by some unusual practices too.

 

Walking out of the office one late afternoon I heard an unusual sound: hundreds of goats grazing near the parking lot! Not what I was expecting when I left for the day. Later that week the goats appeared again, only this time they grazed closer to the office buildings and near the vegetable gardens. Employees interacted with the goats by taking pictures and even petting the young kids.

 

As I watched, I could see that the goats were effective in clearing the dried grass surrounding the property, and I wondered whether they were doing even more for the health of the landscape. I contacted Tricia Burt, Keysight’s site manager, to find out more about the herd. Tricia told me that Keysight uses goat and sheep herds annually to maintain the fire breaks around the property, a practice it began nine years ago. Goats are preferred because they eat invasive plants, like Scotch broom, and lower tree limbs, which keeps the landscape maintained to code. Tricia added that locals reported areas where the goats had grazed the grass and underbrush the previous year were less impacted by the 2017 fires.

 

I was not in Santa Rosa during the fires, but I still see the environmental and emotional impact left behind. So, while these goats have been entertaining to watch, they also play a crucial role in maintaining the natural landscape. And in a sustainable way that does not interrupt the natural habitat surrounding Keysight Santa Rosa.

What are some surprising sustainability practices that your company has done?

If my 35-year career in the tech industry has taught me anything, it is that we need to educate children in science, technology, engineering, and math (STEM) skills now to prepare them for our future. If we miss this opportunity, we may see these fields start to slip behind their potential in the not-too-distant future; indeed, we are already on the cusp of such an impact. The Smithsonian Science Education Center has projected that 2.4 million STEM jobs will go unfilled this year. That statistic alone should be a wake-up call for the tech industry to not only support but actively push opportunities for STEM development within school systems worldwide: if not for our collective future, then for the future of their businesses.

 

The good news is that I have seen a lot of mobilization in this area in recent years. Many organizations and companies, Keysight included, have programs focused on STEM education and outreach to school-aged students. As an example, my colleague Rice Williams recently discussed the Keysight After School program in his post Making STEM Visible to Children through Invisible Forces – Figuratively and Literally! Such efforts provide hands-on experiences for children in a rich learning environment supported by individuals active in STEM fields. It is great hearing about, and actively participating in, such programs. But that is not what I want to talk about today. Today, I want to talk about a different angle on STEM education: one that supports the same goals through a different, yet critically important, avenue … the teachers.

 

Teachers are on the front line of educating future engineers and scientists

While teachers have many tools to choose from, for many of them private industry is a mystery. Some have never worked outside a classroom and have little insight into the needs and expectations of the companies that will ultimately employ their students. That is where companies can help: educating our educators on what technology companies do, who they hire, how they operate, and the skills their future workforce will need. Ultimately, such engagements help teachers develop learning plans that align with industry needs while answering the age-old question from students: “why do I need to learn this?”

 

To this end, Keysight employees collaborated with the CTE (Career Technical Education) Foundation and Sonoma County Office of Education (SCOE) to develop a Teacher Externship Program. The goal of the program is for teachers to gain local industry experience for a week, enabling them to develop project-based learning (PBL) lesson plans from their experience to take back to the classroom.

 

As an example of the program in action, Keysight headquarters recently hosted six teachers from Rancho Cotate High School and Tech Middle School for a week of full-day sessions. A broad cross-section of teachers – in math, English, language arts, and fashion design – engaged with Keysight presenters who shared their own journeys at the company, their roles, and the skillsets future employees will need in their fields. The event also had the teachers take part in a role reversal, playing the part of students. In a unique twist, during the event an 11th-grade teacher participant had lunch with a former student who now works at Keysight. They chatted about how high school did and did not prepare him for work here.

 

Through this exposure to what it takes to work in a technology company, and to the student perspective, the teachers gained new insights into the process of learning and into the industry itself. A participating 6th-grade teacher told me that the Keysight employee stories were invaluable and motivated them to create and implement new classroom projects. And as Brandon Jewell, Director of Industry Engagement at the CTE Foundation, noted, “your team provided unique insight into the skills needed to enter the industry while also giving teachers a hands-on experience that will be critical as the teachers build their projects and will positively impact their future teachings going forward.”

 

Teacher Externship Program for STEM: gaining traction and recognition

Since the program was developed in 2014, it has spread to several other companies across Sonoma County and to other Keysight offices, fostering value-added connections between teachers and local industry. Collectively, these efforts have trained dozens of teachers and reached thousands of students, helping to build the STEM workforce of the future. I was thrilled when Keysight and the SCOE were recently recognized by the California Department of Education at the California STEM Symposium, giving the program increased visibility as one that any tech company can consider.

 

Now it’s your turn. If your business has not already started a similar program, consider this concept as an opportunity to expand efforts in the STEM education space. You won’t regret it!

One common trait of many engineers is that we are naturally curious and enjoy exercising our brains. Take math, for example: I always found solving math problems on par with solving a puzzle, and might even go as far as to say that math is fun. This was in direct contrast to my younger sister, who loathed everything math-related and went on to become a university professor of English literature. Mastering language was her idea of fun.

 

As my career transitioned from highly technical systems engineering for optical networks 20 years ago to marketing data center solutions at Keysight today, I have gained a profound new respect for language. Surrounded by colleagues with PhDs in physics and engineering, I have come to realize that they sometimes speak a language of their own, filled with abbreviations and acronyms that not everyone understands.

 

Abbreviations and acronyms have been around since the beginning of written language. In our efforts to communicate more quickly with one another, we instinctively use them out of convenience, and it seems that every industry has its own language of commonly used terms. As I write white papers and content for data center infrastructure solutions, I take care to spell out every acronym on first use and to define it wherever possible. Over time, I have compiled 14 pages of data center infrastructure terms and their definitions. No matter how technically savvy you are, I think you will find this document handy. Download your copy and let me know if I missed any important terms.

 

For those of you who still enjoy puzzles, here is a fun word search of a dozen common terms I use almost every day related to data center infrastructure solutions. See if you can find them in the puzzle below and look them up in the Data Center Infrastructure Glossary of Terms if you want to learn how we define them here at Keysight.

 

400GE, DCI, FEC, NFV, PAM, QSFP, BERT, DDR, IEEE, NRZ, PCIe, SDN

 

As discussed in recent blogs, 5G’s momentum is unstoppable, with trials by commercial operators due to be in place in several cities globally this year. But despite the technology’s rapid advances and the success of early deployments, there are still some major challenges that need to be overcome. One of the most significant of these is in delivering high-frequency 5G signals to users’ mobile devices reliably, in typical urban environments.

 

As you may have heard, 5G adds new spectrum both below 6 GHz and at mmWave frequencies above 24 GHz. More spectrum is key to delivering the multi-gigabit speeds promised by 5G. The new sub-6 GHz frequencies behave similarly to existing LTE spectrum, but the new mmWave frequencies are notorious for high propagation loss, directivity, and sensitivity to blockage.
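
To put that propagation loss in perspective, consider free-space path loss (FSPL), which in decibels grows by 20 dB for every tenfold increase in frequency (this is standard Friis math, not specific to any particular 5G deployment):

$$\mathrm{FSPL}\,(\mathrm{dB}) = 20\log_{10}(d) + 20\log_{10}(f) + 20\log_{10}\!\left(\frac{4\pi}{c}\right)$$

All else being equal, moving a link from 2.8 GHz to 28 GHz adds 20 dB of path loss, which is why mmWave systems lean so heavily on antenna gain and beamforming to close the link budget.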

 

mmWave frequencies don’t travel well through solid objects, such as buildings, car bodywork, or even our own bodies. In practice, this means that a user could potentially lose a 5G mmWave signal simply by holding or using their device in the ‘wrong’ way.

 

People problems

You may recall the ‘Antennagate’ issue that Apple faced back in 2010, in which its iPhone 4 would lose signal when it was held by the lower-left corner. This forced Apple to give away free bumper cases so that users’ hands wouldn’t touch the edge of the phone, where the antenna was positioned. It’s a problem that can affect any handheld device, because skin and bone are very effective absorbers of radio waves. However, the mobile industry can’t afford to have similar issues affect an entire generation of 5G phones and tablets.

 

To compound this issue, it’s also hard to predict what will happen to 5G mmWave signals when the receiver, the transmitter, or obstacles between them are moving relative to each other, such as in a busy city street. Earlier this year, Keysight and NTT DOCOMO cooperated on a channel sounding study at mmWave frequencies, investigating signal propagation at 67 GHz in urban areas with crowds of people.

 

The research found that the radar cross section of human bodies varies randomly over a range of roughly 20 dB – a significant variance. It also concluded that ‘the effects of shadowing and scattering of radio waves by human bodies on propagation channels cannot be ignored.’

 

Sounding out

Given this, it’s essential to conduct channel sounding tests in real-world environments rather than just in the lab, simply because the complexities and constant changes of real-world usage cannot easily be replicated. For example, indoor channels will behave differently from outdoor channels. Even factors such as the number of people in the room, or whether a room’s window is single-paned or double-paned, will influence signal behavior.

 

Outdoor environments add a vast number of unpredictable complications. People, vehicles, foliage and even rain or snow will affect 5G mmWave signals, introducing free-space path losses, reflection and diffraction, Doppler shifts, and more.

 

Further variables include the base station’s antenna gain, pattern, and direction; the behavior of the channel itself; and the mobile device’s antenna gain, pattern, and direction. When the base station and user device’s antenna beams are connected, they need to maintain that connection as the device moves in space or changes orientation. The user device may also need to switch to another base station, repeating the beam-directing and forming cycle.

 

The result of all this is clear: exhaustive real-world testing of mmWave 5G base stations and user devices is critical to 5G’s commercial success. And at Keysight, we’re accelerating our testing capabilities to help the wider mobile ecosystem gain insights and advance their innovations. Find out more about our 5G testing methodologies and system solutions.

Next generation optical transceivers are expected to use less power per gigabit, be less expensive per gigabit, and operate at four times the speed of 100GE transceivers. It is rather puzzling how 400GE transceivers will meet all these expectations. In fact, the move from 100GE to 400GE in the data center is revolutionary, not evolutionary.

 

It took 15 years for data centers to evolve from 10GE to 100GE. Data centers began implementing 100GE in 2014, yet full build-outs only became cost-effective over the last couple of years thanks to the availability of more affordable optical transceiver modules. Emerging technologies enabled by fifth generation wireless (5G), such as artificial intelligence (AI), virtual reality (VR), Internet of Things (IoT), and autonomous vehicles, will generate explosive amounts of data in the network. Current 100GE speeds available in data centers will not be able to support the speeds and data processing requirements needed by these technologies. As a result, data center operators are looking to evolve their networks from 100GE to 400GE.

 

There are three key challenges that need to be addressed to make the transition from 100GE to 400GE as smooth as possible:

 

Challenge 1: Increase Channel Capacity

According to the Shannon-Hartley theorem, there is a theoretical maximum rate at which error-free data can be transmitted over a channel of a specified bandwidth in the presence of noise. Therefore, to reach 400GE speeds, either the channel bandwidth or the number of signal levels must be increased.
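
For reference, the theorem bounds the capacity C of a channel by its bandwidth B and signal-to-noise ratio S/N:

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

This is why 400GE designs pursue both levers at once: wider-bandwidth channels, and multi-level signaling such as PAM4, which carries two bits per symbol where NRZ carries one (a general observation about the trade-off, not a claim about any specific transceiver design).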

 

Challenge 2: Guarantee Quality & Interoperability

As new 400GE transceiver designs transition from simulation to first prototype hardware, engineers face the challenging task of developing a thorough, yet efficient, test plan. Once deployed in data centers, marginally performing transceivers can bring down a network link, lowering the overall efficiency of the data center as switches and routers re-route traffic around the faulty link. The cost associated with transceivers that fail after deployment is enormous. Since large hyperscale data centers can house more than 100,000 transceivers, even a failure rate of one-tenth of one percent would equate to 100 faulty links.

 

Challenge 3: Reduce Test Time, Reduce Cost

Keeping the cost of the optical transceivers low is a high priority for data center operators. To be competitive, transceiver manufacturers must find ways to drive down production costs. Like most new technologies, the price of next-generation optical transceivers tends to drop sharply after introduction to the market, and development costs amortize as volume ramps. Test time contributes significantly to overall transceiver cost. Therefore, more efficient testing of the broad range of transceiver data rates accelerates innovation and lowers cost.

 

The Next Test Challenge

Many data center operators are moving to virtualized networks using software-defined networking (SDN) and network functions virtualization (NFV). They need full network test of Layers 2-7, including SDN/NFV validation and traffic loading, to ensure that traffic flows through the virtualized network as expected. This is the next challenge data center operators will need to overcome.

 

400GE in data centers will soon become a reality. Find the solutions to address these 400GE transceiver test challenges here.

A good idea is never lost, as Thomas Edison said. At the turn of the 20th century, nearly 30% of all cars manufactured in the US were electric-powered, but by the 1920s they had effectively disappeared, thanks to cheap gas and the mass production of conventional cars. But fast-forward 100 years, and electric cars are back with a vengeance, in response to rising fuel prices and government-mandated emissions controls.

 

The number of pure electric and plug-in hybrid cars on the world’s roads passed the 3-million mark in 2017, and will accelerate to over 5 million by the end of 2018, according to analyst EV-Volumes. This rapid growth in production volumes is being matched by equally rapid advances in powertrains, power electronics and battery technologies.

 

These advances also highlight the increasingly critical role played by a previously unsung vehicle component: the DC/DC converter. Irrespective of whether a car is a pure EV or a hybrid, DC/DC voltage conversion is at the heart of its power electronics systems. It manages the energy exchange between the vehicle’s high-voltage (HV) bus, which serves the main HV battery and the electric traction system that drives the motors and generators, and the traditional 12 V power bus, from which most of the car’s electrical systems (in-car entertainment, navigation, heating, lights, and more) are powered.

 

The converter is fundamental to the overall efficiency and performance of electric vehicle (EV) and hybrid vehicles, for several reasons:

  • The flow of power between the HV and 12 V buses is bi-directional and can change in milliseconds according to the demands of the vehicle’s systems and driver input (for example, when transitioning from acceleration to regeneration), so large loads have to be converted seamlessly and safely for smooth, predictable vehicle operation.
  • Because of the high electrical loads they carry, most DC/DC converters are water-cooled, which adds weight and cost to the vehicle. There is strong pressure to simplify this, to minimize cooling requirements, cut weight, and improve efficiency.
  • The converter must be robust enough to continue to operate efficiently across a wide range of environmental and driving conditions.

 

Given the pace of development of vehicles, and the constant pressure to keep costs down across the test and development lifecycle for converters, efficient simulation, design, validation, and manufacturing test of the converters is essential. But this, in turn, presents its own challenges.


Converter testing challenges

The design and test obstacles result from many factors:

  • Bi-directional test: Testing bi-directional power flow demands equipment that can both source and sink power to the converter. Conventional test methods using external circuits and multiple instruments typically don’t allow smooth signal transitions between sourcing and sinking power, and give inaccurate simulations of operating conditions. They also lead to excess heat build-up in the test environment.
  • New power semiconductor technology: Designers are starting to use wide-bandgap (WBG) devices. While these offer better power efficiency and the ability to handle higher voltages and temperatures than conventional silicon devices, their use complicates the simulation and design of DC/DC converters. Users need to evaluate each device to determine whether it will work in their designs.
  • Reliability and safety concerns: Using new semiconductors means extra validation and reliability testing is needed to ensure converters will last in harsh operating conditions. Also, given the power levels involved, test engineers need special precautions when working with converters. This requires special safety mechanisms in manufacturing, including redundant systems that do not expose the tester to high voltages if a failure occurs.
  • Maximizing efficiency: Because of the various operational and environmental influences on efficiency, it’s difficult for testers to simulate all of these to evaluate the real-world, whole system operation of the converter. Also, measuring small percentage changes in efficiency demands instruments with high dynamic range.


Converting challenges into solutions

To address these design and test issues, Keysight has developed and introduced new, innovative approaches that help manufacturers accelerate their programs. For example, high-frequency enabled simulators can accurately simulate the behavior of new WBG semiconductors, helping to improve the efficiency of DC/DC converter designs.

 

New, fully integrated source/sink solutions enable more accurate simulation and testing of DC/DC converters’ transitions in power flow direction during acceleration and regeneration. These test solutions can also return clean power to the test lab’s AC mains, reducing regenerative heat and dramatically cutting the need for costly HVAC equipment in the test system environment.

 

Find out more about how Keysight enables more accurate and efficient testing of the systems at the heart of the electric and hybrid vehicle revolution here.

As a technology corporation, there are many tables to consider claiming a seat at. Economic policy boards, 3GPP standards bodies, other corporate boards, chambers of commerce – the list is endless. But there is one table that I’ve been at for some time and I think every company should pull up a seat to: the energy conservation table.


In May 2018, I had the opportunity to participate in a panel at the ASPIRE Global Forum in Mountain View, California, on how the global business community can work together to address the energy challenges that exist in the world today. With the U.S. Energy Information Administration’s (EIA’s) projection that energy consumption will increase by 28% by 2040, there is no better time than now for global corporations to influence policies and take steps to address the impact of global energy and natural resource needs on the planet. At Keysight, we expect to recognize $2 million in cost avoidance, 10% energy conservation and 15% water conservation by the end of fiscal year 2020 (see our latest 2017 CSR Report here).

 

At the forum, I was fortunate enough to share the stage with high-profile speakers including Jeff Immelt, former CEO of General Electric, and General Colin Powell, former Secretary of State. I spoke about micro-grids and their importance for large corporations, especially as they relate to energy redundancy. We are currently installing a 4.3-megawatt fuel cell at Keysight’s corporate headquarters in Santa Rosa, which runs on natural gas and produces almost zero emissions.

 

This isn’t the first time I’ve spoken about Keysight’s investment in sustainability and energy efficiency. Keysight has a strong commitment to minimizing our carbon footprint and we have taken actions for several years to conserve natural resources and improve efficiency.

 

In 2008, we installed a 1-megawatt solar array that shifts with the movement of the sun – at the time, it was the largest solar array in the Northern California Bay Area region. We were also early adopters in providing free electric vehicle charging stations (also powered by the sun) for our employees. We have an integrated water reclamation system to utilize recycled water in our landscaping. And in 2017, we installed one million square feet of energy efficient LED lighting with smart sensors driven by a software backbone in our Santa Rosa, Hachioji, Penang and Colorado Springs campus locations.

 

Sure, we do this because we have a strong belief that large businesses like Keysight have a role in pushing initiatives like these forward. But it also makes good business sense. Our suppliers, our competitors, and our communities all take notice when we lead.

 

But even more importantly – our customers, investors, and employees take notice. Leading in intelligent sustainability practices is not only the right thing to do, it’s a business imperative, as Hamish Gray, corporate services vice president, recently noted. Companies throughout the world are setting standards, and when we illustrate our leadership not just in enabling technology but in sustainability, we win business. By demonstrating leading-edge sustainability practices – whether through our processes, tools or systems – we gain credibility and, ultimately, market share.

 

Energy and natural resource conservation is a journey: Where to begin?

The investments we’ve made in sustainability and efficiency have been made over time. It would have been overwhelming and cost-prohibitive to do all of these things at once. Companies that are looking to address these opportunities may wonder where to begin.

 

Companies should start by evaluating where they are today, and aim for some quick wins. Setting big, lofty goals is great, but simple policy changes can really inspire people and begin compounding quickly. Consider turning down the thermostat on the weekends. Or start evaluating your own suppliers to confirm that their practices fit with your environmental goals.

 

I also suggest that you celebrate your small wins and make them visible. Touting your sustainability metrics can help build on that momentum and gain some positive attention, and may even lead to partnerships with suppliers that ultimately make capital investments down the line easier.

 

Also, consider the role of energy and natural resource conservation as part of your overall corporate citizenship efforts. At Keysight, we implemented a six-step journey to an evolved CSR program model that helped gain traction not only in the energy and water conservation space but across our citizenship efforts worldwide.

 

It’s never too late to pull up a chair for energy and natural resource conservation. Leading in sustainability practices not only looks good, but it feels good knowing we’re on the right side of history.

It has never been more important for businesses to consider the impact their operations have on the environment. For example, according to the U.S. Environmental Protection Agency, 30% of total U.S. greenhouse gas emissions came from the industrial sector in 2016 – the most of any sector.

 

As a global business, we need to take a holistic approach to sustainability. We must not only consider carefully the direct environmental impact of the materials and processes we use; we must also manage our operations in the most energy-efficient and sustainable ways possible. We need to look outwards, considering how we use and replace natural resources, as well as inwards to minimize the emissions and waste we produce. Our own operations, and those of our suppliers and partners, are linked in a continuous chain of environmental responsibility.

 

During 2017, we worked to clarify what it means to build and maintain a better planet through our CSR programs, and developed a set of key environmental impact goals to achieve by the end of 2020. We aimed to recognize $2 million in cost avoidance, 10% energy conservation, and 15% water conservation compared to our fiscal year 2015 baseline.

 

Accelerating sustainability

To meet these goals, we have put in place multiple systems and initiatives across our business. A key example is the one megawatt, three-acre solar electricity system at our headquarters in Santa Rosa, which reduces our carbon footprint by using renewable energy. It not only provides 5% of our site’s electrical needs, but also powers more than 30 vehicle charging stations, so our employees can charge their electric vehicles while they are at work.

 

We are proud to run an ISO 14001:2015-certified Environmental Management System which continuously reduces adverse environmental impacts from our operations and drives ongoing improvements in our environmental performance. This compliance framework applies to the entire product development lifecycle, from initial design and development, throughout production and delivery, to refurbishment and support.

 

We also follow the General Specification for the Environment (GSE), which sets restrictions on hazardous substances in the materials and components we use for our products. To address end-of-life waste, we have developed a remarketing solutions business that recovers and repurposes older instruments for resale, helping us to reduce the number of products that end up in landfills. Further program options help customers safely dispose of or recycle used instrumentation.

 

These efforts are already bearing fruit

Since November 2014, we have achieved 4.69% energy conservation and 12.44% water conservation, resulting in approximately $850,000 in cost avoidance. We also don’t believe in standing still: our excellent results in water conservation last year led us to increase our water conservation goal from the original 10% target to 15%. Crucially, these efforts have had no material negative impact on our profit and loss, or on institutional investment levels.

 

It has always been important to us that our activities in running a successful, profitable and innovative business go hand-in-hand with sustainability. As we help people and organizations globally to solve problems by accelerating innovation, we’re ensuring that this has the minimum possible impact on the planet’s ecosystem. Find out more about our progress towards our 2020 environmental CSR goals by downloading our 2017 CSR Report.

The business maxim “If you can’t measure it, you can’t improve it” is also a pretty good definition of the value of test when developing new products and technologies. That’s why for many companies, test and measurement equipment is often one of the biggest capital expenditures on their balance sheets.

However, this means that test departments are themselves increasingly subject to scrutiny and measurement. They’re being pressured to accelerate their processes and deliver results faster to speed up development cycles and meet market demand. At the same time, the business is squeezing test budgets to minimize capital and operating expenditure and maximize the return on existing test investments.

 

To meet these conflicting demands, test departments need in-depth visibility into what’s happening with the test equipment on their benches. It’s no longer sustainable for instrument utilization to be tracked and recorded manually by staff using paper or spreadsheet-based processes.

 

Who’s using my equipment?

Departments need to know where their critical assets are, who’s using them, and how often. Is costly equipment sitting idle under benches, or only used infrequently? It’s also critical to understand the health of instruments: do they need recalibrating, or are they operating in environments that could affect their accuracy or lead to premature failure?

 

Without these insights, test departments can’t measure and improve the efficiency of their own processes – and can’t make the best decisions when scoping out the resourcing and equipment needs for upcoming test projects. So, what will give departments this visibility and enable them to gain full control of all their test assets, maximizing productivity and ROI? There are three fundamental processes involved:

 

1. Asset tracking and control

It’s essential to know what test equipment is available to teams, where it is located, and who is currently controlling its usage. This makes it easier to locate instruments when they are needed for a test, or for calibration or maintenance. The benefits of asset tracking include time savings during audit processes, an updated equipment inventory and fewer lost assets. Standard asset tracking tools can provide access to this data.

 

2. Assessing instrument utilization and health

As well as knowing an instrument’s location, it’s important to have specific details on how it is being utilized: not just whether it’s switched on, but detailed real-time application logging that shows precisely what it is being used for. This telemetry also reveals the health of the asset (such as operating voltage and temperature), which can indicate the early signs of a problem, or when maintenance is due – helping to avoid potentially costly downtime from premature failure. It also helps to identify assets that are not in regular use, and those which may be surplus to requirements because they no longer add productive value to the test department. As a result, decisions can be made to trade in or sell under-utilized equipment.

 

3. Optimizing asset use

When the location and utilization of assets is being managed effectively, a loan pool of instruments can be created to enable scheduled sharing across groups of users. This cuts costs by avoiding unnecessary new equipment purchases, and helps to maximize usage of existing assets.

 

Making the most of your test assets

When applied using an integrated approach, these management processes enable organizations to do and achieve more with their existing test assets, while saving on future CapEx and OpEx investments. And to help organizations put these processes in place, we recently introduced our Test Asset Optimization Services, the industry’s first integrated solution to address the complete asset management needs for all test equipment used in R&D and manufacturing.

 

With our integrated suite of services, organizations’ test departments can:

  1. See all their test assets from multiple equipment vendors, track them across multiple labs, locations, and users, and manage their compliance. This reduces the time spent on physical inventory counts and improves the productivity of engineering teams by giving fast access to the right assets.
  2. Know the physical condition and true utilization of test equipment through monitoring, to increase asset usage, decrease cost of test, and identify unhealthy instruments before a bigger problem occurs.
  3. Optimize use of existing equipment across the organization with a central loan pool. This assists with smarter procurement decisions such as the need to purchase or rent new instruments, and helps customers to realize the residual market value of older equipment through trade-ins or upgrades.


Integrated Test Asset Optimization Services ensure that teams always have access to the test equipment they need, at the right time. They also enable organizations to unlock powerful, actionable insights into asset usage and ROI that they’ve never previously had access to, helping to boost test efficiency and agility. Find out more about the services here.

A couple of weeks ago, Jeff Harris wrote the first of a series of blogs on blockchain. In this second installment, I build on his overview and explore the methods used to create a trusted distributed ledger. To develop this understanding, I am going to use the example of a checkbook to explore one of the first blockchain implementations: cryptocurrency. I’ll provide an overview of:

 

  • How a transaction is initiated on the network
  • How the transaction moves on the network and is validated
  • How the transaction is recorded into the permanent ledger (the blockchain)

 

The parts of a checkbook: the ledger, the checkbook, the check, and the bank

 

1. The ledger

Blockchain technology is a distributed ledger with no centralized storage. All cryptocurrencies are built on blockchain. You can think of blockchain like a checkbook ledger. When you write a check to someone, you enter the transaction into the ledger. When someone sends you a check, you deposit it and that transaction is also recorded in the ledger. In a cryptocurrency system, blockchain is the ledger, but unlike your checkbook ledger, it is a public file and it contains copies of every account and every transaction extending back to the beginning of time for the currency.

 

2. The check

When we want to pay a debt or transfer funds, we pull out our checkbook and write a check. The check records the parameters of the transaction and provides a physical form of proof, including our signature. When we take the check to the bank, the bank transforms that paper record into a digital transaction conducted on a centralized compute system. Blockchain eliminates the check and the central compute system, replacing them with a distributed system.

 

3. The checkbook

In bitcoin and other cryptocurrencies, the checkbook is replaced by a computer client called a “wallet.” The wallet client lets the end-user submit transactions to the cryptocurrency network. It also transforms the client into a node that can validate and process transactions coming from other places on the network. This latter functionality is called mining, and it is critical to building trust and consensus on the blockchain. Individuals are incentivized to participate in mining via rewards given for finding solutions to extremely difficult mathematical problems.

 

4. Writing a check

The act of writing a check is executed through the wallet client or an API which communicates to the wallet client. Each currency is different, but in general a transaction includes the sender wallet ID, the receiver or receivers, the amount to be transferred, and any fees associated with the transaction. Some currencies include other fields as well. Once this information is entered, the node announces the transaction to its peers. As an example, in a Bitcoin network, a node will typically establish at least 8 peer-to-peer connections.
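
As a rough sketch of what such a transaction might look like in code (the field names here are hypothetical and simplified, not any specific currency’s wire format, and `peer.send` stands in for the real peer-to-peer protocol):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    sender_wallet_id: str      # who is paying
    receivers: list            # one or more (wallet_id, amount) pairs
    fee: float = 0.0           # fee offered to whoever mines the block
    signature: bytes = b""     # proof the sender authorized the transfer

def announce(tx: Transaction, peers: list) -> None:
    """Broadcast a new transaction to every peer connection this node holds
    (a Bitcoin node typically maintains at least 8 such connections)."""
    for peer in peers:
        peer.send(tx)
```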

 

5. Processing the check

Banks process checks centrally. They verify the integrity of the check (signature, anti-counterfeiting measures, account numbers, etc.). Once validated, they create a digital transaction in a central computer system. The central system applies debits and credits to the accounts in question, and money is exchanged.

 

In a cryptocurrency transaction, the blockchain replaces the central computer system and the accounts. The blockchain is the ledger of all accounts. It is distributed, open, and pseudo-anonymous: Anyone can inspect the blockchain and see all the transactions, and which accounts they went to, but there is no personally identifying information to connect an account to a person. It is possible to walk through the blockchain just like you can walk back through your checkbook ledger to the beginning of time when you opened the account.

 

6. Validating and recording the transaction

Validating a transaction in a cryptocurrency network works similarly to our checking example. Blockchain transactions are submitted to the network by a wallet node. The wallet node shares the transaction to its peers via a peer-to-peer mechanism. Receiving nodes immediately validate the transaction and, if valid, announce it to their peers. All peers in the network will repeat this process, ensuring the integrity of the blockchain.

 

Validation may include many checks, but two are critical: verifying the authenticity of the sender, and verifying that funds exist to pay the debt in the transaction. The sender used private-key cryptography to prove his or her identity when the transaction was submitted, so that condition is already met. To verify that funds exist, each node searches the blockchain and determines whether the sender has received or earned funds that have not yet been committed to any other transaction. To perform this validation, every full node maintains either a full copy of the blockchain, or a subset with all the relevant unspent prior transactions.
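
A minimal sketch of that funds check, reusing the hypothetical transaction fields from the earlier example (this is a simplified running-balance scan rather than the true unspent-output tracking real currencies use):

```python
def verify_funds(blockchain: list, sender: str, amount_needed: float) -> bool:
    """Walk the ledger: total everything the sender has ever received, subtract
    everything already spent (outputs plus fees), and compare the balance."""
    received = spent = 0.0
    for block in blockchain:               # each block holds a list of Transactions
        for tx in block["transactions"]:
            for wallet_id, amount in tx.receivers:
                if wallet_id == sender:
                    received += amount     # funds paid to the sender
            if tx.sender_wallet_id == sender:
                spent += sum(a for _, a in tx.receivers) + tx.fee
    return received - spent >= amount_needed
```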

 

Once any node validates a received transaction, it forwards it to its peer nodes, and also stores it in a transaction queue for later recording into the blockchain. Through this process, the transaction propagates through the entire network.

 

7. Recording the transaction to the ledger

Up until this point, the transaction is only stored in a queue, waiting to be committed to the ledger. This last step is the equivalent of the bank actually moving money between accounts.

 

When enough transactions have accumulated, or enough time has passed, the transactions in the queue are assembled into a block. Each cryptocurrency has specific rules about how blocks are put together. These rules may cover details such as: What order can transactions be put in? What defines their priority? How many transactions are allowed per block? How often is a block committed to the chain? And so on. These details are important to the performance characteristics of the blockchain in question. As you think about other applications for blockchain technology, you will find that parameters like time to resolution, latency, and transactions per second matter differently to different systems.

 

The blockchain network for the currency dictates the rules of how blocks are assembled. Once a client can meet those rules, it forms a block from the transaction queue, and then starts work on a very hard computational puzzle related to the block. The block is not committed to the blockchain until a valid solution to that puzzle is found. Each cryptocurrency has its own puzzle. The process of finding the solution is called mining. We’ll take a deeper look at how mining works in the next chapter.
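
Bitcoin’s puzzle, for example, is to find a nonce that makes the block’s hash fall below a network-set target. Here is a standalone toy version of that idea (illustrative only: real difficulty targets are vastly stricter, and real block headers are more involved than a JSON-serializable dict):

```python
import hashlib
import json

def mine(block: dict, difficulty: int = 4) -> dict:
    """Try nonces until the block's hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    block["nonce"] = 0
    while True:
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith(prefix):
            block["hash"] = digest    # solution found: this took a lot of work...
            return block
        block["nonce"] += 1

def check(block: dict, difficulty: int = 4) -> bool:
    """...but any other node can verify a claimed solution almost instantly."""
    header = {k: v for k, v in block.items() if k != "hash"}
    digest = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return digest == block["hash"] and digest.startswith("0" * difficulty)
```

The asymmetry between `mine` and `check` is the whole point: finding the solution is expensive, while verifying it takes a single hash.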

 

When a node finds the solution to a puzzle, it broadcasts that solution and the associated block to the network. Remember, finding the solution to the puzzle is exceptionally difficult and time-consuming; verifying a given solution is fast and easy. All receiving clients check the work and confirm the solution is valid. Once they validate the block, they record it to their local copy of the blockchain and forward the new block to the network. The block is now committed to the blockchain. As a reward for the work, the owner of the lucky node that solved the puzzle gets a prize: cryptocurrency coins.

 

Integrity of the blockchain

At this point, the process repeats itself, with the next batch of transactions being formed into a new block. One last concept is crucial to understanding blockchain: each new block contains a permanent reference to the previous block. Because each block links to the one before it, if any block is changed, every block that comes after it must have its work puzzle re-computed.
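
Using the illustrative Block class from earlier, verifying this chaining is a one-pass walk: recompute each block's hash and confirm that the next block still points to it. Change any historical transaction and every link after it breaks, which is what forces the re-computation described above:

```python
def chain_is_valid(blocks):
    """Check that each block's stored prev_hash matches the recomputed
    hash of the block before it (blocks ordered oldest to newest)."""
    for prev, curr in zip(blocks, blocks[1:]):
        if curr.prev_hash != prev.hash():
            return False      # a tampered block invalidates everything after it
    return True
```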

 

This chaining mechanism ensures the integrity of the blockchain by making it nearly impossible to change transactions once they are validated and recorded. The longer a transaction has been on the blockchain, the less likely it is to ever be changed. This is why the computationally expensive work function is integral: it is the mechanism that creates the trust necessary for a distributed, decentralized ledger.

 

While blockchain is a peer-to-peer technology, not all parts of the process require full peer-to-peer meshing. In particular, solving the puzzle may be done by a system of many computers that share the work. When this is implemented, the mining traffic will look more like client-server traffic than peer-to-peer. If you want to understand more about the network traffic of a blockchain transaction, watch for the fourth part in this series, where we look at this in greater depth.

It feels like it’s been a long time coming, but 5G is nearly here. After the 3rd Generation Partnership Project (3GPP) published the first specifications in December 2017, 5G gained real momentum following its successful commercial debut at the 2018 Winter Olympics in PyeongChang. The Games showcased a range of advanced applications delivered at scale, including driverless buses using 5G links to navigate and live 4K video streaming of high-profile events.

 

This was followed closely by Mobile World Congress 2018, where leading mobile operators including AT&T, Sprint, T-Mobile, and Verizon all announced timetables for commercial 5G rollouts in the U.S. over the next 18 months. The 3GPP is also expected to publish the final 5G standards in the next few weeks.

 

However, despite all this high-profile activity and media hype around applications such as autonomous cars and instant HD video streaming on mobiles, there’s still a long way to go before the technology becomes fully mainstream. Progress towards large-scale 5G deployments is going to take time. Innovative new products and services will need careful development and exhaustive testing to ensure they meet the required performance and reliability standards.

 

With this in mind, what will the initial 5G implementations look like over the next 18 to 24 months? And what can we anticipate from the technology in the longer term? Network Computing recently published our article describing what we can realistically expect to see between now and 2020, and here’s a recap of what’s coming into view:

 

Raising speed limits

Commercial 5G networks are due to be in place in several cities worldwide by the end of this year, with South Korea likely to be first and the U.S. and Europe close behind. But this won’t immediately herald a raft of new services and applications. Instead, consumers in these cities will experience faster performance on their mobile devices (regardless of whether they are 5G enabled) as carriers test the scalability of their networks and services.

 

As a result, existing high-bandwidth, low-latency services such as video streaming will be the most noticeable difference for users. As we move into 2019, we’ll also see the launch of a range of 5G-enabled devices that can exploit emerging fixed wireless internet services, bringing even faster content delivery for both consumers and business users.

 

Catching the mmWave

3GPP’s imminent release of the next set of 5G standards will focus on mmWave. We can expect rapid progress over the next 12 months in high-density deployments of small cells and mmWave-ready devices that take advantage of the higher bandwidth and lower latency mmWave offers. mmWave will also be an enabler for large-scale IoT deployments, accelerating the move toward smart cities, in which tens of millions of devices will connect and interact to streamline processes and inform decisions.

 

Diving into immersive experiences

Much has been made of the immersive VR and AR experiences that 5G will support, in areas ranging from leisure and sports to education, training, and even remote medicine. In most cases, we’re unlikely to see these become everyday applications until at least 2020. However, many leading carriers and manufacturers such as Korea Telecom, Verizon, Samsung and Qualcomm are conducting demonstrations at scale, so we can fully expect the promise of these experiences to be realized.

 

In conclusion, the rollout of 5G will not be a sprint, but a marathon. While the deployments we’ve seen to date show how the technology can be deployed at scale, there’s still a way to go before it can be extended to a national or international level. As the standards crystallize, 5G will evolve through extensive testing of networks and devices in real-world conditions, to ensure that it delivers the performance and reliability expected of it.

 

Find out how Keysight is helping world-leading companies accelerate their 5G innovations here.

It happened to me again today. Someone asked me for information that I know I have on my hard drive, somewhere in my archival “system.” All I had to do was find the file and send it to the person and life would be good. How hard can that be?


Instead, I spent 20 minutes searching through my C: drive. Surely, I had put it in a safe place where it was easy to find. Often, I end up with important things archived as email, so I checked my humongous inbox and a few of my large email archives. (I do save everything. Disk storage is cheap.)

 

Success! I did find the file and sent it to my colleague who was pleased to see it.

 

All Too Common

I’m sure this has never happened to you. You are organized. You know where all your important documents are and can access them instantly. But most of us struggle with managing data.

 

Recently, a study of design engineers found that many of them waste 20% of their technical time on nonproductive data management tasks. That is one day per week. I was surprised by how large this number was, so we asked our customers about this issue. Sure enough, many of them reported similar percentages of wasted time in their teams, so the problem is real. What would you give to improve your engineers’ effectiveness by 20%?

 

I suppose this should not surprise us. We live in the information age, but many of our tools are not that great at handling information. It seems that most of these tools put the burden on the user to manage the data, instead of automating the task.

 

Make Decisions and Get to Market

The pressure to get products to market fast never stops. Being first to market is a critical factor in most industries. 

When we asked customers about their frustrations with managing design-related data, we got an earful.

 

Yes, they see time wasted fumbling with their data, and this results in engineering inefficiency. More importantly, though, they often lack confidence in the data they use to make key decisions. Is the design ready to be released to manufacturing? Well, the data indicates so, but are we looking at the right data?

 

Often, an expert engineer has to personally check the data to be sure it is right. In some cases, customers reported rerunning a set of measurements because they weren’t confident in the archived data. It shouldn’t be this difficult or time-consuming.

 


Figure 1: Management of product development data must be integrated into the enterprise systems

 

Easy access to the right data is never the end goal. There is always some business decision in play based on the data (check out Brad Doerr's post Extracting Insights from Your Messy Data for another perspective). What we really want and need is to be able to pull insight from our design and test data, to make critical decisions quickly and with confidence.

 

Check out this whitepaper where I explore the topic more fully and learn what actions you can take: Accelerate Innovation by Improving Product Development Processes