
Insights Unlocked

6 Posts authored by: Bob Witte

It happened to me again today. Someone asked me for information that I know I have on my hard drive, somewhere in my archival “system.” All I had to do was find the file and send it to the person and life would be good. How hard can that be?

Instead, I spent 20 minutes searching through my C: drive. Surely, I had put it in a safe place where it was easy to find. Often, I end up with important things archived as email, so I checked my humongous inbox and a few of my large email archives. (I do save everything. Disk storage is cheap.)


Success! I did find the file and sent it to my colleague who was pleased to see it.


All Too Common

I’m sure this has never happened to you. You are organized. You know where all your important documents are and can access them instantly. But most of us struggle with managing data.


Recently, a study of design engineers found that many of them are wasting 20% of their technical time on nonproductive data management tasks. That is one day per week. I was surprised by how large this number was, so we asked our customers about this issue. Sure enough, many of them reported similar percentages of wasted time in their teams, so this problem is real. What would you give to improve your engineers’ effectiveness by 20%?


I suppose this should not surprise us. We live in the information age, but many of our tools are not that great at handling information. It seems that most of these tools put the burden on the user to manage the data, instead of automating the task.


Make Decisions and Get to Market

The pressure to get products to market fast never stops. Being first to market is a critical factor in most industries. 

When we asked customers about their frustrations with managing design-related data, we got an earful.


Yes, they see wasted time due to fumbling with their data, and this results in engineering inefficiency. More importantly though, they often lacked confidence in the data they used to make key decisions. Is the design ready to be released to manufacturing? Well, the data indicates so, but are we looking at the right data?


Often, an expert engineer has to personally check the data to be sure it is right. In some cases, customers reported that they rerun a set of measurements because they aren’t confident in the archived data. It shouldn’t be this difficult or time consuming.



Figure 1: Management of product development data must be integrated into the enterprise systems


Easy access to the right data is never the end goal. There is always some business decision in play based on the data (check out Brad Doerr's post Extracting Insights from Your Messy Data for another perspective). What we really want and need is to be able to pull insight from our design and test data, to make critical decisions quickly and with confidence.


Check out this whitepaper where I explore the topic more fully and learn what actions you can take: Accelerate Innovation by Improving Product Development Processes


After the FCC announced the availability of a huge amount of 5G spectrum in 2016, I wrote about it in this blog posting. I was rather impressed with the amount of spectrum made available but I also identified three areas that needed breakthrough innovation for 5G to be successful:

  1. New channel models
  2. Beamforming
  3. New air interface


The industry is making progress in all three of these areas and there’s still more work to be done. For a good overview of 5G status, see this article on ElectronicDesign "5G—It’s Not Here Yet, But Closer Than You Think".


The 3GPP standards body has been working on the new air interface, now referred to as New Radio (NR). A major milestone was the release of the first NR standard, known as the Non-Standalone (NSA) Release 15 specification. “Non-Standalone” means that the 5G network depends on the existing LTE evolved packet core (EPC) network, with an LTE “anchor” carrier for control signaling aggregated with an NR carrier for data. Industry leaders pushed for and got the NSA early release to expedite their deployments. The full Standalone (SA) Release 15, using the next-generation (NG) core network and the NR air interface, is due out in June 2018. This phased approach makes a lot of sense for a complex system like 5G.


The wireless network operators are already planning and doing 5G field trials of various forms, including proprietary pre-5G implementations. These early trials are mostly focused on delivering broadband wireless to fixed locations, called Fixed Wireless Access (FWA). These deployments not only deliver immediate value to customers but also allow the industry to gain experience with NR and the higher frequency bands.


The goals of the NR specification are very aggressive, and cover use cases including:

  • Very low to very high data rates
  • Low latency
  • Massive machine-to-machine communication
  • High reliability
  • Low power operation


Think about those requirements a bit and you’ll see that they are full of contradictions and engineering tradeoffs. But engineers do what they do and the NR spec handles these conflicting requirements via a new highly scalable orthogonal frequency division multiplexing (OFDM) system. I won’t try to describe the complex system of variable subcarrier spacing, symbol length and timing but it is designed to be very flexible to cover all the desired use cases.
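As a rough sketch of that scalability: in the Release 15 numerology, the subcarrier spacing is 15 kHz × 2^μ for a numerology index μ, and the useful OFDM symbol duration shrinks in proportion. A minimal Python illustration (ignoring the cyclic prefix and many other details):

```python
# Sketch of 5G NR scalable OFDM numerology (Release 15):
# subcarrier spacing doubles with each numerology index mu,
# and the useful symbol duration shrinks proportionally.

def nr_numerology(mu):
    """Return (subcarrier spacing in kHz, approx. useful symbol duration in us)."""
    scs_khz = 15 * (2 ** mu)       # 15, 30, 60, 120, 240 kHz
    symbol_us = 1e3 / scs_khz      # useful symbol time = 1/SCS (no cyclic prefix)
    return scs_khz, symbol_us

for mu in range(5):
    scs, sym = nr_numerology(mu)
    print(f"mu={mu}: {scs} kHz subcarriers, ~{sym:.2f} us symbol")
```

The wide spacings (shorter symbols) suit the huge FR2 channel bandwidths, while 15 kHz keeps compatibility with LTE-like deployments at low frequencies.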


Up In Frequency

To achieve high data rates while supporting more users, the plan for 5G is to move up in channel bandwidth and frequency. There’s just more spectrum (as measured in Hz) at higher frequencies. These ranges are now referred to as Frequency Range 1 (FR1) and Frequency Range 2 (FR2).


Frequency range designation | Corresponding frequency range
FR1                         | 450 MHz – 6,000 MHz
FR2                         | 24,250 MHz – 52,600 MHz
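For illustration, a tiny helper that maps a carrier frequency to its designation using the boundaries above (a sketch only; actual 3GPP band assignments are more detailed than a range check):

```python
# Map a carrier frequency (MHz) to its 3GPP frequency-range designation,
# using the Release 15 FR1/FR2 boundaries quoted in the table above.

def frequency_range(freq_mhz):
    if 450 <= freq_mhz <= 6000:
        return "FR1"
    if 24250 <= freq_mhz <= 52600:
        return "FR2"
    return "outside FR1/FR2"

print(frequency_range(3500))   # mid-band
print(frequency_range(28000))  # 28 GHz mmWave
```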


FR1 extends somewhat higher than the existing LTE spectrum in use now and will require some incremental improvements in technology and approach for 5G, particularly for the much wider bands and channel bandwidths. FR2 is another ballgame, well into the mmWave range where signal power is more difficult to achieve and much easier to lose to propagation losses.
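To put the propagation-loss point in numbers, here is the standard free-space path-loss formula comparing a sub-6 GHz carrier to 28 GHz at the same distance (free space only; real mmWave channels add blockage and penetration losses on top of this):

```python
import math

# Free-space path loss in dB for distance d (meters) and frequency f (Hz):
# FSPL = 20*log10(4 * pi * d * f / c)
def fspl_db(d_m, f_hz):
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

loss_2ghz = fspl_db(100, 2e9)
loss_28ghz = fspl_db(100, 28e9)
print(f"2 GHz @ 100 m:  {loss_2ghz:.1f} dB")
print(f"28 GHz @ 100 m: {loss_28ghz:.1f} dB")
print(f"extra loss at 28 GHz: {loss_28ghz - loss_2ghz:.1f} dB")
```

Moving from 2 GHz to 28 GHz costs about 23 dB for the same distance and isotropic antennas, which is exactly the deficit that higher-gain antennas must claw back.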


Throw Your Cables Away

With signal loss a problem, adding some additional antenna gain can certainly help. At these higher frequencies (shorter wavelengths), phased array antennas can be used to improve the gain and steer it to where we want it to go. To keep cost down and performance up, these compact phased-array antennas are being attached directly to the RF Integrated Circuit (RFIC). This tight integration into the system means the usual output connectors are not available for measurement use. All measurements must be made Over The Air (OTA). So, yes, it’s time to throw your cables away.


Accurate connected measurements at FR2 can be a challenge, but decades of measurement science work has made them commonplace. Making accurate OTA measurements is a lot harder, introducing a much larger measurement uncertainty. Think many dB of uncertainty instead of <1 dB for connected measurements! In other words, OTA measurements are going to be less accurate than we have become used to – making everything more difficult.

Enter the Spatial Domain

Mobile wireless devices have always operated in three dimensions…the world tends to be configured that way. When a 3G mobile phone changes location, the system just has to track signal strength and make a handover to the right base station at the right time. Now consider a 5G device working at FR2: the variables now include the base station antenna gain, pattern, and direction; the behavior of the channel, including fast fading; and the mobile antenna gain, pattern, and direction. We have moved into the spatial domain.


Let’s consider how the User Equipment (UE) makes and maintains a wireless connection. The UE and base station need to find each other by sweeping their antenna beams around in some organized fashion. Once they lock onto beam settings that work, they’ll need to keep updating the beam directions as the UE moves through the network or changes orientation. Especially at FR2 frequencies, shadowing and blocking can be severe. At some point, the UE will need to switch to another base station, causing the cycle to repeat. Beam management is the key, at the UE and at the base station.
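As a toy sketch of the beam-search idea (all numbers hypothetical, not drawn from any 3GPP procedure): the two ends measure signal power across a grid of candidate beam pairs and keep the strongest pair.

```python
# Toy sketch of an exhaustive beam sweep: the base station and UE each
# try a set of beam directions and keep the pair with the strongest
# measured signal. All values below are made up for illustration.

def best_beam_pair(rsrp):
    """rsrp maps (gnb_beam, ue_beam) -> measured power in dBm; return best pair."""
    return max(rsrp, key=rsrp.get)

# Hypothetical measurement grid: 4 base-station beams x 2 UE beams
measurements = {
    (g, u): -100 + 5 * g - 3 * abs(u - 1)   # invented power model
    for g in range(4) for u in range(2)
}
print(best_beam_pair(measurements))
```

In a real network this search must be repeated continuously as the UE moves or rotates, which is why beam management is such a central part of the FR2 test problem.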


The test challenge is made more difficult by this beamforming operation. How do we ensure that the UE can steer the beam appropriately so that the 5G devices will work? Do we need to test in all 3D directions? Or can we just rely on a few key samples to ensure proper operation?


At our recent 5G Tech Connect Conference, my colleague Moray Rumney spoke about these spatial challenges: “For FR1 the question being asked for the last 100 years was ‘How good is my signal?’ But 5G NR at FR2 brings a new paradigm which is ‘Where is my signal?’ since if it is pointing in the wrong direction its quality is no longer relevant.”



Figure: FR1 cellular vs. FR2 cellular requirements


These design and test challenges are being worked on every day by Keysight engineers and other technical experts in the industry. Learn more about Keysight 5G technology and solutions here.

Data analytics is an emerging technology that is getting a lot of attention in the press these days. As with many exciting technologies, there is a mix of real opportunity surrounded by a lot of hype. While it is sometimes difficult to separate the two, I am among those who believe data analytics can make your business run better. As with any technology development, though, a positive outcome starts with a clear definition of the question you are trying to answer.


We can start with this one: Which tangible business results can come from data analytics? For most technology-intensive companies, one key driver is getting the right new product to market quickly and efficiently. The benefits are faster time-to-market, reduced risk and lower costs. In addition, topline results will improve when data analytics is used to optimize product plans and customer engagement.


Deloitte posted an article that suggests many companies are finding value in data analytics but, because these are the early days, there’s more insight yet to come. One early conclusion: the key benefit of analytics is “better decision-making based on data” (Figure 1).

Figure 1. Summary of findings from Deloitte’s Analytics Advantage Survey (pg 4, The Analytics Advantage: We’re just getting started)


Drowning in data, grasping for scalability

Companies that create electronic products are part of the overall trend toward data analytics. In a recent report, Frost & Sullivan sees growing demand for big data analytics applied in the test and measurement market. Keysight is part of this ecosystem, and our design and test solutions generate critical data from measurements of product performance.


We see many of our customers swimming in this data, and some are drowning in it. There are so many data sources that it is easy to accumulate a bunch of disjoint files that are poorly organized and difficult to manage.


This is typical, and it is why most large data analytics projects currently involve way more investment in data collection than in actual analysis. It is estimated that 80% of the effort goes into data collection and “data wrangling.” To me, “data wrangling” is the perfect phrase because it conjures up images of a cowboy tossing a rope around a spreadsheet in hopes of subduing it.


Many electronics firms have created homegrown solutions, ranging from simple collections of Excel files to complex software systems coded in C. Spreadsheet wrangling can work well for small, localized datasets—but it won’t scale up because data is isolated among individual users or test stations, perhaps spread across multiple sites. Revision control may be weak and it can be difficult to ensure that you have the most recent data. What’s worse, it usually turns into lost productivity as key technical staff spends time fiddling with all those spreadsheets. Over time, this maps to lost velocity towards finishing and shipping the product.


One alternative is reaching out to the IT department to create a more robust system. The resulting solution will be more scalable and supportable, but it also has internal costs. For one, changes fall to the IT team, robbing resources from other priorities. This is workable as long as all ongoing IT projects are fully supported and staffed.


Taking baby steps toward better data for better results

The actual analytics required can often be very basic. Sure, we’d like to turn on some machine-learning application that derives brilliant insight from our manufacturing test line and then feeds it back into the next design revision.


More likely, we are just trying to look at product performance in a consistent way so we can tell if the design is performing correctly. This is especially true in the prototype phase, when there are fewer devices to evaluate and the actual specification is still in flux. Later, in the manufacturing phase, we usually have plenty of data but it may still be siloed at the various production line stations or stored at another site, perhaps at a contract manufacturer.


Getting better at clarifying the target problem

As you apply the ideas discussed above, you will get better at defining the business problem you want to solve using in-hand design and test data. It may be improved time-to-market, lower R&D cost, better production yield, or something more specific to your operation. The next crucial step is creating or acquiring scalable tools that enable you to get your data under control.


My questions for you: Do you see this challenge in your business? What sources of data feel a bit disorganized or maybe completely out of control? Which tools have been most useful? We will be exploring these ideas in future blog posts, so stay tuned for additional insights.

The electronics industry is buzzing with the Internet of Things (IoT). One engineer described IoT as a vast conspiracy to put network connections on every object in the world—completely independent of whether it makes sense or not.


When we start connecting up all of these “things,” new system requirements creep in that may not be obvious. Your “thing” is no longer happily independent: it is part of a larger system. With connectivity comes new responsibility—in product design and on into the product’s useful life as a network citizen.


From a developer’s point of view, many IoT devices are “embedded designs.” This means they use microprocessors and software to implement functionality, often with an eye towards small compute footprint, just-enough processing power, lean software design and demanding cost targets. This is familiar territory for many design engineers, perhaps with the added requirement of providing a robust network connection. This is often going to be a wireless connection, be it Wi-Fi or some other common standard.


It’s unlikely that you are designing, defining or controlling that entire system. That raises a few key questions, starting with the issue of interoperability: How do you know that your device is going to be compatible with others on the network? Your device may also present a security risk to the network: How much protection can you build in? The assumedly wireless connection is both a source and receiver of RF emissions: How do you make sure it behaves properly in both roles?


Building on these topics, here are a few things to consider as you try to avoid the ills of this brave new world of everything connected.


Ensuring interoperability

Start by understanding the requirements of the larger system as they apply to your product. What other devices, compute resources, servers, etc., must you interoperate with? Because these systems are inherently complex and multivendor, all IoT devices need to “play well with others,” as described in Systems Computing Challenges in the Internet of Things. Creating a robust test strategy will help ensure that you have this covered. My prescription: Think like a systems engineer.


Managing potential security risks

Recent hacks have provided a much-needed wake-up call concerning IoT security. Low-cost devices may not have much data to protect, but they can be a giant security hole that tempts the bad guys with access to the rest of the network. Distributed denial-of-service (DDoS) attacks have been launched using simple devices such as netcams, and even lightbulbs have been hacked. (For some interesting thoughts on this topic, see IoT Problems Are about Psychology, Not Technology.)


Manufacturers that leave gaping security holes may face legal consequences: in early January, the Federal Trade Commission filed a complaint against D-Link for inadequate security measures in certain of its wireless routers and internet cameras.


You can start by plugging known holes in operating systems and other platforms. And, of course, don’t hardcode default passwords. Think through better ways to test for security problems: no, it isn’t easy—but it’s starting to feel mandatory. My prescription: Think like an IT engineer... or a hacker.


Preparing for interference

Your device probably uses one of many wireless connections. First, cover the basics of electromagnetic compatibility (EMC). Be sure the device passes relevant radiated-emission standards so it isn’t spewing RF that interferes with other devices. Also, ensure that your device isn’t susceptible to emissions from other sources.


It’s also important to consider the RF environment your device will live in—especially if unlicensed spectrum is being used, as in the case of Wi-Fi. The great thing about the unlicensed airwaves is that they’re free to everyone to use. The really bad thing about unlicensed spectrum is that everyone is using it and it’s a free-for-all. Thus, you’ll likely have to contend with other emitters in the same frequency span. See Chris Kelly’s post Preparing Your IoT Designs for the Interference of Things for some additional thoughts. My prescription: Think like an EMC engineer.


Wrapping up

My key point: IoT devices may seem familiar and comfortable to engineers who work with embedded systems—but this is a trap. Avoiding this trap starts with a shift in perspective, viewing IoT devices as citizens on a network that has critical requirements for systems, security and RF behavior. Without this shift, we will expose our products—and ultimately our companies—to customer dissatisfaction, customer returns and product liability.

The ability to accurately measure and quantify a digital design is essential to actually knowing what’s going on. A fellow named William Thomson, better known as Lord Kelvin, captured this concept in one of my favorite quotes:


When you can measure what you are speaking about, and express it in numbers, you know something about it.


This was simple back in the good old days. To measure a digital waveform, we would just connect an oscilloscope to the right node and take a look at the waveform. Oh, and we’d be sure the scope had enough bandwidth and the probe wasn’t loading the circuit or introducing distortion. We rarely, if ever, compared the results to a simulation. Mostly, we just checked to make sure the waveform looked “about right.”


Changing tactics in design and test

Today, the world’s insatiable demand for bandwidth continues to drive the need for ever-faster high-speed digital interfaces. As designers try to push more bits through the channel, they’re pushing the limits of what’s possible using the latest equalization and signaling techniques—decision feedback equalization (DFE), continuous-time linear equalization (CTLE), feed-forward equalization (FFE), PAM-4 (four-level logic), and more.


When characterizing the results, test equipment must often emulate those same techniques. For example, when physical transmitters and receivers are not yet available, an instrument has to mimic their respective behaviors at the input or output of the device under test (DUT). Even when the transmitter or receiver is available, it’s likely to be embedded on a chip. That makes it difficult to probe and measure—and, once again, the instrument must emulate either or both devices.


Addressing the problem: a real-world example

The process of creating accurate, realistic models is an iterative process. To ensure increasingly accurate models, the latest measured results must be fed back into the simulation system.


Although this process has many challenges, possible solutions are spelled out in a recent DesignCon paper on measuring PAM-4 signals at 56 Gb/s: PAM-4 Simulation to Measurement Validation with Commercially Available Software and Hardware. The DUT was a 3 m Quad Small Form-factor Pluggable Plus (QSFP+) cable, driven by an arbitrary waveform generator (AWG) and measured using a high-bandwidth sampling oscilloscope (Figure 1).


Figure 1. Measurement of the DUT resides within a larger process that also includes simulation.


The channel configuration was first simulated in software using IBIS-AMI models for the transmitter and receiver. In this case, the transmitter was not available and the designer utilized an AWG to replicate in hardware the same transmitter waveform the simulator used. The simulator-provided transmitter waveform also included the FFE correction needed to open the eye at the receiver for CDR and data acquisition. [Aside: During early-stage development, you can use an AWG to emulate the absent transmitter using an industry-standard model.]
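As an illustration of what FFE correction means, here is a minimal FIR-style equalizer applied to a PAM-4 symbol stream. The tap values are invented for the example, not taken from the paper; a real design derives them from the measured channel response.

```python
# Illustrative-only sketch: a feed-forward equalizer (FFE) is an FIR
# filter applied to the symbol stream, pre-distorting the transmitted
# levels so the channel's inter-symbol interference partially cancels.

def apply_ffe(symbols, taps):
    """Convolve the symbol stream with FFE taps (simple direct-form FIR)."""
    out = []
    for n in range(len(symbols)):
        acc = 0.0
        for k, tap in enumerate(taps):
            if 0 <= n - k < len(symbols):
                acc += tap * symbols[n - k]
        out.append(acc)
    return out

pam4 = [-3, -1, 1, 3, 3, 1]        # the four PAM-4 levels
taps = [-0.1, 1.0, -0.2]           # hypothetical pre/main/post-cursor taps
print(apply_ffe(pam4, taps))
```

In the measurement setup described above, an equivalent correction was baked into the AWG waveform so that the eye is open at the receiver.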


Similarly, to accurately measure the received signal, the oscilloscope executed a model of the not-yet-available receiver that included clock data recovery (CDR), CTLE and DFE. As above, the team used the same receiver model for design simulation.


Creating a new ecosystem—in your lab

Although the IBIS-AMI models have been developed and standardized by the electronic design automation (EDA) industry, they have also made their way into the measurement world. As described in the PAM-4 paper, connecting the physical and digital worlds creates a measurement/simulation ecosystem. As this ecosystem comes into alignment, simulated and measured results become increasingly well-correlated (Figure 2).


Figure 2. A tighter connection between simulation and measurement ensures closer correlation of results.


Mastering both realms, together, results in fewer design cycles and better predictability of design quality. In the PAM-4 example, appropriate application of the models ensures the ability to get a useful picture of the waveform at the output of the DUT, and from that gain better insight into how the receiver will decode it.


The age-old alternative to this beneficial ecosystem is the time-consuming “cut and try” approach that may never yield a reliable design. Worse than that, engineers are left to iterate their designs based on limited knowledge of system performance.


Going beyond “measure then know”

In reality, most teams include some engineers who are highly proficient with simulation tools and others who are deep into measurement tools. For the ecosystem to work, engineers must be able to apply tools from both worlds in a coherent manner. As teams learn, they feed new information back into the models and make them more accurate. Portions of those same, improved models can then be used to perform useful measurements.


This measurement/simulation ecosystem becomes “must have” if you are doing leading-edge digital design. Within this symbiotic ecosystem, Kelvin’s idea of “measure then know” expands to become “model, measure, and know.” And that’s when breakthroughs become more predictable.  

Earlier this month, the Federal Communications Commission (FCC) decided to allocate nearly 11 GHz of spectrum for 5G mobile broadband use. If you need some good bedtime reading, try the 278-page document; for a concise summary, see “FCC OKs sweeping Spectrum Frontiers rules to open up nearly 11 GHz of spectrum.”


The FCC made this bold move to get out in front of the coming 5G technology wave, and its decision will help the rest of us focus our energies on the crucial innovations that will enable 5G.


The commissioners wisely chose to include 3.85 GHz of licensed spectrum and 7 GHz of unlicensed spectrum, supporting both types of business innovation. The newly allocated spectrum sits at 28 GHz, 37 GHz, 39 GHz and 64-71 GHz, and the FCC will seek additional comment on the bands above 95 GHz. The new unlicensed band (64 to 71 GHz) is adjacent to the existing 57 to 64 GHz ISM band, creating a 14 GHz band of contiguous unlicensed spectrum (57 to 71 GHz).


I am struck by the huge amount of high-frequency spectrum that has been allocated for future wideband mobile use. For an interesting comparison, look back at the spectrum that launched the first analog cellular systems in the US: the Advanced Mobile Phone System (AMPS) used 824-849 MHz and 869-894 MHz for a total spectrum 50 MHz wide. The FCC’s 5G spectrum decision allocates more than 200 times that amount, underlining the kind of bandwidth required to meet the aggressive goals of 5G.
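A quick back-of-the-envelope check of that comparison, using the licensed-plus-unlicensed totals quoted above:

```python
# Newly allocated 5G spectrum vs. the original AMPS allocation.
amps_mhz = (849 - 824) + (894 - 869)   # two 25 MHz blocks = 50 MHz total
new_5g_mhz = 3850 + 7000               # 3.85 GHz licensed + 7 GHz unlicensed

print(new_5g_mhz / amps_mhz)           # ratio of new 5G spectrum to AMPS
```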


FCC Chairman Tom Wheeler was very clear about how the FCC is approaching the 5G opportunity. In a recent speech, he said, “With today’s Order, we are repeating the proven formula that made the United States the world leader in 4G: one, make spectrum available quickly and in sufficient amounts; two, encourage and protect innovation-driving competition; and three, stay out of the way of market-driven, private sector technological development.”


To open up wide chunks of spectrum, the FCC had to reach for higher frequencies, which bring with them plenty of technical challenges. Millimeter-wave (mmWave) frequencies have higher path loss and undergo different effects from scattering, diffraction and material penetration. Also, mmWave components and subsystems are harder to design due to significant tradeoffs between energy efficiency and maximum power level. Compounding the difficulty, frequency bands below 6 GHz will also be critical for 5G deployment, working in concert with the mmWave bands. From this perspective, I see three areas that will require significant innovation on the path to 5G:


New channel models: Today, millimeter frequencies are often used for fixed terrestrial communication links and satellite communications. These tend to be stationary point-to-point links that don’t have to deal with radio mobility. At Keysight, we have been working with communications researchers at higher frequencies to develop channel models that are appropriate for mobile broadband use at mmWave. The higher-frequency, wideband nature of the channel and the dynamics of the mobile environment require more robust modeling approaches than those used for lower frequencies.



Beamforming needs to work: The remedy for higher signal loss is to increase the antenna gain and make it steerable, a technique commonly known as beamforming. This method focuses radio signals from an array of multiple antenna elements into narrow beams that can be pointed for maximum overall system performance. Wireless LAN at 60 GHz (802.11ad) offers 7 Gbps connectivity for short-distance or “in room” applications—and 802.11ad does implement beamforming to optimize signal strength. While some of this work will leverage into 5G, 802.11ad is neither mobile nor multiple access (handling multiple diverse users simultaneously). There’s more work to be done here.
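As a rough illustration of why arrays help: an idealized N-element phased array adds up to 10·log10(N) dB of gain over a single element, assuming lossless, perfectly coherent combining (real arrays fall short of this ideal).

```python
import math

# Idealized phased-array gain over a single element: 10*log10(N) dB,
# assuming lossless elements and perfectly coherent combining.
def ideal_array_gain_db(n_elements):
    return 10 * math.log10(n_elements)

print(f"{ideal_array_gain_db(64):.1f} dB for a 64-element array")
```

Roughly 18 dB from 64 elements goes a long way toward offsetting the extra mmWave path loss, which is why compact arrays are central to the 5G story.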


New air interface: Not to be overlooked is the need for a new air interface to take advantage of wide spectrum (when available). This interface must be scalable by design so that it can deliver unprecedented high bandwidth while still performing well for lower-bandwidth applications. The aggressive goals for 5G also include improved spectral efficiency, low battery drain for mobile devices and low latency for IoT devices.


We’ve been here before: you may recall the difficult list of challenges associated with LTE (4G) technology. Just like 5G, LTE was an aggressive technology development pursued by the wireless industry. Somehow we got it done. Challenges like this drive innovation in electronic communications.