# Tim's Blackboard #1.5: The Blink of an Eye

Posted by Tim Wang Lee May 30, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

There is no new post this week on Tim’s Blackboard, but while you are waiting for next Wednesday to come, give this SI journal article a read. I collaborated with two of the signal integrity industry’s leading experts, Al Neves and Mike Resso, performed simulations in ADS, and wrote about S-parameter analysis and the blink of an eye.

See you next week!

TWL: 5/30/17

# Tim’s Blackboard #1: The Melting Trace Paradox

Posted by Tim Wang Lee May 24, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is the “melting trace paradox.”

# Introduction

Unlike other famous paradoxes, such as Zeno’s paradox of Achilles and the tortoise, the melting trace paradox involves only a segment of copper trace and a current source; see Fig. 1.

Fig. 1: The circuit setup for the melting trace paradox.

If you write down the equations for the power dissipated by the trace and the energy required to change the trace temperature, you will find the temperature change of the trace expressed as follows:

$$\Delta T = \frac{I^2 R\,t}{mc}$$

where $\Delta T$ is the temperature change, $c$ and $m$ are the specific heat and mass of the copper, $I$ is the source current, $R$ is the trace resistance, and $t$ is the time elapsed.

The equation states that the temperature increases with time. That is, the longer I leave the current source on, the hotter the trace gets. As the temperature reaches the melting temperature of copper, the trace melts.

There is clearly a paradox here. In the lab, we know that the temperature of the trace does not increase without bound, and the trace does not come with a warning label that says “don’t leave the current on for too long, or the trace will melt.” At some point, the temperature of the trace reaches a steady-state value.

But why is our prediction not consistent with what we observe?

Is our math wrong? Or is the world we know broken?

We will now look at the melting trace paradox and find an explanation and solution for it.

# Power and Energy of the Trace

To reconstruct the melting trace paradox, let’s first look at the power dissipated by the trace. Assume the trace has resistance $R$ and the current source is delivering $I$ amps. We can write:

$$P = I^2 R$$

Since power has the unit of joules per second, we know that the longer we leave the current source on, the more energy, $E$, is consumed:

$$E = P\,t = I^2 R\,t$$

Next, we will look at how much energy it takes to increase the temperature of the trace.

# Increasing the Trace Temperature

Let’s calculate the energy required to increase the temperature of the trace. Recall the definition of specific heat: the amount of heat per unit mass required to raise the temperature by one degree Celsius. Letting $c$ be the specific heat of the trace and $m$ the mass of the trace, we write:

$$E = mc\,\Delta T$$

where $E$ is the energy required to increase the trace temperature by $\Delta T$ degrees Celsius.

Because we have an expression for the energy delivered to the trace, we can rewrite the temperature change in terms of that energy.

Here is the energy delivered to the trace:

$$E = I^2 R\,t$$

Here is the temperature change of the trace for a given energy:

$$\Delta T = \frac{E}{mc}$$

Replacing $E$ in the second equation with $I^2 R\,t$ from the first, we get:

$$\Delta T = \frac{I^2 R\,t}{mc}$$

Putting in the numbers, given a 1 A current source and a 1-inch-long, 20-mil-wide copper trace, leaving the current source on for 1 minute results in a temperature change of about 1100 °C, which exceeds the melting point of copper, 1085 °C.
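As a sanity check, the arithmetic above can be sketched in a few lines of Python. The trace thickness (1 oz copper) and the copper material constants are assumptions not stated in the post, so the result only needs to land in the same ballpark as the 1100 °C figure:

```python
# Back-of-envelope check of the melting trace numbers.
# Trace thickness (1 oz copper) and material constants are assumptions;
# the post only gives the current, length, width, and time.
RHO_CU = 1.68e-8   # resistivity of copper, ohm*m
DENS_CU = 8960.0   # density of copper, kg/m^3
C_CU = 385.0       # specific heat of copper, J/(kg*K)

length = 0.0254          # 1 inch, m
width = 20 * 25.4e-6     # 20 mil, m
thickness = 35e-6        # 1 oz copper (assumed), m

I = 1.0    # source current, A
t = 60.0   # time the source is left on, s

area = width * thickness            # cross-sectional area, m^2
R = RHO_CU * length / area          # trace resistance, ohm
mass = DENS_CU * length * area      # trace mass, kg

delta_T = I**2 * R * t / (mass * C_CU)  # temperature rise, deg C
print(f"R = {R * 1e3:.1f} mOhm, delta_T = {delta_T:.0f} C")
```

With these assumptions the rise comes out to roughly 900 °C; the exact value depends on the assumed thickness, but it is the same order of magnitude as the 1100 °C above, and well past copper's melting point once the source stays on a little longer.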

We end up with a melted trace and a frowny face.

# What Is Going On?

The assumption we made in calculating the energy generated is that ALL the electric energy goes into heating the trace. That assumption is wrong for a lab environment.

In a lab environment, air provides heat transfer through convection. If we were in space (a vacuum), however, there would be no air for convective heat transfer; the heat would be trapped in the trace, the trace temperature would increase with time, and finally the copper trace would melt.

Fig. 2: Air should be included in the setup to correctly represent the lab environment.

As shown in Fig. 2, in most labs, air is present to provide heat transfer through convection, allowing the trace to reach a steady state temperature.

It would be hard to calculate the steady state temperature with pencil and paper. I’ll show you how to find the temperature by performing an electro-thermal simulation with ADS PIPro.
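For some intuition about why a steady state exists at all, a crude lumped model can be sketched with Newton's law of cooling. The convection coefficient `h` below is a guessed value for natural convection, and conduction into the board (which the simulator does capture) is ignored, so this is an order-of-magnitude sketch, not a substitute for the electro-thermal simulation:

```python
# Lumped thermal balance: m*c*dT/dt = P - h*A_s*(T - T_amb).
# Steady state is reached when the generated heat equals the convected heat:
#     T_ss = T_amb + P / (h * A_s)
P = 0.024      # dissipated power I^2*R, W (assumes R ~ 24 mOhm at 1 A)
h = 10.0       # natural-convection coefficient, W/(m^2*K) -- assumed
A_s = 2 * 0.0254 * 5.08e-4   # top + bottom surface of a 1 in x 20 mil trace, m^2
T_amb = 25.0   # ambient temperature, deg C

T_ss = T_amb + P / (h * A_s)
print(f"steady-state trace temperature ~ {T_ss:.0f} C")
```

This crude balance lands near 120 °C; accounting for conduction into the dielectric and a better convection model pulls the answer down toward the simulated value, which is exactly the kind of detail the simulator handles.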

# Electro-Thermal Simulation of the Trace

Entering the same parameters into the ADS PIPro electro-thermal simulator, we can set up two experiments: one in a vacuum and another in which air, and therefore convection, is present.

We would expect the trace in the vacuum experiment to reach an extremely high temperature, and the one in air to reach an equilibrium temperature.

Moreover, with air surrounding the trace, we expect the trace's surroundings to heat up and sit at a higher temperature. On the other hand, if the trace is in a vacuum, the surroundings of the trace should stay at the ambient temperature.

Fig. 3: ADS PIPro electro-thermal simulation of a trace in a vacuum. Since there is no air to allow convective heat transfer, the trace reaches a very high temperature, ~35,000 °C.

Shown in Fig. 3, as expected, the trace in a vacuum reaches a high temperature, with no heat spreading to the trace's surroundings. (But if there is no convective heat transfer in a vacuum, how does the warmth of the sun get to the earth!?)

Fig. 4: ADS PIPro electro-thermal simulation of a trace in a lab environment. Since air is present to provide convective heat transfer, the simulated temperature reaches a reasonable value, 56.8 °C.

Shown in Fig. 4 is the analysis done with air around the trace. As expected, the air surrounding the trace provides a means of heat transfer and allows some energy to escape instead of all being trapped in the trace. Reading from the result plot, we find the final temperature of the trace to be about 56.8 °C.

# The World is Not Broken and Math is Good

After the analysis, we are now sure the world is not broken and our calculations are correct. It is our assumption that needs to be improved. We took the air around us for granted and forgot to include it in our initial analysis.

By performing these consistency tests, we found the solution to the melting trace paradox and now have a better understanding of the thermal behavior of the current source and the trace.

That's this week's Tim's Blackboard. See you in two weeks!

Posted by Vandana Duff May 15, 2017

You successfully created a low pass filter in ADS by following the example in my last blog post. Let’s build on what you learned from the low pass filter exercise: how to add flexibility to your design, including using PDKs, and how to compare an ideal schematic to a schematic built with an external vendor’s components.

What’s a PDK?

You now want to design a low noise amplifier and need to select the process in which the device will be fabricated, whether it’s Si, SiGe, GaAs, or another high frequency manufacturing process. A Process Design Kit (PDK) contains active and passive device components with symbols, parameterized layouts, simulation models, and much more for IC design. A PDK provides both the opportunity to shorten the product design cycle for high frequency chip design and the capability to simulate your chip exactly as you expect it before the chip is manufactured. Some features of a PDK may include:

• Schematic examples
• Parametric layout cells
• Design rule checks
• Simulation results
• Layout options

Vendor-supplied models provide more realistic results that may include parasitics. Once your LNA has been created with the PDK of your choice, you can compare your data to an ideal design by adding both the PDK design and the ideal design when setting up your plots. The data can be viewed in several plot types, including rectangular and Smith chart plots. When setting up your plot, you can plot several traces, which can include any S-parameter measurement, in dB or on a log scale.

The lab attached at the bottom shows you how.

What are Cell Views?

Another capability is the cell view, which is a way of capturing a design. A cell can have multiple views, and a schematic view can define the design in that cell. You can think of it as a design hierarchy, similar to how pointers reference data in the C programming language. This capability simplifies what can be viewed in your workspace.

Check out the attached lab with step-by-step instructions that walk you through the topics we covered:

• What is a PDK?
• Data Comparison
• What is Cell View?

For other getting started topics, check out our video playlist.

# Have you run into cases where you could not find agreement between your pre-fabrication measurements and post-fabrication lab measurements?

Posted by KeysightEEsofEDA May 11, 2017

In the past, the pre-manufacture design could be simulated and tested using compliance tools from an EDA vendor, and the post-manufacture prototype could be bench tested using compliance tools from test and measurement instrument vendors. However, because of subtle differences between the two vendors’ independent approaches to compliance, it was almost impossible to correlate the two.

This opened the possibility of the pre-manufacture design passing and the post-manufacture prototype failing, necessitating a time-consuming and expensive design spin. In contrast, Compliance Test Benches leverage the exact same industry-leading Compliance App used on Keysight Infiniium oscilloscopes. Compliance Test Bench mimics a real hardware test bench, and, using a scripting technology called “Waveform Bridge,” emits the same waveforms that the Infiniium app receives when you are testing in your lab. The new approach allows engineers to apply the same set of compliance tests in all three cases: real-time/on-scope, offline/on-scope, and offline/remote.

In summary, Compliance Test Benches can:

Break the wall between Design Simulation and Lab Measurement

• Exact same compliance software used for simulation and measurement

Provide probing points where they are not accessible inside the IC

• Equalization takes place inside the chip for SERDES devices
• It must be simulated to show if the data can be recovered by the receiver

Provide compliance validation before committing to hardware fabrication

Compliance test benches in Keysight ADS are virtual workspaces that can be customized for specific applications. Most of the connections in the design specification are included in these virtual workspaces; you can simply select them and run your simulation. Below is an example of the process of running a simulation to generate waveforms:

Through this test, you can look at the host-side eye diagram as well as the receiver-side eye diagram and save the waveform results. Then you can run the exact same compliance application on the virtual scope on your computer. When the simulation results pass the specification, you can be confident that your pre-fabrication and post-fabrication results will agree. If you do find any inconsistency between the two results, you can target the specific problem area.

# SRAM Cell Model Generation and Modeling Efficiency Take Center Stage in New Software Releases

Posted by kaelly_farnham May 4, 2017

Accurate and efficient modeling is critical to successful design, especially when it comes to the Static Random Access Memory (SRAM) cell, which uses the minimum-geometry devices in integrated circuit technology. Modeling such circuits has grown increasingly complex with the advent of nanometer-scale process geometries, because increasing process variation makes model stability more challenging.

The latest release of Keysight Technologies’ Model Builder Program (MBP) 2017 now features a SRAM cell model generation package that’s designed to address this challenge head on, by enabling engineers to extract transistor-level and memory-cell models in one MBP session. The user can easily simulate cell-level figures-of-merit, tune model parameters and even compare two memory cell models (Figure 1).

Figure 1. With MBP 2017, users can easily compare SRAM cell models.

According to Roberto Tinti, Keysight’s Device Modeling Planning Manager, the extraction package came about as a result of a collaboration with a major customer. “Working together we developed a solution that not only reduces modeling iteration but cuts the design cycle as well. It promises to bring many benefits to both existing and future MBP customers.”

Additional enhancements in MBP 2017 include:

• An enhanced statistical model extraction flow and updated application examples
• Enhanced extraction flows for BSIM3v3, BSIM4, and BSIM-CMG
• Updates to the following models: BSIM-CMG 110.0, BSIM-CMG 109.0, BSIM-IMG 102.8, BSIM-IMG 102.7, HiSIM2 2.9.0, HiSIM_HV 2.3.2, HiSIM_HV 2.3.1, HiSIM_HV 2.3.0, EKV 302.00

Figure 2. Available in MBP 2017 is an updated scripts-based model extraction flow.

Keysight has also released a new version of its Model Quality Assurance (MQA) 2017 software with enhancements designed to improve modeling efficiency and model quality. MQA 2017 contains a new internal SPICE3 engine that allows users to run quick simulations and quality assurance (QA) checks, and it supports the latest compact model versions. Python scripts are also now supported, enabling generation of user-defined Excel tables based on existing QA results.

"The advanced effects and parasitics in new devices make device modeling more complicated than ever,” said MA Long, Device Modeling Product Manager with Keysight. “With the new internal engine, users can run model quality checks during parameter extraction and uncover potential risks in the early design stage. Support for Python scripts provides the user even more flexibility and functionality in generating tables over the existing TCL and Perl solutions offered."

Figure 3. N/P compare table generation with Python Script as provided by MQA 2017.

Other enhancements in MQA 2017 include support for Spectre native aging simulation, SmartSpice version 4.26.7.R and Microsoft Office 2016. Unlike traditional manual scripting methods, MQA enables users to check their SPICE models, compare models and generate QA reports in a complete and efficient way.

MBP and MQA are Keysight’s industry-leading device modeling and characterization products. MQA is the industry standard for SPICE model acceptance and sign-off, and is widely adopted by leading integrated device manufacturers (IDMs), foundries, and design houses. Information on MBP 2017 and MQA 2017 is available from Keysight. To apply for a free software trial, go to www.keysight.com/find/mytrial.mbp.blg and www.keysight.com/find/mytrial.mqa.blg.

# Controlling Measurement Test Equipment from Keysight ADS

Posted by kaelly_farnham May 1, 2017

There’s a saying around the industry that goes something like “everyone trusts a measurement, except the person who made it, and no one trusts a simulation, except for the person who did the simulation.” Either way, both have many nuances that can impact the end result, and comparing simulation to measurement can be a very difficult task.

For example, consider the case of a PAM-4 signal through a channel. By transmitting 2 bits per symbol, PAM-4 (pulse-amplitude modulation, 4 amplitude levels) offers high data rate transmission (e.g., 56 Gb/s), but there are challenges to implementation: with 4 levels, the traditional eye diagram splits into 3 eyes, meaning less noise and distortion can be tolerated in the channel.

Figure 1. PAM-4 waveform.
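The 2-bits-per-symbol idea can be illustrated with a short sketch. A Gray-coded level mapping is assumed here, as is common in PAM-4 links; the post itself does not specify a mapping:

```python
# Sketch of how PAM-4 packs 2 bits per symbol onto 4 amplitude levels.
# Gray coding (adjacent levels differ by one bit) is an assumption here.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_symbols(bits):
    """Map a bit sequence (even length) to PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes 2 bits per symbol"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Eight bits become four symbols.
print(pam4_symbols([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```

For the same bit rate, PAM-4 runs at half the symbol rate of NRZ, at the cost of roughly one-third the vertical eye opening per eye.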

Because the signal in PAM-4 is more sensitive, an accurate simulation of the channel is essential.  A channel simulation takes the modulated signal and sends it through a physical link (like a cable) to see how the clean signal gets distorted by the channel.   Given the straightforward setup, it may seem simple to correlate simulation to measurement – after all, we can potentially use the same waveform in both simulation and measurement (using an AWG), or measure the S-parameters of the channel directly and use that data in the simulation.  The results should easily correlate, right?  Unfortunately, there are still nuances that can lead to different results.

Figure 2. Channel measurement setup.

In an eye diagram measurement on an oscilloscope, the signal will likely be preprocessed prior to measurement.  For example, the waveform will be filtered or noise will be injected into the raw waveform.  Modern instruments contain all kinds of tools and functions for data analysis – and these are not always easy to recreate elsewhere (sometimes they’re even proprietary!).

There are also functions in simulation tools that generate eye diagrams, but what processing is being done in the background? It’s not always clear. We could have the exact same signal on the scope and in the simulation and, due to different data processing techniques, get different eye diagrams! Yikes!

Figure 3. Measured eye diagram on Keysight DCA-X 86100D oscilloscope compared to a simulated eye diagram in Keysight ADS.

To eliminate uncertainty, it’s best to process both waveforms in the same place with the same functions. But getting measurement data into the simulation environment can be tricky, and getting simulated data onto an oscilloscope can be even trickier.

Wouldn’t it be great if you could just talk to the instrument directly from the simulation tool?  That way, you could configure the instrument, load a waveform, make some measurements, and maybe even return data back to simulation – all without ever leaving your desk.

Well, now you can!  Using a link between Keysight ADS and Python, it’s possible to transfer data between the simulation and measurement environments, and even control the instrument directly using SCPI commands.  And the great thing is, you don’t even need to know Python to do basic instrument IO.

ADS functions are now available which invoke Python behind the scenes, allowing you to send SCPI commands from ADS to configure a measurement or capture a waveform trace.  All you need to do is load the ADS functions, set up your libraries (www.keysight.com/find/iolibraries) and you’ll be controlling instruments in no time.  The figure below shows how it’s possible to capture a measured PAM-4 waveform directly into ADS with a few simple SCPI commands.

Figure 4. Capturing measured PAM-4 waveform into ADS using SCPI commands.
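The post does not name the specific ADS functions, so here is a hedged sketch of the same idea in plain Python using PyVISA, the kind of instrument-IO layer such links typically wrap. The VISA address is a placeholder; only `*IDN?` is an IEEE 488.2 common command, and the `:WAVeform` commands are illustrative and should be checked against your instrument's programming guide:

```python
# Hedged sketch: capturing a waveform trace over SCPI with PyVISA.
def capture_commands(channel=1):
    """SCPI sequence for a simple trace capture (illustrative commands)."""
    return [
        "*IDN?",                               # IEEE 488.2: identify the instrument
        f":WAVeform:SOURce CHANnel{channel}",  # select the source trace (check your guide)
        ":WAVeform:DATA?",                     # query the waveform data (check your guide)
    ]

if __name__ == "__main__":
    try:
        import pyvisa                          # pip install pyvisa
        rm = pyvisa.ResourceManager()
        scope = rm.open_resource("TCPIP0::192.0.2.1::INSTR")  # placeholder address
        for cmd in capture_commands():
            print(cmd, "->", scope.query(cmd))
    except Exception as exc:                   # no instrument attached
        print("instrument not reachable:", exc)
```

Separating the command list from the transport makes it easy to reuse the same sequence whether the commands are sent from a Python script, from the ADS function wrappers, or typed into an interactive IO tool.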

In the case of the PAM-4 signal mentioned at the beginning, it might be better to process the simulated waveform on the measurement test equipment because the Keysight DCA-X 86100D oscilloscope has lots of built in functionality.  In this case, we can use a Python script to do the heavy lifting: loading the simulation data into the DCA-X 86100D, configuring the scope, processing the eye statistics, and returning the levels back to ADS.  At the same time, you could also measure the physical channel for direct comparison with simulation, using the exact same processing algorithms.  After the Python script has been developed, you can do this all in one step using the ADS Data Link to call the script, transfer the data and receive results back.

Figure 5. Calling a Python script from ADS to load simulated channel output waveform and compare it directly with the measured waveform.  ADS calls the script and the corresponding PAM-4 levels for both simulation and measurement are returned to ADS.


Registered users can view the following application notes in the Keysight EEsof EDA Knowledge Center (register here):