
Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


This week on Tim’s Blackboard is “Your Channel, PRBS and the Eye.”



Previously on Tim’s Blackboard, we showed the convolution process and the helpful single pulse response. This week, we will build on the single pulse response (SPR) to explore the Pseudo-Random Bit Sequence (PRBS) and the eye diagram, see Fig. 1.


Fig. 1: Left: A Pseudo-Random Bit Sequence. Right: An Eye Diagram.


What the Single Pulse Is Not Telling Us

Although the single pulse response gives us information on how a single pulse reacts to the channel under test, it does not tell us how previous pulses affect the shape of the current pulse.


In an ideal world, where the channel does not distort the signal with its frequency-dependent loss, the shape of each pulse is not dependent on other pulses. However, since we live in the real world, we often observe Inter-Symbol Interference (ISI) caused by rise time degradation, a consequence of frequency-dependent loss.  


Shown in Fig. 2 is the ADS simulation result of two different patterns followed by a single pulse pattern: 01000. After going through a channel with considerable frequency-dependent loss, the single pulse waveform that comes after a string of one’s is not the same as the single pulse waveform that comes after a string of zero’s. Because the previous symbols, the string of one’s, interfere with the single pulse pattern, the voltage representing zero increases from 0 V to almost 0.3 V. If we are not careful, this increase can cause false triggering in the receiver.
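To make the ISI mechanism concrete, here is a small sketch that reproduces the effect seen in Fig. 2 with a toy channel. The decaying-exponential impulse response (and its time constant) is an assumption standing in for the real lossy channel, not the channel simulated in ADS.

```python
import math

def channel_output(bits, samples_per_bit=8, tau=12.0):
    """Convolve a bit pattern with a normalized decaying-exponential
    impulse response (a crude stand-in for frequency-dependent loss)."""
    x = [float(b) for b in bits for _ in range(samples_per_bit)]
    h = [math.exp(-n / tau) for n in range(4 * samples_per_bit)]
    scale = sum(h)                      # normalize so the DC gain is 1
    h = [v / scale for v in h]
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):          # direct-form discrete convolution
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Single pulse pattern 01000 preceded by a run of ones vs. a run of zeros
after_ones = channel_output([1, 1, 1, 1, 0, 1, 0, 0, 0])
after_zeros = channel_output([0, 0, 0, 0, 0, 1, 0, 0, 0])

# Inside the "zero" bit right before the pulse, the tail of the preceding
# ones keeps the voltage well above 0 V; after a run of zeros it stays at 0
idx = 4 * 8 + 4
print(after_ones[idx], after_zeros[idx])
```

In this toy model the exact residual level depends entirely on the assumed time constant; the 0.3 V figure quoted above comes from the actual ADS channel, not from this sketch.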


Fig. 2: Shown in ADS, the shape of the single pulse depends on the pattern before the pulse.


Moreover, although the single pulse response is helpful, it is rare to transmit or receive only a single pulse in practical high-speed digital applications. Normally, the data pattern consists of different combinations of one’s and zero’s that we do not know a priori.


To mimic different data patterns and to characterize the level of ISI introduced by the channel, the Pseudo-Random Bit Sequence was born.


PRBS Pattern and the Channel

Shown in Fig. 3 is an example of PRBS. As the name suggests, the Pseudo-Random Bit Sequence is a deterministic sequence of one’s and zero’s whose statistics mimic those of independent random bits. The randomness provided by PRBS gives us an idea of how the channel affects transmitted digital data.
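In practice, these sequences come from a linear-feedback shift register (LFSR). A minimal sketch of the common PRBS7 generator (polynomial x^7 + x^6 + 1) looks like this; the seed value is arbitrary as long as it is nonzero.

```python
def prbs7(seed=0x7F):
    """Yield one full period (127 bits) of PRBS7, generated by a
    7-bit LFSR with feedback taps at stages 7 and 6."""
    state = seed & 0x7F
    for _ in range(127):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | new_bit) & 0x7F
        yield new_bit

bits = list(prbs7())
# A maximal-length sequence is nearly balanced: 64 ones and 63 zeros
print(len(bits), sum(bits))
```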


Fig. 3: Example of a PRBS pattern at the transmitter side before going through the channel.


Much like the single pulse response, the response of the channel to PRBS is the convolution of the PRBS pattern and the impulse response of the channel.


From the single pulse response, we learned that after going through the channel, the sharp zero-to-one transition of a single pulse becomes a slower rising curve at the beginning. Also, the single pulse gains a longer tail (all thanks to frequency-dependent loss, which we will discuss in the future). In the same way, we should expect the received PRBS pattern to not have a sharp transition between the zero’s and one’s.


Fig. 4: After going through the channel, the sharp transition edges of the original PRBS pattern become slower rising and falling curves.


Fig. 4 shows the PRBS pattern after going through the channel. As expected, the sharp transitions between zero’s and one’s are smoothed out by the channel impulse response.


Eye Diagram: The Comprehensive Version of PRBS

Although PRBS gives us an idea of how the channel affects a digital data pattern, the information is scattered across a large time scale. It is hard to come up with a figure of merit for the quality of the channel by looking at data that goes on and on in time.


To create a better representation of the channel, we can manipulate the received PRBS waveform using our knowledge of the data coming in.


For example, if we are sending data at 10 Gbps, we know the unit interval (UI) of each bit is 0.1 nsec. Using our knowledge of the UI, we can “slice” the long received PRBS waveform and examine it one UI at a time. Now, because we are also interested in the transition from one bit to another, we increase our observation window to 2 UIs: the current bit plus half a UI before and half a UI after it.
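The slicing step can be sketched in a few lines. The placeholder waveform and sample rate below are assumptions just to make the sketch self-contained; in practice the samples would come from the simulated or measured received waveform.

```python
def eye_slices(waveform, samples_per_ui):
    """Return 2-UI-wide slices, one per bit, each extending half a UI
    before and half a UI after the bit; an eye diagram overlays them."""
    half = samples_per_ui // 2
    window = 2 * samples_per_ui
    slices = []
    start = half                        # first window: bit 1 +/- half a UI
    while start + window <= len(waveform):
        slices.append(waveform[start:start + window])
        start += samples_per_ui         # advance one UI per slice
    return slices

wave = [i % 10 for i in range(100)]     # placeholder "received" samples
s = eye_slices(wave, samples_per_ui=10)
print(len(s), len(s[0]))                # each slice spans 2 UIs
```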


Fig. 5: Three example time slices of a received PRBS waveform. The data rate is 10 Gbps, corresponding to 0.1 nsec UI. To observe the bit transitions, the observation window is extended to 0.2 nsec. 


Shown in Fig. 5 are example “slices” of the received PRBS waveform. The eye diagram, which is the comprehensive version of the received PRBS waveform, is constructed by overlaying these observed partial waveforms on top of each other, as demonstrated in Fig. 6.


Fig. 6: Illustration of overlaying the three slices of PRBS waveform shown in Fig. 5 to create an eye diagram.


By combining many of these time slices, an eye diagram is formed. The resulting eye diagram for the example channel is shown in Fig. 7. According to the eye diagram, one would say the example channel is a good one because of the clear eye opening. With a clear eye opening, the receiver is able to distinguish the two different voltage levels at 100 psec.   


Fig. 7: The eye diagram result of the channel shows an open eye. The receiver can easily tell the high voltage level from low voltage level if a decision is made at 100 psec.


To further quantify the quality of the channel, one can now quote the vertical amplitude measurements of an eye (eye height, eye levels, etc.) and/or the horizontal timing measurements of an eye, such as jitter or eye width.


How to Deal with a Closed Eye?

Although the example gives us a fairly clean and open eye, you are unlikely to always find such a pristine channel in practice. You are more likely to find a channel with an eye that looks like the one shown in Fig. 8.


Fig. 8: A channel that has an almost closed eye.


To deal with a closed eye, we will need to first find the root cause for the eye closure. We will do that in the next post!

That's this week's Tim's Blackboard. See you in two weeks!


To download a free trial of ADS to generate a PRBS pattern and an eye diagram:

Working with data tables is a basic skill every engineer learns early in their career. When organized properly, tables can summarize a great deal of device data in a compact, useful format. A 1-page data table can pack as much punch as a thirty-page report full of curves. The table significantly minimizes the time it takes engineers to find the information they're after, while also minimizing paper waste and saving trees!


But not every engineer wants their information displayed in the same tabular format. How the tables are created, organized, and filled may differ from one engineer or application to another, and that’s why the ability to customize tables is so critical. Doing so allows engineers to derive the specific information that is critical to optimizing their design. Unfortunately, generating tables and customizing them to meet specific needs is not always a straightforward task. And that can translate into wasted design time, added cost, and slower time to market.


For those engineers wanting to quickly and easily customize tables, the answer comes in the form of Keysight Technologies’ Model Quality Assurance (MQA) solution. MQA is a well-known, automated SPICE model validation software that allows engineers to check and analyze SPICE model libraries, compare different models, and generate quality assurance (QA) reports in a complete and efficient way. MQA 2017 extends these capabilities by introducing the Python Report Formatting System (PyRFS) module, which allows engineers to customize tables—either generating new tables or updating existing ones—in .csv and .xlsx file formats.


PyRFS is simple enough to generate all sorts of tables quickly, with plenty of options to customize those tables in a flexible and scalable way. For engineers using MQA, that means the ability to sort, filter, formulate, format, and lay out data in many different ways. It also gives them fine-grained control over querying MQA project results.


And, because it’s Python based, engineers can even feel free to blend in their own favorite Python stuff. For you personally, having access to this functionality promises to cut your design time and speed time to market.


But how do you leverage this module? To answer that, I’ll walk you through a simple, yet complete flow for creating a .xlsx file using the table in Figure 1 as an example. The table filters and sorts W/L/T for certain device targets (Idsat).


Figure 1. Example table.

The example table comes from the QA result of one MQA rule, “Model Scalability -> Check Idsat vs L,” where there are Idsat curves from 3 temperatures and different W and L. The demo project to reproduce this example can be found at $MQAHOME/kefrfs/python/demo/data/. The first few lines of the code for generating this table are shown in Figure 2.


Figure 2. Initial code for creating the table in Figure 1.


In Figure 2, lines 1 and 2 import the necessary Python modules; “os” is Python’s native module for the operating system, and “pyrfs” is the module that comes with MQA 2017. Line 4 gets the current Python file’s directory, while line 5 locates the example MQA project folder’s path. Line 6 prepares the option called “config,” which specifies the project’s path and is fed into line 7. Line 7 then creates a data provider, “dp,” and gives it access to all of the information of the specified project for you to query.


On line 9, the “dp.query()” function is called by specifying “rule_group,” “rule,” and “check” as arguments. The values of these arguments are the folder names, respectively. The “dp” object has been narrowed down to information only from this node. Lines 10 – 12 are designed to get W/L/T as conditions, while line 13 gets Idsat as a target.


Figure 3 shows the next few lines of code. On line 15, a “table” is created by calling “rfs.ReportTable()” and giving “example_idsat_table” as its name. At the end, a file with this name is created.


Figure 3. Additional lines of code needed to generate the table in Figure 1.


Line 16 defines a RightLayout and associates it with the “table.” A Layout is how we fill up the table. RightLayout means PyRFS will automatically fill the table from left to right, starting from the top-left cell “A1” by default, and growing rows and columns as necessary so that you don’t need to worry about cell indices.


Line 17 adds W and L to the table, with a few options that make them appear in one column with a certain format, sorted ascending, and filtered by L = Lmin. Line 24 adds Idsat to the table and repeats it in 3 columns, because it depends on the 3 temperatures added in line 25. Once these constraints are defined, lines 30 and 31 fill up and save the table, respectively.
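Since Figures 2 and 3 are images, the flow they show can be summarized in this pseudocode-style reconstruction. It is assembled only from the line-by-line description above; the exact PyRFS argument names and constructor signatures are assumptions, so treat this as a sketch rather than runnable code.

```python
# Pseudocode reconstruction of Figures 2 and 3 (PyRFS signatures assumed)
import os
import pyrfs as rfs

this_dir = os.path.dirname(__file__)              # line 4: script's directory
project = os.path.join(this_dir, "data")          # line 5: example MQA project path
config = {"project_path": project}                # line 6: option fed into line 7
dp = rfs.DataProvider(config)                     # line 7: data provider (name assumed)

dp.query(rule_group="Model Scalability",          # line 9: narrow dp to one check node
         rule="Check Idsat vs L", check="...")    # check folder name not given in text
# lines 10-12: register W, L, T as conditions; line 13: Idsat as target

table = rfs.ReportTable("example_idsat_table")    # line 15: table (and output file) name
layout = rfs.RightLayout(table)                   # line 16: fill left-to-right from "A1"
# line 17: add W, L in one column, formatted, ascending, filtered by L = Lmin
# lines 24-25: add Idsat, repeated in 3 columns for the 3 temperatures
table.fill()                                      # line 30: populate the table
table.save()                                      # line 31: write the .xlsx file
```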


From this simple example, it’s easy to see how useful the PyRFS module is and how helpful it can be in automating your table generation tasks. For more information on the PyRFS module functionality and step-by-step examples of how it can be used, refer to the “PyRFS Function List” at the end of the blog and check out the online PyRFS Tutorial at:


PyRFS Function List

The MQA 2017 PyRFS is a powerful Python module for easy, yet fully customizable table generation and reporting. Below is a summary of the functions that PyRFS provides:

  • Automatic extraction of constraints (data collections) from MQA result directories
  • Generation of tables and ability to save them as Microsoft Excel .xlsx files or .csv files on Linux and Windows platforms
  • Support for updating .xlsx files under Windows
  • Support for sorting and filtering per user specification
  • Support for customized formatting of constraints in tables
  • Support for formulas calculated from other constraints in tables
  • Ability to divide data into different tables, sheets, or files per user specified conditions



FREE Evaluation of MQA | Keysight EEsof EDA  

The fundamental goal in cellular and network wireless development is maximizing antenna performance while minimizing antenna size. To achieve this, antennas have grown in array-element count and broadband-network complexity, requiring highly accurate simulation tools for design and test. With ADS 3D EM simulation software, users can easily design and simulate many different types of antennas. EMPro can simulate the antenna in realistic surroundings, including the phone components, housing, and even the human hand and head. Compliance testing can also be performed, such as specific absorption rate (SAR) and hearing aid compatibility (HAC). Below are two examples of types of antennas you can design and simulate with ADS. Visit Keysight’s EM Applications Page for more examples!


  1. Multi-Band Planar Array Antenna

    Large-array antennas become more challenging when the antenna is used for multiple bands. A field solver with the required capacity and speed is a design challenge for many engineers. One approach to this challenge is to divide the EM problem into small “sub-cells” that are simulated individually, without carrying out an EM simulation at the full structure level. However, the coupling between the sub-cells will then not be taken into account.

    Using Momentum 3D Planar EM Simulator, you can design and simulate an entire multi-band planar array antenna for communication and radar applications with optimum accuracy. The complexity of this design requires highly accurate simulation software, such as Momentum, to characterize the antenna in terms of radiation and return loss.

    Figure 1: Using Momentum 3D Planar EM Simulator, you can design and simulate an entire multi-band planar array antenna for communication and radar applications.

  2. 8x16 Patch Array Antenna
    In order to create the desired directive radiation patterns, designers arrange multiple antennas so that their coexisting wave patterns add constructively or destructively in a specific formation. The antenna’s main lobe can be steered by changing the phase of the excitations at each array element. Depending on the number of array elements and the complexity of the feeder network, the simulation of a patch array antenna can be quite challenging. Although simulation time and speed are mostly related to the problem size, another factor affecting simulation time is the frequency bandwidth.
    The EMPro FDTD simulation engine is preferred because it produces a wide band simulation result with a single simulation. No frequency sweeping is necessary. FDTD also uses less memory while speeding up the simulation utilizing GPU acceleration.

    Figure 2: The EMPro FDTD simulation engine is preferred because it produces a wide band simulation result with a single simulation. No frequency sweeping is necessary.




These are two of many prevalent applications of ADS and EMPro that can be found on Keysight’s EM Applications Page. Apply for a free trial of EMPro today!

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


This week on Tim’s Blackboard, convolution workspace!

Get Your Workspace Today!

At the bottom of the page, there are links to the available workspaces.

Let me know if you have any questions at .




Use ADS to unarchive the workspaces:

For engineering students and professionals alike, ADS simulations are a dream come true. Software that does all the calculations for me? Count me in!


As today’s circuits grow more and more complex, the processes involved in creating, testing, and simulating them tend to follow suit. RF engineers face many challenges in RF design. With vast applications and flourishing technology, engineers need an efficient, easy-to-use workspace to construct their innovative creations.


In his webcast, RF Simulations Basics, Andy Howard, Senior Applications Engineer and EEsof Applications Expert for 30 years, guides us through the basics of RF simulation in ADS, showing how it is a valuable tool with multiple applications. For those just getting introduced to RF simulations, this webcast is a great resource for understanding how to perform RF simulations and why. Andy presents six prevalent applications of RF simulations, two of which I will discuss here, that show why ADS is an integral part of bringing engineering ideas to life.



Figure 1: ADS helps RF engineers bring their ideas to fruition with its efficient, easy-to-use RF simulation guides.



RF Simulations with ADS

ADS offers a comprehensive set of advanced simulation tools, integrated into a single environment. In the webcast, Andy gives you a feel for what it’s like to run the ADS software. He shows how an ADS user can design a simple block diagram using models. These models represent transmission lines, transistors, capacitors, etc. These are “the building blocks of effective simulations.” Simulating S-Parameters of your RF design is similar to using a network analyzer; however, with ADS you can combine multiple blocks with several different parameters. There is no limit to the number of ports.


The interface allows you to adjust the parameters of each component as you place it in your schematic and set frequency limits. When the design is complete, Andy shows how you can view the Data Display window and tune the component values simultaneously. This provides an excellent visualization of how your design truly depends on the different component values.



Figure 2. With ADS, users can view the Data Display window and tune the component values simultaneously.


What can be achieved with RF Simulation?


1. Better receiver performance for all your communication devices.

The applications of ADS are vast. A simple example Andy gives is an FM Radio Receiver. The simulation provides data at each node of the block diagram, allowing him to determine precisely where the receiver has performance degradation. He also views a spectrum of a particular node, which can indicate spectral content. By doing this example in ADS, the designer can more quickly see which components are causing the degradation.  The engineer can make more accurate predictions about their design without having to do many messy calculations. ADS also allows you to tune the parameters of your devices within the block diagram, allowing you to make adjustments as needed. 



Figure 3: The simulation of an FM Radio Receiver provides data at each node of the block diagram, allowing one to determine the precise location of performance degradation. Simulation done by ADS.




2. Integration of Multiple Technologies

The coupling of multiple technologies is a challenge often faced by RF designers, especially now that multi-tech devices are becoming more commonplace. Therefore, the need for compatibility between the technologies is imperative. For example, many companies use ADS to design parts such as a Multi-Chip RF Front End Module, which is widely used to power smartphones and tablets. ADS allows designers to take a pre-existing module and virtually mount it on a board. Using multiple technologies can often cause performance degradation and increased simulation times. ADS is unique in its ability to ensure compatibility between different modules and technologies.



Figure 4. ADS allows designers to take a pre-existing module and virtually mount it on a board, allowing RF designers to couple multiple technologies into one simulation. Simulation done by ADS.


These are only two of many applications of RF simulations through ADS. Andy provides insight into these as well as four other applications of RF simulations, including Electro-Thermal Analysis and High Speed Digital Design. He also delves into the different types of simulation engines, which allow you to analyze your design’s harmonic balance results and transient simulations.

Andy has over 15 years of experience designing and simulating circuits. For those looking for an introduction to the fundamentals of RF simulations, Andy provides a well-structured one-hour lecture that describes the multiple uses of RF design software. With these fundamentals, RF students and professionals can easily turn their ideas into reality.




Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


This week on Tim’s Blackboard is “Convolution and Single Pulse Response.”



Two weeks ago, in the “Dirac Delta Misnomer” post, I explained why the Dirac delta function is technically a distribution. I also talked about the impulse response: the response of a system given the Dirac delta distribution as the input.


This week, I will demonstrate the concept of convolution and the process of generating single pulse response using convolution, as shown in Fig. 1.

Fig. 1: ADS simulation result of the single pulse response generated by old-school convolution.


System Level Abstraction

Before delving into the concept of convolution, I want to first show the top-level system diagram to place the impulse response in context.


In Fig. 2, you can see the input goes into the channel and out comes the output at the… output. In general, we don’t know how the channel operates because we do not have the information for the channel behavior, which is the reason for the question marks.


Fig. 2: By injecting an impulse at the input of the channel, one obtains the impulse response of the channel under test. Because the impulse response completely characterizes a system, one now knows the channel behavior.


To obtain a behavioral model for the channel, one sends an impulse (Dirac delta distribution) into the input of the channel. Because the impulse response completely characterizes the channel, the channel response at the output is the behavioral model of the channel we are looking for. (To learn more about signals and systems, I strongly suggest 6.003 from MIT OpenCourseWare.)


The Concept of Convolution

The form of the expression,

(f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ,

where the integral runs over all τ, shows up naturally in the solution of differential equations by integral transforms (like Fourier or Laplace transforms) related to electrical engineering, optics, probability and statistics, and other disciplines [1].


In the context of the linear time-invariant (LTI) systems, such as a high-speed channel, once we have the impulse response of the channel, the output of the system is the convolution of the input and the channel behavior, see Fig. 3.      

Fig. 3: The convolution operation appears when one computes the output of a channel, where the channel has impulse response h(t) and input x(t).


If we expand the shorthand notation, y(t) = x(t) ∗ h(t), we arrive at the familiar form

y(t) = ∫ x(τ) h(t − τ) dτ.


Graphical Convolution in Action

It’s hard to make out what is happening from the classic convolution integral, so I will represent the equation graphically.

Say we have a function, f(t), shown in Fig. 4, and we want to calculate the convolution of the function with itself.


Fig. 4: Illustration of the function f(t)


To compute the convolution of the function with itself, we will flip the function and generate its mirror image, f(−t), see Fig. 5.

Fig. 5: Illustration of f(-t), the mirror image of the function f(t)


Finally, we slide the mirrored function towards the right. As we slide the mirrored function, we multiply the two functions and integrate to find the overlapped area. Fig. 6 is an illustration of convolving f(t) with itself at some time t1.


Fig. 6: The shaded region is the value of convolving the two functions at time t1. According to the graph, at a time of 0.6 units, the result of the convolution is 0.6.


In Fig. 6, the shaded area is the value of the convolution at time t1. To find the convolution result for all time, we keep sliding, increasing the time and calculating the overlapped area at each time step. We can expect that as we slide the pink rectangle to the right, we will reach a maximum overlapped area at a time of 1 unit, where the two rectangles are right on top of each other. Afterwards, we expect the area to decrease back to zero.


The result of the convolution is shown by the red curve in Fig. 7. The red dot shows the value of the product of the two functions integrated throughout the entire time axis at a specific time t. The red trace left behind the red dot is the convolution result for all time.  


Fig. 7: Animation of convolving a rectangular function with itself. The red curve shows the result of the convolution. Fun Fact: convolving two rectangles gives you a triangle.
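The animation in Fig. 7 is easy to reproduce numerically. Below is a small sketch: sampling the unit rectangle and scaling the discrete convolution sum by the time step approximates the continuous integral. The sample count is an arbitrary choice.

```python
def convolve(f, g, dt):
    """Direct-form discrete convolution, scaled by dt to approximate
    the continuous convolution integral."""
    y = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            y[i + j] += fi * gj * dt
    return y

n = 100                        # samples across the rectangle's 1-unit width
dt = 1.0 / n
rect = [1.0] * n               # unit-height, unit-width rectangle
tri = convolve(rect, rect, dt)

# The result is the triangle: it peaks at a value of 1 near t = 1,
# and near t = 0.6 the overlapped area is about 0.6, matching Fig. 6
print(max(tri), tri.index(max(tri)) * dt, tri[59])
```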


Generating Single Pulse Response

The single pulse response of a channel is the result of convolving a single pulse with the impulse response of the channel. In this example, the single pulse comes from a 10 Gbps transmitter and has a duration of 0.1 nsec, shown in Fig. 8.

Fig. 8: The setup for single pulse response convolution.


Like the rectangle example, to get the single pulse response, we slide the single pulse, then find the overlapped area by multiplication and integration. Similarly, we expect the single pulse response to increase, reach a maximum value, and decrease to zero. Note the single pulse response has a voltage range from 0 to 1 V: the result of multiplying a large voltage value on the GV scale with a small time value on the nsec scale.


Shown in Fig. 9, the result from the simulation agrees with our expectation and we now have the knowledge of the response of the channel to a 10 Gbps pulse.  


Fig. 9: ADS simulation of single pulse response computed with convolution.


The Story Told by a Single Pulse  

You might be thinking, “Why do we care about the single pulse response?”


The short answer: because we can learn a lot from the signature of the waveform, especially when equalization is used at the transmitter and/or receiver. Interested readers can find more information in the attached single pulse response slides I presented at DesignCon 2017.


Convolution is quite a revolutionary concept for dealing with signals and systems in the time domain. In the coming weeks, we will start moving toward the frequency domain and become “bilingual” in both the time and frequency domains.


I am working on putting together a workspace for this post and it should be up next week in TB #3.5. 


That's this week's Tim's Blackboard. See you in two weeks!


To download ADS so you can learn more about convolution:


[1] A. Dominguez. (2015, Jan. 24). A History of the Convolution Operation [Online]. Available:  


Understanding the impact thermal effects can have on your circuit design is critical to being able to adequately account for them during the design process. It’s also essential to designing your circuit in an efficient way. But that’s easier said than done. A recently released video from Wolfspeed may offer you some much-needed help.


The solution involves using the Wolfspeed MMIC process design kit (PDK) that works in Keysight EEsof EDA’s Advanced Design System (ADS) software. A key feature of the Wolfspeed ADS MMIC PDK is that it’s configured to work with the ADS Electro-Thermal Simulator to co-simulate electrical and thermal performance. The feature is a powerful tool that allows you to account for the significant thermal effects that can occur when using a high-power density technology like GaN.

To demonstrate this capability, the video details the example of a simple, single-pole tuned 10-GHz power amplifier. The design uses a 1.2-mm FET and its goal is to put out about 5 watts at temperature.

The layout of the single-pole tuned 10-GHz power amplifier, designed using only elements from the PDK itself.


Schematic of the electro-thermal simulation of the single-pole tuned 10-GHz power amplifier.


Data display of the simulation of the single-pole tuned 10-GHz power amplifier from Keysight ADS software.


With the ADS Electro-Thermal Simulator, Wolfspeed was able to get an accurate, “temperature aware” IC simulation result for the PA using device temperatures that took into account both thermal coupling and the thermal characteristics of the package.

Electro-thermal simulation of the single-pole tuned 10-GHz power amplifier. The peak is at 180 degrees.


3D view of the electro-thermal simulation of the single-pole tuned 10-GHz power amplifier.


For specific details on how the ADS Electro-Thermal simulator was used in the design of the 10-GHz PA, watch the video below.


More information on the ADS Electro-Thermal Simulator is available here.



Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


This week on Tim’s Blackboard, I will resolve the arrival time inconsistency shown in the previous post.  



Last week, we sent an impulse through a section of 50-Ohm, 6-inch microstrip transmission line and expected the impulse to arrive at 1 nsec. However, the impulse arrived at 0.88 nsec at the output, see Fig. 1.


Fig. 1: ADS simulation result from last week's post. We expected the impulse to reach the output at 1 nsec, but the impulse arrived at 0.88 nsec.


Assumption, Assumption, Assumption

The answer to our question is in the assumption we made when we were calculating the time delay. The rule of thumb for the speed of propagation, 6 inch/nsec, assumes the Dk (dielectric constant) to be 4.


The speed of light in vacuum is 3×10^8 m/sec, or 30 cm/nsec, or alternatively, about 12 inch/nsec. To calculate the speed of propagation in a different medium, we divide the speed of light in vacuum by the square root of Dk:

v = c / √Dk.

Given an FR4 substrate with Dk = 4, the speed of propagation is:

v = (12 inch/nsec) / √4 = 6 inch/nsec.

Microstrip and Effective Dk

The Dk of FR4 is indeed 4, but that's not the whole story. When a signal propagates in a microstrip environment, it sees both FR4 and air. The result of the signal interacting with both media is a lower effective Dk.


A lower effective Dk increases the propagation speed and lowers the delay: consistent with last week’s result.

With a guess at what is happening, we proceed to verify whether it is correct.
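Before running another simulation, we can check the numbers with quick arithmetic. This sketch uses the rule-of-thumb speed of light from above (about 12 inch/nsec); the implied effective Dk is a back-of-envelope estimate from the observed 0.88 nsec arrival, not an extracted value.

```python
import math

c = 12.0            # speed of light, inch/nsec (rule of thumb)
length = 6.0        # line length, inch

def delay(dk):
    """Propagation delay of the line: t = length * sqrt(Dk) / c."""
    return length * math.sqrt(dk) / c

# Stripline fully in FR4 (Dk = 4) gives the expected 1 nsec delay
t_stripline = delay(4.0)

# Microstrip arrived at 0.88 nsec, so the implied effective Dk is lower
dk_eff = (c * 0.88 / length) ** 2   # about 3.1, below the FR4 value of 4

print(t_stripline, dk_eff)
```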


Consistency Test

Since the possible root cause of the early arrival time is the lower Dk due to air, we need to ensure the signal only sees the Dk of FR4. To do so, we place the same transmission line in a stripline environment, where the trace is surrounded only by FR4 material.


Fig. 2: The same 6-inch transmission line with 20 mil trace width. Note the height of the substrate is changed so the impedance of the line is still 50 Ohms.


To make sure we are doing an apples-to-apples comparison, the substrate height is increased so the impedance of the transmission line is still 50 Ohms, see Fig. 2.


We then perform the identical simulation with the new substrate. Shown in Fig. 3, the impulse indeed arrives at 1 nsec, and our guess is confirmed as the root cause of the shorter delay.

Fig. 3: Signal arrives at the predicted 1 nsec after switching the same 6-inch line into a stripline environment while maintaining impedance of 50 Ohms.

Inconsistency Resolved

Much like the melting trace paradox, our initial assumption was once again inaccurate. However, since we used a rule of thumb to quickly get the numbers for the propagation speed of light in FR4, some degree of inaccuracy could be expected.


As Dr. Eric Bogatin put it, “An okay answer now is better than a good answer later.” As long as we know the underlying assumption in the approximation, it is perfectly fine to use a rule of thumb to quickly estimate the result one expects.


That's this week's Tim's Blackboard. See you next week!



Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


This week on Tim’s Blackboard is the “Dirac Delta Misnomer.”



Did you know the famous Dirac delta function is mathematically NOT a function?


Fig. 1: ADS data display representation of the Dirac delta “function” by a line and arrowhead. The tip of the arrowhead indicates the multiplicative constant on the Dirac delta.


The "function", shown in Fig. 1, can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite:

δ(x) = ∞ for x = 0, and δ(x) = 0 for x ≠ 0,

and it is also constrained to satisfy the following identity [1]:

∫ δ(x) dx = 1, with the integral taken over the entire real line.

However, there is no function that can simultaneously have all the above properties.
Any extended-real function that is equal to zero everywhere but a single point must have total integral zero [2] [3].


Well, if the Dirac delta is not a function, what is it?


Dirac Delta Distribution

Mathematically speaking, the Dirac delta “function” is a generalized function or a distribution. It can be considered as the limit of a zero-mean normal distribution when the standard deviation, σ, approaches 0, see Fig. 2.  


Fig. 2: Normal distribution with different standard deviations shown in ADS data display. As the value of standard deviation gets smaller and smaller, the function approaches Dirac delta distribution.
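The limit in Fig. 2 is easy to reproduce numerically. A minimal sketch (using NumPy, outside of ADS): as σ shrinks, the Gaussian's peak grows without bound while its area stays fixed at one.

```python
import numpy as np

# As sigma shrinks, a zero-mean Gaussian keeps unit area while its peak
# grows without bound -- the limiting behavior sketched in Fig. 2.
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
for sigma in (0.1, 0.01, 0.001):
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    # Riemann-sum approximation of the integral over the real line
    print("sigma=%5.3f  peak=%8.1f  area=%.4f" % (sigma, g.max(), g.sum() * dx))
```

The printed area stays at 1.0000 for every σ while the peak keeps climbing, which is precisely why no ordinary function can be the limit.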


To rigorously capture the notion of the Dirac delta “function”, mathematicians have also defined a measure. Instead of spending time on the mathematics, we will look at what brings the misnamed Dirac delta distribution its fame.


Dirac Delta and Impulse Response   

The Dirac delta distribution is well known for many reasons. For example,

1.    The convolution of a Dirac delta with a function F is the original function F: δ ∗ F = F,

2.    The Fourier Transform of a Dirac delta is unity: ℱ{δ} = 1, and most importantly,

3.    The response of a given Linear Time-Invariant (LTI) system to a Dirac delta distribution completely characterizes the system.
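Property 1 can be checked numerically with a discrete impulse. A minimal sketch in NumPy (a hand experiment, not an ADS workflow):

```python
import numpy as np

# Convolving a waveform with a discrete unit impulse returns the waveform.
dt = 0.01
t = np.arange(0, 10, dt)
F = np.sin(t) * np.exp(-0.2 * t)      # arbitrary test waveform

delta = np.zeros(101)
delta[0] = 1.0                        # discrete unit impulse

out = np.convolve(F, delta)[:len(F)]  # (delta * F)(t) = F(t)
print(np.allclose(out, F))            # True
```

The same identity is what makes the impulse response so useful: convolving any input with the system's impulse response gives the output, so characterizing the response to δ characterizes the system.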


To build up a good foundation for future discussions on Convolution and Fourier Transform, let’s examine the impulse response: the response of an LTI system to the Dirac delta distribution. (note: in the following sections, the term impulse is used interchangeably with Dirac delta distribution.)  


Impulse Response of a Lossless Channel

Before we bring out simulation tools to simulate the impulse response of a lossless channel, it is important to know what to expect. Dr. Eric Bogatin named the practice Rule #9: “Never perform a measurement or simulation without first anticipating the results you expect to see.”


Shown in Fig. 3 is an illustration of the circuit setup. Given a lossless line with time delay, TD, and no mismatch to create reflection, we will see the impulse at the probe TD seconds after the impulse (Dirac delta distribution) is sent.


Fig. 3: Illustration of sending an impulse through a transmission line with time delay TD seconds  


In this experiment, we used a section of lossless transmission line with a time delay of 1 nsec. Per Rule #9, we should expect the impulse to arrive at the output at 1 nsec.

ADS Simulation Result  

As shown in Fig. 4, the simulation result is consistent with our expectation. The same impulse indeed shows up at the output 1 nsec after it leaves the source.     


Fig. 4: ADS simulation of an impulse through a 1 nsec long lossless transmission line. As expected, the impulse is delayed by 1 nsec.


Note that because it is impossible to generate an ideal Dirac delta distribution having an infinite amplitude, we are using the arrowhead to denote infinity. In the ADS workspace attached, you will find a method to approximate the impulse. The key is to ensure the approximated Dirac delta distribution has an integral of unity over the entire real line.

Consistency Test

To make sure our approximated impulse satisfies the constraints formulated before, we also plot the integral of the approximated impulse. We would expect the plot of the integral of the output impulse to be a unit step function starting at 1 nsec.

Fig. 5: The integral of the approximated impulse is indeed the step function we expected.


As shown in Fig. 5, the integral of the approximated impulse fulfills the unit step requirement. We now have confidence in the approximated impulse and the generated impulse response.      
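The same consistency test can be reproduced outside of ADS. A sketch with a narrow rectangular pulse normalized to unit area (the pulse location and widths here are illustrative, not the workspace's values):

```python
import numpy as np

# Consistency test in code: the running integral of a narrow pulse with
# unit area should be a unit step located at the pulse (here at 1 nsec).
dt = 0.001                      # nsec per sample
t = np.arange(0.0, 2.0, dt)
pulse = np.where(np.abs(t - 1.0) < 0.005, 1.0, 0.0)
pulse /= pulse.sum() * dt       # normalize to unit area, like the ADS approximation

step = np.cumsum(pulse) * dt    # running integral of the approximated impulse
print(step[t < 0.9].max(), step[-1])   # ~0 before the pulse, ~1 after
```

The running integral sits at zero before the pulse and settles at one after it, the unit step we expected.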


Impulse Response of a Lossy Channel

In real life, a lossless channel does not exist. There is always conductor loss and/or dielectric loss in the transmission line. We will now investigate a 6-inch, 50-Ohm microstrip line on an FR4 substrate with a virtual prototype.


Using the rule of thumb, 6 inch/nsec, for the speed of propagation in FR4, we would expect 1 nsec delay for a 6 inch transmission line. In addition, because of the frequency-dependent loss, we would also expect the impulse to spread out. Lastly, we should see a very high voltage peak approximating the infinite amplitude.  

ADS Simulation Setup and Result  

In the lossy case simulation, we used the multilayer library substrate and transmission line so that a 2D cross-section of the trace is solved by the method of moments, which is more accurate in simulating losses than the equation-based model.

In Fig. 6, the result of the simulation agrees with most of our predictions. The peak of the output voltage is more than 60 GV (daunting) and the impulse is more spread out because of the frequency-dependent loss.


Fig. 6: The result of lossy transmission line simulation agrees with our predictions, except for the arrival time.


Nonetheless, the arrival time of the impulse is a bit off. Instead of 1 nsec, the impulse arrives at 0.88 nsec. (Any guesses on why that is? Feel free to post possible explanations in the comment section, and check back next week for the answer.)


Dirac Delta Misnomer Corrected

After our journey today, we now know that because of its unique definition, the Dirac delta should be referred to as a distribution and not a function, a fun fact to bring up at social functions (pun intended).


Moreover, we touched upon the important properties of the Dirac delta distribution. Specifically, the response of an LTI system to a Dirac delta: the impulse response, which characterizes an LTI system completely.  


In future posts, we will build upon the impulse response idea and delve into Convolution and Fourier Transform.

That's this week's Tim's Blackboard. See you in two weeks!


Before then, make sure to download the workspace attached to see how an impulse can be approximated and how the impulse looks like after going through a realistic transmission line!





[1] Gel'fand, I. M.; Shilov, G. E. (1966–1968), Generalized functions, 1–5, Academic Press

[2] Vladimirov, V. S. (1971), Equations of mathematical physics, Marcel Dekker, ISBN 0-8247-1713-9.

[3] Duistermaat, J. J.; Kolk, J. A. C. (2010), Distributions: Theory and Applications, Springer.


Today we are going to talk about the different types of linear simulation tools ADS provides.  ADS sets up its linear simulations into three different categories:  DC, AC, and S-parameter simulation.  Let’s go over each of these simulations, and how easy they are to run in ADS.


1.  DC Simulation:

Per Ohm’s law (V = IR), a DC simulation gives you steady-state DC voltages and currents.  Capacitors are treated as ideal open circuits, and inductors are treated as ideal short circuits.  DC convergence occurs when two conditions are met: the voltage change at each iteration is zero, and Kirchhoff’s current law is satisfied, meaning the sum of the node currents equals zero.
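For intuition, the nodal equations a DC simulator solves can be written out by hand for a purely resistive circuit. A sketch (the component values are made up for illustration, not from any ADS example):

```python
import numpy as np

# DC operating point of a resistive network, found the way a DC simulator
# does: solve Kirchhoff's current law G*v = i at each node.
# Chain: 10 V source -> R1 = 1k -> node 1 -> R2 = 2k -> node 2 -> R3 = 3k -> gnd
G1, G2, G3 = 1/1e3, 1/2e3, 1/3e3
G = np.array([[G1 + G2, -G2],
              [-G2,      G2 + G3]])
i = np.array([10.0 * G1, 0.0])   # Norton equivalent of the source through R1

v = np.linalg.solve(G, i)        # node voltages: [8.333..., 5.0]
print(v)
print(np.allclose(G @ v, i))     # KCL satisfied at every node: True
```

The solved node voltages match the simple voltage-divider answer, and the residual check is exactly the Kirchhoff condition the convergence test above describes.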


 DC Simulation controller & sweep VAR


The DC Simulation icon translates to the following blue image in your workspace.


Once you double-click the VAR eqn icon, you can select the option to sweep, which allows you to sweep a parameter, but it must be declared as a variable.  To declare a variable, or a variable equation, select the following icon:

2.  AC Simulation:

An AC Simulation is performed in the frequency domain.  You can simulate a single frequency point, or across a frequency span in a linear or logarithmic sweep.  


 AC Simulation Controller

The following is what the default settings look like in your workspace. 

AC simulation is a linear, small-signal simulation, and the frequency is defined in the controller, not the source.  On-screen parameters can be set in the Display tab.  AC sources are identified as:  V_AC, I_AC, and P_AC.
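To see what an AC sweep computes, here is the small-signal transfer function of a first-order RC low-pass evaluated over a log frequency sweep (a hand calculation for intuition, not ADS syntax):

```python
import numpy as np

# Small-signal AC sweep by hand for a first-order RC low-pass filter.
R, C = 1e3, 1e-9                  # 1 kOhm, 1 nF
f = np.logspace(3, 9, 61)         # 1 kHz to 1 GHz, logarithmic sweep
H = 1.0 / (1 + 2j * np.pi * f * R * C)   # transfer function at each point

fc = 1 / (2 * np.pi * R * C)      # corner frequency, ~159 kHz
print("fc = %.0f Hz" % fc)
# |H| is ~1 well below fc and rolls off at -20 dB/decade above it
```

This is the same sweep an AC controller performs, just restricted to a circuit simple enough to solve in closed form.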


3.  S-Parameter Simulation:


S-parameters describe the response of an N-port network to a signal incident at any of its ports, in terms of ratios of power waves.  For example, S12 is the response at port 1 given an incident power wave at port 2.

Results of an S-Parameter Simulation in ADS include:


  • S-matrix with the complex values at each frequency point
  • Gamma value (complex reflection value)

  • Marker readout for Zo (characteristic impedance)
  • Smith chart plots for impedance matching


These results are similar to Network Analyzer measurements, so if you don’t have one, you can simply simulate what you are looking for in ADS.
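The Gamma and Smith-chart readouts listed above follow from the reflection-coefficient definition. A quick numerical illustration (Z0 = 50 Ohms assumed; the load values are arbitrary examples):

```python
# Reflection coefficient and VSWR behind the Smith-chart readouts:
# Gamma = (ZL - Z0) / (ZL + Z0)
def reflection_coefficient(z_load, z0=50.0):
    return (z_load - z0) / (z_load + z0)

for zl in (50.0, 100.0, 25.0, 75 + 25j):
    g = reflection_coefficient(zl)
    vswr = (1 + abs(g)) / (1 - abs(g))
    print("ZL=%s  |Gamma|=%.3f  VSWR=%.2f" % (zl, abs(g), vswr))
```

A matched 50-Ohm load gives Gamma = 0 (the center of the Smith chart), while mismatched loads move the point outward, which is exactly what the simulated S11 trace shows.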

If you don’t have ADS get a free 30-day trial here.


 S-Parameter Simulation Controller



The link below guides you through examples of an AC and S-Parameter simulation, as a visual walk through experience.


You made a bee line for understanding linear simulations!  The best way to learn is by doing, so check out the attached PDF that will walk you through an amplifier design.  This gives you the chance to see applications of the different types of linear simulations when designing. 

Many of you already know that Keysight's IC-CAP software provides an open flexible architecture that supports many industry standard and proprietary models. It also provides drivers for a range of popular test instruments required to make characterization measurements for extracting device model parameters and performing optimizations. But did you know that you can customize your IC-CAP environment to tackle your own measurement challenges?


This might be very welcome news if you are working with cutting-edge devices and need more than the capabilities already provided by IC-CAP. For many of these devices, skirting the limits of today’s technology often means that the measurement capabilities required are a moving target. And that can be a costly and time-consuming proposition if you have to buy and integrate new software tools every time you need new functionality. Your ability to automate these advanced measurements may even mean the difference between the success or failure of your project.


What if you need to implement your own behavioral model in Verilog-A along with custom parameter extraction routines? What if you want to create new measurement routines for controlling an arbitrary waveform generator? What if you want to add a time-domain measurement capability using an oscilloscope to capture your device’s fast pulse response? 


I faced such a scenario in my work at nSpace Labs, where I'm developing custom models for memristors and adding advanced measurement routines using the Keysight B1500A Semiconductor Device Analyzer. This analyzer is configured with multiple high-resolution SMUs (Source Measurement Units) and an integrated B1530A WGFMU (Waveform Generator/Fast Measurement Unit) to apply pulsed and arbitrary waveforms to the memristor’s electrodes, while simultaneously performing Fast-IV measurements. I needed to communicate with the WGFMU via GPIB using a Keysight provided software library and API for the WGFMU, something that is not currently supported by IC-CAP.


To take advantage of the WGFMU capabilities, I decided to use the Python programming language, included in all recent versions of IC-CAP, and PyVISA—a Python module that enables control of all kinds of measurement devices independent of the interface (e.g., GPIB, RS-232, USB or LAN). Utilizing IC-CAP's integrated Python environment, I was able to extend the software’s built-in measurement capabilities by integrating the third-party libraries and Python modules I needed for my application. This enabled me to programmatically generate the waveforms using Python, and then send the appropriate commands to the WGFMU for pulsing and simultaneously making Fast-IV measurements on my devices.


It took some effort, but I figured out how to install the WGFMU library and PyVISA package within a virtual Python environment for use with Keysight IC-CAP 2016. Now, I'd like to share with you the procedure for using some free Python packages and modules to extend the capabilities of IC-CAP. Using the WGFMU measurement routines that I have developed in Python, I am now able to extract and optimize parameters for my memristor model and have the ability to more accurately simulate the incremental conductance change of the memristor when a series of pulses are applied. Having more accurate models and the accompanying time-based measurement data provide me with a better understanding of the performance of the memristor-based circuits that I am designing. For you personally, extending the capabilities of IC-CAP means you can realize a significant return on your investment in your IC-CAP software, because you can now customize your environment to tackle whatever measurement challenge you might face.


Back to Basics

Before I delve into how to add PyVISA to a virtual Python environment, let’s discuss a few of the basics. First and foremost, the reason this is even possible is because the authors of IC-CAP developed a link between Python and IC-CAP back in 2013. The link provided in the module significantly expanded the flexibility and extensibility of the IC-CAP platform. The most common modules for engineering and scientific programming in Python are also currently included in the IC-CAP 2016 Python environment:

  1. Numpy: A fundamental package for numerical analysis including math functions, arrays and matrix operations.
  2. SciPy: A package for scientific and engineering computations.
  3. Matplotlib: A 2D plotting package that outputs plots in multiple popular graphics formats.
  4. PySide: A library that can be used for implementing custom graphical user interfaces using Qt.





There are a few things you’ll need to do prior to undertaking the installation of the PyVISA package:

  • Install IC-CAP_2016_01 or later, under Windows 7 in the default installation directory, which is typically C:\Keysight\ICCAP_2016_01.
  • Install the Keysight IO Libraries 17.x software and configure it to use the VISA library visa32.dll. This dynamic link library will be installed in the C:\Windows\system32 directory.
  • Configure and test communication with your instruments using the Keysight Connection Expert software.                  

If you are new to IC-CAP software or Keysight IO Libraries, you’ll first want to get up to speed by checking out the links at the end of this article. If you are new to Python programming, I suggest you read the “About Python” section and check out the links at the end of this article before attempting this configuration.


Why Configure a Virtual Python Environment?

Sometimes, adding an incompatible module to a Python environment can cause it to become corrupted. Since IC-CAP ships with its own Python environment, you’ll have to be careful or risk having to reinstall IC-CAP if problems arise. One way to avoid this problem is to use a virtual Python environment, which can be implemented using two Python packages, "virtualenv" and "virtualenvwrapper-win," designed to make it easy to create and manage completely isolated Python environments on the same computer.


By installing a separate virtual Python environment in your own user directory, you avoid breaking the IC-CAP installed environment, especially when potentially installing modules not compatible with the Python version shipped with your software. If you later upgrade to a newer version of IC-CAP, or if you need to reinstall the software for any reason, you’ll avoid having to reinstall all your third-party Python packages, assuming you have a backup of your home directory. 


The steps I’ve outlined for installing and configuring a virtual Python environment will allow you to install any third-party Python packages you want to use with IC-CAP. Key to this process is the installation of a standalone Python interpreter that includes PIP, a package manager needed to install the virtualenv and virtualenvwrapper-win. Having a stand-alone system Python environment is also great for developing experimental Python scripts and debugging in programming tools outside of the IC-CAP environment.



The following is a high-level overview of the procedure:

  • Install the Python 2.7 system environment
  • Install the virtualenv and virtualenvwrapper-win modules using PIP
  • Create your virtual Python environment for use with IC-CAP
  • Verify your new virtual Python environment
  • Configure the project directory for easy access to IC-CAP's Python interpreter
  • Install the PyVISA package using PIP in the virtual Python environment
  • Take PyVISA for a test drive using the virtual Python environment console
  • Write a macro within IC-CAP to activate the virtual Python environment and use PyVISA


Step-By-Step Process for Installing the PyVISA Package


Step 1: Install the Python 2.7 system environment

Install the latest stable Python 2.7 release, which is currently version 2.7.13 (Windows 64-bit installer). Python 2.7 is used because Python 3.x is not currently supported by IC-CAP. Choose the default installation location, which is “C:\Python27”.



Add the Python 2.7 directories to the Windows PATH environment variable. This may be done through Environment Variables under the Advanced tab of the Advanced system settings in Computer properties.


NOTE: The username in the command prompt is the name of the current user logged in to your Windows 7 machine. This generic name is a placeholder for the commands listed below and represents the current user's home directory. The commands you will need to type are displayed in green throughout this article.


Check your Python installation.


C:\Users\username> python -V

Python 2.7.13


Step 2: Install the virtualenv and virtualenvwrapper-win modules using PIP

To install virtualenv, type:

C:> pip install virtualenv


This should install virtualenv.exe into the C:\Python27\Scripts directory


To install virtualenvwrapper-win, type:

C:> pip install virtualenvwrapper-win


This should install a set of useful batch '.bat' files that wrap the virtualenv functions and make creating and managing virtual Python environments easier. These files should be installed in the C:\Python27\Scripts directory.


Check your virtualenv installation:

C:> virtualenv --version



Step 3: Create your virtual Python environment for use with IC-CAP

Create the virtual environment using the virtualenvwrapper-win provided batch file mkvirtualenv and name it icenv.

Type the following at the command prompt:


C:\Users\username> mkvirtualenv -p C:\Keysight\ICCAP_2016_01\tools\win32_64\python.exe icenv


NOTE: The -p option allows you to specify the IC-CAP Python 2.7.3 default environment with which to create our virtual Python environment. Virtualenv creates the \Envs\icenv directory and copies the specified Python interpreter and default modules to your \Users\username directory.


Assuming everything succeeds with no errors or warnings, then we should get the following text:


Running virtualenv with interpreter C:\Keysight\ICCAP_2016_01\tools\win32_64\python.exe

New python executable in C:\Users\username\Envs\icenv\Scripts\python.exe

Installing setuptools, pip, wheel...done.


NOTE: The previous step creates the directory 'C:\Users\username\Envs\icenv' and copies the Python default environment from C:\Keysight\ICCAP_2016_01\tools\win32_64. The contents of this '\Envs\icenv' directory will now be the default Python configuration for IC-CAP 2016.


Use the cdvirtualenv batch file to easily change the directory to the icenv virtual Python environment we just created.


(icenv) C:\Users\username> cdvirtualenv


The command prompt should change to the following:


(icenv) C:\Users\username\Envs\icenv >


NOTE: This is the activated state for the icenv virtual Python environment. The 'activate.bat' file was called during the mkvirtualenv command, which makes the following changes: 1) It prepends the virtual environment's path to the Windows 7 system PATH environment variable so that it will be the first Python interpreter found when searching for python.exe. 2) It modifies the shell prompt to display (icenv), which denotes the virtual environment that is currently active. If no virtual environment is activated, the system Python is active by default.


Step 4: Verify your new virtual Python environment

Verify the activated Python 2.7 environment by checking the system path 'sys.path' and system prefix 'sys.prefix'


(icenv) C:\Users\username\Envs\icenv> python -V

Python 2.7.3


Start the interactive Python interpreter for the virtual environment.


(icenv) C:\Users\username\Envs\icenv> python


      You should see something like the following:

Python 2.7.3 (default, Feb 1 2013, 15:22:31) [MSC v.1700 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.



Now, enter the following commands at the Python interactive prompt:

>>> import sys

>>> print sys.prefix


>>> print sys.path

['', 'C:\\Users\\<username>\\Envs\\icenv\\Scripts\\',



>>> quit()

(icenv) C:\Users\username\Envs\icenv>


To exit the icenv virtual environment and return to the system Python environment, type the following at the command prompt:


(icenv) C:\Users\username\Envs\icenv> deactivate



Start the interactive Python interpreter for the system Python


C:\Users\username> python


You should see something like the following:

Python 2.7.13 (v2.7.3:a06454b1afa1, Dec 17 2016, 20:54:40) [MSC v.1500 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.



Now, enter the following commands at the Python interactive prompt:


>>> import sys

>>> print sys.prefix


>>> print sys.path

['','C:\\WINDOWS\\SYSTEM32\\', 'C:\\Python27\\DLLs',

'C:\\Python27\\lib', 'C:\\Python27\\lib\\plat-win',  'C:\\Python27\\lib\\lib-tk',

'C:\\Python27', 'C:\\Python27\\lib\\site- packages']

>>> quit()



To reactivate the icenv virtual environment, use the virtualenvwrapper batch file 'workon.bat' by typing:


C:\Users\username\Envs\icenv> workon icenv

(icenv) C:\Users\username\Envs\icenv>


Step 5: Configure the project directory for easy access to IC-CAP’s Python interpreter

Here, we will set the project directory for the icenv virtual environment to IC-CAP's '\tools\win32_64' directory. This makes it easier to execute the built-in Python interpreter and import the IC-CAP included modules and Python tools.

Now you will no longer have to type the full path to access these tools.


NOTE: Anything you install using PIP while in the icenv virtual environment will still be installed in C:\Users\username\Envs\icenv.


Set the project directory to the ICCAP_2016_01 default Python path by typing the following:


(icenv) C:\Users\username\Envs\icenv> setprojectdir C:\Keysight\ICCAP_2016_01\tools\win32_64


"C:\Keysight\ICCAP_2016_01\tools\win32_64" is now the project directory for virtualenv



"C:\Keysight\ICCAP_2016_01\tools\win32_64" added to



Check that the project directory is set properly.


(icenv) C:\Users\username\Envs\icenv> cdproject

(icenv) C:\Keysight\ICCAP_2016_01\tools\win32_64>


Return to the virtual environment directory.


(icenv) C:\Keysight\ICCAP_2016_01\tools\win32_64> cdvirtualenv

(icenv) C:\Users\username\Envs\icenv>


Step 6: Install the PyVISA package using PIP in the virtual Python environment

Install PyVISA 1.8 using PIP.


(icenv) C:\Users\username\Envs\icenv> pip install pyvisa


Check that PyVISA was installed correctly to your (icenv) virtual python environment.


(icenv) C:\Users\username> cd C:\Users\username\Envs\icenv\lib\site-packages

(icenv) C:\Users\username\Envs\icenv\lib\site-packages > dir


The directory listing should now include, visa.pyc, and the [pyvisa] and [PyVISA-1.8.dist-info] folders.


Step 7: Take PyVISA for a test drive using the virtual Python environment console 

You should have already installed the VISA drivers for your interface, either GPIB or PXI, and configured the system to use the software. Perform a "Scan for Instruments" to automatically add the instruments on your system to the VISA resources. Then, start an interactive Python shell and enter the following commands:


       (icenv) C:\Users\username\Envs\icenv> python


       >>> import sys

       >>> import visa

       >>> print sys.path

        ['', 'C:\\Users\\<username>\\Envs\\icenv\\Scripts\\',

      >>> rm = visa.ResourceManager()

      >>> print rm

      Resource Manager of Visa Library at C:\Windows\system32\visa32.dll


      >>> print rm.list_resources()



NOTE: An instrument was found on interface GPIB0 at address 16. Now quit the Python session and deactivate the virtual Python environment.


     >>> quit()

     (icenv) C:\Users\username\Envs\icenv> deactivate




Step 8: Write a macro within IC-CAP to activate the icenv environment and use PyVISA

Open the project \examples\demo_features\5x3_PYTHON_PROGRAMMING\1_py_api_demo.


  1. Select the Macros tab.                                                                                                                                                           
  2. Click the "New..." button to create a new macro.
  3. Enter the name _init_pyvisa.                                                                                                                                                                                                                                                                                                                                                                                                        
  4. Select the Python (Local Namespace) radio button for the Macro Type.                                                                                                                                                                                                                                                                           
  5. Enter the following Python code in the editor window.                                                                                                                                                                         


  6. In Line #1, make sure to use the path of your home directory (where you created your icenv virtual Python environment), in this case, the 'C:\Users\username\Envs\icenv\Scripts\' path.

  7. The script changes the current system PATH to prepend the (icenv) virtual Python environment to the existing PATH environment variable. This makes it the first location searched by the Windows OS when attempting to load a module (i.e., visa). The path entry 'C:\Users\marendall\Envs\icenv\lib\site-packages' will also be prepended to the system PATH environment variable and will be made available to IC-CAP's Python environment.

  8. Line #2 contains the execfile() function, which is called to launch the script. You can view the contents of this file in your favorite text editor or standalone Python IDE.

  9. Line #5 is where we import the module functionality into our IC-CAP Python environment.

  10. Line #10 gets the default VISA resource manager, which for our system is the visa32.dll installed with the Keysight IO Libraries in the C:\Windows\system32 directory.

  11. Line #13 calls the list_resources() function in the visa32 DLL library to return the current system resources previously configured with Keysight Connection Expert. This function returns all the configured resources for the system as a Python list.

  12. In line #15 we print the result of the open_resource('GPIB0::16::INSTR') function, which should just return the     string 'GPIB Instrument at GPIB0::16::INSTR' if there are no errors.

  13. Line #17 closes the current communication session.
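The macro listing itself did not survive the page. A hypothetical reconstruction consistent with the line-by-line notes above (the paths, the activate_this.py pattern, and the GPIB address are this sketch's assumptions) might read:

```
activate_this = r'C:\Users\username\Envs\icenv\Scripts\activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

# Import PyVISA from the now-activated virtual environment
import visa

# Get the default VISA resource manager:
# visa32.dll installed with the Keysight IO Libraries

rm = visa.ResourceManager()
print rm

print rm.list_resources()

print rm.open_resource('GPIB0::16::INSTR')

rm.close()
```

The line numbers of this sketch line up with the notes: line 1 is the virtual-environment path, line 2 the execfile() call, line 5 the visa import, line 10 the resource manager, line 13 the resource listing, line 15 the open_resource() print, and line 17 the session close.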


NOTE: In practice you’ll want to code the statement in line #15 to be something like the following: sl = rm.open_resource(r), where r is the resource string ('GPIB0::16::INSTR') and sl points to the object of the opened VISA resource.


You can use call functions on this object by typing something like:

sl.write(cmd, termination='\r'), where cmd = "*RST", with termination='\r' being the carriage-return termination character to append to the end of the command string. This command performs a GPIB reset on the instrument.


You can also use other available VISA functions listed in the PyVISA docs, like sending:

gpib_response ='\r') to read the response from the instrument's output buffer.


In a follow-on article, I’ll show you how to create a PyVISA wrapper to simplify programming using these common functions; we’ll also write helper functions to check for instrument errors and display messages to the user.


NOTE: You should always close the VISA session when you are done with a resource. Not properly closing the session will cause errors if you attempt to re-open the same resource later.


Step 9: Save and execute the macro within IC-CAP

Save the macro in the model file \Users\username\iccap\python\examples\1_py_api_demo.mdl or some other suitable directory in your home directory.


  1. Click the Save button on the main toolbar, or select File->Save As from the main menu.        

  2. Select the macro _init_pyvisa from the Select Macro list.   

  3. Click the Execute button.                                                                                                                                                                                                                     

  4. Check the results of the script execution in the IC-CAP Status Window.                                                                             


The full content of the status window should read:


Resource Manager of Visa Library at C:\Windows\system32\visa32.dll


GPIBInstrument at GPIB0::16::INSTR


The Bottom Line

If you completed all of these steps successfully, you should now be able to access the full capabilities of PyVISA from IC-CAP 2016. That means you can create transforms for instrument control and data acquisition over any supported interface.


Hopefully this information will be beneficial to those wanting to customize their IC-CAP 2016 installation to take full advantage of the many powerful Python packages available for download, or to write their own custom Python/PyVISA measurement routines. This method can also be used to install additional virtual Python environments for use with other Keysight products like WaferPro Express 2016. In a future blog post, I’ll outline the steps to do just that. In the meantime, more information on IC-CAP and Keysight IO Libraries is available on the Keysight website.



About Python

If you are new to Python programming, here’s some information to help you follow the steps in this blog.  

  • Python is an "interpreted" language, which means it generally executes commands typed by the user in an interactive command shell. This is convenient for testing program statements while learning the Python syntax. However, a more common way to write a Python program is to create a script file with the '.py' extension. Say you created an example program and saved it as ''. To run the program, you simply type 'python' followed by the script's filename at the command shell prompt.
  • Python scripts, which are often called modules, can also be used for creating libraries of functionality, and are distributed along with program resources in packages.
  • Packages can be installed and combined with user-generated code to extend a program's functionality.  
  • PIP is the Python package installer that integrates with PyPI.org, the Python Package Index, a repository of numerous free Python applications, utilities and libraries. PIP allows you to download and install packages from the package index without manually downloading, uncompressing and installing the package via the command 'python install'. PIP also checks for package dependencies and automatically downloads and installs those as well. 
  • An environment is a folder (directory) that contains everything a Python project (application) needs to run in an organized, isolated manner. When it is initiated, it automatically comes with its own Python interpreter (a copy of the one used to create it) alongside its very own PIP. 
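As a minimal illustration of the script workflow described in the first bullet, the hypothetical file `greet.py` below (the filename is an assumption for this example) defines a function plus a "main" guard: running `python greet.py` from the shell executes the guard, while importing the module from another script only defines the function.

```python
# greet.py -- a hypothetical example script for illustration.
# Run it directly from the shell with:   python greet.py
# Or reuse it as a module elsewhere:     import greet

def greet(name):
    """Return a greeting string for `name`."""
    return "Hello, " + name + "!"

if __name__ == "__main__":
    # This block only runs when the file is executed directly,
    # not when the module is imported by other code.
    print(greet("IC-CAP"))
```

This import/execute split is what lets the same '.py' file serve both as a runnable program and as a reusable library module inside a package.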



Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


There is no new post this week on Tim’s Blackboard, but while you are waiting for next Wednesday to come, give this SI journal article a read. I collaborated with two of the Signal Integrity industry's leading experts, Al Neves and Mike Resso, performed simulations in ADS, and wrote about S-parameter Analysis and the Blink of an Eye.


See you next week!

TWL: 5/30/17

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.


This week on Tim’s Blackboard is the “melting trace paradox.”



Unlike other famous paradoxes, such as Zeno’s paradox of Achilles and the Tortoise, the melting trace paradox involves only a segment of copper trace and a current source, see Fig. 1. 


Fig. 1: The circuit setup for the melting trace paradox.


If you find the equations for the power dissipated in the trace and the energy required to change the trace temperature, you will find the temperature change of the trace expressed as follows:


ΔT = I²Rt / (cm)


where ΔT is the temperature change, I is the source current, R is the trace resistance, c and m are the specific heat and mass of the copper, and t is the time elapsed.


The equation states that the temperature increases with time. That is, the longer I leave the current source on, the hotter the trace gets. As the temperature reaches the melting temperature of copper, the trace melts.


There is clearly a paradox here. In the lab, we know that the temperature of the trace does not increase without bound, and the trace does not come with a warning label that says “don’t leave the current on for too long, or the trace will melt.” At some point, the temperature of the trace reaches a steady state value.


But why is our prediction not consistent with our expectation?

Is our math wrong? Or is the world we know broken?

We will now look at the melting trace paradox and find an explanation and solution for it. 


Power and Energy of the Trace

To reconstruct the melting trace paradox, let’s first look at the power dissipated by the trace. Assume the trace has resistance R and the current source is delivering I amps. We can write: 


P = I²R


Since power has units of joules per second, we know that the longer we leave the current source on, the more energy, E, is consumed:


E = Pt = I²Rt

Next, we will look at how much energy it takes to increase the temperature of the trace.


Increasing the Trace Temperature

Let’s calculate the energy required to increase the temperature of the trace. Recall the definition of specific heat: the amount of heat per unit mass required to raise the temperature by one degree Celsius. Letting c be the specific heat of the trace and m its mass, we write:


E = cmΔT


where E is the energy required to increase the trace temperature by ΔT degrees Celsius.

Because we now have an expression for the energy delivered to the trace, we can re-write the temperature change in terms of energy: 


ΔT = E / (cm)

Melting Trace Paradox     

Here is the energy delivered to the trace:

E = I²Rt

Here is the temperature change of the trace for a given energy:

ΔT = E / (cm)

Replacing E in the second equation with the expression from the first, we get:


ΔT = I²Rt / (cm)

Putting in the numbers, given a 1 A current source and a 1 inch long, 20 mil wide copper trace, leaving the current source on for 1 minute results in a 1100 °C temperature change, which exceeds the melting point of copper, 1085 °C.
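The arithmetic can be checked with a few lines of Python. The trace thickness (1 oz, about 1.4 mil, copper) and the textbook material constants below are assumptions, since the post does not state them; with these values the adiabatic temperature rise lands in the same neighborhood as the ~1100 °C quoted above, with the exact figure depending on the copper thickness and constants used.

```python
# Adiabatic temperature rise of a copper trace: dT = I^2 * R * t / (c * m).
# Geometry: 1 inch x 20 mil trace. The 1 oz (1.4 mil) copper thickness and
# the textbook material constants below are assumptions for illustration.

RHO_E = 1.7e-8    # electrical resistivity of copper, ohm*m
RHO_M = 8960.0    # density of copper, kg/m^3
C_P   = 385.0     # specific heat of copper, J/(kg*K)

MIL = 25.4e-6                 # 1 mil in meters
length    = 1.0 * 0.0254      # 1 inch, m
width     = 20.0 * MIL        # 20 mil, m
thickness = 1.4 * MIL         # 1 oz copper, m (assumed)

area = width * thickness              # cross-sectional area, m^2
R    = RHO_E * length / area          # trace resistance, ohm
mass = RHO_M * length * area          # trace mass, kg

I, t = 1.0, 60.0                      # 1 A left on for 1 minute
dT = I**2 * R * t / (C_P * mass)      # adiabatic temperature rise, K

print(f"R = {R*1e3:.1f} mOhm, dT = {dT:.0f} K")
```

Note that the rise scales as 1/area², so a thinner or narrower trace heats dramatically faster under the same current.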

We end up with a melted trace and a frowny face.

What Is Going On?

The assumption we made in calculating the energy generated is that ALL of the electric energy goes into heating the trace. That assumption is wrong for a lab environment.


In a lab environment, air provides heat transfer through convection. However, if we were in space (vacuum), where there is no air for convective heat transfer, the heat would be trapped in the trace, the trace temperature would increase with time, and finally the copper trace would melt.   


Fig. 2: Air should be included in the setup to correctly represent the lab environment.


As shown in Fig. 2, in most labs, air is present to provide heat transfer through convection, allowing the trace to reach a steady state temperature.


It would be hard to calculate the steady state temperature with pencil and paper, so I’ll show you how to find it by performing an electro-thermal simulation with ADS PIPro.  
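While only a full electro-thermal simulation accounts for the real geometry and materials, the existence of a steady state can be illustrated with a crude lumped model: the trace heats at the rate I²R minus a convective loss term h·A·(T − T_ambient), Newton's law of cooling. Every number below, including the convection coefficient h, is a rough assumption for illustration only, not the PIPro setup or result.

```python
# Crude lumped thermal model of a powered trace with convective cooling:
#   c*m * dT/dt = P_electrical - h*A_surf*(T - T_amb)
# All values are illustrative assumptions, not the ADS PIPro simulation.

c_m    = 1.58e-3   # thermal mass c*m of the trace, J/K (assumed)
P      = 0.024     # electrical power I^2*R, W (assumed)
h      = 15.0      # convection coefficient, W/(m^2*K) (rough assumption)
A_surf = 2.6e-5    # exposed surface area of the trace, m^2 (assumed)
T_amb  = 25.0      # ambient temperature, deg C

T, dt = T_amb, 0.1
for _ in range(200000):                      # integrate 20,000 s of heating
    dTdt = (P - h * A_surf * (T - T_amb)) / c_m
    T += dTdt * dt

T_steady = T_amb + P / (h * A_surf)          # closed-form steady state
print(f"simulated T = {T:.1f} C, steady state = {T_steady:.1f} C")
```

The loss term grows with temperature until it exactly cancels the electrical power, which is why the temperature plateaus instead of rising forever; set h = 0 (vacuum) and the same model heats without bound.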


Electro-Thermal Simulation of the Trace

Entering the same parameters in the ADS PIPro electro-thermal simulator, we can set up two experiments: one in a vacuum and another where air and convection are present.


We would expect the trace in the experiment with a vacuum to have an extremely high temperature, and expect the one with air to reach an equilibrium temperature.


Moreover, with air surrounding the trace, we expect the trace’s surroundings to heat up and reach a higher temperature. On the other hand, if the trace is in a vacuum, its surroundings should stay at the ambient temperature.   


Fig. 3: ADS PIPro electro-thermal simulation of a trace in a vacuum. Since there is no air to allow convective heat transfer, the trace reaches a very high temperature, ~35,000 °C.


Shown in Fig. 3, as expected, the trace in a vacuum reaches a high temperature, with no heat spreading to the trace’s surroundings. (But if there is no convective heat transfer in a vacuum, how does the warmth of the sun get to the earth!?)



Fig. 4: ADS PIPro electro-thermal simulation of a trace in a lab environment. Since air is present to provide convective heat transfer, the simulated temperature reaches a reasonable value, 56.8 °C.


Shown in Fig. 4 is the analysis done with air around the trace. As expected, the air surrounding the trace provides a means of heat transfer and allows some energy to escape instead of all of it being trapped in the trace. Reading from the result plot, we find the final temperature of the trace to be about 56.8 °C.   


The World is Not Broken and Math is Good

After the analysis, we are now sure the world is not broken and our calculations are correct. It is our assumption that needs to be improved. We took the air around us for granted and forgot to include it in our initial analysis.


By performing the consistency tests, we found the solution to the melting trace paradox and now have a better understanding of the thermal aspect of the current source and the trace. 


That's this week's Tim's Blackboard. See you in two weeks!


For more information about how ADS PIPro electro-thermal simulations can solve your paradox, go here:

For an ADS free trial:




Add PDKs to Your Design

Posted by vandduff Employee May 15, 2017

You successfully created a Low Pass Filter in ADS by following the example in my last blog post. Let’s build on what you learned from the Low Pass Filter exercise: we’ll add flexibility to your design, including PDKs, and compare an ideal schematic to a schematic built with an external vendor’s components.


What’s a PDK?

You now want to design a low noise amplifier and need to select the process in which the device will be fabricated, whether it’s Si, SiGe, GaAs, or another high frequency manufacturing process.  A Process Design Kit (PDK) contains active and passive device components with symbols, parameterized layouts, simulation models, and much more for IC design.  A PDK both shortens the product design cycle for high frequency chip design and lets you simulate your chip as you expect it to behave before it is manufactured.  Some features of a PDK may include:


Schematic Example

Parametric Layout Cells

Design Rule Checks

Simulation Results

Layout Options

For a list of foundries that provide PDKs, follow this link:


Vendor-supplied models provide more realistic results that may include parasitics.  Once your LNA has been created with the PDK of your choice, you can compare it to an ideal design by adding both the PDK design and the ideal design when setting up your plots.  The data can be viewed in several plot types, including rectangular and Smith chart plots. When setting up a plot, you can add several traces, each of which can be any S-parameter measurement, in dB or on a log scale.

The lab attached at the bottom shows you how.



What are Cell Views?

Another capability is the cell view, which is a way of capturing a design.  A cell can hold multiple views, and a schematic view can define the design in that cell.  Another way to think about it is as a kind of hierarchy, similar to pointers in the C programming language.  This capability simplifies what can be viewed in your workspace.   


Check out the attached lab with step-by-step instructions that walk you through the topics we covered: 

  • What is a PDK?
  • Data Comparison
  • What is Cell View?


For other getting started topics, check out our video playlist:

In the past, the pre-manufacture design could be simulated and tested using compliance tools from an EDA vendor, and the post-manufacture prototype could be bench tested using compliance tools from test and measurement instrument vendors. However, because of subtle differences between the two vendors’ independent approaches to compliance, it was almost impossible to correlate the two.


This opened the possibility of the pre-manufacture design passing and the post-manufacture prototype failing, necessitating a time-consuming and expensive design spin. In contrast, Compliance Test Benches leverage the exact same industry-leading Compliance App used on Keysight Infiniium oscilloscopes. Compliance Test Bench mimics a real hardware test bench, and, using a scripting technology called “Waveform Bridge,” emits the same waveforms that the Infiniium app receives when you are testing in your lab. The new approach allows engineers to apply the same set of compliance tests in all three cases: real-time/on-scope, offline/on-scope, and offline/remote.


In summary, Compliance Test Benches can: 


Break the wall between design simulation and lab measurement

  • The exact same compliance software is used for simulation and measurement


Provide a probing point where one is not accessible inside the IC chip

  • Equalization takes place inside the chip for SerDes devices
  • It must be simulated to show whether the data can be recovered by the receiver

Provide compliance validation before committing to hardware fabrication


Compliance test benches in Keysight ADS are virtual workspaces that can be customized for specific applications. Most of the connections in the design specification are included in these virtual workspaces; you can simply select them and run your simulation. Below is an example of the process of running a simulation to generate waveforms: 


Through this test, you can look at the host-side eye diagram as well as the receiver-side eye diagram and save the waveform results. Then you can run the exact same compliance application on the virtual scope on your computer. When the simulation results pass the specification at this stage, you can be confident that your pre-fabrication and post-fabrication results will agree. And if you do find any inconsistency between the two results, you can target the specific problem area. 




More Information