
# Tim’s Blackboard #10: What Makes DFE a Nonlinear Equalizer?

Posted by Tim Wang Lee Feb 26, 2018

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity. Find other cool posts here!

Last time on Tim’s Blackboard, we talked about linear Feed-Forward Equalization (FFE). This week, we will discuss the nonlinear Decision Feedback Equalization (DFE).

All the ADS content shown is in the attached workspace. Make sure you download the attached workspace and apply for a free trial to apply DFE to your own channel!

# Introduction

When I first learned about decision feedback equalization, one of the bullet points was, “it is a nonlinear equalizer,” but I never knew why. Today, I will answer the question:

What makes DFE a nonlinear equalizer?

# Decision Feedback Equalization Technique

Shown in Fig. 1, a decision feedback equalizer (DFE) can open a closed eye. Nonetheless, the signature of an eye opened by DFE is different from that of other equalizers: there are kinks in the eye diagram. To examine the eye diagram more closely, we apply single pulse analysis to look at the blink of an eye.

Fig. 1: Keysight ADS channel simulation demonstrating Decision Feedback Equalization (DFE) with different number of taps. DFE exhibits kinks in the eye.

Just like the eye diagram, we expect the single pulse response after decision feedback equalization to also have kinks. Sure enough, in Fig. 2, we see the kinks in the equalized single pulse response.

Fig. 2: Equalized single pulse response shows how DFE corrects post-cursor ISI on a single pulse that has all 0’s but a single 1. DFE inserts negative amplitudes after the received “1” pulse to better detect the next 0.

Taking a closer look, one observes that since the single pulse in Fig. 2 has all 0’s but a single 1, as soon as the DFE algorithm sees a 1, it tries to reduce inter-symbol interference (ISI) by adding negative amplitudes so that the following low voltage is lower, allowing better detection of the next 0.

By the same token, when we send a single pulse that has all 1’s but a single 0, we should expect that as soon as the algorithm sees a 0, it tries to reduce ISI by adding positive amplitudes, allowing better detection of the next 1.

Fig. 3: Equalized single pulse response shows how DFE corrects post-cursor ISI on a single pulse that has all 1’s but a single 0. DFE inserts positive amplitudes after the received “0” pulse to better detect the next 1.

The result shown in Fig. 3 is consistent with our expectation: the DFE algorithm reduces ISI based on the detected data (symbol).

Fig. 4: Comparison between received waveform and equalized waveform shows how DFE acts on the received waveform.

By comparing the received waveform with the waveform after DFE, as seen in Fig. 4, we can further see the action of the DFE algorithm, but the question remains:

What makes DFE a nonlinear equalizer?

# Symbol Detection and Decision: A Nonlinear Filter

At the arrival of each received data symbol, the DFE algorithm detects it and makes a decision. Assuming the decision is correct, proper tap values are chosen and fed back to the originally received data.

Fig. 5: An example of a symbol detector. Because the output does not scale linearly with the input, a symbol detector is nonlinear.

Shown in Fig. 5 is a symbol detection processing block. As the input doubles from 0.6 V to 1.2 V, the output does not double. Consequently, symbol detection is nonlinear. In turn, decision feedback equalization is also nonlinear.
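To see this nonlinearity in a few lines of code, here is a minimal sketch of a slicer in Python. The 0.5 V threshold and the 0 V/1 V output levels are my assumptions for illustration, not values from the ADS setup:

```python
# A minimal slicer: the decision output is a fixed logic level, so it
# cannot scale with the input. Threshold and levels are assumed values.
def symbol_detector(v_in, threshold=0.5):
    """Return the detected NRZ symbol level for a sampled input voltage."""
    return 1.0 if v_in > threshold else 0.0

print(symbol_detector(0.6))      # -> 1.0
print(symbol_detector(1.2))      # -> 1.0, not 2 * symbol_detector(0.6)
```

Doubling the input from 0.6 V to 1.2 V leaves the output unchanged, so superposition fails, which is the defining property of a nonlinear block.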

But how do we make sure the detection is correct?

Fig. 6 is an illustration of the DFE block diagram. The received symbol first undergoes feedforward equalization so that the symbol detector can make the correct decision. After the symbol detector makes a decision, the result goes through a feedback filter to be combined with the subsequently received symbols.

Fig. 6: Decision Feedback Equalizer (DFE) block diagram. A feedforward filter is at the front end of DFE to help the symbol detector make a correct decision. Each decision then goes through feedback filter to be combined with previous symbol.

Because the input to the feedback filter is the sequence of decisions on previously detected symbols, which it uses to remove the portion of the ISI caused by those symbols, DFE only removes post-cursor ISI. Moreover, DFE assumes that past symbol decisions are correct; incorrect decisions from the symbol detector corrupt the filtering of the feedback loop. As a result, the inclusion of the feedforward filter on the front end is crucial in minimizing the probability of error [1].
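To make the loop concrete, here is a behavioral sketch of a DFE in Python. This is not the ADS implementation: the sampling (one value per unit interval), the slicer threshold, and the tap values are all assumptions for illustration.

```python
def dfe(received, taps, threshold=0.5):
    """Behavioral sketch of a decision feedback equalizer.

    received : one sampled voltage per unit interval
    taps     : feedback tap weights; taps[k] scales the decision made
               k+1 symbols ago (post-cursor correction only)
    """
    decisions, equalized = [], []
    for v in received:
        # Subtract the ISI predicted from previous decisions.
        for k, c in enumerate(taps):
            if k < len(decisions):
                v -= c * decisions[-1 - k]
        equalized.append(v)
        decisions.append(1.0 if v > threshold else 0.0)  # slicer
    return equalized, decisions

# A "1" followed by post-cursor ISI that would otherwise slice as spurious 1s.
rx = [0.0, 1.0, 0.6, 0.3, 0.0]
eq, bits = dfe(rx, taps=[0.6, 0.3])
print(bits)  # -> [0.0, 1.0, 0.0, 0.0, 0.0]
```

Note how subtracting tap-weighted previous decisions plays the role of the negative amplitudes inserted after a detected 1 in Fig. 2; without the feedback, the 0.6 V sample would be sliced as a spurious 1.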

# Realization of Decision Feedback Equalization

Given the basic algorithm of DFE, I decided to design my own DFE, see Fig. 7.


Fig. 7: Demonstration of a homemade DFE in Keysight ADS. See attached workspace for detail.

Knowing the input sequence is going to be a single “1” pulse, i.e., all 0’s but a single 1, I first changed the feedback filter coefficients, V_tap1 and V_tap2, until the post-cursor ISI was reduced enough. Then, I adjusted the delay of the two taps so the corrections took place at the right time. When all was said and done, I had created a homemade 2-tap DFE. Fig. 8 shows the equalized single pulse response.

Fig. 8: Applying homemade 2-tap DFE to a single pulse.

In the process of creating a homemade DFE, I learned that the DFE algorithm is not trivial: it requires many moving pieces to align. Besides correct symbol detection, the timing and the feedback filter coefficients (tap values) also need to be appropriately selected for different channels.

Fig. 9: DFE algorithm is readily available in Keysight ADS channel simulation. There are several adaptive algorithms to choose from.

Good news! To help expedite the simulation and testing process, DFE algorithms are implemented and readily available in ADS. ADS helps you test the amount of stress your channel can handle with DFE and adaptive DFE enabled, see Fig. 9.

# Summary of Equalizations

After today, we have talked about all three equalization techniques:

• Continuous-Time Linear Equalization (CTLE),
• Feed-Forward Equalization (FFE),
• Decision Feedback Equalization (DFE).

Below is a summary of the equalizations.

Table 1: Summary of Equalization Techniques

Each of the equalizations has its own personality. While CTLE sits in the analog world, operating in the frequency domain, FFE and DFE work comfortably in the digital realm, in the time domain.

Of course, each personality has its strength and weakness, and so does each equalization. In the near future, I will examine the pros and cons of equalization techniques. Make sure you bookmark the blog and check back regularly.

For the upcoming post, I will take a step back and ask the question:

What is Signal Integrity?

Until next time, make sure you download the attached workspace and apply for a free trial to apply DFE to your own channel!

# References

[1] S. H. Hall and H. L. Heck, Advanced Signal Integrity for High-Speed Digital Designs. Wiley, 2009.

# Tim’s Blackboard #9: Eye-opening Experience with FFE

Posted by Tim Wang Lee Jan 12, 2018

Welcome to Tim’s Blackboard! This is the place to find discussion on topics related to signal integrity and power integrity. Find other cool posts here!

# Introduction

Last time on Tim’s Blackboard, I talked about Continuous-Time Linear Equalization (CTLE). This week, I will take things to discrete-time and discuss Feed-Forward Equalization (FFE).

All the ADS content shown is in the attached workspace.

# FFE Opens Closed Eyes

Illustrated in Fig. 1, Feed-Forward Equalization (FFE) can open a closed eye. However, unlike Continuous-Time Linear Equalization (CTLE), where the equalization is done with analog components in continuous-time, FFE happens digitally in discrete-time.

Fig. 1: A statistical channel simulation in Keysight ADS to demonstrate Feed-Forward Equalization (FFE) opening a closed eye.

Today, we will take a closer look at Feed-Forward Equalization and how it opens closed eyes for us.

# Concept of Feed-Forward Equalization Technique

In the time domain, FFE creates a pre-distorted pulse at the transmitter by combining delayed pulses multiplied by different weights. By choosing the correct weights to multiply each delayed pulse, one reduces the overall Inter-Symbol Interference (ISI) and opens the eye.
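The weighted-sum-of-delayed-pulses idea can be sketched in a few lines of Python. The tap values and the samples-per-UI count below are illustrative assumptions, not values from the ADS workspace:

```python
# Sketch: build an FFE pre-distorted waveform as a weighted sum of
# one-UI-delayed copies of the input pulse. Tap values are illustrative.
def ffe_predistort(pulse, taps, samples_per_ui):
    out = [0.0] * (len(pulse) + (len(taps) - 1) * samples_per_ui)
    for i, w in enumerate(taps):          # each tap delays the pulse one UI
        for n, x in enumerate(pulse):
            out[n + i * samples_per_ui] += w * x
    return out

one_ui_pulse = [1.0] * 4                  # a single "1" bit, 4 samples per UI
tx = ffe_predistort(one_ui_pulse, taps=[-0.1, 1.0, -0.25], samples_per_ui=4)
print(tx)  # negative pre- and post-shoots surround the main pulse
```

The middle tap carries the main pulse, while the negative first and last taps create the pre- and post-shoots that will cancel the channel's spreading.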

Knowing how the frequency-dependent loss of a channel spreads out the single pulse response, FFE generates a channel-specific pre-distorted pulse at the transmitter to compensate for the spread. An example FFE pulse is shown in Fig. 2.

Fig. 2: The overlay of FFE pulse to be transmitted and channel single pulse response shows FFE inserts pulses of negative amplitudes around the main pulse to cancel out positive amplitude ISI.

Once transmitted, the FFE pulse travels down the channel. As the positive main pulse spreads, the negative amplitudes surrounding it “cancel” out the spreading.

Fig. 3: The equalized single pulse is more contained because FFE reduces the ISI spreading.

Shown in Fig. 3 is the single pulse response of Wild River Technology’s 10-inch stripline channel before and after FFE. Compared to the original channel single pulse response, the equalized single pulse response shows reduced ISI.

How does one generate the pre-distorted pulses?

# Realization of Feed-Forward Equalization

Demonstrated in Fig. 4 is the transmitter architecture of FFE. Mathematically, the structure of FFE is identical to a finite impulse response (FIR) filter, where a signal goes through delay elements, and each delayed signal is multiplied by a coefficient of different weight. The important difference is how the coefficients are chosen.

Fig. 4: An animation of FFE generating a pre-distorted pulse at the transmitter.

We refer to the delay element as the tap spacing, and one tap spacing is usually one unit interval (UI). The weighting coefficients are known as tap weights, or simply taps.

The “cursor tap,” C0, has the largest magnitude of all the taps and is the main contributor to the entire FFE pulse. The subscript of each tap indicates the location of the tap relative to the cursor tap. For example, the tap C-1 is one tap spacing before the cursor tap. We further use the term pre-cursor taps to group the taps before the cursor tap, and post-cursor taps for the ones after.

One can also apply the cursor categorization to a channel single pulse response. Fig. 5 is an example of a channel single pulse response with corresponding pre- and post- cursors.

Fig. 5: Application of cursor categorization to channel single pulse response. The cursor is the point where maximum amplitude occurs. Before the cursor are the pre-cursors, and after, post-cursors.

Marking the channel single pulse response sheds different light on our understanding of the channel. Shown in Fig. 6 are single pulse responses of a simulated lossless channel and a lossy one.

Fig. 6: Comparison between a simulated lossless channel and simulated lossy channel reveals more post-cursor ISI than pre-cursor.

One observes that the frequency-dependent loss of the channel causes both pre-cursor and post-cursor ISI. Moreover, for the simulated channel in Fig. 6, there is more post-cursor ISI than pre-cursor ISI. Consequently, in this case, our selection of tap values would focus on correcting post-cursor ISI.

In practice, the user informs the FFE algorithm how many pre-cursor and post-cursor taps to use. The FFE algorithm then calculates proper values for the pre-cursor and post-cursor taps to eliminate ISI.

How do we compute the tap values!?

# Algorithms to Identify Tap Values

No hard work required! Keysight ADS has built-in FFE algorithms to compute taps that optimize the eye opening. Nonetheless, “Advanced Signal Integrity for High-Speed Digital Designs” provides a good pencil-and-paper example of the FFE zero-forcing solution [1].
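To give a flavor of the pencil-and-paper zero-forcing idea, here is a sketch in Python that forces the leading samples of an equalized pulse to [1, 0, 0]. The sampled pulse response h is made up for illustration:

```python
import numpy as np

# Zero-forcing sketch: choose taps so the equalized pulse starts with
# [1, 0, 0]. The sampled pulse response h (one sample per UI, cursor
# first, then two post-cursors) is illustrative, not measured data.
h = np.array([1.0, 0.3, 0.1])

# Lower-triangular convolution matrix of h: column j is h delayed by j UIs.
H = np.array([[h[0], 0.0,  0.0],
              [h[1], h[0], 0.0],
              [h[2], h[1], h[0]]])
d = np.array([1.0, 0.0, 0.0])          # desired leading samples
taps = np.linalg.solve(H, d)           # taps: 1, -0.3, -0.01

print(np.round(np.convolve(h, taps), 3))  # leading samples forced to 1, 0, 0
```

The equalized pulse still has a small residual tail beyond the forced samples, which is exactly why adding more taps improves the equalization.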

There are also adaptive equalization techniques that compare the desired equalizer output and the actual equalizer output. Adaptive FFE techniques such as Least-Mean-Square (LMS) and Recursive-Least-Squares (RLS) are available in Keysight ADS.

Shown in Fig. 7 is the adaptive algorithm window where the user specifies different parameters for different adaptive algorithms.

# Comparison Between CTLE and FFE

In the time domain, we should expect both techniques to correct for pre-cursor ISI and post-cursor ISI. However, because of the continuous-time, analog nature of CTLE, we expect CTLE to provide only limited improvement in pre-cursor ISI. On the other hand, operating digitally in discrete-time, we expect FFE to reduce ISI in both pre-cursor and post-cursor.

Fig. 8: Equalized single pulse response comparison between CTLE and FFE. Because of its digital nature, FFE corrects ISI beyond the first pre-cursor.

Shown in Fig. 8 is the single pulse response after CTLE and after FFE. As expected, CTLE barely provides ISI reduction beyond the first pre-cursor. In contrast, FFE can correct more pre-cursor ISI based on the number of taps specified by the user.

Seen in the frequency domain, we expect CTLE to have a high-pass filter characteristic, as specified by the zeros and poles of the filter. In the case of FFE, because of the nature of the finite impulse response filter, we expect amplification and attenuation of different harmonics of the Nyquist frequency.

Fig. 9: Comparison between CTLE and FFE spectrum. As CTLE uses the high-pass filter response to counter the channel low-pass response, FFE amplifies the odd harmonics of the Nyquist frequency to equalize the channel.

Shown in Fig. 9 is the frequency response of CTLE and FFE. From the comparison, we see that while CTLE focuses on boosting frequency content at the Nyquist frequency, the FFE algorithm selects taps that effectively amplify the odd harmonics to achieve the desired equalization result.

# Feed-Forward Equalization Summary

By selecting proper taps, FFE uses delayed pulses to cancel out ISI in the time domain. Viewed in the frequency domain, FFE effectively amplifies the odd harmonics of the Nyquist frequency and reduces ISI.
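The boost at the Nyquist frequency can be checked numerically. Below is a sketch that evaluates the frequency response of a 3-tap FFE (the tap values are illustrative assumptions); the gain at the Nyquist frequency comes out higher than at DC, which is exactly the equalizing boost:

```python
import numpy as np

# Sketch: frequency response of a 3-tap FFE (illustrative tap values).
# With one tap per UI, the Nyquist frequency is half the symbol rate,
# i.e., 0.5 cycles per UI.
taps = np.array([-0.1, 1.0, -0.25])

f = np.linspace(0.0, 0.5, 6)          # 0 .. Nyquist, in cycles per UI
H = np.array([np.sum(taps * np.exp(-2j * np.pi * fi * np.arange(len(taps))))
              for fi in f])
gain_db = 20 * np.log10(np.abs(H))

print(round(gain_db[0], 2))   # DC gain: 20*log10(|-0.1+1.0-0.25|) -> -3.74
print(round(gain_db[-1], 2))  # Nyquist gain: 20*log10(1.35) -> 2.61
```

Relative to DC, these taps boost the Nyquist frequency by more than 6 dB, counteracting the channel's high-frequency roll-off.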

So far, both CTLE and FFE are linear equalizers. In the next post, we will cover a non-linear equalization technique: Decision Feedback Equalization (DFE).

Make sure you apply for a free trial and download the attached workspace to apply FFE to your own channel!

That's this week's Tim's Blackboard. Find other cool posts here!

See you next time!

# References

[1] S. H. Hall and H. L. Heck, Advanced Signal Integrity for High-Speed Digital Designs. Wiley, 2009.

# Tim’s Blackboard #8: Eye-opening Experience with CTLE

Posted by Tim Wang Lee Dec 5, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is “Eye-opening Experience with CTLE,” where we study one of the equalization techniques. This post has an associated ADS workspace. Download it now!

# CTLE Opens Closed Eyes

In the previous post, we discussed how frequency-dependent loss of a channel causes the eye to close and concluded with the use of equalization to open the eye.

Fig. 1: A statistical channel simulation in Keysight ADS to demonstrate how CTLE of different DC attenuation opens closed PAM4 eyes.

Today, we will take a close look at Continuous-Time Linear Equalization (CTLE) and how it opens closed eyes for us, see Fig. 1.

# Concept of Equalization

As I am writing this section, I ask myself,

“What does equalization imply in a non-technical context?”

And I am pleasantly surprised by Merriam-Webster Dictionary.

Recall that a lossy channel distorts the spectrum of the original single pulse unevenly. Seen in the time domain, the sharp transitions of the pulse spread out at the beginning and the end, as demonstrated in Fig. 2.

Merriam-Webster is right! To equalize is to make the frequency-dependent loss evenly distributed throughout a wide range of frequencies.

Fig. 2: Because the lossy channel attenuates higher frequency components more than lower frequency ones, the sharp transitions at the beginning and the end of the single pulse spread out.

# Continuous-Time Linear Equalization Technique

Fig. 3 shows a collection of Continuous-Time Linear Equalization (CTLE) responses for a reference receiver according to IEEE 802.3bs Draft Standard for Ethernet (October 10th 2017).

Fig. 3: A collection of CTLE responses for a reference receiver according to IEEE 802.3bs standard for Ethernet. To illustrate the behavior of the CTLE response, the x-axis of the graph is normalized to the Nyquist frequency.

Plotted against the Nyquist frequency, the curves of the CTLE response give us insight into how CTLE evenly distributes the loss. While the CTLE response peaks at a frequency close to Nyquist to preserve content at higher frequencies, it attenuates spectral content at lower frequencies.

The construction of the CTLE response is that of a peaking filter with three poles and two zeros, defined by

$$H(f) = \frac{g_{DC} + j\,f/f_{z}}{\left(1 + j\,f/f_{p1}\right)\left(1 + j\,f/f_{p2}\right)} \cdot \frac{g_{DC2} + j\,f/f_{LF}}{1 + j\,f/f_{LF}},$$

where $g_{DC}$ and $g_{DC2}$ are the CTLE gains, $f_{p1}$ and $f_{p2}$ are the CTLE poles, $f_{z}$ is the CTLE zero, and $f_{LF}$ sets the CTLE low-frequency pole and zero. An Excel spreadsheet of the reference CTLE coefficients can be downloaded here. The coefficients are taken from Table 120E-2 of the October draft standard.
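As a sanity check, one can evaluate such a three-pole/two-zero peaking filter numerically. The sketch below uses made-up gains, poles, and zeros, NOT the Table 120E-2 coefficients, and this particular choice even peaks above 0 dB, unlike the passive reference curves:

```python
import numpy as np

# Evaluate a three-pole/two-zero CTLE peaking filter. All gains and
# pole/zero frequencies are illustrative, NOT the Table 120E-2 values.
def ctle_db(f, g_dc, g_dc2, f_z, f_p1, f_p2, f_lf):
    jf = 1j * f
    h = ((g_dc + jf / f_z) / ((1 + jf / f_p1) * (1 + jf / f_p2))
         * (g_dc2 + jf / f_lf) / (1 + jf / f_lf))
    return 20 * np.log10(np.abs(h))

f = np.array([0.1e9, 13.28e9])         # a low frequency and ~Nyquist, in Hz
db = ctle_db(f, g_dc=10**(-6.5/20), g_dc2=10**(-1.0/20),
             f_z=4e9, f_p1=13e9, f_p2=28e9, f_lf=1e9)
print(np.round(db, 1))  # low-frequency attenuation vs. boost near Nyquist
```

The printout shows the defining CTLE shape: several dB of attenuation at low frequency and a peak near the Nyquist frequency.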

On a system level, we are adding an equalizer block after the channel. Applying the multiplication property, in the frequency domain, we can view the channel and equalizer block together as the response of an equalized channel, as demonstrated in Fig. 4.

Fig. 4: Illustration of the combined equalized channel response consisting of channel and equalizer.

# Application of CTLE

Fig. 5 shows the insertion loss of a 10-inch stripline channel from Wild River Technology’s ISI-32 platform. We can see the level of insertion loss increases with frequency. In other words, channel loss is unevenly distributed throughout frequencies.

Fig. 5: Left: Wild River Technology's loss characterization ISI-32 platform. Right: the insertion loss of a 10-inch stripline channel from the platform.

Because the goal of equalization is to provide a more evenly distributed loss through a wider bandwidth than the original channel, we would expect the equalizer to improve the unevenness of the original channel.

Shown on the left of Fig. 6 is a comparison between the 10-inch stripline channel and the CTLE response. As channel loss drops with frequency, CTLE provides a peak to counteract the effect.

Fig. 6: Left: channel response and CTLE response comparison. As the S21 of the channel drops, CTLE picks up to even out the increasing loss. Right: comparison between the original channel and equalized channel. The equalized channel has a more even frequency response throughout the frequencies below Nyquist.

Shown on the right of Fig. 6 is the equalized channel. The CTLE has successfully created more even loss curve than the original.

But how do I know for sure the loss of the equalized channel is really more even for wider bandwidth than the original?

To compare the responses of the channel before and after equalization on an equal footing, we normalize the equalized channel response to have 0 dB of loss at low frequency. Fig. 7 shows the two curves. Allowing the two responses to have identical loss at low frequency, we observe that, indeed, the equalized channel provides a more even frequency response over a wider frequency range.

Fig. 7: Comparison between original channel and the equalized channel response (normalized). The comparison demonstrates that the equalized channel provides a more even frequency response up to close to Nyquist.

# Single Pulse Response with CTLE

Since CTLE improves the evenness of the frequency response, we should expect the single pulse response in the time domain to improve as well. In particular, we expect a restoration of the transitioning edges, which were distorted by the original high-frequency loss. After equalization, the spread of the single pulse should be reduced.

Fig. 8: After equalization, the single pulse spectrum is restored and results in the reduction of spread of the single pulse in the time domain.

Fig. 8 shows simulation results consistent with our expectation. As we apply more and more DC attenuation to restore the spectrum, the spread of the single pulse keeps decreasing.

However, to my surprise, the maximum eye opening does not happen at the maximum DC attenuation of 9 dB.

From the animation above, one observes both the reduction of the spread and the reduction of amplitude. Up to 6.5 dB of CTLE DC attenuation, the tail of the single pulse remains positive, reaching almost zero at 6.5 dB. As the DC attenuation increases beyond 6.5 dB, the single pulse spectrum is restored too much, resulting in a negative dip at the end of the pulse.

# Achieve Maximum Eye-opening

Because the single pulse response is a special case of an eye diagram, we would also expect the eye to exhibit the same behavior. The eye opening should reach a maximum at around 6.5 dB of DC attenuation.

Fig. 9: ADS statistical channel simulation of an eye to show the eye opening with different CTLE configurations.

In Fig. 9, one can somewhat make out the increase of eye opening as the eye amplitude decreases. To identify the precise eye width and height, we plot the width and height measurements against the CTLE configurations, see Fig. 10.

Fig. 10: Eye width and eye height for different CTLE configuration. As expected, the maximum of eye width and eye height occurs at 6.5 dB of DC attenuation.

As expected, at 6.5 dB of CTLE DC attenuation, both eye height and eye width are at their maximum. However, this might not always be the case. Every channel is a little different, and every eye is a little different. Therefore, it is important and necessary to perform analyses in both the frequency and time domains, view the single pulse response, and review the eye diagrams.

# More Equalization Techniques

Although the IEEE reference CTLE curves are all passive and have maximum 0 dB gain, depending on the need, CTLE implementations can also be active and have positive gain. As the name “continuous-time” suggests, one implements CTLE with analog components. Nonetheless, equalization can also be done in discrete-time with digital signal processing.

In the next two posts, we will discuss equalization in the discrete-time such as Feed Forward Equalization and Decision Feedback Equalization. Make sure you apply for a free trial and download the attached workspace to apply IEEE reference CTLE's to your own channel!

That's this week's Tim's Blackboard. See you next time!

# Tim’s Blackboard #7: Root Cause of Eye Closure

Posted by Tim Wang Lee Nov 9, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is “Root Cause of Eye Closure,” where we study one of the root causes of eye closure.

# Introduction

Even when you do everything right, i.e., using controlled impedance lines and termination strategy, loss remains a problem when traces are long and when transmitting in the Gigabit regime.

Specifically, it is the frequency-dependent loss that significantly degrades the signal quality at the receiver. Fig. 1 demonstrates the resulting eye diagrams of frequency-dependent loss and constant loss with the same loss at Nyquist frequency.

Fig. 1: ADS simulation of two different channels with the same loss at Nyquist frequency. The eye closure of the channel with frequency-dependent loss is more prominent than the channel with constant loss (Eye Diagrams are offset to illustrate the eye closure).

Given the same transmitter, receiver and same loss at the Nyquist frequency, the channel with frequency-dependent loss introduces more Inter-Symbol Interference (ISI) and degrades the eye horizontally more than the channel with constant loss.

But how?

In this post, we will transmit a single pulse and use our knowledge of time domain and frequency domain transformation to learn the impact of frequency-dependent loss.

# Single Pulse in Frequency Domain

To view the pulse in frequency domain, we perform Fourier Transform to decompose the input signal, channel, and output signal into their corresponding frequency spectrum. Fig. 2 shows the mathematical relationship of the input, the channel, and the output.

Fig. 2: Illustration of performing Fourier Transform on the time domain input-channel-output relationship.

After the transformation, time domain convolution corresponds to multiplication in the frequency domain: the output spectrum is the product of the input spectrum and the channel frequency response.
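The multiplication property is easy to verify numerically. The sketch below pushes a toy single pulse through a toy channel impulse response two ways, by time domain convolution and by frequency domain multiplication, and checks that the results agree:

```python
import numpy as np

# Verify the multiplication property: convolving with the channel in
# time equals multiplying spectra in frequency. Pulse and channel are
# toy data, padded with zeros so circular convolution has no wrap-around.
x = np.array([0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])  # input single pulse
h = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])  # channel impulse resp.

y_freq = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real  # Y(f) = X(f) H(f)
y_time = np.convolve(x, h)[:len(x)]                       # time domain conv.

print(np.allclose(y_freq, y_time))  # -> True
```

The agreement is exact (to floating-point precision) because the zero padding keeps the FFT's circular convolution identical to linear convolution here.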

Fig. 3 demonstrates a single pulse going through a channel in both time and frequency domain. Since the frequency domain and time domain are two sides of the same coin, if we have the frequency spectrum of a time domain signal, we could apply Inverse Fourier Transform on the spectrum to retrieve the time domain representation.

Fig. 3: Sending a single pulse through a channel.

We see difference in the shapes of the input and output single pulse spectrum in Fig. 3, and we know the channel ought to change the input spectrum, but

How does the channel frequency response cause the spread of the output time domain waveform?

To answer the question, let’s take a closer look at the time and frequency domain relationship.

# Reconstruction of Waveform from Spectrum

Frequency domain representation shows how different frequency components interact with each other to create the time domain waveform. The constructive and destructive interference of sine waves of different frequencies works together to form the time domain waveform.

Thus, the shape of the frequency spectrum is important when one wants to reconstruct and maintain the shape of the original time domain waveform. For example, if we were to divide the amplitude of the entire spectrum by two, we should expect the resulting time domain waveform to still be a single pulse, but with half of the original amplitude.

Fig. 4: Time domain and frequency domain representation of the original and modified waveforms in ADS. Because different frequency components work together to produce the shape of the original pulse, if the relative strengths of all components are kept identical, the shape of the time domain waveform stays the same.

Fig. 4 shows a modified spectrum of the same shape and the result of Inverse Fourier Transform. As we expect, because the same modification, dividing by two, is done to the entire spectrum, the relationship between different frequencies is the same. Consequently, the shape of the single pulse waveform is maintained in the time domain, and the peak amplitude is indeed half of the original.

However, if we don’t treat the spectrum as a whole and instead alter only a small part of it, even a small change in the spectrum can produce a dramatic change in the shape of the single pulse waveform, as shown in Fig. 5.

Fig. 5: Although the spectrum is modified a little bit, the relative strengths of different components are different. The new shape of the spectrum no longer corresponds to the original single pulse.

Although Fig. 5 is an extreme case where a small part of spectrum is removed, it underscores the importance of treating a given spectrum in its entirety to maintain the corresponding time domain waveform.

To see how the single pulse would look after going through the channel, let’s take a look at how the channel treats different frequency components.

# Channel Frequency Response Changes Spectrum

We can see from Fig. 6 that the frequency response of the channel modifies the spectrum differently at different frequencies. Therefore, we expect the shape of the reconstructed pulse to be different from the original.

Fig. 6: Channel frequency response shows more attenuation at higher frequencies than lower frequencies.

Specifically, because the channel attenuates higher frequency components that make up the sharp transition more than the lower frequency ones, at the output of the channel, the rising and falling transitions of the pulse will spread.

A comparison of a lossy channel and a lossless channel in Fig. 7 shows a result consistent with our expectation. The lossy channel distorts the spectrum of the original input pulse unevenly. Seen in the time domain, the sharp transitions of the original pulse spread out at the beginning and the end.

Fig. 7: Because the lossy channel attenuates higher frequency components more than lower frequency ones, the sharp transitions at the beginning and the end of the single pulse spread out.

The spreading of the single pulse is known as the inter-symbol interference (ISI) because the current pulse interferes with the one pulse before and the one after. To reduce ISI is to reduce eye closure.

# How to Avoid Eye Closure

Because frequency-dependent loss closes the eye, to open the eye, we do the following:

1. Reduce the amount of loss,
2. Remove the frequency dependence of the loss.

Given a fixed data rate, to reduce the amount of loss, we can:

• Keep traces as short as possible,
• Use a substrate with lower Dk and Df,
• Use a smoother conductor with as low a resistance as the budget allows.

To remove frequency dependence of the loss, we can equalize the spectrum with different equalization techniques:

• CTLE: Continuous Time Linear Equalizer,
• FFE: Feed-Forward Equalizer,
• DFE: Decision Feedback Equalizer.

Fig. 8 shows an example of applying equalization to open an eye.

Fig. 8: Equalization result of ADS channel simulation (Eye diagrams are offset to illustrate the opening).

The next blog talks more about the different equalization techniques and how to perform them in ADS.

Make sure you apply for a free trial and download the attached workspace to experiment with Fourier Transform and channel simulator!

That's this week's Tim's Blackboard. See you next time!

# Tim’s Blackboard #6: Unintentional Visit to Frequency Domain

Posted by Tim Wang Lee Aug 15, 2017

# Introduction

Much like Sir William Henry Bragg stated, often the recipe for new discovery entails new light, so elements can be viewed from a fresh perspective.

This week on Tim’s Blackboard, I will start with the motivation for Fourier to introduce his series, follow with his unintentional visit to the frequency domain, and end with how the new frequency domain view helps us understand the root cause of eye closure.

# Fourier and “The Analytic Theory of Heat”

Whenever frequency domain is in a conversation, there is no escape from mentioning the name of this famous mathematician and physicist: Joseph Fourier.

Fig. 1: Jean-Baptiste Joseph Fourier. Image credit: https://commons.wikimedia.org

Although well-known for the Fourier Series and Transform, in his publications of the early 1800s, the French-born scientist in Fig. 1 was originally analyzing heat flow.

To solve the heat equation in a metal plate, Fourier had the idea to decompose a complicated heat source as a linear combination of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigen-solutions. Nowadays, this superposition or linear combination is known as the Fourier Series [1].

# Fourier Series: Unfamiliar Yet Familiar

Although trying to represent a complicated function with linear combinations of sine and cosines might sound foreign, the decomposition of a complicated element into simpler sub-elements is a familiar idea.

In his lecture on Fourier Series, MIT Professor Dennis Freeman cleverly illustrates the similarity between Fourier Series and the Cartesian representation of an arbitrary vector in 3D-space [2].

Fig. 2: An arbitrary vector in 3D-space.

Shown in Fig. 2 is an arbitrary vector in 3D-space. Without additional coordinate information, our view of the vector is a geometric one: a line. However, as soon as we place the vector in a coordinate system, the vector geometry translates to vector magnitudes and directions. In a Cartesian system, there are three different components: one in x-direction, one in y-direction and one in the z-direction, as demonstrated in Fig. 3.

Fig. 3: Representation of an arbitrary vector in 3D-space in Cartesian coordinates. The original vector is separated into three components of various magnitudes in different directions.

The concept of Fourier Series is extremely similar. In Fourier Series, one deconstructs a periodic function into sines and cosines of different frequencies. The different frequencies of cosines and sines are analogous to the different directions in the Cartesian coordinates.

Take a classic ideal square wave for example. Fig. 4 shows the comparison between representing a vector in 3D-space and expressing a square wave with Fourier Series.

Fig. 4: Comparison between a vector in 3D-space and an ideal square wave expressed in Fourier Series. The sine waves of different frequencies correspond to the different directions in Cartesian coordinate system.

It is important to note that in Fig. 4, the “…” in the Fourier Series expression indicates an infinite sum of sines with only odd harmonics. Mathematically, we write

x(t) = (4/π) [ sin(ω0t) + (1/3) sin(3ω0t) + (1/5) sin(5ω0t) + … ]

Unlike the vector in 3D-space, where only three magnitudes and directions are needed to recreate the vector, we need an infinite number of magnitudes and directions to faithfully represent the ideal square wave as a Fourier Series.

But Tim, what if instead of an infinite number of odd harmonics, I only have the first 10?

In ADS, there is a Vf_Square source that lets you choose how many harmonics to include in the Fourier Series. The result of the simulation is in Fig. 5.

Fig. 5: ADS simulation result of including only the first 10 odd harmonics in the square wave.
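Outside of ADS, the truncated sum is easy to sketch in a few lines of Python; the 1 GHz fundamental and unit amplitude here are my own choices, not values taken from the workspace:

```python
import numpy as np

# Partial Fourier Series of an ideal square wave (unit amplitude),
# keeping only the first N odd harmonics.
f0 = 1e9                         # assumed fundamental frequency, Hz
t = np.linspace(0, 2e-9, 2001)   # two periods of the waveform
N = 10                           # number of odd harmonics to keep

square = np.zeros_like(t)
for k in range(N):
    n = 2 * k + 1                # odd harmonic order: 1, 3, 5, ...
    square += (4 / np.pi) * (1 / n) * np.sin(2 * np.pi * n * f0 * t)

# The truncated sum rings and overshoots near each transition
# (the Gibbs phenomenon), which is the ripple visible in Fig. 5.
print(round(float(square.max()), 2))
```

The more harmonics you keep, the narrower the ripple becomes, but the overshoot near the transitions does not go away.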

# Stepping into Frequency Domain

Writing a function in the form of Fourier Series gives us a fresh perspective. Specifically, by looking at the Fourier Series construction of a function, we are able to visualize the frequency components present in the function and the strength of each frequency component.

Let’s revisit the ideal square wave expression: the Fourier Series shown below has both “direction” and “magnitude.”

Because the multiplication factor in front of ω0 indicates the frequency of the sine wave, we plot the factor, n, on the x-axis. For each nth harmonic, there is a specific magnitude that goes on the y-axis. Fig. 6 illustrates the parameters we are plotting.

Fig. 6: Illustration of what goes on a frequency domain plot. On the x-axis, we plot the harmonics, and we plot the magnitude on the y-axis.

Fig. 7 displays a log-log plot of the frequency domain spectrum, up to the 100th harmonic, of the sine wave components that make up the ideal square wave. The 1/n relationship between magnitude and harmonic number is made clear in the log-log plot.

Fig. 7: Frequency spectrum of an ideal square wave up to the 100th harmonic. The magnitude of the harmonics is inversely proportional to the order of each harmonic.
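As a cross-check of the 1/n roll-off, here is a sketch that computes the spectrum of an ideal square wave with a discrete Fourier transform (the sampling choices are mine):

```python
import numpy as np

# Spectrum of an ideal square wave via the discrete Fourier transform,
# to verify the 1/n magnitude roll-off of the odd harmonics.
M = 4096                                    # samples per period
t = np.arange(M) / M                        # one period, normalized time
square = np.where(t < 0.5, 1.0, -1.0)       # ideal +/-1 square wave

# Single-sided magnitude spectrum; index n is the nth harmonic.
spectrum = np.abs(np.fft.rfft(square)) * 2 / M

# Odd harmonics carry a magnitude of about 4/(n*pi); even harmonics vanish.
for n in (1, 3, 5, 99):
    print(n, round(spectrum[n], 4), round(4 / (n * np.pi), 4))
```

Plotting `spectrum` against the harmonic number on log-log axes reproduces the straight 1/n slope of Fig. 7.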

# Extension of Fourier Series

Indeed, Fourier Series is very useful when it comes to representing a periodic waveform. Nonetheless, one major limitation of the Fourier Series is its assumption of a periodic waveform.

Let’s take the impulse response of a channel for example. Fig. 8 is the waveform of a channel we investigated in Dirac Delta Misnomer. The impulse response is NOT a periodic function. If I am interested in the impact of the channel on different frequency components, I would need a way to transform the aperiodic time domain response to the frequency domain.

Fig. 8: Time domain impulse response of a channel is not a periodic function.

In the next post, I will show that with the help of Fourier transform, an extension of Fourier Series, I can convert the time domain impulse response to the frequency domain insertion loss, as shown in Fig. 9.

Fig. 9: Frequency domain representation of the time domain impulse, also known as the insertion loss.

Because the insertion loss plot gives us valuable information on how each frequency component is affected by the channel, we can then identify the root cause of eye closure.
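As a preview, the time-to-frequency conversion can be sketched numerically. Note the channel below is a stand-in, a simple first-order low-pass of my own choosing, not the measured channel of Fig. 8:

```python
import numpy as np

# Fourier transform of an aperiodic impulse response -> insertion loss.
# Stand-in channel: first-order low-pass, h(t) = (1/tau) * exp(-t/tau).
dt = 1e-12                          # 1 psec time step
t = np.arange(0, 5e-9, dt)
tau = 50e-12                        # assumed channel time constant
h = (1 / tau) * np.exp(-t / tau)    # impulse response, unit area

H = np.fft.rfft(h) * dt             # approximate the continuous transform
f = np.fft.rfftfreq(len(h), dt)     # frequency axis, Hz
insertion_loss_db = 20 * np.log10(np.abs(H))

# A first-order low-pass is 3 dB down at f = 1/(2*pi*tau), about 3.2 GHz.
f3db = f[np.argmin(np.abs(insertion_loss_db + 3.01))]
print(round(f3db / 1e9, 1))
```

The same recipe, applied to a real channel's impulse response, yields the insertion loss curve of Fig. 9.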

# Conclusion

As Sir William Bragg points out, new discovery requires a new point of view. There is no doubt Fourier’s approach to the heat equation is a novel one.

By using Fourier Series, we examine the ideal square wave through the frequency domain looking glass. In the next weeks, we will see how we apply Fourier transform to understand the root cause of eye closure.

That's this week's Tim's Blackboard. See you in two weeks!

Experiment with the square wave source:

# References

[1] Wikipedia contributors, "Fourier series," 9 August 2017. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Fourier_series&oldid=786176863

[2] D. Freeman, "6.003 Signals and Systems," Massachusetts Institute of Technology: MIT OpenCourseWare, Fall 2011. [Online]. Available: https://ocw.mit.edu

# Tim’s Blackboard #5: Demystify Ultra-low Impedance Measurement

Posted by Tim Wang Lee Jul 31, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week we are taking a break from Signal Integrity. In this post, I will “demystify ultra-low impedance measurement.”

# Introduction

To measure ultra-low DC resistance, instead of traditional 2-terminal sensing, one uses 4-terminal Kelvin sensing to avoid contact resistance. Similarly, instead of using the 1-port method to measure the low impedance of the Power Distribution Network (PDN), we use the 2-port shunt technique, shown in Fig. 1.

Fig. 1: 2-port technique for ultra-low impedance measurement.

In the following paragraphs, I will show you that not only does the 2-port technique give us more measurable signal levels, it also helps us reduce the effect of contact resistance.

# Why Use the 2-port Technique?

In the March 2010 issue of the PCB Design Magazine, Mr. Istvan Novak pointed out that “S11 VNA Measurements Don’t Work for PDN Measurements.”

It is true. If I assume the device under test, the PDN, has 10 mOhm impedance and the port impedance is 50 Ohm, the S11 calculation,

S11 = 20·log10( |ZDUT − Zport| / (ZDUT + Zport) ) = 20·log10(49.99/50.01),

gives me -0.003 dB, which is easily masked by noise or bad calibration.

Even if I have an ideal VNA with no noise and with perfect calibration, the contact resistance from the test fixture to the device under test, typically in mOhm range, is large enough to influence the result of the measurement significantly.

Fig. 2: Illustration of contact resistance in series with impedance under test.

To tell how contact resistance impacts the impedance calculation, I need to derive the extracted impedance in terms of the measured S11. Using the schematic shown in Fig. 2, I would write

Zextracted = Zport·(1 + S11)/(1 − S11) = ZDUT + Rcontact

According to the impedance extraction equation, the contact resistance is directly influencing the extracted impedance. Worse yet, since the impedance and the contact resistance are of the same order of magnitude, I know the impedance extraction result is highly sensitive to the contact resistance.

# 2-port Ultra-low Impedance Measurement Technique

Shown in Fig. 3, the 2-port ultra-low impedance technique connects the device under test in shunt with the ports and uses S21 to extract the impedance under test.

Fig. 3: Ultra-low impedance measurement uses S21 to measure and extract the impedance under test.

Note that because S21 is the response at port 2 to the excitation from port 1, it’s analogous to using port 1 as a current source and port 2 as a voltage probe in DC 4-terminal sensing.

Applying S-parameter analysis to the circuit in Fig. 3, the S21 of the device under test is:

S21 = 2·ZDUT / (2·ZDUT + Zport)

Putting in the numbers (Zport = 50 Ohm, ZDUT = 10 mOhm),

S21 = 0.02/50.02 ≈ 4×10^-4, or about -68 dB.

Given a good VNA, I should be able to measure down to -68 dB.

As shown, the 2-port technique is more suitable for ultra-low impedance measurement. The measured S21 is in the -60 to -80 dB range, far more measurable than an S11 in the milli-dB range.
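The two signal levels are easy to reproduce; a short sketch, assuming the same 50 Ohm ports and 10 mOhm device under test:

```python
import numpy as np

# Signal levels of the 1-port (S11) and 2-port shunt (S21) techniques
# for a 10 mOhm device under test on 50 Ohm ports.
z_port = 50.0    # VNA port impedance, Ohm
z_dut = 0.010    # PDN impedance under test, Ohm

# 1-port: reflection off a near-short sits just below 0 dB.
s11 = (z_dut - z_port) / (z_dut + z_port)
s11_db = 20 * np.log10(abs(s11))

# 2-port shunt-through: S21 = 2*Zdut / (2*Zdut + Zport).
s21 = 2 * z_dut / (2 * z_dut + z_port)
s21_db = 20 * np.log10(s21)

print(round(s11_db, 4), round(s21_db, 1))  # milli-dB vs roughly -68 dB
```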

So far, I have shown S21 produces more measurable signal levels than the S11 measurement. Next, I will demonstrate another great feature of 2-port measurement: insensitivity to contact resistance.

# 2-port Technique Reduces Impact of Contact Resistance

Using the previous result, I continue and solve for the ideal extracted impedance given the measured S21,

ZDUT = (Zport/2)·S21/(1 − S21)

As shown, if there were no contact resistance, the above calculation with the measured S21 gives exactly the impedance under test. Let’s see what happens when I put the contact resistance back in the setup.

Fig. 4: Illustration of 2-port low impedance measurement setup with contact resistance.

Found in Fig. 4 is the 2-port measurement setup with contact resistance included. With the contact resistance, the extracted impedance is no longer just the device under test. The result of the impedance extraction is

where

Now, knowing both the contact resistance and the impedance of the device under test are in the mOhm range, I know the resistance error constant, Kr, is dominated by the sum of the contact resistances:

In addition, if I am measuring a low impedance PDN, S21 is going to be a number much smaller than 1, that is, |S21| << 1.

Given the approximations, I can rewrite the 2-port extracted impedance,

Great news! Since both S21 and the contact resistance are small numbers, the product is going to be even smaller! Consequently, as long as I am measuring low impedance, where S21 is a small value, the 2-port measurement technique is NOT sensitive to the contact resistance.
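To see the insensitivity numerically, here is a sketch that cascades the contact resistances and the DUT as ABCD matrices, then runs the ideal 2-port extraction on the resulting S21. The 5 mOhm contact resistance is an illustrative value of my own:

```python
import numpy as np

# Cascade: series contact R -> shunt DUT -> series contact R, as ABCD
# matrices, then the ideal 2-port extraction applied to the fixture's S21.
z0, z_dut, r_c = 50.0, 0.010, 0.005   # Ohm; 5 mOhm contact is illustrative

series = lambda z: np.array([[1.0, z], [0.0, 1.0]])
shunt = lambda z: np.array([[1.0, 0.0], [1.0 / z, 1.0]])

A, B, C, D = (series(r_c) @ shunt(z_dut) @ series(r_c)).ravel()
s21 = 2.0 / (A + B / z0 + C * z0 + D)   # S21 of DUT plus contacts

z_2port = (z0 / 2) * s21 / (1 - s21)    # 2-port extraction, in Ohm
z_1port = z_dut + r_c                   # 1-port extraction sees Rc directly

print(round(1e3 * z_2port, 3), round(1e3 * z_1port, 3))  # in mOhm
```

Even with contact resistance comparable to the DUT itself, the 2-port extraction lands within a small fraction of a percent of the true 10 mOhm, while the 1-port extraction is off by 50%.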

# Ultra-low Impedance Measurement Demystified

Having derived the impedance extraction equations for both 1-port and 2-port measurements, I have demonstrated that the 2-port technique is a wonderful method to measure ultra-low impedance.

The 2-port low-impedance technique can examine more than just the PDN. Because of its ability to measure ultra-low impedance, the technique is also useful for investigating the skin depth of copper traces and a capacitor’s equivalent series resistance (ESR) and equivalent series inductance (ESL).

That's this week's Tim's Blackboard. See you in two weeks!

Ultra-Low Impedance Measurements Using 2-Port Measurements

# Tim’s Blackboard #4: Your Channel, PRBS and the Eye

Posted by Tim Wang Lee Jul 12, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is “Your Channel, PRBS and the Eye.”

# Introduction

Previously on Tim’s Blackboard, we showed the convolution process and the helpful single pulse response. This week, we will extend the previously learned single pulse response (SPR) to explore Pseudo-Random Bit Sequence (PRBS) and the eye diagram, see Fig. 1.

Fig. 1: Left: A Pseudo-Random Bit Sequence. Right: An Eye Diagram.

# What the Single Pulse Is Not Telling Us

Although the single pulse response gives us information on how a single pulse reacts to the channel under test, it does not tell us how previous pulses affect the shape of the current pulse.

In an ideal world, where the channel does not distort the signal with its frequency-dependent loss, the shape of each pulse is not dependent on other pulses. However, since we live in the real world, we often observe Inter-Symbol Interference (ISI) caused by rise time degradation, a consequence of frequency-dependent loss.

Shown in Fig. 2 is the ADS simulation result of two different patterns, each followed by the single pulse pattern 01000. After going through a channel with considerable frequency-dependent loss, the single pulse waveform that comes after a string of one’s is not the same as the single pulse waveform that comes after a string of zero’s. Because the previous symbols, a string of one’s, interfere with the single pulse pattern, the voltage representing zero increases from 0 V to almost 0.3 V. If we are not careful, this increase could cause false triggering in the receiver.

Fig. 2: Shown in ADS, the shape of the single pulse depends on the pattern before the pulse.

To add, although the single pulse response is helpful, it is rare to transmit or receive only a single pulse in practical high-speed digital applications. Normally, the data pattern consists of different combinations of one’s and zero’s that we do not know a priori.

To mimic different data patterns and to characterize the level of ISI introduced by the channel, the Pseudo-Random Bit Sequence was born.

# PRBS Pattern and the Channel

Shown in Fig. 3 is an example of PRBS. As the name suggests, the Pseudo-Random Bit Sequence is a sequence of one’s and zero’s that appear independent of each other. The randomness provided by PRBS gives us an idea of how the channel affects transmitted digital data.

Fig. 3: Example of a PRBS pattern at the transmitter side before going through the channel.
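PRBS patterns are typically generated with a linear-feedback shift register (LFSR). Below is a sketch of PRBS7, one common variant; the post itself does not name a specific polynomial, so this choice is mine:

```python
# PRBS7 via a 7-bit linear-feedback shift register,
# feedback polynomial x^7 + x^6 + 1 (one common choice).
def prbs7(n_bits, seed=0x7F):
    state, bits = seed, []
    for _ in range(n_bits):
        bits.append(state & 1)                    # output the LSB
        new = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        state = ((state << 1) | new) & 0x7F
    return bits

pattern = prbs7(127)
# A maximal-length PRBS7 repeats every 127 bits and is nearly balanced:
print(len(pattern), sum(pattern))
```

The sequence is fully deterministic (hence "pseudo"-random), yet over one period it contains every nonzero 7-bit pattern exactly once.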

Much like the single pulse response, the response of the channel to PRBS is the convolution of the PRBS pattern and the impulse response of the channel.

From the single pulse response, we learned that after going through the channel, the sharp zero-to-one transition of a single pulse becomes a slower rising curve at the beginning. Also, the single pulse gains a longer tail (all thanks to frequency-dependent loss, which we will discuss in the future). In the same way, we should expect the received PRBS pattern to not have a sharp transition between the zero’s and one’s.

Fig. 4: After going through the channel, the sharp transition edges of the original PRBS pattern become slower rising and falling curves.

Fig. 4 shows the PRBS pattern after going through the channel. As expected, the sharp transitions between zero’s and one’s are smoothed out by the channel impulse response.

# Eye Diagram: The Comprehensive Version of PRBS

Although PRBS gives us an idea on how the channel affects digital data pattern, the information is scattered throughout a large time scale. It is hard to come up with a figure of merit to describe the quality of the channel by looking at data that goes on and on in time.

To create a better representation of the channel, we can manipulate the received PRBS waveform using our knowledge of the data coming in.

For example, if we are sending data at 10 Gbps, we know the unit interval (UI) of each bit is 0.1 nsec. Using our knowledge of the UI, we can “slice” the long received PRBS waveform and examine it one UI at a time. Now, because we are also interested in the transition from one bit to another, we increase our observation window to 2 UI’s, corresponding to half a UI before and half a UI after the current bit.

Fig. 5: Three example time slices of a received PRBS waveform. The data rate is 10 Gbps, corresponding to 0.1 nsec UI. To observe the bit transitions, the observation window is extended to 0.2 nsec.

Shown in Fig. 5 are example “slices” of the received PRBS waveform. The eye diagram, which is the comprehensive version of the received PRBS waveform, is constructed by overlaying these observed partial waveforms on top of each other, as demonstrated in Fig. 6.

Fig. 6: Illustration of overlaying the three slices of PRBS waveform shown in Fig. 5 to create an eye diagram.

By combining many of these time slices, an eye diagram is formed. The resulting eye diagram for the example channel is shown in Fig. 7. According to the eye diagram, one would say the example channel is a good one because of the clear eye opening. With a clear eye opening, the receiver is able to distinguish the two different voltage levels at 100 psec.

Fig. 7: The eye diagram result of the channel shows an open eye. The receiver can easily tell the high voltage level from low voltage level if a decision is made at 100 psec.
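The slicing-and-overlaying procedure sketches easily in Python; the “received” waveform below is a toy stand-in (a random bit stream smoothed by a crude single-pole filter), not the ADS result:

```python
import numpy as np

# Fold a received waveform into 2-UI slices and stack them into an eye.
rng = np.random.default_rng(0)
samples_per_ui = 100            # samples per 0.1 nsec unit interval
bits = rng.integers(0, 2, 200)  # a random bit pattern (stand-in for PRBS)
x = np.repeat(bits, samples_per_ui).astype(float)

# Crude "channel": exponential smoothing slows the edges.
y = np.empty_like(x)
acc = 0.0
for i, v in enumerate(x):
    acc += 0.05 * (v - acc)
    y[i] = acc

# Slice the waveform into 2-UI windows, advancing one UI at a time.
window = 2 * samples_per_ui
starts = range(0, len(y) - window, samples_per_ui)
eye = np.vstack([y[i:i + window] for i in starts])

print(eye.shape)  # each row is one overlaid trace of the eye diagram
```

Plotting every row of `eye` on the same axes reproduces the overlay of Fig. 6 and, with enough slices, the full eye of Fig. 7.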

To further specify the quality of the channel, one can now quote the vertical amplitude measurements of an eye (eye height, eye level, etc.) and/or the horizontal timing measurement of an eye such as jitter or eye width.

# How to Deal with a Closed Eye?

Although the example gives us a pretty clean and open eye, it is unlikely to always find such a pristine channel in practice. You are more likely to find a channel with an eye that looks like the one shown in Fig. 8.

Fig. 8: A channel that has an almost closed eye.

To deal with a closed eye, we will need to first find the root cause for the eye closure. We will do that in the next post!

That's this week's Tim's Blackboard. See you in two weeks!

# Tim’s Blackboard #3.5: Convolution Workspace

Posted by Tim Wang Lee Jun 28, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard, convolution workspace!

At the bottom of the page, there are links to the available workspaces.

Let me know if you have any questions at tim.wang-lee@keysight.com .

Cheers!

Use ADS to unarchive the workspaces:

# Tim’s Blackboard #3: Convolution and Single Pulse Response

Posted by Tim Wang Lee Jun 22, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is “Convolution and Single Pulse Response.”

# Introduction

Two weeks ago, in the “Dirac Delta Misnomer” post, I explained why Dirac delta function is technically a distribution. I also talked about the impulse response: the response of a system given the Dirac delta distribution as the input.

This week, I will demonstrate the concept of convolution and the process of generating single pulse response using convolution, as shown in Fig. 1.

Fig. 1: ADS simulation result of the single pulse response generated by old-school convolution.

# System Level Abstraction

Before delving into the concept of convolution, I want to first show the top-level system diagram to place the impulse response in context.

In Fig. 2, you can see the input goes into the channel and out comes the output at the… output. In general, we don’t know how the channel operates, because we do not have information about the channel behavior, which is the reason for the question marks.

Fig. 2: By injecting an impulse at the input of the channel, one obtains the impulse response of the channel under test. Because the impulse response completely characterizes a system, one now knows the channel behavior.

To obtain a behavioral model for the channel, one sends an impulse (Dirac delta distribution) at the input of the channel. Because the impulse response completely characterizes the channel, the channel response at the output is the behavioral model of the channel we are looking for. (To learn more about signal and systems, I strongly suggest 6.003 from MIT open courseware.)

# The Concept of Convolution

The form of the expression,

(f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ,

shows up naturally in the solution of differential equations by integral transforms (like Fourier or Laplace transforms) related to electrical engineering, optics, probability and statistics, and other disciplines [1].

In the context of the linear time-invariant (LTI) systems, such as a high-speed channel, once we have the impulse response of the channel, the output of the system is the convolution of the input and the channel behavior, see Fig. 3.

Fig. 3: The convolution operation appears when one computes the output of a channel, where the channel has impulse response, h(t), and x(t) at the input.

If we expand the shorthand notation, y(t) = x(t) ∗ h(t), we arrive at the familiar form

y(t) = ∫ x(τ) h(t − τ) dτ.

# Graphical Convolution in Action

It’s hard to make out what is happening from the classic convolution integral, so I will represent the equation graphically.

Say we have a function, f(t), shown in Fig. 4, and we want to calculate the convolution of the function with itself.

Fig. 4: Illustration of the function f(t)

To compute the convolution of the function with itself, we first flip the function to generate its mirror image, f(-t), see Fig. 5.

Fig. 5: Illustration of f(-t), the mirror image of the function f(t)

Finally, we slide the mirrored function towards the right. As we slide it, we multiply the two functions and integrate to find the overlapped area. Fig. 6 is an illustration of convolving f(t) with itself at some time t1.

Fig. 6: The shaded region is the value of convolving the two functions at time t1. According to the graph, at a time of 0.6 units, the result of the convolution is 0.6.

In Fig. 6, the shaded area is the value of the convolution at time t1. To find the convolution result for all time, we keep sliding, increasing the time and calculating the overlapped area at each time step. We can expect that as we slide the pink rectangle to the right, we reach the maximum overlapped area at a time equal to 1 unit, where the two rectangles are right on top of each other. Afterwards, we expect the area to decrease back to zero.

The result of the convolution is shown by the red curve in Fig. 7. The red dot shows the value of the product of the two functions integrated throughout the entire time axis at a specific time t. The red trace left behind the red dot is the convolution result for all time.

Fig. 7: Animation of convolving a rectangular function with itself. The red curve shows the result of the convolution. Fun Fact: convolving two rectangles gives you a triangle.
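The rectangle example is easy to verify with discrete convolution; a sketch (the 0.001-unit time step is my own choice):

```python
import numpy as np

# Convolve a unit rectangle with itself; expect a triangle that peaks
# with value 1 at t = 1 unit and falls back to zero at t = 2 units.
dt = 0.001
t = np.arange(0, 1, dt)
rect = np.ones_like(t)              # f(t): unit-high, unit-wide rectangle

tri = np.convolve(rect, rect) * dt  # discrete sum approximates the integral
t_conv = np.arange(len(tri)) * dt

# Peak value and its location:
print(round(float(tri.max()), 2), round(float(t_conv[np.argmax(tri)]), 2))
```

The multiplication by `dt` turns the discrete sum into an approximation of the continuous integral, so the peak matches the analytic result.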

# Generating Single Pulse Response

The single pulse response of a channel is the result of convolving a single pulse with the impulse response of the channel. In this example, the single pulse comes from a 10 Gbps transmitter and has a duration 0.1 nsec, shown in Fig. 8.

Fig. 8: The setup for single pulse response convolution.

Like the rectangle example, to get the single pulse response, we slide the single pulse and find the overlapped area by multiplication and integration. Similarly, we expect the single pulse response to increase, reach a maximum value, and decrease to zero. Note the single pulse response has a voltage range from 0 to 1 V: the result of multiplying a large voltage value on the GV scale with a small time value on the nsec scale.

Shown in Fig. 9, the result from the simulation agrees with our expectation and we now have the knowledge of the response of the channel to a 10 Gbps pulse.

Fig. 9: ADS simulation of single pulse response computed with convolution.

# The Story Told by a Single Pulse

You might be thinking, “Why do we care about the single pulse response?”

The short answer: because we can learn a lot from the signature of the waveform, especially when equalization is used at the transmitter and/or receiver. Interested readers can find more information in the attached single pulse response slides I presented in DesignCon 2017.

Convolution is quite a revolutionary concept in dealing with signals and systems in the time domain. In the coming weeks, we will start moving toward the frequency domain and become “bilingual” in both time and frequency domain.

I am working on putting together a workspace for this post and it should be up next week in TB #3.5.

That's this week's Tim's Blackboard. See you in two weeks!

# References

[1] A. Dominguez. (2015, Jan. 24). A History of the Convolution Operation [Online]. Available: http://pulse.embs.org/january-2015/history-convolution-operation/

# Tim’s Blackboard #2.5 The Arrival Time Inconsistency

Posted by Tim Wang Lee Jun 14, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard, I will resolve the arrival time inconsistency shown in the previous post.

# Introduction

Last week, we sent an impulse through a section of 50-Ohm, 6-inch microstrip transmission line and expected the impulse to arrive at 1 nsec. However, the impulse arrived at the output at 0.88 nsec, see Fig. 1.

Fig. 1: ADS simulation result from last week's post. We expected the impulse to reach the output at 1 nsec, but the impulse arrived at 0.88 nsec.

# Assumption, Assumption, Assumption

The answer to our question is in the assumption we made when we were calculating the time delay. The rule of thumb for the speed of propagation, 6 inch/nsec, assumes the Dk (dielectric constant) to be 4.

The speed of light in vacuum is 3×10^8 m/sec, or 30 cm/nsec, or about 12 inch/nsec. To calculate the speed of propagation in a different medium, we divide the speed of light in vacuum by the square root of Dk:

v = c / √Dk

Given an FR4 substrate with Dk = 4, the speed of propagation is:

v = (12 inch/nsec) / √4 = 6 inch/nsec
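The rule-of-thumb arithmetic in one small sketch:

```python
# Propagation speed from the dielectric constant: v = c / sqrt(Dk).
c_in_per_nsec = 12.0   # speed of light in vacuum, about 12 inch/nsec
dk_fr4 = 4.0           # assumed bulk Dk of FR4

v = c_in_per_nsec / dk_fr4 ** 0.5   # inch/nsec
delay = 6.0 / v                     # delay of a 6-inch line, nsec
print(v, delay)                     # 6 inch/nsec -> 1 nsec for 6 inches
```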

# Microstrip and Effective Dk

The Dk of FR4 is indeed 4, but that's not the whole story. When a signal propagates in a microstrip environment, it sees both FR4 and air. The result of the signal interacting with both media is a lower effective Dk.

A lower effective Dk would increase the propagation speed and lower the delay: consistent with last week’s result.

Having a guess of what was happening, we proceed to verify whether our guess is correct.

# Consistency Test

Since the possible root cause of the early arrival time is the lower Dk due to air, we need to ensure the signal only sees the Dk of FR4. To do so, we place the same transmission line in a stripline environment, where the trace is only surrounded by FR4 material.

Fig. 2: The same 6-inch transmission line with 20 mil trace width. Note the height of the substrate is changed so the impedance of the line is still 50 Ohms.

To make sure we are doing an apples-to-apples comparison, the substrate height is increased so the impedance of the transmission line is still 50 Ohms, see Fig. 2.

We then perform the identical simulation with the new substrate. Shown in Fig. 3, the impulse indeed arrives at 1 nsec, and our guess is validated as the root cause of the shorter delay.

Fig. 3: Signal arrives at the predicted 1 nsec after switching the same 6-inch line into a stripline environment while maintaining impedance of 50 Ohms.

# Inconsistency Resolved

Much like the melting trace paradox, our initial assumption was once again inaccurate. However, since we used a rule of thumb to quickly get the numbers for the propagation speed of light in FR4, some degree of inaccuracy could be expected.

As Dr. Eric Bogatin put it, “An okay answer now is better than a good answer later.” As long as we know the underlying assumptions in the approximation, it is perfectly fine to use a rule of thumb to quickly estimate the result one expects.

That's this week's Tim's Blackboard. See you next week!

Use ADS simulations to perform consistency tests:

# Tim's Blackboard #2: The Dirac Delta Misnomer

Posted by Tim Wang Lee Jun 7, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is the “Dirac Delta Misnomer.”

# Introduction

Did you know the famous Dirac delta function is mathematically NOT a function?

Fig. 1: ADS data display representation of the Dirac delta “function” by a line and arrowhead. The tip of the arrow head indicates the multiplicative constant to the Dirac delta.

The Dirac delta "function", shown in Fig. 1, can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite:

δ(x) = ∞ if x = 0, and δ(x) = 0 if x ≠ 0,

and it is also constrained to satisfy the following identity [1]:

∫ δ(x) dx = 1, with the integral taken over the entire real line.

However, there is no function that can simultaneously have all the above properties.
Any extended-real function that is equal to zero everywhere but a single point must have total integral zero [2] [3].

Well, if the Dirac delta is not a function, what is it?

# Dirac Delta Distribution

Mathematically speaking, the Dirac delta “function” is a generalized function or a distribution. It can be considered as the limit of a zero-mean normal distribution when the standard deviation, σ, approaches 0, see Fig. 2.

Fig. 2: Normal distribution with different standard deviations shown in ADS data display. As the value of standard deviation gets smaller and smaller, the function approaches Dirac delta distribution.

To rigorously capture the notion of the Dirac delta “function”, mathematicians have also defined it as a measure. Instead of spending time on the mathematics, we will look at what brings the misnamed Dirac delta distribution its fame.

# Dirac Delta and Impulse Response

The Dirac delta distribution is well known for many reasons. For example,

1.    The convolution of a Dirac delta with a function F is the original function F, (δ ∗ F)(t) = F(t),

2.    The Fourier Transform of a Dirac delta is unity, ℱ{δ(t)} = 1, and most importantly,

3.    The response of a given Linear Time-Invariant (LTI) system to a Dirac delta distribution completely characterizes the system.

To build up a good foundation for future discussions on Convolution and Fourier Transform, let’s examine the impulse response: the response of an LTI system to the Dirac delta distribution. (note: in the following sections, the term impulse is used interchangeably with Dirac delta distribution.)

# Impulse Response of a Lossless Channel

Before we bring out simulation tools to simulate the impulse response of a lossless channel, it is important to know what to expect. Dr. Eric Bogatin named the practice Rule #9: “Never perform a measurement or simulation without first anticipating the results you expect to see.”

## Expectation

Shown in Fig. 3 is an illustration of the circuit setup. Given a lossless line with time delay, TD, and no mismatch to create reflection, we will see the impulse at the probe TD seconds after the impulse (Dirac delta distribution) is sent.

Fig. 3: Illustration of sending an impulse through a transmission line with time delay TD seconds

In this experiment, we used a section of lossless transmission line that is 1 nsec long. Per Rule #9, we should expect the impulse to arrive at the output at 1 nsec.

As shown in Fig. 4, the simulation result is consistent with our expectation. The same impulse indeed shows up at the output 1 nsec after it leaves the source.

Fig. 4: ADS simulation of an impulse through a 1 nsec long lossless transmission line. As expected, the impulse is delayed by 1 nsec.

Note that because it is impossible to generate an ideal Dirac delta distribution having an infinite amplitude, we are using the arrowhead to denote infinity. In the ADS workspace attached, you will find a method to approximate the impulse. The key is to ensure the approximated Dirac delta distribution has an integral of unity over the entire real line.

## Consistency Test

To make sure our approximated impulse satisfies the constraints formulated before, we also plot the integral of the approximated impulse. We would expect the plot of the integral of the output impulse to be a unit step function starting at 1 nsec.

Fig. 5: The integral of the approximated impulse is indeed the step function we expected.

As shown in Fig. 5, the integral of the approximated impulse fulfills the unit step requirement. We now have confidence in the approximated impulse and the generated impulse response.
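The same consistency test runs in a few lines of Python: a narrow Gaussian with unit area stands in for the impulse (the 10 psec width is my own choice), and its running integral should approach a unit step:

```python
import numpy as np

# A unit-area Gaussian approximates the impulse; its running integral
# should rise from 0 to 1 at the arrival time, like a unit step.
dt = 1e-12                     # 1 psec step
t = np.arange(0, 2e-9, dt)
t0, sigma = 1e-9, 10e-12       # arrival at 1 nsec, 10 psec width

impulse = np.exp(-0.5 * ((t - t0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
integral = np.cumsum(impulse) * dt   # running integral of the impulse

# Total area must be unity for a valid Dirac delta approximation.
print(round(float(integral[-1]), 3))
```

Making sigma smaller sharpens both the impulse and the step, at the cost of a larger peak amplitude, which is exactly the trade-off behind the huge GV peaks in the simulations.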

# Impulse Response of a Lossy Channel

In real life, a lossless channel does not exist. There is always conductor loss and/or dielectric loss in the transmission line. We will now investigate a 6 inch 50 Ohm microstrip line on an FR4 substrate with a virtual prototype.

## Expectation

Using the rule of thumb, 6 inch/nsec, for the speed of propagation in FR4, we would expect 1 nsec delay for a 6 inch transmission line. In addition, because of the frequency-dependent loss, we would also expect the impulse to spread out. Lastly, we should see a very high voltage peak approximating the infinite amplitude.

## ADS Simulation Setup and Result

In the lossy case simulation, we used the multilayer library substrate and transmission line so that a 2D cross-section of the trace is solved by the method of moments, which simulates losses more accurately than the equation-based model.

In Fig. 6, the result of the simulation agrees with most of our predictions. The peak of the output voltage is more than 60 GV (daunting) and the impulse is more spread out because of the frequency-dependent loss.

Fig. 6: The result of lossy transmission line simulation agrees with our predictions, except for the arrival time.

However, the arrival time of the impulse is a bit off: instead of 1 nsec, the impulse arrives at 0.88 nsec. (Any guesses why? Feel free to post possible explanations in the comment section, and check back next week for the answer.)

# Dirac Delta Misnomer Corrected

After our journey today, we now know that because of its unique definition, the Dirac delta should be referred to as a distribution and not a function, a fun fact to bring up at social functions (pun intended).

Moreover, we touched upon the important properties of the Dirac delta distribution, in particular the response of an LTI system to a Dirac delta: the impulse response, which completely characterizes an LTI system.

In future posts, we will build upon the impulse response idea and delve into Convolution and Fourier Transform.

That's this week's Tim's Blackboard. See you in two weeks!

Before then, make sure to download the workspace attached to see how an impulse can be approximated and what the impulse looks like after going through a realistic transmission line!

# References

[1] Gel'fand, I. M.; Shilov, G. E. (1966–1968), Generalized Functions, Vols. 1–5, Academic Press.

[2] Vladimirov, V. S. (1971), Equations of Mathematical Physics, Marcel Dekker, ISBN 0-8247-1713-9.

[3] Duistermaat, J. J.; Kolk, J. A. C. (2010), Distributions: Theory and Applications, Springer.

# Tim's Blackboard #1.5: the Blink of an Eye

Posted by Tim Wang Lee May 30, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

There is no new post this week on Tim’s Blackboard, but while you are waiting for next Wednesday to come, give this SI journal article a read. I collaborated with the Signal Integrity industry's leading experts, Al Neves and Mike Resso, performed simulations in ADS, and wrote about S-parameter Analysis and the Blink of an Eye.

See you next week!

TWL: 5/30/17

# Tim’s Blackboard #1: The Melting Trace Paradox

Posted by Tim Wang Lee May 24, 2017

Welcome to Tim’s Blackboard! This is the place to find discussions on interesting topics related to signal integrity and power integrity.

This week on Tim’s Blackboard is the “melting trace paradox.”

# Introduction

Unlike other famous paradoxes, such as Zeno's paradox of Achilles and the Tortoise, the melting trace paradox involves only a segment of copper trace and a current source, see Fig. 1.

Fig. 1: The circuit setup for the melting trace paradox.

If you write the equations for the power dissipated in the trace and the energy required to change the trace temperature, you will find the temperature change of the trace expressed as follows:

$$\Delta T = \frac{I^2 R\, t}{c\, m}$$

where $\Delta T$ is the temperature change, $I$ is the source current, $R$ is the trace resistance, $c$ and $m$ are the specific heat and mass of the copper trace, and $t$ is the elapsed time.

The equation states that the temperature increases with time. That is, the longer I leave the current source on, the hotter the trace gets. As the temperature reaches the melting temperature of copper, the trace melts.

There is clearly a paradox here. In labs, we know that the temperature of the trace does not increase without bound, and the trace does not come with a warning label that says “don’t leave current on for too long, trace will melt.” At some point, the temperature of the trace reaches a steady state value.

But how come our prediction is not consistent with our expectation?

Is our math wrong? Or is the world we know broken?

We will now look at the melting trace paradox and find an explanation and solution for it.

# Power and Energy of the Trace

To reconstruct the melting trace paradox, let’s first look at the power dissipated by the trace. Assume the trace has resistance $R$ and the current source is delivering $I$ Amps. We can write:

$$P = I^2 R$$

Since power has units of Joules per second, we know that the longer we leave the current source on, the more energy, $E$, is consumed:

$$E = P\, t = I^2 R\, t$$

Next, we will look at how much energy it takes to increase the temperature of the trace.

# Increasing the Trace Temperature

Let’s calculate the energy required to increase the temperature of the trace. Recall the definition of specific heat: the amount of heat per unit mass required to raise the temperature by one degree Celsius. Letting $c$ be the specific heat of the trace and $m$ the mass of the trace, we write:

$$E = c\, m\, \Delta T$$

where $E$ is the energy required to increase the trace temperature by $\Delta T$ degrees Celsius.

Because we have an expression for the energy delivered to the trace, we can rewrite the temperature change in terms of energy. The energy delivered to the trace is:

$$E = I^2 R\, t$$

and the temperature change for a given energy is:

$$\Delta T = \frac{E}{c\, m}$$

Substituting the first equation into the second, we get:

$$\Delta T = \frac{I^2 R\, t}{c\, m}$$

Putting in the numbers for a 1 A current source and a 1 inch long, 20 mil wide copper trace, leaving the current source on for 1 minute results in about 1100 °C of temperature change, which exceeds the melting point of copper at 1085 °C.

We end up with a melted trace and a frowny face.
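The back-of-envelope calculation can be sketched as below. The post does not state the copper thickness or material constants, so this assumes 1-oz (1.4 mil) copper and handbook values for copper; with those assumptions the rise comes out on the order of 900 °C, the same order as the quoted ~1100 °C, and either way well past the point where the no-heat-loss assumption predicts a melted trace.

```python
# Back-of-envelope adiabatic heating of a copper trace (no heat loss).
# Assumed values not stated in the post: 1-oz copper (1.4 mil thick),
# resistivity 1.68e-8 ohm-m, density 8960 kg/m^3, specific heat 385 J/(kg K).
MIL = 2.54e-5                       # meters per mil

rho  = 1.68e-8                      # copper resistivity, ohm-m
dens = 8960.0                       # copper density, kg/m^3
c    = 385.0                        # copper specific heat, J/(kg K)

L = 1.0 * 0.0254                    # 1 inch length, m
w = 20 * MIL                        # 20 mil width, m
h = 1.4 * MIL                       # assumed 1-oz copper thickness, m

I, t = 1.0, 60.0                    # 1 A left on for 1 minute

R  = rho * L / (w * h)              # trace resistance, ohm
m  = dens * w * h * L               # trace mass, kg
dT = I**2 * R * t / (c * m)         # adiabatic temperature rise, K

print(f"R = {R*1e3:.1f} mOhm, dT = {dT:.0f} C")
```

Note that the result scales as $1/(wh)^2$: a thinner or narrower trace heats up dramatically faster, since it has both higher resistance and less mass.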

# What Is Going On?

The assumption we made in calculating the energy generated is that ALL the electric energy goes to heat up the trace. The assumption is wrong for a lab environment.

In a lab environment, we have air to provide heat transfer through convection. However, if we are in space (vacuum), where there is no air for convective heat transfer, the heat is trapped in the trace, the trace temperature increases with time, and finally the copper trace melts.

Fig. 2: Air should be included in the setup to correctly represent the lab environment.

As shown in Fig. 2, in most labs, air is present to provide heat transfer through convection, allowing the trace to reach a steady state temperature.

It would be hard to calculate the steady state temperature with pencil and paper. I’ll show you how to find the temperature by performing an electro-thermal simulation with ADS PIPro.

# Electro-Thermal Simulation of the Trace

Entering the same parameters in the ADS PIPro electro-thermal simulator, we can set up two experiments: one with a vacuum and another where air and convection are present.

We would expect the trace in the experiment with a vacuum to have an extremely high temperature, and expect the one with air to reach an equilibrium temperature.

Moreover, with air surrounding the trace, we expect the surrounding of the trace to heat up and have a higher temperature. On the other hand, if the trace is in a vacuum, the surrounding of the trace should stay at the ambient temperature.

Fig. 3: ADS PIPro electro-thermal simulation of a trace in a vacuum. Since there is no air to allow convective heat transfer, the trace reaches a very high temperature, ~35,000 °C.

Shown in Fig. 3, as expected, the trace in a vacuum reaches a high temperature, with no heat spreading to the trace’s surroundings. (But if there is no convective heat transfer in a vacuum, how does the warmth of the sun get to the earth!?)

Fig. 4: ADS PIPro electro-thermal simulation of a trace in a lab environment. Since air is present to provide convective heat transfer, the simulated temperature reaches a reasonable value, 56.8 °C.

Shown in Fig. 4 is the analysis done with air around the trace. As expected, the air surrounding the trace provides a means of heat transfer, allowing some energy to escape instead of all being trapped in the trace. Reading from the result plot, we find the final temperature of the trace to be about 56.8 °C.
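For intuition, a crude steady-state estimate can be made with Newton's law of cooling, $P = h_c A \Delta T$. This sketch assumes a natural-convection coefficient of about 10 W/(m²K), heat exchange from the top face of the trace only, and a trace resistance of roughly 24 mOhm (a back-of-envelope value for a 1 inch, 20 mil, 1-oz trace); none of these numbers come from the post.

```python
# Crude steady-state estimate via Newton's law of cooling: P = h_c * A * dT.
# Assumed values (not from the post): natural convection h_c ~= 10 W/(m^2 K),
# trace resistance ~24 mOhm, heat leaves only through the top face.
h_conv = 10.0                       # convection coefficient, W/(m^2 K)
R = 0.0236                          # assumed trace resistance, ohm
I = 1.0                             # source current, A

P = I**2 * R                        # dissipated power, W
A = (20 * 2.54e-5) * 0.0254         # top face area: 20 mil x 1 inch, m^2
dT = P / (h_conv * A)               # steady-state rise above ambient, K

print(f"P = {P*1e3:.1f} mW, dT ~= {dT:.0f} C above ambient")
```

This crude bound lands near 180 °C above ambient, much higher than the simulated 56.8 °C, because it ignores conduction into the FR4 and heat spreading across the board. The full electro-thermal solve includes those additional heat paths, which is exactly why a simulator beats pencil and paper here.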

# The World is Not Broken and Math is Good

After the analysis, we are now sure the world is not broken and our calculations are correct. It is our assumption that needs to be improved. We took the air around us for granted and forgot to include it in our initial analysis.

By performing the consistency tests, we found the solution to the melting trace paradox and now have a better understanding of the thermal aspect of the current source and the trace.

That's this week's Tim's Blackboard. See you in two weeks!