I am a new user to ADS and studying the examples presented in the BER Validation Guide, I have some problems understanding some things:

1. If I change the rolloff to 0.22 the simulated SER curve deviates significantly from theory. Why does changing the Rolloff factor affect the SER performance? I did not find anywhere any information on the impact of the rolloff on delay.

2. When plotting the Eye diagram to determine the optimal sampling instant, I have the TK sample delay slider to zero and in the Eye diagram I see that the best sample should be at a different value than what I know to be the optimal one from looking at the received samples and from the BERStart equation. How can I determine what the best sample is from the Eye diagram?

I look forward to hearing from you.


Ideal Raised Cosine filters have an infinite impulse response. For practical purposes the impulse response is truncated to a finite filter length (filter length = 2 * Delay). The smaller the ExcessBw, the longer it takes for the filter's impulse response to die out, so to characterize the filter's behavior accurately we need to truncate it at a larger length.
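To see why a smaller ExcessBw needs a longer filter, you can measure how much pulse energy falls outside a given truncation window. The sketch below uses the plain raised-cosine pulse as a stand-in (the design uses root raised cosine, but the qualitative tail behavior is the same); the 0.35 rolloff is only an assumed comparison point, not necessarily the design's default.

```python
import numpy as np

def raised_cosine(t, T, alpha):
    """Raised-cosine impulse response; limit value used at the singular points."""
    x = np.asarray(t, dtype=float) / T
    denom = 1.0 - (2.0 * alpha * x) ** 2
    safe = np.where(np.abs(denom) < 1e-10, 1.0, denom)
    h = np.sinc(x) * np.cos(np.pi * alpha * x) / safe
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha))  # value at t = +-T/(2*alpha)
    return np.where(np.abs(denom) < 1e-10, limit, h)

T = 1.0                                        # symbol time (normalized)
sps = 16                                       # samples per symbol
t = np.arange(-64 * sps, 64 * sps + 1) * (T / sps)

def tail_fraction(alpha, delay_in_symbols):
    """Fraction of pulse energy outside +-delay_in_symbols symbol times."""
    e = raised_cosine(t, T, alpha) ** 2
    return e[np.abs(t) > delay_in_symbols * T].sum() / e.sum()

# Lower rolloff -> slower tail decay -> more energy lost to a given truncation
print(tail_fraction(0.35, 8))
print(tail_fraction(0.22, 8))    # larger: the 0.22 pulse needs a longer filter
```

Increasing the delay (truncation window) shrinks the lost tail energy, which is exactly what Sim3 and Sim4 do.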

I ran the following simulations with the BER_16QAM design:

Sim1: run simulation with existing setup

Sim2: run simulation with ExcessBw = 0.22

Sim3: run simulation with ExcessBw = 0.22 and changed Delay of RRC filters inside QPSK_Mod and QPSK_Demod to 8*SymbolTime (I also changed Ref_Delay variable in VAR block VAR1 to 16 * SymTime + RF_Channel_Delay + 2 * TStep)

Sim4: run simulation with ExcessBw = 0.22 and changed Delay of RRC filters inside QPSK_Mod and QPSK_Demod to 12*SymbolTime (I also changed Ref_Delay variable in VAR block VAR1 to 24 * SymTime + RF_Channel_Delay + 2 * TStep)

Unfortunately, the QPSK_Mod and QPSK_Demod components do not have parameters at the top level to set the filter Delay.

The SER results are shown below. As you can see, the SER values for Eb/No = 13 dB and 14 dB get closer to the results of Sim1 as the filter length increases.

Eb/No (dB)  Sim1       Sim2       Sim3       Sim4
4           2.228E-01  2.063E-01  2.116E-01  2.056E-01
5           1.690E-01  1.556E-01  1.555E-01  1.386E-01
6           1.016E-01  1.045E-01  1.039E-01  1.039E-01
7           6.473E-02  6.747E-02  6.834E-02  6.695E-02
8           3.578E-02  3.662E-02  3.656E-02  3.726E-02
9           1.630E-02  1.690E-02  1.603E-02  1.513E-02
10          6.902E-03  7.393E-03  7.076E-03  7.156E-03
11          2.237E-03  2.476E-03  2.480E-03  2.169E-03
12          5.166E-04  6.663E-04  6.315E-04  5.725E-04
13          1.000E-04  1.490E-04  9.700E-05  9.900E-05
14          1.000E-05  2.300E-05  1.800E-05  1.400E-05

My explanation for why this happens is that the RRC filter in the receiver filters out noise. When the filter length is too short to characterize the filter's response accurately, it does not reject noise as well as the ideal infinite-length filter would, which effectively results in a smaller Eb/No at the decision point than expected.
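One rough way to put a number on this effect (my own framing, not an ADS result) is the matched-filter correlation: truncating the receive RRC makes it imperfectly matched to the transmit pulse, and the SNR loss at the sampling instant scales with the squared correlation between the truncated filter and the (near-)ideal one. The sketch below uses the textbook RRC formula with rolloff 0.22 and a very long filter as the "ideal" reference; it ignores the separate ISI contribution.

```python
import numpy as np

def rrc(t, T, a):
    """Textbook root-raised-cosine impulse response with rolloff a."""
    x = np.asarray(t, dtype=float) / T
    h = np.zeros_like(x)
    m0 = np.isclose(x, 0.0)                      # t = 0
    ms = np.isclose(np.abs(x), 1.0 / (4 * a))    # singular points t = +-T/(4a)
    mg = ~(m0 | ms)                              # generic points
    xg = x[mg]
    num = np.sin(np.pi * xg * (1 - a)) + 4 * a * xg * np.cos(np.pi * xg * (1 + a))
    den = np.pi * xg * (1 - (4 * a * xg) ** 2)
    h[mg] = num / den
    h[m0] = 1 - a + 4 * a / np.pi
    h[ms] = (a / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * a))
                                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * a)))
    return h

T, sps, a = 1.0, 16, 0.22
t_ref = np.arange(-64 * sps, 64 * sps + 1) * (T / sps)
h_ref = rrc(t_ref, T, a)
h_ref /= np.linalg.norm(h_ref)                   # unit-energy reference filter

def snr_loss_db(delay_in_symbols):
    """Matched-filter SNR loss when the receive RRC is truncated to +-delay symbols."""
    h_t = np.where(np.abs(t_ref) <= delay_in_symbols * T, h_ref, 0.0)
    rho = np.dot(h_t, h_ref) / np.linalg.norm(h_t)   # correlation with reference
    return -20.0 * np.log10(rho)

for d in (5, 8, 12):
    print(d, snr_loss_db(d))      # loss shrinks as the filter gets longer
```

The loss shrinks monotonically as the delay grows, consistent with Sim3 and Sim4 approaching Sim1.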

To determine the optimal sampling instant from the eye diagram I used the original BER_16QAM design and followed the steps below:

1) deactivated ParamSweep and berMC sinks

2) activated TimedSinks Test_I and TestQ

3) set DefaultTimeStart in DF controller to 10*SymTime (to avoid filter transient)

4) set DefaultTimeStop in DF controller to 200*SymTime

5) in the data display, place a rectangular plot and plot eye(Test_I, 1e6) (1e6 is the symbol rate)
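Conceptually, the eye() function folds the waveform modulo the symbol period so that all symbol intervals overlay; the optimal sampling phase is where the worst-case opening is widest. The toy sketch below (my own illustration, not the ADS implementation) builds a raised-cosine-shaped random binary signal with 10 samples per symbol and finds that phase; here the pulse center is deliberately aligned to phase 0, whereas in the actual design the filter and channel delays shift the opening to 0.6 usec.

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 10                                   # samples per symbol, as in the design
n_sym = 400
syms = rng.choice([-1.0, 1.0], size=n_sym)

# Zero-ISI raised-cosine pulse (rolloff 0.22), truncated to +-8 symbols
k = np.arange(-8 * sps, 8 * sps + 1) / sps
denom = 1.0 - (2 * 0.22 * k) ** 2
pulse = (np.sinc(k) * np.cos(np.pi * 0.22 * k)
         / np.where(np.abs(denom) < 1e-10, 1.0, denom))

up = np.zeros(n_sym * sps)
up[::sps] = syms
y = np.convolve(up, pulse, mode="same")    # pulse center aligned with symbol instants

# Fold modulo the symbol period and measure the opening at each phase offset
y = y[20 * sps:-20 * sps]                  # drop edge transients
folded = y.reshape(-1, sps)
opening = np.abs(folded).min(axis=0)       # worst-case distance from the threshold
best_phase = int(opening.argmax())
print(best_phase)                          # widest opening at the zero-ISI instant
```

Reading the maximum opening off the plotted eye, as in step 5, is the graphical version of this argmax.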

The maximum eye opening occurs at 0.6 usec, so the optimal sampling instants for the Test signal are 10*SymTime + 0.6 usec + N*SymTime. (10*SymTime is the time instant we started recording data in the TimedSink). The BER_Start variable is set to

BER_Start = 5 * SymTime + int( ( SampPerSym - 1 ) / 2 ) * TStep + Ref_Delay

where

Ref_Delay = 8 * SymTime + RF_Channel_Delay + 2 * TStep

RF_Channel_Delay = int(EbNo) * TStep

EbNo = 20

After doing all the math BER_Start = 5 * SymTime + int( ( 10 - 1 ) / 2 ) * TStep + 8 * SymTime + 20 * TStep + 2 * TStep = 13 * SymTime + 26 * TStep.

Since SymTime = 10 * TStep, BER_Start = 13 * SymTime + 26 * TStep = 15 * SymTime + 6 * TStep = N * SymTime + 0.6 usec (SymTime = 1 usec for the 1e6 symbol rate).
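The hand calculation above can be double-checked in a few lines, assuming SymTime = 1 usec (from the 1e6 symbol rate) and SampPerSym = 10, as used in the arithmetic:

```python
SymTime = 1e-6                 # 1 / (1e6 symbol rate)
SampPerSym = 10
TStep = SymTime / SampPerSym
EbNo = 20

RF_Channel_Delay = int(EbNo) * TStep
Ref_Delay = 8 * SymTime + RF_Channel_Delay + 2 * TStep
BER_Start = 5 * SymTime + int((SampPerSym - 1) / 2) * TStep + Ref_Delay

print(BER_Start)               # 15 * SymTime + 0.6 usec = 15.6 usec
```

This lands at 15 * SymTime + 0.6 usec, matching the 0.6 usec eye opening modulo the symbol period.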

Let me know if you have any questions.