Hi guys,

I am trying to get into BER simulation. I started with the BPSK case of BER_Validation_prj.

Here my questions:

1. I noted that the result depends significantly on the number of samples per symbol. That should not be the case, should it?

2. I had a closer look at the root raised cosine filter (LPF_RaisedCosine_Timed) used in the project. The plots of the two versions (model with pulse equalization and impulse model) do not look like the ones I know from the literature. They do not even satisfy the first Nyquist criterion. I ran a data flow simulation with an impulse at the input of the filter, and an FFT block and a timed sink at the output, to determine the transfer function and the impulse response. Unfortunately, the documentation gives no clue at all.
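One thing worth checking on question 2: a root-raised-cosine filter by itself is not expected to satisfy the first Nyquist criterion. Only the cascade of the transmit and receive RRC filters, i.e. the full raised-cosine pulse, has (near-)zero crossings at the symbol instants. A minimal sketch using the textbook RRC formula (the parameter values here are arbitrary examples, not taken from the project):

```python
import math

def rrc_tap(t, T, beta):
    # Textbook root-raised-cosine impulse response (symbol period T, rolloff beta),
    # with the two removable singularities handled explicitly.
    if abs(t) < 1e-12:
        return (1.0 + beta * (4.0 / math.pi - 1.0)) / T
    if abs(abs(t) - T / (4.0 * beta)) < 1e-12:
        return (beta / (T * math.sqrt(2.0))) * (
            (1 + 2 / math.pi) * math.sin(math.pi / (4 * beta))
            + (1 - 2 / math.pi) * math.cos(math.pi / (4 * beta)))
    x = t / T
    num = math.sin(math.pi * x * (1 - beta)) \
        + 4 * beta * x * math.cos(math.pi * x * (1 + beta))
    return num / (math.pi * x * (1 - (4 * beta * x) ** 2) * T)

sps, span, beta, T = 8, 8, 0.35, 1.0      # arbitrary example values
n = 2 * span * sps + 1
rrc = [rrc_tap((i - span * sps) / sps * T, T, beta) for i in range(n)]

# Cascade of TX and RX filters: RRC convolved with itself gives a raised cosine.
rc = [sum(rrc[k] * rrc[i - k] for k in range(max(0, i - n + 1), min(i + 1, n)))
      for i in range(2 * n - 1)]

center = len(rc) // 2
# First Nyquist criterion: the cascade is (nearly) zero at nonzero symbol instants,
# up to a small truncation error; the single RRC stage is not.
ratios = [abs(rc[center + k * sps]) / rc[center] for k in range(1, span)]
```

So if the plotted curve is a single RRC stage, missing the Nyquist zero crossings is expected behavior, not a bug.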

I appreciate any idea or insight.

Thank you. Wolfram


I thought that the interpolator is taking care of the optimal sampling point.

A comment on the main point of the issue: I think my mistake is that I am measuring the filter with impulses, which are not used in the actual application.

In a book I found a passage about the impulse response being adapted to the finite rectangular pulse, so that the filter produces the same output as if it were excited with an ideal Dirac impulse. That would also fit the option "Model with pulse equalization".
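If that is indeed what "Model with pulse equalization" means, the mechanism can be sketched numerically: pre-divide the desired (Dirac-excited) frequency response by the spectrum of the finite rectangular excitation pulse, so that driving the equalized filter with the rectangular pulse reproduces the ideal response. This is my guess at the mechanism, not the documented algorithm, and the tap values and lengths below are made up for illustration:

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform (fine for this toy size).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def circ_conv(a, b):
    # Circular convolution, matching the DFT-domain product.
    N = len(a)
    return [sum(a[k] * b[(i - k) % N] for k in range(N)) for i in range(N)]

N, pulse_len = 64, 5
# Hypothetical desired (Dirac-excited) impulse response, zero-padded.
h = [0.1, 0.3, 0.5, 0.3, 0.1] + [0.0] * (N - 5)
# Finite rectangular excitation pulse (e.g. a DAC hold over one symbol).
rect = [1.0 / pulse_len] * pulse_len + [0.0] * (N - pulse_len)

# Pulse equalization: divide out the rect spectrum (it has no zeros here
# because pulse_len and N are coprime).
H, R = dft(h), dft(rect)
h_eq = idft([Hk / Rk for Hk, Rk in zip(H, R)])

# Check: the rectangular pulse through the equalized filter reproduces h.
restored = circ_conv(rect, h_eq)
```

Under this reading, measuring the equalized filter with a Dirac-like impulse would show a distorted response by design, since the compensation assumes rectangular-pulse excitation.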

But then again, I would like the documentation to provide much more background information on the parameters ...

Wolfram


The verification is pretty simple. Just play with the parameter in the example mentioned above and you will see.

Finally, I thought you were part of tech support. Is that correct?

Thank you, regards Wolfram

I opened the design BER_BPSK from BER_Validation_prj and changed SampPerBit from 10 to 24. I also reduced the sweep to go only up to 8 dB of EbNo instead of 10 dB (to save simulation time, since I increased the number of samples per bit). The results I got were close (graphically) to the ones for 10 samples per bit.

I then looked at the relative error (new-old)/old (where new are the new results and old are the old results). I saw relative errors ranging from -17% to 22.5%! This does sound like a lot.

I then computed the relative error of the new and old BER simulation results but relative to the theoretical results (new-theory)/theory and (old-theory)/theory. I saw results ranging from -12.7% to 13%.

The truth is that if you do the full statistical analysis with hypothesis testing and confidence intervals (I am not sure whether you are familiar with these terms), it turns out that an EstRelVariance of 0.01 (1%) corresponds to a 95% confidence interval of about ±20% relative error! Stated differently, a 1% EstRelVariance gives you a BER estimate that is within 20% of the actual value 95% of the time. 20% is quite a lot of tolerance, and that still leaves 5% of the time where the estimate may be more than 20% off the actual value. The errors that I saw were within that 20% tolerance.
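The arithmetic connecting a 1% EstRelVariance to the ±20% interval can be checked directly. A minimal sketch, assuming the BER estimate is approximately Gaussian around the true value (which holds for the large error counts used here):

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

est_rel_variance = 0.01                  # EstRelVariance = 1%
rel_std = math.sqrt(est_rel_variance)    # relative standard deviation = 10%
z95 = 1.96                               # two-sided 95% point of the normal distribution
rel_error_95 = z95 * rel_std             # half-width of the 95% interval, roughly 20%

# Equivalently: probability that the estimate is off by more than 20%.
p_exceed = 2 * q_func(0.20 / rel_std)    # a few percent, consistent with "5% of the time"
```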

You can see this variation even with the same number of samples per bit (just simulate with a different random seed). This is what I did. I opened the original BER_BPSK design, set EbNo to 4 in the VAR block VAR1 and deactivated the ParamSweep. I then defined a new variable called N and placed a new ParamSweep to sweep N from 1 to 1000 with step 1. Finally, I set the DefaultSeed parameter of the DF controller to an expression that used N, e.g. 24*N*N-83*N+873, and ran the simulation.

After the simulation was over, I calculated (in the data display) the relative error of the 1000 BER estimates using the theoretical BER value at Eb/No = 4 dB as a reference. I then counted how many times the error was more than 20% (plus or minus). I got 53 times (out of 1000), which is very close to the 5% I mentioned above. I repeated the experiment with a different expression for DefaultSeed; for example, with 8375*N-2355 I got 39 BER estimates with more than 20% error.
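The same seed-sweep experiment can be mimicked in a standalone script. This is only a sketch, not the actual DF simulation: it runs a plain Monte Carlo BPSK link over AWGN, with a made-up bit count and seed, sized so that the estimator's relative variance is roughly 1%:

```python
import math
import random

def bpsk_ber_theory(ebno_db):
    # Theoretical BPSK bit error rate: Q(sqrt(2*Eb/No)) = 0.5*erfc(sqrt(Eb/No)).
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def estimate_ber(ebno_db, n_bits, rng):
    # Brute-force Monte Carlo: send all-ones (+1) over AWGN and count sign flips
    # (valid by symmetry of antipodal signaling).
    ebno = 10 ** (ebno_db / 10)
    sigma = math.sqrt(1 / (2 * ebno))    # noise std for unit-energy bits
    errors = sum(1 for _ in range(n_bits) if rng.gauss(1.0, sigma) < 0)
    return errors / n_bits

theory = bpsk_ber_theory(4.0)            # about 1.25e-2 at Eb/No = 4 dB
n_bits = 8000                            # ~100 expected errors -> rel. variance ~1%
trials = 200
rng = random.Random(873)                 # arbitrary fixed seed, analogous to DefaultSeed
over_20 = sum(1 for _ in range(trials)
              if abs(estimate_ber(4.0, n_bits, rng) - theory) / theory > 0.20)
fraction = over_20 / trials              # expect a few percent, near the 5% figure
```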

These are the equations I used in the data display:

berValues = BPSK_BER_Sim[::,0]

relErrorPercent = 100*(berValues-BPSK_BER_Theory[4])/BPSK_BER_Theory[4]

Over20Percent = if ( abs( relErrorPercent ) > 20 ) then 1 else 0

total = sum(Over20Percent)
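For readers without the data display at hand, the same post-processing can be sketched in Python (the BER values and the theoretical reference below are made up for illustration):

```python
# Hypothetical stand-ins for the data-display variables above.
theory_at_4db = 1.25e-2                             # BPSK_BER_Theory at Eb/No = 4 dB
ber_values = [1.31e-2, 9.8e-3, 1.52e-2, 1.22e-2]    # made-up swept BER estimates

# relErrorPercent = 100*(berValues - theory)/theory
rel_error_percent = [100 * (b - theory_at_4db) / theory_at_4db for b in ber_values]

# Over20Percent = 1 where |relErrorPercent| > 20, else 0; total = sum(...)
over_20_percent = [1 if abs(e) > 20 else 0 for e in rel_error_percent]
total = sum(over_20_percent)
```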

I hope this helps.

By the way, I am not in Tech Support. I am part of the R&D group.