I have the following setup:
In this question I'm supposed to present and explain the spectrum results for different sampling times (t_s = 0.1, 0.25, 0.5, 0.7, 1 sec), so I created five copies of this setup, each with the corresponding sampling time set in both the zero-order hold block and the spectrum analyzer block.
The pulse generator parameters are: amplitude of 1, period of 1 sec, duty cycle of 50%, and phase delay of -0.25 sec (to center the pulse around t = 0).
The zero-order hold's sample time is set to the corresponding t_s, as explained above.
The spectrum analyzer parameters are: buffer length and number of FFT points both set to 512, plot after 128 points, and the sample rate set to match the zero-order hold block.
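To make sure I understand what each block is doing, here is a minimal MATLAB sketch of the same chain. This is my own approximation of the Pulse Generator / zero-order hold / spectrum analyzer blocks, not the blocks themselves; in particular the mod() expression for the pulse and the 1/N scaling of the FFT are assumptions on my part.

```matlab
% Script-level approximation of the Simulink chain:
% pulse generator -> zero-order hold (sampling at t_s) -> 512-point FFT.
Tsim  = 100;                      % simulation time [s]
Nfft  = 512;                      % buffer length = FFT length
tsAll = [0.1 0.25 0.5 0.7 1];     % the five sample times

figure;
for k = 1:numel(tsAll)
    ts = tsAll(k);
    t  = 0:ts:Tsim;                          % sample instants seen by the ZOH
    % Pulse: amplitude 1, period 1 s, duty cycle 50%, phase delay -0.25 s,
    % i.e. high on [n - 0.25, n + 0.25) for integer n.
    x  = double(mod(t + 0.25, 1) < 0.5);

    n  = min(Nfft, numel(x));                % samples available for one buffer
    X  = fft(x(1:n), Nfft) / n;              % zero-pads if the run is shorter
    f  = (0:Nfft-1) / (Nfft*ts);             % frequency axis [Hz]

    subplot(numel(tsAll), 1, k);
    stem(f(1:Nfft/2), abs(X(1:Nfft/2)), 'Marker', 'none');
    title(sprintf('t_s = %.2f s', ts));
    xlabel('f [Hz]'); ylabel('|X(f)|');
end
```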
I don't quite understand the results I get. In all of them the time signal looks chopped (it seems to be missing part of the simulation; the simulation time is 100 sec), the magnitude is always 1, and the phase shows only small floating-point errors.
I do understand the time graph I get in each case, but I don't understand the difference between the spectra for the different sample rates; they all look the same (apart from some floating-point error in the phase).
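One thing I did check (this is just my own arithmetic, not something confirmed from the block documentation): a single 512-sample buffer covers 512*t_s seconds, so for t_s = 0.1 one buffer spans only 51.2 s of the 100 s run, while for the other sample times the run produces fewer than 512 samples in total. That may be related to why the time trace looks chopped. The quick check:

```matlab
% How much time does one 512-sample buffer cover, and how many samples
% does the 100 s simulation actually produce, for each t_s?
Tsim  = 100;
Nfft  = 512;
tsAll = [0.1 0.25 0.5 0.7 1];

for ts = tsAll
    bufferSpan = Nfft * ts;               % seconds covered by one full buffer
    nSamples   = numel(0:ts:Tsim);        % samples produced in 100 s
    fprintf('t_s = %.2f s: one buffer spans %6.1f s, run gives %4d samples\n', ...
            ts, bufferSpan, nSamples);
end
```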
For t_s = 0.1:
For t_s = 0.25:
For t_s = 0.5:
For t_s = 0.7:
For t_s = 1, for some reason it doesn't show anything. I do understand that, since it samples exactly at the pulse period, the sampled time signal should be constant at 1.
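To double-check my reading of the t_s = 1 case, a tiny test with the same hand-rolled pulse expression as above (again, my approximation of the Pulse Generator block, not the block itself):

```matlab
% With period 1 s, 50% duty cycle and a -0.25 s phase delay, the pulse is
% high on [n - 0.25, n + 0.25), so sampling at t = 0, 1, 2, ... should
% return a constant 1.
t = 0:1:100;                          % sample instants for t_s = 1 s
x = double(mod(t + 0.25, 1) < 0.5);   % same pulse model as in the sketch above
disp(unique(x))                       % prints 1: every sample lands on the high part
```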

(I'll also post this to the signal processing stack exchange.)

Comment from another user: It's not a good idea, nor allowed, to cross-post; it upsets both communities. Pick one or the other, so you don't split the answer effort. BTW, the phase you show is 0 +/- rounding noise on 64-bit reals, so you can forget about trying to interpret any apparent detail in those traces.