I am trying to understand how the Nyquist–Shannon sampling theorem applies to sampling in the time domain. Suppose I want to sample a function whose time constants I know; from what I understand, the bandwidth is determined by the shortest time constant. After that, I'm shaky. Wikipedia (https://en.wikipedia.org/wiki/Nyquist_frequency) appears to say that the Nyquist rate is twice the bandwidth as I defined it above, and that the Nyquist frequency should lie somewhere between that rate and the bandwidth. The signal processing book I consulted (Oppenheim and Schafer) seems to say the same thing, but it also shows the sampling frequency as being $2\pi$ times the bandwidth (I think), for reasons I don't understand.
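My best guess about the $2\pi$ (and please correct me if this is wrong) is that the book writes frequencies in radians per second rather than in hertz, so that with a sampling period $T$ the sampling frequency is

$$\Omega_s = \frac{2\pi}{T} = 2\pi f_s,$$

and likewise a bandwidth of $B$ hertz becomes $2\pi B$ radians per second. If that's right, the $2\pi$ is just a unit conversion rather than a different condition, but I'm not sure.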
Based on these sources, I would guess that my sampling rate in the time domain must fall within that interval in order not to produce artifacts when reconstructing the function I mentioned. From what I understand, aliasing is one such artifact.
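To make my confusion concrete, here is a small sketch of what I believe aliasing looks like (the numbers are my own made-up example, not from either source): a 7 Hz cosine sampled at 10 Hz, which is below the 14 Hz that "twice the bandwidth" would require, should be indistinguishable at the sample points from a 3 Hz cosine, since 10 − 7 = 3.

```python
import numpy as np

# Hypothetical example with made-up frequencies: sampling a 7 Hz cosine
# at fs = 10 Hz (below 2 * 7 = 14 Hz) makes it land on exactly the same
# sample values as a 3 Hz cosine, because 10 - 7 = 3.
fs = 10.0                            # sampling rate in Hz
n = np.arange(20)                    # sample indices
t = n / fs                           # sample times in seconds

x_high = np.cos(2 * np.pi * 7 * t)   # the "true" 7 Hz signal
x_alias = np.cos(2 * np.pi * 3 * t)  # its alias below half the sampling rate

print(np.allclose(x_high, x_alias))  # prints True
```

If my understanding is right, this is why the two signals can't be told apart after sampling, and why the sampling rate has to exceed twice the bandwidth.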
Can somebody help clear this up for me? It is very muddy in my head. (My background is in an area of science where signal processing is not part of the standard education, and I am trying to understand this because it seems quite fundamental.)