# Nyquist-Shannon sampling theorem

The Nyquist-Shannon sampling theorem is a fundamental theorem in the field of information theory, in particular telecommunications. It is also known as the Whittaker-Nyquist-Kotelnikov-Shannon sampling theorem, or simply the sampling theorem.

The theorem states that:

when sampling a signal (e.g., converting from an analog signal to digital), the sampling frequency must be greater than twice the bandwidth of the input signal in order to be able to reconstruct the original perfectly from the sampled version.

If B is the bandwidth and Fs is the sampling rate, then the theorem can be stated mathematically as follows (called the "sampling condition" from here on):

2B < Fs

IMPORTANT NOTE: This theorem is commonly misstated or misunderstood (and even mistaught). The sampling rate must be greater than twice the signal bandwidth, not twice the maximum/highest frequency. The two coincide only for a baseband signal, that is, a signal whose band extends down to zero hertz. Not all signals are baseband signals (e.g., FM radio). This distinction finds practical application in the "IF-sampling" techniques used in some digital receivers.


## Aliasing

If the sampling condition is not satisfied, then frequencies will overlap (see the proof). This overlap is called aliasing.

To prevent aliasing, two things can readily be done:

1. Increase the sampling rate.
2. Introduce an anti-aliasing filter, or make an existing anti-aliasing filter more stringent.

The purpose of the anti-aliasing filter is to restrict the bandwidth of the signal so that it satisfies the sampling condition. This holds in theory, but is not achievable in practice, because a real signal always has some energy outside of any finite bandwidth. However, that out-of-band energy can be made small enough that the aliasing effects are negligible.
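The aliasing effect can be demonstrated numerically. The following sketch (with made-up example frequencies) shows that a tone above the Nyquist limit produces exactly the same samples as a lower-frequency tone:

```python
import numpy as np

fs = 100.0             # sampling rate, Hz (example value)
f_true = 70.0          # tone above the Nyquist limit fs/2 = 50 Hz
f_alias = fs - f_true  # 30 Hz: where the tone shows up after sampling

n = np.arange(32)      # sample indices
x = np.cos(2 * np.pi * f_true * n / fs)
y = np.cos(2 * np.pi * f_alias * n / fs)

# The samples of the 70 Hz tone are indistinguishable from a 30 Hz tone.
print(np.allclose(x, y))  # True
```

Once sampled, no processing can tell the two tones apart, which is why aliasing must be prevented before sampling.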

## Downsampling

When a signal is downsampled, the sampling condition must still be satisfied at the new, lower rate. This is ensured by filtering the signal with an appropriate anti-aliasing filter before discarding samples.
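A numpy-only sketch (example frequencies chosen for illustration) of what happens when this filtering step is skipped: a tone above the new Nyquist limit folds down into the retained band.

```python
import numpy as np

fs = 1000.0                  # original sampling rate, Hz (example value)
n = np.arange(1000)
t = n / fs
# Two tones: 50 Hz survives decimation by 2; 400 Hz is above the new
# Nyquist limit of 250 Hz and aliases to 100 Hz if not filtered out.
x = np.cos(2 * np.pi * 50 * t) + np.cos(2 * np.pi * 400 * t)

x_dec = x[::2]               # naive downsampling by 2, no anti-aliasing filter
fs_dec = fs / 2

spectrum = np.abs(np.fft.rfft(x_dec))
freqs = np.fft.rfftfreq(len(x_dec), d=1 / fs_dec)
peaks = freqs[spectrum > 0.25 * len(x_dec)]
print(peaks)                 # peaks at 50 Hz and at the alias 500 - 400 = 100 Hz
```

A proper decimator would low-pass filter `x` below 250 Hz first, so that only the 50 Hz peak survives.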

## Critical frequency

The critical frequency is defined as exactly twice the bandwidth, i.e., the rate at which the sampling condition would hold with equality instead of strict inequality.

If the sampling frequency is exactly twice the highest frequency of the input signal, then phase mismatches between the sampler and the signal will distort the signal. For example, sampling cos(pi * t) at t=0,1,2... will give you the discrete signal cos(pi * n), as desired. However, sampling the same signal at t=0.5,1.5,2.5... will give you a constant zero signal. These two sets of samples, which differ only in phase and not frequency, give dramatically different results because they sample at exactly the critical frequency.
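The cos(pi * t) example above can be reproduced directly in a few lines:

```python
import numpy as np

# Sampling cos(pi * t) (frequency 0.5 Hz) at exactly the critical
# rate of 1 Hz: the result depends entirely on the sampling phase.
n = np.arange(8)

in_phase = np.cos(np.pi * n)          # t = 0, 1, 2, ...   -> (-1)^n
offset   = np.cos(np.pi * (n + 0.5))  # t = 0.5, 1.5, ...  -> all zeros

print(in_phase)  # alternating +1, -1
print(offset)    # ~0 everywhere (up to floating-point rounding)
```

This is why the sampling condition is a strict inequality: at exactly the critical frequency, reconstruction depends on the sampling phase and can fail completely.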

## Historical background

The theorem was first formulated by Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), but was only formally proven by Claude E. Shannon in 1949 ("Communication in the presence of noise"). Related results were published by Kotelnikov in 1933, Whittaker in 1935, and Gabor in 1946.

Mathematically, the theorem is formulated as a statement about the Fourier transform.

If a function s(x) has a Fourier transform F[s(x)] = S(f) = 0 for |f| ≥ W, then it is completely determined by giving the value of the function at a series of points spaced 1/(2W) apart. The values sn = s(n/(2W)) are called the samples of s(x).

The minimum sampling frequency that allows reconstruction of the original signal, that is, 2W samples per unit distance, is known as the Nyquist rate (also called the Nyquist frequency). The time between samples is called the Nyquist interval.

If S(f) = 0 for |f| > W, then s(x) can be recovered from its samples by the Nyquist-Shannon interpolation formula.
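For reference, the interpolation formula can be written explicitly, with $\mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$ and the samples $s_n = s(n/(2W))$ as defined above:

```latex
s(x) = \sum_{n=-\infty}^{\infty} s_n \, \mathrm{sinc}\left( 2 W x - n \right)
```

Each sample contributes one shifted sinc pulse, and the pulses sum to reproduce s(x) exactly whenever the condition S(f) = 0 for |f| > W holds.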

A well-known consequence of the sampling theorem is that a signal cannot be both bandlimited and time-limited. To see why, assume such a signal exists, and sample it faster than the Nyquist rate. Because the signal is time-limited, only finitely many of its samples are nonzero, and these finitely many time-domain coefficients determine the entire signal. Equivalently, the entire spectrum of the bandlimited signal would be expressible in terms of these finitely many coefficients, that is, as a (trigonometric) polynomial of finite order. But the spectrum must vanish on an entire interval beyond the band edge, and such an interval contains infinitely many points; by the fundamental theorem of algebra, a polynomial cannot have more zeros than its order. This contradiction shows that the original assumption, that a time-limited and bandlimited signal exists, is incorrect.

## Undersampling

When sampling a non-baseband signal, the theorem states that the sampling rate need only be greater than twice the bandwidth. This can result in a sampling rate that is less than the carrier frequency of the signal.

Consider FM radio to illustrate the idea of undersampling. In the US, FM radio operates on the frequency band from 88 MHz to 108 MHz, a bandwidth of 20 MHz. To satisfy the sampling condition, the sampling rate needs to be greater than 40 MHz. Since 40 MHz is well below both 88 MHz and 108 MHz, this is a scenario of undersampling.
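As a toy numerical illustration (a single tone standing in for the band, with "MHz" values used directly as Hz), undersampling folds a high-frequency tone down to a predictable low-frequency alias. Note that in a real receiver the rate must also be chosen so that the entire band lands inside a single alias-free zone; the values here are purely illustrative:

```python
import numpy as np

fs = 44.0         # sampling rate (think "44 MHz"; illustrative value)
f_carrier = 90.0  # a tone inside a band well above fs

# Undersampling folds f_carrier down by an integer multiple of fs:
f_alias = abs(f_carrier - round(f_carrier / fs) * fs)  # |90 - 2*44| = 2

n = np.arange(64)
x = np.cos(2 * np.pi * f_carrier * n / fs)
y = np.cos(2 * np.pi * f_alias * n / fs)
print(f_alias)            # 2.0
print(np.allclose(x, y))  # True: the samples match a 2 "MHz" tone
```

The deliberate aliasing moves the band of interest down to a low intermediate frequency, which is exactly the idea behind IF-sampling receivers.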

If the theorem is misunderstood to mean twice the highest frequency, then the sampling rate would be assumed to need to be greater than 216 MHz. While such a rate does satisfy the correctly applied sampling condition (Fs > 40 MHz), the signal is grossly oversampled.

Note that if the FM radio band is sampled at a rate just above 40 MHz, then a band-pass filter, rather than a low-pass filter, is required as the anti-aliasing filter.

In certain problems, the frequencies of interest do not form an interval, but rather some more interesting set F of frequencies. Again, the required sampling frequency is proportional to the total width (measure) of F. For instance, certain domain decomposition methods fail to converge for the 0th frequency (the constant mode) and some medium frequencies. Then the set of interesting frequencies might be 10 Hz to 100 Hz together with 110 Hz to 200 Hz. In this case, one would need to sample at 360 Hz (twice the total width of 180 Hz), not at 400 Hz, to fully capture these signals.

## Proof

To prove the theorem, consider two continuous signals: an arbitrary continuous signal f(t) and a Dirac comb δT(t) with period T.

Let the result of the multiplication be $f^{*}(t) = f(t) \delta_T(t) = f(t) \sum_{n=-\infty}^{\infty} \delta(t - n T)$

and taking the Fourier transform and applying the multiplication/convolution property:

$$F^{*}(\omega) = \mathcal{F} \{f^{*}(t)\} = \frac{1}{2 \pi} F(\omega) * \mathcal{F}\{\delta_T(t)\}$$

$$= \frac{1}{2 \pi} F(\omega) * \left\{ \frac{2 \pi}{T} \sum_{n = -\infty}^{\infty} \delta (\omega - n \omega_s) \right\}$$

$$= \frac{1}{T} \sum_{n = -\infty}^{\infty} \int_{-\infty}^{\infty} F(\tau) \, \delta(\omega - n \omega_s - \tau) \, d \tau$$

and by the sifting property of the Dirac delta, the integral can be removed $F^{*}(\omega) = \frac{1}{T} \sum_{n = -\infty}^{\infty} F(\omega - n \omega_s)$

where $\omega_s = \frac{2 \pi}{T}$ is the sampling rate (in radians per unit time).

The end result is a summation of shifted F(ω).
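The periodicity of the sampled spectrum can be checked numerically. A small sketch (arbitrary example signal and rates) evaluates the spectrum of the sampled signal at ω and at ω + ωs:

```python
import numpy as np

T = 0.01            # sampling interval (example value)
ws = 2 * np.pi / T  # sampling rate in rad/s
n = np.arange(200)
# Arbitrary decaying example signal, already sampled at interval T
x = np.exp(-0.5 * n * T) * np.cos(2 * np.pi * 5 * n * T)

def spectrum(w):
    # Spectrum of the sampled signal at angular frequency w
    # (discrete-time Fourier transform of the samples)
    return np.sum(x * np.exp(-1j * w * n * T))

w0 = 2 * np.pi * 7.3  # any test frequency
# The shifted copies make the spectrum periodic with period ws:
print(np.isclose(spectrum(w0), spectrum(w0 + ws)))  # True
```

Shifting by ωs multiplies each term by exp(-i 2π n) = 1, so the sum is unchanged, matching the summation of shifted copies of F(ω) derived above.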

Let $\omega_{\max}$ be the maximum frequency of F(ω); then F(ω) is supported on $\left[ -\omega_{\max}, \omega_{\max} \right]$, and the bandwidth of F(ω) is $2 \omega_{\max}$. In order for a replica of F(ω) shifted by $\omega_s$ not to overlap its neighbors, the condition $2 \omega_{\max} < \omega_s$ must hold.

So, if ωs is not sufficiently large then the terms of the summation will overlap and aliasing will be introduced.

Although the theorem is stated in terms of the bandwidth rather than the maximum frequency, it can readily be seen that this proof still holds: the proof simply assumes that the bandlimited signal is centered about zero, in which case its bandwidth equals $2 \omega_{\max}$.

## References

• E. T. Whittaker, "On the Functions Which are Represented by the Expansions of the Interpolation Theory," Proc. Royal Soc. Edinburgh, Sec. A, vol. 35, pp. 181-194, 1915.
• H. Nyquist, "Certain topics in telegraph transmission theory," Trans. AIEE, vol. 47, pp. 617-644, Apr. 1928.
• V. A. Kotelnikov, "On the carrying capacity of the ether and wire in telecommunications," Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA, Moscow, 1933 (in Russian).
• C. E. Shannon, "Communication in the presence of noise," Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10-21, Jan. 1949.