Page updated: March 22, 2021
Author: Curtis Mobley

# Autocovariance Functions: Theory

The preceding pages have described the statistical properties of random sea surfaces using variance spectra. This page begins the exploration of an alternate path to the specification of surface roughness properties. Clearly, wind-dependent variance spectra are applicable only to surface waves that are generated by wind. Consider, however, the surface of a flowing river. The river’s surface can have ripples or waves generated by turbulence resulting from unstable shear flow induced by flow over a shallow bottom, or by eddies created as the water flows around rocks in the river. These water surfaces do not depend on the wind speed and can have different statistical properties, and hence different optical properties, than wind-roughened water surfaces. Such surfaces can be described by their autocovariance functions.

Autocovariance functions can be converted to elevation variance spectra via the Wiener-Khinchin theorem. This page shows how that is done. Once a given autocovariance function has been converted to its equivalent elevation variance spectrum, the algorithms of the previous chapters are immediately applicable, even though the variance spectrum is not wind-dependent. Indeed, this conversion enables the Fourier transform methods of the previous chapters to be used to generate random realizations of any surface, not just water surfaces.

As is often the case, there is a large gap between textbook theory—usually developed for continuous variables or an infinite sample size of discrete values—and its implementation in a computer program for a finite sample size of discrete variables. In particular, careful attention must be paid to sampling of an autocovariance function in order to obtain the corresponding variance spectrum, or vice versa. I find it disappointing and frustrating (but not surprising) that numerical matters such as the effects of finite sample size, maximum lag size, and exactly how to sample spectra or autocovariances (in particular, the material on page Autocovariance Functions: Sampling) never seem to be mentioned in textbooks on digital signal processing or related subjects. It is left to the innocent student to spend a few weeks of unfunded time figuring out why various numerical results are not internally consistent or do not perfectly match the textbook theory.

This page begins with a discussion of autocovariances, and then the Wiener-Khinchin theorem is stated. The theorem is numerically illustrated on the next page, first using the wind-dependent Pierson-Moskowitz elevation variance spectrum, for which certain values can be analytically calculated and used to check the numerical results. The modeling of water surfaces generated by shear-induced turbulence is then illustrated, again using analytical functions that allow for a rigorous check on the numerical results.

### Autocovariance

The autocovariance $C_{zz}(\ell)$ of $z(r)$ is defined as

 $C_{zz}(\ell) \equiv \mathcal{E}\left\{ z(r)\, z(r+\ell) \right\}\,,$ (1)

where $\mathcal{E}$ denotes statistical expectation and $\ell$ is the spatial lag. This definition in terms of the expectation holds for both continuous and discrete variables. For the present discussion with $z(r)$ being sea surface elevation, $C_{zz}(\ell)$ shows how strongly the sea surface elevation at one location is correlated to the elevation at a distance $\ell$ away. $C_{zz}(\ell)$ has units of $\mathrm{m}^2$, and $C_{zz}(\ell = 0)$ is the variance of the surface elevation. The autocovariance is an even function of the lag: $C_{zz}(-\ell) = C_{zz}(\ell)$.
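Definition (1) can be illustrated numerically by estimating the expectation from a long sample of a zero-mean process whose autocovariance is known in closed form. The sketch below (Python with numpy; the AR(1) process and all names are illustrative choices, not from the text) checks that $C_{zz}(0)$ equals the variance and that the estimate is even in the lag:

```python
import numpy as np

# Estimate C_zz(l) = E{z(r) z(r+l)} from a long sample of a zero-mean AR(1)
# process, whose exact autocovariance is var * phi**|l| with var = 1/(1-phi^2).
rng = np.random.default_rng(42)
N, phi = 200_000, 0.6
eps = rng.standard_normal(N)
z = np.empty(N)
z[0] = eps[0] / np.sqrt(1.0 - phi**2)   # start in the stationary distribution
for n in range(1, N):
    z[n] = phi * z[n - 1] + eps[n]

def C(lag):
    """Sample estimate of the expectation E{z(r) z(r+lag)}."""
    lag = abs(lag)                      # same products enter +lag and -lag
    return np.mean(z[:N - lag] * z[lag:])

var_true = 1.0 / (1.0 - phi**2)
# C(0) is the variance; C(-l) = C(l); C(l) is close to var_true * phi**|l|
```

The evenness $C_{zz}(-\ell) = C_{zz}(\ell)$ is exact for this estimator because the same products $z(r)z(r+\ell)$ enter both lags.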

Consider an infinite sample of discrete zero-mean surface elevations spaced a distance $\Delta x$ apart. The autocovariance is then computed by (e.g., Eq. 2.6.3 of Proakis and Manolakis (1996) with a minor change in their notation)

 $C_{zz}(\ell) = \sum_{r=-\infty}^{+\infty} z(r)\, z(r+\ell)\,, \quad \text{for} \quad \ell = 0, \pm 1, \pm 2, \ldots\,.$

Here $\ell$ is indexing the lag distance in units of the sample spacing $\Delta x$. For a finite sample of $N$ discrete values, the same authors define the sample autocovariance by (their Eq. 2.6.11)

 $C_{zz}(\ell) = \sum_{r=0}^{N-|\ell|-1} z(r)\, z(r+\ell)\,.$ (2)

As usual, there are competing definitions. For a finite sample of $N$ discrete values, perhaps with a non-zero mean $m$, the IDL autocorrelation function (A_CORRELATE) uses

 $C_{zz}(\ell) = \frac{1}{N} \sum_{r=0}^{N-|\ell|-1} \left[ z(r) - m \right] \left[ z(r+\ell) - m \right] \quad \text{for} \quad -(N-2) \le \ell \le N-2\,,$ (3)

where

 $m = \frac{1}{N} \sum_{r=0}^{N-1} z(r)$

is the sample mean. Matlab computes the autocovariance via

 $C_{zz}(\ell) = \frac{1}{N-1} \sum_{r=0}^{N-|\ell|-1} \left[ z(r) - m \right] \left[ z(r+\ell) - m \right]\,.$

Note that the lag must be less than the length of the sample. (That is, the sample locations are at $x_r = r\Delta x,\ r = 0, \ldots, N-1$, and the allowed lag distances are $\ell_r = r\Delta x,\ r = 0, \ldots, N-2$.) Note also the factor of $1/N$ in the IDL definition (3), which does not appear in Eq. (2), and which is a factor of $1/(N-1)$ in the Matlab version.
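These competing normalizations are easy to compare directly. A minimal numpy sketch (the helper `acov` is mine, not an IDL or Matlab routine; the mean is subtracted in all three cases for comparability, whereas Eq. (2) itself assumes zero-mean data):

```python
import numpy as np

# Compare the normalizations of the sample autocovariance at a given lag.
rng = np.random.default_rng(0)
N = 100
z = rng.standard_normal(N)

def acov(z, lag, norm):
    """sum_{r=0}^{N-|lag|-1} [z(r)-m][z(r+lag)-m], divided by `norm`."""
    zz = z - z.mean()
    lag = abs(lag)
    return np.sum(zz[:len(z) - lag] * zz[lag:]) / norm

c_raw    = acov(z, 0, 1.0)     # Eq. (2) style: bare sum
c_idl    = acov(z, 0, N)       # Eq. (3), IDL A_CORRELATE style: 1/N
c_matlab = acov(z, 0, N - 1)   # Matlab style: 1/(N-1)
# At zero lag these are N*var(z), the biased variance, and the unbiased variance.
```

At zero lag the three conventions differ by exactly the factors noted above, which is the first thing to check when two codes disagree.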

Nor is there even any consensus on the terms “autocovariance” and “autocorrelation.” Some authors (and this page) define the nondimensional autocorrelation $\rho_{zz}(\ell)$ as the autocovariance normalized by the variance, i.e.,

 $\rho_{zz}(\ell) \equiv \frac{C_{zz}(\ell)}{C_{zz}(0)}\,.$ (4)

However, Proakis and Manolakis (1996) call the autocovariance as used here the autocorrelation, and they call the autocorrelation of Eq. (4) the “normalized autocorrelation.” These sorts of differences in the definitions and computations of autocovariances can cause much grief when comparing the numerical outputs of different computer codes, or numerical outputs with textbook examples.

### The Wiener-Khinchin Theorem

Now that the autocovariance has been defined, the Wiener-Khinchin theorem can be stated: The Fourier transform of the autocovariance equals the variance spectral density function. Symbolically,

 $\mathcal{F}_\nu \left\{ C_{zz}(\ell) \right\} = \mathcal{S}_{2s}(\nu)\,.$ (5)

Indeed, some texts define the spectral density as the Fourier transform of the autocovariance. The inverse is of course

 $\mathcal{F}_\nu^{-1} \left\{ \mathcal{S}_{2s}(\nu) \right\} = C_{zz}(\ell)\,.$ (6)

Here $\mathcal{S}_{2s}$ is a two-sided spectral density function as discussed on several previous pages (e.g., Surfaces to Spectra: 1D).

It is important to note (as emphasized by the $\nu$ subscript on the Fourier transform operator $\mathcal{F}$) that the theorem is written with the $\nu$ version of the Fourier transform (Eq. 1 of the Fourier Transforms page), and the density function is written in terms of the spatial frequency $\nu$, which has units of 1/meters. (In the time domain, the conjugate variables are time $t$ in seconds and frequency $f$ in 1/seconds = cycles/second = Hz.) The spectral density function for the angular spatial frequency $k = 2\pi\nu$ (or angular temporal frequency $\omega = 2\pi f$ in radians per second in the time domain) can be obtained by noting that corresponding intervals of the spectral densities contain the same amount of variance:

 $\mathcal{S}_{2s}(k)\, dk = \mathcal{S}_{2s}(\nu)\, d\nu\,,$

which gives

 $\mathcal{S}_{2s}(k) = \frac{1}{2\pi} \mathcal{S}_{2s}(\nu = k/2\pi)\,.$ (7)

Note that $\ell$ varies from $-\infty$ to $+\infty$ and, likewise, $\nu$ or $k$ ranges over all negative and positive values. The variance spectrum obtained from the Fourier transform of an autocovariance function is therefore a two-sided spectrum.

Comment: In light of Eq. (7), the theorem stated in terms of angular spatial frequency $k$ appears to be

 $\mathcal{F}_k \left\{ C_{zz}(\ell) \right\} = 2\pi\, \mathcal{S}_{2s}(k)\,,$ (8)

with the inverse

 $\mathcal{F}_k^{-1} \left\{ \mathcal{S}_{2s}(k) \right\} = \frac{1}{2\pi} C_{zz}(\ell)\,.$ (9)

I say “appears to be” because I’ve never actually seen the theorem written this way because the textbooks all seem to stick with $x$ and $\nu$ (or $t$ and $f$ in the time domain). As Press et al. (1992) say in Numerical Recipes (p. 491), “There are fewer factors of $2\pi$ to remember if you use the ($\nu$ or) $f$ convention, especially when we get to discretely sampled data....” In any case, Eqs. (8) and (9) are consistent with the $k$ spectrum of Eq. (17) discussed below.

The theorem is usually proved in textbooks for continuous variables $x$ and $\nu$. However, in numerical application to a finite number of discrete samples, discrete variables $x_r$ and $\nu_u$ or $k_u$ are used, and proper attention must be paid to pesky factors of $N$, $2\pi$, and bandwidth, and to the array ordering required by a particular FFT routine.

The continuous-variable Fourier transform in Eq. (5) gives a spectral density $\mathcal{S}_{2s}(\nu)$ with units of $\mathrm{m}^2/(1/\mathrm{m})$. However, if the theorem is written for a DFT of discrete data $C_{zz}(\ell_r) = C_{zz}(r)$,

 $\mathcal{D}_\nu \left\{ C_{zz}(r) \right\} = \mathcal{S}_{2s}(\nu_u)\,,$ (10)

then the resulting discrete spectrum $\mathcal{S}_{2s}(\nu_u) = \mathcal{S}_{2s}(u)$ has units of $\mathrm{m}^2$. Just as was discussed on the Fourier Transforms page, this discrete spectrum must be divided by the bandwidth $\Delta\nu$ in order to obtain the density at $\nu = \nu_u$. That is,

 $\mathcal{S}_{2s}(\nu = \nu_u) = \mathcal{S}_{2s}(u)/\Delta\nu\,.$ (11)
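One exact discrete form of the theorem is useful for checking this bookkeeping: the transform of the biased sample autocovariance (the sum of Eq. 2 divided by $N$, zero-mean data, all lags from $-(N-1)$ to $N-1$) equals the periodogram $|Z(\nu)|^2/N$ at any frequency. A numpy sketch, evaluating both sides by direct summation to stay clear of FFT array-ordering conventions (the chosen $N$, seed, and test frequency are arbitrary):

```python
import numpy as np

# Discrete Wiener-Khinchin identity: transforming the biased sample
# autocovariance over all lags reproduces the periodogram of the sample.
rng = np.random.default_rng(1)
N = 64
z = rng.standard_normal(N)

# Biased sample autocovariance at lags -(N-1), ..., N-1 (Eq. 2 sum / N)
c = np.correlate(z, z, mode="full") / N
lags = np.arange(-(N - 1), N)

nu = 0.13   # an arbitrary frequency, in cycles per sample
lhs = np.sum(c * np.exp(-2j * np.pi * nu * lags)).real
rhs = np.abs(np.sum(z * np.exp(-2j * np.pi * nu * np.arange(N))))**2 / N
# lhs and rhs agree to machine precision
```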

### Example

Horoshenkov et al. (2013) (their Eq. 4) give an analytic formula for the autocorrelation function of surface waves generated by bottom-induced turbulence in shallow flowing water. In the notation of this page, this function is

 $\rho_{zz}(\ell) = \exp\left( -\frac{\ell^2}{2\sigma_w^2} \right) \cos\left( \frac{2\pi}{L_o} \ell \right)\,.$ (12)

In their words, “$\sigma_w$ relates to the spatial radius of correlation (correlation length)” and “$L_o$ relates to the characteristic period in the surface wave pattern.” The average values for the physical conditions of the Horoshenkov et al. study are $\sigma_w = 0.22\ \mathrm{m}$ and $L_o = 0.17\ \mathrm{m}$. [Note: Eq. (12) is Horoshenkov’s $W(\rho)$ as shown in their abstract and in their conclusions, where it has a factor of 1/2 in the exponential. Their Eq. (4) does not have the 1/2. This is probably a typo since Gaussians usually have the form $\exp[-x^2/(2\sigma^2)]$.]

This autocorrelation function provides a nice example of how to use the Wiener-Khinchin theorem to obtain the corresponding variance spectrum. Equation (12), when converted to an autocovariance via a factor of the variance, $\langle z^2 \rangle = C_{zz}(0)$, has the form

 $C_{zz}(\ell) = C_{zz}(0) \exp\left( -\frac{\ell^2}{2\sigma_w^2} \right) \cos\left( \frac{2\pi}{L_o} \ell \right) = C_{zz}(0) \exp\left( -a^2 \ell^2 \right) \cos\left( q_o \ell \right)\,,$ (13)

where $a = 1/(\sqrt{2}\,\sigma_w)$ and $q_o = 2\pi/L_o$. This function has an easily computed analytical Fourier transform.

The continuous-variable Fourier transform of this $C_{zz}(\ell)$ is given by Eq. (1) of the Fourier Transforms page:

 $\mathcal{S}_{2s}(\nu) = \mathcal{F}_\nu \left\{ C_{zz}(\ell) \right\} = \int_{-\infty}^{\infty} C_{zz}(\ell)\, e^{-i 2\pi\nu\ell}\, d\ell\,.$ (14)

Here $\ell$ and $\nu$ are continuous variables; $\mathcal{S}_{2s}(\nu)$ has units of $\mathrm{m}^3$, which is interpreted as $\mathrm{m}^2/(1/\mathrm{m})$ as explained before. Note that this variance spectral density is two-sided, i.e., $-\infty < \nu < \infty$. Expanding the complex exponential via $e^{-i\theta} = \cos\theta - i\sin\theta$ gives

 $\mathcal{S}_{2s}(\nu) = \int_{-\infty}^{\infty} C_{zz}(0)\, \exp\left( -a^2\ell^2 \right) \cos\left( q_o \ell \right) \cos\left( 2\pi\nu\ell \right) d\ell \;-\; i \int_{-\infty}^{\infty} C_{zz}(0)\, \exp\left( -a^2\ell^2 \right) \cos\left( q_o \ell \right) \sin\left( 2\pi\nu\ell \right) d\ell\,.$

The imaginary term is zero because the integrand is an odd function of $\ell$. Using the identity

 $\cos x\, \cos y = \frac{1}{2} \left[ \cos(x+y) + \cos(x-y) \right]$

gives

 $\mathcal{S}_{2s}(\nu) = 2 C_{zz}(0) \int_0^\infty \exp\left( -a^2\ell^2 \right) \frac{1}{2} \left\{ \cos\left[ (2\pi\nu + q_o)\ell \right] + \cos\left[ (2\pi\nu - q_o)\ell \right] \right\} d\ell\,.$

The integral

 $\int_0^\infty \exp\left( -a^2\ell^2 \right) \cos(b\ell)\, d\ell = \frac{\sqrt{\pi}\, \exp\left[ -b^2/(4a^2) \right]}{2a}$

gives the Fourier transform of the $C_{zz}(\ell)$ of Eq. (13):

 $\mathcal{S}_{2s}(\nu) = \sqrt{\frac{\pi}{2}}\, \sigma_w C_{zz}(0) \left\{ \exp\left[ -\frac{1}{2} (2\pi\sigma_w)^2 (\nu + 1/L_o)^2 \right] + \exp\left[ -\frac{1}{2} (2\pi\sigma_w)^2 (\nu - 1/L_o)^2 \right] \right\}\,.$ (15)

This variance spectrum has maxima at $\nu = \pm 1/L_o$, where the value is very close to $\sqrt{\pi/2}\, \sigma_w C_{zz}(0)$. Figure 1 plots this $C_{zz}(\ell)$ (Eq. 13) and $\mathcal{S}_{2s}(\nu)$ (Eq. 15) for typical values of $\sigma_w = 0.22\ \mathrm{m}$, $L_o = 0.17\ \mathrm{m}$, and $C_{zz}(0) = 2.5 \times 10^{-7}\ \mathrm{m}^2$. Note that the sub-peaks of the autocovariance lie at integer multiples of $\pm L_o$, and that the peaks of the spectrum are at $\pm 1/L_o$.
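The analytic result (15) can also be checked by sampling the autocovariance (13) and approximating the transform (14) with an FFT. A numpy sketch (the grid size, span, and tolerances are my choices; multiplying the FFT by $\Delta\ell$ is equivalent to the division by the bandwidth $\Delta\nu$ discussed above, since $\Delta\ell = 1/(N\Delta\nu)$):

```python
import numpy as np

# Verify Eq. (15): FFT the sampled autocovariance of Eq. (13) and compare
# with the analytic two-sided spectral density, then recover the variance.
sw, Lo, C0 = 0.22, 0.17, 2.5e-7          # sigma_w [m], L_o [m], C_zz(0) [m^2]
N = 1024
dl = 4.0 / N                             # lag spacing [m]; 4 m total span
l = (np.arange(N) - N // 2) * dl         # lags centered on zero
Czz = C0 * np.exp(-l**2 / (2 * sw**2)) * np.cos(2 * np.pi * l / Lo)  # Eq. (13)

# S(nu) ~ dl * sum C(l) exp(-i 2 pi nu l); ifftshift puts lag zero at index 0,
# fftshift reorders the output into ascending frequency.
S = dl * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(Czz))).real
nu = np.fft.fftshift(np.fft.fftfreq(N, d=dl))        # two-sided, [1/m]

S_exact = np.sqrt(np.pi / 2) * sw * C0 * (
    np.exp(-0.5 * (2 * np.pi * sw)**2 * (nu + 1 / Lo)**2)
    + np.exp(-0.5 * (2 * np.pi * sw)**2 * (nu - 1 / Lo)**2))   # Eq. (15)

dnu = nu[1] - nu[0]
variance = np.sum(S) * dnu               # should recover C_zz(0), per Eq. (16)
```

The recovered variance matches $C_{zz}(0)$ to machine precision, and the numerical spectrum peaks at $\nu \approx \pm 1/L_o$, as in Figure 1.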

By definition, the integral over all frequencies of an elevation variance spectral density gives the total elevation variance $\langle z^2 \rangle$:

 $\langle z^2 \rangle = \int_{-\infty}^{\infty} \mathcal{S}_{2s}(\nu)\, d\nu\,.$

This can be computed analytically for the spectrum of Eq. (15). The $\mathcal{S}_{2s}(\nu)$ spectrum of Eq. (15) is the sum of two identical Gaussians centered at different $\nu$ values. Consider the one centered at $\nu = 1/L_o$, which involves the exponential with the $\nu - 1/L_o$ term. The total variance is then twice the integral of this Gaussian:

 $\langle z^2 \rangle = 2 \sqrt{\frac{\pi}{2}}\, \sigma_w C_{zz}(0) \int_{-\infty}^{\infty} \exp\left[ -\frac{1}{2} (2\pi\sigma_w)^2 (\nu - 1/L_o)^2 \right] d\nu\,.$

Letting $x = \nu - 1/L_o$ gives

 $\langle z^2 \rangle = 2 \sqrt{\frac{\pi}{2}}\, \sigma_w C_{zz}(0) \int_{-\infty}^{\infty} \exp\left( -c^2 x^2 \right) dx\,,$

where $c^2 = \frac{1}{2} (2\pi\sigma_w)^2$, i.e., $c = \frac{1}{\sqrt{2}}\, 2\pi\sigma_w$. The integral

 $\int_0^\infty \exp\left( -c^2 x^2 \right) dx = \frac{\sqrt{\pi}}{2c}$

then gives the final result:

 $\langle z^2 \rangle = 4 \sqrt{\frac{\pi}{2}}\, \sigma_w C_{zz}(0)\, \frac{\sqrt{\pi}}{2 \cdot \frac{1}{\sqrt{2}} \cdot 2\pi\sigma_w} = C_{zz}(0)\,.$ (16)

Thus, starting with a variance of $C_{zz}(0)$ in the autocovariance of Eq. (13), obtaining the variance spectral density from the Fourier $\nu$-transform of the autocovariance, and then integrating the spectral density over $\nu$ gives back the variance as originally specified in the autocovariance function.

However, if the above process is naively carried through starting (as in Eq. 14) with the $k$-transform of Eq. (3) on the Fourier Transforms page, the end result (as in Eq. 16) is $2\pi C_{zz}(0)$. This extra factor of $2\pi$ is rectified by the $1/2\pi$ factor seen in Eq. (7). Thus the $k$ version of the Horoshenkov spectral density is

 $\mathcal{S}_{2s}(k) = \frac{1}{2\pi} \sqrt{\frac{\pi}{2}}\, \sigma_w C_{zz}(0) \left\{ \exp\left[ -\frac{1}{2} \sigma_w^2 (k + q_o)^2 \right] + \exp\left[ -\frac{1}{2} \sigma_w^2 (k - q_o)^2 \right] \right\}\,.$ (17)

Integration of this $\mathcal{S}_{2s}(k)$ over all $k$ then results in $\langle z^2 \rangle = C_{zz}(0)$, as required. The inverse $k$-transform of the spectral density (17) gives $C_{zz}(\ell)/(2\pi)$, as expected from Eq. (9).
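Both statements can be confirmed numerically: Eq. (17) integrates to $C_{zz}(0)$ over all $k$, and it equals the $\nu$-space density of Eq. (15) divided by $2\pi$, as Eq. (7) requires. A short numpy sketch (the integration grid is my choice):

```python
import numpy as np

# Check Eq. (17): integrate the k-space density over all k, and compare it
# point by point with the nu-space density of Eq. (15) via Eq. (7).
sw, Lo, C0 = 0.22, 0.17, 2.5e-7
q0 = 2.0 * np.pi / Lo

def S_k(k):    # Eq. (17)
    return (1.0 / (2.0 * np.pi)) * np.sqrt(np.pi / 2) * sw * C0 * (
        np.exp(-0.5 * sw**2 * (k + q0)**2) + np.exp(-0.5 * sw**2 * (k - q0)**2))

def S_nu(nu):  # Eq. (15)
    return np.sqrt(np.pi / 2) * sw * C0 * (
        np.exp(-0.5 * (2 * np.pi * sw)**2 * (nu + 1 / Lo)**2)
        + np.exp(-0.5 * (2 * np.pi * sw)**2 * (nu - 1 / Lo)**2))

k = np.linspace(-200.0, 200.0, 400_001)     # rad/m; peaks at +-q0 ~ +-37 rad/m
variance = np.sum(S_k(k)) * (k[1] - k[0])   # Riemann sum; should equal C_zz(0)
```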