The degree of nonlinearity can be measured in several ways. But how much
nonlinear predictability, say, is necessary to exclude more trivial
explanations? All quantifiers of nonlinearity show fluctuations, but their
distributions, or error bars if you wish, are not available analytically. It
is therefore necessary to use Monte Carlo techniques to assess the significance
of results. One important method in this context is the method of surrogate
data [82]. A null hypothesis is formulated, for example that the
data have been generated by a stationary Gaussian linear process, and one then
attempts to reject this hypothesis by comparing results for the data with those
for appropriate realizations of the null hypothesis. Since the null assumption is
not a simple one but leaves room for free parameters, the Monte Carlo sample
has to take these into account. One approach is to construct *constrained
realizations* of the null hypothesis. The idea is that the free parameters left
by the null are reflected by specific properties of the data. For example the
unknown coefficients of an autoregressive process are reflected in the
autocorrelation function. Constrained realizations are obtained by randomizing
the data subject to the constraint that an appropriate set of parameters
remains fixed. For example, random data with a given periodogram can be
generated by choosing random Fourier phases and taking the inverse Fourier
transform of the given
periodogram. Random data with the same distribution as a given data set can be
generated by permuting the data randomly without replacement. Asking for a
given spectrum and a given distribution at the same time already poses a much
more difficult problem.
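As a minimal sketch of the two constrained randomization schemes just described, the following Python snippet (using NumPy; the function names are my own, not from any standard library) generates a phase-randomized surrogate that preserves the data's periodogram, and a shuffled surrogate that preserves the data's distribution:

```python
import numpy as np

def fourier_surrogate(x, rng):
    """Surrogate with the same periodogram as x but random Fourier phases."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    # Keep the DC component (and, for even n, the Nyquist component) real
    # so that the inverse transform is a real-valued series.
    phases[0] = 0.0
    if n % 2 == 0:
        phases[-1] = 0.0
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=n)

def shuffle_surrogate(x, rng):
    """Surrogate with the same distribution as x: a random permutation
    of the data (drawing without replacement)."""
    return rng.permutation(x)

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(256)) + rng.normal(size=256)
s = fourier_surrogate(x, rng)   # same amplitude spectrum as x
p = shuffle_surrogate(x, rng)   # same empirical distribution as x
```

In a surrogate test one would compute the chosen nonlinearity statistic for the data and for an ensemble of such surrogates, and reject the null hypothesis only if the value for the data lies outside the range of the surrogate values; with, say, 19 surrogates this yields a nominal 5% level for a one-sided test.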

Wed Jan 6 15:38:27 CET 1999