TISEAN 3.0.1: Table of Contents

All programs in alphabetical order


Generating time series

A few routines are provided to generate test data from simple equations. Since there are powerful packages (for example, Dynamics by Helena Nusse and Jim Yorke) that can generate chaotic data, we have only included a minimal selection here.

Create Hénon time series henon
Create Ikeda time series ikeda
Create Lorenz time series lorenz
Run an autoregressive model ar-run
Add noise to data makenoise
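
As an illustration of what these generators produce, here is a minimal Python sketch of the Hénon map at its standard chaotic parameters (a = 1.4, b = 0.3). The transient length and the output format are our choices for this sketch, not necessarily those of henon:

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x0=0.0, y0=0.0, discard=100):
    """Iterate the Henon map x' = 1 - a*x^2 + y, y' = b*x and
    return n (x, y) points after discarding a transient."""
    x, y = x0, y0
    out = np.empty((n, 2))
    for i in range(n + discard):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= discard:
            out[i - discard] = (x, y)
    return out

data = henon(1000)
```

For the standard parameters the iterates settle onto the well-known bounded attractor, so the output is suitable test data for the analysis routines below.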

Linear tools

This section contains some rather basic implementations of linear time series methods, included mainly for convenience. If you want to embark seriously on linear or spectral analysis of your data, you will have to use one of the many statistical or mathematics packages available. Please don't judge us by the level of sophistication in this section!

AR model ar-model, ar-run
ARIMA model arima-model
Autocorrelation function corr
Power spectrum using the maximum entropy method mem_spec
Power spectrum using FFT spectrum
Principal Component Analysis pca
Notch filter notch
Wiener filter wiener
Simple low pass filter low121
Savitzky-Golay filter sav_gol
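
For orientation, the kind of autocorrelation estimate a routine like corr produces can be sketched in a few lines of Python. Normalisation conventions vary between implementations; this sketch divides by N and the total variance so that rho(0) = 1:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Autocorrelation for lags 0..max_lag, normalised so rho(0) = 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# demo: a pure sine with period 50 is anticorrelated at half its period
t = np.arange(1000)
rho = autocorrelation(np.sin(2 * np.pi * t / 50.0), 30)
```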

Utilities

Here are some tools for the pre-processing of data which save you the trouble of writing your own five-line Perl, awk, FORTRAN or C programs.

Choose sub-sequence or columns choose
Normalise, rescale, mean, standard deviation rescale, rms
Distribution of the data histogram
Change sampling time resample

Stationarity

This section contains two important tools for the visualization of time series properties, as well as a stationarity test proposed by Schreiber. The recurrence plot and the space-time separation plot are of great value for detecting nonstationarity, selecting relevant time scales, selecting stationary episodes, and so forth.

There is a short corresponding section in the introduction paper.

Recurrence plot recurr
Space-time separation plot stp
Stationarity test nstat_z
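
The idea behind recurr can be sketched directly: mark every pair of embedded points that come closer than some epsilon. The maximum norm and the pair-list output used here are our assumptions; the norm and output format of recurr may differ:

```python
import numpy as np

def recurrence_pairs(emb, eps):
    """Return all index pairs (i, j) with ||emb[i] - emb[j]|| < eps
    (max norm); plotted as dots, these pairs form the recurrence plot."""
    emb = np.asarray(emb, float)
    pairs = []
    for i in range(len(emb)):
        d = np.max(np.abs(emb - emb[i]), axis=1)
        pairs.extend((i, j) for j in np.nonzero(d < eps)[0])
    return pairs

# demo: four one-dimensional "embedded" points, two of them close
pairs = recurrence_pairs([[0.0], [5.0], [0.02], [9.0]], eps=0.1)
```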

Embedding and Poincaré sections

Since the concept of phase space is at the heart of all the nonlinear methods in this package, phase space reconstruction plays an important role. Although delay and other embeddings are used inside most of the other programs, it is important to have these techniques available separately for data viewing, parameter selection, and the like. For delay embeddings, use delay.

Phase space reconstruction is also discussed in the introduction paper.

Embed using delay coordinates delay
Mutual information of the data mutual
Poincaré section poincare
Determine the extrema of a time series extrema
Unstable periodic orbits upo, upoembed
False nearest neighbours false_nearest
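
A delay reconstruction of the kind delay performs is just a reindexing of the scalar series. A minimal sketch — the component ordering is our convention; check the delay documentation for its actual output order:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Rows are the delay vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[j * tau : j * tau + n] for j in range(dim)])
```

For example, embedding the series 0, 1, ..., 9 with dimension 3 and delay 2 yields six delay vectors, the first being (0, 2, 4).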

Nonlinear prediction

A number of phase space based prediction techniques are implemented in TISEAN. They differ in the way in which the dynamics is approximated. Locally zeroth order models, locally first order models, radial basis functions, and polynomial fits are provided.

For a discussion of these methods and examples see the corresponding section of the introduction paper.

Locally zeroth order model test lzo-test
Iterate locally zeroth order model lzo-run
Locally first order model test lfo-test
Iterate locally first order model lfo-run
Local vs. global linear prediction lfo-ar
Local vs. global mean prediction lzo-gm
Radial basis function fit rbf
Polynomial model polynom, polynomp, polyback, polypar
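
The locally zeroth order idea behind lzo-test and lzo-run is simple enough to sketch: collect all past delay vectors within epsilon of the current one and average their futures. The max norm, the parameter names, and the handling of empty neighbourhoods are our assumptions, not TISEAN's exact conventions:

```python
import numpy as np

def lzo_predict(x, dim, tau, eps, steps=1):
    """Locally constant prediction: average of x[i+steps] over all past
    delay vectors within eps (max norm) of the most recent one."""
    x = np.asarray(x, float)
    w = (dim - 1) * tau
    last = x[len(x) - 1 - w : len(x) : tau]   # most recent delay vector
    futures = []
    for i in range(w, len(x) - steps):
        v = x[i - w : i + 1 : tau]
        if np.max(np.abs(v - last)) < eps:
            futures.append(x[i + steps])
    return np.mean(futures) if futures else None

# demo: a period-two signal is predicted exactly by its own neighbours
pred = lzo_predict(np.tile([0.0, 1.0], 50), dim=2, tau=1, eps=0.1)
```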

Noise reduction

This is how the three of us got into this business. Since spectral filters are problematic with chaotic, broad-band signals, new techniques were necessary. All the implementations here use phase space projections for noise reduction. The program lazy uses locally constant approximations of the dynamics, while ghkss implements locally linear projections. Finally, for testing purposes you may want to add noise to data and compare the outcome of your cleaning attempts with the true signal.

The introduction paper has a section on nonlinear noise reduction, too.

Add noise to data makenoise
Compare two data sets compare
Simple nonlinear noise reduction lazy
Nonlinear noise reduction ghkss
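
The locally constant projection in the spirit of lazy can be sketched as one sweep of neighbourhood averaging: embed the series, then replace the central delay coordinate of each point by its mean over the neighbourhood. The radius, norm, and coordinate choice here are our assumptions:

```python
import numpy as np

def noise_reduce(x, dim, eps):
    """One sweep of locally constant noise reduction: replace the central
    delay coordinate of each embedded point by its mean over all points
    within eps (max norm)."""
    x = np.asarray(x, float)
    n = len(x) - dim + 1
    emb = np.column_stack([x[j : j + n] for j in range(dim)])
    mid = dim // 2
    y = x.copy()
    for i in range(n):
        d = np.max(np.abs(emb - emb[i]), axis=1)
        y[i + mid] = emb[d < eps, mid].mean()
    return y

# demo: a noisy period-two signal; averaging over the two well-separated
# neighbourhoods suppresses the noise
rng = np.random.default_rng(1)
clean = np.tile([0.0, 1.0], 100)
noisy = clean + 0.01 * rng.normal(size=200)
cleaned = noise_reduce(noisy, dim=3, eps=0.5)
```

In practice one would iterate such sweeps with decreasing radius; lazy and ghkss handle these details properly.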

Dimension and entropy estimation

If you are looking for a program that reads your signal and issues a number that says "correlation dimension", you got yourself the wrong package. We think you are still better off than getting such a wrong answer. The programs in this section carry out the calculations necessary to detect scaling and self similarity in a fractal attractor. You will have to establish scaling and, in favourable cases, extract the dimension or entropy by careful evaluation of the data produced by these programs.

This package contains an implementation of the Grassberger-Procaccia correlation integral that can handle multivariate data and mixed embeddings, a fixed mass algorithm for the information dimension D1 (which can likewise handle multivariate data and mixed embeddings), and a box-counting implementation of the order Q Rényi entropies for multifractal studies.

Post-processing can be performed on the output in order to obtain Takens' estimator or the Gaussian kernel correlation integral, or just for smoothing.

You may want to consult the introduction paper for initial material on dimension estimation. If you are serious, you will need to study some of the literature cited there as well.

Correlation dimension d2 d2
Fixed mass estimation of D1 c1
Rényi entropies of Qth order boxcount
Takens estimator c2t
Gaussian kernel C2 c2g
Simply smooth the output of d2 av-d2
Get local slopes from the correlation integral c2d
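
At the core of d2 is the correlation sum: the fraction of pairs of delay vectors closer than epsilon. A direct, unoptimised O(N²) sketch (d2 itself is much faster and applies a Theiler window to exclude temporally close pairs, which is omitted here for brevity); D2 is then estimated from the slope of log C(eps) versus log eps in a scaling region:

```python
import numpy as np

def correlation_sum(emb, eps):
    """C(eps): fraction of distinct pairs of points closer than eps (max norm)."""
    emb = np.asarray(emb, float)
    n = len(emb)
    count = 0
    for i in range(n - 1):
        count += np.sum(np.max(np.abs(emb[i + 1:] - emb[i]), axis=1) < eps)
    return 2.0 * count / (n * (n - 1))

# demo: 101 evenly spaced points on a line, a set of dimension one
t = np.linspace(0.0, 1.0, 101)
c = correlation_sum(np.column_stack([t, t]), eps=0.045)
```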

Lyapunov exponents

Lyapunov exponents are an important means of quantification for unstable systems. They are, however, difficult to estimate from a time series. Unless low dimensional, high quality data is at hand, one should not attempt to calculate the full spectrum; try to compute the maximal exponent first. The two implementations differ slightly: while lyap_k implements the formula by Kantz, lyap_r uses that by Rosenstein et al., which differs only in the definition of the neighbourhoods. We recommend using the former version, lyap_k.

The estimation of Lyapunov exponents is also discussed in the introduction paper. A recent addition is a program to compute finite time exponents, which are not invariant but contain additional information.

Maximal exponent lyap_k, lyap_r
Lyapunov spectrum lyap_spec
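
The stretching curve both programs are built on can be sketched as follows. This is Rosenstein-style (one nearest neighbour per reference point; lyap_k instead averages over an epsilon-neighbourhood), and the norm and window parameters are our choices. The slope of S(t) over the initial linear range estimates the maximal exponent:

```python
import numpy as np

def stretching_curve(x, dim, tau, t_max, theiler=10):
    """S(t): mean log distance after t time steps between each delay vector
    and its nearest neighbour (max norm; a Theiler window excludes pairs
    that are close in time)."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[j * tau : j * tau + n] for j in range(dim)])
    usable = n - t_max                      # leave room to iterate forward
    S = np.zeros(t_max + 1)
    for i in range(usable):
        d = np.max(np.abs(emb[:usable] - emb[i]), axis=1)
        d[max(0, i - theiler) : i + theiler + 1] = np.inf
        j = int(np.argmin(d))
        for t in range(t_max + 1):
            dist = np.max(np.abs(emb[i + t] - emb[j + t]))
            S[t] += np.log(max(dist, 1e-300))
    return S / usable

# demo: logistic map data; the maximal exponent is ln 2 ~ 0.69, and the
# early increments of S(t) should come out near that value
x = np.empty(2100)
x[0] = 0.3
for k in range(2099):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
S = stretching_curve(x[100:], dim=1, tau=1, t_max=3, theiler=5)
```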

Surrogate data

Before attempting any sophisticated nonlinear time series analysis, one should try to establish that nonlinearity is indeed present. The most suitable method for this is the approach of surrogate data. We present two schemes for the generation of surrogate time series, one using iterative adjustments of spectrum and distribution, and a very general framework for constrained randomization that is based on combinatorial minimization of a cost function. The latter approach is more like a toolbox, a starting point for your own ideas on suitable null hypotheses etc. A few basic discriminating statistics are also provided.

There is a short overview page for nonlinearity tests. There is also a section in the introduction paper.

Make surrogate data surrogates
Determine end-to-end mismatch endtoend
General constrained randomization randomize
Discriminating statistics timerev, predict
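
The basic idea is easy to sketch with a plain phase-randomised surrogate, which preserves the periodogram exactly while destroying any nonlinear structure. Note that surrogates itself uses the more refined iterative scheme that also restores the amplitude distribution; this sketch is the simpler classical variant:

```python
import numpy as np

def phase_randomized(x, rng):
    """Surrogate with the same power spectrum as x but random Fourier phases."""
    x = np.asarray(x, float)
    X = np.fft.rfft(x)
    Z = np.abs(X) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(X)))
    Z[0] = X[0]                  # keep the mean unchanged
    if len(x) % 2 == 0:
        Z[-1] = X[-1]            # Nyquist coefficient must stay real
    return np.fft.irfft(Z, n=len(x))

# demo: surrogate of a noisy sine has the identical periodogram
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 20.0, 128)) + 0.1 * rng.normal(size=128)
s = phase_randomized(x, rng)
```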

Spike trains

Sequences of times of singular events (heart beats, neuronal spikes, etc.), or sequences of intervals between such events (RR intervals, etc.), require specialised techniques, even for their linear analysis. Below is a list of routines that may prove useful for this type of data.

Event/interval conversion intervals
Interval/event conversion events
Autocorrelation function of event times spikeauto
Power spectrum of event times spikespec
Surrogate data preserving event time autocorrelations randomize_spikeauto_exp_random
Surrogate data preserving event time power spectrum randomize_spikespec_exp_event
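
The two conversions are essentially one-liners; here is a sketch. The convention for anchoring the first event time is our choice — intervals and events may anchor their output differently:

```python
import numpy as np

def events_to_intervals(times):
    """Interevent intervals from a sorted sequence of event times."""
    return np.diff(np.asarray(times, float))

def intervals_to_events(intervals, t0=0.0):
    """Event times from interevent intervals, starting at t0."""
    return t0 + np.concatenate([[0.0], np.cumsum(intervals)])

times = np.array([0.0, 0.8, 2.1, 2.9])
```

The two functions invert each other, so converting back and forth reproduces the original event times.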

Multivariate analysis

This part of TISEAN contains programs which explore properties shared between different time series. It is still at an early stage and may contain more programs in the future.

Since at least two time series are involved in these programs, the usage of some flags differs when the programs deal with multivariate data.
The flags -m and -M refer to the columns to be loaded for each data set; thus, -m 2,2 means two columns for each data set. In combination with -c, this requires specifying twice as many columns for that flag as are given with -m (or -M).

Linear cross-correlations xcor
Nonlinear cross-prediction xzero
Cross-correlation integral xc2
Cross-recurrence plot xrecur
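
The linear cross-correlation that xcor estimates can be sketched as follows; the lag convention (correlating x(t) with y(t+k)) and the normalisation are our assumptions:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """c(k) ~ <x(t) y(t+k)> for k = 0..max_lag, both series standardised."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.dot(x[:n - k], y[k:]) / n
                     for k in range(max_lag + 1)])

# demo: y is a delayed copy of x, so the correlation peaks at lag 3
rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = np.roll(x, 3)
c = cross_correlation(x, y, 5)
```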

Obsolete programs

To avoid redundancies, we have decided to remove some of the programs from the actively developed part of the package. For historical reasons they are still included, but we plan to remove them in future releases.

Fortran version of delay embedding delay
Add noise to data addnoise
Autocorrelation function autocor
Principal component analysis pc
Simple nonlinear noise reduction nrlazy
Nonlinear noise reduction project
Naive implementation of the correlation dimension c2naive
Finite size exponents fsle


Copyright © (1998-2007) Rainer Hegger, Holger Kantz, Thomas Schreiber