Turning Bands Simulations
This page constitutes an add-on to the User’s Guide for Simulations.
For the theoretical background, the user should refer to Matheron G., The intrinsic random functions and their applications (Adv. Appl. Prob., Vol. 5, pp. 439-468, 1973).
Principle
The Turning Bands method is a stereological device designed to reduce a multidimensional simulation to unidimensional ones: if C3 stands for the (polar) covariance to be reproduced in the three-dimensional space, it is sufficient to simulate a stationary unidimensional random function X with covariance C1 given by:

$$C_1(h) = \frac{d}{dh}\left[\,h\,C_3(h)\,\right]$$
X is then spread throughout the space:

$$Y(x) = X\left(\langle x, \Theta \rangle\right)$$
where Θ is a unit vector whose direction is uniformly distributed.
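Although the corresponding derivation is not spelled out in this guide, it may help to check that the spreading step does restore the target covariance. With Θ uniform on the unit sphere, the scalar product ⟨h, Θ⟩ equals |h| cos θ with cos θ uniform on [-1, 1], and the relation C1(h) = d/dh [h C3(h)] gives:

```latex
\operatorname{Cov}\bigl[Y(x),\,Y(x+h)\bigr]
  = \mathbb{E}_{\Theta}\bigl[C_1(\langle h,\Theta\rangle)\bigr]
  = \frac{1}{2}\int_{-1}^{1} C_1(|h|\,u)\,du
  = \frac{1}{|h|}\int_{0}^{|h|} C_1(t)\,dt
  = \frac{1}{|h|}\Bigl[\,t\,C_3(t)\,\Bigr]_{0}^{|h|}
  = C_3(|h|)
```

(the evenness of C1 has been used to fold the integral onto [0, |h|]).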
Non Conditional Simulation
A random function is said to be multigaussian if any linear combination of its variables follows a Gaussian distribution. In the stationary case, a Multigaussian Random Function has its spatial distribution entirely characterized by its mean value and its covariance.
The easiest way to build a Multigaussian Random Function is to use a parallel procedure. Let Y1, ..., Yn stand for a sequence of independent and identically distributed standard random functions with covariance C. The spatial distribution of the random function:

$$Y(x) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} Y_i(x)$$
tends to become Multigaussian with covariance C as n becomes very large, according to the Central Limit Theorem.
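For illustration only, the short Python sketch below builds the elementary random functions Yi by a spectral device (random cosines), which is one possible construction and not necessarily the one selected by Isatis, and averages them as above. The Gaussian covariance model, the range value and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 10.0                       # assumed range of the target covariance C(h) = exp(-h^2 / (2 a^2))
x = np.arange(200.0)           # 1-D simulation grid
n = 500                        # number of elementary random functions Y_i

# Each elementary function Y_i(x) = sqrt(2) * cos(w_i x + phi_i) has covariance C
# when w_i is drawn from the spectral measure of C; its marginal law is not Gaussian.
w   = rng.normal(0.0, 1.0 / a, size=n)        # spectral frequencies of the Gaussian covariance
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)   # independent uniform phases
Y_i = np.sqrt(2.0) * np.cos(np.outer(w, x) + phi[:, None])   # shape (n, number of grid nodes)

# Normalized sum: approaches a standard Gaussian random function with covariance C.
Y = Y_i.sum(axis=0) / np.sqrt(n)
print(Y.mean(), Y.std())       # empirical mean and dispersion of this single realization
```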
Several algorithms are available to simulate the elementary random functions Yi with a given covariance C. The user will find much more information in Lantuéjoul C., Geostatistical Simulation: Models and Algorithms (Springer, Berlin, 2002, 256 p.).
The choice of the method used to generate the random function X is theoretically free. However, in Isatis, a particular method is preferred for each specific covariance model in order to optimize its generation. The selection of the method is automatic.
The simulation with the target covariance is then obtained by summing the unidimensional simulations spread by projection onto a given number of lines. Each line is in fact called a "turning band", and the question of the optimal number of turning bands remains open, although Ch. Lantuéjoul provides some hints in Lantuéjoul C., Non Conditional Simulation of Stationary Isotropic Multigaussian Random Functions (in M. Armstrong & P.A. Dowd, eds., Geostatistical Simulations, Kluwer, Dordrecht, 1994, pp. 147-167).
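As a minimal sketch of the whole non-conditional procedure, and not of the algorithm actually implemented in Isatis, the Python example below simulates a Gaussian covariance model on a small 3-D grid: each band carries a unidimensional cosine simulation which is spread by projection, and the bands are summed and normalized. The grid size, range, number of bands and variable names are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 25.0                                 # assumed range of C3(h) = exp(-|h|^2 / (2 a^2))
nbands = 400                             # number of turning bands

# Target points: a small 3-D grid flattened into an (npoints, 3) array.
nx = ny = nz = 20
grid = np.stack(np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                            indexing="ij"), axis=-1).reshape(-1, 3).astype(float)

# Isotropic spectral vectors of C3: their directions are uniform on the sphere
# (the band directions) and their moduli are the 1-D frequencies along the bands.
omega = rng.normal(0.0, 1.0 / a, size=(nbands, 3))
theta = omega / np.linalg.norm(omega, axis=1, keepdims=True)   # band directions Theta_i
freq  = np.linalg.norm(omega, axis=1)                          # frequencies along each band
phi   = rng.uniform(0.0, 2.0 * np.pi, size=nbands)             # independent phases

# Spreading: evaluate each unidimensional simulation X_i at the projection <x, Theta_i>.
proj = grid @ theta.T                                # <x, Theta_i> for every point and band
Z = np.sqrt(2.0) * np.cos(proj * freq + phi)         # X_i(<x, Theta_i>) for every band
Z = Z.sum(axis=1) / np.sqrt(nbands)                  # normalized sum over the bands

print(Z.mean(), Z.std())                             # empirical statistics of the realization
```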
Conditioning
If we consider the kriging estimate Z*(x) of Z(x) based on the values of the variable at the data points Z(xα), we can write, at each point, the following decomposition:

$$Z(x) = Z^{*}(x) + \left[\,Z(x) - Z^{*}(x)\,\right]$$
In the Gaussian framework, the kriging residual is not correlated with any data value. It is therefore independent of any linear combination of these data values, such as the kriging estimate itself. Hence the estimate and the residual are two independent random functions, not necessarily stationary: at a data point, for instance, the residual is zero.
If we now consider a non-conditional simulation Zs(x) of the same random function, known over the whole domain of interest, and its kriging estimate based on the values of this simulation at the data points, we can write similarly:

$$Z_{S}(x) = Z_{S}^{*}(x) + \left[\,Z_{S}(x) - Z_{S}^{*}(x)\,\right]$$
where the estimate and the residual are independent, with the same structure as above.
By adding the simulated residual to the initial kriging estimate, we obtain:

$$Z_{C}(x) = Z^{*}(x) + \left[\,Z_{S}(x) - Z_{S}^{*}(x)\,\right]$$

which is another random function, conditional this time, as it honors the data values at the data points.
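For illustration, the Python sketch below applies this conditioning step in one dimension with simple kriging and a Gaussian covariance; the grid, data values, range and the spectral generator used for the non-conditional simulation are assumptions of the example, not a description of the Isatis implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
a = 25.0                                              # assumed range of C(h) = exp(-h^2 / (2 a^2))

def cov(h):
    return np.exp(-h**2 / (2.0 * a**2))

def simple_kriging(x_data, z_data, x_target):
    """Simple kriging (known zero mean) of z_data at the target points."""
    C_dd = cov(np.abs(x_data[:, None] - x_data[None, :]))
    C_td = cov(np.abs(x_target[:, None] - x_data[None, :]))
    lam = np.linalg.solve(C_dd, C_td.T)               # kriging weights, one column per target
    return lam.T @ z_data

x_grid = np.arange(100.0)                             # simulation grid (1-D for brevity)
x_data = np.array([5.0, 30.0, 55.0, 80.0])            # data locations (on grid nodes)
z_data = np.array([1.2, -0.4, 0.7, 0.1])              # conditioning data values

# Non-conditional simulation Zs on the grid (spectral sketch; any method would do).
nelem = 500
w   = rng.normal(0.0, 1.0 / a, size=nelem)
phi = rng.uniform(0.0, 2.0 * np.pi, size=nelem)
z_sim = np.sqrt(2.0) * np.cos(np.outer(x_grid, w) + phi).sum(axis=1) / np.sqrt(nelem)

# Conditioning: Zc = Z* + (Zs - Zs*), both krigings using the same data configuration.
z_star  = simple_kriging(x_data, z_data, x_grid)                      # kriging of the data
zs_star = simple_kriging(x_data, z_sim[x_data.astype(int)], x_grid)   # kriging of the simulated values
z_cond  = z_star + (z_sim - zs_star)

print(np.allclose(z_cond[x_data.astype(int)], z_data))                # True: data are honored
```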
Note: This conditioning method does not depend on how the non-conditional simulation Zs(x) has been obtained.
As non-correlation is equivalent to independence in the Gaussian context, a simulation of a Gaussian random function with nested structures can be obtained by adding independent simulations of the elementary structures.
For the same reason, linearly combining independent Gaussian random functions with elementary structures gives, under a linear model of coregionalization, a multivariate simulation of several variables.
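The Python sketch below illustrates both remarks with two independent unit-variance simulations carrying different elementary structures; the ranges, sills and coregionalization coefficients are hypothetical values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(200.0)                          # 1-D simulation grid

def spectral_sim(a, n=500):
    """Unit-variance simulation with covariance exp(-h^2 / (2 a^2)) (sketch)."""
    w   = rng.normal(0.0, 1.0 / a, size=n)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.sqrt(2.0) * np.cos(np.outer(x, w) + phi).sum(axis=1) / np.sqrt(n)

y1 = spectral_sim(a=10.0)                     # elementary structure 1 (short range)
y2 = spectral_sim(a=50.0)                     # elementary structure 2 (long range)

# Nested structure: sum of independent elementary simulations,
# weighted by the square roots of the sills (here 0.3 and 0.7).
z_nested = np.sqrt(0.3) * y1 + np.sqrt(0.7) * y2

# Linear model of coregionalization: two variables sharing the same independent factors.
A = np.array([[0.8, 0.6],                     # hypothetical coregionalization coefficients
              [0.3, 0.9]])
z1 = A[0, 0] * y1 + A[0, 1] * y2
z2 = A[1, 0] * y1 + A[1, 1] * y2
# Cross-covariance of (z1, z2): sum over u of A[0, u] * A[1, u] * C_u(h).
```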