Monte Carlo method
Monte Carlo methods are algorithms that solve various kinds of computational problems by using random numbers (or, more often, pseudo-random numbers), as opposed to deterministic algorithms. Monte Carlo methods are extremely important in computational physics and related applied fields, with applications ranging from esoteric quantum chromodynamics calculations to the design of heat shields and aerodynamic forms. These methods have proven efficient at solving the integro-differential equations defining the radiance field, and are therefore used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer-generated films, special effects in cinema, and other fields.
Interestingly, the Monte Carlo method does not require truly random numbers to be useful. Many of the most useful techniques use deterministic, pseudo-random sequences, making it easy to test and re-run simulations. The only quality usually necessary for good simulations is that the pseudo-random sequence appear "random enough" in a certain sense: the numbers must be uniformly distributed, or follow another desired distribution, when a large enough number of elements of the sequence is considered.
Because they involve repeating an algorithm over a large number of random samples, Monte Carlo methods are well suited to calculation by computer, and draw on many techniques of computer simulation.
A Monte Carlo algorithm is a numerical Monte Carlo method used to find solutions to mathematical problems (which may have many variables) that cannot easily be solved by, for example, integral calculus or other numerical methods. Its efficiency relative to other numerical methods increases as the dimension of the problem increases.
History
Monte Carlo methods were originally practiced under more generic names such as "statistical sampling". The name "Monte Carlo", popularized by early pioneers in the field (including Stanislaw Marcin Ulam, Enrico Fermi, John von Neumann and Nicholas Metropolis), refers to the famous casino in Monaco, alluding to the method's reliance on randomness and repetition to find an approximation to the solution. Stanislaw Marcin Ulam tells in his autobiography Adventures of a Mathematician that the method was named in honor of his uncle, who was a gambler at that casino.
Random methods of computation have their roots in the pre-electronic computing era. Perhaps the most famous early use was by Fermi in the 1930s, when he used a random method to calculate the properties of the newly discovered neutron (see also Buffon's needle). Monte Carlo methods were central to the simulations required for the Manhattan Project. (An alternative explanation of the name is that the Manhattan Project team met in Monte Carlo to create their methods.) However, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth.
Integration
Deterministic methods of numerical integration operate by taking a number of evenly spaced samples from a function. In general, this works very well for functions of one variable. However, for functions of vectors, deterministic quadrature methods can be very inefficient. To numerically integrate a function of a two-dimensional vector, equally spaced grid points over a two-dimensional surface are required: a 10×10 grid, for instance, requires 100 points. If the vector has 100 dimensions, the same grid spacing would require 10^100 points – far too many to be computed. 100 dimensions is by no means unreasonable, since in many physical problems a "dimension" is equivalent to a degree of freedom, and in a three-dimensional simulation there are at least three degrees of freedom per particle.
Monte Carlo methods provide a way out of this exponential growth in computation time. As long as the function in question is reasonably well behaved, it can be estimated by randomly selecting points in 100-dimensional space and taking some kind of "average" of the function values at these points. By the central limit theorem, this method displays 1/√N convergence – i.e. quadrupling the number of sampled points will halve the error, regardless of the number of dimensions.
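This averaging can be illustrated with a short sketch (Python is used here for illustration; the integrand, sample count, and function names are arbitrary choices for the example, not part of any standard method). The integrand f(x) = Σᵢ xᵢ² has the exact integral 100/3 over the 100-dimensional unit hypercube, so the quality of the estimate is easy to check:

```python
import random

def mc_integrate(f, dim, n, seed=0):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    by averaging f at n uniformly random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        point = [rng.random() for _ in range(dim)]
        total += f(point)
    return total / n

# Example integrand: sum of squares of the coordinates.
# Its exact integral over [0,1]^100 is 100 * (1/3) = 33.333...
def sum_of_squares(x):
    return sum(xi * xi for xi in x)

estimate = mc_integrate(sum_of_squares, dim=100, n=10_000)
print(estimate)  # close to 33.33, from only 10,000 samples in 100 dimensions
```

A deterministic grid with even two points per axis would need 2^100 evaluations here; the random average gets within a fraction of a percent with ten thousand.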
A refinement of this method is to choose the random points so that they are more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Unfortunately, doing this precisely is just as difficult as solving the integral in the first place, but there are approximate methods available: from simply making up an integrable function thought to be similar, to one of the adaptive routines discussed in the topics listed below.
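To make the refinement concrete, here is a minimal sketch of importance sampling for the one-dimensional integral ∫₀¹ 3x² dx = 1, using the easily sampled density p(x) = 2x as a rough match to the integrand's shape. The densities, sample counts, and function names are illustrative choices:

```python
import math
import random

def plain_mc(n, rng):
    """Uniform sampling of the integrand 3x^2 on [0, 1]."""
    return sum(3.0 * rng.random() ** 2 for _ in range(n)) / n

def importance_mc(n, rng):
    """Draw x from p(x) = 2x (inverse CDF: x = sqrt(u)), which roughly
    matches the shape of the integrand, then average the weights f(x)/p(x)."""
    total = 0.0
    for _ in range(n):
        x = math.sqrt(rng.random())
        total += 1.5 * x  # weight f(x) / p(x) = 3x^2 / (2x)
    return total / n

rng = random.Random(1)
plain = plain_mc(20_000, rng)
weighted = importance_mc(20_000, rng)
print(plain, weighted)  # both near the exact value 1.0
```

Both estimators are unbiased, but the weighted one has a much smaller variance per sample because the ratio f(x)/p(x) varies far less than f(x) itself.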
A similar approach uses low-discrepancy sequences instead of pseudo-random numbers: the quasi-Monte Carlo method. Quasi-Monte Carlo methods can often be more efficient at numerical integration because the sequence "fills" the region more evenly and samples more of the most important points, which can make the simulation converge to the desired solution more quickly.
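As a sketch of the low-discrepancy idea, the following code builds a two-dimensional Halton sequence (a van der Corput sequence in base 2 for one coordinate and base 3 for the other) and uses it to estimate π as four times the area of a quarter disc in the unit square. The function names are illustrative; only the standard sequence constructions are assumed:

```python
def van_der_corput(n, base):
    """n-th term of the base-b van der Corput sequence: reverse the
    base-b digits of n across the radix point (1 -> 0.5, 2 -> 0.25, ...)."""
    q, denom = 0.0, 1.0
    while n:
        n, remainder = divmod(n, base)
        denom *= base
        q += remainder / denom
    return q

def halton(n, bases=(2, 3)):
    """n-th point of a Halton sequence: one van der Corput sequence,
    with a distinct prime base, per coordinate."""
    return tuple(van_der_corput(n, b) for b in bases)

# Quasi-Monte Carlo estimate of pi via the area of a quarter disc in [0,1]^2.
N = 10_000
inside = 0
for i in range(1, N + 1):
    x, y = halton(i)
    if x * x + y * y <= 1.0:
        inside += 1
pi_estimate = 4.0 * inside / N
print(pi_estimate)  # close to 3.14159
```

Unlike pseudo-random points, the Halton points never clump or leave large gaps, which is what gives the faster convergence on smooth integrands.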
Integration methods
- Direct sampling methods
- Importance sampling
- Stratified sampling
- Recursive stratified sampling
- VEGAS algorithm
- Random walk Monte Carlo including Markov chains
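The random-walk entry in the list above can be sketched with a minimal Metropolis sampler: propose a random step from the current point and accept it with probability min(1, p(x′)/p(x)), so that the chain's visits are distributed according to p. This toy version targets a standard normal density known only up to its normalizing constant; the step size, sample counts, and function names are arbitrary illustrative choices:

```python
import math
import random

def metropolis(log_density, n_samples, step=2.0, x0=0.0, seed=0):
    """Random-walk Metropolis sampler: propose x' = x + uniform step and
    accept with probability min(1, p(x')/p(x)); otherwise stay at x."""
    rng = random.Random(seed)
    x, lp = x0, log_density(x0)
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        lp_prop = log_density(proposal)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = proposal, lp_prop
        samples.append(x)
    return samples

# Target: standard normal, known only up to its normalizing constant.
def log_p(x):
    return -0.5 * x * x

chain = metropolis(log_p, n_samples=50_000)
kept = chain[10_000:]  # discard burn-in samples
mean = sum(kept) / len(kept)
var = sum((v - mean) ** 2 for v in kept) / len(kept)
print(mean, var)  # near 0.0 and 1.0, the moments of the target
```

Note that only ratios of the density are ever needed, which is why Markov chain methods work even when the normalizing constant is unknown.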
Optimisation
Another powerful and very popular application of random numbers in numerical simulation is numerical optimisation. Here the problem is to minimize a function of a vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the optimal set of, say, 10 moves that produces the best evaluation function at the end. The traveling salesman problem is another classic optimisation problem. There are also applications to engineering design, such as multidisciplinary design optimization.
Most Monte Carlo optimisation methods are based on random walks. Essentially, the program moves a marker around the multi-dimensional search space, tending to move in directions which lead to a lower function value, but sometimes moving against the gradient.
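One such random-walk optimiser is simulated annealing (listed below): downhill moves are always accepted, uphill moves are accepted with probability exp(−Δf/T), and the "temperature" T is gradually lowered so the walk settles into a minimum. The double-well test function, cooling schedule, and parameter values in this sketch are arbitrary illustrative choices:

```python
import math
import random

def anneal(f, x0, n_steps=20_000, step=0.5, t0=1.0, seed=0):
    """Simulated annealing in one dimension: random-walk proposals; downhill
    moves always accepted, uphill moves accepted with prob. exp(-df/T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(n_steps):
        temp = t0 * (1.0 - k / n_steps) + 1e-6  # simple linear cooling
        proposal = x + rng.uniform(-step, step)
        f_prop = f(proposal)
        if f_prop <= fx or rng.random() < math.exp((fx - f_prop) / temp):
            x, fx = proposal, f_prop
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Double well: local minimum near x = +1, global minimum near x = -1.04.
def double_well(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

best_x, best_f = anneal(double_well, x0=1.0)  # start in the wrong well
print(best_x, best_f)
```

Because early, high-temperature steps occasionally climb over the barrier between the wells, the walk can escape the local minimum at x = +1, something a pure gradient-descent method started there could not do.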
Optimisation methods
- Stochastic tunneling
- Simulated annealing
- Genetic algorithms
- Parallel tempering
Other methods
- Diffusion and quantum Monte Carlo
- Semiconductor charge transport and the like
- Quasi-random numbers and self-avoiding walks
- Assorted random models, e.g. self-organised criticality