Minimum message length

Minimum message length (MML) is a formal, information-theoretic restatement of Occam's Razor: even when models are not equal in goodness of fit to the observed data, the one generating the shortest overall message is more likely to be correct (where the message consists of a statement of the model, followed by a statement of the data encoded concisely using that model). MML was invented by Chris Wallace, first appearing in Wallace and Boulton (1968).

From Shannon's A Mathematical Theory of Communication (1948) we know that in an optimal code, the message length (in bits) of an event E, MsgLen(E), where E has probability P(E), is given by MsgLen(E) = −log2(P(E)).
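
To make this concrete, here is a minimal Python sketch (the helper name msg_len is ours, not standard): a fair coin flip costs exactly 1 bit to report, a 1-in-8 event costs 3 bits, and a near-certain event is almost free to state.

    from math import log2

    def msg_len(p: float) -> float:
        """Optimal code length, in bits, of an event with probability p."""
        return -log2(p)

    for p in (0.5, 0.125, 0.99):
        print(f"P(E) = {p:<5} -> MsgLen(E) = {msg_len(p):.4f} bits")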

From Bayes' theorem we know that the probability of a hypothesis H given evidence E is proportional to P(E|H) P(H), which, by the definition of conditional probability, is just P(H & E). We want the model (hypothesis) with the highest such probability.

Therefore, we want the model which generates the shortest description of the data! Since MsgLen(H & E) = −log2(P(H & E)), the most probable model will have the shortest such message. The message breaks into two parts: −log2(P(H & E)) = −log2(P(H)) − log2(P(E|H)). The first part is the length of the model, and the second is the length of the data, given the model.
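
The decomposition is easy to check numerically. The sketch below uses made-up values for P(H) and P(E|H), chosen purely for illustration:

    from math import log2, isclose

    # Toy values, assumed for illustration only.
    p_h = 0.25         # P(H): prior probability of hypothesis H
    p_e_given_h = 0.1  # P(E|H): probability of the evidence under H

    joint = p_h * p_e_given_h             # P(H & E)
    total = -log2(joint)                  # MsgLen(H & E)
    part1 = -log2(p_h)                    # first part: length of the model
    part2 = -log2(p_e_given_h)            # second part: length of data given model
    assert isclose(total, part1 + part2)  # the two-part decomposition holds
    print(f"model: {part1:.3f} bits + data|model: {part2:.3f} bits = {total:.3f} bits")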

So what? MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (longer first part) but probably fits the data better (shorter second part). So an MML metric won't choose a complicated model unless that model pays for itself.
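
To see a model "paying for itself", take 80 heads in 100 flips as toy data and compare a fair-coin hypothesis with a biased-coin hypothesis whose bias parameter must itself be transmitted. In this sketch, stating the bias costs log2(1/delta) bits under a uniform prior discretized to cells of width delta; the fixed delta is our crude stand-in for the Fisher-information precision that proper MML would use:

    from math import log2

    def data_len(k: int, n: int, theta: float) -> float:
        """Bits needed to encode k heads in n flips under a Bernoulli(theta) model."""
        return -(k * log2(theta) + (n - k) * log2(1 - theta))

    n, k = 100, 80  # toy data: 80 heads in 100 flips

    # Model A: "the coin is fair" -- nothing further to state, so the first
    # part is (essentially) free and every flip costs exactly 1 bit.
    fair_total = 0.0 + data_len(k, n, 0.5)

    # Model B: "the coin has bias theta" -- theta itself must be stated,
    # here to a fixed precision delta (an assumption, not real MML).
    delta = 0.01
    biased_total = log2(1 / delta) + data_len(k, n, k / n)

    print(f"fair coin  : {fair_total:.1f} bits")   # 100.0 bits
    print(f"biased coin: {biased_total:.1f} bits") # about 78.8 bits

Here the roughly 6.6 extra bits spent stating the bias buy a much shorter second part, so the more complicated model wins; with, say, 52 heads in 100 flips it would not.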

Key points about MML:

  • MML can be used to compare models of different structure. For example, its earliest application was in finding mixture models with the optimal number of classes. Adding extra classes to a mixture model will always allow the data to be fitted with greater accuracy, but according to MML this must be weighed against the extra bits required to encode the parameters defining those classes (a toy sketch of this trade-off follows this list).
  • MML is a method of Bayesian model comparison. It gives every model a score.
  • MML is statistically invariant. Unlike many Bayesian selection methods, its inference does not change under a one-to-one transformation of the parameters; for instance, it doesn't care if you change from measuring length to measuring volume.
  • MML accounts for the precision of measurement. It uses the Fisher information (in the Wallace-Freeman 1987 approximation, or other hyper-volumes in other approximations) to optimally discretize continuous parameters. Therefore the posterior is always a probability, not a probability density.
  • MML has been in use since 1968. MML coding schemes have been developed for several distributions, and for many kinds of machine learners including: unsupervised classification, decision trees and graphs, DNA sequences, Bayesian networks, neural networks (single-layer only so far), image compression, image and function segmentation, etc.
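
As promised above, here is a toy sketch of the mixture-model trade-off: a one-class model states only two parameters but fits two well-separated clusters poorly, while a two-class model must state five parameters yet describes the data far more cheaply. The fixed per-parameter cost and the midpoint split (used in place of EM) are loud simplifications rather than real MML, and a measurement-precision constant for the continuous data is omitted because it is shared by both models:

    import math
    import random
    import statistics

    def gauss_nll_bits(xs, mu, sigma):
        """-log2 likelihood of xs under Normal(mu, sigma), in bits."""
        ln2 = math.log(2)
        return sum((0.5 * math.log(2 * math.pi * sigma**2)
                    + (x - mu)**2 / (2 * sigma**2)) / ln2 for x in xs)

    def gauss_pdf(x, mu, sigma):
        return math.exp(-((x - mu)**2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

    PARAM_COST = 10.0  # assumed bits per stated parameter -- a crude stand-in
                       # for the Fisher-information precision of real MML

    random.seed(0)
    data = ([random.gauss(0.0, 0.5) for _ in range(20)]
            + [random.gauss(5.0, 0.5) for _ in range(20)])

    # One class: state (mu, sigma), then encode the data.
    mu, sd = statistics.mean(data), statistics.pstdev(data)
    one_class = 2 * PARAM_COST + gauss_nll_bits(data, mu, sd)

    # Two classes: state (mu1, sd1, mu2, sd2, weight), then encode the data.
    # We split at the midpoint instead of running EM -- a shortcut that only
    # works because the clusters are well separated.
    lo = [x for x in data if x < 2.5]
    hi = [x for x in data if x >= 2.5]
    w = len(lo) / len(data)
    m1, s1 = statistics.mean(lo), statistics.pstdev(lo)
    m2, s2 = statistics.mean(hi), statistics.pstdev(hi)
    mix_nll = sum(-math.log2(w * gauss_pdf(x, m1, s1)
                             + (1 - w) * gauss_pdf(x, m2, s2)) for x in data)
    two_class = 5 * PARAM_COST + mix_nll

    print(f"one class  : {one_class:.1f} bits")
    print(f"two classes: {two_class:.1f} bits")  # shorter: the parameters pay for themselves

The same accounting is what stops the class count growing without bound: a third class would add another three parameters' worth of bits while barely shortening the data part.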

External links

Advances in Minimum Description Length: Theory and Applications, MIT Press, April 2005, ISBN 0-262-07262-9; see in particular Chapter 11: J. W. Comley and D. L. Dowe, "Minimum Message Length, MDL and Generalised Bayesian Networks with Asymmetric Languages".
