Doomsday argument

The Doomsday argument is a probabilistic argument that claims to predict the future lifetime of the human race given only an estimate of the total number of humans born so far.

It was first proposed by the astrophysicist Brandon Carter in 1983 and was subsequently championed by the philosopher John Leslie. It has since been independently discovered by J. Richard Gott and H. B. Nielsen. Similar theories predicting an end to the world from population statistics were proposed earlier by Heinz von Foerster, among others.

The Doomsday argument

This article introduces the DA in four alternative ways: as a direct numerical estimate, as a simplified comparison of two possible population totals, through Gott's 'vague prior' formulation, and through the Carter-Leslie version.

Numerical estimates of the Doomsday argument

Let us imagine our fractional position f = n/N along the chronological list of all the humans who will ever be born, where n is our absolute position from the beginning of the list and N is the total number of humans.

Assuming that we are equally likely (along with the other N humans) to find ourselves at any position n, we can assert that our fractional position f is uniformly distributed on the interval (0,1] prior to learning our absolute position. This is an example of the Copernican principle.

Let us further assume that our fractional position f is uniformly distributed on (0,1] even after we learn of our absolute position n. This is equivalent to the assumption that we have no prior information about the total number of humans, N.

Now, we can say with 95% confidence that f = n/N is within the interval (0.05,1]. In other words we are 95% certain that we are within the last 95% of all the humans ever to be born. Given our absolute position n, this implies an upper bound for N obtained by rearranging

n / N > 0.05

to give

N < 20n.

If we assume that 60 billion humans have been born so far (Leslie's figure) then we can say with 95% confidence that the total number of humans, N, will be less than 20·60 = 1200 billion.

Assuming that the world population stabilizes at 10 billion with a life expectancy of 80 years (i.e. about 10 billion / 80 = 125 million births per year), one can calculate how long it will take for the remaining 1140 billion humans to be born.

Thus we find the argument predicts, with 95% confidence, that mankind will disappear within 9120 years. Depending on your projection of world population in the forthcoming centuries, your estimates might vary, but the main point of the argument is that mankind is likely to disappear rather soon.
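
As a sanity check, here is a minimal Python sketch of the arithmetic above; the 60-billion, 10-billion and 80-year figures are the assumptions already stated, not additional data:

    # Arithmetic behind the 95% doomsday estimate (figures as assumed above).
    n = 60e9                      # humans born so far (Leslie's figure)
    confidence = 0.95
    N_max = n / (1 - confidence)  # 95% upper bound on total births: N < 20n
    remaining = N_max - n         # births still to come under that bound

    population = 10e9             # assumed stable world population
    lifespan = 80                 # assumed mean lifespan in years
    births_per_year = population / lifespan   # 125 million per year

    years_left = remaining / births_per_year
    print(f"N < {N_max:.3g}; doomsday within about {years_left:.0f} years")
    # -> N < 1.2e+12; doomsday within about 9120 years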

Remarks

  • A precise formulation of the Doomsday argument (DA) requires the Bayesian interpretation of probability, which is widely, if not universally, accepted.
  • The step that converts N into an extinction time depends upon a finite human lifespan. If immortality becomes common, and the birth rate drops to zero, N will never be reached. (The observer-moments DA formulation would still apply if humans developed unlimited lifespan.) John Eastmond's 2002 critique ([1]) concludes that "an infinite conscious lifetime is not possible, even in principle" because, he contends, the DA applied to the life-expectancy of a potential immortal would allow the observation of n moments to be converted into uncountably many data bits.
  • The (naïve) form of the DA outlined above implicitly assumes finite N, otherwise all humans will have a position close to 0 in the range (0,1]. In principle there seems to be no reason why we must assume the prior existence of some finite upper bound to our position, which suggests a problem with the argument. However, if N really is infinite, any random sample of n will also be infinite, with the chance of finite n being vanishingly small. Our observation of a finite n would then be a cosmic coincidence, which the DA doesn't rule out.
  • The total number of humans born so far may depend on one's definition of "human".
  • By counting the number of human consciousnesses as states, the DA implies a specialness to the human condition which is the precise opposite of statistical reasoning in the natural sciences, and of the Copernican principle itself.
  • The U(0,1] distribution of f is derived from: (a) the principle of indifference, and (b) the assumption of no prior knowledge of the distribution of N. Both are reasonable 'in principle', but would be rejected by many Bayesians.

Simplification: two possible total numbers of humans

Here is a simplified version of the argument, based on A refutation of the Doomsday Argument by Korb and Oliver.

Assume for simplicity that there are two possible values for N, the total number of humans who will ever be born: either N = 60 billion, or N = 6,000 billion. You have no a priori knowledge of your position in the history of humanity, so you find out how many humans were born before you. It turns out that you are human number 59,854,795,447, i.e. one of the first 60 billion.

Now, if in fact N = 60 billion, the probability that you were among the first 60 billion is of course 100%. However, if N = 6,000 billion, the probability that you were among the first 60 billion is only 1%. Therefore, it is more likely that N = 60 billion (although it is not certain). More generally, larger values of N are monotonically less probable. It is possible to sum the probabilities for each value of N and therefore to compute a statistical 'confidence limit' on N. For example, taking the numbers above, it is 95% certain that N is smaller than 1,200 billion.
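
A short Python sketch of this Bayesian update, assuming (purely as an illustration) equal prior weight on the two hypotheses:

    # Two-hypothesis Doomsday update (Korb & Oliver style simplification).
    priors = {60e9: 0.5, 6_000e9: 0.5}   # candidate totals N, equal priors (assumed)
    first_60bn = 60e9                    # observation: born among the first 60 billion

    # Likelihood of that observation under each N, for a uniformly random birth rank.
    likelihood = {N: min(first_60bn, N) / N for N in priors}

    evidence = sum(priors[N] * likelihood[N] for N in priors)
    posterior = {N: priors[N] * likelihood[N] / evidence for N in priors}

    for N, p in posterior.items():
        print(f"P(N = {N:.0e}) = {p:.3f}")
    # -> P(N = 6e+10) = 0.990
    #    P(N = 6e+12) = 0.010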

What the argument is not

The Doomsday argument (DA) does not say that humanity cannot go on expanding forever. It does not put any limit on the number of humans that will ever exist, or the date on which humanity will go extinct.

An abbreviated form of the argument does make these claims, by confusing probability with certainty. However, the actual conclusion of the DA is: there is a 95% chance of extinction within 9120 years, which implies a 5% chance that humans will still be thriving circa 11125 AD. (These dates are based on the assumptions above; the precise numbers vary among specific Doomsday arguments.)

Variations

This argument has generated a lively philosophical debate, and no consensus has yet emerged on its solution. The variants described below produce the DA by separate derivations.

Gott's formulation: 'vague prior' total population

Gott specifically proposes a functional form for the prior distribution of the number of people who will ever be born (N). Gott's DA uses the vague prior: P(N) = k/N, where P(N) is the prior probability (held before discovering n) that the total number of humans ever to be born is N. The constant, k, is chosen to normalize the sum of P(N); this can be done in a way that also allows P(0) > 0, but the value chosen isn't important here, just the functional form.

The vague prior distribution P(N) has the advantage of being analytically tractable without requiring obscure mathematics. Since Gott actually specifies the prior distribution of total humans, P(N), Bayes's theorem and the principle of indifference alone give us P(N|n), the probability of a total of N humans given that n is a uniformly random draw from the N birth positions:

P(N|n) = \frac{P(n | N) P(N)}{P(n)}

This is Bayes's theorem for the posterior probability of total population exactly N, conditioned on current population exactly n. Now, using the indifference principle:

P(n | N) = \frac{1}{N}

And, Gott's assumption of the vague prior for P(N):

P(N) = \frac{k}{N}

The unconditioned n distribution of the current population is identical to the vague prior N probability density function, so:

P(n) = \frac{k}{n}

Giving P(N|n) for each specific N (through a substitution into the posterior probability equation):

P(N|n) = \frac{n}{N^2}

The easiest way to produce the doomsday estimate with a given confidence (say 95%) is to pretend that N is a continuous variable (since it is very large) and integrate the probability density from N = n to N = Z. (This gives the probability that N <= Z.)

P(N <= Z) = \int_n^Z P(N|n)\,dN = \left[\frac{-n}{N}\right]_n^Z = 1 - \frac{n}{Z}

Defining Z = 20n gives:

P(N <= 20n) = \frac{19}{20}, and hence P(N > 20n) = \frac{1}{20}

This is the simplest Bayesian derivation of the DA:

The chance that the total number of humans that will ever be born (N) is greater than twenty times the total that have been is below 5%
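
A quick numerical check of this result, treating N as continuous with posterior density P(N|n) = n/N^2 for N >= n (a simple midpoint sum, no special libraries assumed):

    # Integrate the posterior n/N**2 from N = n to N = 20n by the midpoint rule.
    n = 60e9
    Z = 20 * n
    steps = 100_000
    dN = (Z - n) / steps
    prob = sum(n / (n + (i + 0.5) * dN) ** 2 * dN for i in range(steps))
    print(f"P(N <= 20n) ~ {prob:.4f}")   # ~ 0.9500, so P(N > 20n) ~ 0.05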

The use of a vague prior distribution seems well-motivated, as it assumes as little knowledge as possible about N, given that some particular functional form must be chosen. It is equivalent to the assumption that the probability density of one's fractional position remains uniformly distributed even after learning of one's absolute position (n). The function has been used as a scale-invariant prior since being described in 1939 (Jeffreys, The Theory of Probability, Oxford University Press) and is well understood.

Gott's vague prior is more mathematically defensible against Olum's critique that the chance of existing is proportional to N (because a scale invariant probability density function can never have a 95% chance of being below any finite limit if the scale is unknown - a property that this attack originally required).

Carter-Leslie version

Leslie's argument differs from Gott's version in that he does not assume a vague prior probability distribution for N. Instead he argues that the force of the DA resides purely in the increased probability of an early Doomsday once you take into account your birth position, regardless of your prior probability distribution for N. He calls this the probability shift.

Singularity

Heinz von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do not result in self-inhibition; rather, societies' success varies directly with population size. Von Foerster found that this model fit some 25 data points from the birth of Jesus to 1958, with only 7% of the variance left unexplained. Several follow-up letters (1961, 1962, …) were published in Science showing that von Foerster's equation was still on track, and the data continued to fit up until 1973. The most remarkable feature of von Foerster's model was that it predicted the human population would reach infinity (a mathematical singularity) on Friday, November 13, 2026.
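
A hedged illustration of the hyperbolic growth behind this prediction: if the growth rate itself rises in proportion to population, the population follows N(t) ≈ C/(t0 - t) and diverges at t0. The calibration below (t0 taken from the date in the text, C fixed so the 1960 population is roughly 3 billion) is purely illustrative, not von Foerster's published fit:

    # Idealized hyperbolic ("doomsday equation") growth toward a finite-time singularity.
    t0 = 2026.9                 # singularity date quoted above (November 2026)
    C = 3e9 * (t0 - 1960)       # calibrated so the 1960 population is ~3 billion

    def population(year):
        return C / (t0 - year)

    for year in (1700, 1900, 1960, 2000, 2020):
        print(f"{year}: ~{population(year) / 1e9:.1f} billion")
    # The curve blows up as the year approaches t0, and overshoots real data
    # after about 1973, as noted above.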

Reference classes

One of the major areas of contemporary DA debate is the 'reference class' from which n is drawn, and of which N is the ultimate size. The 'standard' DA hypothesis doesn't spend much time on this point, and simply says that the reference class is the number of 'humans'. Given that you are human, the Copernican principle could be applied to ask whether you were born unusually early, but the grouping of 'human' has been widely challenged on practical and philosophical grounds. Nick Bostrom has argued that consciousness is (part of) the discriminator between what is in and out of the reference class, and that the existence of contemporary extraterrestrial civilizations would affect the calculation dramatically.

The following sub-sections relate to different suggested reference classes, each of which has had the standard DA applied to it.

Narrowing the reference class

Some philosophers have been bold enough to suggest that only people who have thought about the Doomsday argument (DA) belong in the reference class of 'human'. If that is correct, then Carter was defying his own prediction when he first described the argument (to the Royal Society). A member present could have argued thus:

"Presently, only one person in the world understands the Doomsday argument, so by its own logic there is a 95% chance that it is a minor problem which will only ever interest twenty people, and I should ignore it."

(See the Meta Doomsday argument below.) This line of reasoning could apply to all new research and, with a large enough reference class, empirically does apply: considering all original research papers published in a fixed set of peer-reviewed periodicals during the 1980s, the mean number of citations each of those papers received from papers published in the same set of journals during the 1990s is less than twenty. Since some super-star papers have massive citation counts, the majority must have fewer than twenty citations. If significance can be gauged by citation count in the subsequent decade, it is a mathematical fact that most papers are insignificant.

Even this class is not the narrowest; some would count as 'human' only people who believe in the DA.

Sampling only WMD-era humans

The Doomsday clock shows the expected time to nuclear armageddon as judged by an expert board, rather than by a Bayesian model. If the twelve hours of the clock symbolize the lifespan of the human race, its current time of 11:53 implies that we are among the last 1% of people who will ever be born (i.e. that n > 0.99N). From the standard Doomsday argument point of view, this makes the current time a very 'special' and unlikely period to be born in: if the true Doomsday clock were observed at random, there would be less than a 1% chance of seeing a time as late as the experts' estimate.

However, the Doomsday clock specifically estimates the proximity of atomic self-destruction, which has only been possible for about fifty years (the clock first appeared in 1947; 50 is used for simplicity). If doomsday requires nuclear weaponry, then the Doomsday argument only applies within the set of people who are contemporaneous with nuclear weapons; in this model, the number of people living through, or born after, Hiroshima is n, and the number who ever will be is N. Applying the standard Doomsday argument to these variable definitions gives a 50% chance of apocalypse within 50 years (assuming a constant population). That the Doomsday clock's hands are so close to midnight is not an improbable anomaly in this model.

If your life is randomly selected from all lives lived under the shadow of the bomb, this simple model gives a 95% chance of Armageddon within 1000 years. (Which happens to be one tenth the running time of the Clock of the Long Now.)
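
A minimal sketch of this variant, assuming a constant birth rate so that the count of people 'under the bomb' is proportional to elapsed years; the roughly 1000 years quoted above is this 950-year bound, rounded:

    # WMD-era Doomsday argument with a constant birth rate assumed.
    years_so_far = 50   # years of births under nuclear weapons (simplified)

    for confidence in (0.50, 0.95):
        total_years = years_so_far / (1 - confidence)
        print(f"{confidence:.0%}: apocalypse within {total_years - years_so_far:.0f} years")
    # -> 50%: apocalypse within 50 years
    #    95%: apocalypse within 950 years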

Sampling from observer-moments

Nick Bostrom, considering observation selection effects, has produced a Self-Sampling Assumption: "that you should think of yourself as if you were a random observer from a suitable reference class". If the 'reference class' is the set of all humans ever to be born, this gives N < 20n with 95% confidence (the standard Doomsday argument). However, he has refined this idea to apply to observer-moments rather than just observers. If the minute in which you read this article is randomly selected from every minute in every human's lifespan, then (with 95% confidence) this event has occurred after the first 5% of human observer-moments.

If future mean lifespan is twice historic, this implies 95% confidence that N < 10n (since the average future individual will account for twice the observer-moments of the average historic human). The 95th percentile extinction-time estimate in this version (4560 years) is below the n = U(0,N] estimate (9120 years) because observer-moments are a non-linear function of n; Bertrand's paradox is analogous.
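
A sketch of that adjustment, reusing the earlier assumptions (60 billion past births, 125 million births per year) and assuming the birth rate itself is unchanged while future lifespans double; the bound comes out as roughly the N < 10n quoted above:

    # Observer-moment version: future moments < 19 * past moments at 95% confidence,
    # and each future person supplies twice the moments of a past person.
    n = 60e9
    births_per_year = 125e6
    lifespan_ratio = 2                      # future mean lifespan / historic (assumed)

    future_births = 19 * n / lifespan_ratio
    print(f"N < {(n + future_births) / n:.1f} n; "
          f"doomsday within {future_births / births_per_year:.0f} years")
    # -> N < 10.5 n; doomsday within 4560 years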

Rebuttals

We are in the earliest 5%, a priori

First of all, we must be clear about what is being rebutted. Disagreeing with the Doomsday argument (DA) implies that:

  1. We are within the first 5% of humans to be born.
  2. This is not purely a coincidence.

Therefore, these rebuttals try to give reasons for believing that we are some of the earliest humans. For instance, if your Wikipedia internal ID is 50,000, the naïve Doomsday argument implies a 95% chance that there will never be more than a million Wikipedians. This can be refuted if your other characteristics are typical of an early adopter: the mainstream of potential users will only prefer this encyclopedia to others when it is complete (at which point Metcalfe's law predicts an exponential increase in Wikipedians). If you enjoy Wikipedia's incompleteness, we already know that you are unusual, prior to the discovery of your low internal ID.

If you have measurable attributes that set you apart from the typical long-run user, the Wikipedian DA can be refuted, because you would expect to be within the first 5% of Wikipedians a priori. The analogy to the total-human-population form of the argument is this: confidence in a prediction of the distribution of human characteristics that places modern and historic humans outside the mainstream implies that we already know, before examining n, that n is likely to be very early in N.

For example, if you are certain that 99% of humans who will ever live will be cyborgs, but you are not a cyborg, you could be equally certain that at least one hundred times as many people remain to be born as have been. Robin Hanson's critique suggests a likely prior for N is exp(U(0,k]); a sufficiently large k would give a 95% confidence boundary of N < exp(20)·n. (Our unusual characteristic is simply how early we are in an exponentially weighted draw of births.) He sums up these criticisms of the DA:

"All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live."

The weaknesses of this rebuttal are:

  1. The question of how the confident prediction is derived. We need an uncannily prescient picture of humanity's statistical distribution through all time, before we can pronounce ourselves as extremal members of that population. (In contrast, Wikipedian pioneers have clearly distinct psychology from the mainstream.)
  2. If the majority of humans have characteristics we do not share, some would argue that this is equivalent to the Doomsday argument, since people like us will become extinct. (Friedrich Nietzsche outlines this point of view in Also sprach Zarathustra.)
  3. It is not a general critique of the argument's logic (desired by some) but a specific rebuttal applying only to human population. It simply denies the applicability of the Copernican principle to this case.

SIA: The possibility of not existing at all

One objection, originally by Dennis Dieks (1992), developed by Bartha & Hitchcock (1999), and expanded by Ken Olum (2001), is that the probability of your existing at all depends on how many humans will ever exist. If this number is high, then the probability of your existing is higher than if only a few humans will ever exist. Since you do indeed exist, this is evidence that the number of humans that will ever exist is high.

The current name for this attack within the (very active) DA community is the "Self-Indication Assumption" (SIA), a name proposed by one of its opponents, the DA-advocate Nick Bostrom. His (2000) definition reads:

SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

A development of Dieks's original paper by Kopf, Krtous and Page (1994) showed that the SIA precisely cancels out the effect of the Doomsday Argument, so that one's birth position gives no information about the total number of humans that will exist. This conclusion is uncontroversial among modern DA-proponents, who instead question the validity of the assumption itself, not the conclusion that would follow if the SIA were true.

This argument seems to suggest that, all other things being equal, any theory which postulates a high number of conscious beings in the universe is more likely true than a theory which does not. This is, to say the least, controversial.

Many worlds

Eastman's "Many-Worlds Resolution of the Doomsday Argument" claims that extension of the DA from a single historic timeline into a form dealing with many simultaneous 'quasi-histories' is impossible.

The many-worlds interpretation of quantum mechanics suggests that time has a network-like structure with many actually-occurring pasts merging into each present moment and many actually-occurring futures branching from each present moment. The apparent linearity of time is due to the fact that our memories are consistent with only one past. If all finite values of total population size are realized in different futures, Eastman suggests that would avoid both the prior assumption of a finite upper bound to our birth position, and also any correlation between our present position and a particular future total population size that we experience should we live long enough to see Doomsday.

Caves' rebuttal

Caves (see his on-line paper under External links below) uses Bayesian arguments to show that the uniform distribution assumption is, in fact, incompatible with the Copernican principle, not a consequence of it. He gives a number of examples to show that Gott's rule is implausible. One example is reproduced here:

Suppose you are going to a meeting of your book club, to be held at a member’s house that you’ve never been to before. You find the right street, but having forgotten the street address, you choose between two houses where there is evident activity. Knocking at one, you are told that the activity within is a birthday party, not a book-club meeting. Your friendly enquiry about the age of the celebrant elicits the reply that she is celebrating her 50th birthday. According to Gott, you can predict with 95% confidence that the woman will survive between 50/39 = 1.28 years and 39*50 = 1,950 years into the future. Since the wide range encompasses reasonable expectations regarding the woman's survival, it might not seem so bad, till one realizes that Eq(4) {Gott's rule} predicts that with probability 1/2 the woman will survive beyond 100 years old and with probability 1/3 beyond 150. Few of us would want to bet on the woman’s survival using Gott's rule.
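
The numbers in this example can be reproduced from Gott's rule as Caves states it, i.e. treating the fraction age/(age + future lifetime) as uniform; a small Python check under that assumption:

    # Reproduce Caves's figures from Gott's rule for a 50-year-old celebrant.
    t = 50   # current age in years

    # Central 95% interval for the future lifetime: fraction in (0.025, 0.975).
    lo = t * (1 - 0.975) / 0.975
    hi = t * (1 - 0.025) / 0.025
    print(f"95% interval: {lo:.2f} to {hi:.0f} further years")   # 1.28 to 1950

    # Probability of surviving a further Z years: t / (t + Z).
    for Z in (50, 100):
        print(f"P(lives past age {t + Z}) = {t / (t + Z):.3f}")
    # -> P(lives past age 100) = 0.500 ; P(lives past age 150) = 0.333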

The Meta Doomsday argument

P. T. Landsberg and J. N. Dewynne (1997) applied belief in the DA to itself, finding a paradox. In 2001, Bradley Monton and Sherrilyn Roush extended this by claiming it leads to an inevitable refutation of Gott's theory.

Mathless explanation by analogy

Comparison with the score of a cricket batsman: a random in-progress test match is sampled at a random time for a single piece of information, the current batsman's run tally so far. Assuming the batsman will be dismissed (rather than declaring), what is the chance that he will end up with a score more than double his current total? A rough empirical result (see Haigh, Taking Chances) is that, on average, the chance is half.

The Doomsday argument (DA) is that even if we were completely ignorant of the game we could make the same prediction, or profit by offering a bet paying odds of 2-to-3 on the batsman doubling his current score. Critically, we offer this bet before we know what the current score is; this is necessary because the absolute value of the current score would give a cricket expert a lot of information about the chance of that tally doubling (see below). We must be ignorant of the absolute run tally before making the prediction because the tally is linked to the likely total; if the likely total and the absolute value were not linked, the survival prediction could be made even after discovering the batsman's current score. Analogously, the DA says that if the absolute number of humans born gives no information on the number that ever will be, we can predict the species's total number of births after discovering that 60 billion people have been born so far: with 50% confidence the total is below 120 billion people, so there is a better-than-even chance that the last human birth will occur before the 23rd century.
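
A toy simulation of the betting claim: if the final total is drawn from some wide distribution and the sampled tally is a uniform random fraction of it (the indifference assumption of the analogy, not real cricket data), the final exceeds double the tally about half the time, whatever the spread of totals:

    # Monte Carlo check that "double the current tally" has roughly even odds
    # under the indifference assumption. The distribution of totals is arbitrary.
    import random

    random.seed(0)
    trials = 100_000
    doubled = 0
    for _ in range(trials):
        final = random.lognormvariate(5, 1)   # arbitrary wide spread of final totals
        tally = random.uniform(0, final)      # tally as a uniform fraction of the final
        if final > 2 * tally:
            doubled += 1
    print(f"fraction where final > 2 x tally: {doubled / trials:.3f}")   # ~ 0.5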

How the current value can affect the likely total

In sporting terms, the rationale that a high-scoring player has a longer expected survival than a player with a low score is that a batsman who has managed to establish an unusually high total has thereby proven his mastery of the opposing team's attack, and is much less likely to be bowled out by the next delivery than a player who has not. Of course, a low-scoring player may happen to be the world's greatest player in the first over of the game, and a high scorer may have simply been lucky, but this is not information we have. (The DA is ceteris paribus.)

It is not true that the chance is half whatever the number of runs currently scored; batting records give an empirical correlation between reaching a given score (say 50) and reaching any other, higher score (say 100). On average the chance of doubling the current tally may be half, but the chance of reaching a century having scored fifty is much lower than that of reaching ten from five. Thus, the absolute value of the score gives information about the likely total score, over and above the fixed relative relationship between current and final scores. The batting records give a prior distribution which provides useful information; in particular, we know the mean score across all players and matches. The current score gives only a weak indication of the player's skill, which is more strongly constrained by this prior mean; therefore the probability of surviving to double the current score will be greater than half when that score is below the mean, and less than half when it is above.

An analogous critique of the DA is that we (somehow) possess prior knowledge of the all-time human population distribution, and that this is more significant than the finding of a low number of births until now (i.e. a low cricket tally).

The run-scoring analogy of the Self-Indication Assumption

The SIA is analogous to randomly sampling from the time of day, rather than randomly sampling from the scores (births). Sampling the time would include those lengthy periods of a test match where a dismissed player is replaced, during which no runs are scored. If we sample based on time-of-day rather than running score we will often find that a new or dismissed batsman has a score of zero when the total score that day was low, but we will rarely sample a zero if the same batsman stayed at the crease, piling on runs all day long. Therefore, the very fact that we sample a non-zero score would tell us something (in this sampling scheme) about the likely final score that the current batsman will achieve.

External links

Interactive external links

  • Laster: A simple webpage applet giving the min & max survival times of anything with 50% and 95% confidence requiring only that you input how old it is. It is designed to use the same mathematics as J. Richard Gott's form of the DA, and was programmed by sustainable development researcher Jerrad Pierce.

References

  • John Leslie, The End of the World: The Science and Ethics of Human Extinction, Routledge, 1998, ISBN 0-415-18447-9.
  • J. R. Gott III, Implications of the Copernican Principle for our Future Prospects, Nature, vol. 363, pp. 315-319, 1993.
  • J. R. Gott III, Future Prospects Discussed, Nature, vol. 368, p. 108, 1994.
  • This argument plays a central role in Stephen Baxter's science fiction book Manifold: Time, Del Rey Books, 2000, ISBN 0-345-43076-X.
  • Harold Jeffreys, The Theory of Probability, Oxford University Press, 1939, out of print. (An empiricist, Bayesian approach to the foundations of probability theory; this introduced the scale-invariant vague prior.)