View Full Version : The Doomsday Argument: would any scientists present please weigh in?



Frederick M. Dolan
12-29-2007, 12:45 PM
I've been intrigued by something called the Doomsday Argument, an application of the Anthropic Principle. I'm hoping that the scientists and mathematicians here can help me see what's wrong with it.

For a full version of the argument, go to https://anthropic-principle.com/preprints/lit/index.html. Here is the basic version.

Imagine that two big urns are put in front of you, and you know that one of them contains ten balls and the other a million, but you are ignorant as to which is which. You know the balls in each urn are numbered 1, 2, 3, 4 ... etc. Now you take a ball at random from the left urn, and it is number 7. Clearly, this is a strong indication that that urn contains only ten balls. If originally the odds were fifty-fifty, a swift application of Bayes' theorem gives you the posterior probability that the left urn is the one with only ten balls: P_posterior(L = 10) = 0.999990.

But now consider the case where instead of the urns you have two possible human races, and instead of balls you have individuals, ranked according to birth order. As a matter of fact, you happen to find that your rank is about sixty billion. Now, say Carter and Leslie, we should reason in the same way as we did with the urns. That you should have a rank of sixty billion or so is much more likely if only 100 billion persons will ever have lived than if there will be many trillion persons. Therefore, by Bayes' theorem, you should update your beliefs about humankind's prospects and realize that an impending doomsday is much more probable than you have hitherto thought.
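The posterior quoted in the passage is easy to check directly. A minimal sketch — the 50/50 prior, the urn sizes, and ball #7 are taken from the passage above; the function name is my own:

```python
# Bayesian update for the two-urn example: prior 50/50 over urn sizes
# 10 and 1,000,000; drawing ball #N from an urn of S balls has likelihood 1/S.

def posterior_small_urn(ball=7, small=10, large=10**6, prior_small=0.5):
    """P(the urn holds `small` balls | we drew `ball`), assuming a uniform draw."""
    like_small = 1 / small if ball <= small else 0.0
    like_large = 1 / large if ball <= large else 0.0
    numerator = prior_small * like_small
    return numerator / (numerator + (1 - prior_small) * like_large)

print(posterior_small_urn())  # ≈ 0.999990, matching the figure in the text
```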

Willie Lumplump
12-29-2007, 05:34 PM
OK, I'll weigh in: What on earth does birth order, sixty billion, and a trillion have to do with doomsday?

Frederick M. Dolan
12-29-2007, 07:03 PM
OK, I'll weigh in: What on earth does birth order, sixty billion, and a trillion have to do with doomsday?

I was hoping you could tell me!

In the fuller account that the link points to, the underlying argument is said to be what astrophysicist Richard Gott calls the "delta t" form. (1) To estimate how long a series of observations/measurements will last, (2) assume that what is measured can be observed only between t_begin and t_end, and (3) assume that t_now is randomly located between t_begin and t_end (i.e., that there is nothing special about t_now). (4) Then take t_future = (t_end - t_now) to equal t_past = (t_now - t_begin); that is, estimate that the series will continue for as long as it has already lasted when the random observation is made. This overestimates the true value half the time and underestimates it half the time, but, if I understand it correctly, when the numbers are sufficiently large the confidence level goes up dramatically.
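The symmetry behind the delta-t estimate can be checked by simulation. A sketch under assumptions of my own: a process of unknown total lifetime T observed at a uniformly random moment; the second check uses Gott's published 95% bounds, t_past/39 ≤ t_future ≤ 39·t_past:

```python
import random

random.seed(0)
T = 1000.0          # total lifetime of the process (unknown to the observer)
n = 100_000
longer = within = 0
for _ in range(n):
    t_now = random.uniform(0.0, T)   # "nothing special about t_now"
    t_past, t_future = t_now, T - t_now
    if t_future >= t_past:           # point estimate t_future = t_past errs low here
        longer += 1
    if t_past / 39 <= t_future <= 39 * t_past:   # Gott's 95% interval
        within += 1

print(longer / n)  # ≈ 0.5: the estimate errs low half the time, high half the time
print(within / n)  # ≈ 0.95
```

Note that the simulation confirms only the symmetry claim; whether a single observer is entitled to treat their own moment as "random" is exactly what the thread goes on to dispute.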

Zeno Swijtink
12-29-2007, 07:52 PM

Since this is an argument that has been doing the rounds for quite a while, I wonder how you propose to discuss it. With some background about its current status? Or more naively, as if we lived in the late 90s and were encountering it for the first time?

In the latter approach: how intuitively plausible is the argument?

It seems to apply to any person, no matter whether they live today, lived 50,000 years ago, or will live 50,000 years hence.

It seems to apply to any being alive, no matter whether it is alive today, lived 50 million years ago, or will live 50 million years hence.

So can an argument that puts fear in the belly of any forward-looking species, no matter at what point in the evolution of life on Earth, have any formal plausibility, or must it rest on faulty principles?

Frederick M. Dolan
12-29-2007, 09:32 PM
I AM encountering the argument for the first time (in effect), so my participation will necessarily be naive, but I'd be very interested in any background about its current status you'd care to offer -- especially if the current consensus is that it's no longer of interest because it's been disproved or proved.

As for intuitive plausibility, the argument strikes me as highly counter-intuitive, but I'm not sure how far to trust my intuitions here. Given the time scale of evolutionary processes, it SEEMS intuitive to think we're closer to the beginning of humanity than to its end. But ideas about an open-ended future, perpetual progress, etc., have powerful ideological attractions, and that is suspicious in itself.



Kermit1941
12-31-2007, 10:28 PM


The argument is flawed. We can make the error obvious if we make the choosing of balls from the urn match the human-race prospect experiment more closely.

Have the two urns as before. One urn contains ten balls, and the other contains a million balls.

But now, instead of being able to draw a ball from the urn at random, imagine that the balls exist only one at a time, so that when you reach in to pick out the one ball in the urn, which ball you pick depends on what moment you choose to pick it.

From the urn containing a million balls, the number of the ball you pick will depend on how long you wait before picking the ball.
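Kermit's objection can be made concrete with a simulation. A sketch under assumptions of my own: in Scheme A the ball is drawn uniformly over all balls in the urn (the original argument's assumption); in Scheme B, which I've invented to model "balls exist only one at a time," the balls appear one per moment and you happen to look at a uniformly random moment among the first ten, so the ball's number is the moment itself whatever the urn size. Under Scheme B, drawing ball #7 carries no information:

```python
import random

random.seed(1)
SIZES = [10, 10**6]          # the two urns, prior 50/50
n = 200_000

# Count how often ball #7 turns up from each urn under each sampling scheme.
hits_a = {10: 0, 10**6: 0}   # Scheme A: uniform draw over all balls
hits_b = {10: 0, 10**6: 0}   # Scheme B: ball number = random moment in 1..10
for _ in range(n):
    size = random.choice(SIZES)
    if random.randint(1, size) == 7:
        hits_a[size] += 1
    if random.randint(1, 10) == 7:
        hits_b[size] += 1

post_a = hits_a[10] / (hits_a[10] + hits_a[10**6])
post_b = hits_b[10] / (hits_b[10] + hits_b[10**6])
print(post_a)  # ≈ 1: under uniform sampling, ball #7 strongly favours the small urn
print(post_b)  # ≈ 0.5: under time-indexed sampling, ball #7 tells you nothing
```

The doomsday dispute then turns on which scheme better models a birth rank: a random sample from all humans who will ever live, or simply the index of the moment at which you happen to exist.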

Kermit Rose < [email protected] >