Q&A: Deal or No Deal

We’ve never discussed the famous “Monty Hall Problem” here (though we did talk about it on the radio before we started podcasting). We recently got an interesting letter that highlights the difference between a game like “Let’s Make A Deal” and a game like “Deal or No Deal”.

Mark A. recently wrote us:

The contestant started with 25 cases, each with a dollar amount inside. They select one case with the hope that it has the “Grand Prize” (GP) amount inside. Cases are opened and other amounts eliminated. In the end only two cases remain: the contestant has the one they selected, and the other is still in the “Game Field” across from them. One case has the 1,000,000 dollar GP and the other has 10 dollars. As I understand it, there is a higher probability that the GP is in the case not selected by the contestant in the beginning, so the contestant should always swap cases. I do not see how this can be so. I understand that in the beginning the selected case had only a 4% chance of having the grand prize (GP), but so did the other remaining case. As cases were eliminated, the odds of these cases having the grand prize rose at the same rate, to the point that now they should each have a 50% probability of having the GP. In addition, in the beginning each case also had a 4% chance of having the 10 dollars, and that chance also rose to 50% for each case. I do not see any preferred state where the decision of the contestant has an effect on the distribution of the odds.

I see this interpretation of Bayesian analysis often and I cannot see the validity. Am I wrong?

This is a really fascinating question, and the answer depends on the critical phrase:

Cases are opened and other amounts eliminated.

Strangely, it depends on WHO is doing the eliminating and with what knowledge!

In other words, are the cases eliminated in such a way that the GP must still be in play at the end, or in such a way that the game might have been aborted prematurely?

A) If the contestant chooses, or the cases are chosen randomly (i.e. the GP was at risk at every stage), then the probability is the same for each case, at each stage, right to the end. It doesn’t matter whether the contestant switches. This is the way Deal or No Deal is played.

B) If the game-show host, or some other knowledgeable party, removes cases from play (knowing they do not contain the GP), then it is better to switch. Incidentally, this version is known as “The Monty Hall Problem,” after the host of the ’70s game show Let’s Make a Deal. There, a contestant is offered three doors, one of which conceals a fantastic prize; the contestant chooses one door, and then Monty Hall eliminates one of the remaining doors that does not hide the prize; the contestant is then given a chance to switch to the last, unopened door, an opportunity that should always be taken!

This seems paradoxical, doesn’t it? The knowledge and intention of the person removing cases from play seems to change the probabilities.

But this really does make sense.

—-

In (A) the probabilities remain equal, in effect, because no action has been taken that changes the relative likelihood of any outcome. Suppose that at a given stage we have N equally likely possibilities, and one is removed at random. If the game does continue (which it might not), there are now (N-1) possibilities, all of which are still equally likely, and so on.

In (B) the actions change the relative probabilities. This is a little harder to explain, but in a nutshell, the host sweetens the deal: your original choice is just as likely to hold the prize, but the other choices have become more likely to be winners, since a losing choice has been removed. Let’s count out the possibilities:

Suppose the three briefcases are a, b, and c, and the prize is in case a. We will list them in the order of

“case chosen by the contestant, case eliminated, case remaining”

The game would have ended if case a had been eliminated, so this leaves only

a b c (contestant should keep)
a c b (contestant should keep)
b c a (contestant should switch)
c b a (contestant should switch)

In (A) each of these is equally likely, since each of the choices was made completely at random. Any of the six sequences
a b c
a c b
b a (stop)
b c a
c a (stop)
c b a
was equally likely (there is a 1/3 chance the contestant picks a, b, or c; then there are two equally likely ways for one of the remaining cases to be eliminated; the final case, if there is one, is determined).

Now 1/3rd of the time the game ends prematurely, but if the game finishes, there is 1/2 probability that switching wins: it’s 50-50 either way.
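This is easy to check by brute force. Here’s a quick Monte Carlo sketch (Python, not part of the original post) of scenario (A) with 25 cases, where the field cases are opened completely at random and the game “aborts” whenever the GP is revealed early. The function name and trial counts are just illustrative choices:

```python
import random

def random_elimination_trial(n_cases=25):
    """One game of scenario (A): the contestant picks a case, then all but
    one of the field cases are opened at random. Returns None if the grand
    prize is revealed early (game aborted), otherwise True if switching
    would win and False if keeping would win."""
    gp = random.randrange(n_cases)          # case holding the grand prize
    chosen = random.randrange(n_cases)      # contestant's case
    field = [c for c in range(n_cases) if c != chosen]
    random.shuffle(field)
    opened, last = field[:-1], field[-1]    # open all but one field case
    if gp in opened:
        return None                         # GP eliminated: game over early
    return last == gp                       # switching wins iff GP is in the field

random.seed(1)
results = [random_elimination_trial() for _ in range(200_000)]
finished = [r for r in results if r is not None]
print(len(finished) / len(results))   # fraction reaching the final two, ~2/25
print(sum(finished) / len(finished))  # P(switch wins | game finished), ~0.5
```

Conditioned on the game actually reaching the final two cases, switching wins about half the time, just as the counting argument says. Note how rarely the GP even survives to the end with 25 cases: only about 2 games in 25.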

In (B) though, the choices are not equally likely.
There is still a 1/3rd chance that the contestant chooses a, 1/3rd b, 1/3rd c.
If the contestant chooses a, it is equally likely that the host opens case b or c;
On the other hand, if the contestant chooses b, the host will certainly open c; if the contestant chooses c, the host will certainly open b.

so this gives

a b c 1/6th of the time
a c b 1/6th of the time
b c a 1/3rd of the time
c b a 1/3rd of the time

2/3rds of the time, the contestant is better off switching.
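The same brute-force check works for scenario (B). Here’s a sketch (again Python, an illustration rather than anything from the original post) of the classic three-door Monty Hall game, where the host knowingly opens a losing, unchosen door:

```python
import random

def monty_hall_trial():
    """One round of scenario (B): prize placed at random, contestant picks a
    door, host opens a door that is neither the pick nor the prize.
    Returns True if switching to the remaining door wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host's knowledge enters here: he never opens the prize door
    opened = random.choice([d for d in doors if d != pick and d != prize])
    switched = next(d for d in doors if d != pick and d != opened)
    return switched == prize

random.seed(2)
wins = sum(monty_hall_trial() for _ in range(100_000))
print(wins / 100_000)  # ~2/3: switching wins about two-thirds of the time
```

Changing the single line where the host chooses (so that he opens a random unchosen door, aborting when the prize is revealed) turns this back into scenario (A), and the conditional win rate for switching drops to 1/2.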

1. strauss said,

April 9, 2008 at 1:40 pm

This amazing article shows how the Monty Hall problem has been hidden, unrecognized, inside of hundreds of cognitive science experiments, wrecking many experimental results.

2. jlundell said,

April 11, 2008 at 4:21 pm

Great article. It’s also got one of the nicest and most concise explanations I’ve seen of the MHP odds:

This answer goes against our intuition that, with two unopened doors left, the odds are 50-50 that the car is behind one of them. But when you stick with Door 1, you’ll win only if your original choice was correct, which happens only 1 in 3 times on average. If you switch, you’ll win whenever your original choice was wrong, which happens 2 out of 3 times.