Is the Ellsberg Paradox really a paradox?

The other week, our friend Blue Aurora had the opportunity to pose this question to two of the greatest economic minds of our time:

Blue Aurora didn't mention the Ellsberg Paradox specifically, but it is a major critique of subjective expected utility, and it's one that he's brought up here before.

I'm not sure, though, that it's all that much of a paradox. I'll yank the description straight from Wikipedia:

"Suppose you have an urn containing 30 red balls and 60 other balls that are either black or yellow. You don't know how many black or how many yellow balls there are, but that the total number of black balls plus the total number of yellow equals 60. The balls are well mixed so that each individual ball is as likely to be drawn as any other. You are now given a choice between two gambles:

Gamble A: You receive $100 if you draw a red ball
Gamble B: You receive $100 if you draw a black ball

Also you are given the choice between these two gambles (about a different draw from the same urn):

Gamble C: You receive $100 if you draw a red or yellow ball
Gamble D: You receive $100 if you draw a black or yellow ball

...Utility theory models the choice by assuming that in choosing between these gambles, people assume a probability that the non-red balls are yellow versus black, and then compute the expected utility of the two gambles. Since the prizes are exactly the same, it follows that you will prefer Gamble A to Gamble B if and only if you believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory). Also, there would be no clear preference between the choices if you thought that a red ball was as likely as a black ball.

Similarly it follows that you will prefer Gamble C to Gamble D if, and only if, you believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that, if drawing a red ball is more likely than drawing a black ball, then drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D. And, supposing instead that you prefer Gamble B to Gamble A, it follows that you will also prefer Gamble D to Gamble C.

When surveyed, however, most people strictly prefer Gamble A to Gamble B and Gamble D to Gamble C. Therefore, some assumptions of the expected utility theory are violated."

The math of the paradox goes like this (also straight from Wikipedia):

R \cdot U(\$100) + (1-R) \cdot U(\$0) > B \cdot U(\$100) + (1-B) \cdot U(\$0)
\Longleftrightarrow R \left[ U(\$100) - U(\$0) \right] > B \left[ U(\$100) - U(\$0) \right]
\Longleftrightarrow R > B

B \cdot U(\$100) + Y \cdot U(\$100) + R \cdot U(\$0) > R \cdot U(\$100) + Y \cdot U(\$100) + B \cdot U(\$0)
\Longleftrightarrow B \left[ U(\$100) - U(\$0) \right] > R \left[ U(\$100) - U(\$0) \right]
\Longleftrightarrow B > R

So if you choose both A and D, as most people surveyed do (and as I would, by the way), you get an apparent contradiction: R > B and B > R at the same time.
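A quick way to see the inconsistency concretely (a minimal sketch in Python; the variable names are my own):

# A over B requires R > B; D over C requires B > R. Scan every possible
# subjective probability for black and check whether any satisfies both.
P_RED = 30 / 90  # known: exactly 30 of the 90 balls are red

consistent = [
    p_black
    for p_black in (k / 90 for k in range(61))  # black could be 0..60 of 90 balls
    if P_RED > p_black and p_black > P_RED      # A over B, and D over C
]
print(consistent)  # [] -- no single probability rationalizes both preferences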

Now, I suppose this does contradict the simple way that subjective expected utility is often introduced. But I don't see any way around introducing a concept simply at first. The ultimate point is this: we make judgements based on the expected utility of an uncertain outcome, not on the utility of the expected outcome. In other words (as in the math above), we maximize:

P*U(X') + (1-P)*U(X''), not

U(P*X' + (1-P)*X'')
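
That distinction is easy to check numerically (a sketch, using an arbitrary concave utility function U(x) = sqrt(x) that I've picked purely for illustration):

from math import sqrt

def utility(x):
    # an arbitrary concave utility function, chosen only for illustration
    return sqrt(x)

p = 0.5                 # probability of the good outcome
x_good, x_bad = 100.0, 0.0

expected_utility = p * utility(x_good) + (1 - p) * utility(x_bad)  # E[U(X)] = 5.0
utility_of_expectation = utility(p * x_good + (1 - p) * x_bad)     # U(E[X]) ~ 7.07

print(expected_utility, utility_of_expectation)  # not the same quantity at all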

And normally that's enough to figure it out. But in this case you have to remember that the probability of a black or a yellow ball can't be treated the same way as the probability of a red ball (which is how the math from Wikipedia presents it). We have uncertainty about the values of B and Y, whereas we have no uncertainty about the value of R (we are only uncertain about whether a particular draw will be red, not about what the probability of that event is).

Gamble D is the complement of Gamble A. We are uncertain about the values of B and Y individually, but we know exactly what B + Y is: it is 1 - R, which comes to 2/3 (or, equivalently, 2R).

I have to confess I played with the math, substituting this in, and it kept coming out to the same solution, so I'm not sure that's the best way to go about it. The way to go about it, I think, is to note that R is fundamentally different from B or Y, insofar as B and Y are themselves uncertain quantities with probability distributions of their own. So the math on Wikipedia is not actually the math that an application of SEU would require, because it only takes the probability that a ball is black at its expected value, and ignores the uncertainty surrounding that probability itself (which is an additional layer of uncertainty in this example).

Presumably you'd go about proving this more rigorously by fitting a probability distribution to B and Y, but assigning a probability of 1 to R = 1/3.
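
Here's a sketch of what that could look like (assuming, purely for illustration, a uniform prior over the number of black balls). Under SEU the prior just averages out, so the two gambles still come out tied, which matches what I found when I substituted the math in above; the difference shows up one level down, in how spread out the win probability itself is:

from statistics import mean, pvariance

counts = range(61)                    # possible numbers of black balls, 0..60
# assumed uniform prior over the counts, purely for illustration
p_black = [k / 90 for k in counts]    # Gamble B's win probability under each count
p_red = [30 / 90] * len(p_black)      # Gamble A's win probability is 1/3 regardless

# The prior averages out: both gambles have an expected win probability of 1/3,
# which is why substituting it into the SEU math gives the same solution.
print(mean(p_red), mean(p_black))             # 0.333..., 0.333...

# But A's win probability is known exactly, while B's has a distribution of its own.
print(pvariance(p_red), pvariance(p_black))   # 0.0 vs. ~0.038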