Blue Aurora responds on Ellsberg with something that some other commenters brought up as well: "Part of the point of the Ellsberg paradox, if I have this down right, is that people are in theory supposed to favour both options equally 50-50. Instead, there is ambiguity aversion, Daniel. Not risk aversion. There's an important difference."
I think we should be careful not to confuse semantic differences with substantive differences when people talk about this. I had to google "ambiguity aversion". I've never studied decision theory or the literature around this. But it sounds like exactly what I was talking about in my post. It's uncertainty around the probability itself, rather than just around the outcome of the event we're discussing (in this case the ball draw).
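To make the distinction concrete, here is a sketch of the classic single-urn Ellsberg setup (an assumption on my part: the post doesn't say which version is meant). The urn holds 30 red balls and 60 balls that are black or yellow in unknown proportion; people typically prefer betting on red over black, but also prefer betting on black-or-yellow over red-or-yellow. The loop below checks whether any single subjective probability for black can rationalize both choices at once:

```python
# Classic single-urn Ellsberg: 30 red balls, 60 black-or-yellow in
# unknown proportion.  The bet pays $100 if your colour is drawn.

def seu_bet(prob, payoff=100):
    """Expected value of a bet that pays `payoff` with probability `prob`."""
    return prob * payoff

p_red = 30 / 90

# Under SEU the agent must hold SOME subjective probability for black.
for p_black in [0.20, 1 / 3, 0.45]:
    p_yellow = 1 - p_red - p_black
    # The two typical choices in the experiment:
    prefers_red = seu_bet(p_red) > seu_bet(p_black)
    prefers_by = seu_bet(p_black + p_yellow) > seu_bet(p_red + p_yellow)
    print(f"p_black={p_black:.2f}: red over black? {prefers_red}, "
          f"black/yellow over red/yellow? {prefers_by}")
```

The first preference requires p_black < 1/3 and the second requires p_black > 1/3, so no single value delivers both — which is the sense in which the pattern is said to violate a fixed subjective probability.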
This is very important. This is Keynesian/Knightian uncertainty and it's impressionistic and volatile and can cause a lot of problems when you don't consider it.
But I'm not sure how it's supposed to overthrow SEU except by the semantic rules of the people with an interest in overthrowing SEU.
I have had a lot more statistics than I have had decision theory (I'm guessing this is true of most economists), and in statistics we think about randomness in the actual outcome of an event (the ball draw) as well as sampling error: uncertainty about a particular likelihood we've assessed.
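The statistics framing can be shown in a few lines (a toy sketch with assumed numbers, not anything from Ellsberg's setup): the outcome of a single draw is random even if the probability is known, and an estimate of that probability from a finite sample carries its own sampling error.

```python
import math
import random

random.seed(0)

true_p = 0.6   # assumed true share of, say, red balls
n = 50         # assumed sample size

# Simulate n draws from the urn.
draws = [random.random() < true_p for _ in range(n)]

p_hat = sum(draws) / n                       # point estimate of the probability
se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error of that estimate

# Two layers of uncertainty: the next draw is random even given true_p,
# and p_hat itself is uncertain (roughly +/- 2*se).
print(f"estimate {p_hat:.2f} +/- {2 * se:.2f}")
```

The point of the sketch is just that both layers sit comfortably inside ordinary statistical practice.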
When I think of subjective expected utility, I think of any kind of uncertainty around a choice - both uncertainty about the outcome and any uncertainty about our model. This seems natural. I don't know why you wouldn't look at it this way. But if you wanted to segregate those two kinds of uncertainties for some reason, what kind of utility theory would you come up with?
You'd probably come up with something like the way I think about SEU! You'd say "we need to consider both of these types of uncertainty".
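One way to write that down (my construction, not anything from the post): instead of a point probability, put a prior over the urn composition and take the expectation through it.

```python
# SEU with a prior over the win probability p:
#   E_p[ p * U(win) + (1 - p) * U(lose) ]
# `prior` is a list of (p, weight) pairs summing to weight 1.

def expected_utility(utility, payoff_win, payoff_lose, prior):
    return sum(weight * (p * utility(payoff_win) + (1 - p) * utility(payoff_lose))
               for p, weight in prior)

u = lambda x: x  # risk-neutral utility, for simplicity

# Known urn: p = 1/2 for certain.  Ambiguous urn: p uniform over {0, 1/2, 1}.
known = [(0.5, 1.0)]
ambiguous = [(0.0, 1 / 3), (0.5, 1 / 3), (1.0, 1 / 3)]

print(expected_utility(u, 100, 0, known))      # 50.0
print(expected_utility(u, 100, 0, ambiguous))  # 50.0
```

Note that because the SEU expression is linear in p, averaging over the prior collapses it to the prior's mean — the two urns come out identical here, which is exactly the two-stage calculation being described.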
Yes, I agree.
I really don't see what damage is done to SEU here. You're just observing that simple applications to simple models (where there is no uncertainty about the probabilities) are not how you should apply these ideas to more complex cases (where there is uncertainty about the probabilities). I agree with that. But I can't think of any assumption or conclusion of SEU that's been overthrown here. We've just demonstrated that in this case raised by Ellsberg there is a good and a bad way to apply SEU.