Bobbing around the interwebs today we find a post about a prediction of UBL's location. A tip of the homburg to Drew Conway, whose post was the first mention of it I saw. Now, for the prediction itself.
As impressive as an 81% chance attributed to the actual location of UBL is, it raises three questions. These are important questions for any prediction system after its prediction is realized. Bear in mind that I'm not criticizing the actual prediction model, just the attitude of cheering for the probability without further details.
Yes, 81% is impressive; did the model make other predictions (say, the locations of weapons caches), and if so, were they also congruent with the facts? Models often predict several variables and get some right and others wrong. Other predicted variables can act as quality control and validation. (Choice modelers typically use a hold-out sample to validate calibrated models.) It's hard to validate a model based on a single prediction.
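For readers who haven't met hold-out validation, here's a minimal sketch with made-up toy data (the one-parameter model and all the numbers are mine, purely illustrative): calibrate on part of the data, then score the model's predictions on the part it never saw.

```python
import random

random.seed(1)

# Hypothetical toy data: outcome = 2*x plus noise; 200 observations.
xs = list(range(200))
ys = [2 * x + random.gauss(0, 5) for x in xs]

pairs = list(zip(xs, ys))
random.shuffle(pairs)
calibration, holdout = pairs[:150], pairs[150:]

# "Calibrate" a one-parameter model (a slope through the origin)
# on the calibration sample only.
slope = (sum(x * y for x, y in calibration) /
         sum(x * x for x, _ in calibration))

# Validate on the hold-out sample the model never saw.
mse = sum((y - slope * x) ** 2 for x, y in holdout) / len(holdout)
print(f"estimated slope = {slope:.3f}, hold-out MSE = {mse:.1f}")
```

The point isn't the toy model; it's that the evidence for the model comes from predictions it wasn't fitted to.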
Equally important is the size of the space of possibilities relative to the size of the predicted event. If the space of possibilities was the entire world and the prediction pointed to Abbottabad but not Islamabad, that's impressive; if the space was restricted to Af/Pk and the model predicted the entire Islamabad district, that's a lot less impressive. I predict that somewhere in San Francisco there's a panhandler with a "Why lie, the money's for beer" sign; that's not an impressive prediction. If I predict that the panhandler is at the intersection of Market and Valencia, that's impressive.
Selection is the last issue: was this the only location model for UBL, or were there hundreds of competing models and we're just seeing the best? In that case it's less impressive that a model gave a high probability to the actual outcome: it's sampling on the dependent variable. For example, when throwing four dice once, getting 1-1-1-1 is very unlikely ($1/6^4 \approx 0.0008$); when throwing four dice 10,000 times, it's very likely that the 1-1-1-1 combination will appear in at least one of them (that probability is $1-(1- 1/6^4)^{10000} \approx 1$).
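For the arithmetic-averse, here's a quick sanity check of those two numbers in Python (a throwaway simulation, nothing to do with the original model):

```python
import random

random.seed(0)

def throw_four_ones():
    """One throw of four dice; True if all four come up 1."""
    return all(random.randint(1, 6) == 1 for _ in range(4))

N = 100_000
single = sum(throw_four_ones() for _ in range(N)) / N
print(f"P(1-1-1-1 in one throw)     ~ {single:.5f}   (exact: {1/6**4:.5f})")

# Probability that 1-1-1-1 shows up at least once in 10,000 throws.
exact_repeat = 1 - (1 - 1/6**4) ** 10_000
print(f"P(1-1-1-1 in 10,000 throws) = {exact_repeat:.5f}")
```

One throw: rare. Ten thousand throws: all but guaranteed.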
Rules of model building and inference are not there because statisticians need a barrier to entry to keep the profession profitable. (Though they sure help with paying the bills.) They are there because there are many ways in which one can make wrong inferences from good models.
Usama Bin Laden had to be somewhere; a sufficiently large set of models with large enough isoprobability areas will almost surely contain one that assigns a high probability to the location where UBL actually was, especially if each model was allowed to predict the locations of the top hundred Al-Qaeda figures and one of them just happened to be right about UBL.
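To make that concrete, here's a minimal sketch with numbers I made up purely for illustration (20 candidate districts, 100 competing models, each concentrating its mass on one randomly chosen district): even when every model is guessing blindly, it's almost certain that at least one of them will look prescient in hindsight.

```python
import random

random.seed(0)

N_DISTRICTS = 20      # coarse regions a model can point to (hypothetical)
N_MODELS = 100        # number of competing location models (hypothetical)
TRUE_DISTRICT = 7     # district that actually contains the target
N_TRIALS = 10_000     # Monte Carlo repetitions

hits = 0
for _ in range(N_TRIALS):
    # Each model piles ~81% of its probability mass on one random district.
    picks = [random.randrange(N_DISTRICTS) for _ in range(N_MODELS)]
    # After the fact, we only hear about the best-looking model.
    if TRUE_DISTRICT in picks:
        hits += 1

print(f"P(some model 'predicted' the right district) = {hits / N_TRIALS:.3f}")
# Analytic value: 1 - (19/20)**100, roughly 0.994.
```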
Lessons: 1) the value of a predicted probability $\Pr(x)$ for a known event $x$ can only be understood in the context of the predicted probabilities $\Pr(y)$ for other known events $y$; 2) we must be very careful in defining what $x$ is and what the space $\mathcal{X}$ with $x \in \mathcal{X}$ is; 3) when analyzing the results of a model, one needs to control for the existence of other models [cough] Bayesian thinking [/cough], as in the sketch below.
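A worked example of lesson 3, with illustrative numbers that are mine rather than anything from the original model: suppose only 1 in 100 candidate models is genuinely informative, an informative model puts the target in the right district with probability $0.8$, and a lucky-guess model does so with probability $0.05$. Bayes' rule then gives

$$\Pr(\text{informative} \mid \text{hit}) = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.05 \times 0.99} \approx 0.14,$$

so a single hit is, by itself, weak evidence that the model is any good.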
Effective model building and evaluation need to take into account the effects of limited reasoning by those reporting model results, or, in simpler terms, make sure you look behind the curtain before you trust the magic model to be actually magical.
Summary of this post: in acrostic!