## Sunday, April 22, 2012

### Frequentists, Bayesians, and HBO's "Girls"

Yielding to pressure, I watched the first episode of HBO's "Girls" on YouTube — well, the first ten minutes or so. The experience wasn't a total waste: I got an example of the difference between frequentists and Bayesians from it.

The protagonist, whose name I can't remember (henceforth "she"), has an unpaid internship that she took with the expectation of a paid job. She doesn't get the job, and it's implied that there never was a job.*

Given that there's only that one data point, frequentists would have to decline any conclusion regarding the existence of the potential job. (The point estimate would be irrelevant without a variance for that estimate.)

Bayesians have a different view of things.

There are two possible states of the world: boss told the truth ($T$) or boss lied ($L$). There are two possible events: she gets a job ($J$) or she doesn't ($N$).

Without any information about the boss, we'll assume that the probability of truth or lie before any event was observed (that is, the a priori probability) is

$\Pr(T) = p_0 = 1/2$,
$\Pr(L) = 1-p_0 = 1/2$.

The $1/2$ is the maximum entropy assumption for $p_0$, meaning we are the most uncertain about the truthfulness of the boss.
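As a quick sanity check on that claim, here's a small Python sketch of the binary entropy function, showing that uncertainty peaks at $p_0 = 1/2$:

```python
import math

def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli(p) random variable."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Entropy is maximized at p = 1/2: one full bit of uncertainty.
print(binary_entropy(0.5))  # 1.0
print(binary_entropy(0.9))  # ~0.47 -- a near-certain boss is less uncertain
```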

If the boss lied, then we can never observe event $J$,

$\Pr(J|L) = 0$,
$\Pr(N|L) = 1$.

If the boss told the truth, and there was in fact a potential job, she might still not get the job, as she might be a bad match. Given no other information, we can assume the same high-entropy case, here for the conditional probabilities:

$\Pr(J|T) = 1/2$,
$\Pr(N|T) = 1/2$.

We can now determine the probability that the boss was telling the truth:

$\Pr(T|J) = 1$,
$\Pr(T|N) = \frac{\Pr(N|T) \Pr(T)}{\Pr(N|T) \Pr(T)+\Pr(N|L) \Pr(L)}= \frac{1/4}{1/4 + 1/2} = 1/3.$

Since she didn't get the job, there's a $2/3$ chance that there was never any job.

Note that there's really no magic on the Bayesian side; we bring a lot of baggage to the problem with the a priori and conditional probabilities. But in doing so we make our assumptions and ignorance explicit, which allows us to make inferences.

It's not magic, it's Bayes.

-- -- -- --
* I was going to write "SPOILER ALERT," but then I realized there's no way to spoil the show more than it already is...