Saturday, February 8, 2020

Fun with numbers for February 8, 2020

Some collected twitterage and other nerditude from the interwebs.

Converting California to EVs: we're going to need a bigger ~~boat~~ grid


I like how silent electric vehicles are, but if California is to convert a significant share (50-80%) of its fossil-fuel cars to electric, its grid will need to deliver 11-18% more energy (we already import around one-third of that energy, and our grid is not exactly underutilized).
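A back-of-envelope sketch of that kind of estimate; the fleet size, mileage, efficiency, and grid-consumption figures below are illustrative assumptions of mine, not the numbers behind the 11-18% range, so expect the output to differ with your inputs:

```python
def extra_grid_share(fraction_converted,
                     n_vehicles=25e6,       # assumed CA light-duty fleet size
                     miles_per_year=11500,  # assumed average annual mileage
                     kwh_per_mile=0.28,     # assumed EV energy use per mile
                     grid_twh=280):         # assumed annual CA electricity use, TWh
    """Extra grid energy needed, as a share of current consumption,
    if the given fraction of the fleet goes electric (rough sketch)."""
    extra_twh = fraction_converted * n_vehicles * miles_per_year * kwh_per_mile / 1e9
    return extra_twh / grid_twh

for f in (0.5, 0.8):
    print(f"{f:.0%} converted -> {extra_grid_share(f):.1%} more grid energy")
```

The point is the shape of the calculation, not the exact percentages: the answer scales linearly in every one of those assumed parameters.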




Playing around with diffusion models to avoid thinking about coronavirus


Playing around with some diffusion models of infection. They're not really sophisticated enough to deal with the topological complexities of coronavirus given air travel, but still better than people who believe you get the virus from drinking too much Corona beer… 🤯
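The post doesn't show its models, but as a hypothetical illustration of what a bare-bones spatial diffusion model of infection looks like, here's nearest-neighbour spread on a ring (all parameters mine). The point about air travel is visible by omission: long-range links between distant nodes are exactly what this purely local picture can't capture.

```python
import random

def spread(n=200, steps=50, p=0.5, seed=1):
    """Infection spreading only to lattice neighbours on a ring of n sites.
    Each step, every infected site infects each neighbour with prob. p.
    Returns the number of infected sites after the given number of steps."""
    rng = random.Random(seed)
    infected = [False] * n
    infected[0] = True          # patient zero
    for _ in range(steps):
        nxt = infected[:]
        for i in range(n):
            if infected[i]:
                for j in ((i - 1) % n, (i + 1) % n):
                    if not infected[j] and rng.random() < p:
                        nxt[j] = True
        infected = nxt
    return sum(infected)

print(spread())  # the infected front creeps outward roughly p sites/step per side
```

With air travel, you'd add random long-range edges to the lattice, and the front-propagation behaviour breaks down entirely.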




Better choose winners of the past or the new thing?


Based on the following tweet by TJIC, author of Prometheus Award-winning hard scifi books (first, second) about homesteading the Moon, with uplifted (genetically engineered, intelligent) Dogs and sentient AI,


I decided to create a simple model and just run with it. For laughs only.

We need some sort of metric of quality, $x$, and we'll assume that since people can stop reading a novel if it's too bad, $x \ge 0$. We also know Sturgeon's law, that 90% of everything is dross, so we'll need a distribution concentrated near zero with a long right tail. For now we're okay with the exponential distribution $f_X(x) = \lambda \exp(-\lambda x)$, and we'll start with $\lambda = 1$.

Instead of changing the average quality of the novels for different years, we'll change the sample size from which the winners are chosen; what we're interested in is, therefore, $M(x)_N = E\left[\max\{x_1,\ldots,x_N\}\right]$ for different $N$, the number of novels. For $10 \le N \le 100000$, we can use a simple simulation to find those $M(x)_N$:
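The original code isn't shown here, so this is a minimal Python sketch of such a simulation (function names and trial counts are mine). As a sanity check, the expected maximum of $N$ standard exponentials is exactly the harmonic number $H_N = \sum_{k=1}^{N} 1/k$:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_max(n, trials=500):
    """Monte Carlo estimate of M(x)_N = E[max of n Exp(1) draws]."""
    maxima = np.empty(trials)
    for t in range(trials):                  # one simulated "year" per trial
        maxima[t] = rng.exponential(size=n).max()
    return maxima.mean()

def harmonic(n):
    """Closed form: E[max of n Exp(1) draws] equals the harmonic number H_n."""
    return (1.0 / np.arange(1, n + 1)).sum()

for n in (10, 100, 1000, 10000, 100000):
    print(f"M(x)_{n} ~ {expected_max(n):.4f}  (exact H_n = {harmonic(n):.6f})")
```

The simulated values land on $H_N$ up to Monte Carlo noise, which is consistent with the figures below.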


The results are

$M(x)_{10} = 2.899432$
$M(x)_{100} = 5.230011$
$M(x)_{1000} = 7.512119$
$M(x)_{10000} = 9.750539$
$M(x)_{100000} = 12.122326$

Let's say there are between 100 and 1000 scifi novels worthy of that name in any given year of the last 100 years. A random past winner then has expected quality between $M(x)_{100} \approx 5.2$ and $M(x)_{1000} \approx 7.5$, while a random new novel has expected quality $1/\lambda = 1$. So, unless the new novels have on average between 5.2 and 7.5 times the average quality of those in the previous 100 years, one is better off picking a winner at random from those 100 years than a random new novel.

(Yes, there's a lot of nonsense in this model, but the idea is just to show that when there's a long right tail, which comes from Sturgeon's law --- and this one isn't even that steep --- randomly picking past winners is a better choice than randomly picking new novels, even if quality improved a bit relative to the past.)



No numbers, just Bay Area seamanship





Live long and prosper.