Thursday, December 26, 2019

Fun With Numbers for Boxing Day, 2019

Some collected numerical fun from twitter to end the year.


As an amuse-bouche, if you're going to mock other people for their lack of intelligence, perhaps don't make trivial arithmetic errors…


(In accordance with my recent resolution to be more positive by not posting negative content, I didn't post this to twitter and I obscured the author.)



Geometry and trigonometry to the rescue


Scott Manley likes For All Mankind, but would like the producers to get the science right a bit more often:


Trust but verify, as they said in the Soviet Union:


In case the trigonometry isn't obvious, the angle (call it $\alpha$) is what translates a horizontal extent (say $l_1$, seen from altitude $h_1$) into vertical distance via the magic of tangents: $\tan(\alpha/2) = l_1/(2 h_1)$, from which we get $h_1 = l_1/(2 \tan(\alpha/2))$.


The calculation above is actually for a FoV of 60° (camera), not 120° (eyes) as stated in the text: I used a hand calculator and post-its and transcribed the result from the wrong post-it, so that result is about twice the correct one. For more accuracy, here are the different altitudes, calculated [using a spreadsheet, like a proper responsible adult] as a function of the angle subtended by the big ship (around 50 m linear dimension):
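For reference, a minimal R sketch of that tangent calculation; the 50 m ship size comes from the text above, and the sample angles are just illustrative placeholders, not measurements from the show:

# distance (here, difference in altitude) from the angular size of an object
# angle_deg: angle subtended by the object, in degrees
# size_m:    linear dimension of the object, in meters (~50 m per the text)
distance_from_angle <- function(angle_deg, size_m = 50) {
  angle_rad <- angle_deg * pi / 180
  size_m / (2 * tan(angle_rad / 2))
}

# illustrative angular sizes (degrees), not taken from the episode
angles <- c(0.5, 1, 2, 5)
data.frame(angle_deg = angles,
           distance_m = round(distance_from_angle(angles)))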


(There are many approximations and precision trade-offs in the measurement, but SM's point holds: these are clearly different orbits and no one in the production or writing team seems to have noticed.)



It's only the equivalent of one to five .50-cal bullets...


The Hacksmith made one of those "how much dangerous nonsense can we post before YouTube throttles our channel" videos:


and I checked their physics:


They replied on twitter that the maximum speed was over 2000 RPM, at which point I calculated that the kinetic energy was close to that of five .50-cal bullets.
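For the curious, here's the shape of that calculation in R. I'm not reproducing the video's actual mass and dimensions, so the rotor mass and radius below are placeholder assumptions; the point is the $\omega = 2\pi f$ conversion and the $\tfrac{1}{2}I\omega^2$ versus $\tfrac{1}{2}mv^2$ comparison (a .50 BMG round is roughly 45 g at about 900 m/s):

# rotational kinetic energy at a given RPM vs. the muzzle energy of a .50 BMG round
# rotor mass and radius are placeholder assumptions, not the video's numbers
rpm      <- 2000
mass_kg  <- 20                              # assumed rotor mass
radius_m <- 0.5                             # assumed rotor radius
I        <- 0.5 * mass_kg * radius_m^2      # solid-disk approximation
omega    <- rpm * 2 * pi / 60               # rad/s
ke_rotor_J <- 0.5 * I * omega^2

ke_50cal_J <- 0.5 * 0.045 * 900^2           # ~45 g bullet at ~900 m/s

c(rotor_kJ     = ke_rotor_J / 1000,
  fifty_cal_kJ = ke_50cal_J / 1000,
  ratio        = ke_rotor_J / ke_50cal_J)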

What could go wrong, amirite?

(I like how the producers of Nikita [with Maggie Q, not La Femme Nikita with Peta Wilson] thought that the Steyr HS .50 was an appropriate rifle for a shot through a window across a city street. Spoiler alert: it isn't; it's too much gun, in the words of Mike Ehrmantraut. The rifle looks gigantic next to Maggie Q, which is probably why they chose that caliber instead of something in .223 or .308, either of which would be more appropriate --- he said with all his marksmanship expertise acquired on the training fields of the Xbox.)



Et tu, Arthur C. Clarke?


Usually A.C. Clarke's science is spot-on (excerpt from The Songs of Distant Earth),


 but in this case, no:


(We could say that it's the captain of the Magellan that's wrong, perhaps exaggerating for effect, not A.C. Clarke, but that's a cop-out.)

Here's an example of A.C. Clarke getting much harder science right, from Rendezvous with Rama (an old tweet, from the era when I wasn't blogging):


(I mean, what kind of nerd does numerical integration to check on the feasibility of a scifi author's solution to a minor plot point just to post it on twitter? This guy! 🤓 [Pointing both thumbs at self.])



Tidal turbines and bad interpretation of statistics


Real Engineering had an interesting video about tidal turbines:


But I had an issue with the conclusions from the impact study, because they repeat a common error: mistaking statistical significance (or lack thereof) for effect size. This point deserves a better treatment, but for now here's a simple example:
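A toy version of that kind of example in R (simulated numbers, nothing to do with the actual impact study): a trivially small effect becomes "statistically significant" with a large enough sample, while a large effect can fail to reach significance with a small sample.

# statistical significance is not effect size: two simulated illustrations
set.seed(42)

# tiny effect (0.02 sd), huge sample: typically "significant" yet practically irrelevant
tiny  <- t.test(rnorm(1e5, mean = 0.02), rnorm(1e5, mean = 0))
# large effect (1 sd), small sample: practically important, often "not significant"
large <- t.test(rnorm(6, mean = 1), rnorm(6, mean = 0))

c(tiny_effect_p = tiny$p.value, large_effect_p = large$p.value)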


The energy density of the ocean, like that of other renewables, is still a bit on the low side. Compared to Canadian actinides, it's certainly lacking:




Carbon capture wonky accounting


The XPrize has a video on "Everyday Products Made Out of Thin Air":



I like the XPrize and the ideas behind it, but most of these 'carbon capture products' are complete nonsense. The CO2 footprint of the processes that make and market each product is much larger than the CO2 captured in it. In other words, these products harm the environment by increasing the total CO2 output.

(Yes, I've covered this before, on one of the rare occasions I agreed with Thunderf00t.)

If you create say 1000 tonnes of CO2 building a factory to make a product that captures 100 g of carbon per unit, you need to make over 2.7 million units just to capture the CO2 created by building the factory alone! (If the product has 100 g of carbon, that came from 44/12*100 = 367 g of CO2.) Not counting the footprint of packaging, delivery, etc.
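The same arithmetic as a quick R check:

factory_co2_g   <- 1000 * 1e6               # 1000 tonnes of CO2, in grams
carbon_per_unit <- 100                      # grams of carbon captured per unit
co2_per_unit    <- carbon_per_unit * 44 / 12   # ~367 g of CO2 per unit
factory_co2_g / co2_per_unit                # ~2.7 million units just to break even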

(This is the same accounting problem that people have comparing the CO2 footprints in production of wind turbines and gas turbines. If the gas turbines already exist and the wind turbines don't, the CO2 footprint of building them has to enter the calculation [but never does…].)

Note also that the products aren't made of 100% carbon, so a correct accounting of how much CO2 they capture would also have to include the CO2 footprint of the other components and their delivery; on that count alone, these 'capture' products usually end up creating net CO2.

Let us not forget delivery; even if we just consider local delivery with a city van (like those that are always blocking traffic in San Francisco by being double-parked in awkward places, not that traffic moves in San Francisco, vans or no vans), the numbers aren't encouraging:

A Ford Transit cargo van is rated for 25 MPG in the city. Assuming for simplicity that gasoline is 100% trimethylpentane, burning 1 kg of gasoline yields 3.1 kg of CO2. One gallon of gasoline is 2.86 kg (3.79 l * 0.755 kg/l), so 100 miles of delivery route (4 gallons) has a 35.5 kg CO2 footprint. If each product unit has 100 g of captured carbon (367 g of CO2), it takes 97 units in that delivery route just to make up for the delivery itself.
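In R, with the same inputs:

mpg           <- 25                    # Ford Transit, city rating
kg_per_gallon <- 3.79 * 0.755          # ~2.86 kg of gasoline per gallon
co2_per_kg    <- 3.1                   # kg of CO2 per kg of gasoline (isooctane)
route_miles   <- 100
route_co2_kg  <- route_miles / mpg * kg_per_gallon * co2_per_kg   # ~35.5 kg
co2_per_unit  <- 0.100 * 44 / 12       # 100 g of captured carbon = ~0.367 kg CO2
ceiling(route_co2_kg / co2_per_unit)   # ~97 units just to cover the delivery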

Here are some real carbon capture products: first some really big ones a little bit south of the Bay Area


More: https://www.flickr.com/photos/josecamoessilva/albums/72157629918640442

and one of the same species that sprang from a seed taken to the Moon (story)


More: https://www.flickr.com/photos/josecamoessilva/albums/72157687657575895

I like trees.


Tuesday, December 10, 2019

Analysis paralysis vs precipitate decisions

Making good decisions includes deciding when you should make the decision.

There was a discussion on twitter where Tanner Guzy (whose writings/tweets about clothing provide a counterpoint to the stuffier subforums of The Style Forum and the traditionalist The London Lounge) expressed a common opinion that is, alas, too reductive:


The truth is out there... ahem, is more complicated than that:


Making a decision without enough information is precipitate and usually leads to wrong decisions, in that even if the outcome turns out well it's because of luck; relying on luck is not a good foundation for decision-making. The thing to do is continue to collect information until the risk of making a decision is within acceptable parameters.

(If a decision has to be made by a certain deadline, then the risk parameters should work as a guide to whether it's better to pass on the opportunities afforded by the decision or to risk making the decision based on whatever information is available at that time.)

Once enough information has been obtained to make the decision risk acceptable, the decision-maker should commit to the appropriate course of action. If the decision-maker keeps postponing the decision and waiting for more information, that's what is correctly called "analysis paralysis."

Let us clarify some of these ideas with numerical examples, using a single yes/no decision for simplicity. Say our question is whether to short the stock of a company that's developing aquaculture farms in the Rub' al Khali.

Our quantity of interest is the probability that the right choice is "yes," call it $p(I_t)$, where $I_t$ is the set of information available at time $t$. At time zero we'll have $p(I_0) = 0.5$ to represent a no-information state.

Because we can hedge the decision somewhat, there's a defined range of probabilities for which the risk is unacceptable (say from 0.125 to 0.875 for our example), but outside of that range the decision can be taken: if the probability is consistently above 0.875 it's safe to choose yes, if it's below 0.125 it's safe to choose no.

Let's say we have some noisy data; there's one bit of information out there, $T$ (for true), which is either zero or one (zero means the decision should be no, one that it should be yes), but each data event is a noisy representation of $T$, call it $E_i$, where $i$ indexes the data events, defined as

$E_i = T $ with probability $1 - \epsilon$  and

$E_i = 1-T $ with probability $\epsilon$,

where $\epsilon$ is the probability of an error. These data events could be financial analysts' reports, feasibility analyses of aquaculture farms in desert climates, political stability in the area that might affect industrial policies, etc. As far as we're concerned, they're either favorable (if 1) or unfavorable (if 0) to our stock short.

Let's set $T=1$ for illustration, in other words, "yes" is the right choice (as seen by some hypothetical being with full information, not the decision-maker). In the words of the example decision, $T=1$ means it's a good idea to short the stock of companies that purport to build aquaculture farms in the desert (the "yes" decision).

The decision-maker doesn't know that $T=1$, and uses as a starting point the no-knowledge position, $p(I_0) = 0.5$.

The decision-maker collects information until such a time as the posterior probability is clearly outside the "zone of unacceptable risk," here the middle 75% of the probability range. Probabilities are updated using Bayes's rule assuming that the decision-maker knows the $\epsilon$, in other words the reliability of the data sources:

$p(I_{k+1} | E_{k+1} = 1) = \frac{ (1- \epsilon) \times p(I_k)}{(1- \epsilon) \times p(I_k) + \epsilon \times (1- p(I_k))}$  and

$p(I_{k+1} | E_{k+1} = 0) = \frac{ \epsilon \times p(I_k)}{  \epsilon \times p(I_k) + (1- \epsilon) \times (1- p(I_k)) }$.
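Here's a minimal R sketch of the simulation described above, using the $\epsilon = 0.3$ of the first example below (the specific random draws won't match the figures, of course):

set.seed(1)

epsilon  <- 0.3          # noise level of the data sources
T_true   <- 1            # the (unknown to the decision-maker) right answer is "yes"
n_events <- 21
lower    <- 0.125        # zone of unacceptable risk: (0.125, 0.875)
upper    <- 0.875

# noisy data events: equal to T with probability 1 - epsilon
events <- ifelse(runif(n_events) < 1 - epsilon, T_true, 1 - T_true)

p <- 0.5                 # p(I_0): no-information prior
posterior <- numeric(n_events)
for (k in seq_len(n_events)) {
  if (events[k] == 1) {
    p <- (1 - epsilon) * p / ((1 - epsilon) * p + epsilon * (1 - p))
  } else {
    p <- epsilon * p / (epsilon * p + (1 - epsilon) * (1 - p))
  }
  posterior[k] <- p
}

round(posterior, 3)
which(posterior > upper | posterior < lower)   # events outside the risky zone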

For our first example, let's have $\epsilon=0.3$, a middle-of-the-road case. Here's an example (the 21 data events are in blue, but we can only see the ones because the zeros have zero height):


We get twenty-one reports and analyses; some (1, 4, 6, 8, 9, 13, 14, and 21) are negative (they say we shouldn't short the stock), while the others are positive; this data is used to update the probability, in red, and that probability is used to drive the decision. (Note that event 21 would be irrelevant as the decision would have been taken before that.)

In this case, making a decision before the 17th data event would be precipitate; for better resilience, one should wait for at least two more events without re-entering the zone of unacceptable risk before committing to a yes, so making the decision only after event 19 isn't a case of analysis paralysis.

Another example, still with $\epsilon=0.3$:


In this case, committing to yes after event 13 would be precipitate, whereas after event 17 would be an appropriate time.

If we now consider cases with lower noise, $\epsilon=0.25$, we can see that decisions converge to the "yes" answer faster and also why one should not commit as soon as the first data event brings the posterior probability outside of the zone of unacceptable risk:



If we now consider cases with higher noise, $\epsilon=0.4$, we can see that it takes longer for the information to converge (longer than the 21 events depicted) and therefore a responsible decision-maker would wait to commit to the decision:



In the last example, the decision-maker might take a gamble after data event 18, but to be sure the commitment should only happen after a couple of events in which the posterior probability stays outside the zone of unacceptable risk.

Deciding when to commit to a decision is as important as the decision itself; precipitate decisions come from committing too soon, analysis paralysis from a failure to commit when appropriate.

Sunday, December 1, 2019

Fun with Numbers for December 1, 2019

007: GoldenEye gets an orbit right


I was reading the book 007: GoldenEye and noticed that Xenia Onatopp's description doesn't match Famke Janssen's looks; oh, and also this:


At first glance, the book appears to be playing fast and loose with orbits; after all, the ISS, which orbits around 400 km, is also on a roughly 90-minute orbit. So, let us check the numbers.

The first step is computing the acceleration of gravity $g_{100}$ at 100 km altitude. Using Newton's formula we could compute it from first principles (radius and mass of the Earth, gravitational constant... too many things to look up), or we can use the precomputed $g=$ 9.8 m/s$^2$ and scale it to that altitude using the ratio of Newton's formula at the two radii (using 6370 km as the radius of the Earth):

$ g_{100} = 9.8 \times \left(\frac{6370}{6470}\right)^2 = 9.5$ m/s$^2$

This acceleration has to match the centripetal acceleration of circular motion with radius 6470 km, $a = v^2/r = g_{100}$, yielding an orbital speed of 7.84 km/s.

The circumference of a great circle at 100 km altitude is $2 \times \pi \times 6470$ km = 40,652 km, giving a total orbit time of about 5185 s, or 1 hour, 26 minutes, and 25 seconds. So close enough to ninety minutes for a general.
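The same calculation in R, parameterized by altitude (same approximations: $g =$ 9.8 m/s$^2$, Earth radius of 6370 km):

orbital_period_s <- function(altitude_km, g0 = 9.8, R_earth_km = 6370) {
  r_m <- (R_earth_km + altitude_km) * 1000
  g   <- g0 * (R_earth_km / (R_earth_km + altitude_km))^2  # gravity at altitude
  v   <- sqrt(g * r_m)                                     # circular orbital speed
  2 * pi * r_m / v                                         # period in seconds
}

orbital_period_s(100) / 60    # ~86 minutes for the GoldenEye orbit
orbital_period_s(400) / 60    # ~92 minutes for the ISS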

So, yes, GoldenEye's orbit makes sense (-ish), even though it's much lower than that of the ISS, which also has a roughly 90-minute orbital period (92 minutes, and it's in a very mildly elliptical orbit).

On the other hand, a 100 km orbit would graze the atmosphere (it's inside the thermosphere layer, near the bottom) and therefore lose energy over time, so not a great orbit to place an orbital weapon masquerading as a piece of space debris, because you can't boost up "space debris."

Here are the circular orbital times for different altitudes. Because of the approximations ($g=9.8$ m/s$^2$, Earth radius of 6370 km), the errors increase with altitude; they're obvious for the GEO orbit (in yellow), but still not bad: since even GEO is off by less than 2 minutes 38 seconds, all the other orbits are off by less than that:




There's no True(x) function for the internet (or anywhere else)



(Ignore the bad grammar, it was a long day.)

What happens if we feed the [putative social media lie-detector] function $\mathrm{TRUE}(x)$ the statement $x=$"the set of all sets that don't contain themselves contains itself"?

Let's take a short detour to the beginning of the last century...

Most sets one encounters in everyday math don't contain themselves: the set of real numbers $\mathbb{R}$ doesn't contain itself, neither does the set $\{$chocolate, Graham cracker, marshmallow$\}$, for example. So one could collect all these sets that don't contain themselves into a set $S$, the set of all sets that don't contain themselves. So far so good, until we ask whether $S$ contains itself.

Well, one would reason, let's say $S$ doesn't contain itself; then $S$ is a set that doesn't contain itself, which means it's one of the sets in $S$. Oops.

Maybe if we start from the other side: say $S$ contains itself; but in that case $S$ is a set that contains itself, and doesn't belong in $S$.

This is Russell's paradox, and it shows that there are statements to which no truth value can consistently be assigned.



On the price of micro-SD cards


Browsing Amazon for Black Friday deals (I saved 100% on Black Friday with coupon code #DontBuyUnnecessaryStuff and you can too), I saw these micro-SD cards:


Instead of buying them, I decided to analyze their prices, first computing the average cost per GB (as seen above) and then realizing that there's a fixed component to the price apart from the cost per GB, which a simple linear model captures:
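The fit itself is a one-liner in R; the capacities and prices below are made-up placeholders (the actual Amazon listings are in the table above), just to show the shape of the model:

# price ~ fixed component + cost per GB; illustrative numbers only
cards <- data.frame(
  capacity_GB = c(32, 64, 128, 256, 512),
  price_USD   = c(7.5, 10.0, 15.5, 26.0, 48.0)   # placeholders, not the real listings
)
fit <- lm(price_USD ~ capacity_GB, data = cards)
coef(fit)   # intercept = fixed component, slope = cost per GB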




All the electricity California needs is about 6 kilos of antimatter


I was reading a report on how much it costs to decommission (properly) a wind farm and realized that if we just had some antimatter lying around (!), California's energy needs could be met with very small quantities.
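Roughly, in R; the ~280 TWh/year figure for California's annual electricity consumption is my round-number assumption, and the 6 kg of antimatter takes 6 kg of ordinary matter with it when it annihilates:

c_light    <- 3e8                         # m/s
m_kg       <- 6 + 6                       # 6 kg antimatter annihilating 6 kg of matter
energy_TWh <- m_kg * c_light^2 / 3.6e15   # E = mc^2; 1 TWh = 3.6e15 J
ca_TWh     <- 280                         # rough annual CA electricity consumption (assumption)
c(energy_TWh = energy_TWh, years_of_california = energy_TWh / ca_TWh)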


Okay, antimatter is a bit dangerous, so how about we develop that cold fusion people keep talking about? Here:


(Divide that by an efficiency factor if you feel like it.)



Relativity misconceptions and the reason I restarted blogging


I was listening to a podcast with Hans G Schantz, author of The Hidden Truth series (a trilogy so far; fans eagerly await the fourth installment; highly recommended) and he had to correct the podcast host on what I've noticed is a very common misconception: that "near" the speed of light relativistic effects are very large.

Which is true, for an appropriate understanding of "near."

Time dilation, space contraction, and mass increase are all regulated by a function $\gamma(v) = (1 -(v/c)^2)^{-1/2}$, a very non-linear function. For the type of effects that people typically think about, like tenfold increases, we're talking about speeds near $0.995 c$; for the type of effect that would be noticeable in  small objects or short durations, one needs to go significantly above that:
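A quick look at how non-linear $\gamma$ is, in R:

gamma <- function(beta) 1 / sqrt(1 - beta^2)   # beta = v/c

betas <- c(0.5, 0.9, 0.99, 0.995, 0.9999, 0.999999)
data.frame(v_over_c = betas, gamma = round(gamma(betas), 2))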


Interestingly, the decision to restart blogging (first under the new name "Fun with numbers," then back to the admonition to keep one's thoughts to oneself by Boetius) was due to a number of calculations I had been tweeting regarding relativistic effects in the Torchship trilogy by Karl K Gallagher (highly recommended as well). Here are some examples, from Twitter:



And it's always heartwarming to see an author who keeps the science fiction human: that in a universe with mass-to-energy converters, wormhole travel, rampaging artificial intelligences, and AI-made elements beyond oganesson (element 118), there's a place for the problem-solving power of a wrench:





Computerphile has a simple data analysis course on YouTube using R



Link to the playlist here.
Download RStudio here.



Another promising lab rig that I hope will become a product at scale



The Phys.org article is here and the actual Science Advances paper is here.

Strictly speaking, what the paper describes is a successful laboratory test rig, but let's be generous and consider it a successful tech demo, also known in the low-tech world as a proof-of-concept. Note that though not all successful lab test rigs become successful tech demos, the proportion of successful rigs that do is much higher than the proportion of lab rigs in general (successful and otherwise) that become tech demos, so it's not that big a leap in the technology development process.

Friday, November 22, 2019

Fun with numbers for November 22, 2019

How lucky can asteroid miners be?



So, I was speed-rereading Orson Scott Card's First Formic War books (as one does; the actual books, not the comics, BTW), and took issue with the luck involved in noticing the first formic attack ship.

Call it the "how lucky can you get?" issue.

Basically, the miner ship El Cavador (literally "The Digger" in Castilian), in the Kuiper belt, had to be incredibly lucky to see the formic ship, since the latter wasn't in the plane of the ecliptic and therefore could be anywhere in the space between 30 AU (4,487,936,130 km) and 55 AU (8,227,882,905 km) distance from the Sun.

The volume of the spherical shell between radii $r_2$ and $r_1$ (with $r_2 < r_1$) is $\frac{4}{3}\pi (r_1^3 - r_2^3)$, so the volume between 30 and 55 AU is about $1.95 \times 10^{30}$ cubic kilometers.

Let's say the formic ship is as big as the area of Manhattan with 1 km height, i.e. 60 km$^3$. What the hay, let's add a few other boroughs and make it 200 km$^3$. Then it occupies a fraction of about $1 \times 10^{-28}$ of that space.

To put that fraction into perspective, the odds of winning each of the various lotteries in the US are around 1 in 300 million or so; the probability of the formic ship being in a specific point of the volume is about one tenth of the probability of winning three lotteries and throwing a pair of dice and getting two sixes, all together.

What if the ship was as big as the Earth, or it could be detected within a ball of the radius of the Earth? Earth volume is close to 1 trillion cubic kilometers, so the fraction is about $5 \times 10^{-19}$, or roughly 1 in 2 quintillion; much more likely: in the same ballpark as winning two lotteries and drawing the king of hearts from a deck of cards, simultaneously.

Let us be a little more generous with the discoverability of the formic ship. Let's say it's discoverable within a light-minute; that is, all El Cavador has to do is observe a ball with 1 light-minute radius that happens to contain the formic ship. In this case, the odds are significantly better: about 1 in 80 million. Note that one light-minute is about 1/3 the distance between the Sun and Mercury, so this is a very large ball.

If we make an even more generous assumption of discoverability within one light-hour, the odds are about 1 in 370. But this is a huge ball: if centered on the Sun it would go past the orbit of Jupiter, with a radius about 1.4 times the distance between the Sun and Jupiter. And that's still less than a 0.3% chance of detecting the ship.
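For the record, here's the whole thing as a quick R check (shell volume between 30 and 55 AU, and the detection fractions for the various "discoverability" balls):

AU_km        <- 149597871
shell_km3    <- 4/3 * pi * ((55 * AU_km)^3 - (30 * AU_km)^3)   # ~1.95e30 km^3

ball_km3     <- function(r_km) 4/3 * pi * r_km^3
light_min_km <- 299792.458 * 60
light_hr_km  <- 299792.458 * 3600

fractions <- c(
  ship_200km3 = 200 / shell_km3,
  earth_ball  = 1.08e12 / shell_km3,
  light_min   = ball_km3(light_min_km) / shell_km3,
  light_hr    = ball_km3(light_hr_km) / shell_km3
)
signif(fractions, 3)
signif(1 / fractions, 3)    # the same numbers as "1 in N" odds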

Okay, it's a suspension of disbelief thing. With most space opera there's a lot of things that need to happen so that the story isn't "alien ship detected, alien weapon deployed, human population terminated, aliens occupy the planet, the end." So, the miners on El Cavador got lucky and, consequently, a series of novels exploring sociology more than science or engineering can be written.

Still, the formic wars are pretty good space opera, so one forgives these things.



Using Tribonacci numbers to measure Rstats performance on the iPad


Fibonacci numbers are defined by $F(1) = F(2)= 1$ and $F(n) = F(n-1) + F(n-2)$ for $n>2$. A variation, "Tribonacci" numbers ("tri" for three), uses $T(1) = T(2) = T(3) = 1$ and $T(n) = T(n-1) + T(n-2) + T(n-3)$ for $n>3$. These are easy enough to compute with a loop, or for that matter, a spreadsheet:
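(A minimal loop version in R, for reference:)

tribonacci <- function(n) {
  t <- c(1, 1, 1)
  if (n <= 3) return(t[n])
  for (i in 4:n) t[i] <- t[i - 1] + t[i - 2] + t[i - 3]
  t[n]
}
tribonacci(30)   # 20603361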


(Yes, the sequence gets very close to an exponential. There's a literature on it and everything.)

Because of the triple recursion, these numbers are also a simple way to test the speed of a given platform. (The triple recursion forces a large number of function calls and if-then-else decisions, which strains the interpreter; obviously an optimizing compiler might transcode the recursion into a for-loop.)

For example, to test the R front end on the iPad nano-reviewed in a previous FwN, we can use this code:
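(The code in the screenshot isn't reproduced here; a plain recursive version with a timer, which is the idea, would look something like this:)

trib_recursive <- function(n) {
  if (n <= 3) return(1)
  trib_recursive(n - 1) + trib_recursive(n - 2) + trib_recursive(n - 3)
}

system.time(print(trib_recursive(25)))   # pick n to taste; 25 is already noticeable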


Since it runs remotely on a server, it wasn't quite as fast as on my programming rig, but at least it wasn't too bad.

Note that there's a combinatorial explosion of function calls, for example, these are the function calls for $T(7)$:


There's probably a smart mathematical formula for the total number of function calls in the full recursive formulation; being an engineer, I decided to let the computer do the counting for me, with this modified code:
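(Again, the screenshot isn't reproduced here; a version that does the same counting with a global call counter would look something like this:)

calls <- 0
trib_counted <- function(n) {
  calls <<- calls + 1                 # count every function call
  if (n <= 3) return(1)
  trib_counted(n - 1) + trib_counted(n - 2) + trib_counted(n - 3)
}

trib_counted(30)    # 20603361 (takes a while: ~31 million calls)
calls               # 30905041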


And the results of this code (prettified on a spreadsheet, but computed by RStudio):


For $T(30)= 20,603,361$ there are 30,905,041 function calls. This program is a good test of function call execution speed.


Charlie's Angels and Rotten Tomatoes



Since the model is parameterized, all I need to compute one of these is to enter the audience and critic numbers and percentages. Interesting how the critics and the audience are in agreement in the 2019 remake, though the movie hasn't fared too well in the theaters. (I'll watch it when it comes to Netflix, Amazon Prime, or Apple TV+, so I can't comment on the movie itself; I liked the 2000 and 2003 movies, as comedies that they were.)



Late entry: more fun with Tesla



15-40 miles of range, using TSLA's 300 Wh/mile, is 4.5 kWh to 12 kWh. Say 12 hours of sunlight, so we're talking 375 to 1000 W of solar panels. For typical solar panels mounted at appropriate angles (150 W/m$^2$), that's 2.5 to 6.7 square meters of solar panels…
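The back-of-the-envelope, in R:

range_mi  <- c(15, 40)
wh_per_mi <- 300
kwh       <- range_mi * wh_per_mi / 1000      # 4.5 to 12 kWh
watts     <- kwh * 1000 / 12                  # averaged over 12 h of sunlight
watts / 150                                   # m^2 of panels at 150 W/m^2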

Yeah, right!



No numbers: some Twitterage from last week


Smog over San Francisco, like it's 1970s Hell-A


Misrepresenting nuclear with scary images


Snarky, who, me?



Alien-human war space opera – a comprehensive theory





Friday, November 15, 2019

Fun with numbers for November 15, 2019

How many test rigs for a successful product at scale?


From the last Fun with Numbers:


This is a general comment on how new technologies are presented in the media: usually something that is either a laboratory test rig or at best a proof-of-concept technology demonstration is hailed as a revolutionary product ready to take the world and be deployed at scale.

Consider how many "a lot of" is, as a function of the success probabilities at each stage:
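The table's probabilities aren't reproduced here; as a hypothetical illustration with made-up stage probabilities, the expected number of lab rigs per product at scale is just the inverse of the product of the stage success probabilities:

# hypothetical success probabilities at each stage of development
p_stage <- c(rig_to_demo      = 0.25,
             demo_to_product  = 0.20,
             product_to_scale = 0.30)

1 / prod(p_stage)   # expected lab rigs per product at scale: ~67 with these numbers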


Yep, notwithstanding all the good intentions in the world, there's a lot of work to be done behind the scenes before a test rig becomes a product at scale, and many of the candidates are eliminated along the way.



Recreational math: statistics of the maximum draw of N random variables


At the end of a day of mathematical coding, and since RStudio was already open (it almost always is), I decided to check whether running 1000 iterations versus 10000 iterations of simulated maxima (drawing N samples from a standard normal distribution and computing the maximum, repeated either 1000 times or 10000 times) makes a difference. (Yes, an elaboration on the third part of this blog post.)

Turns out, not a lot of difference:


Workflow: BBEdit (IMNSHO the best editor for coding) --> RStudio --> Numbers (for pretty tables) --> Keynote (for layout); yes, I'm sure there's an R package that does layouts, but this workflow is WYSIWYG.

The R code is basically two nested for-loops, the built-in functions max and rnorm doing all the heavy lifting.
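Something along these lines (a minimal sketch, with N = 10,000 draws per maximum to match the later runs):

set.seed(123)
draws_N    <- 10000                 # N samples per maximum
iterations <- c(1000, 10000)

for (iters in iterations) {
  maxima <- numeric(iters)
  for (i in seq_len(iters)) {
    maxima[i] <- max(rnorm(draws_N))
  }
  cat(iters, "iterations: mean =", round(mean(maxima), 4),
      "sd =", round(sd(maxima), 4), "\n")
}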

Added later: since I already had the program parameterized, I decided to run a 100,000 iteration simulation to see what happens. Turns out, almost nothing worth noting:


Adding a couple of extra lines of code, we can iterate over the number of iterations, so for now here's a summary of the preliminary results (to be continued later, possibly):


And a couple of even longer simulations (all for the maximum of 10,000 draws):


Just for fun, the theoretical probability that the maximum of $N$ draws (powers of ten of $N$ in this example) is greater than some given $x$ is:
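(For the record, assuming the same standard normal draws as above, the formula behind those curves is $\Pr(\max_N > x) = 1 - \Phi(x)^N$, where $\Phi$ is the standard normal CDF; in R, that's 1 - pnorm(x)^N.)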




More fun with Solar Roadways


Via EEVblog on twitter, the gift that keeps on giving:


This Solar Roadways installation is in Sandpoint, ID (48°N). Solar Roadways claims its panels can be used to clear the roads by melting the snow… so let's do a little recreational numerical thermodynamics, like one does.

Average solar radiation level for Idaho in November: 3.48 kWh per m$^2$ per day or 145 W/m$^2$ average power. (This is solar radiation, not electrical output. But we'll assume that Solar Roadways has perfectly efficient solar panels, for now.)

Density of fallen snow (lowest estimate, much lower than fresh powder): 50 kg/m$^3$ via the University of British Columbia.

Energy needed to melt 1 cm of snowfall (per m$^2$): 50 [kg/m^3] $\times$ 0.01 [m/cm] $\times$ 334 [kJ/kg] (enthalpy of fusion for water) = 167 kJ/m$^2$ ignoring the energy necessary to raise the temperature, as it's usually much lower than the enthalpy of fusion (at 1 atmosphere and 0°C, the enthalpy of fusion of water is equal to the energy needed to raise the temperature of the resulting liquid water to approximately 80°C).

So, with perfect solar panels and perfect heating elements, in fact with no energy loss anywhere whatsoever, Solar Roadways could deal with a snowfall of 3.1 cm per hour (= 145 $\times$ 3600 / 167,000) as long as the panel and surroundings (and snow) were at 0°C.
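The arithmetic as a quick R check:

solar_W_m2   <- 145       # average November solar radiation in Idaho
snow_density <- 50        # kg/m^3, low-end estimate
fusion_kJ_kg <- 334       # enthalpy of fusion of water

kJ_per_cm_m2 <- snow_density * 0.01 * fusion_kJ_kg      # 167 kJ per cm of snow per m^2
solar_W_m2 * 3600 / (kJ_per_cm_m2 * 1000)               # ~3.1 cm of snow per hour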

Just multiply that 3.1 cm/hr by the efficiency coefficient to get more realistic estimates. Remember that the snow, the panels, and the surroundings have to be at 0°C for these numbers to work. Colder doesn't just make it harder; small changes can make it impossible (because the energy doesn't go into the snow, it goes into the surrounding area).



Another week, another Rotten Tomatoes vignette


This time for the movie Midway (the 2019 movie, not the 1976 classic Midway):


Critics and audience are 411,408,053,038,500,000 (411 quadrillion) times more likely to use opposite criteria than same criteria.

Recap of model: each individual has a probability $\theta_i$ of liking the movie/show; we simplify by having only two possible cases, critics and audience using the same $\theta_0$ or critics using a $\theta_1$ and audience using a $\theta_A = 1-\theta_1$. We estimate both cases using the four numbers above (percentages and number of critics and audience members), then compute a likelihood ratio of the probability of those ratings under $\theta_0$ and $\theta_1$. That's where the 411 quadrillion times comes from: the probability of a model using $\theta_1$ generating those four numbers is 411 quadrillion times the probability of a model using $\theta_0$ generating those four numbers. (Numerical note: for accuracy, the computations are made in log-space.)
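A sketch of that computation in R; the four inputs below are placeholders (plug in the actual Rotten Tomatoes counts and percentages), the estimation is plain maximum likelihood in each case, and everything is kept in log-space:

# placeholders: replace with the actual Rotten Tomatoes numbers
n_critics  <- 250;  pct_critics  <- 0.42
n_audience <- 5000; pct_audience <- 0.92

k_c <- round(n_critics * pct_critics)     # critics who liked it
k_a <- round(n_audience * pct_audience)   # audience members who liked it

# log-likelihood of the four numbers given critic/audience 'like' probabilities
loglik <- function(theta_c, theta_a) {
  dbinom(k_c, n_critics, theta_c, log = TRUE) +
  dbinom(k_a, n_audience, theta_a, log = TRUE)
}

# case 0: same theta for critics and audience (MLE: pooled proportion)
theta_0 <- (k_c + k_a) / (n_critics + n_audience)
# case 1: critics use theta_1, audience uses 1 - theta_1 (MLE below)
theta_1 <- (k_c + n_audience - k_a) / (n_critics + n_audience)

log_LR <- loglik(theta_1, 1 - theta_1) - loglik(theta_0, theta_0)
c(log_LR = log_LR, LR = exp(log_LR))   # how many times more likely "opposite criteria" is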



Google gets fined and YouTubers get new rules


Via EEVBlog's EEVblab #67, we learn that due to non-compliance with COPPA, YouTube got fined 170 million dollars and had to change some rules for content (having to do with children-targeted videos):


Backgrounder from The Verge here; or directly from the FTC: "Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children’s Privacy Law." (Yes, technically it's Alphabet now, but like Boaty McBoatface, the name everyone knows is Google. Even the FTC uses it.)

According to Statista: "In the most recently reported fiscal year, Google's revenue amounted to 136.22 billion US dollars. Google's revenue is largely made up by advertising revenue, which amounted to 116 billion US dollars in 2018."

170 MM / 136,220 MM =  0.125 %

2018 had 31,536,000 seconds, so that 170 MM corresponds to 10 hours, 57 minutes of revenue for Google. 

Here's a handy visualization:






Engineering, the key to success in sporting activities


Bowling 2.0 (some might call it cheating, I call it winning via superior technology) via Mark Rober:


I'd like a tool wall like his but it doesn't go with minimalism.



No numbers: recommendation success but product design fail.



Nerdy, pro-engineering products are a good choice for Amazon to recommend to me, but unfortunately many of them suffer from a visual form of "The Igon Value Problem."