Friday, January 13, 2017

Medical tests and probabilities

You may have heard this one, but bear with me.

Let's say you get tested for a condition that affects ten percent of the population and the test is positive. The doctor says that the test is ninety percent accurate (presumably in both directions). How likely is it that you really have the condition?

[Think, think, think.]

Most people, including most doctors themselves, say something close to $90\%$; they might shade that number down a little, say to $80\%$, because they understand that "the base rate is important."

Yes, it is. That's why one must do the computation rather than fall prey to anchoring-and-adjustment biases.

Here's the computation for the example above:
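Counting it out: in a population of 100, 10 people are sick and 9 of them test positive; of the 90 healthy people, 9 also test positive. So 9 of the 18 positives are actually sick. In Bayes' rule form:

\[ \Pr(\text{sick}|\text{positive}) = \frac{0.1 \times 0.9}{0.1 \times 0.9 + 0.9 \times 0.1} = \frac{0.09}{0.18} = \frac{1}{2}. \]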


One-half. That's the probability that you have the condition given the positive test result.

We can get a little more general: if the base rate is $\Pr(\text{sick}) = p$ and the accuracy (assumed symmetric) of the test is $\Pr(\text{positive}|\text{sick}) = \Pr(\text{negative}|\text{not sick})  = r $, then the probability of being sick given a positive test result is

\[ \Pr(\text{sick}|\text{positive}) = \frac{p \times r}{p \times r + (1- p) \times (1-r)}. \]

The following table shows that probability for a variety of base rates and test accuracies (again assuming that the test is symmetric, that is, the probabilities of a false positive and a false negative are the same; more about that below).
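If you want to reproduce the table, here's a minimal Python sketch; the particular grids of base rates and accuracies are illustrative choices, not necessarily the ones in the original table:

```python
# Posterior probability of being sick given a positive result,
# for a symmetric test (false-positive rate = false-negative rate = 1 - r).

def posterior(p, r):
    """Pr(sick | positive) for base rate p and symmetric accuracy r."""
    return (p * r) / (p * r + (1 - p) * (1 - r))

base_rates = [0.001, 0.01, 0.05, 0.10, 0.25]   # p: Pr(sick)
accuracies = [0.80, 0.90, 0.95, 0.99]          # r: Pr(positive | sick)

print("   p \\ r" + "".join(f"{r:>8.2f}" for r in accuracies))
for p in base_rates:
    print(f"{p:>8.3f}" + "".join(f"{posterior(p, r):>8.3f}" for r in accuracies))
```

Sanity check: posterior(0.10, 0.90) returns 0.5, the one-half from the example above.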


A quick perusal of this table shows some interesting things, such as the really low posterior probabilities for the very small base rates, even with very accurate tests (so, if you get a positive result for a very rare disease, don't fret too much; do the follow-up).


There are many philosophical objections to all the above, but as a good engineer I'll ignore them all and go straight to the interesting questions that people ask about that table; for example, how the accuracy (or precision) of the test works.

Let's say you have a test of some sort (cholesterol, blood pressure, etc.); it produces some output variable that we'll assume is continuous. Then there will be a distribution of these values for people who are healthy and, if the test is of any use, a different distribution for people who are sick. The scale is the same, but, for example, healthy people have blood pressure values centered around 110 over 80, while sick people have blood pressure values centered around 140 over 100.

So, depending on the variables measured, the type of technology available, and the combination of variables, one can have more or less overlap between the distributions of the test variable for healthy and sick people.

Assuming for illustration normal distributions with equal variance, here are two different tests, the second one being more precise than the first one:
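The original plots are images; here's a minimal matplotlib sketch of the same idea, plotted from the actual normal density (all means and the common standard deviation are illustrative assumptions, not values from any real test):

```python
# Two illustrative tests: equal-variance normal distributions of the test
# variable for healthy and sick people; test 2 separates the groups better.
import numpy as np
import matplotlib.pyplot as plt

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

x = np.linspace(50, 230, 500)
tests = [("Test 1 (more overlap)", 110, 140, 15),
         ("Test 2 (more precise)", 110, 180, 15)]

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, (title, mu_healthy, mu_sick, sd) in zip(axes, tests):
    ax.plot(x, normal_pdf(x, mu_healthy, sd), label="healthy")
    ax.plot(x, normal_pdf(x, mu_sick, sd), label="sick")
    ax.set_title(title)
    ax.legend()
plt.show()
```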



Note that these distributions are fixed by the technology, the medical variables, the biochemistry, etc; the two examples above would, for example, be the difference between comparing blood pressures (test 1) and measuring some blood chemical that is more closely associated with the medical condition (test 2), not some statistical magic made on the same variable.

Note that there are other ways that a test A can be more precise than test B, for example if the variances for A are smaller than for B, even if the means are the same; or if the distributions themselves are asymmetric, with longer tails on the appropriate side (so that the overlap becomes much smaller).

(Note that the use of normal distributions with similar variances above was only for example purposes; most actual tests have significant asymmetries and different variances for the healthy versus sick populations. It's something that people who discover and refine testing technologies rely on to come up with their tests. I'll continue to use the same-variance normals in my examples, for simplicity.)


A second question that interested (and interesting) people ask about these numbers is why the tests are symmetric (the probability of a false positive equal to that of a false negative). 

They are symmetric in the examples we use to explain them, since that makes the computation simpler. In reality, almost all important preliminary tests have a built-in bias towards the most robust outcome.

For example, many tests for dangerous conditions have a built-in positive bias, since the outcome of a positive preliminary test is more testing (usually followed by relief since the positive was a false positive), while the outcome of a negative can be lack of treatment for an existing condition (if it's a false negative).

To change the test from a symmetric error to a positive bias, all that is necessary is to change the threshold between positive and negative towards the side of the negative:
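A minimal numerical sketch of that threshold shift, again with illustrative equal-variance normals; sliding the threshold toward the healthy mean trades false negatives for false positives:

```python
# False-positive and false-negative rates as the decision threshold moves
# toward the healthy mean (all distribution parameters are illustrative).
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

MU_HEALTHY, MU_SICK, SD = 110.0, 140.0, 15.0

for threshold in (125.0, 120.0, 115.0):
    fp = 1 - phi((threshold - MU_HEALTHY) / SD)  # healthy flagged positive
    fn = phi((threshold - MU_SICK) / SD)         # sick flagged negative
    print(f"threshold {threshold:5.1f}:  FP = {fp:.3f}  FN = {fn:.3f}")
```

At the midpoint (125) the errors are symmetric; at 115 the test is strongly biased towards false positives.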



In fact, if you, the patient, have access to the raw data (you should be able to, at least in the US, where doctors treat patients like humans, not NHS cost units), you can see how far off the threshold you are and look up actual distribution tables on the internet. (Don't argue these with your HMO doctor, though; most of them don't understand statistical arguments.)

For illustration, here are the posterior probabilities for a test that has bias $k$ in favor of false positives, understood as $\Pr(\text{positive}|\text{not sick}) = k \times \Pr(\text{negative}|\text{sick})$, for some different base rates $p$ and probability of accurate positive test $r$ (as above):
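Written out, substituting the false-positive rate $k \times (1-r)$ into the earlier computation:

\[ \Pr(\text{sick}|\text{positive}) = \frac{p \times r}{p \times r + (1-p) \times k \times (1-r)}. \]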


So, this is good news: if you get a scary positive test for a dangerous medical condition, that test is probably biased towards false positives (because of the scary part), and therefore the probability that you actually have that scary condition is much lower than you'd think, even if you'd been trained in statistical thinking (because that training, for simplicity, almost always uses symmetric tests). So be a little more relaxed when getting the follow-up test.


There's a third interesting question that people ask when shown the computation above: the probability of someone getting tested to begin with. It's an interesting question because in all these computational examples we assume that the population that gets tested has the same distribution of sick and healthy people as the general population. But the decision to be tested is usually driven by some reason (mild symptoms, hypochondria, a job requirement), so the population of those tested may have a higher incidence of the condition than the general population.

This can be modeled by adding elements to the computation, which makes it more cumbersome and detracts from its value in making the point that base rates are very important. But it's a good elaboration, and many models used by doctors over-estimate base rates precisely because they miss this probability of being tested. More good news there!
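A minimal sketch of that elaboration: replace the population base rate $p$ with the base rate among those who actually get tested,

\[ \Pr(\text{sick}|\text{tested}) = \frac{\Pr(\text{tested}|\text{sick}) \times p}{\Pr(\text{tested}|\text{sick}) \times p + \Pr(\text{tested}|\text{not sick}) \times (1-p)}, \]

and use that in place of $p$ in the posterior formula above; when the sick are more likely to get tested, it's larger than $p$.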


Probabilities: so important to understand, so thoroughly misunderstood.


- - - - -
Production notes

1. There's nothing new above, but I've had to make this argument dozens of times to people and forum dwellers (particularly difficult when they've just received a positive result for some scary condition), so I decided to write a post that I can point people to.

2. [warning: rant] As someone who has railed against the use of spline drawing and quarter-ellipses in other people's slides, I did the right thing and plotted those normal distributions from the actual normal distribution formula. That's why they don't look like the overly-rounded "normal" distributions in some other people's slides: because these people make their "normals" with free-hand spline drawing and their exponentials with quarter ellipses. That's extremely lazy in an age when any spreadsheet, RStats, Matlab, or Mathematica can easily plot the actual curve. The people I mean know who they are. [end rant]

Wednesday, January 11, 2017

Geeking out in the new year

😎 The little rocket that could, couldn't. JAXA's experimental souped-up sounding rocket didn't go up yesterday. But here's hoping that there'll be a successful launch next time. These nano-launchers have a lot of potential for small experimental payloads, and the cost is incredibly small for an orbital insertion.

😎 Via phys.org, we learn that Escherichia coli (yes, that E. coli) can be genetically reprogrammed to make industrial chemicals, in this case the nonessential amino acid L-serine. Who knew the local food trucks were potential competitors to BASF and DuPont?

😎 Veritasium goes over some of the issues involved in the detection of gravitational waves:



😎 Donald Norman and Mickey McManus have a cage match about the future of design in the age of AI (by which I think they mean ML, not AI in general).


😎 To comment on a blog post by the Supreme Dark Lord of the Evil Legion of Evil about the book "Uncertainty" by William Briggs, I reread my notes on it and found this shining example of academic snark:



😎 Godspeed, SpaceX, on your return to flight operations, currently delayed due to storm conditions and range conflict at Vandenberg. Very fast turnaround, compared for example with the shuttle program after the two losses. Of course, those were manned missions, so there was a much larger PR angle. As for the "anomaly," as predicted by almost everyone in the various discussion forums, it was a helium tank issue.

😎 Via the Singularity Hub, we learn that one-third of American workers would rather work for a robot than for a human boss. There's something here reminiscent of Marc Andreessen's remark that in the future people would be divided into those who give orders to computers and those who are given orders by computers. People are just pre-adapting to that incoming reality.

😎 CES came and went and a lot of opportunities for unnecessary expenditure presented themselves. Linus of the tech tips has some of the most interesting novelties in these videos:






📕 First book of 2017 was "Operation Paperclip: The secret intelligence program that brought Nazi scientists to America" by Annie Jacobsen. There are a few inconsistencies between this and other descriptions of the program (and some questionable anecdotes), but overall it's a compelling narrative of how German scientists were essential to a lot of US military programs (and NASA) after WWII.

📕 Second book of 2017 was "The Winter Fortress: The epic mission to sabotage Hitler's atomic bomb" by Neal Bascomb. It's the book version of the Netflix series (well, I saw it on Netflix) "The Heavy Water War." It does a reasonable job of explaining a number of operational difficulties with the attack on Norsk Hydro.

📕 Third book of 2017 was "32 Yolks: From my mother's table to working the line" by Eric Ripert. Autobiography of the executive chef at Le Bernardin, NY. Essential reading for a food snob cooking aficionado. Better than Anthony Bourdain's multiple autobiographies; and I like Bourdain.

📕 Fourth book of 2017 was "Anathem," by Neal Stephenson (reread, obviously). I read it -- twice in a row -- on the day it came out in 2008. (Yes, it's over 1000 pages.) I tend to read it at least once a year, to enjoy all the multi-level puzzles and self-referential jokes that Stephenson planted in it. Like Fraa Orolo, I too suffer from Attention Surplus Disorder.

📕 Fifth book of 2017 was "Scare Pollution: Why and how to fix the EPA" by Steven Milloy of junkscience. It makes a reasonable case for ending the EPA rather than fixing it, but that will never happen. Absent a revolution or a societal collapse, government ratchets up, never down.

(Yes, I read a lot of books. These, by the way, are non-work books. I also read work-related books.)

Sunday, January 8, 2017

Numerical thinking - A superpower everyone can get


There are significant advantages to being a numerical thinker. So, why isn't everyone one?

Some people can't be numerical thinkers (or won't be numerical thinkers), typically due to one of three causes:
  • Acalculia: the inability to do calculations; in its pure form a type of brain damage, but more commonly a consequence of a bad educational system.
  • Innumeracy: lack of mathematical and numerical knowledge, again generally the result of a bad educational system.
  • Numerophobia: a fear of numbers and of numerical (and mathematical) thinking, possibly an attitude brought on by exposure to the educational system.
On a side note, a large part of the problem is the educational system, particularly the way logic and math are covered in it. Just in case that wasn't clear.

Numerical thinkers get a different perspective on the world. It's like a superpower, one that can be developed with practice. (Logical thinkers have a related, but different, superpower.)

Take, for example, this list of large power generating plants, from Wikipedia:



Left to themselves, the numbers on the table are just descriptors, and there's very little that can be said about these plants, other than that there's a quick drop in generation capacity from the first few to the rest.

When numerical thinkers see those numbers, they see the numbers as an invitation to compute; as a way to go beyond the data, to get information out of that data. For example, my first thought was to look at the capacity factors of these power plants: how much power do they really generate as a percentage of their nominal (or "nameplate") power.

Sidenote: before proceeding, there's an interesting observation to make here about operational numerophobia (similar to this older post): in social interactions, when this type of problem comes up, educated people who can do calculations in their job (or at least could during their formal education) have trouble knowing where to start to convert a yearly production of 98.8 TWh into a power rating (in MW).
Since this is trivial (divide by the number of hours in one year, 8760, and convert TW to MW by multiplying by one million), the only explanation is yet another case of operational numerophobia. End of sidenote.
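Here's that conversion as a minimal Python sketch, using the 98.8 TWh production figure above; the 22,500 MW nameplate rating is my assumption, taken from what the Wikipedia list reports for the Three Gorges Dam:

```python
# Convert yearly energy production to average power, then compute the
# capacity factor against the nameplate rating.
HOURS_PER_YEAR = 8760

yearly_production_twh = 98.8   # TWh per year (Three Gorges, from the list)
nameplate_mw = 22_500          # MW (assumed from the Wikipedia list)

average_mw = yearly_production_twh / HOURS_PER_YEAR * 1_000_000  # TW -> MW
capacity_factor = average_mw / nameplate_mw

print(f"average power:   {average_mw:,.0f} MW")   # about 11,279 MW
print(f"capacity factor: {capacity_factor:.1%}")  # about 50.1%
```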

Capacity (or load) factor is like any other efficiency measure: how much of the potential is realized? Here are the results for the top 15 or so plants (depending on whether you count the off-line Japanese nuclear plant):



Once these additional numbers are computed, more interesting observations can be made; for example:

The nuclear average capacity factor is $87.7\%$, while the hydro average is just $47.2\%$. That might be partly from the use of pumped hydro as storage for surplus energy on the grid (it's the only grid-scale storage available at present; explained in the video below).

That is the power of being a numerical thinker: the ability to go beyond simple numbers and have a deeper understanding of reality. It's within most people's reach to become a numerical thinker, all that's necessary is the will to do so and a little practice.

Alas, many people prefer the easier route of being numerical-poseurs...

A lot of people I interact with pepper their discussions with numbers and even charts, but they aren't numerical thinkers. The numbers and the charts are mostly props, like the raw numbers on the Wikipedia table. It's only when those numbers are combined among themselves and with outside data (none in this example), information (the use of pumped hydro as grid-level storage), and knowledge (nameplate vs. effective capacity, capacity factors) that they realize their informative potential.

A numerical thinker can always spot a numerical-poseur. It's in what they don't do.

- - - -

Bonus content: Don Sadoway talking about electricity storage and liquid metal batteries:



Systems for the new year (and other years too)

In lieu of goals or resolutions for the new year, here are some systems I've found useful over the years:

Less.

Process to zero. For me it started with Inbox Zero. Then I realized the world was full of processing convexities and that processing to zero was generally an efficient and effective way of dealing with those.

Automate, reduce, reuse, delegate, simplify, repurpose; Automate... the real scarce resource for everyone is attention $\times$ time. Solve once, apply often.

Even paranoids have enemies. Consider others' incentive compatibility and individual rationality drivers, particularly as it concerns information that is single-sourced or traceable to a single source.

Robustness beats optimality. Because optimality is never quite optimized for the randomness that can occur.

Show, don't tell. It's not just a gym rule (though it's a good gym rule). Works both as a signaling device (from me to others) and a screening device (from others to me).

Systems, not goals. This compact form comes from Scott Adams's book How to Fail at Almost Everything and Still Win Big: Kind of the Story of My Life, but the idea has been floating around for a while.

Tuesday, December 27, 2016

Interstellar delivers truth bombs

Early on in the movie Interstellar there are two important lessons about what makes a society fail (or succeed), both delivered in the parent-teacher conference that Cooper attends.

Lesson one: don't underestimate the power of engineering (and science)



Lesson two: beware of those who would rewrite the truth



(Excerpts from the novelization of the movie by Greg Keyes. No, I'm not a nerd. Ok, I am.)

Andrew Rader points out some problems with the movie:



The main problem was also pointed out by Kip Thorne in The Science of Interstellar: that fighting the blight on Earth would make a lot more sense than going to a different planet.

Thorne also raises the problem of orbital mechanics in chapter 7 of the book:


and proposes a few speculative mechanisms to get the necessary changes in velocity from gravity assists. Note that there are two decelerations, one of $c/3$ and one of $c/4$, for a total speed change of $7c/12$, or $1.75\times 10^{8}$ m/s. Returning to the Endurance requires an increase in speed of $1.75\times 10^{8}$ m/s as well.

To see the size of the problem, let's say they take 500 seconds (8 minutes and 20 seconds) to do each maneuver (while the rest of the Universe ages significantly) and the Ranger's mass is 2 metric tons (for simplicity, we'll assume that the water taken in on the planet makes up for the loss of Dr. Doyle to stupidity, indiscipline, and lack of planning). If we assume constant thrust for simplicity, assume away all friction, and ignore the propellant mass loss (yay, infinite specific impulse!), the thrust needed for each maneuver is $7 \times 10^8$ newtons, or about the same as 1077 SpaceX Merlin engines (averaging their atmosphere and vacuum thrust to 650 kN). Since there's propellant mass loss, let's say we "only" need the equivalent of 900 Merlin engines. So, yes, only a gravity assist would do.
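For those who want to check the arithmetic, a minimal sketch under the same simplifying assumptions:

```python
# Back-of-envelope thrust per maneuver: constant thrust, no friction,
# no propellant mass loss (mass and burn time are the assumed values
# from the text).
delta_v = 1.75e8         # m/s, speed change per maneuver (7c/12)
burn_time = 500.0        # s, assumed maneuver duration
mass = 2000.0            # kg, assumed Ranger mass
merlin_thrust = 6.5e5    # N, Merlin thrust averaged over atmosphere/vacuum

thrust = mass * delta_v / burn_time   # F = m * (dv / dt)
print(f"thrust needed: {thrust:.2e} N")                     # 7.00e+08 N
print(f"Merlin equivalents: {thrust / merlin_thrust:.0f}")  # 1077
```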

Yes, it's an oversimplification, but I didn't feel like solving the Tsiolkovsky equation; hence the drop from 1077 to 900 engines. (That's still equivalent to 100 Falcon 9 rockets.) By the way, Thorne appears unconvinced of the feasibility of those gravity assists and hence of the feasibility of the whole expedition to Miller's planet. But at least they tried to be accurate with some science in the movie.

Oh, and speaking of nerds:


Monday, December 26, 2016

Much Ado About Extreme Intelligence

The Supreme Dark Lord of the Evil Legion of Evil wrote a post about the qualitative differences between Very Large Crude Carriers (VLCCs) and Ultra Large Crude Carriers (ULCCs), er, Very High IQ (VHIQ) persons and Ultra High IQ (UHIQ) persons.

Speaking from my position in the fat part of the distribution, this qualitative difference looks a bit like an unnecessary threshold. Also, the comments on the post appear to be from very smart people, who nevertheless seem unaware of the basics of performance measurement, so: way-to-go MBA training!

On to the main point:

Since intelligence is something one has no control over, being determined mostly by genetics and early childhood environment, it's strange to be proud of it. As a person of reasonable, compact stature, I noticed a similar behavior in tall people who were proud of being tall. It's not something you achieve; therefore it's nothing to be proud of. Thankful, maybe.

Furthermore, there's a measurement problem once one moves away from pattern-matching and response-speed tests (I used to think that psychometrics was put in the world to make phrenology look good; then I realized it happened by accident) and into real-world outcomes, which depend on a lot more than raw brain power. For example, observable outcomes depend on:
  • The opportunities to use that brain power, and the fields in which it's used.
  • The motivation to use it, and the goals to be achieved.
  • The skills and knowledge, including thinking skill (which is separate from intelligence in the same way that competitive weightlifting skill is separate from raw physical force) and a can-do attitude towards thinking.
  • Outside factors which distort the observables; they can be random events (what engineers call 'noise') or they can be environment biases.
So, from my position in the fat part of the distribution, but standing on the intellectual (and experimental) shoulders of much smarter others, here are VD's points (quoted below) with my comments:

"VHIQ inclines towards binary either/or thinking and taking sides. UHIQ inclines towards probabilistic thinking and balancing between contradictory possibilities."

Maybe. This reads a lot more like a difference in when the mental decision trees are pruned than the binary/continuous difference (not just probabilistic, but also threshold-thinking on continuous quantities).

Raw power may make a difference as to how many branches can be considered and how far each branch can be developed for any given problem, but then the boundary between VHIQ and UHIQ would be fuzzy, porous, and contingent on the problem; not a situation conducive to qualitative differences.

"VHIQ seeks understanding towards application or justification, UHIQ seeks understanding towards holistic understanding."

Different tools for different objectives, I would think.

Certain problems (most engineering problems, thankfully) can be decomposed into ever more cohesive and less coupled chunks, which are processed separately and integrated hierarchically. Other problems require holistic understanding of emergent properties (most large-scale systems formed by similar small units communicating among them, aka complex systems) and a completely different mindset. (Here's an example of the two mindsets applied to economics.)

Other than that, practice with a field of knowledge will eventually lead to a holistic understanding of that field, though raw brain power makes that easier.

"VHIQ refines the original thought of others, UHIQ synthesizes multiple original thoughts."

Conceptually different, but in essence the same response as the previous one: what to do depends on the purpose of the doing.

"VHIQ rationalizes logical conclusions, UHIQ accepts logical conclusions. This is ironic because VHIQ considers itself to be highly logical, UHIQ considers itself to be investigative."

Maybe it's my observation sample, but I don't have this experience (with other people); the people I notice rationalizing [wrong] things tend to be non-high-IQ (when in good faith) or high-IQ but doing it knowingly, in response to outside incentives (in bad faith, in other words).

"VHIQ recognizes the truths in the works of the great thinkers of the past and applies them. UHIQ recognizes the flaws in the thinking of the great thinkers of the past and explores them."

Maybe this is field-dependent, and in the fields I'm exposed to, it's a matter of knowledge and goal stratification.

"VHIQ usually spots logical flaws in an argument. UHIQ usually senses them."

This difference is a matter of practice, so differences in motivation (say, in curiosity) will be more important than differences in raw intellect. This is the case for almost all 'intuitive' or 'natural' skills, other than proprioception and motion control. And even then: gym n00bs spend several minutes placing their feet for each deadlift attempt, while skilled powerlifters' feet fall into place as soon as they step onto the lifting platform.

It's true that mental capacity will put a limit on how fast and how far this 'intuitiveness' in skill development can go, but motivation, training, and opportunity differences will spread out any differences that separate VHIQ from UHIQ.

"VHIQ enjoys pedantry. UHIQ hates it. Both are capable of utilizing it at will."

I would venture that it's a continuous scale, starting at about one standard deviation below the mean, and negatively correlated with IQ. As for utilizing pedantry at will, this seems to be more of an operational decision, as when people who 'love science' (but don't learn any) start talking down to engineers and get a 'how many joules in a kilowatt-hour?' question in return, for example.

For example, it's usually the people who can't recognize the period of a music piece by the music alone that make a big deal of correcting someone who says 'Bach' with 'Johann Sebastian Bach.' Musicologists, good and very good, only become pedantic to wave off the poseurs. So I'd venture that VHIQ might be more exposed to poseurs than UHIQ and therefore come across as more pedantic (which they are, in a small way, given their lower IQ and the aforementioned negative correlation) than the UHIQ.

"VHIQ is uncomfortable with chaos and seeks to impose order on it, even if none exists. UHIQ is comfortable with chaos and seeks to recognize patterns in it."

Different fields of endeavor have different requirements. Intelligence is, among other things, and with other things (motivation, skills, opportunities, environmental factors), the ability to adapt to and if needed change the environment and the field of endeavor.

"VHIQ is spergey and egocentric. UHIQ is holistic and solipsistic."

I've never observed this, so I can't comment; then again, I'm not a people person so it could well be true and hard to spot from the fat part of the curve.

"VHIQ will die on a conceptual hill. UHIQ surrenders at the first reasonable show of force."
"VHIQ attempts to rationalize its errors. UHIQ sees no point in hesitating to admit them."
"VHIQ seeks to prove the correctness of its case. UHIQ doesn't believe in the legitimacy of the jury."
"VHIQ is competitive. UHIQ doesn't keep score."

(These four are pairwise different but my response to all is the same.)

All four statements above appear to me as orthogonal to differences in raw brain power, as they can be seen at all levels of intelligence down to the middle of the curve; I'd place them in a two-dimensional space of (pragmatic-pigheaded $\times$ internally-externally motivated), but that's probably already been done better by some psychometrician.

This is reminiscent of the chapter on nerds in The Inmates Are Running the Asylum, which summarizes nerds as more interested in being right than in being successful. That's not because they're stupid, but rather a learned attitude (and some personality differences, which aren't intelligence differences).

"VHIQ believes in the unique power of SCIENCE. UHIQ sees science as a conceptual framework of limited utility."

People who understand science, as opposed to people who 'love science' (as long as they don't have to learn any) or great scientists [in their own minds, if no one else's], treat science as a collection of best models and a method for finding better models. This is a matter of knowledge rather than raw brain power, though more raw brain power makes acquiring and expressing this knowledge easier.

It's possible that some of the VHIQ are more likely than the UHIQ to be selected by certain institutional designs for compliance with a particular view of science, and that impedes their understanding of science as I just defined it; however, this would be a case of environment (incentives), not a consequence of intelligence itself. Kind of an "A-grade managers hire A-grade employees, B-grade managers hire C-grade employees" survival rule for mediocrities.

"VHIQ seeks to rank and order things. UHIQ seeks to recognize and articulate concepts."

Orders are a subset of networks, and to the best of our understanding people recognize and process concepts by fitting them into networks of other concepts. So this is again a matter of degree, though when networks are involved, all sorts of emergent behaviors (what could pass for creativity or super-fast reasoning) can happen that are hard to predict.

"VHIQ asks "how can this be used?" UHIQ asks "what does this mean?""

Engineers vs scientists of all intelligence levels (well, at least those above the mean) exhibit this difference in thinking goals. Since they do so at all intelligence levels, this has a significant orthogonality to the IQ dimension.

Inasmuch as there's a correlation, it may come from the mental power requirements of answering questions of meaning rather than of doing something; but both questions are present in children before the education system takes away their will to think, so the choice of problems to work on would be driven by environmental influence more than by differences in underlying abilities.

So, okay, it may be that from my position in the scale I can't see the existing differences, but it appears to me that performance measurement theory has a pretty good explanation for most of these points, and it doesn't require too much smarts to understand.

Good for me, otherwise I'd have to go for a long walk to think about it. 😉

 - - - - -

Addendum:

Vox Day's response concurs [with his reading of my penultimate paragraph to mean] that I just don't have what it takes to understand really intelligent people. 😂

(That's not what that paragraph means.)

Wednesday, December 21, 2016

Geeking out midweek

😀 Via Watts Up With That, I learned that some mentally ill people in California are talking about secession from the rest of the US (which a lot of people in the rest of the US would support wholeheartedly). WUWT mentions the electrical problem of secession, which I decided to take a little further by computing the shortfall in dispatchable capacity:


Wind and solar are non-dispatchable, so their production needs to have a back-up of slack dispatchable power; given that California imported about 1/3 of its electricity last year, it's reasonable to assume that the required slack capacity isn't available in-state.

The rest is basic arithmetic, which hasn't stopped people from asking how to go from a yearly consumption of 126.4 TWh to a capacity shortfall of 15 GW (14.4 and change, but power plants tend to come in 1 GW units; well, big, Rankine cycle-based power plants do).
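The arithmetic, for the record:

\[ \frac{126.4\ \text{TWh}}{8760\ \text{h}} \approx 14.4\ \text{GW}. \]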


😀 I like to wear sciency t-shirts to the gym and these are my two newest,

New gym t-shirts. Nerd, me?

but the one on the right is already outdated, as element 118, oganesson, has been added to the periodic table and is a noble gas (an element with a full complement of electrons in its outer shell):



📗 Alton Brown has a new book, EveryDayCook, with some science in it as usual. I find that cooking is a very important skill to have and a great way to introduce kids to science. So, naturally, almost all families with children I know either don't cook at all or keep the kids out of the kitchen. Alas.

Just got the book, so can't say much about it, but if Alton Brown's previous books (and television show) are anything to go by, go buy it. Check out AB's podcast too.


📹 Destin "Smarter Every Day" Sandlin likes opals (a type of semi-precious stone) and takes his viewers to Australia to see how they are mined:



📺 The remake of MacGyver isn't as good as the original (no surprise there, I guess): while the science is as bad as it ever was (it was never good), there's a lot more stupidity (things happen that a 6-year-old would know to avoid), violence (Richard Dean Anderson's MacGyver was averse to solving problems with it), and preachiness (there was a bit in the old one, too).

I still like the general idea of MacGyver, that of solving problems with science (STEM, really) and creative thinking. That's why I give MacGyver a pass on most of the problems, because the attitude is right: thinking and STEM are what makes humans advance as a species.

Also, the self-description of the new MacGyver, when he met the hacker (I guess they needed one in 2016), was spot-on: "you hack computers, I hack everything else."


🚶 I like to take long walks to think, so I took a 25 km one in San Francisco last weekend:

Walk in San Francisco


😠 Whoever does the social media for Los Alamos National Laboratory posted this shameful tweet (the image is the problem; see if you can figure it out before reading further):


And my comment (which, of course, didn't get a response, since whoever does their social media probably doesn't understand science beyond the level of a kindergartener):
And that fireball, so far outside the atmosphere, what causes it, @LosAlamosNatLab? Seriously, I literally can't even! #facepalm 😱😡
The fireball heat is caused by compression of the atmosphere in front of the meteor; air friction then creates the tail by slowing some of the burning material. It's that $PV=nRT$ thingamaboyle: when the pressure $P$ rises too fast for the volume $V$ to change (particularly in front of objects traveling much faster than the speed of sound), the temperature $T$ has to get very high to balance the equation, since the number of moles $n$ doesn't change and $R$ is a constant. Science. It works.
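To put an illustrative number on it, here's a minimal sketch of ideal adiabatic compression heating (the compression ratios are assumptions for illustration only; real meteor shock layers involve dissociation and radiation and are far messier):

```python
# Ideal adiabatic compression of air: T2 = T1 * (P2/P1)^((gamma-1)/gamma).
T1 = 288.0     # K, ambient air temperature
GAMMA = 1.4    # heat capacity ratio for air

for pressure_ratio in (10, 100, 1000):
    T2 = T1 * pressure_ratio ** ((GAMMA - 1) / GAMMA)
    print(f"P2/P1 = {pressure_ratio:>4}:  T2 = {T2:,.0f} K")
```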

It's not inconsistent behavior to upbraid the LANL and The Science Channel for their bad science in social media while giving MacGyver a pass. MacGyver is a fiction television show; the LANL and The Science Channel are (or should be) serious institutions with an educational mission.


🎱 Puzzle of the week (from a Twitter rathole, so I have no way to credit it): find "DOG"


(Yes, it's just a matter of exhaustive search, but this has been fiendishly well-designed, to maximize the need for searching. Such design effort just to thwart intelligent searches is worthy of mention.)