Monday, March 27, 2017

Creationists attack evolution, hilarity ensues.

On a site that I occasionally peruse there was a post today about evolution. Since the owner of the site is a creationist and so are most of the commenters on that site, I decided to pick some short comments for analysis.

Usually creationists give themselves away by using "evolutionist" instead of "person who understands basic biology"; they also focus all their attention on politics and personalities instead of ideas and evidence, for example using "Darwinism" and "Darwinian evolution" instead of simply saying evolution.

Comments from that site are in italic blue; my analysis follows each.


So now the neo-Darwinists are just like Darwin himself, once again they're holding to their theory out of dogmatic bloodymindedness in spite of the overwhelming evidence against it.

As they say on Wikipedia, "citation needed." In fact, evolution by random mutation and non-random selection has been validated as a mechanism for speciation, for example with the flu virus; the details of the process become better understood as we learn more from molecular biology, and, as with everything in science, the evidence is what matters.

The alleged "overwhelming evidence against it" always ends up being something that has been thoroughly debunked ("how could an eye evolve?"), arguments from ignorance of the basics, cherry-picked or outright made-up data, personal attacks, or a combination of these.


No one understands the first thing about evolution because all of the predictions it makes come out false.

Ditto with "citation needed," and the response to previous quote.


[Responding to] "Surely, if it was intelligently designed by a supernatural entity or an alien, they would not have made such a very sloppy work."

Only someone who has never been in an engineering lab could possibly make such an ignorant statement.

Engineering labs make technology demonstrators and prototypes. Finished products are held to a different standard. (That was my beef with Thunderf00t's criticisms of the Hyperloop.)

If there had been a designer, then that designer would fail every design and product-creation course in existence, what with putting a waste-disposal outlet and an amusement area in such close proximity. No need to ask about the infinite regress of designers, which is what design arguments for life always end up tangled in.


Simple observation of the complexity of plant and animal life reveals the theory of evolution as one of the most retarded pieces of BS ever believed by human beings.

The ignorance in this comment might have been tolerated up to the 1960s, when explanations of the actual workings of replicator dynamics and evolutionarily stable strategies (not by those names, which are modern) were too technical for most people to follow. But since the 1970s there has been a slew of simple, easy-to-understand explanations of how the process works, so there's no longer any excuse for even a lightly-educated person to say such nonsense.
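For the curious, the core of the process fits in a few lines. Here's a minimal sketch (mine, not taken from any of those explanations) of discrete-time replicator dynamics for two strategies; the payoff matrix and starting frequencies are made up for illustration:

```python
# Minimal sketch of discrete-time replicator dynamics for two strategies.
# The payoff matrix and starting frequencies are made up for illustration.
payoff = [[3.0, 1.0],   # payoff[i][j]: payoff to strategy i when meeting strategy j
          [2.0, 2.0]]

x = [0.9, 0.1]          # initial frequencies of the two strategies (sum to 1)

for generation in range(50):
    # fitness of each strategy given the current population mix
    fitness = [sum(payoff[i][j] * x[j] for j in range(2)) for i in range(2)]
    mean_fitness = sum(x[i] * fitness[i] for i in range(2))
    # replicator update: above-average strategies grow, below-average ones shrink
    x = [x[i] * fitness[i] / mean_fitness for i in range(2)]

print(x)   # frequencies after 50 generations
```

Run it and strategy 1, the evolutionarily stable one for this made-up payoff matrix, takes over the population. No dogma required, just arithmetic repeated many times.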


Darwinism violates the central limit theorem, one of the most fundamental laws of probability in the universe.

Now here we have a great example of a species called the Pomposus Ignoramus Maximus: someone who picks a concept at random, places that concept in an already-faulty argument, and states the resulting mess as fact.

The central limit theorem (more precisely, each central limit theorem, since there are many, each a generalization of an earlier one) says that, under certain regularity conditions, the distribution of the sample mean is approximately Normal, with mean equal to the population mean and variance equal to the population variance divided by the sample size, and the approximation improves as the sample size grows.
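In symbols, for the simplest case of independent, identically distributed draws $X_1, \ldots, X_n$ with mean $\mu$ and finite variance $\sigma^2$:

$\qquad \bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \quad \text{and, for large } n, \quad \bar{X}_n \;\approx\; \mathcal{N}\!\left(\mu, \, \sigma^2 / n \right)$

which is exactly the "mean equal to the population mean, variance equal to the population variance divided by the sample size" statement above.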

Note that this is a theorem about the distribution of sample averages (about generalizing from samples to populations) and has nothing to do with biology or evolution. That's the "picks a concept at random" part. Anyone familiar with a central limit theorem would understand that it has nothing to do with evolution, but obviously the commenter doesn't know that.

The "already-faulty argument part" usually goes as follows: because evolution creates what appears to be order, it violates the second law of thermodynamics. This argument is faulty on a number of different levels, the obvious one being that the second law of thermodynamics doesn't apply to open systems that receive energy from outside, and evolution, which depends on reproduction, does receive energy from the environment.

Of course, the first word, "Darwinism," gives the commenter away as an ignoramus, as no one who understands biology refers to evolution this way. "Darwinism" is used to make it appear as if evolution is just a political movement or a personality cult.



There was also a lot of nonsense about the Universe being a computer simulation. I'll let Lubos Motl deal with that.

Sunday, March 26, 2017

Late March geekery and revealed preference


I drew a random sample of 100 pages from the browser history on my personal laptop, then classified each page into a number of categories. Results were rounded to the nearest 5% because such a small sample will necessarily contain a lot of noise; the 5% is essentially a low-pass filter cutting out the high-frequency noise of occasional browsing.

This is a very rough measure of preference, since it doesn't account for time spent on each page, for example, or other measures of interest. But it's a good starting point, and it matches my interests outside of online content (books, videos, participation in forums). I classified book-related pages into the most appropriate category, so the two sci-fi-related pages in the sample went into STEM, because they were hard sci-fi, not space opera.
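The mechanical part of the exercise is trivial; here's a sketch, assuming the history has been exported to a text file with one URL per line (the file name and the stand-in classifier are hypothetical; the real classification was done by hand):

```python
# Sketch of the sampling-and-tabulating step, assuming the browser history has
# been exported to a text file with one URL per line (file name is hypothetical).
import random
from collections import Counter

with open("history_export.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

sample = random.sample(urls, 100)   # simple random sample of 100 pages

# classify() stands in for the manual classification described above
def classify(url: str) -> str:
    return "STEM"   # placeholder; the real categories were assigned by hand

counts = Counter(classify(url) for url in sample)

# shares rounded to the nearest 5%, as in the post
for category, n in counts.most_common():
    print(category, f"{round(n / 100 / 0.05) * 5}%")
```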

STEM and business (including economics) are my work areas, but in that chart I map only the non-work-related browsing. I start the day with a massive RSS-feed reader, about one-half business and one-half science and engineering; of that, about two-thirds has no direct use for work. The other categories in the chart are personal interests which typically map to activities in the real, physical world: exercise, food, culture, and the Bay Area and California (events, traffic, weather, local news; I live in the world, after all).

One surprising omission is photography-related content, but that's the problem with a sample of only 100 observations. I would have sampled 10,000, but then I'd need to classify those 10,000 by hand, so I didn't. I did have some hits from the StyleForum, but most were about food, not clothing or shoes, so they were classified accordingly. The London Lounge, alas, didn't appear in the sample.

Other than business- and economics-related news, I don't follow current events, celebrities, gossip, or sports, so it's not an anomaly that they're not represented in the chart. (I get enough relevant news and cultural developments as part of the work-related information stream, backed up by charts.) Obviously there's no adult or embarrassing content in my browsing --- because I don't consume that, of course, not because I understand private-mode browsing.

With the above distribution in mind, the rest of the post is an illustration of the content making up that revealed preference (and my take on some of it). I avoid commenting on business, management, and economics for work-related reasons, so I'll leave those out.


😎 Thunderf00t shows how Fukushima worrywarts are wrong:


Here's Thunderf00t being uncharitable again! Making these poor people stretch their 2 IQ points together to understand that 2 Becquerel per cubic meter of seawater from Fukushima is not so enormously dangerous when compared to the 10,000 Becquerel of our own body radiation (for a 100 kg person). Or to the extremely dangerous bananas.
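The back-of-the-envelope version, using only the numbers above:

$\qquad \dfrac{10\,000 \ \text{Bq (one 100 kg human)}}{2 \ \text{Bq/m}^3 \ \text{(Fukushima contribution to seawater)}} = 5\,000 \ \text{m}^3$

that is, a single human body contains as much radioactivity as the Fukushima contribution to roughly five thousand cubic meters of that seawater.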

Pretty interesting explanation of the bio-accumulation of strontium (because of its chemical similarity to calcium, which is in our bones).


😎 Internet fitness experts, what a joke:


I could be naughty and notice that both of these pull-up experts have narrow shoulders and thin upper arms, which is interesting since the main purpose of pull-ups is to develop broad shoulders and thick upper arms. But I won't be naughty.


😎 Your tax dollars at work (at Fermilab): the weak nuclear force.




📕 I recently read Dan Ariely's "Payoff," an interesting book about motivation. Highly recommended for everyone: it's not about how to motivate employees (though it can serve that purpose), but rather about how to understand one's own motivations. By letting me look at my own behavior from the outside, it clarified some issues of work-related happiness I'd been struggling with.


📕 Now reading Joel Mokyr's "A Culture of Growth," an analysis of what really drove the success of industrial civilization. Not one to bury the lede, Mokyr states early on that, contrary to what economists and historians (and economic historians) tend to think, it was not intellectuals driving the change but rather the pragmatists (what we would call engineers, entrepreneurs, and venture investors).


πŸ‘ Some interesting links:

  • What Costco knows about customers. This is the kind of thing I collect in case I start doing executive education (or regular MBA teaching) again. Interesting, and if I wanted to turn it into a discussion piece or an exercise, it would take less than one hour to do so. I browse these for fun (yes, I have idiosyncratic tastes), but there's a bit of a pragmatic background to it.


🎵 Closing music, via Voices of Music:


Saturday, March 25, 2017

Reality vs nonsensical products (part 688 of Aleph-null)

Via Thunderf00t, I found this Waterseer wannabe, which is about as feasible as the original Waterseer; that is, not at all.



Obviously it's very important that the product is 3D-printed, rather than CNC-machined or heat-molded. 3D printers, like the Internet of Things, are magical incantations that can get around the laws of Physics. Or so one would think, given how credulous people become at the sound of these incantations.

Alas, as is usual with engineering, ugly numbers murder beautiful illusions:



Since the battery voltage is 12V, a 12kW Peltier-effect cooler will require a 1000A current, which is likely to make the Li-ion battery a bit... well, just watch what happens:



Engineering rule: when an electronic device starts outgassing, that's generally not a good thing.
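For the record, the arithmetic behind that 1000A figure is just power divided by voltage:

$\qquad I = \dfrac{P}{V} = \dfrac{12\,000 \ \text{W}}{12 \ \text{V}} = 1\,000 \ \text{A}$

a current far beyond what any small consumer battery pack can safely deliver.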

Wednesday, March 22, 2017

The power of "equations"

If a picture is worth a thousand words, an equation is worth a thousand pages of text.

This was inspired by a livestream about free trade based on criticism of "original texts." (Basically Ricardo and Schumpeter.) The quotes aren't a diss on the texts themselves, but rather a way to emphasize that this is a type of scholarly pursuit in itself, though not the type used in modern economics, STEM, or pragmatic professional fields like business analytics or medicine.

What's the problem with the argumentation from these original texts? Simply put, the texts are long and convoluted, with many unnecessary diversions and some logical problems in the presentation. The valid arguments in these texts can be condensed into about one page of stated assumptions and two results about specialization.
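As an illustration of how little space that takes, here's a minimal sketch of the standard two-country, two-good Ricardian setup (my notation, not the original texts'): suppose country $A$ needs $a_1$ hours of labor per unit of good 1 and $a_2$ hours per unit of good 2, while country $B$ needs $b_1$ and $b_2$ hours respectively. If

$\qquad \dfrac{a_1}{a_2} < \dfrac{b_1}{b_2},$

then $A$ has the comparative advantage in good 1 and $B$ in good 2, and total output of both goods rises when each country specializes accordingly and trades, even if one country needs fewer hours than the other for both goods (absolute advantage is irrelevant to the result).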

It's not just that math is an efficient way to communicate; math has precise meaning and an inference process. It brings discipline and clarity to the texts, and the inference process isn't open to debate. (Checks and corrections, yes; debate, no.)

Unfortunately, without math, the speaker's argument was essentially a sequence of variations on "Schumpeter points out that this assumption of Ricardo doesn't hold true," without the extra step of determining whether those assumptions are important to the final result or not. (We'll come back to this problem.)

Word-thinking about quantitative fields is generally to be avoided.

That was the inspiration, and this post isn't about free trade or the particular mode of thought of that speaker, but rather about the power of mathematical modeling, which I'm calling "equations" in the title.

Here's a reasonably robust statement: when the price of a commodity goes up, people buy less of that commodity. (Sometimes this is put as "demand goes down," which is incorrect: it's the quantity demanded that goes down. Changes in demand are movements of an entire function.)

So, quantity is a decreasing function of price (and first-time readers of economics textbooks get confused because the charts have quantity on the $x$ axis and price on the $y$ axis). This has been known for a long time; so what's the problem with that formulation, simplified to "when price rises, quantity falls"?

The problem, of course, is that there are many different types of decreasing function. Here are a few, for example (click for bigger):


Functions 1 to 4 represent four common behaviors of decreasing functions: the linear function has similar changes leading to similar effects; the convex function has a decreasing effect for similar changes (like most natural decay processes); the concave function has an increasing effect for similar changes (like the accelerating effect of a bank run on bank reserves); and the s-shaped function shows up in many diffusion processes (and is a commonly used price-response function in marketing).

Functions 5 to 8 are variations on the convex function, showing increasing curvature. (Function 2 would fit between 5 and 6.) They're here to make the point that even knowing the general shape isn't enough: one must know the parameters of that shape.

That figure contains 2,000 data points, since each of the eight functions is plotted with 250 points. (When talking about math, some people use drawing tools to make their "functions"; I prefer to plot them from the mathematical formula, a habit of mine: not lying to the audience.) Describing them in text would take a long time (unless the text is a description of the mathematical formulation), while they can be written simply as formulas; for example, the convex functions are all exponentials:

$\qquad y = 100 \, \exp(-\kappa \, x) $

with different values of $\kappa$. They are the type of exponential decay found in many processes: for example, when $x$ is time, $y(x) = \alpha \, y(x-1)$ with $y(0)>0$ models a decay process with discrete-time rate $0 < \alpha < 1$. In case it's not obvious, $\kappa = -\log_{e}(\alpha)$.*
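A minimal plotting sketch for anyone who wants to reproduce the convex family from the formula (the particular $\kappa$ values here are illustrative, not necessarily the ones in the figure):

```python
# Plot the family of convex price-response curves y = 100 * exp(-k * x)
# for a few illustrative decay rates k; 250 points per curve, as in the figure.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 250)
for k in (0.1, 0.25, 0.5, 1.0):
    plt.plot(x, 100 * np.exp(-k * x), label=f"$\\kappa$ = {k}")

plt.xlabel("price")
plt.ylabel("quantity")
plt.legend()
plt.show()
```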

So, what does this have to do with reasoning?

Here we go back to the problem with arguments like "Schumpeter showed that Ricardo's assumption X was wrong." When a model is written out in equations, we have a sequence of steps leading to the result, each step tagged with either a known result, a rule of mathematical inference (say, "$a \times b = a \times c$ simplifies to $b = c$ unless $a = 0$"), or an assumption of the model. This allows a reader to quickly see where a failed assumption will lead to problems and to determine whether the assumption can be replaced with something true (or, as is the case with many of the assumptions made by Ricardo, is unnecessary for the result).

The main power, however, is that mathematical notation forces the speaker to be precise, and inferences from mathematical models can be checked independently of subject matter expertise. A mathematician may not understand any of the economics involved, but will merrily check that a decay process of the kind $y(n)= \alpha \, y(n-1)$ can be described by an equation $y(n) = y(0) \, \exp(-\kappa \, n)$ and determine the relationship between $\kappa$ and $\alpha$.
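That check takes only a couple of lines:

$\qquad y(n) = \alpha \, y(n-1) = \alpha^{n} \, y(0) = y(0) \, \exp\left( n \log_{e} \alpha \right) = y(0) \, \exp(-\kappa \, n)$

$\qquad$ with $\kappa = -\log_{e}(\alpha)$, which is positive precisely when $0 < \alpha < 1$.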

From those precise models, one can make inferences that take into account details hidden by language. Consider the "price rises, quantity falls" text and compare it with the different decreasing functions in the figure above. The shape of the function, its slope and its curvature have different implications for how price changes affect a market, differences that are lost in the "price rises, quantity falls" formulation.

It bears repeating the first advantage mentioned above: hundreds of pages can be condensed into one page of equations. Once one's mind is used to processing equations, this is a very efficient way to learn new things. Stories about Port wineries in Portugal and textile factories in England may be entertaining, but they aren't necessary to understand specialization (which is what comparative advantage really is).

Math. It's a superpower almost anyone can acquire. Sadly, most opt not to.


- - - - - Addendum - - - - -

No self-respecting economist would use the Ricardo comparative advantage argument for international trade now, particularly because it's so simple it can be understood by anyone. Most likely they'd use some variation of the magic factory example:

"Let's say a new technology that converts corn into cars is discovered and a factory is built in Iowa that can take ~ $\$20,000$ of corn and convert it into a car that costs $\$30,000$ to make in Michigan. Can we agree that this technology makes the US richer?

Now, move the factory to Long Beach, CA. Maybe there's a little more cost in moving the corn there, but we're still making the US richer, right?

Now, someone goes into the magic factory and discovers that it's really a depot: it stores grain until it's shipped to China on bulk carriers, and it receives cars made in China from RoRos during the night. The effect is the same as the magic factory's, so it makes the US richer, right?"
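The arithmetic behind "richer," using the example's own numbers:

$\qquad \$30,000 - \$20,000 = \$10,000$ of resources freed up per car, regardless of whether the conversion happens inside a magic factory or through trade.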

There are many cons to this example, but it does make one issue clear: trade is in many respects just like a different technology.


- - - - - Footnote - - - - -

* It's obvious to me, because after decades of playing around with mathematical models, I grok most of these simple things. There are some people who mistake this well-developed and highly available knowledge (from practice) for ultra-high intelligence (rather than regular very high intelligence), a mistake I elaborate upon in this post. 😎

Thursday, March 9, 2017

Collected early March geekery

😎 Two talks by Edward Tufte (I read Tufte's notebooks regularly):




😎 The end of modern medicine (rise of the superbugs):


Be afraid. Be very afraid. Ok, moderately cautious.


😎 Don't let Dave Jones borrow your pre-release not-yet-for-sale oscilloscope:


(The electrolytic capacitors in the power supply led to a big argument in the EEVBlog forum.)


😎 A digital clock coded using the Conway "game of life." From twitter user Abraham (click that link for an animated version, or get the code and run it on any of the many simulators):


Behold this example of "my code-fu is bigger than your code-fu." Nerds, nerds everywhere (on Stack Exchange, that is):
http://codegolf.stackexchange.com/questions/88783/build-a-digital-clock-in-conways-game-of-life/111932#111932
It's a French coder... I expect the clock to end the cycle with an "Ich gebe auf!" ("I give up!") 😉
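For anyone who has never implemented the rule itself (the clock is a monumental construction on top of it), here's a minimal sketch of one Game of Life step; it has nothing to do with the clock's actual code:

```python
# Minimal sketch of one Game of Life step on a set of live cells.
from collections import Counter

def step(live):
    """live: set of (x, y) tuples of live cells; returns the next generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell is alive next step if it has 3 neighbors, or 2 and is already alive
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A blinker oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)   # True
```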


😎 Interesting paper on molten salt reactors; it works as a good non-technical introduction to them. MSRs are the future of nuclear (at least for now).



😎 Nice to see physicists addressing the really important problems (not really):



😋 Two-ingredient (plus salt and spices) tomato soup: 6 oz tomato paste, 24 oz milk.

Two ingredient tomato soup.

And the final product (3 hours in the slow cooker), dressed with fresh basil and parmesan:

Two ingredient tomato soup dressed up with fresh basil and grated Parmesan.


🀄️ Easy puzzle from a repository of Martin Gardner puzzles: Which of the two dots is the center of the circle? Can you prove it?
(More of an optical illusion than a puzzle, since there's enough information in the picture to make a determination without guessing.)

I have a complete collection of Martin Gardner puzzle books and partly trace my interest in computation and information manipulation systems to the "Mathematical Games" columns in Scientific American. (And to their successor, "Metamagical Themas," which is an anagram of "Mathematical Games.")

Tuesday, March 7, 2017

Deep understanding and problem solving

There's value in deep understanding.

Nope, I don't mean the difference between word thinkers and quantitative thinkers. Been there, done that. Nor the difference between different levels of expertise on technical matters; again, been there, done that.

No, we're talking about the crème de la crème: experts who can adapt to changing situations or comprehend complexity across different fields by being deep understanders.

Because any opportunity to mock those who purport to educate the masses by passing along material they don't understand should be seized, let us talk about Igon Values... ahem, eigenvalues and eigenvectors.

Taught in AP math classes or freshman linear algebra, the eigenvectors $\mathbf{x}_{i}$ and associated eigenvalues $\lambda_{i}$ of a square matrix $\mathbf{A}$ are defined as the solutions to $\mathbf{A} \, \mathbf{x}_{i} = \lambda_{i} \, \mathbf{x}_{i}$.

Undergrads learn that these represent something about the structure of the matrix, that the matrix can be diagonalized using them, and how they appear in other places (principal components analysis and network centrality, for example).
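A quick numerical illustration of the definition and of the diagonalization, using an arbitrary example matrix (nothing special about the numbers):

```python
# Numerical check of A x_i = lambda_i x_i and of the diagonalization
# A = X diag(lambda) X^{-1}, using an arbitrary example matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are the x_i

for i, lam in enumerate(eigenvalues):
    x_i = eigenvectors[:, i]
    print(np.allclose(A @ x_i, lam * x_i))     # True for each eigenpair

X, Lam = eigenvectors, np.diag(eigenvalues)
print(np.allclose(A, X @ Lam @ np.linalg.inv(X)))   # True: A is diagonalized by X
```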

But those who use these and other math concepts on a day-to-day basis, who get to really understand them, develop a deeper sense of what the concepts mean. There's something important about how these objects relate to each other.

After a while, one realizes that there are structures and meta-structures that repeat across different problems, even across different fields. Someone once remarked that, after a lot of experience in one engineering field (say, electrical), adapting to another (say, mechanical) reveals that while the nouns change, the verbs are very similar.

This is what deep understanding affords: a quasi-intuitive grokking of a field, based on the regularities of knowledge across different fields.

For example: while many who have taken a linear algebra course in college may vaguely recall what an eigenvalue is, those who understand the meaning of eigenvalues and eigenvectors for matrices will have a much easier time understanding the eigenfunctions of linear operators:


The structure [something that operates] [something operated upon] = [constant] [something operated upon] is common, and what it means is that the [something operated upon] is in some sense invariant with the [something that operates], other than the proportionality constant. That suggests that there's a hidden meaning or structure to the [something that operates] that can be elicited by studying the [something operated upon].
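A concrete instance, using the differentiation operator:

$\qquad \dfrac{d}{dx} \, e^{\kappa x} = \kappa \, e^{\kappa x}$

$\qquad$ so $e^{\kappa x}$ is an eigenfunction of $d/dx$ with eigenvalue $\kappa$: differentiation changes its size but not its "shape," exactly the [something operated upon] being invariant under the [something that operates].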

And this structure, mathematical as it might be, has a lot of applications outside of mathematics (and not just as a mathematical tool for formalizing technical problems). It's a basic principle of understanding: what is invariant to a transformation tells us something deep about that transformation. (Again, invariant in "direction," so to speak, possibly up to a change of size or even sign.)

And this is itself a meta-principle: the study of what changes and what's invariant in a particular set of problems gives some indication of the latent structure of that set of problems. That latent structure may be a good place to start when trying to solve problems from this set.

Yep, really dumbing down this blog, pandering to the public...