Showing posts with label Engineering. Show all posts

Friday, September 30, 2016

Ah, "science" in the media, always good for a laugh

There's nothing wrong with the idea of science in the media, per se: I want more and better science in the media. But there's a lot wrong when people who clearly don't know any science write (or illustrate) pieces about science or related matters like engineering, the environment, and space exploration.


The Wall St. Journal, where people who don't understand basic Physics units write tweets about trading systems designed by hordes of Math and Physics PhDs:



With friends like Engadget "green" writers, the environment won't improve.



The Motley Fool, being its foolish self.



That's it for September. A lot of incipient posts in the hamper, but paid work got in the way of blogging. Such is life.

Saturday, September 24, 2016

Carbon capture, perpetual motion machines, and IGORs

There's one quick rule to evaluate energy-related technologies: if you can turn them into perpetual motion machines, they aren't real.

In conversation with an IGOR (Ignorant Grandstanding Oblivious Rabble-rouser), I pointed out that the idea of using atmospheric carbon dioxide to make fuel isn't entirely new (Nature did it first), but the technologies being proposed aren't realistic, for the reason above.

IGOR countered that these processes could, in his view, be the solution to our energy crisis (do we have one?), because the fuel produced by carbon-capture will provide the energy to keep the process going.

Ahem. Let's think about this, with a diagram:



What reasonable people say is that the energy extracted from the fuel will partially cover the energy needs of the capture and conversion process (that is $x > y$ but not by much); what IGORs say is that $y>x$. But if that were so, we could feed the exhaust from the energy production system into the input for the capture system, and get a perpetual motion machine that generates free energy.
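The loop is easy to check with toy numbers. Below, $x$ is the energy the capture-and-conversion step consumes per cycle and $y$ is the energy the resulting fuel returns; the values are made up, purely for illustration:

```python
def run_loop(x, y, cycles):
    """Net energy in storage after feeding the exhaust back into the
    capture system for `cycles` rounds; each round nets y - x."""
    stored = 0.0
    for _ in range(cycles):
        stored += y - x
    return stored

# Reasonable case: capture costs more than the fuel returns (x > y),
# so the loop always needs outside energy.
print(run_loop(x=1.0, y=0.7, cycles=10))
# IGOR's case: y > x means the stored energy grows without bound,
# i.e. a perpetual motion machine.
print(run_loop(x=1.0, y=1.2, cycles=10))
```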

Some of the more reasonable proponents of this carbon-capture and conversion idea suggest that the energy coming in can itself be green energy, like solar, and therefore there's a net "carbon-based" energy coming out of the system. Two points:
First, that's fine, but then why use part of that solar energy to create carbon-based fuels, instead of using the solar energy to replace the carbon-based fuels? Note that any $\mathrm{CO}_2$ that gets turned into fuel will yield another $\mathrm{CO}_2$ after the energy generation (conservation of the carbon), so no advantage there.
Second, the designs proposed look extremely wasteful of energy: capturing $\mathrm{CO}_2$ after it has diffused into the atmosphere is bound to require a lot of energy to flow non-$\mathrm{CO}_2$ gases in the atmosphere through the carbon-capture process. Better to stop $\mathrm{CO}_2$ at the source, if that's what you're after.
Of course, as I mentioned, Nature does provide us with a technology to use solar power to capture $\mathrm{CO}_2$ and turn it into fuel:



It also has the advantage of being pretty, giving shade, operating in silence, and bearing fruit. Trees. It's trees. Let's plant more trees. I like trees.

One particularly oblivious IGOR insinuated I was anti-environment because I prefer trees to useless noisy subsidy-harvesting machines.

With friends like that, the environment is doomed.

Saturday, September 17, 2016

The problem with wireless earbuds for audiophiles

(The lack of a headphone jack on the iPhone 7 upset many people, but not me. I generally don't use my iPhone as a source of music, and if I were to do so in the future, I'd use an external DAC/Amp.)

To see why wireless earbuds are a problem for audiophiles, we need to begin at the opposite end of the process, when an analog signal (music) becomes a digital representation.

There are two steps in the process: first, the continuous analog signal is sliced in time, "sampled," so that it's now represented by a sequence of analog levels; second, those analog levels are compared with a finite scale, the digital scale, and the best approximation is used to represent the level, thusly:


There are two sources of information loss (or "noise") in this process:
1. By slicing a continuous curve into discrete time steps, the sampling creates an imperfect representation of the curve; that's called sampling noise. The longer the slices, that is, the less often the analog input is sampled, the higher this sampling noise. 
2. By forcing the analog samples, which are on a continuous scale, to match the limited levels of a digital scale, the process creates a second type of noise, quantization noise. In the above example, the difference between the digital output for periods (1) and (2) is higher than the difference between the analog samples for those periods. Also, periods (2) and (3) have the same digital output, despite the different analog sample levels.
To reduce sampling noise we can sample more often, that is have thinner slices of time so that there are more analog samples to represent the same curve. To lower quantization noise we can have more digital levels; typically the number of levels is a power of 2, since we use binary coding.
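The two-step process is easy to sketch in a few lines of Python (a minimal illustration with made-up parameters, not a model of any real ADC):

```python
import math

def sample_and_quantize(signal, duration, rate, bits):
    """Sample a continuous signal at `rate` Hz, then snap each sample
    to the nearest level of a `bits`-bit scale spanning [-1, 1]."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    out = []
    for i in range(int(duration * rate)):
        analog = signal(i / rate)
        digital = round(analog / step) * step       # quantization
        out.append(max(-1.0, min(1.0, digital)))    # clamp to the scale
    return out

# A 1 kHz sine, digitized coarsely (8 kHz, 3 bits) and CD-style (44.1 kHz, 16 bits).
sine = lambda t: math.sin(2 * math.pi * 1000 * t)
coarse = sample_and_quantize(sine, 0.001, 8000, 3)
fine = sample_and_quantize(sine, 0.001, 44100, 16)

# Quantization error at the sample instants shrinks as the bit depth grows:
err = lambda xs, rate: max(abs(sine(i / rate) - x) for i, x in enumerate(xs))
print(err(coarse, 8000), err(fine, 44100))
```

The `err` figure only measures quantization error at the sample instants; sampling noise shows up when you try to reconstruct the curve between them.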

For example, CD encoding used 44,100 samples per second per channel at 16 bits of resolution (allowing $2^{16}$ or 65,536 different levels); this was deemed enough for music since it allowed for an upper frequency limit of over 20 kHz (generally considered the limit of human hearing) and a dynamic range of 96dB (each bit adds 6dB; the choice of 96dB was widely panned by audiophiles as too small).*
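Those CD numbers can be checked with a couple of lines:

```python
import math

rate_hz, bits = 44100, 16

nyquist_khz = rate_hz / 2 / 1000                 # theoretical frequency ceiling
dynamic_range_db = 20 * math.log10(2 ** bits)    # loudest vs. quietest level

print(nyquist_khz)        # 22.05; "over 20 kHz" once a real filter is in front
print(dynamic_range_db)   # about 96.3 dB, i.e. roughly 6 dB per bit
```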

As with everything in engineering (and in life, really) this was a matter of trade-offs. Later we've gone beyond these limits with other standards like SACD, for example. But the problem of trade-off remains, typically that of space or bandwidth against quality of reproduction.

Before compression, the total number of bits necessary to represent a stereo signal sampled at a rate of $s$ samples per second and a number of digital levels $2^{N}$ is $2 \, s \, N$ bits per second; because there's a lot of redundancy in music (no, it's not just Philip Glass), there are opportunities for compression.
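For CD audio the $2 \, s \, N$ arithmetic works out like this:

```python
def pcm_bitrate(samples_per_sec, bits, channels=2):
    """Uncompressed PCM bit rate: channels * s * N bits per second."""
    return channels * samples_per_sec * bits

cd = pcm_bitrate(44100, 16)
print(cd)              # 1,411,200 bit/s for stereo CD audio
print(cd / 320_000)    # a 320 kbit/s MP3 is roughly a 4.4:1 compression
```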

Sometimes the music is compressed without losing information, called "lossless compression"; an example is FLAC, which has all the information necessary to reconstruct the original uncompressed digital music file. This is similar to compressing a data file for transmission; after decompression the reconstituted file must be identical to the original. (FLAC uses regularities of music to compress data more efficiently than a general compression algorithm.)

Sometimes the compression loses information that is deemed unnecessary, called "lossy compression"; MP3 compression is lossy. Lossy compression adds sampling and/or quantization noise to the original data, though the design of the compression scheme is supposed to minimize the aural effect of those additional errors in some trade-off with the compression ratio.

On the other hand, because plastic CDs and wireless signals sometimes get damaged, some space or bandwidth has to be used for error-correction codes and other digital administrative minutia. When the iPhone connects to the earbuds by wire, it can send an analog signal, but when it connects via Bluetooth, the signal is digital, must be compressed for transmission and requires a lot of network administration detritus.

So, one of the first questions wireless earbuds raise is: is Apple sending enough data over that Bluetooth connection for an audiophile? This isn't the only question, though.

Using the very best in advanced engineering CAD displays, we can see that this is only the first of four classes of problems:

Four classes of reasons why wireless earbuds are not for audiophiles.



Problem class 1: Quality of the Data

Apple's decision to go wireless changes the transmission of data between the main processor and the digital-to-analog conversion from a wired connector inside the phone, and protected from most interference, to a digital transmission over a noisy channel (Bluetooth). That means that a lot of other things have to be transmitted, in particular handshaking data, error-correction codes, and diagnostic signals.

The problem is mostly that Apple went from being a perfectionist's personal fiefdom (during the reign of His Steveness, may his divine hand bless you with a bounty of new MacBookPros) to being a company looking to make a buck. And companies looking to make a buck make different trade-offs.

His Steveness wanted the best. He might not have gotten it always, but he made products for people who wanted to brag they had the best. (Even when by all objective measures they didn't.) But now, the whole company seems to be into the "milk our brand while it lasts" phase of its corporate life cycle, so I'd venture that their trade-offs are much closer to the general public's than those of the fringes.

His Steveness ran the company targeting the fringes, so that the general [Apple-buying] public could pretend to aspire to be in the fringes. Depending on whom you ask, that's "aspirational marketing" or a "reality distortion field."

Not anymore. Not for Apple.

Taking a lossy compression like MP3 and compressing even further for the earbuds (possibly limiting both the frequency and the dynamic ranges) isn't a recipe for audiophile sound. It does work for phone calls, and that's probably what most phones are used for. But for music... no.

(At this point I should mention that digital audiophiles moved on from Apple a while ago, putting up with miserably bad, er, less-than-optimal interfaces to use things like —to go entry-level— a second-generation Fiio X5.)



Problem class 2: Quality of Digital To Analog Conversion

A second source of problems is the digital-to-analog conversion circuitry. Among the many problems that can come from a cheap (and low-power, which is important in wireless earbuds) DAC, the most obvious are reproduction errors: the same digital input doesn't map to the same voltage consistently, or the difference between digital levels doesn't match the appropriate difference in voltages. This isn't much of a problem in 2016 (it used to be in the 1990s).

Another, more serious problem has to do with the precision of the timing, which is one of the major reasons why, if you care about computer music, you'll get an external DAC, possibly a Chord Mojo or an Audioquest Dragonfly. (Or maybe something from the brand that can't be named.)

Even small errors in timing (some of which are induced by the buffering and data processing necessary to extract the digital music from the wireless signal) can lead to significant phase distortion, in that the 'time' used to reproduce the music doesn't match real time.

To illustrate this problem, consider the following phase-distorted sine wave (slightly exaggerated to make the case visible, but even very small phase distortions sound horrible):


Comparing the two periods of the distorted wave, T1 and T2, you can see that phase distortion in this case induces frequency variation. This means that instruments will sound as if they are out-of-tune, and [if you're over 30 you'll get this reference] like your brand-spanking-new iPhone is a cassette player running out of battery power.
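A toy model makes the timing-to-pitch connection concrete: the instantaneous frequency of a signal is the derivative of its phase (divided by $2\pi$), so a wobbling phase is literally a wobbling pitch. The numbers below (a 440 Hz tone, 5 Hz wobble, 0.02 rad depth) are purely illustrative, not measurements of any real DAC:

```python
import math

def distorted_sine(t, f=440.0, jitter_hz=5.0, depth=0.02):
    """Sine with a slowly wobbling phase:
    phase(t) = 2*pi*f*t + depth*sin(2*pi*jitter_hz*t)."""
    phase = 2 * math.pi * f * t + depth * math.sin(2 * math.pi * jitter_hz * t)
    return math.sin(phase)

def instantaneous_freq(t, f=440.0, jitter_hz=5.0, depth=0.02):
    """Derivative of the phase divided by 2*pi: the pitch the ear hears."""
    return f + depth * jitter_hz * math.cos(2 * math.pi * jitter_hz * t)

# The perceived pitch swings around 440 Hz instead of staying put:
freqs = [instantaneous_freq(t / 1000) for t in range(1000)]
print(min(freqs), max(freqs))
```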

If you accidentally downloaded a [poorly encoded] FLAC file from a torrent site you accidentally fell into while looking for a French Literature study group, accidentally ran that FLAC file through a FLAC-to-MP3 converter that you accidentally had on your computer, then accidentally played it and noticed strange warbling and high-pitch glitches, that's an entirely accidental observation of very bad phase distortion.

This is why any audiophile wants a DAC that uses its own timing circuitry and buffer, rather than depend on the shared circuitry involved in network management etc.



Problem class 3: Fixed- vs Variable-Gain Analog Amplification

Many computers (and I assume all iPods and iPhones) have a fixed gain amplifier for the reconstructed analog signal. That means that changes in volume are created by multiplying the digital signal by digital fractions prior to conversion to analog. In essence, removing data from the signal.

For example, to halve a digital number, all you need to do is shift all bits right, discarding the lowest-significance bit and adding a zero at the most significant bit (or, depending on how negative numbers are encoded, a copy of the previous sign bit). This means that one bit of data has been lost. The sound is not just half-volume, but also half the dynamic range; each halving of volume removes one bit, or 6 dB, of dynamic range:

$\texttt{[1000101001011011]} \rightarrow \texttt{[0100010100101101]} \rightarrow \texttt{[0010001010010110] }$
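You can verify those shifts directly (Python, treating the samples as unsigned for simplicity):

```python
def halve(sample):
    """Digital volume halving on an unsigned 16-bit sample: shift right,
    discarding the least-significant bit and shifting in a zero on top."""
    return sample >> 1

s = 0b1000101001011011                     # the 16-bit word from the example
print(format(halve(s), "016b"))            # 0100010100101101
print(format(halve(halve(s)), "016b"))     # 0010001010010110

# The lost bit is unrecoverable: two different samples collapse into one.
print(halve(0b10) == halve(0b11))          # True
```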

If the original dynamic range of the data was higher than that of humans (CD or CD-derived online purchase or stream? No, it wasn't!), then this loss isn't important. Otherwise (i.e. basically always), your music just became lower resolution.

In a better sound system (i.e. any external DAC/amp), the analog signal out of the DAC goes into an analog amplifier that has variable gain. In some systems the variable gain is controlled with a knob, in others using a digital interface. But in both cases the amplifier tends to be a digitally controlled variable gain amplifier, in which the analog signal path is all analog and only the gain is controlled by a digital system (typically a feedback network of switchable topology).

(An alternative approach is to take the, say, 16-bit data and shift it 8 bits up into the most significant bits of a 24-bit word, then multiply that by an 8-bit fraction (thus allowing for 256 different volume levels) and feed the result to a 24-bit DAC, whose result will feed a fixed-gain amplifier. This allows for the whole process to be digital as long as possible.)
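Here's that alternative scheme in a few lines (a sketch with a hypothetical function name, again using unsigned samples for simplicity):

```python
def volume_24bit(sample16, vol8):
    """Shift a 16-bit sample into the top of a 24-bit word, scale it by
    an 8-bit volume fraction (vol8/256), and hand the 24-bit result to
    a 24-bit DAC."""
    word24 = sample16 << 8            # occupy the most significant bits
    return (word24 * vol8) >> 8       # still fits within 24 bits

s = 0b1000101001011011
half = volume_24bit(s, 128)           # half volume, one of 256 levels

# Unlike the plain 16-bit shift, no music data is destroyed:
print(half >> 7 == s)                 # True: all 16 original bits survive
```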

The amplification issue alone is worth getting an external DAC; but it's important to also consider the next point.

You don't say, @AudiophiliacMan

(My Audioquest Dragonfly is usually plugged into a powered USB hub, so it doesn't rely on the computer USB bus power.)


Problem class 4: Power issues

And this is the big big one. You like loud music? Well, expect distortion as soon as the volume gets loud. Because most of these small batteries aren't able to deliver the current needed fast enough. So what happens is that as the output voltage increases by $\Delta v$, requiring a $\propto (\Delta v)^2$ increase in power, the amplifier "fixed" gain starts to decrease, more so the higher the $\Delta v$, and we get... well, we get this:


That compression of the sine wave makes it sound nasal. When your music sounds like that, it's a sign that your amplifier isn't able to draw enough power. Note that this is different from the clipping that happens if the transistors in the output stage enter the saturation regime; in that case, instead of a smoothly scrunched sine wave, we get a flat-topped square wave, which makes everything sound like a heavy metal guitar.**
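The two failure modes can be sketched with a toy model, using tanh as a stand-in for the smooth gain droop of a power-starved supply and a hard clamp for transistor saturation (illustrative only, not a circuit simulation):

```python
import math

def supply_sag(x, limit=0.8):
    """Toy model of a power-starved amplifier: gain falls off smoothly
    as the output approaches what the battery can deliver."""
    return limit * math.tanh(x / limit)

def saturation_clip(x, limit=0.8):
    """Output-stage saturation: the waveform is simply chopped flat."""
    return max(-limit, min(limit, x))

peak = 1.0  # a loud passage, beyond what the supply can sustain
wave = [peak * math.sin(2 * math.pi * t / 100) for t in range(100)]

sagged = [supply_sag(x) for x in wave]
clipped = [saturation_clip(x) for x in wave]

# Sag rounds the peaks ("nasal"); clipping flattens them ("heavy metal guitar").
print(max(sagged), max(clipped))
```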

Ever wonder why 100W audiophile amplifiers have external power supplies that look bigger than the 1000W power supply on a computer server? That's because they are. Abundant power is an essential part of clean amplification, and without clean amplification the rest doesn't matter. And the way you get abundant power is you have a lot of slack available.

Care to bet how much slack power those earbuds have?



Does it matter?

To whom?

To me, no. I have a number of other, better sources of music, and I use the iPhone as an internet device and, astonishingly, as a phone. Weird, I know.

To those who just want to listen to podcasts, audiobooks, maybe some music in noisy environments? Of course not.

To an audiophile, who for some unexplained reason doesn't get a cheap lossless player like the Fiio X5? Yes, it matters, but this audiophile has the option to get the new Audioquest Dragonfly RED, with a tail adapter for the iPhone, so that's what s/he should do. Pair that with a nice pair of big cans like the Sennheiser 650s (in my opinion the best quality/price cans on the market), and you're set.

To an audiosnob who can't tell the difference between 866kbps Apple lossless and 32kbps mono MP3 but insists on having "the very best," preferably Bang & Olufsen or some other design-heavy, sound-quality-light, high-recognition brand? Yes, it will matter a lot. (Audiosnobs have already invaded Head-Fi and other audiophile forums arguing against the iPhone 7 from their usual position, ignorance.)




-- -- -- -- Footnotes -- -- -- --

* Yes, the Nyquist limit for 44.1 kHz sampling is 22.05 kHz... as long as the anti-aliasing filter is a perfect step function in the frequency domain. The universe containing exactly zero perfect step function anti-aliasing filters, I and the entire engineering profession prefer to hedge by saying that it's "over 20 kHz."

When audiophiles say that LPs (Long Play records, aka "vinyl," Olivia Wilde not included) have better sound than CDs, they are usually referring to dynamic range. It's not just that CDs have only 96dB of range but, much worse, that in transferring the music from the master recordings to CD, sound butchers, er, engineers would monkey about with the original dynamic ranges to "make it fit better," which was disastrous for music with broad dynamic ranges.

(The standard example is the butchery of Dire Straits' "Money For Nothing," which was so compressed for the CD that it lost the whole point of the intro. Hey, though I listen almost exclusively to art music and jazz, nostalgia has its place.)


** That's because the sound effect that makes electric guitars sound like that is precisely pre-amping the sound so high that the output stage transistors will saturate and clip the waveform square, at the same time removing almost all volume envelope effects. You can do this to any instrument including voice.

Added later: yes, I know all these effects are digital now. Kids these days! In my day you built your effects with transistors, µA741s and sometimes NE555s. None of that "digitize, FFT, do whatever, convert out" nonsense. We had grit!

Sunday, August 14, 2016

Working the solution versus solving the problem

Some time ago I tweeted that I was going to row a number of nautical miles on my trusty old Concept IIc machine. As an engineer, I use SI units for everything --- except on the water, where I use traditional units: nautical miles and knots.

A couple of rowers I know asked me how I had hacked the controller on the Concept IIc to change the units. This was my answer:

How I "hacked the software" on the Concept IIc to use nautical miles. #genius

Many people miss the point: the others were making a common problem-solving mistake, a mistake that forecloses most creative solutions:

The mistake is working the solution instead of solving the problem.

Hidden in the question about the hack is an assumption: that the solution has to come from my programming skills (they know what I do, so it's not an unreasonable assumption). That assumption sets a path to a solution, which would include reprogramming the firmware inside the Concept IIc controller.

The ability to backtrack from that path to the beginning and choose another path is the key step in the thinking process here. Too many people start on one path and can't get off it to pursue other possible paths to the solution.

By focussing on the problem, i.e. the question "what is to be achieved?", rather than on the solution under consideration (changing the software), I avoided that mistake.

Yes, this is a trivial and obvious (after the fact) example, but often the difference between a non-working "solution" and a working solution is a matter of focus on the problem to be solved.

Alas, changing their focus is too hard for some would-be problem solvers.

Saturday, July 30, 2016

Product ≠ Prototype ≠ Technology ≠ Idea

Production note: Some credit to Thunderf00t, for had he not made such a complete pig's breakfast of his analysis of Hyperloop, this "why scientists are bad at engineering" post wouldn't have been written. *


Product ≠ Prototype ≠ Technology ≠ Idea


There are significant differences between an idea ("it would be great to fly from London to New York in four hours, let's use fighter jet technologies to make an airliner") and a marketable product (the Concorde). That's just on the engineering side, without the additional complexity of the business side.


Ideas to technology

An idea is just an organization of thoughts, for example: "if we got a train riding on magnets instead of wheels, we could get rid of friction, wear, and fatigue; then if we put the train in a low pressure tube we could go really fast."

This idea becomes a technology when you get something actually working; this something is called, for obvious reasons, a technology demonstrator. It's used to show that the technology has some potential, and it used to be a minimum requirement for getting funding. (More on that below.)

Linear motor Maglev technology is already available, though maybe not quite up-to-spec, but there are some technological barriers to overcome regarding the tubes and the pods.

Here it's worth noting a common error of reasoning, which is to assume that just because something hasn't been done, it can't be done.
Consider, for example, TF's use of a video excerpt showing Brian Cox inside "the largest vacuum chamber in existence." It's the largest because there was never a need for a larger one; it doesn't represent a technology limit. It's not that difficult to make a long tube that can take a big pressure differential (= pipeline), though we currently design this kind of tube for over-pressure because that's what its current use requires.
Many of the "the largest X in existence" limits are determined by economic necessity, not laws of physics. Think about the largest pizza ever made; was its size determined by some limit of the laws of physics?
Sometimes the technology is based on existing science, or co-developed with it, like some of the current work in biotech. Sometimes the technology precedes the science needed to explain it (or at least the attention of the scientists whose expertise is necessary to build the explanation), as was the case of most of the mechanical innovations in the first industrial revolution.

Part of the funding of Hyperloop is an investment in technology development that will have applications beyond the Hyperloop itself ("spillovers"). There's this thingamabob called a "laser" that was imagined as a pew-pew death-ray in sci-fi, became reality as a pure Physics experiment, and mostly is used to check out groceries, read data off of polycarbonate discs, pump bits down fiberoptics, and annoy cats. Oh, some pew-pew, too.

Sometimes licensing or developing the technology in directions other than the originally intended ends up being the most important part of the business.

It's probably worth noting two things at this point:
  • Hyperloop projects haven't finished the technology development phase; that would be indicated by a technology demonstration. Assertions about the final product at this stage are futile.
  • Getting funded by professional investment organizations (with their due diligence and fiduciary obligations) requires passing much stricter scrutiny than that given to crowdsourced projects (like Solar Roadways, the Fontus water bottle, or Triton artificial gills).

Technology to prototype

Once the technologies necessary for implementing the idea exist, they have to be put together and made to work under laboratory conditions or at test-scale, in the form of prototypes.

Here's where the "scientists are bad at engineering" point becomes most pointy.

Prototypes will obey the laws of Physics (and other sciences), since they operate in reality. It may be the case that the laws aren't known yet (as with the first industrial revolution) or that they are being simultaneously developed, but no prototype can violate the laws of Physics.

The problem is that there's a lot of specialized knowledge that goes into engineering. Each small piece of knowledge obeys the laws of Physics, but deriving them from first principles isn't practical. (And real scientists don't dirty their hands with engineering.)
For example, a physicist friend of mine didn't know why the suspenders of a suspension bridge (the vertical cables from the big catenary cable to the bridge deck) sometimes have a thin metal helix around them. When pressed on it he said "it's probably a reinforcement of some kind." I knew that the helix is there to limit aerodynamic flutter, and told him. He said, "oh, of course" and mentioned some interesting facts of turbulent flow.
That's what I mean by "science is the foundation of engineering, but scientists don't learn the body of knowledge of engineering." Most scientists are humble enough to understand that there are things they don't know. My physicist friend didn't assert that the helix was for reinforcement; he actually said, "I don't know," a sentence more people would be wise to use.
For illustration, here's a series of videos about metal shop work (the presenter is a professor, I believe, since he keeps talking about research prototypes, but he's seriously shop-savvy):


Instructive and entertaining videos. A big hat tip to Star Simpson for the link, via Casey Handmer. Such is the serendipitous nature of internet knowledge discovery.

A prototype is a one-off, possibly scaled-down, version of the product reduced to its core elements. It's designed to be operated by specialists under controlled circumstances. It requires constant attention during performance and, conversely, is usually over-instrumented for its final purpose (as a product, that is), since part of its purpose as a prototype is to see which parts of the engineering body of knowledge need to be applied to the technology itself.

Sometimes that extensive instrumenting of prototypes helps discover hitherto unknown issues or phenomena and leads to rethinking of extant technologies and redesign or retrofit of existing products. Historically a good part of the body of knowledge of engineering has evolved by this process.
For example, vortex shedding in aircraft wings was not identified for the first several decades of aviation, even though the physics necessary for it was developed in the late 19th Century. Once the engineering idea of vortex shedding wingtips (or, for older airframes being retrofitted, winglets) entered the body of knowledge, it became universal for new airframe design.
The gulf between a prototype, typically a one-off object made to laboratory-grade specifications that requires an expert to operate, and a final product is almost as big as that between idea and prototype, and a lot of other specialized skills are necessary to bridge that gulf.

Prototype to product

Any engineering product development textbook will identify a lot of things that separate a prototype from a product, but here are a few off the top of my head (and the figure above):
  • Products have to be mass-produced by production facilities, not prototyping shops or laboratories. Figuring out how to mass-produce a product and organizing that production is what's called production engineering. Sometimes that involves the development of specialized production technology, and its prototyping and production, which might involve production engineering of its own, which might require... etc.
  • Products are to be operated by normal people, not expert operators (the drunk Russian truck drivers in the figure were motivated by the Only In Russia twitter account, a terrible sink of productivity). Though it's not entirely accurate, many people believe that Apple's success stems from its ability to deploy technology into final products by making it accessible to average users. That is the field of user experience design.
  • Products also need to be much more resilient, safe, repairable, and maintainable than prototypes. Though, sadly for the practice of engineering --- and the environment --- the "discard don't repair" mentality has taken hold, so maintainability and repairability aren't priorities in much product design. It being a railway, Hyperloop would have to be designed for both, of course.
There are a lot more. Engineering textbooks exist for a reason; they're not just collections of photos of pretty machines. A lot of knowledge goes into actually making things.

In the case of Hyperloop the product is passenger rail transportation, so there's yet another body of knowledge involved, that of managing railroad operations.

Yes, it sounds exciting, doesn't it?

The whole "how hyperloop will kill you" schtick is nonsensical, since there's no final design to evaluate; but it becomes hilarious when almost all the ways to "kill" the passengers have well-established railroad solutions, namely sectioning (you can isolate sections of a line, and you can have isolation joints in the tube), shunt lines and spurs (to remove a pod from the main tube and access the outside world), instrumentation and control system with appropriate redundancies, and a wealth of other factors that any railroad engineer would be aware of.

I'm not a railroad engineer; these are basic Industrial Management observations.

And then there's deployment…

Anyone with a passing knowledge of operations management or project management could find some possible issues with the infrastructure of Hyperloop, even without knowing the details of the technology. Not impossibilities, issues that might cost money and time.
For example, a number of logistics complications come to mind regarding the construction of the Hyperloop along Route 5, namely: the movement of large-sized tube elements; the use of the Route 5 lanes as part of the construction area (even if most of the staging is done off of the road itself) while it's in use as a public roadway; and let's not forget that California municipalities are among the most anti-change in the world: NIMBY was invented here. Unless you know someone who knows someone who knows…
To have an idea of the scale of the problem created by moving the many elements of the tube, consider what happens when just one large assembly has to move on public roadways:

Building the Hyperloop infrastructure is essentially a large-scale project management problem, and specialists would be involved; I added the example above to show that there are more obvious difficulties than the risk of depressurization; in fact, depressurization isn't much of an issue under good operations management and a well thought-out track.

But pointing out commonsensical logistical difficulties doesn't help with the whole "I am a great scientist, hear me snark" persona.



- - - - - - - - - - Footnote - - - - - - - - - -

* My current view of transportation is that trains and ships are better for freight and cars and airplanes are better for people. By cars I mean autonomous individual vehicles, not necessarily individually owned, chaining for inter-city travel at 200-300 km/h (individual pods self-organizing into convoys), and swarming for autonomous intra-city travel. Most of the current problems with air travel are economic, regulatory, cultural, and managerial, not technological, though I'd like to see supersonic aircraft further along the product development process.

Maybe the Acela corridor would make sense for Hyperloop, though. Particularly since weather in the frozen Winter wasteland and broiling Summer Inferno of the Northeast is more volatile than in California, and the Hyperloop tube would be more resilient than the air shuttles, particularly the small planes. (Boston to NYC late December in a small plane… the horror, the horror.)

But as mentioned above, I believe there are some potential high-value spillovers from the technological developments necessary for Hyperloop, including advances in materials science and production engineering, even if it isn't ever actually built.


A couple of acquaintances asked me why I don't address TF's video (or its follow-up and comments on both YouTube and Reddit) directly. Giving it minimal thought,


But the main reason not to get into online arguments with strangers is basically the same as for not wrestling with a pig: you both get dirty but the pig enjoys it.

Monday, July 25, 2016

A rational case for Solar Roadways projects in organizations


The first time I heard of Solar Roadways my response was "so they are putting solar panels flat on the ground and shaded by cars?" My interlocutor correctly interpreted that as "What a thoroughly stupid idea; no point wasting more time on it." *

There are, however, some good reasons to start a Solar Roadways project in some organizations. Really: good, rational reasons that could convince an engineer. Well, some engineers.

Because of the buzz surrounding Solar Roadways, such a project might get funded. And a funded project opens up a number of ways to fund other projects that otherwise wouldn't be. For example:

1. An overhead charge is applied to all outside grants and funding. For example, an organization might add a fifty-percent surcharge to every expenditure: spend 1000 on your funded Solar Roadways project, and contribute an additional 500 to a general fund (from which the projects that aren't sexy or buzz-worthy can be funded).

2. Fund as many personnel as you can get away with from the Solar Roadways money; of course, funding them doesn't mean they can't work on other things, and in many organizations it's difficult to tell which project someone is working on without expending a lot of effort. Given their own problems, the funders of a Solar Roadways project are unlikely to be eager for a serious audit of expenditures.

3. Fund as much infrastructure, capital investment, and current expense as possible with Solar Roadways money. Basically the same argument as for personnel.

4. Use the buzz of having a Solar Roadways project to attract attention and more funding, to get potential donors to come to fund-raisers, to impress upon the alumni (for universities) how "with it" your institution is. Also, you can play the "Solar Freaking Roadways" clip with the Serenity captain over and over again for the nerdiest of your audience, thus distracting them from any inconvenient engineering professor whose pet project isn't being funded.

Obviously these aren't arguments for Solar Roadways as an energy source, but rather examples of why smart and knowledgeable people go along with nonsense like that.

Great video by Crazy Aussie Dave Jones (EEVBlog) on Solar Roadways:


- - - -

* Some people start going over the details and quibble over the durability of the panels and the visibility of the lights in them or whether they could really melt snow (hint: no, they can't).

That's like arguing about whether the container cross-bracing ties in a Maersk Triple-E would hold if instead of sailing it over water we attached rocket motors to the hull and sent it to orbit and then deorbited it towards the destination port.

(Yes, get it to orbital speed then deorbit, to make it even stupider than a simple --- though also highly unrealistic --- ballistic trajectory.)

The cross-bracing isn't the problem, the concept itself is demented.

Sunday, February 8, 2015

Science popularization has an identity problem

Some influential science popularizers are doing a disservice to public understanding of science and possibly even to science education.

Yes, it's a strong statement. Alas, it's a demonstrable one.

With the caveats that I enjoy the Mythbusters show, especially the recent series with their back-to-origins style, and that this post is not specifically about them, the recent episode about The A-Team presented an almost-perfect example of the problem.

"Stoichiometry."

Midway through the episode Adam uses this word. It's an expensive way of saying "mass balancing of chemical equations" (not how it was described in the show). And then, well... and then Jamie proceeds to not use stoichiometry.

To be concrete: they were exploding propane. Jamie tried mixing it with pure oxygen and got a big explosion. Then they mentioned stoichiometry. At that point, what they should have done was introduce some basic chemistry.

The propane molecule has 3 carbon and 8 hydrogen atoms, $\mathrm{C}_{3} \mathrm{H}_{8}$. It burns with molecular oxygen, $\mathrm{O}_{2}$, yielding carbon dioxide, $\mathrm{C} \mathrm{O}_{2}$, and water vapor, $\mathrm{H}_{2} \mathrm{O}$.

Chemists represent reactions with equations, like this:

$\mathrm{C}_{3} \mathrm{H}_{8} + \mathrm{O}_{2} \rightarrow \mathrm{C} \mathrm{O}_{2} + \mathrm{H}_{2} \mathrm{O}$

This equation is unbalanced: for example, there are three carbons on the left-hand side, but only one on the right-hand side. By changing the proportions of reagents, we can get both sides to match:

$\mathrm{C}_{3} \mathrm{H}_{8} + \mathbf{5} \, \mathrm{O}_{2} \rightarrow \mathbf{3} \, \mathrm{C} \mathrm{O}_{2} + \mathbf{4} \, \mathrm{H}_{2} \mathrm{O}$

Once we have this balance, we can look up the atomic masses of carbon (12 g/mol), hydrogen (1 g/mol), and oxygen (16 g/mol), compute the molar masses, and determine that we need 160 grams of oxygen for every 44 grams of propane. (*)
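The calculation above is short enough to sketch in a few lines of Python (a minimal sketch, using the rounded atomic masses from the text; the dictionary-based `molar_mass` helper is mine, not anything from the show):

```python
# Stoichiometry sketch for propane combustion:
#   C3H8 + 5 O2 -> 3 CO2 + 4 H2O
# Rounded atomic masses in g/mol, as in the text.
ATOMIC_MASS = {"C": 12.0, "H": 1.0, "O": 16.0}

def molar_mass(formula):
    """Molar mass of a molecule given as {element: atom count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

propane = molar_mass({"C": 3, "H": 8})  # 3*12 + 8*1 = 44 g/mol
oxygen = molar_mass({"O": 2})           # 2*16      = 32 g/mol

# The balanced equation uses 5 mol of O2 per mol of C3H8,
# so per 44 g of propane we need 5 * 32 = 160 g of oxygen.
o2_grams_per_mol_propane = 5 * oxygen
o2_to_propane_mass_ratio = o2_grams_per_mol_propane / propane

print(propane)                              # 44.0
print(o2_grams_per_mol_propane)             # 160.0
print(round(o2_to_propane_mass_ratio, 2))   # 3.64
```

The ratio comes out to about 3.6, which is the "about four times more oxygen than propane by mass" figure used below.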

Back on the Mythbusters, after mentioning stoichiometry, Jamie starts trying out different proportions of propane to oxygen. If he had actually used stoichiometry he'd already have the proportions calculated, as I did above: about four times more oxygen than propane by mass; no need to experiment with different proportions.

(Yes, there's a lot of experimentation in engineering, but no engineer ignores the basic scientific foundations of her field. Chemical engineers don't figure out mass balances by trial and error; they use trial and error after exhausting the established science.)

This illustrates a major problem in the way science is being popularized: to a segment of the educated and interested audience, science is an identity product. Like a Prada bag or a sports franchise logo on a t-shirt, they see science as something that can signal membership in a desired group and exclusion from undesirable groups.

Hence the word "stoichiometry" inserted in a show that doesn't actually use stoichiometry.

"Stoichiometry" here is, like the sports franchise logo, purely a symbol. The audience learns the word, in the sense that they can repeat it, but not the concept, let alone the principles and the tools of stoichiometry. The audience gains a way to signal that they "like" science, but no actual knowledge. Like a sedentary person who wears "team colors" to watch televised games.

Some successful science popularizers pander to this "like, not learn, science" audience, instead of trying to use that audience's interest in science to educate them.

So what, most people will ask. It's the market working: you give the audience what they want. And there's no question that selling science as identity is good business. Shows like House MD, Bones, and The Big Bang Theory all take advantage of this trend. Gift shops at science museums cater to the identity much more than to the education: a look at their sales typically finds much more logo-ed merchandise than chemistry sets or microscopes.

(Personal anecdote: despite having three science museums nearby, I had to use the web to get a real periodic table poster. A printable simple table from Los Alamos National Lab.)

"Liking" science without learning it is bad for society:

1. Crowds out opportunities for education. People have limited time (and money) for their hobbies and activities. If they spend their "science budget" on identity, they won't have any left for actual science learning. Many more people read Feynman's two autobiographies than his Lectures On Physics or his popular physics books.

2. Devalues the work of scientists and engineers, by presenting a view of science that excludes the hard work of learning and the value of the knowledge base (trial-and-error in lieu of mass balance calculations, for example). Some people end up thinking that science is just another type of institution credential (or celebrity worship) instead of being validated by physical reality.

3. Weakens science education. Some people who go into science expect it to be easy and entertaining (in the purely ludic sense), instead of hard but rewarding (deriving satisfaction from really understanding something), as that's what the popularization depicts. They then want schools to match those expectations. While colleges may not want to simplify science and engineering classes, they put pressure on faculty for more "engaging" teaching: less technical, more show. (**)

4. As science becomes more of an identity product to some people, and increasingly perceived as identity-only by others, it becomes more vulnerable to non-scientific identity threats, such as derailing a major scientific and technical achievement in space exploration by talking about sartorial choices and sociological forces in academia.


So, what can we do?

First, we should recognize that an interest in science, even if currently trending towards identity, can be channeled into support for science and science education. As societal trends go, a generalized liking for science is better than most alternatives.

Second, there are plenty of sources of information and education that can be used to learn science. There's a broad variety of online resources for science education at different levels of knowledge, free and accessible to anyone with an internet connection (or indeed a library card; books were the original MOOCs).

Third, current "science as identity" popularizers may be open to educating their audiences. Contacting them, offering feedback, and using social media to otherwise proselytize for science (as in scientific knowledge and thinking like a scientist) might induce them to change their approach.

The most important thing anyone can do, though, is to try to get people who "like" science to understand that they should really learn some.

(Final note on the A-Team episode: Adam should have played Murdock, not Hannibal.)

- - - -
(*) I learned to do this on my own as a kid, but the material was covered in ninth grade chemistry. (A long time ago in a country far away, in ninth grade you chose a technical or artistic area in school; mine was 'chemical technology' because my school didn't have electronics.) A side-effect of my early interest in chemistry is that I have quasi-Brezhnevian eyebrows: you burn them off five or six hundred times, they grow back with a vengeance.

(**) Some schools protect their main reputation-building degrees by creating non-technical versions of the technical courses and bundling them into subsidiary degrees. So, for example, they have information technology courses, which sound like computer science courses but are in fact nothing like them.
Another approach is the encroachment of humanities, arts, and social sciences "breadth" requirements into science and engineering degrees. When I studied EECS in Europe, we had five years of math, physics, chemistry, and engineering courses. A similar degree in the US has four years and usually a minimum of one year-equivalent of those "breadth" requirements, though some people can have more than two years-equivalent by choosing "soft engineering" courses like "social impact of computers."