No. It's not.
I've read a number of recent books and articles about how technology, particularly computers and robots, will change everything and create a bipartite society where, to put it compactly, "there will be those who tell computers what to do, and those who are told what to do by computers." (As a computer engineer, I sort of approve of this message. :-)
This idea of a bipartite society with a small elite lording over the undifferentiated masses is not new (really, not new at all). That it's a result of technology instead of divine intervention or application of force is also not new, but, since most people have an "everything that happened before me is irrelevant because my birth was the most important event in the totality of space-time" attitude towards the past, this is ignored.
There are a few reasons contributing to the popularity of this idea:
It's mostly right, and in a highly visible way. Technological change makes life harder for those who fail to adapt to it. In the case of better robotics and smarter computers, adaptation is more difficult than it was for other changes like the production line or electricity. One way to see this is to observe how previously personalized services have first been productized (ex: going from real customer service representatives to people following an interactive script on a computer) and then had their production processes automated (ex: from script-following humans to voice-recognition speech interfaces to computers). Technological change is real, it's important, and it's been a constant for a long time now.
(Added Jan. 7, 2014: Yes, I understand that the economics of technology adoption have a lot to do with things other than technology, namely broader labor and economic policies. I have teaching exercises for the specific purpose of making that point to execs and MBAs. Because discussion of these topics touches the boundaries of politics, I keep them out of my blog.)
It's partially wrong, but in a non-obvious way. People adapt and new careers appear that weren't possible before; there are skilled jobs available, only the people who write books/punditize/etc don't understand them; and humans are social animals. The reasons why these are non-obvious, in order: it's hard to forecast the evolution of the use of a technology; people with "knowledge work" jobs don't get Mike Rowe's point about skilled manual labor; most people don't realize how social they are.
(On top of these sociological reasons there's a basic point of product engineering that most authors/pundits/etc don't get, as they're not product engineers themselves: a prototype or technology demonstrator working in laboratory conditions or under very limited and specific circumstances is a far cry from a product that fits with the existing infrastructure at large and is usable by an average customer. Ignoring this difference leads authors/pundits/etc to over-estimate the speed of technological change and therefore to under-estimate the capacity of regular people to adapt to it.)
Change sells. There's really a very small market for "work hard and consume less than you produce" advice, for two reasons. First, people who are likely to take that advice already know it. Second, most people want a shortcut or an edge; if all that matters is change, that's a shortcut (no need to learn what others have spent time learning) and it gives the audience an edge over other people who didn't get the message.
It appeals to the chattering classes. The chattering classes tend to see themselves as the elite (mostly incorrectly, in the long term, especially for information technologies) and therefore the idea that technology will cement their ascendancy over the rest of the population appeals to them. That they don't, in general, understand the technologies is itself beyond their grasp.
It appeals to the creators of these technologies. Obviously so, as they are hailed as the creators of the new order. And since these tend to be successful people whom some/many others want to understand or imitate, there's a ready market for books/tv/consulting. Interestingly enough, most of the writers, pundits, etc., especially the more successful ones, are barely conversant with the technical foundations of the technologies. Hence the constant references to unimportant details and biographical information.
It appeals to those who are failing. It suggests that their problems come from outside, from change that is being imposed on them. Therefore failure is not the result of goofing off in school, going to work under the influence of mind-altering substances, lack of self-control, or the uselessness of a degree in Narcissism Studies from Givusyourstudentloans U. No, it's someone else's fault. Don't bother with STEM, business, or learning a useful skill. Above all, don't do anything that might harm your self-esteem, like taking a technical MOOC with grades.
It appeals to those in power. First, it justifies the existence of a class of people who deserve to have power over others. Second, it describes a social problem that can only be solved by the application of power: since structural change creates a permanent underclass, through no fault of their own, wealth must be redistributed for the common good. Third, it readily identifies the class of people who must be punished/taxed: the creators of these technologies, who also create new sources of wealth to be taxed. Fourth, it absolves those in power from responsibility, since it's technology, not policy, that is to blame. Fifth, it suggests that technology and other agents of change should be brought under the control of the powerful, since they can wreak such havoc in society.
To be clear, technology changes society and has been doing so since fire, the wheel, agriculture, writing, and, skipping ahead, the printing press, systematic experiments, the production line, electricity, DNA testing, selfies... The changes these technologies have brought are now integrated into the way we view the world, making them so "obvious" that they don't really count. Or do they? Maybe "we" should do some research. If these changes were obvious, certainly they were accurately predicted at the time. Like "we" are doing now with robots and AI.
You can find paper books about these changes on your local sky-library dirigible, which you reach with your nuclear-powered Plymouth flying car, wearing your metal-fabric onesie with a zipper on the shoulder, right after getting your weekly nutrition pill. You can listen to one of three channels bringing you music via telephone wires, from the best orchestras in Philadelphia and St. Louis, while you read.
Or you can look up older predictions using Google on your iPhone, while you walk in wool socks and leather shoes to drink coffee brewed in the same manner as in 1900. The price changed, though. It's much cheaper to make, but you pay a lot more for the ambiance.
Think about that last word.
Friday, December 6, 2013
Word salad of scientific jargon
"The scientists that I respect are scientists who work hard to be understood, to use language clearly, to use words correctly, and to understand what is going on. We have been subjected to a kind of word salad of scientific jargon, used out of context, inappropriately, apparently uncomprehendingly." – Richard Dawkins, in the video Dangerous Ideas - Deepak Chopra and Richard Dawkins, approximately 27 minutes in.
That's how I feel about a lot of technical communications: conference panels, presentations, and articles. An observed regularity: the better the researchers, the less they tend to go into "word salad of scientific jargon" mode.