People change; books and seminars help.
No, not "empower yourself" books and seminars. Of those I cannot speak. Presentation and teaching books and seminars, that's what I'm talking about. It all starts with this picture (click to enlarge):
I made that picture one evening as entertainment. I was cleaning up my hard drive and started perusing old teaching materials, noticed the different styles therein, and decided to play around with InDesign. After a while I ended up putting online something that I believe has useful content. It includes some references, which is what I'm writing about here.
Though I'm writing about the references, I cannot overemphasize the importance of the seminars. Tufte's books explain all the material (and the seminar's potential value is realized only after studying the books), but the seminar provides living proof that the approach works. Some may read the books and then go back to outline-like bullet-point disaster slides because they don't trust the approach to work with a live audience. Tufte's seminar allays these fears.
The HBS seminar is more specific to teaching, but for those of us in the knowledge-diffusion profession it's full of essential information. There are books on the case method and participant-centered learning, but they are not comparable to the seminar. I know, because I had read the books beforehand. When the seminar started I was skeptical. Very skeptical. And when the seminar ended I reflected on what had happened: the instructor had made us, the audience, learn all the material I had read about, without ever stating any of it explicitly. Reading a book about this classroom skill is like reading a book about complicated gymnastics: the description may be clear, but it won't make you able to perform the routine.
But, even if one cannot attend these seminars, here are some references that help:
Edward Tufte's books and web site contain the foundations of good information design and presentation.
Made to stick, by the Heath brothers, explains why some ideas stay with us while others are forgotten as soon as the presentation is over.
Brain rules, by John Medina, uses neuroscience to give life advice. Many things in it apply to teaching and learning; in addition, the skill with which Medina explains the technical material and the underlying science to a popular audience, without dumbing it down, is itself a teaching and presentation technique to learn by his example.
Things that make us smart, by Donald Norman, is a book about cognitive artifacts, i.e. objects that amplify the brain's powers. I also recommend his essay responding to Tufte, essentially agreeing with his principles but disagreeing with his position on projected materials.
Speak like Churchill, stand like Lincoln, by James Humes, should be mandatory reading for anyone who ever has to make a public speech. Of any kind. Humes is a speechwriter and public speaker by profession, and his book gives practical advice on both the writing and the delivery. I have read many books on public speaking and this one is in a class of its own.
The non-designer's design book, by Robin Williams, lets us in on the secrets behind what works visually and what doesn't. It really makes one appreciate the importance of details that appear at first to be over-fussy and unimportant.
Tools for teaching, by Barbara Gross Davis, covers every element of course design, class design, class management, and evaluation. It is rather focused on institutional learning (like university courses), but many of the issues, techniques, and checklists are applicable in other instruction environments.
These references helped me (a lot), but they are just the fundamentals. To go beyond them, I recommend:
Donald Norman's other books, as illustrations of how cognitive limitations of people interact with the complexity of all artifacts.
Robin Williams design workshop, which goes beyond The non-designer's design book. For example: once you understand the difference between legibility (Helvetica) and readability (Times), you can see why one is appropriate for chorus slides (Helvetica) and the other for long written handouts (Times).
Universal principles of design, by William Lidwell, Kritina Holden, and Jill Butler, is a quick reference for design issues. I also peruse it regularly for reminders of design principles. It's organized alphabetically, and each principle gets a page or two, with examples.
On writing well, by William Zinsser. This book changed the way I write. It may seem orthogonal to presentations and teaching, but consider how much writing is involved in class preparation and creation of supplemental materials.
Designing effective instruction, by Gary Morrison, Steven Ross, and Jerrold Kemp, complements Tools for teaching. While TfT has the underlying model of a class, this book tackles the issues of training and instruction from a professional service point of view. (In short: TfT is geared towards university classes, DEI is geared towards firm-specific Exec-Ed.)
As usual, the information in this post is provided only with the guarantee that it worked for me. It may, and probably will, work for others. I still stand by the opener of my post on presentations:
Most presentations are terrible, and that's by choice of the presenter.
Saturday, March 21, 2009
Designers and decision-makers
I understand why Douglas Bowman is upset, but he's ultimately wrong: he makes a common error, that of using zero as an approximation for a very small number.
First, let me avoid misunderstandings: design is important, and trained graphic designers tend to do it better than other people; experiments don't solve all problems and sometimes mislead managers; judgment and data complement each other. On to why Mr. Bowman is wrong, using a hypothetical based on one of his examples.
We learn that Google tested 41 different shades of blue for some clickthrough application. From his writing, he appears to think the idea is ridiculous; I disagree. Suppose his choice of blue is off by a very small amount; to be precise, say that his favorite color leads to one in ten thousand fewer clicks than the color that does best in the experiment. (How finely tuned would his color sense have to be to predict a difference of 0.0001 in clickability? Without the experiment we'd never know.)
The problem is that a number which is small in day-to-day terms (one in ten thousand) is being applied at a scale that is anything but day-to-day (hundreds of millions of search queries per day). Googling the number of links served per day, I get about 200 million searches, each with a few sponsored links. Let's say 5 links per search, for a total of 1 billion links. Even if the average payment to Google for a clickthrough is only 5 cents, the difference between colors is worth $5,000 a day, or about $1.8 million a year. (These numbers are for illustration, but management at Google knows the real ones.)
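For those who like the arithmetic spelled out, here is a minimal sketch in Python; the inputs are the illustrative guesses from the paragraph above, not Google's real numbers:

```python
# Back-of-the-envelope opportunity cost of a 0.0001 clickability gap.
# All inputs are this post's illustrative guesses, not real Google figures.
searches_per_day = 200_000_000   # rough figure from a quick search
links_per_search = 5             # assumed average number of sponsored links
clickability_gap = 0.0001        # hypothetical clicks lost per link served
revenue_per_click = 0.05         # assumed 5 cents per clickthrough

links_per_day = searches_per_day * links_per_search      # 1 billion links
lost_clicks_per_day = links_per_day * clickability_gap   # 100,000 clicks
daily_cost = lost_clicks_per_day * revenue_per_click     # $5,000
annual_cost = daily_cost * 365                           # ~$1.8 million

print(f"Daily cost:  ${daily_cost:,.0f}")
print(f"Annual cost: ${annual_cost:,.0f}")
```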
This hypothetical loss of $1.8 million doesn't seem like much compared to Google's total revenue, but it is a pure opportunity cost of indulging the arrogance of credentialism (meaning: "as a trained designer I should overrule data"). I don't intend this as an attack on Mr. Bowman, because I don't think most designers perceive the problem this way. But this is the business way of looking at the decision.
Ok, but what if he is right about the color choice? That is, what if after running the experiment the color that performs best is the one he had chosen?
Then the experiment will waste some clicks on the other colors, and there's the added cost of running it and processing the data. Say it costs $100k to do this. Since a 5.56% chance of losing $1.8 million is an expected loss of $100k, it follows that if there is more than a 5.56% chance that Mr. Bowman is wrong by at least 0.0001 in clickability, the cost of the experiment pays for itself within one year.
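The break-even computation, again as a minimal sketch under the same illustrative numbers:

```python
# Break-even probability for running the experiment: it pays for itself
# within a year if P(designer wrong by >= 0.0001) exceeds cost / annual loss.
# Both figures are this post's illustrative numbers, not real ones.
experiment_cost = 100_000          # assumed cost of running and analyzing
annual_loss_if_wrong = 1_800_000   # rounded annual opportunity cost from above

break_even_p = experiment_cost / annual_loss_if_wrong
print(f"Break-even probability: {break_even_p:.2%}")   # 5.56%
```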
Using numbers lets management ask Mr. Bowman a more precise question: Can you be 95% sure that the maximum error in color choice translates into fewer than 1 in 10,000 clicks lost?
The main problem here is the same as with most experience-based judgments when they encounter lots of data: they are roughly right and precisely wrong. And while in each instance the error is too small to be noticed, multiplied across many instances it becomes a measurable opportunity cost.
Labels: analytics, design, experiments, google, management