
Monday, May 16, 2011

Two quick thoughts about Microsoft's purchase of Skype

1. Valuation of a property like Skype is a lot more than just some multiple of earnings.

Quite a few bloggers, twitterers, and forum participants jumped on Facebook, Google, and Microsoft for their billion-dollar valuations of Skype. Usually the criticism was based on Skype's lackluster earnings. This is a massively myopic point of view.

One can acquire a company for many reasons beyond its current revenue stream: the company may own resources it is not adequately exploiting, such as technology or highly valuable personnel; it may have a valuable brand or a large user base (certainly true for Skype); it may have valuable information about its customers (again true for Skype, as the communication graph -- not just the link graph -- is valuable); and finally, it may have untapped revenue potential, just not under its current revenue model.

As a general rule, just because one cannot think of a way to monetize something, it doesn't mean that there is no way to monetize that thing.

Another possible reason to buy a company is corporate-level strategy: to stop it from developing into a competitor to some of the acquirer's products, to stop competitors from buying it (and thereby becoming stronger competitors), and to signal commitment to a specific market.


2. Perhaps there's a little Winner's Curse going on here, or perhaps not

When three companies (Google, Facebook, and Microsoft) compete for the same company, there's always the possibility of a little Winner's Curse effect:

Assume that the value of Skype to these companies includes a big fraction that is common, meaning that it will be realized regardless of who the owner is. Call that true common value $v$. To simplify, assume for now that there are no synergies or strategic advantages for any of the buying companies, so the whole value is $v$.

Using all the information available, Google, Facebook, and Microsoft estimate $v$, each coming up with a number: $\tilde v_G$, $\tilde v_F$, and $\tilde v_M$. Note that these are estimates of the same $v$, not representations of different actual values that Skype might have for the three companies. The estimates differ because each company uses different financial models and has access to different information, or weighs the same information differently.

In a competitive market the winner is the company with the highest estimate, so we can assume that $\tilde v_M > \tilde v_G$ and $\tilde v_M > \tilde v_F$. The question now becomes: is what Microsoft paid for Skype higher than the true $v$?

Probabilistically the winning $\tilde v$ is likely to be higher than $v$,* since it's the maximum of three unbiased estimates -- one hopes these three companies have good financial advisers -- of the true $v$. Microsoft knows this and may shade its offer down a little from $\tilde v_M$. But even so, there's a chance that it paid too much.
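To make the effect concrete, here's a quick Monte Carlo sketch. The dollar figures are purely illustrative assumptions (not the actual bids or Skype's actual value); the point is only that the maximum of several unbiased estimates tends to overshoot the true value:

```python
import random

random.seed(0)
TRUE_V = 8.5e9   # hypothetical true common value, in dollars (an assumption)
NOISE = 1.5e9    # hypothetical spread of each bidder's estimation error
TRIALS = 100_000

overpays = 0     # trials in which the winning estimate exceeds the true value
for _ in range(TRIALS):
    # three unbiased, symmetric estimates of the same true value
    estimates = [random.gauss(TRUE_V, NOISE) for _ in range(3)]
    if max(estimates) > TRUE_V:  # the highest estimate wins the bidding
        overpays += 1

print(overpays / TRIALS)  # ≈ 0.875, i.e. 7/8
```

The 7/8 doesn't depend on the dollar figures chosen: with errors symmetric around zero, each estimate exceeds $v$ with probability 1/2, so at least one of three does with probability $1 - (1/2)^3$.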

Except that we're ignoring all the non-common value: synergies, strategic fit with Microsoft's other properties, and signaling to the market that Microsoft isn't yet a zombie like IBM was in the '90s.

There's a lot going on between Skype and Microsoft that the online commentariat missed. Then again, that's the fun of reading it.

(Hey, I finally wrote a business post in this blog that I repositioned as a business blog over a month ago!)

-------------------

* If the distribution of the errors in the estimates of $v$ is symmetrical around zero (so the median of each $\tilde v$ is $v$), each estimate exceeds $v$ with probability $1/2$, and the probability that the maximum of three independent observations $\tilde v$ is higher than $v$ is $1 - (1/2)^3 = 7/8$.

Saturday, April 30, 2011

A situation in which I have to defend Gargle

I try not to judge, but ignorance and lax thinking of this magnitude is hard to ignore.

I'm far from being a Google fanboy and have in the past skewered a fanboy while reviewing his book; Google has plenty of people in public relations management, a lot of money to spend on it, and doesn't need my help; and every now and then I cringe when I hear people refer to Google's "don't be evil" slogan.

But this self-absorbed post makes me want to defend Google, for once. Here's the story as I see it, and as most people with even a passing interest in management and some minor real-world experience would probably see it:

A person was fired for indulging his personal politics at a contract site in a way that endangered the contract between his employer and the client (whose actions were legal and generous beyond the current norm).

I'll add that every company has a "class" system; the scare quotes are there because the original poster chose that word for emotional effect, given its association with reprehensible behavior (which doesn't apply here). The appropriate term is hierarchy.

Google apparently gives many fringe benefits to some contractors (the red-badge ones): free lunches, shuttles, access to internal talks; this is incredibly generous by common standards. But in the "everyone should have everything everybody else has" mindset of the original poster, the existence of different types of contractor (red vs. yellow badges) is indicative of something bad.

Gee, how lucky Google was that this genius didn't learn about the discrimination in the use of the corporate jets. Imagine what his post would be like if he had learned that the interns couldn't use the company's 767 to take their friends to Bermuda.

He mentioned he was going to grad school; he'll probably fit in perfectly.

Saturday, March 21, 2009

Designers and decision-makers

I understand why Douglas Bowman is upset, but he's ultimately wrong: he makes a common error, that of using zero as an approximation for a very small number.

First, let me avoid misunderstandings: design is important and trained graphic designers tend to do it better than other people; experiments don't solve all problems and sometimes mislead managers; judgment and data complement each other. On to why Mr. Bowman is wrong, using a hypothetical based on one of his examples.

We learn that Google tested 41 different shades of blue for some clickthrough application. Given his writing, he appears to think the idea is ridiculous; I disagree. Suppose his choice of blue is off by a very small amount; to be precise, say that his favorite color leads to one in ten thousand fewer clicks than the shade that does best in the experiment. (How finely tuned would his color sense have to be in order to predict a difference of 0.0001 clickability? Without the experiment we'd never know.)

The problem is that a number that is small in day-to-day terms (one in ten thousand) is not being applied at day-to-day scale (Google serves millions of search queries per day). Googling the number of links served per day, I get about 200 million searches, each with a few sponsored links. Let's say 5 links per search, for a total of 1 billion links. Even if the average payment to Google for a clickthrough is only 5 cents, the difference in colors is worth $\$5,000$ a day, or about $\$1.8$ million a year. (These numbers are for illustration, but management at Google knows the real ones.)
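The arithmetic can be laid out explicitly. All inputs are the post's illustrative guesses, not Google's real figures:

```python
searches_per_day = 200_000_000   # rough public estimate used in the post
links_per_search = 5             # assumed sponsored links per search
links_per_day = searches_per_day * links_per_search   # 1 billion links

click_gap = 0.0001               # 1-in-10,000 clickability difference
pay_per_click = 0.05             # assumed average revenue per clickthrough, in dollars

daily_cost = links_per_day * click_gap * pay_per_click
yearly_cost = daily_cost * 365

print(daily_cost)    # 5000.0  -> $5,000 a day
print(yearly_cost)   # 1825000.0 -> about $1.8 million a year
```

Every factor here is an order-of-magnitude guess, but the structure of the calculation is what matters: a tiny per-link difference times a huge number of links is real money.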

This hypothetical loss of $\$1.8$ million doesn't seem like much compared to Google's total revenue, but it is a pure opportunity cost of indulging the arrogance of credentialism (meaning: "as a trained designer, I should overrule data"). I don't intend this as an attack on Mr. Bowman, because I don't think most designers perceive the problem this way. But this is the business way of looking at the decision.

Ok, but what if he is right about the color choice? That is, what if after running the experiment the color that performs best is the one he had chosen?

Then the experiment will waste some clicks on the other colors, and there's the added cost of running it and processing the data. Say it costs $\$100$k to do all this. Then, if there is more than a 5.56% chance (that is, $\$100$k divided by $\$1.8$ million) that Mr. Bowman is wrong by at least 0.0001 clickability, the experiment pays for itself within a year.
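The break-even probability is just the experiment's assumed cost divided by the yearly opportunity cost of a wrong color (both figures are the post's illustrative numbers):

```python
experiment_cost = 100_000        # assumed all-in cost of running and analyzing the test
yearly_opportunity_cost = 1_800_000  # the post's rounded $1.8M/year figure

# If the designer is wrong with probability p, the expected yearly loss is
# p * yearly_opportunity_cost; the test pays for itself within a year once
# that expected loss exceeds the test's cost.
break_even_p = experiment_cost / yearly_opportunity_cost
print(round(break_even_p * 100, 2))  # 5.56 (percent)
```

Any probability of error above that threshold makes the experiment the cheaper option, which is why "just trust the expert" is rarely a free lunch at this scale.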

Using numbers lets management ask Mr. Bowman a more precise question: Can you be 95% sure that the maximum error in color choice translates into fewer than 1 in 10,000 clicks lost?

The main problem here is the same as with most experience-based judgments when they meet large amounts of data: they are roughly right and precisely wrong. And while in each instance the error is too small to be noticed, multiplied across many instances it becomes a measurable opportunity cost.