This post has an unusual format: it is based on the text of an interview that Nassim Taleb, author of the best-selling book Fooled by Randomness, gave to www.edge.org. The interview is captivating and a good introduction to Taleb's book. Here it is (the full text is available from www.edge.org):
"One can study randomness at three levels: mathematical, empirical, and behavioral. The first is the narrowly defined mathematics of randomness, which is no longer the interesting problem because we've pretty much reached small returns in what we can develop in that branch. The second one is the dynamics of the real world, the dynamics of history, what we can and cannot model, how we can get into the guts of the mechanics of historical events, whether quantitative models can help us and how they can hurt us. And the third is our human ability to understand uncertainty. We are endowed with a native scorn of the abstract; we ignore what we do not see, even if our logic recommends otherwise. We tend to overestimate causal relationships. When we meet someone who by playing Russian roulette became extremely influential, wealthy, and powerful, we still act toward that person as if he gained that status just by skills, even when we know there's been a lot of luck. Why? Because our behavior toward that person is going to be entirely determined by shallow heuristics and very superficial matters related to his appearance.
There are two technical problems in randomness — what I call the soft problem and the hard problem. The soft problem in randomness is what practitioners hate me for, but academics have a no-brainer solution for it — it's just hard to implement. It's what we call in some circles the observation bias, or the related data mining problem. When you look at anything — say the stock market — you see the survivors, the winners; you don't see the losers because you don't observe the cemetery, and you are likely to misattribute the causes that led to the winning.
There is a silly book called The Millionaire Next Door, and one of the authors wrote an even sillier book called The Millionaire Mind. They interviewed a bunch of millionaires to figure out how these people got rich. Visibly they came up with a bunch of traits: you need a little bit of intelligence, a lot of hard work, and a lot of risk-taking. And they derived that, hey, taking risk is good for you if you want to become a millionaire. What these people forgot to do is to go take a look at the less visible cemetery — in other words, bankrupt people, failures, people who went out of business — and look at their traits. They would have discovered that some of the same traits, like hard work and risk-taking, are shared by these people. This tells me that the unique trait that the millionaires had in common was mostly luck.
This bias makes us miscompute the odds and wrongly ascribe skills. If you funded 1,000,000 unemployed people endowed with no more than the ability to say "buy" or "sell", odds are that you will break even in the aggregate, minus transaction costs, but a few will hit the jackpot, simply because the base cohort is very large. It will be almost impossible not to have small Warren Buffetts by luck alone. After the fact they will be very visible and will derive precise and well-sounding explanations about why they made it. It is difficult to argue with them; "nothing succeeds like success". All these retrospective explanations are pervasive, but there are scientific methods to correct for the bias. This has not filtered through to the business world or the news media; researchers have evidence that professional fund managers are just no better than random and cost money to society (the total revenues from these transaction costs are in the hundreds of billions of dollars), but the public will remain convinced that "some" of these investors are skilled.
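(An aside from me, the blogger: Taleb's thought experiment is easy to run yourself. The minimal sketch below, with hypothetical names of my own choosing, models each trader's yearly market call as a fair coin flip and counts how many beat the market ten years in a row by luck alone. With fair coins the expected count is 1,000,000 / 2^10, roughly a thousand lucky "Warren Buffetts".)

```python
import random

def lucky_survivors(n_traders=1_000_000, n_years=10, seed=42):
    """Count traders whose yearly market 'calls', modeled here as
    fair coin flips, happen to win every single year.

    The expected count is n_traders / 2**n_years (about 977 here),
    purely a consequence of the size of the starting cohort.
    """
    random.seed(seed)
    survivors = 0
    for _ in range(n_traders):
        # all() short-circuits at the first losing year
        if all(random.random() < 0.5 for _ in range(n_years)):
            survivors += 1
    return survivors

print(lucky_survivors())  # roughly a thousand perfect track records
```

Looking only at these survivors, and never at the cemetery of the other 999,000-odd traders, would make their perfect track records look like evidence of skill.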
The hard problem of randomness may be insoluble. It's what some academics hate me for.
It is an epistemological problem: we do not observe probabilities in a direct way. We have to find them somewhere, and they can be prone to a few types of misspecification. Some probabilities are incomputable — the good news is that we know which ones. Most of the mathematical models we have for capturing uncertainty work in a casino and gambling environment, but they are not applicable to the complicated social world in which we live, a fact that is trivial to show empirically.
Consider two types of randomness. The first type is physical randomness — in other words, the probability of running into a giant taller than seven, eight, or nine feet, which in the physical world is very low. The probability of running into someone 200 miles tall is definitely zero; because you have to have a mother of some size, there are physical limitations. The probability that a heat particle will go from here to China, or from here to the moon, is extremely small since it needs energy for that. These distributions tend to be "bell-shaped", Gaussian, with tractable properties.
But in the random variables we observe today, like prices — what I call Type-2 randomness, anything that's informational — the sky is the limit. It's "wild" uncertainty. As the Germans saw during the hyperinflation episode, a currency can go from one to a billion, instantly. You can name a number; nothing physical can stop it from getting there. What is worrisome is that nothing in the past statistics could have helped you guess the possibility of such a hyperinflation. People can become very powerful overnight on a very small idea.
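(Another aside from me: the gap between the two types of randomness shows up in the tails of their distributions. A minimal sketch, in which the Pareto tail index alpha = 1.5 is an illustrative assumption rather than a calibrated value, compares how quickly extreme events become "impossible" under a bell curve versus a power law.)

```python
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal (physical, Type-1 randomness),
    computed via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=1.5):
    """P(X > k) for a Pareto variable with x_min = 1 ('wild',
    Type-2 randomness): a power law with tail index alpha."""
    return k ** -alpha

# Ten 'standard units' out, the bell curve calls the event
# essentially impossible; the power law calls it roughly 3 in 100.
print(gaussian_tail(10))  # smaller than 1e-20
print(pareto_tail(10))    # about 0.03
```

Under the bell curve a ten-unit deviation "cannot happen"; under the power law it is a routine business risk, which is why models borrowed from the casino fail in the socio-informational world.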
Take the Google phenomenon or the Microsoft effect — "all-or-nothing" dynamics. The equivalent of Google, where someone just takes over everything, would have been impossible to witness in the Pleistocene. Such dynamics are more and more prevalent in a world where the bulk of the random variables are socio-informational, with low physical limitations. That type of randomness is close to impossible to model, since a single observation of large impact — what I call a Black Swan — can destroy the entire inference.
This is not just a logical statement: it happens routinely. In spite of all the mathematical sophistication, we're not getting anything tangible except the knowledge that we do not know much about these "tail" properties. And since our world is more and more dominated by these large deviations, which are not possible to model properly, we understand less and less of what's going on."
Now, if you enjoyed Nassim Taleb's unusual way of looking at life, swans, and Lady Fortuna, I urge you to read his op-ed in the New York Times entitled "Learning to Expect the Unexpected" (April 8, 2004). In this thought-provoking op-ed, he explains why he thinks the 9/11 US commission is predicated on at least three major flaws; in other words, it is plagued by Black Swans!
Let me know what you think. But I "bet" you won't go to your next MBA class or your next board meeting with the same mindset; at the very least, you'll have some new questions burning on your lips!