Daniel Kahneman: How cognitive illusions blind us to reason

(guardian.co.uk) Why do Wall Street traders have such faith in their powers of prediction, when their success is largely down to chance? Daniel Kahneman explains how cognitive illusions skew our thinking.

Source: http://www.guardian.co.uk/science/2011/oct/30/daniel-kahneman-cognitive-illusion-extract?newsfeed=true

    Many decades ago, I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli army at the time. I had completed an undergraduate degree in psychology, and was assigned to the army’s psychology branch, where one of my duties was to help evaluate candidates for officer training. We used methods that had been developed by the British army in the second world war.

    One test, called the “leaderless group challenge”, was conducted on an obstacle field. Eight candidates, strangers to one another, were instructed to lift a long log from the ground and carry it to a wall about six feet high. The entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they had to declare it and start again.

    There was more than one way to solve the problem. A common solution was for the team to send several men to the other side by crawling over the log as it was held at an angle, like a giant fishing rod, by other members of the group. Or else some soldiers would climb on to someone’s shoulders and jump across. The last man would then have to jump up at the pole, held up at an angle by the rest of the group, shin his way along its length as the others kept him and the pole suspended in the air, and leap safely to the other side. Failure was common at this point, which required them to start all over again.

    As a colleague and I monitored the exercise, we made a note of who took charge, who tried to lead but was rebuffed, and how cooperative each soldier was in contributing to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent, or a quitter.

    We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake had caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man’s true nature revealed itself.

    After watching the candidates make several attempts, we had to summarise our impressions of soldiers’ leadership abilities and determine, with a score, who should be eligible for officer training. The task was not difficult, because we felt we had already seen each soldier’s leadership skills.

    Some of the men had looked like strong leaders, others had seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few looked so weak that we ruled them out as candidates for officer rank. When our multiple observations of each candidate converged on a coherent story, we were confident in our evaluations and felt that what we had seen pointed to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective then as he had been at the wall. Any other prediction seemed inconsistent with the evidence before our eyes.

    Because our impressions of how well each soldier had performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubts or formed conflicting impressions. We were quite willing to declare: “This one will never make it,” “That fellow is mediocre, but he should do OK,” or “He will be a star.” We felt no need to question our forecasts, moderate them, or equivocate. If challenged, however, we were prepared to admit: “But of course anything could happen.”

    We were willing to make that admission because, despite our definite impressions about individual candidates, we knew with certainty that our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we learned how the cadets were doing at the officer training school and could compare our assessments against the opinions of commanders who had been monitoring them for some time. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were not much better than blind guesses.

    We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed and orders to be obeyed. Another batch of candidates arrived the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log, and within a few minutes we saw their true natures revealed, as clearly as before. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated candidates and very little effect on the confidence we felt in our judgments and predictions.

    What happened was remarkable. The evidence of our previous failure should have shaken our confidence in our judgments of the candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each of our specific predictions was valid. I had discovered my first cognitive illusion.

    Looking back, the most striking part of the story is that our knowledge of the general rule that we could not predict had no effect on our confidence in individual cases. We were reluctant to infer the particular from the general. Subjective confidence in a judgment is not a reasoned evaluation of the probability that this judgment is correct. Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.

    In 1984, my collaborator Amos Tversky and I, and our friend Richard Thaler, now a guru of behavioural economics and co-originator of “nudge” theory, visited a Wall Street firm. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance that I did not even know what to ask him, but I remember one exchange. “When you sell a stock,” I asked, “who buys it?” He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: what made one person buy and the other sell? Most of the buyers and sellers know that they have the same information; they exchange the stocks primarily because they have different opinions. The buyers think the price is too low and likely to rise, while the sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think the current price is wrong. What makes them believe they know more about what the price should be than the market does? For most of them, that belief is an illusion.

    Most people in the investment business have read Burton Malkiel’s wonderful book A Random Walk Down Wall Street. Malkiel’s central idea is that a stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today.

    This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading. Perfect prices leave no scope for cleverness, but they also protect fools from their own folly. We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading. The first demonstration of this startling conclusion came from Terry Odean, a finance professor at the University of California, Berkeley, who was once my student.

    Odean began by studying the trading records of 10,000 brokerage accounts of individual investors spanning a seven-year period. He was able to analyse every transaction the investors executed through that firm, nearly 163,000 trades. This rich set of data allowed Odean to identify all instances in which an investor sold some of his holdings in one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of the two stocks: he expected the stock that he chose to buy to do better than the stock he chose to sell.

    To determine whether those ideas were well founded, Odean compared the returns of the two stocks over the course of one year after the transaction. The results were unequivocally bad. On average, the shares that individual traders sold did better than those they bought, by a very substantial margin: 3.2 percentage points per year, above and beyond the significant costs of executing the two trades.
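
    The arithmetic behind that comparison is simple. A minimal sketch in Python, using a handful of made-up switch records rather than Odean’s actual data, looks like this:

        # Sketch of the sell-then-buy comparison described above.
        # Each record holds the one-year return of the stock the investor
        # bought and of the stock he sold, as decimal fractions.
        # These numbers are invented for illustration only.
        from statistics import mean

        switches = [
            (0.04, 0.09),    # bought stock returned 4%, sold stock returned 9%
            (-0.02, 0.03),
            (0.11, 0.12),
            (0.01, 0.06),
        ]

        gaps = [bought - sold for bought, sold in switches]
        print(f"average (bought minus sold) return: {mean(gaps):+.1%}")
        # Odean found the sold shares beating the bought shares by about
        # 3.2 percentage points per year, before counting trading costs.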

    It is important to remember that this is a statement about averages: some individuals did much better, others did much worse. However, it is clear that for the large majority of individual investors, taking a shower and doing nothing would have been a better policy than implementing the ideas that came to their minds. Later research by Odean and his colleague Brad Barber supported this conclusion. In a paper titled “Trading Is Hazardous to Your Wealth” they showed that, on average, the most active traders had the poorest results, while the investors who traded the least earned the highest returns. In another paper, titled “Boys Will Be Boys”, they showed that men acted on their useless ideas significantly more often than women, and that as a result women achieved better investment results than men.

    Of course, there is always someone on the other side of each transaction; in general, it’s financial institutions and professional investors, ready to take advantage of the mistakes that individual traders make in choosing a stock to sell and another stock to buy. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains by selling “winners”, stocks that have appreciated since they were purchased, and they hang on to their losers. Unfortunately for them, recent winners tend to do better than recent losers in the short run, so individuals sell the wrong stocks. They also buy the wrong stocks.

    Few stock pickers, if any, have the skill needed to beat the market consistently, year after year. Professional investors, including fund managers, fail a basic test of skill: persistent achievement. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among car salespeople, orthodontists or golfers.
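
    That logic is easy to check with a toy simulation. The sketch below, using entirely invented numbers, compares a world in which yearly results are pure luck with one in which a stable skill component is added; only in the second does the year-to-year correlation climb clearly above zero:

        # Toy persistence test: compare pure luck with skill plus luck.
        # All numbers are simulated for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        n_people, n_years = 25, 8

        luck_only = rng.normal(size=(n_people, n_years))
        skill = rng.normal(size=(n_people, 1))                  # a stable trait per person
        skill_plus_luck = skill + rng.normal(size=(n_people, n_years))

        def mean_year_to_year_corr(scores):
            """Average correlation between each year's results and the next year's."""
            corrs = [np.corrcoef(scores[:, y], scores[:, y + 1])[0, 1]
                     for y in range(scores.shape[1] - 1)]
            return float(np.mean(corrs))

        print("luck only:       %.2f" % mean_year_to_year_corr(luck_only))        # hovers near zero
        print("skill plus luck: %.2f" % mean_year_to_year_corr(skill_plus_luck))  # clearly positive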

    Mutual funds are run by highly experienced and hardworking professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. Typically at least two out of every three mutual funds underperform the overall market in any given year.

    More important, the year-to-year correlation between the outcomes of mutual funds is very small, barely higher than zero. The successful funds in any given year are mostly lucky; they have a good roll of the dice. There is general agreement among researchers that nearly all stock pickers, whether they know it or not – and few of them do – are playing a game of chance.

    The subjective experience of traders is that they are making sensible educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are no more accurate than blind guesses.

    Some years ago I had an unusual opportunity to examine the illusion of financial skill up close. I had been invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarising the investment outcomes of some 25 anonymous wealth advisers, for each of eight consecutive years. Each adviser’s score for each year was his (most of them were men) main determinant of his year-end bonus. It was a simple matter to rank the advisers by their performance in each year and to determine whether there were persistent differences in skill among them and whether the same advisers consistently achieved better returns for their clients year after year.

    To answer the question, I computed correlation coefficients between the rankings in each pair of years: year 1 with year 2, year 1 with year 3, and so on up through year 7 with year 8. That yielded 28 correlation coefficients, one for each pair of years. I knew the theory and was prepared to find weak evidence of persistence of skill. Still, I was surprised to find that the average of the 28 correlations was 0.01. In other words, zero. The consistent correlations that would indicate differences in skill were not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
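
    For readers who want to see the shape of that calculation, here is a minimal sketch on stand-in data (random numbers in place of the advisers’ real outcomes). It ranks the 25 advisers within each of the eight years, correlates the rankings for each of the 28 pairs of years, and averages the results:

        # Rank correlations across all pairs of years, on stand-in data.
        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(1)
        outcomes = rng.normal(size=(25, 8))      # placeholder for the adviser spreadsheet

        # Rank advisers within each year (correlating ranks gives the Spearman coefficient).
        ranks = outcomes.argsort(axis=0).argsort(axis=0) + 1

        pair_corrs = [np.corrcoef(ranks[:, a], ranks[:, b])[0, 1]
                      for a, b in combinations(range(8), 2)]

        print(len(pair_corrs))                        # 28 pairs of years
        print(round(float(np.mean(pair_corrs)), 2))   # close to zero when only luck is at work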

    No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals doing a serious job, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses.

    We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said “not very high” or “performance certainly fluctuates”. It quickly became clear, however, that no one expected the average correlation to be zero.

    Our message to the executives was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign they disbelieved us. How could they? After all, we had analysed their own results, and they were sophisticated enough to see the implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I have no doubt that both our findings and their implications were quickly swept under the rug and that life in the firm went on as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions – and thereby threaten people’s livelihood and self-esteem – are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide base-rate information that people generally ignore when it clashes with their personal impressions from experience.

    The next morning, we reported the findings to the advisers, and their response was equally bland. Their own experience of exercising careful judgment on complex problems was far more compelling to them than an obscure statistical fact. When we were done, one of the executives with whom I had dined the previous evening drove me to the airport. He told me, with a trace of defensiveness: “I have done very well for the firm and no one can take that away from me.” I smiled and said nothing. But I thought: “Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?”

    Cognitive illusions can be more stubborn than visual illusions. When my colleagues and I in the army learned that our leadership assessment tests had low validity, we accepted that fact intellectually, but it had no impact on either our feelings or our subsequent actions. The response we encountered in the financial firm was even more extreme. I am convinced that the message that Thaler and I delivered to both the executives and the portfolio managers was instantly put away in a dark corner of memory where it would cause no damage.

    Why do investors, both amateur and professional, stubbornly believe that they can do better than the market, contrary to an economic theory that most of them accept, and contrary to what they could learn from a dispassionate evaluation of their personal experience? The most potent psychological cause of the illusion is certainly that the people who pick stocks are exercising high-level skills. They consult economic data and forecasts, they examine income statements and balance sheets, they evaluate the quality of top management, and they assess the competition. All this is serious work that requires extensive training. Unfortunately, skill in evaluating the business prospects of a firm is not sufficient for successful stock trading, where the key question is whether the information about the firm is already incorporated in the price of its stock. Traders apparently lack the skill to answer this crucial question, but they appear to be ignorant of their ignorance. As I had discovered from watching cadets on the obstacle field, subjective confidence of traders is a feeling, not a judgment.

    © Daniel Kahneman. This is an extract from Daniel Kahneman’s new book, Thinking, Fast and Slow (Allen Lane, £25)
