Navigating Human Biases in Predictions

Kasparov and Deep Blue
In The Signal and the Noise, Nate Silver offers interesting examples of why some predictions fail and others don't. Drawing on examples from games like chess and poker, the investment world, baseball, weather forecasting, earthquake prediction, economics, and polling, Silver argues that predictions go bad because of biases, vested interests, and overconfidence.

The chess tale about humans and computers trying to outguess each other (how Kasparov thought he could beat Deep Blue and didn't) includes an interesting comparison between computers and people:

The father of the modern chess computer was MIT’s Claude Shannon, a mathematician regarded as the founder of information theory, who in 1950 published a paper called “Programming a Computer for Playing Chess.” Shannon identified some of the algorithms and techniques that form the backbone of chess programs today. He also recognized why chess is such an interesting problem for testing the powers of information-processing machines.

Chess, Shannon realized, has an exceptionally clear and distinct goal—achieving checkmate. Moreover, it follows a relatively simple set of rules and has no element of chance or randomness. And yet, as anybody who has played chess has realized (I am not such a good player myself), using those simple rules to achieve that simple goal is not at all easy. It requires deep concentration to survive more than a couple of dozen moves into a chess game, let alone to actually win one. Shannon saw chess as a litmus test for the power of computers and the sort of abilities they might someday possess.

But Shannon, in contrast to some who came after him, did not hold the romanticized notion that computers might play chess in the same way that humans do. Nor did he see their victory over humans at chess as being inevitable. Instead, he saw four potential advantages for computers:

  1. They are very fast at making calculations.
  2. They won’t make errors, unless the errors are encoded in the program.
  3. They won’t get lazy and fail to fully analyze a position or all the possible moves.
  4. They won’t play emotionally and become overconfident in an apparent winning position that might be squandered or grow despondent in a difficult one that might be salvaged.

These were to be weighed, Shannon thought, against four distinctly human advantages:

  1. Our minds are flexible, able to shift gears to solve a problem rather than follow a set of code.

  2. We have the capacity for imagination.

  3. We have the ability to reason.

  4. We have the ability to learn.

Shannon was painting a particular vision of the future, one that did not account for our blind spots: how, as we grow more experienced, we come to rely less on imagination.

Where computers have the capacity to consider all possible moves quickly, we require creativity and confidence to buck conventional thinking. What eventually won the day for Deep Blue was doubt and misinterpretation. Silver says:

there were some bugs in Deep Blue’s inventory: not many, but a few. Toward the end of my interview with him, Campbell somewhat mischievously referred to an incident that had occurred toward the end of the first game in their 1997 match with Kasparov.

“A bug occurred in the game and it may have made Kasparov misunderstand the capabilities of Deep Blue,” Campbell told me. “He didn’t come up with the theory that the move that it played was a bug.”

The bug had arisen on the forty-fourth move of their first game against Kasparov; unable to select a move, the program had defaulted to a last-resort fail-safe in which it picked a play completely at random. The bug had been inconsequential, coming late in the game in a position that had already been lost; Campbell and team repaired it the next day. “We had seen it once before, in a test game played earlier in 1997, and thought that it was fixed,” he told me. “Unfortunately there was one case that we had missed.”

In fact, the bug was anything but unfortunate for Deep Blue: it was likely what allowed the computer to beat Kasparov. In the popular recounting of Kasparov’s match against Deep Blue, it was the second game in which his problems originated—when he had made the almost unprecedented error of forfeiting a position that he could probably have drawn. But what had inspired Kasparov to commit this mistake? His anxiety over Deep Blue’s forty-fourth move in the first game—the move in which the computer had moved its rook for no apparent purpose. Kasparov had concluded that the counterintuitive play must be a sign of superior intelligence. He had never considered that it was simply a bug.

For as much as we rely on twenty-first-century technology, we still have Edgar Allan Poe’s blind spots about the role that these machines play in our lives. The computer had made Kasparov blink, but only because of a design flaw.
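The last-resort fail-safe Campbell describes, defaulting to a random legal move when the search cannot produce one, is a familiar engineering pattern. A minimal sketch of the idea in Python (the function names and structure here are my own illustration, not Deep Blue's actual code):

```python
import random

def choose_move(position, search_fn, legal_moves_fn):
    """Return the engine's preferred move for `position`.

    If the search fails to select a move (returns None or raises),
    fall back to a random legal move rather than forfeit on time.
    This mirrors the kind of last-resort fail-safe Campbell describes.
    """
    try:
        move = search_fn(position)
    except Exception:
        move = None
    if move is None:
        # Last resort: play any legal move, chosen at random.
        move = random.choice(legal_moves_fn(position))
    return move
```

The irony Silver highlights is that a fallback meant only to keep the program from stalling produced a move so inexplicable that Kasparov read superior intelligence into it.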

For computing to be accurate, good data and good programming are necessary ingredients. Both are the product of human intervention. This means that in many fields, as happened with Deep Blue, the process of improving predictions should be based on trial and error. In these cases, computer processing and human ingenuity play complementary roles.
