Clear thinking

Clear thinking - Persuasive communication

At the End of 2016, We Desperately Need to Remember These Two Ideas

Regardless of whether your candidate won or lost, the 2016 political season in the US has seriously undermined two key beliefs that I have long had about persuasive communications. The first is that the truth matters, and the second is that moderation of thought and expression is a virtue. Maybe I’ve taken them for granted because I’ve thought that both of these ideas are obvious, but it’s clear that someone needs to speak out in their defense.

Two of those defenders—whose opinions and approach I greatly respect—are Josh Bernoff, author of Writing Without Bullshit, and Peter Wehner, a columnist for the New York Times. Since they can say it much better than I can, I simply give you these two links and strongly recommend that you read them:

Bernoff: The Truth Foundation

Wehner: Moderate Is Not a Dirty Word (This is the title of the print edition of the opinion piece in the NYT; it differs from the online title for some reason.)

Clear thinking - Lean Communication

Bad Simplicity, Complexity, Good Simplicity

The best way to add value and reduce waste in communication is to provide your audience with “simplicity on the other side of complexity”.

I first came across this phrase in a fascinating blog post by John Kolko titled Simplicity on the Other Side of Complexity; he in turn took it from Justice Oliver Wendell Holmes, who said, “I would not give a fig for the simplicity on this side of complexity, but I would give my life for the simplicity on the other side of complexity.”

The phrase resonated with me because it explains so clearly an idea that I am constantly trying to impart to participants in my lean communication and presentations classes. One of the most common reasons that companies bring me in to work with their teams is that their executives complain that they have to sit through unnecessarily complex and excessively detailed presentations, where masses of data substitute for clear understanding. The common complaint is “I ask what time it is, and they tell me how to build a watch.”

When you have to make a presentation to busy high-level decision makers—whether internally or to a potential customer—it’s because they need information to grasp the key issues about a complex decision, so that they can take effective action to improve outcomes. The simplicity with which you express information clearly adds value to them, because it reduces uncertainty and saves time and effort. But only if it’s good simplicity, which resides on the other side of complexity.

The gist of Kolko’s article is that the learning curve for a complex issue looks like a bell curve (I’ve added the labels to his simple drawing, and the rest of this article adds my own ideas to his):

[Figure: the simplicity curve]

At the far left, when people first approach an issue, they know very little about it, but of course that doesn’t stop most of them from jumping to conclusions or forming opinions. People who are intellectually lazy or who use motivated reasoning don’t proceed past this stage and enjoy the confidence born of ignorance. They forget what H.L. Mencken said: “Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.” For examples of this type of presentation, just think back to the past two weeks of US Presidential conventions. This kind of simple is almost pure waste, because it’s either wrong, or it’s trivial.

But you’re a business professional, so you can’t afford to be intellectually lazy, because you know that if you present a simplistic and shallow case you will be eviscerated in the boardroom. So you take your time, gather evidence and analyze it, and see greater depth and nuance in the issues. If you stick with it, you eventually become an expert; you “get it”, so that’s the next natural stopping point. Complexity is the point at which the average expert feels ready to present their findings. The biggest mistake they make, however, is to include far more detail than the listener needs to use those findings, either out of defensiveness or an inability to connect to what the listener cares about. As one CEO told me, “I get a lot of detail but very little meaning.” They may have added value, but there is still a significant amount of waste, in the form of time, effort, and confusion.

[Figure: simplicity on the other side of complexity]

Outstanding expert presenters know that you never truly know how well you know something until you try to explain it to others, so they take the next logical step. They add value to their listeners by distilling all their hard-won complexity into just the meaning the listener needs for their purposes. They know exactly why the listener needs the information, and give them just what they need to know to make the best possible decision, so that there is zero waste. Most of the time, they go beyond merely providing information and advocate for a specific decision (which is a given if it’s a sales presentation)—but it’s based on highly informed judgment.

The tools for achieving this type of good simplicity are the tools of lean communication: Outside-in thinking to know what the listener needs, Bottom Line Up Front to provide meaning right away, SO WHAT filter to root out waste, and pull to make adjustments as necessary.

Before you decide to strive for good simplicity, I would be remiss in not pointing out one personal risk you might run: if your goal is to call attention to how smart you are, it may not be the best way. As Kolko says, “The place you land on the right—the simplicity on the other side of complexity—is often super obvious in retrospect. That’s sort of the point: it’s made obvious to others because you did the heavy lifting of getting through the mess.”

But if your goal is to get the right things done, simplicity on the other side of complexity is the only way.

Book reviews - Clear thinking - Thinking Books

Book Recommendation: Superforecasting

Superforecasting: The Art and Science of Prediction, by Philip Tetlock and Dan Gardner, is a fascinating book, but I’m not sure if you should read it, for reasons that I explain at the end of this post.

There is a huge market for forecasting in our country, from political talking heads on TV, to investment advisors, to the approximately 20,000 intelligence analysts working in our intelligence establishment. But while the stakes for getting it wrong can be huge (see Iraq’s WMD), there is no formal, reliable way of measuring or analyzing the track records of those doing the predicting. Pick your favorite commentator or columnist: what’s their hit rate on predictions? That’s impossible to answer, first because no one has compared what they said would happen to what did happen, and second because so many of their predictions are so vaguely worded that they can easily claim they meant something else and wiggle off the hook.

Philip Tetlock is trying to change that. Beginning in the 1980s, he has been studying how good experts are at prediction (the answer: just slightly better than a drunk monkey throwing darts). One of his findings was that the pundits who were the most confident tended to be wrong more often, but they also got on TV more often. They are hired more for their ability to tell a compelling story with confidence than for their track record in forecasting.

This latest book details his findings from a four-year project funded by IARPA, the Intelligence Advanced Research Projects Activity, to test the forecasting performance of several different teams of experts. It was a large test, posing over 500 questions to more than 20,000 participants between 2011 and 2015. It was also rigorous, with questions designed to eliminate the wiggle-room problem. For example: “Will any country withdraw from the Eurozone in the next three months?” and “How many additional countries will report cases of the Ebola virus in the next eight months?”

The study found that about 2% of participants, whom he calls superforecasters, were consistently more accurate in their predictions. By identifying the superforecasters, and then testing different combinations and variables, he was able to tease out what makes them successful, and the bulk of the book explains the traits, techniques, and habits of thought that make for superior judgment.[1]

The basic theme is that it’s not superior intellect that distinguishes the superforecasters, but how they think. Here are just a few of his recommendations:

  • Break down tough problems into their components, and make estimates or judgments about those.
  • Pay attention to base rates first, and then adjust. For example, I may think that my friend is very likely to strike it rich in a very difficult venture, because I start with knowing how smart he is. But if I begin by considering that the odds are 50 to 1 against success (roughly a 2% chance), I could double his chances and still think it’s very unlikely; the short sketch after this list makes the arithmetic concrete.
  • Be actively open-minded, not only being open to new information but looking for it. Once you have formed a judgment, pay attention to new information, especially anything that would call your initial judgment into question.
  • Write down your forecasts and your reasoning, because the mere fact of writing it will help distance you emotionally from your first prediction. If it’s important that you get it right, take the further step of writing down all the reasons you might be wrong, and then synthesize the two.
  • Expand your range of alternatives. Most people have a three-position dial for predictions: yes, no, and even odds. You can force yourself to become more systematic about your own thinking by adopting the 7-point scale recommended by the National Intelligence Council, shown here:

Remote | Very unlikely | Unlikely | Even | Probably, likely | Very likely | Almost certain

Even better, use percentages. It won’t guarantee[2] you’re right, but it will force you to examine your own thinking and help you adopt a more nuanced viewpoint.
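To make the base-rate bullet and the percentage advice concrete, here is a minimal Python sketch. The probability attached to each of the seven labels is my own illustrative assumption, not a figure from the book or from the National Intelligence Council; the point is simply that putting numbers on words forces you to examine your own thinking.

```python
# Illustrative sketch only: the numbers attached to the labels are assumed
# for this example, not an official scale.
scale = {
    "remote": 0.05,
    "very unlikely": 0.15,
    "unlikely": 0.30,
    "even": 0.50,
    "probably, likely": 0.70,
    "very likely": 0.85,
    "almost certain": 0.95,
}

# Base-rate example from the bullet above: 50-to-1 odds against success is
# roughly a 2% chance. Doubling it to account for my friend's talent still
# leaves the venture very unlikely.
base_rate = 1 / 51          # 50:1 against is about 2%
adjusted = min(1.0, base_rate * 2)

print(f"base rate {base_rate:.1%}, adjusted {adjusted:.1%}")
# Both numbers fall far below even the lowest band above, a nuance that a
# three-position dial (yes, no, even odds) would have hidden entirely.
```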

There’s far more good advice than I can summarize, but frankly I’m struggling a little in deciding whether to recommend that you read Superforecasting. On the plus side, I predict that it is very likely that if you read and apply its lessons, you will become a better thinker. On the other hand, it’s an even chance that you will become a worse persuasive communicator. That’s because an effective salesperson radiates confidence about the future they recommend, while effective forecasters are far more cautious and humble about their predictions.

My personal choice would be to begin with better thinking. First, for the obvious reason that you owe it to yourself and to others to give them your best thinking. Second, sustainable influence depends on credibility, which in the long run will correlate strongly with the accuracy of your predictions. It’s true that the TV pundits who are most confident in their predictions tend to be wrong most often, and they don’t suffer for it. But when people are putting their own reputations or money at stake based on what you predict, they tend to have longer memories.

[1] I use judgment in the sense that they are better predictors, not that they necessarily make better decisions.

[2] In fact, you may have noticed that the seven-point scale does not include certainty on either side.

Clear thinking

Calibration: How Well Do You Know What You Know?


To know that we know what we know and that we do not know what we do not know, that is true knowledge.[1]

                Confucius

If you could know – and prove – beyond a reasonable doubt that everything you say or write is true, you would quickly become immensely credible. You would also probably live on another planet.

Credibility is nothing but the probability estimate that others form when deciding whether to rely on what you tell them. You’re credible when they assume a reasonably high probability that what you say is correct.

But even though credibility is something that others assign to you, it has to begin with your own probability estimate. Any time you utter something controversial, you put your personal credibility at risk. It may be a slight risk, as when you tell someone they would probably like that new restaurant, or a huge risk, as when you passionately advocate a major investment for your company. So, you weigh the evidence in your mind, maybe carefully and analytically, or maybe intuitively, to figure out how certain you are before you decide whether to take the risk.

Since you can’t be sure of everything, the next best thing is to be able to accurately measure how sure you should be. For example, you may be 100% sure that the sun will rise tomorrow, but how sure are you that it will rain tomorrow, or that the project you’re proposing will cut costs in half? If you think it’s highly probable, you might estimate the chances at 80%. If you have no clue, your estimate would be 50%; if you think it’s possible but not probable, it might be 20%.

But here’s the rub: how accurate is your estimate of certainty? Calibration is a measure of the accuracy of your own probability estimates about what you believe to be true, of how closely your level of certainty accords with the facts. If you are generally accurate, you’re said to be well-calibrated. If you’re over- or underconfident in your certainty, you are poorly calibrated.

Just as some people know more than others, some people are better calibrated than others. In one of the simplest tests, for example, you answer ten questions; if you’re 70% certain about each of your answers, you should get about seven right if you’re well-calibrated, fewer than seven if you’re overconfident, and more than seven if you’re underconfident. Most people are overconfident; one study that gave a quiz to over 2,000 people found that fewer than 1% were not overconfident.[2]
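To make that ten-question example concrete, here is a minimal Python sketch of the same arithmetic; it is my own illustration, not something taken from the study cited above.

```python
def calibration_check(confidence: float, answers: list) -> str:
    """Compare stated confidence (e.g. 0.7 for 70%) with the actual hit rate
    over a list of True/False results, one per question answered."""
    hit_rate = sum(answers) / len(answers)
    if abs(hit_rate - confidence) <= 0.05:
        verdict = "well-calibrated"
    elif hit_rate < confidence:
        verdict = "overconfident"
    else:
        verdict = "underconfident"
    return f"stated {confidence:.0%}, scored {hit_rate:.0%}: {verdict}"

# Ten questions answered at 70% confidence, but only five answered correctly:
print(calibration_check(0.70, [True] * 5 + [False] * 5))
# -> stated 70%, scored 50%: overconfident
```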

Overconfidence is not all bad – it encourages difficult efforts and can help you sell your ideas. It will tend to increase your credibility in a single situation, because listeners will take cues from your perceived confidence. Your level of certainty about what you’re saying will affect the confidence with which you express it, which will in turn affect how much listeners believe you.

But excessive overconfidence can definitely hurt your credibility by increasing the odds that you will be shown to be wrong. We all know people who are often wrong, but never in doubt – just watch any of the early stages of American Idol to see this overestimation displayed to a painful degree. In fact, studies have shown that the people with the least competence are the most likely to overestimate their actual knowledge. It’s called the Dunning-Kruger Effect.[3] Justin Kruger and David Dunning of Cornell University ran a study that measured subjects’ objective performance on tests of humor, grammar, and logic, and found that those scoring in the bottom quartile were the most overconfident about their abilities: scoring on average in the 12th percentile, they rated themselves on average in the 62nd.

But there’s also an external aspect, which is others’ perceptions of how well-calibrated you are. If you’re well-calibrated, you are less likely to run ahead of your facts and get yourself into trouble, which is a good thing for long-term credibility.

Some very few people are underconfident in their estimate of certainty.[4] They are less sure of their knowledge, which certainly lowers the risk of being proven wrong, but also limits their influence. Their uncertainty may show through in their expression, or they may be less apt to speak up on behalf of their position or interests.

So, being well-calibrated will improve your credibility in two ways. First, it will help you avoid the extremes of over- and underconfidence. Second, by being perceived to be well-calibrated, or self-aware, you can be more credible to your listeners.

Because most people are overconfident, improved calibration will most likely cause you to dial back your confidence a little when you speak. Although it would seem that being tentative would lower your credibility, it depends on the situation. One area where perceptions of credibility have immediate and important consequences is in criminal trials, and researchers have found – in mock trials – that jury members are affected by how well-calibrated they perceive witnesses to be. Jury members were initially more likely to believe witnesses who expressed certainty about what they had seen than those who were less sure. But when their testimony was later shown to be wrong in a minor detail, the effects were reversed. The confident ones were seen as less credible, while the unsure ones were seen as more credible.

If you’re already seen as an expert, being a little less sure may help. In one study, experts who expressed some uncertainty were seen as more credible than those who expressed certainty.[5] The author of the study ascribes this to a surprise factor that makes people pay closer attention to the message and hence be more influenced. But I think there may be a different explanation: showing that you know you could be wrong makes you seem more self-aware (better calibrated) and open-minded, which plays better with educated audiences.

What’s the lesson we can draw from this? Don’t get ahead of your facts. Be transparent about your levels of confidence. When you’re unsure of something, say so. It will make you more credible when you say you’re sure.

How to improve your calibration

Calibration can be improved through training and experience. It begins with awareness of the problem and acceptance of the fact that you are probably susceptible to it. Here are a half-dozen ways to get better.

Test your calibration. ProjectionPoint has a test on its website that lets you measure your calibration. Simply seeing the results, if they are poor, will make you aware of the need to improve.

Separate fact from opinion. As Richard Feynman said, “The most important thing is not to fool yourself. And you’re the easiest person to fool.”

Keep track. Experience tends to reduce overconfidence and improve calibration, as long as you learn from that experience. It’s no accident that two of the best-calibrated professions are bookies and meteorologists: they get rapid feedback on their predictions and are held accountable for being wrong.

Be more foxy. As we saw previously, hedgehogs, who know one thing very well, tend to be less well-calibrated than foxes, who have more breadth of knowledge. Tetlock found that hedgehogs were not only wrong more often than foxes, but also less likely to recognize or admit that they were wrong when events did not match their predictions.

Try not to make up your mind too quickly. Early judgments can serve as anchors, so that if you later adjust your position in light of new information, you will probably not adjust as far as you should. If you do form an early opinion, be on the lookout for confirmation bias, the general tendency to notice evidence that supports your view and to be less apt to seek out or even notice contrary evidence. Follow Charles Darwin’s example:

“I had also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones.  Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.”

Practice productive paranoia. When you’re very confident and it’s important, try extra hard to find holes in your idea. Individually, you can take the time to list reasons why you might be wrong. With colleagues, you can conduct a PreMortem: imagine that it is some future time and your idea has failed, and try to figure out all the ways it could have happened.[6]

If you follow these six practices, I’m 90% confident that your calibration will improve, and 75% confident that your personal credibility will also.

 

[1] Quoted in “Managing Overconfidence,” by J. Edward Russo and Paul J.H. Schoemaker, Sloan Management Review, Winter 1992.

[2] Russo and Schoemaker 1992.

[3] Kruger, Justin; David Dunning (1999). “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”.

[4] Russo and Schoemaker say that public accountants are slightly underconfident.

[5] Experts Are More Persuasive When They’re Less Certain, Zakary Tormala, Harvard Business Review, March 2011.

[6] The term was coined (I believe) by Gary Klein, in his book, The Power of Intuition.
