Which is more dangerous, a gun or a swimming pool? Why should suicide bombers buy life insurance? Steven D. Levitt and Stephen J. Dubner answered such questions in their controversial books “Freakonomics” and “SuperFreakonomics”. They have a unique way of looking at problems and often uncover surprising insights and solutions.
In “Think Like a Freak”, they write about the principles and concepts behind their thinking. Thinking like a freak means…
Letting go of the conventional wisdoms that torment us. Letting go of the artificial limits that hold us back–and of the fear of admitting what we don’t know. Letting go of the habits of mind that tell us to kick into the corner of the goal even though we stand a better chance by going up the middle.
The book’s stories about how to win a hot-dog-eating contest or why the famous Nigerian e-mail scam is still around are already good reasons to read it. Beyond that, the book offers some solid concepts and mental models that will improve your thinking.
Know what you’re talking about
“I don’t know” is something we rarely say or hear. That’s interesting, because in most cases we genuinely don’t know, yet we love to have opinions about everything: Yes of course they’ll win the election… Everyone will be vegan in five years… The war won’t last a month… Even experts are bad at understanding the world: Philip Tetlock’s studies have shown that, in most cases, their predictions are barely more accurate than random guesses.
According to Levitt and Dubner, we don’t like to admit that we don’t know because
- we don’t want to look stupid
- the incentives are skewed: if we’re right about something we get brownie points, and if we’re wrong people rarely remember
- we’ve internalized a strong moral compass with clear instructions about what is right and wrong (even if reality begs to differ)
Only if we overcome this urge to arrive at predetermined answers can we discover useful insights. Stop faking that you know what you’re talking about and admit that you don’t know (but that you could figure it out).
To figure it out, Levitt and Dubner recommend experiments. Well-designed experiments are a great way to learn because you get real-life feedback. However, an experiment is only as good as the question it tries to answer.
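To make that concrete, here is a minimal sketch (my illustration, not from the book) of how one might analyze a simple randomized experiment in Python: two groups see different versions of an offer, and a permutation test checks whether the observed difference in response rates could plausibly be chance. All group sizes and response rates below are made up.

```python
# Hypothetical sketch: analyzing a simple randomized experiment.
# Group A sees the current message, group B sees a new variant.
# Question: is the difference in response rates more than chance?
import random

random.seed(42)

# Made-up outcomes: 1 = responded, 0 = did not respond.
group_a = [1] * 30 + [0] * 170   # 200 people, 15% response rate
group_b = [1] * 48 + [0] * 152   # 200 people, 24% response rate

observed_diff = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

# Permutation test: shuffle the group labels many times and count how
# often a difference at least as large shows up by pure chance.
pooled = group_a + group_b
n_a = len(group_a)
more_extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n_a:]) / (len(pooled) - n_a) - sum(pooled[:n_a]) / n_a
    if diff >= observed_diff:
        more_extreme += 1

print(f"observed difference in response rates: {observed_diff:.3f}")
print(f"approximate p-value: {more_extreme / trials:.4f}")
```

The statistics are incidental; the point is the feedback loop. You only find out which version works by trying both on real people, which is exactly the kind of real-life feedback the authors are after.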
Open up the solution space
Often we jump immediately into solution mode without being clear about the problem we’re trying to solve. What is the question you’re trying to answer? Asking different questions will lead to different answers.
(…) when serious people talk about education reform, they rarely talk about the family’s role in preparing children to succeed. That is in part because the very word “education reform” indicates that the question is “What’s wrong with our schools?” when in reality, the question might be better phrased as “Why do American kids know less than kids from Estonia and Poland?”
In the first question, the solution is already partly baked in: whatever the answer, it has to involve changing the schools. In the second one, the solution space is more open. Re-defining the problem in this way makes it more likely that you’re addressing the root cause of a problem rather than a symptom.
To do this well, Levitt and Dubner recommend thinking more like a child. This means asking lots of “obvious” and “dumb” questions. Challenging the preconceived notions we have is often what uncovers surprising and creative solutions.
Have fun, think small, don’t fear the obvious.
One tactic that gave the Japanese hot-dog-eating champion Kobi an edge over his competitors was eating the bun and the sausage separately and soaking the bun in water first. None of his competitors had thought of this “obvious” way to increase eating speed.
Lastly, the book recommends solving smaller problems rather than bigger ones. They’re less complex and therefore easier to understand, which makes it easier to uncover cause-and-effect relationships and to design experiments around them. Solutions to smaller problems are also easier to implement because they require less change to the status quo.
The tools of the trade
If there is one mantra a Freak lives by, it is this: people respond to incentives.
Whenever people are involved, understanding incentives is a central part of the solution process. If a solution creates the wrong incentives, it can backfire badly.
As an example, they describe the famous “cobra effect”. The British rulers of colonial India tried to solve the problem of too many cobras in Delhi with a bounty scheme: cash for every cobra skin. Unfortunately, the incentive had an unwanted side effect: people all over Delhi started to breed cobras. This, of course, defeated the purpose, and the bounty was abolished. The result: everyone released their now-worthless cobras, and the problem was worse than before.
The book offers some rules to design effective incentive structures:
- Understand what people really care about and don’t rely on what they say.
- Create incentives that are valuable to them but cheap for you.
- Experiment first and see how people respond to the incentive. If the result is surprising, try to understand why.
- Try to create incentives that don’t smack of manipulation but instead establish a collaborative frame with people.
- “It’s the right thing to do” is not an incentive.
Since there will always be people who try to game whatever system you devise, Levitt and Dubner suggest creating one that lets people self-filter. Such a system takes advantage of the fact that manipulators behave differently from people with honest intentions.
The best example is the Nigerian e-mail scam: an e-mail that tries to convince you to transfer money to the scammers and that is, to most people, blatantly obvious. This is intentional, because the scammers only want gullible people to respond. The mail is written so that 99% of recipients immediately recognize it as a scam, and only the very few with a high potential to be scammed proceed. Otherwise, the scammers would waste lots of time on people who would eventually figure out that it’s a fraud.
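The self-filtering logic can be made concrete with a toy expected-value calculation. Everything below is invented for illustration and is not from the book; the point is only that a blatantly obvious e-mail attracts far fewer replies, but the replies it does attract are far better “qualified”, so the follow-up time per successful scam drops.

```python
# Toy model of self-filtering (all numbers invented for illustration).
# A scammer sends e-mails; each reply costs follow-up time, and only a
# fraction of the people who reply are gullible enough to ever pay.

def hours_per_payout(emails_sent, reply_rate, gullible_share, hours_per_reply):
    replies = emails_sent * reply_rate
    payouts = replies * gullible_share
    return (replies * hours_per_reply) / payouts if payouts else float("inf")

# A subtle, plausible-sounding e-mail: many replies, but almost all of
# them come from skeptics who bail out before ever paying.
subtle = hours_per_payout(emails_sent=100_000, reply_rate=0.02,
                          gullible_share=0.005, hours_per_reply=3)

# A blatantly obvious e-mail: far fewer replies, but the people who do
# reply have effectively filtered themselves as promising targets.
obvious = hours_per_payout(emails_sent=100_000, reply_rate=0.001,
                           gullible_share=0.20, hours_per_reply=3)

print(f"subtle e-mail:  {subtle:.0f} hours of follow-up per payout")
print(f"obvious e-mail: {obvious:.0f} hours of follow-up per payout")
```

Under these made-up numbers, the “worse” e-mail is by far the more efficient one, which is exactly the self-filtering effect the book describes.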
How to persuade people of your solution
Say you’ve devised your system and it works beautifully. How do you convince someone who doesn’t want to be convinced of your solution?
Here are Levitt and Dubner’s tips to do so:
- Your argument might be good, but if it doesn’t resonate with the other person it’s useless.
- Don’t pretend your argument is flawless. This will only make the other side suspicious and defensive. Admit the downsides and risks behind your solution.
- Engage with your opponent’s arguments and acknowledge their strengths. This shows that you truly listen and lets you explore common ground.
- I’m surprised this had to be mentioned, but don’t insult the other person.
- Tell good stories.
[A good story] uses data, statistical or otherwise, to portray a sense of magnitude; without data, we have no idea how a story fits into the larger scheme of things. A good story also includes the passage of time, to show the degree of constancy or change; without a time frame, we can’t judge whether we’re looking at something truly noteworthy or just an anomalous blip. And a story lays out a daisy chain of events, to show the causes that lead up to a particular situation and the consequences that result from it.
Conclusion
I think the book provides some great fundamental concepts for problem-solving. A lot of the advice seems obvious, but as Levitt and Dubner say:
a lot of obvious ideas are only obvious after the fact–after someone has taken the time and effort to investigate them, to prove them right (or wrong).
And it would be beneficial if more people followed the fundamental advice they describe in the book.
However, the book has clear strengths and weaknesses. Its strength lies in problem definition and asking the right questions; there’s a lot of good material in that area. After that, things “thin out” a bit: the solution-finding part is limited exclusively to the idea of incentives, and the final parts of the book, which discuss persuasion and quitting, lack the insight and depth of the earlier chapters.