Kyle Harrison

The Great Mental Models Volume 1: General Thinking Concepts

Shane Parrish
Read 2021

Key Takeaways

Under Consideration — to be added.

Interconnections

Under Consideration — to be added.

Highlights

  • Munger has a way of thinking through problems using what he calls a broad latticework of mental models. These are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world.
  • As the Roman poet Publius Terentius wrote: “Nothing has yet been said that’s not been said before.”
  • In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality. We think better. And thinking better is about finding simple processes that help us work through problems from multiple dimensions and perspectives, allowing us to better choose solutions that fit what matters to us. The skill for finding the right solutions for the right problems is one form of wisdom.
  • Mental models describe the way the world works. They shape how we think, how we understand, and how we form beliefs.
  • Our approach to the Great Mental Models rests on the idea that the fundamentals of knowledge are available to everyone. There is no discipline that is off limits—the core ideas from all fields of study contain principles that reveal how the universe works, and are therefore essential to navigating it. Our models come from fundamental disciplines that most of us have never studied, but no prior knowledge is required—only a sharp mind with a desire to learn.
  • When understanding is separated from reality, we lose our powers. Understanding must constantly be tested against reality and updated accordingly.
  • The only way you’ll know the extent to which you understand reality is to put your ideas and understanding into action.
  • Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions.
  • The first flaw is perspective. We have a hard time seeing any system that we are in.
    • We can’t overcome mortality while in mortality
  • The second flaw is ego. Many of us tend to have too much invested in our opinions of ourselves to see the world’s feedback—the feedback we need to update our beliefs about reality.
    • Charles Darwin
  • First, we’re so afraid about what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending instead of upgrading our ideas.
  • The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them.
  • We also tend to undervalue the elementary ideas and overvalue the complicated ones.
  • Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities. » Andy Benoit
    • Not just simple enough that people understand, but so simple that people can’t misunderstand
  • Understanding only becomes useful when we adjust our behavior and actions accordingly.
    • It is not enough just to know; we must do
  • We are afraid to learn and admit when we don’t know enough. This mindset leads to poor decisions, which are a source of stress and anxiety, and consume massive amounts of time.
  • Rather than update our views, we double down on our effort, accelerating our frustrations and anxiety.
  • We are passive, thinking these things just happened to us and not that we did something to cause them. This passivity means that we rarely reflect on our decisions and the outcomes. Without reflection we cannot learn.
  • When things happen in accord with our view of the world we naturally think they are good for us and others. When they conflict with our views, they are wrong and bad.
  • When we use flawed models we are more likely to misunderstand the situation, the variables that matter, and the cause and effect relationships between them.
  • Better models mean better thinking.
  • Sometimes making good decisions boils down to avoiding bad ones.
  • When it comes to applying mental models we tend to run into trouble either when our model of reality is wrong, that is, it doesn’t survive real world experience, or when our model is right and we apply it to a situation where it doesn’t belong.
    • Doesn’t survive real world experience = “everyone has a plan until they get punched in the face.”
  • Applied incorrectly: “to a person with a hammer, everything looks like a nail”
  • Only by repeated testing of our models against reality and being open to feedback can we update our understanding of the world and change our thinking.
  • Most of us study something specific and don’t get exposure to the big ideas of other disciplines. We don’t develop the multidisciplinary mindset that we need to accurately see a problem. And because we don’t have the right models to understand the situation, we overuse the models we do have and use them even when they don’t belong.
  • “Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant.” —Peter Kaufman
    • #History
  • We have a tendency to think that how the world is, is how it always was. And so we get caught up validating our assumptions from what we find in the here and now.
    • #[[SALY Principle]]
  • Looking to the past can provide essential context for understanding where we are now.
  • We’re much like the blind men in the classic parable of the elephant, going through life trying to explain everything through one limited lens. Too often that lens is driven by our particular field, be it economics, engineering, physics, mathematics, biology, chemistry, or something else entirely. Each of these disciplines holds some truth and yet none of them contain the whole truth.
    • Find quote: “we embrace all truth, no matter the source”
  • To increase your mental efficiency and reach your 400-horsepower potential, you need to use a latticework of mental models.
  • The chief enemy of good decisions is a lack of sufficient perspectives on a problem. » Alain de Botton
  • «Disciplines, like nations, are a necessary evil that enable human beings of bounded rationality to simplify their goals and reduce their choices to calculable limits. But parochialism is everywhere, and the world badly needs international and interdisciplinary travelers to carry new knowledge from one enclave to another.» Herbert Simon
  • The map of reality is not reality.
  • Korzybski introduced and popularized the concept that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted.
    1. A map may have a structure similar or dissimilar to the structure of the territory.
    2. Two similar structures have similar “logical” characteristics.
    3. A map is not the actual territory.
    4. An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly.
  • The truth is, the only way we can navigate the complexity of reality is through some sort of abstraction.
  • When we read the news, we’re consuming abstractions created by other people. The authors consumed vast amounts of information, reflected upon it, and drew some abstractions and conclusions that they share with us. But something is lost in the process. We can lose the specific and relevant details that were distilled into an abstraction.
  • We run into problems when our knowledge becomes of the map, rather than the actual underlying territory it describes.
    • Connect to The West Wing map episode and why our “abstractions” shape our view of reality
  • Reality is messy and complicated, so our tendency to simplify it is understandable. However, if the aim becomes simplification rather than understanding we start to make bad decisions.
  • What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others. –Aristotle
  • we may think adherence to the map is more important than taking in new information about a territory.
    • We should seek out evidence that our beliefs or models are wrong
  • «Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.» George Box
  • We can and should update them based on our own experiences in the territory. That’s how good maps are built: feedback loops created by explorers.
    • The internet makes it so much easier to share their learnings and update everyone else’s map
  • Jane Jacobs in her groundbreaking work, The Death and Life of Great American Cities.
  • Jacobs’ book is, in part, a cautionary tale of what can happen when faith in the model influences the decisions we make in the territory. When we try to fit complexity into the simplification.
  • I’m no genius. I’m smart in spots—but I stay around those spots. Thomas Watson
  • If you know what you understand, you know where you have an edge over others. When you are honest about where your knowledge is lacking you know where you are vulnerable and where you can improve.
  • In order to get the most out of this mental model, we will explore the following: What is a circle of competence? How do you know when you have one? How do you build and maintain one? How do you operate outside of one?
  • The difference between the detailed web of knowledge in the Lifer’s head and the surface knowledge in the Stranger’s head is the difference between being inside a circle of competence and being outside the perimeter. True knowledge of a complex territory cannot be faked.
  • There is no definite checklist for figuring this out, but if you don’t have at least a few years and a few failures under your belt, you cannot consider yourself competent in a circle.
  • «We shall be unable to turn natural advantage to account unless we make use of local guides.» Sun Tzu
  • In Alexander Pope’s poem “An Essay on Criticism,” he writes: “A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring: There shallow draughts intoxicate the brain, And drinking largely sobers us again.”
  • Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought.
    • Connect to David Perell talking about how knowledge workers should train like athletes
  • There are three key practices needed in order to build and maintain a circle of competence: curiosity and a desire to learn, monitoring, and feedback.
  • «Learn from the mistakes of others. You can’t live long enough to make them all yourself.»
  • The reason we have such difficulty with overconfidence—as demonstrated in studies which show that most of us are much worse drivers, lovers, managers, traders (and many other things) than we think we are—is because we have a problem with honest self-reporting.
  • Keeping a journal of your own performance is the easiest and most private way to give self-feedback. Journals allow you to step out of your automatic thinking and ask yourself: What went wrong? How could I do better? Monitoring your own performance allows you to see patterns that you simply couldn’t see before. This type of analysis is painful for the ego, which is also why it helps build a circle of competence. You can’t improve if you don’t know what you’re doing wrong.
    • When progress is measured vs when progress is measured and reported
  • A lot of professionals have an ego problem: their view of themselves does not line up with the way other people see them. Before people can change they need to know these outside views. We need to go to people we trust, who can give us honest feedback about our traits. These people are in a position to observe us operating within our circles, and are thus able to offer relevant perspectives on our competence. Another option is to hire a coach.
  • It takes courage to solicit external feedback, so if defensiveness starts to manifest, focus on the result you hope to achieve.
    • Spirit of humility
  • There are three parts to successfully operating outside a circle of competence:
    1. Learn at least the basics of the realm you’re operating in, while acknowledging that you’re a Stranger, not a Lifer. However, keep in mind that basic information is easy to obtain and often gives the acquirer an unwarranted confidence.
    2. Talk to someone whose circle of competence in the area is strong. Take the time to do a bit of research to at least define questions you need to ask, and what information you need, to make a good decision. If you ask a person to answer the question for you, they’ll be giving you a fish. If you ask them detailed and thoughtful questions, you’ll learn how to fish. Furthermore, when you need the advice of others, especially in higher stakes situations, ask questions to probe the limits of their circles. Then ask yourself how the situation might influence the information they choose to provide you.
    3. Use a broad understanding of the basic mental models of the world to augment your limited understanding of the field in which you find yourself a Stranger. These will help you identify the foundational concepts that would be most useful. These then serve as a guide to help you navigate the situation you are in.
  • The problem of incentives can really skew how much you can rely on someone else’s circle of competence. This is particularly acute in the financial realm.
    • Incentives
  • Elizabeth knew there were aspects of leading the country that were outside her circle of competence. She had an excellent education and had spent most of her life just trying to survive. Perhaps that is why she was able to identify and admit to what she didn’t know.
    • Spirit of humility
  • In her first speech as Queen, Elizabeth announced, “I mean to direct all my actions by good advice and counsel.” After outlining her intent upon becoming Queen, she proceeded to build her Privy Council—effectively the royal advisory board. She didn’t copy her immediate predecessors, filling her council with yes men or wealthy incompetents who happen to have the same religious values. She blended the old and the new to develop stability and achieve continuity. She kept the group small so that real discussions could happen. She wanted a variety of opinions that could be challenged and debated.
  • There is only so much you can know with great depth of understanding. This is why being able to identify your circle, and knowing how to move around outside of it, is so important.
  • «Ignorance more often begets confidence than knowledge.» Charles Darwin
  • Warren Buffett. When asked, he recommended that each individual stick to their area of special competence, and be very reluctant to stray. For when we stray too far, we get into areas where we don’t even know what we don’t know. We may not even know the questions we need to ask.
  • Karl Popper wrote “A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience.” The idea here is that if you can’t prove something wrong, you can’t really prove it right either.
  • In a true science, as opposed to a pseudo-science, the following statement can be easily made: “If x happens, it would show demonstrably that theory y is not true.” We can then design an experiment, a physical one or sometimes a thought experiment, to figure out if x actually does happen.
  • Falsification is the opposite of verification; you must try to show the theory is incorrect, and if you fail to do so, you actually strengthen it.
  • Trend is not destiny.
  • Bertrand Russell’s classic example of the chicken that gets fed every day is a great illustration of this concept. Daily feedings have been going on for as long as the chicken has observed, and thus it supposes that these feedings are a guaranteed part of its life and will continue in perpetuity. The feedings appear as a law until the day the chicken gets its head chopped off. They are then revealed to be a trend, not a predictor of the future state of affairs.
  • I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile! Richard Feynman
  • First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them.
  • Rather, first principles thinking identifies the elements that are, in the context of any given situation, non-reducible.
  • When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So is bitcoin. So is love. The list goes on.
  • Socratic questioning can be used to establish first principles through stringent analysis. This is a disciplined questioning process, used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and ordinary discussions is that the former seeks to draw out first principles in a systematic manner. Socratic questioning generally follows this process:
    1. Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
    2. Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
    3. Looking for evidence. (How can I back this up? What are the sources?)
    4. Considering alternative perspectives. (What might others think? How do I know I am correct?)
    5. Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
    6. Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)
  • Socratic questioning stops you from relying on your gut and limits strong emotional responses. This process helps you build something that lasts.
  • The Five Whys is a method rooted in the behavior of children. Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to dread, but which is exceptionally useful for identifying first principles: repeatedly asking “why?” The goal of the Five Whys is to land on a “what” or “how”. It is not about introspection, such as “Why do I feel like this?” Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your “whys” result in a statement of falsifiable fact, you have hit a first principle. If they end up with a “because I said so” or “it just is”, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.
  • «Science is much more than a body of knowledge. It is a way of thinking.» Carl Sagan
  • The dogma of the sterile stomach was so entrenched as a first principle, that it was hard to admit that it rested on some incorrect assumptions which ultimately ended with the explanation, “because that’s just the way it is”.
  • To improve something, we need to understand why it is successful or not. Otherwise, we are just copying thoughts or behaviors without understanding why they worked.
  • «As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.» Harrington Emerson
  • Thought experiments can be defined as “devices of the imagination used to investigate the nature of things.”
  • Thought experiments are more than daydreaming. They require the same rigor as a traditional experiment in order to be useful. Much like the scientific method, a thought experiment generally has the following steps:
    1. Ask a question
    2. Conduct background research
    3. Construct hypothesis
    4. Test with (thought) experiments
    5. Analyze outcomes and draw conclusions
    6. Compare to hypothesis and adjust accordingly (new question, etc.)
  • When we say “if money were no object” or “if you had all the time in the world,” we are asking someone to conduct a thought experiment because actually removing that variable (money or time) is physically impossible.
  • Re-imagining history: A familiar use of the thought experiment is to re-imagine history.
  • These approaches are called the historical counter-factual and semi-factual. If Y happened instead of X, what would the outcome have been? Would the outcome have been the same? As popular—and generally useful—as counter- and semi-factuals are, they are also the areas of thought experiment with which we need to use the most caution. Why? Because history is what we call a chaotic system. A small change in the beginning conditions can cause a very different outcome down the line. This is where the rigor of the scientific method is indispensable if we want to draw conclusions that are actually useful.
    • Compare to the chaotic system you have to deal with when evaluating alternative methods to fighting climate change
  • Also the scene imagining what would have happened if the Hindenburg hadn’t crashed - https://en.wikipedia.org/wiki/The_Never_War?wprov=sfti1
  • This experiment was first proposed in modern form by Philippa Foot in her paper “The Problem of Abortion and the Doctrine of the Double Effect,” and further considered extensively by Judith Jarvis Thomson in “The Trolley Problem.”
  • An example of this is the famous “veil of ignorance” proposed by philosopher John Rawls in his influential Theory of Justice. In order to figure out the most fair and equitable way to structure society, he proposed that the designers of said society operate behind a veil of ignorance. This means that they could not know who they would be in the society they were creating.
  • What’s not obvious is that the gap between what is necessary to succeed and what is sufficient is often luck, chance, or some other factor beyond your direct control.
  • Technology is fine, but the scientists and engineers only partially think through their problems. They solve certain aspects, but not the total, and as a consequence it is slapping us back in the face very hard. Barbara McClintock
  • Second-order thinking is thinking farther ahead and thinking holistically. It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well.
  • Very often, the second level of effects is not considered until it’s too late. This concept is often referred to as the “Law of Unintended Consequences” for this very reason.
  • During their colonial rule of India, the British government began to worry about the number of venomous cobras in Delhi. To reduce the numbers, they instituted a reward for every dead snake brought to officials. In response, Indian citizens dutifully complied and began breeding the snakes to slaughter and bring to officials. The snake problem was worse than when it started because the British officials didn’t think at the second level.
    • Connect to Russia oil-drilling incentives creating more holes in the ground but less oil
  • «Stupidity is the same as evil if you judge by the results.» Margaret Atwood
  • In 1963, the UC Santa Barbara ecologist and economist Garrett Hardin proposed his First Law of Ecology: “You can never merely do one thing.” We operate in a world of multiple, overlapping connections, like a web, with many significant, yet obscure and unpredictable, relationships. He developed second-order thinking into a tool, showing that if you don’t consider “the effects of the effects,” you can’t really claim to be doing any thinking at all.
    • Connect to climate change; we can’t just solve one thing.
  • «When we try to pick out anything by itself, we find it hitched to everything else in the Universe.» John Muir
    • You pick up one end of the stick and have to pick up the other
  • Trust and trustworthiness are the results of multiple interactions. This is why second-order thinking is so useful and valuable. Going for the immediate payoff in our interactions with people, unless they are a win-win, almost always guarantees that interaction will be a one-off. Maximizing benefits is something that happens over time. Thus, considering the effects of the effects of our actions on others, or on our reputations, is critical to getting people to trust us, and to enjoy the benefits of cooperation that come with that.
  • Life is filled with the need to be persuasive. Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well.
  • Second-order thinking, as valuable as it is, must be tempered in one important way: You can’t let it lead to the paralysis of the Slippery Slope Effect, the idea that if we start with action A, everything after is a slippery slope down to hell, with a chain of consequences B, C, D, E, and F.
  • Second-order thinking needs to evaluate the most likely effects and their most likely consequences, checking our understanding of what the typical results of our actions will be. If we worried about all possible effects of effects of our actions, we would likely never do anything, and we’d be wrong.
  • Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass.
  • There are three important aspects of probability that we need to explain so you can integrate them into your thinking to get into the ballpark and improve your chances of catching the ball:
    1. Bayesian thinking
    2. Fat-tailed curves
    3. Asymmetries
  • The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new.
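The update rule behind this is just Bayes’ rule. A minimal sketch, with a hypothetical prior and likelihoods chosen purely for illustration (the numbers are mine, not the book’s):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | evidence): weigh new evidence by what we already believe."""
    numerator = p_e_given_h * prior
    total_evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / total_evidence

# Hypothetical: we believe a claim with probability 0.30, then observe
# evidence four times more likely if the claim is true (0.8 vs 0.2).
posterior = bayes_update(prior=0.30, p_e_given_h=0.80, p_e_given_not_h=0.20)
print(f"{posterior:.3f}")  # the prior of 0.30 rises to roughly 0.63
```

The point of the model is visible in the arithmetic: the new evidence does not replace the prior, it is weighed against it.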
  • Now we need to look at fat-tailed curves: Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.
  • The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.
  • In a bell curve type of situation, like displaying the distribution of height or weight in a human population, there are outliers on the spectrum of possibility, but the outliers have a fairly well-defined scope. You’ll never meet a man who is ten times the size of an average man. But in a curve with fat tails, like wealth, the central tendency does not work the same way. You may regularly meet people who are ten, 100, or 10,000 times wealthier than the average person. That is a very different type of world.
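A quick simulation makes the contrast concrete. This is an illustrative sketch, not an example from the book: the height parameters and the Pareto shape (1.16, often associated with 80/20-style wealth distributions) are assumptions chosen only to contrast thin and fat tails.

```python
import random

random.seed(42)
N = 100_000

# Thin-tailed: heights in cm (mean and spread are illustrative numbers).
heights = [random.gauss(175, 7) for _ in range(N)]
# Fat-tailed: a Pareto distribution, a common stand-in for wealth.
wealth = [random.paretovariate(1.16) for _ in range(N)]

for name, xs in (("normal (height)", heights), ("pareto (wealth)", wealth)):
    ratio = max(xs) / (sum(xs) / N)
    print(f"{name}: largest observation is {ratio:.1f}x the mean")
```

The normal sample’s maximum ends up barely above its mean, while the Pareto sample’s maximum can be hundreds or thousands of times its mean — exactly why the central tendency stops being informative in a fat-tailed world.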
  • In the next ten years, how many events are possible? How fat is the tail?
  • Asymmetries: Finally, you need to think about something we might call “metaprobability”—the probability that your probability estimates themselves are any good.
  • If you look at nicely polished stock pitches made by professional investors, nearly every time an idea is presented, the investor looks their audience in the eye and states they think they’re going to achieve a rate of return of 20% to 40% per annum, if not higher. Yet exceedingly few of them ever attain that mark, and it’s not because they don’t have any winners. It’s because they get so many so wrong. They consistently overestimate their confidence in their probabilistic estimates. (For reference, the general stock market has returned no more than 7% to 8% per annum in the United States over a long period, before fees.)
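The size of that gap is easy to check with compound-interest arithmetic. The 20-year horizon and the 25% claim below are my illustrative choices, not figures from the book:

```python
def compound(rate, years, principal=1.0):
    """Value of `principal` compounded annually at `rate` for `years`."""
    return principal * (1 + rate) ** years

# Long-run market ~7% per annum vs an (illustrative) claimed 25% per annum.
print(f"7% for 20 years:  {compound(0.07, 20):.1f}x")   # about 3.9x
print(f"25% for 20 years: {compound(0.25, 20):.1f}x")   # about 86.7x
```

Anyone who actually compounded at the claimed rates would quickly own a large slice of the market, which is the simplest reason to discount such estimates.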
  • Nassim Taleb puts his finger in the right place when he points out our naive use of probabilities. In The Black Swan, he argues that any small error in measuring the risk of an extreme event can mean we’re not just slightly off, but way off—off by orders of magnitude, in fact.
  • Nassim Taleb in a book curiously titled Antifragile. Here is the core of the idea. We can think about three categories of objects: Ones that are harmed by volatility and unpredictability, ones that are neutral to volatility and unpredictability, and finally, ones that benefit from it. The latter category is antifragile—like a package that wants to be mishandled. Up to a point, certain things benefit from volatility, and that’s how we want to be. Why? Because the world is fundamentally unpredictable and volatile, and large events—panics, crashes, wars, bubbles, and so on—tend to have a disproportionate impact on outcomes.
  • The problem is that nearly all studies of “expert” predictions in such complex real-world realms as the stock market, geopolitics, and global finance have proven again and again that, for the rare and impactful events in our world, predicting is impossible! It’s more efficient to prepare.
  • Failing properly has two major components. First, never take a risk that will do you in completely. (Never get taken out of the game completely.) Second, develop the personal resilience to learn from your failures and start again. With these two rules, you can only fail temporarily.
  • We notice two things happening at the same time (correlation) and mistakenly conclude that one causes the other (causation). We then often act upon that erroneous conclusion, making decisions that can have immense influence across our lives.
  • Trying to invert the relationship can help you sort through claims to determine if you are dealing with true causation or just correlation.
  • The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless yet be determined to make them otherwise. F. Scott Fitzgerald
  • Inversion is a powerful tool to improve your thinking because it helps you identify and remove obstacles to success. The root of inversion is “invert,” which means to upend or turn upside down. As a thinking tool it means approaching a situation from the opposite end of the natural starting point. Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward.
  • There are two approaches to applying inversion in your life. First, start by assuming that what you’re trying to prove is either true or false, then show what else would have to be true. Second, instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.
  • Bernays didn’t focus on how to sell more cigarettes to women within the existing social structure. Sales would have undoubtedly been a lot more limited. Instead he thought about what the world would look like if women smoked often and anywhere, and then set about trying to make that world a reality. Once he did that, selling cigarettes to women was comparatively easy. This inversion approach became a staple of Bernays’s work. He used the descriptor “appeals of indirection”: each time he was “hired to sell a product or service, he instead sold whole new ways of behaving, which appeared obscure but over time reaped huge rewards for his clients and redefined the very texture of American life.”
  • Kurt Lewin. In the 1930s he came up with the idea of force field analysis, which essentially recognizes that in any situation where change is desired, successful management of that change requires applied inversion. Here is a brief explanation of his process: (1) Identify the problem. (2) Define your objective. (3) Identify the forces that support change towards your objective. (4) Identify the forces that impede change towards the objective. (5) Strategize a solution! This may involve both augmenting or adding to the forces in step 3, and reducing or eliminating the forces in step 4.
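    Lewin’s five-step process can be sketched as a small weighing exercise. This is only an illustrative sketch, not anything from the book: the `Force` type, the 1–5 strength scale, and the example force names are all assumptions made up for demonstration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Force:
        """One force acting on a desired change (hypothetical model)."""
        name: str
        strength: int  # assumed scale: 1 (weak) to 5 (strong)

    def force_field_score(supporting: list[Force], impeding: list[Force]) -> int:
        """Net pressure toward the objective: positive values favor change.
        Mirrors steps 3-5 of Lewin's process: weigh supporting forces
        against impeding ones, then strategize around the difference."""
        return sum(f.strength for f in supporting) - sum(f.strength for f in impeding)

    # Step 3: forces supporting the change (example values)
    supporting = [Force("executive sponsorship", 4), Force("customer demand", 3)]
    # Step 4: forces impeding the change (example values)
    impeding = [Force("legacy tooling", 3), Force("team fatigue", 2)]

    print(force_field_score(supporting, impeding))  # 2
    ```

    The point of the model is not the arithmetic but the inversion in step 4: listing what blocks the change, so a strategy can reduce those forces rather than only pushing harder on the supporting ones.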
  • “He wins his battles by making no mistakes.” Sun Tzu
  • Anybody can make the simple complicated. Creativity is making the complicated simple. Charles Mingus
  • Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation that has the fewest moving parts.
  • Ockham wrote that “a plurality is not to be posited without necessity”—essentially that we should prefer the simplest explanation with the fewest moving parts.
  • As scientist and writer Carl Sagan explains in The Demon-Haunted World: “A multitude of aspects of the natural world that were considered miraculous only a few generations ago are now thoroughly understood in terms of physics and chemistry. At least some of the mysteries of today will be comprehensively solved by our descendants. The fact that we cannot now produce a detailed understanding of, say, altered states of consciousness in terms of brain chemistry no more implies the existence of a ‘spirit world’ than a sunflower following the Sun in its course across the sky was evidence of a literal miracle before we knew about phototropism and plant hormones.”
  • “When you hear hoofbeats, think horses, not zebras.”
  • One important counter to Occam’s Razor is the difficult truth that some things are simply not that simple. The regular recurrence of fraudulent human organizations like pyramid schemes and Ponzi schemes is not a miracle, but neither is it obvious. No simple explanation suffices, exactly. They are a result of a complex set of behaviors, some happening almost by accident or luck, and some carefully designed with the intent to deceive. It isn’t easy to spot the development of a fraud. If it were, frauds would be stamped out early. Yet, to this day, frauds frequently grow to epic proportions before they are discovered.
  • When Louis Gerstner took over IBM in the early 1990s, during one of the worst periods of struggle in its history, many business pundits called for a statement of his vision. What rabbit would Gerstner pull out of his hat to save Big Blue?
  • Smartly, Gerstner realized that the simple approach was most likely to be the effective one. His famous reply was that “the last thing IBM needs right now is a vision.” What IBM actually needed to do was to serve its customers, compete for business in the here and now, and focus on businesses that were already profitable. It needed simple, tough-minded business execution.
  • An explanation can be simplified only to the extent that it can still provide an accurate understanding.
  • I need to listen well so that I hear what is not said. Thuli Madonsela
    • Listening
  • Hard to trace in its origin, Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities. This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.
  • The famous Linda problem, demonstrated by the psychologists Daniel Kahneman and Amos Tversky in a 1982 paper, is an illuminating example of how our minds work and why we need Hanlon’s Razor. It went like this: Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable? (1) Linda is a bank teller. (2) Linda is a bank teller and is active in the feminist movement. The majority of respondents chose option 2. Why? The wording used to describe her suggests Linda is a feminist, and offered only the choice between “bank teller” and “feminist bank teller,” most respondents concluded she was both. They didn’t know anything about what she did, but because they were led to believe she had to be a feminist they couldn’t reject that option, even though basic probability makes a single condition more likely than that same condition combined with another.
  • They called it the “Fallacy of Conjunction.”
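    The arithmetic behind the fallacy is simple: the probability of a conjunction can never exceed the probability of either conjunct, since P(A and B) = P(A) × P(B | A) and the second factor is at most 1. A minimal sketch of the Linda case, where the specific probabilities are invented purely for illustration:

    ```python
    # Hypothetical numbers, chosen only to illustrate the conjunction rule.
    p_teller = 0.05                 # assumed: probability Linda is a bank teller
    p_feminist_given_teller = 0.95  # assumed: even if nearly all such tellers are feminists

    # P(teller AND feminist) = P(teller) * P(feminist | teller)
    p_both = p_teller * p_feminist_given_teller

    # The conjunction can never be more probable than "bank teller" alone,
    # no matter how strongly the description suggests "feminist".
    assert p_both <= p_teller

    print(p_both)  # 0.0475 -- less likely than 0.05, as the rule guarantees
    ```

    However vivid the description, option 2 is a strict subset of option 1, so it can never be the more probable answer.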
  • We’re deeply affected by vivid, available evidence, to such a degree that we’re willing to make judgments that violate simple logic. We over-conclude based on the available information.
  • What does this have to do with Hanlon’s Razor? The connection is this: When we see something we don’t like happen and which seems wrong, we assume it’s intentional. But it’s more likely that it’s completely unintentional. Assuming someone is doing wrong and doing it purposefully is like assuming Linda is more likely to be a bank teller and a feminist. Most people doing wrong are not bad people trying to be malicious.
    • The Evil Genius Default: people want to believe the world is messed up because of intentionally bad people but most of the time it’s because people are stupid
  • Failing to prioritize stupidity over malice causes things like paranoia. Always assuming malice puts you at the center of everyone else’s world. This is an incredibly self-centered approach to life. In reality, for every act of malice, there is almost certainly far more ignorance, stupidity, and laziness.
  • When we assume someone is out to get us, our very natural instinct is to take actions to defend ourselves. It’s harder to take advantage of, or even see, opportunities while in this defensive mode because our priority is saving ourselves—which tends to reduce our vision to dealing with the perceived threat instead of examining the bigger picture.
  • Robert Heinlein’s character Doc Graves describes the Devil Fallacy in the 1941 sci-fi story “Logic of Empire”, as he explains the theory to another character: “I would say you’ve fallen into the commonest fallacy of all in dealing with social and economic subjects—the ‘devil’ theory. You have attributed conditions to villainy that simply result from stupidity…. You think bankers are scoundrels. They are not. Nor are company officials, nor patrons, nor the governing classes back on earth. Men are constrained by necessity and build up rationalizations to account for their acts.”