How much do I want to read more: 7/10

Mental models are an interesting concept. Their promise is to give you high-level thinking tools so you can better face any real-life situation.
The example given in the Introduction is using multiplication instead of just repeated addition: the addition works, but it is much less efficient.
The book seems pretty heavy with lots of details. Not all of them are that interesting.
Most of what I've read so far is just plain common sense, or things I may have read elsewhere already.
Still, I'm curious to read more of it and track a few models that stand out.

Introduction: The Super Thinking Journey

Once you are familiar with mental models, you can use them to quickly create a mental picture of a situation, which becomes a model that you can later apply in similar situations.

An example of a useful mental model from physics is the concept of critical mass, the mass of nuclear material needed to create a critical state whereby a nuclear chain reaction is possible.

Charlie Munger:
What is elementary, worldly wisdom? Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.
You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models.

Suppose you are thinking about a company that involves people renting out their expensive power tools, which usually sit dormant in their garages. If you realize that the concept of critical mass applies to this business, then you know that there is some threshold that needs to be reached before it can be viable.
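Critical mass can be sketched as a toy chain-reaction model (my illustration, not from the book): each active renter recruits k new ones per period, and the marketplace only sustains itself once k crosses 1.

```python
def generations(k: float, seed: float = 100.0, steps: int = 5) -> list[float]:
    """Toy chain reaction: each active user recruits k more each period."""
    sizes = [seed]
    for _ in range(steps):
        sizes.append(sizes[-1] * k)
    return sizes

fizzle = generations(k=0.8)   # below critical mass: 100, 80, 64, ... dies out
sustain = generations(k=1.2)  # above critical mass: 100, 120, 144, ... grows
```

The same threshold behavior shows up in nuclear chain reactions, epidemics, and network-effect businesses, which is what makes the model so portable.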

knowing the right mental models unlocks super thinking, just as subtraction, multiplication, and division unlock your ability to do more complex math problems.
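The arithmetic analogy can be made concrete: repeated addition reaches the same answer as multiplication, but the number of steps grows with the operand. A throwaway sketch of mine:

```python
def multiply_by_addition(a: int, n: int) -> int:
    """Compute a * n using nothing but repeated addition: n steps of work."""
    total = 0
    for _ in range(n):
        total += a
    return total

# The multiplication "model" gets the same result in a single step.
assert multiply_by_addition(7, 1000) == 7 * 1000
```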

Munger said:
And the models have to come from multiple disciplines—because all the wisdom of the world is not to be found in one little academic department… You’ve got to have models across a fair array of disciplines.
You may say, “My God, this is already getting way too tough.” But, fortunately, it isn’t that tough—because 80 or 90 important models will carry about 90 percent of the freight in making you a worldly-wise person. And, of those, only a mere handful really carry very heavy freight.
When I urge a multidisciplinary approach…
I’m really asking you to ignore jurisdictional boundaries. If you want to be a good thinker, you must develop a mind that can jump these boundaries. You don’t have to know it all. Just take in the best big ideas from all these disciplines. And it’s not that hard to do.

“If all you have is a hammer, everything looks like a nail.”
You want to use the right tool for a given situation, and to do that, you need a whole toolbox full of super models.

1. Being Wrong Less

YOU MAY NOT REALIZE IT, but you make dozens of decisions every day.

Carl Jacobi: “Invert, always invert”. He meant that thinking about a problem from an inverse perspective can unlock new solutions and strategies.

investing money from the perspective of not losing money.
avoid unhealthy food.
The concept of inverse thinking can help you with the challenge of making good decisions.
The inverse of being right more is being wrong less.

Antifragile mental model:
Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile.
Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.

If your thinking is antifragile, then it gets better over time as you learn from your mistakes.
It’s like working out at the gym—you are shocking your muscles and bones so they grow stronger over time.

KEEP IT SIMPLE, STUPID!

Arguing from first principles: thinking from the bottom up, using basic building blocks of what you think is true to build sound conclusions.

Elon Musk:
First principles is kind of a physics way of looking at the world…You kind of boil things down to the most fundamental truths and say, “What are we sure is true?”…and then reason up from there…
Somebody could say… “Battery packs are really expensive and that’s just the way they will always be. Historically, it has cost $600 per kilowatt-hour, and so it’s not going to be much better than that in the future.”
With first principles, you say, “What are the material constituents of the batteries? What is the stock market value of the material constituents?” It’s got cobalt, nickel, aluminum, carbon, and some polymers for separation, and a seal can. Break that down on a material basis and say, “If we bought that on the London Metal Exchange, what would each of those things cost?”
It’s like $80 per kilowatt-hour. So clearly you just need to think of clever ways to take those materials and combine them into the shape of a battery cell and you can have batteries that are much, much cheaper than anyone realizes.
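The arithmetic in the quote can be sketched out. The per-material figures below are placeholders I'm supplying so the total matches the ~$80/kWh in the quote; they are not actual exchange prices:

```python
# Hypothetical per-kWh material costs, chosen to sum to the ~$80/kWh figure
# from the quote -- NOT real London Metal Exchange prices.
material_cost_per_kwh = {
    "cobalt": 20.0,
    "nickel": 25.0,
    "aluminum": 10.0,
    "carbon": 10.0,
    "polymers and seal can": 15.0,
}

historical_cost = 600.0  # $/kWh, "that's just the way it will always be"
material_floor = sum(material_cost_per_kwh.values())  # first-principles floor

print(f"materials: ${material_floor:.0f}/kWh vs historical ${historical_cost:.0f}/kWh")
print(f"room for improvement: {historical_cost / material_floor:.1f}x")
```

The point of the exercise is the gap between the historical price and the material floor, not the exact numbers.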

by taking the first-principles approach, you will gain a much deeper understanding of the subject at hand.

de-risking:
There is a risk that one or more of your assumptions is untrue.

Break the idea down into its underlying assumptions; the next step is actually going out and testing these assumptions.
People often make the mistake of doing way too much work before testing their assumptions.
In computer science this trap is called premature optimization, where you tweak or perfect code or algorithms (optimize) too early (prematurely).

minimum viable product, or MVP. The MVP is the product you are developing with just enough features, the minimum amount, to be feasibly, or viably, tested by real people.
LinkedIn cofounder Reid Hoffman: “If you’re not embarrassed by the first version of your product, you’ve launched too late.”

your first plan is probably wrong:
“No battle plan survives contact with the enemy.”
“Everybody has a plan until they get punched in the mouth.”

you must revise it often based on the real-world feedback you receive.

One way you can be wrong is by coming up with too many or too complicated assumptions up front.
Ockham’s razor: the simplest explanation is most likely to be true.
When you encounter competing explanations that plausibly explain a set of data equally well, you probably want to choose the simplest one to investigate first.
“shaves off” unnecessary assumptions.
“Everything should be made as simple as it can be, but not simpler!”
“When you hear hoofbeats, think of horses, not zebras.”

Romantic partner:
“I will only date a Brazilian man with blue eyes who loves hot yoga and raspberry ice cream, and whose favorite Avengers character is Thor.”

conjunction fallacy:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

Most people answer that number 2 is more probable, but that's impossible:
the probability of two events occurring in conjunction is always less than or equal to the probability of either one of the events occurring alone.
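The rule is easy to check numerically. The probabilities below are made up for illustration; only the inequality matters:

```python
# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A), since P(B|A) <= 1.
p_teller = 0.05                  # P(Linda is a bank teller) -- made up
p_feminist_given_teller = 0.60   # P(feminist | teller) -- generously high
p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller  # answer 2 can never beat answer 1
```

No matter how representative the conjunction sounds, multiplying by a probability can only shrink the result.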

Overfitting occurs when you use an overly complicated explanation when a simpler one will do.
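A quick numerical illustration of my own, using NumPy: fit noisy data that is truly linear with both a 2-parameter line and a 10-parameter polynomial, then predict outside the training range.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)  # linear truth + noise

line = np.polyfit(x, y, deg=1)   # simple model: 2 parameters
poly9 = np.polyfit(x, y, deg=9)  # overfit model: 10 parameters, fits the noise

x_new = 1.5  # a point outside the data
pred_line = np.polyval(line, x_new)
pred_poly9 = np.polyval(poly9, x_new)
truth = 2.0 * x_new + 1.0  # = 4.0

# The simple fit lands near 4.0; the degree-9 fit typically lands far away.
```

The degree-9 polynomial threads through every noisy point, so it explains the training data "better", and that is exactly why it generalizes worse.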

IN THE EYE OF THE BEHOLDER

You go through life seeing everything from your perspective.
In physics your perspective is called your frame of reference.

A mental trap related to frame of reference is "framing".
When you present an important issue to your coworker or family member, you try to frame it in a way that might help them best understand your perspective, setting the stage for a beneficial conversation.

News consumers must be [made] aware that editors can strategically use headlines to effectively sway public opinion and influence individuals’ behavior.

anchoring:

versus: