Seeking Wisdom by Peter Bevelin
From Darwin to Munger
Bevelin attempts to understand how the likes of naturalist Charles Darwin and investor Charles Munger achieved such remarkably clear thinking, and which ideas influence their decisions. He also explores ways to improve our own thinking by asking hard questions and taking a multidisciplinary approach. An excellent read if you are looking to understand some very valuable mental models.
Finished: Jan 2021
Rating: ⭐⭐⭐
Get the book 👈

📌 Short review
This book is loaded with mental models in an exhaustive way—reasoning frameworks, misjudgements, biases, the whole catalog. It's dense though, and could use better editing. I'd prefer more depth on fewer models rather than surface coverage of everything. The best parts are examples from elite thinkers like Einstein, Feynman, and Buffett—actual applications of abstract concepts.
It deserves a second read, or maybe just keeping it around as reference material when you need to dig deeper on specific topics. Well researched and valuable, but the writing style makes it hard to recommend unless you're already into mental models and have read some other stuff in this space.
⚡️TL;DR
More encyclopedia than book. Use it as reference material or to discover frameworks worth exploring deeper. Works best if you already know the landscape and just need everything in one place.
📘 Notes
Inversion as a practical tool: Instead of asking "How do I succeed?", flip it to "How could I guarantee failure?" and avoid those paths. This works far better for complex problems where success has many routes but failure follows predictable patterns. Example: Don't ask "What makes great code?"—ask "What makes unmaintainable garbage?" and systematically avoid it.
Use checklists for high-stakes decisions: Pilots use them because memory fails under pressure. Build decision checklists for stuff you do repeatedly: hiring, architecture reviews, incident response, vendor evaluation. Check them before the decision, not during.
Recognize when multiple biases align: The "lollapalooza effect" explains why certain ideas spread like wildfire—multiple psychological tendencies reinforcing each other. When everyone around you believes something, stop and ask: "Is this actually true or just heavily repeated?" Super relevant when evaluating hot technologies or industry trends.
Incentives predict behavior better than intentions: When analyzing claims (vendor pitches, management decisions, industry advice), map the incentives first. What's this person optimized for? Their behavior follows their incentives, not their stated values.
Base rates beat anecdotes: Before evaluating any specific case, ask: "What usually happens in situations like this?" Your colleague's startup success story is way less informative than the base rate of startup success (most fail). Start with historical patterns, not exceptions.
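The base-rate idea is just Bayes' rule. Here's a minimal sketch with illustrative numbers (the rates below are hypothetical, not from the book) showing how little even a favorable anecdote moves a low prior:

```python
def posterior(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """Bayes' rule: P(success | signal observed)."""
    p_signal = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_signal

# Hypothetical numbers: 10% of startups succeed; a "great founding team"
# signal shows up in 60% of successes but also in 30% of failures.
p = posterior(base_rate=0.10, hit_rate=0.60, false_alarm_rate=0.30)
print(f"P(success | great team) = {p:.0%}")  # ~18% — still far below even odds
```

Even a signal twice as common among successes only nudges the estimate from 10% to about 18%. The base rate dominates.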
Think in second and third-order effects: Every action creates ripples. AI tooling increases code output (first-order), reduces debugging skill development (second-order), creates maintainability crisis in 3-5 years (third-order). Most people stop at first-order thinking.
Survivorship bias is everywhere: We study successful companies, developers, strategies—but ignore the thousands who tried the same approach and failed anyway. Every piece of advice from successful people needs filtering: "How many people did this and failed?"
Availability cascades shape reality: Ideas become credible through repetition, not evidence. The more you hear "developers will be obsolete," the more real it feels, regardless of actual evidence. Actively seek disconfirming data for popular narratives.
Compound effects dominate outcomes: Small daily improvements compound exponentially. Small daily degradations (technical debt, shortcuts, skill erosion) compound the same way. Most of what determines success or failure in 5 years is the daily micro-decisions you're making today.
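The arithmetic behind this is simple but counterintuitive. A quick sketch (the 1%-per-day rate is illustrative, not a claim from the book):

```python
# Compounding sketch: 1% better vs. 1% worse every day for a year.
better = 1.01 ** 365  # small daily improvement
worse = 0.99 ** 365   # small daily degradation

print(f"1% daily improvement over a year: ~{better:.1f}x")  # ~37.8x
print(f"1% daily degradation over a year: ~{worse:.3f}x")   # ~0.026x
```

The same 1% delta, applied in opposite directions, ends up separated by three orders of magnitude after a year—which is why daily micro-decisions matter more than they feel like they do.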
💎 Gems
"It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent."
The Munger/Buffett philosophy of inversion—engineer success by systematically eliminating ways to fail.
"The map is not the territory."
Models are simplifications. They highlight certain features while ignoring others. Understanding a model's limitations matters as much as understanding the model itself. Particularly relevant when working with AI systems or any abstraction.
"Small advantages compound exponentially over time."
This applies to both positive behaviors (learning, skill development) and negative ones (technical debt, bad habits). The math of compounding is intuitive but the timeline isn't—most people overestimate short-term effects and underestimate long-term ones.
"Show me the incentive and I will show you the outcome."
Understanding what people are optimized for explains behavior better than understanding what they say they value. Critical for evaluating vendor claims, organizational dynamics, or policy effects.
"When multiple biases and tendencies act in the same direction, the combined effect is far greater than the sum of parts."
The lollapalooza effect. Recognizing these convergence points explains otherwise inexplicable human behavior. Useful for understanding why certain technologies or ideas catch fire while technically superior alternatives languish.