AI Mental Models: 201 Thinking Frameworks for Better Work

A practical map of 201 AI mental models, organized by use case, with a simple way to turn thinking frameworks into better prompts and workflows.

The short version: Mental models are not magic words. They are handles. The value is not knowing 201 names. The value is having enough handles to choose a better way to think before you ask AI to act.

This is Part 1 of the AI Mental Models series.

Part 1 gives you the map. Part 2 will turn the map into a Claude Code workflow. Part 3 will explain the personal system I use when I want AI to think with me instead of merely answer me.

Everyone says mental models are "wisdom for thinking." Actually, in the AI era they are handles you hand to the model so the output gets a spine. The value is not memorizing 201 names. The value is choosing the right lens before you ask AI to act.

If you already prompt with named lenses, skip ahead to §5 for the pattern. If you are still asking AI in plain language and getting mush, read on.

I used to make decisions with one tool: a pros-and-cons list.

Sometimes I used SWOT because it looked more serious.

Then I would stare at the page and pretend the matrix had solved the problem. It had not. It had only made my uncertainty look organized.

The better version came from reading across disciplines: Charlie Munger's latticework idea, Shane Parrish's mental model writing, Kahneman's work on fast and slow thinking, business strategy books, systems thinking, probability, and the uncomfortable experience of watching my own projects fail for reasons I should have seen earlier.

The AI era makes this more useful, not less.

Without a thinking framework, an AI agent behaves like a very capable assistant with no briefing. It can write, summarize, code, search, and format, but the shape of the work depends on the shape of your instruction. If the instruction is vague, the output is often vague. If the instruction contains a thinking model, the output gets a spine.

The problem is not that AI lacks intelligence. The problem is that we often fail to give it a way to think.

1. What I mean by AI mental models

I use "AI mental models" in two connected ways.

First, they are models for your own thinking: first principles, inversion, expected value, circle of competence, second-order thinking, and so on.

Second, they are models you can give to AI: not as long essays in every prompt, but as compact operating lenses. A good model tells the agent what to inspect, what to ignore, what to challenge, and how to report uncertainty.

That second part matters.

If you ask:

Should I build this product?

you get a generic answer.

If you ask:

Analyze this product idea with first principles, inversion,
expected value, switching costs, and a failure pre-mortem.
Separate facts from assumptions. End with a test plan.

you get a different kind of output. The model names are not decoration. They create the rails.
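
If you script your prompts, the same move fits in a few lines. This is a minimal sketch, not any particular library's API; LENSES and build_prompt are names I made up for illustration.

```python
# Minimal sketch: the lens names are data, and the prompt is assembled
# from them. LENSES and build_prompt are illustrative names only.

LENSES = [
    "first principles",
    "inversion",
    "expected value",
    "switching costs",
    "a failure pre-mortem",
]

def build_prompt(question: str, lenses: list[str]) -> str:
    """Wrap a plain question in named thinking lenses."""
    return (
        f"{question}\n"
        f"Analyze this with {', '.join(lenses)}. "
        "Separate facts from assumptions. End with a test plan."
    )

print(build_prompt("Should I build this product?", LENSES))
```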

Google's People + AI Research guide makes a related point from the product side: people build mental models of AI systems, and those models shape trust, expectations, and misuse. In workflow design, the reverse is also true. We build mental models for AI systems so their work becomes more predictable.

2. The practical rule: small active set, large reference library

The split matters because the same model gives sharp answers to a framed question and mush to a fuzzy one. The lens is what frames the question.

Do not put 201 models into your daily prompt.

That is the fastest way to make your agent verbose and confused.

Use two layers instead.

| Layer | Size | Where it belongs | What it does |
|---|---|---|---|
| Active set | 5-12 models | Your working prompt or CLAUDE.md | Shows up often, because it matches your daily work |
| Reference library | 50-201 models | A Skill, reference file, or knowledge base | Loaded only when the task asks for deeper thinking |
| Verification layer | 3-6 checks | End of prompt or workflow | Prevents the agent from forcing the wrong framework |
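
One way to picture the split in a workflow, sketched with entirely hypothetical names: the active set is always in context, and reference models load only when the task touches one of their trigger topics.

```python
# Hypothetical sketch of the two-layer split. The active set is always
# loaded; reference models join only when the task asks for them.

ACTIVE_SET = [
    "first principles", "inversion", "probabilistic thinking", "pareto",
    "second-order thinking", "circle of competence", "checklist",
]

REFERENCE_LIBRARY = {
    "strategy": ["economic moats", "five forces", "disruptive innovation"],
    "writing": ["pyramid principle", "SCQA", "Feynman technique"],
    "negotiation": ["game theory", "tit for tat", "red line"],
}

def models_for(task: str) -> list[str]:
    """Start from the active set; pull in reference models on demand."""
    models = list(ACTIVE_SET)
    for topic, extras in REFERENCE_LIBRARY.items():
        if topic in task.lower():
            models.extend(extras)
    return models

print(models_for("Draft a strategy memo for the launch"))
```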

For my own work, the active set is boring on purpose:

  1. First principles: what is actually true?
  2. Inversion: how would this fail?
  3. Probabilistic thinking: how likely is each path?
  4. Pareto: which few variables matter most?
  5. Second-order thinking: what happens after the first effect?
  6. Circle of competence: what do I understand, and where am I guessing?
  7. Checklist: what must be verified before shipping?

That is enough for most decisions.

The full library is for harder situations: strategy, product positioning, hiring, writing, learning, negotiation, system design, and any problem where one lens makes you too confident too early.

3. The 201-model map

The source material behind this article is a very large framework library. I am not turning it into a short inspirational post. The point of this section is to preserve the full map while making it usable for AI workflows.

Read it like a toolbox, not a novel.

The mistake is treating each model as a separate essay. For AI work, I use them as retrieval handles. The category tells the agent what kind of reasoning to start with. The model name tells it which lens to apply.

General Thinking Tools

Use these when the problem is still fuzzy and you need to shape the question before you answer it. Ask the agent to separate facts, assumptions, failure paths, and testable claims.

  • First Principles: rebuild the problem from the few facts that must be true.
  • Inversion: ask how the plan would fail before asking how it would work.
  • Second-Order Thinking: inspect what happens after the first effect.
  • Probabilistic Thinking: replace certainty with likelihood, downside, and confidence.
  • Occam's Razor: prefer the simpler explanation until the evidence forces complexity.
  • Hanlon's Razor: test incompetence, incentives, and confusion before assuming malice.
  • Circle of Competence: mark what you know, what you can learn, and what you are guessing.
  • The Map Is Not the Territory: compare the model with reality before trusting the model.
  • Thought Experiments: run the decision in a simplified world to expose the core variable.
  • Checklists: turn known failure modes into a repeatable pre-flight check.
  • Dual-Track Analysis: analyze both rational incentives and human psychology.
  • Latticework of Mental Models: combine models from different fields instead of forcing one lens.
  • Falsifiability: ask what evidence would prove the claim wrong.
  • Causation vs. Correlation: separate movement together from one thing causing another.
  • Necessity and Sufficiency: ask whether a factor is required, enough, both, or neither.
  • Scenario Analysis: compare multiple futures instead of one forecast.
  • Counterfactual Thinking: ask what would have happened if one condition changed.
  • Lateral Thinking: escape the obvious frame and search for a different route.
  • Root Cause Analysis: move from symptom to cause before proposing a fix.

Psychology and Cognitive Biases

Use these when the hard part is not the facts, but the human interpretation of the facts. Ask the agent to name the likely bias before it recommends an action.

  • Confirmation Bias: when you may be collecting evidence for a conclusion you already like.
  • Anchoring Effect: when the first number or framing may be pulling every estimate.
  • Availability Heuristic: when the most memorable example may not be the most common one.
  • Representativeness Heuristic: when something feels typical, but the base rate says otherwise.
  • Loss Aversion: when fear of losing may be stronger than the actual downside.
  • Sunk Cost Fallacy: when past spending is polluting the next decision.
  • Hindsight Bias: when a past outcome feels more predictable than it really was.
  • Dunning-Kruger Effect: when low skill may be creating false confidence.
  • Framing Effect: when changing the wording changes the decision.
  • Bandwagon Effect: when popularity is being mistaken for evidence.
  • Social Proof: when other people's behavior is being used as a shortcut.
  • Halo Effect: when one good trait is making everything else look better.
  • Fundamental Attribution Error: when you blame character before testing context.
  • Self-Serving Bias: when wins become skill and losses become bad luck.
  • Cognitive Dissonance: when a person protects identity by rejecting conflicting evidence.
  • Backfire Effect: when correction may harden the original belief.
  • Optimism Bias: when the plan assumes the smooth version of the future.
  • Planning Fallacy: when timelines ignore friction, coordination, and rework.
  • Peak-End Rule: when the end and emotional peak distort the whole memory.
  • Endowment Effect: when ownership makes the thing feel more valuable.
  • Present Bias: when the near reward beats the better long-term path.
  • Decision Fatigue: when the next decision is worse because the previous ones drained you.
  • Paradox of Choice: when more options reduce clarity instead of improving it.
  • Survivorship Bias: when you only see the winners who made it back.
  • Groupthink: when agreement is being purchased by silence.
  • Learned Helplessness: when past failure makes current action feel pointless.
  • Impostor Syndrome: when competence is filtered through the fear of being exposed.
  • Pygmalion Effect: when expectations shape performance.
  • System 1 vs System 2: when fast intuition needs slow checking.
  • Prospect Theory: when gains and losses are not felt symmetrically.
  • Mental Accounting: when money, time, or effort is split into fake buckets.
  • Default Effect: when the preselected option wins because it is there.
  • Priming Effect: when recent cues influence judgment without being noticed.
  • Gambler's Fallacy: when randomness is mistaken for a pattern that must reverse.
  • Clustering Illusion: when noise looks like a meaningful cluster.

Math and Probability

Use these when uncertainty, sample size, distribution shape, or downside matters. Ask the agent to show assumptions, base rates, sensitivity, and expected value.

  • Normal Distribution: when outcomes cluster around an average.
  • Power Law Distribution: when a few outcomes may dominate the total.
  • Bayes' Theorem: when new evidence should update an existing belief.
  • Base Rates: when the outside view should discipline the inside story.
  • Expected Value: when payoff, probability, and downside must be judged together.
  • Compound Interest: when small repeated gains can become large over time.
  • Regression to the Mean: when extreme results are likely to move back toward average.
  • Law of Large Numbers: when more samples make the pattern more trustworthy.
  • Central Limit Theorem: when averages become more stable than individual observations.
  • Monte Carlo Simulation: when one forecast is too fragile and you need many runs.
  • Permutation and Combination: when the number of possible arrangements matters.
  • Conditional Probability: when the chance of one thing depends on another.
  • Fat-Tailed Distribution / Black Swan: when rare events carry most of the damage.
  • Cost-Benefit Analysis: when every option has a price.
  • DCF / NPV: when future cash flow needs to be compared with present cost.
  • Sensitivity Analysis: when one hidden assumption can flip the answer.
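
To make one of these concrete: expected value is just probability-weighted payoff. A toy comparison with made-up numbers, the kind of arithmetic worth asking the agent to show instead of a bare verdict:

```python
# Toy expected-value comparison; every number here is made up.

def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Each outcome is (probability, payoff); probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

option_a = [(0.9, 10_000), (0.1, -50_000)]  # likely small win, rare big loss
option_b = [(0.5, 6_000), (0.5, -1_000)]    # coin flip between modest outcomes

print(expected_value(option_a))  # about 4000
print(expected_value(option_b))  # 2500.0
# Option A wins on average, but a fat-tail lens or margin of safety
# would flag its -50,000 downside before you act on the mean alone.
```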

Economics and Business

Use these when the decision depends on incentives, markets, tradeoffs, or resource allocation. Ask the agent to identify who benefits, who pays, and what behavior the system rewards.

  • Supply and Demand: when price or behavior is shaped by scarcity and desire.
  • Opportunity Cost: when choosing one path quietly rejects another.
  • Comparative Advantage: when the best owner of a task is not always the best absolute performer.
  • Diminishing Returns: when more effort stops producing proportional gains.
  • Economies of Scale: when size lowers unit cost or increases leverage.
  • Sunk Cost: when old spending should not decide the next move.
  • Externalities: when costs or benefits spill onto people outside the transaction.
  • Asymmetric Information: when one side knows more than the other.
  • Moral Hazard: when someone takes risk because someone else pays the cost.
  • Adverse Selection: when bad participants are more likely to enter the market.
  • Principal-Agent Problem: when the person acting has different incentives from the owner.
  • Tragedy of the Commons: when rational individual use destroys a shared resource.
  • Free Rider Problem: when people benefit without contributing.
  • Market Failure: when the market result is not the socially useful result.
  • Goodhart's Law: when a metric stops working after it becomes the target.
  • Cobra Effect: when incentives produce the opposite behavior from what you wanted.
  • Pareto Principle: when a small fraction of causes produces most of the result.
  • Creative Destruction: when new systems replace old winners.
  • Gresham's Law: when lower-quality behavior drives out higher-quality behavior.
  • Coase Theorem: when transaction costs decide whether bargaining can solve the problem.
  • Arbitrage: when the same thing is priced differently in two places.
  • Game Theory: when each person's best move depends on others' moves.
  • Prisoner's Dilemma: when self-protection can make everyone worse off.
  • Nash Equilibrium: when no player can improve alone by changing strategy.
  • Incentive Alignment: when personal gain points in the same direction as system gain.

Competitive Advantage and Moats

Use these when you are judging a product, market, startup, or career position. Ask the agent to test whether the advantage survives imitation.

  • Economic Moats: when a business needs a durable defense.
  • Network Effects: when each new user makes the system more useful.
  • Switching Costs: when leaving is painful enough to create retention.
  • Brand Differentiation: when trust, taste, or identity changes the purchase.
  • Cost Advantage: when producing cheaper creates strategic room.
  • Barriers to Entry: when new competitors face hard constraints.
  • Supply-Side Economies of Scale: when scale lowers production cost.
  • Demand-Side Economies of Scale: when scale makes the product more attractive.
  • Learning/Experience Curve: when repeated doing creates a cost or quality edge.
  • Winner-Take-All Markets: when the top player captures most of the value.
  • Platform Economics: when two or more sides create value through the platform.
  • Lock-in: when customers stay because leaving breaks something important.
  • Disruptive Innovation: when a weaker-looking entrant improves from below.
  • Crossing the Chasm: when early adopters do not automatically lead to the mainstream.
  • Product/Market Fit: when demand pulls the product forward.
  • First-Mover Advantage/Disadvantage: when being early helps or hurts.
  • Regulatory Capture: when the referee becomes shaped by the players.
  • Sustainable Competitive Advantage: when the edge can compound instead of decay.

Physics, Engineering, and Systems Thinking

Use these when the situation behaves like a system, not a one-step task. Ask the agent to draw inputs, outputs, feedback loops, constraints, and intervention points.

  • Feedback Loops: when outputs change future inputs.
  • Critical Mass: when a system needs enough mass before it becomes self-sustaining.
  • Tipping Point: when gradual change suddenly becomes visible change.
  • Inertia: when the current state keeps moving unless force is applied.
  • Flywheel: when each cycle makes the next cycle easier.
  • Homeostasis/Equilibrium: when the system resists change to preserve stability.
  • Entropy: when systems drift toward disorder without maintenance.
  • Leverage: when a small input moves a larger system.
  • Activation Energy: when the hardest part is getting started.
  • Catalyst: when one event or actor accelerates change without doing all the work.
  • Bottlenecks & Constraints: when the slowest part sets the speed of the whole system.
  • Emergence: when the whole behaves differently from the parts.
  • Margin of Safety: when the system needs room for error.
  • Redundancy: when backup capacity prevents a single point of failure.
  • Butterfly Effect: when small changes can compound through a complex system.

Biology and Evolution

Use these when adaptation, competition, survival, or environment matters. Ask the agent to identify selection pressure and the traits that survive it.

  • Natural Selection: when the environment filters what lasts.
  • Adaptation & Fitness Landscapes: when different environments reward different traits.
  • Red Queen Effect: when you must keep improving just to stay in place.
  • Niches: when the winning move is to serve a narrow environment well.
  • Ecosystem: when many actors co-create the outcome.
  • Cooperation/Symbiosis: when two sides win by fitting together.
  • Self-Preservation Instinct: when resistance is about survival, not logic.
  • Replication & Variation: when copies plus small changes create evolution.
  • Lindy Effect: when survival over time is evidence of durability.
  • Antifragility: when stress, within limits, makes the system stronger.

Organizations and Institutions

Use these when people coordinate, scale, delegate, or avoid responsibility. Ask the agent to map ownership, incentives, meeting cost, and feedback quality.

  • Peter Principle: when promotion moves people into roles they cannot do well.
  • Parkinson's Law: when work expands to fill the time allowed.
  • The Mythical Man-Month: when adding people makes late work later.
  • Dunbar's Number: when group size breaks informal coordination.
  • Bystander Effect: when everyone sees the problem and nobody owns it.
  • Directly Responsible Individual: when every decision needs one accountable name.
  • Deliberate Practice: when skill improves only through targeted feedback.
  • Radical Candor: when useful feedback needs both care and directness.
  • Fixed Mindset vs. Growth Mindset: when identity blocks learning.
  • Maslow's Hierarchy of Needs: when higher motivation depends on lower needs being stable.
  • Maker's Schedule vs. Manager's Schedule: when deep work and meeting time collide.
  • 10x Team: when role fit and culture multiply individual talent.

Military and Game Strategy

Use these when there is conflict, negotiation, deterrence, or strategic positioning. Ask the agent to model the opponent's incentives and the cost of each move.

  • Mutually Assured Destruction: when both sides can hurt each other enough to avoid escalation.
  • Deterrence: when the point is to prevent action, not win a fight.
  • War of Attrition: when endurance and resources decide the result.
  • Guerrilla Warfare: when the weaker player wins by changing the battlefield.
  • Two-Front War: when splitting attention creates strategic weakness.
  • Pyrrhic Victory: when winning costs too much.
  • Exit Strategy: when entering without a way out creates hidden risk.
  • All-In Strategy: when focus helps only if the bet deserves concentration.
  • Scorched-Earth Tactics: when retreat destroys value to deny it to others.
  • Tit for Tat: when cooperation is maintained by matching behavior.
  • Carrot and Stick: when rewards and penalties shape behavior together.
  • Red Line: when a boundary must be clear before it is crossed.

Art and Narrative

Use these when communication, persuasion, teaching, or design matters. Ask the agent to test how the audience will perceive the structure, not just whether the logic is correct.

  • Audience: when the same idea must be adapted to different readers.
  • Genre: when expectations shape what feels right.
  • Contrast: when difference creates attention and meaning.
  • Framing: when the surrounding context changes interpretation.
  • Rhythm: when pacing controls attention.
  • Melody: when repetition and variation make a message memorable.
  • Representation: when the way you show reality changes what people see.
  • Plot: when events need causality, not just sequence.
  • Character: when motivation drives the story.
  • Setting: when context explains behavior.
  • Performance: when delivery changes the message.

Classic Business and Strategy Frameworks

Use these when the work needs structure, sequencing, or a repeatable decision path. Ask the agent to turn the model into a checklist or diagnostic.

  • SWOT: when you need a quick scan of strengths, weaknesses, opportunities, and threats.
  • Porter's Five Forces: when industry structure matters more than product features.
  • MECE Principle: when categories must be complete and non-overlapping.
  • PEST: when politics, economics, society, and technology shape the landscape.
  • BCG Matrix: when a portfolio needs resource allocation.
  • Pyramid Principle: when the message must lead with the conclusion.
  • SCQA Structure: when a narrative needs situation, complication, question, and answer.
  • OODA Loop: when faster sensing and acting beats perfect planning.
  • PDCA Loop: when improvement needs a repeatable cycle.
  • Eisenhower Matrix: when urgency and importance are being confused.
  • Decision Tree: when choices branch into different outcomes.
  • North Star Metric: when a team needs one guiding measure.
  • Logic Tree: when the problem needs to be decomposed.
  • Hypothesis-Driven Method: when work should test a claim instead of wander.
  • McKinsey 7S Framework: when strategy, structure, systems, skills, style, staff, and shared values must fit.

Learning, Creativity, and Personal Growth

Use these when the goal is to learn faster, create better, or explain more clearly. Ask the agent to turn the model into a practice loop.

  • Feynman Technique: when you need to prove you understand something by explaining it simply.
  • Six Thinking Hats: when a group needs to separate facts, emotion, risk, optimism, creativity, and process.
  • SCAMPER: when you need systematic creative variations.
  • Interleaving Practice: when mixing related skills improves transfer.
  • Spaced Repetition: when memory needs timed review.
  • 10,000-Hour Rule: when time matters only if practice quality is real.
  • T-shaped Skills: when depth and breadth need to work together.
  • Flow State: when challenge and skill are matched closely enough for deep focus.
  • Bloom's Taxonomy: when learning must move from remembering to creating.
  • Learning Loop: when every attempt should produce feedback for the next attempt.
  • Compound Learning: when each new model makes other models easier to use.
  • Curse of Knowledge: when expertise makes beginner confusion invisible.
  • After Action Review: when the fastest learning comes from a short post-run debrief.

4. How to choose the right model

Before mental models, you wrote prompts in plain language and hoped. After, you name a lens ("use second-order thinking", "apply inversion") and the output sharpens immediately. Same model, much better answer.

The worst way to use mental models is to ask, "Which famous model can I mention here?"

The better question is:

What kind of problem is this?

Use this simple routing table.

| Problem type | Start with | Add if the stakes are high | Output you should ask AI for |
|---|---|---|---|
| A decision with downside | Inversion | Expected value, margin of safety | Failure paths, probabilities, stop-loss rule |
| A product idea | First principles | Jobs to be done, switching costs, network effects | Core user need, weak assumptions, smallest test |
| A strategy question | Circle of competence | Moat, five forces, disruptive innovation | Where we can win, where we are pretending |
| A writing problem | Pyramid Principle | SCQA, framing, Feynman Technique | One-sentence thesis, structure, unclear sections |
| A learning plan | Pareto | Deliberate practice, spaced repetition | 20 percent curriculum and weekly practice loop |
| A team problem | Incentives | Dunbar's number, DRI, radical candor | Ownership map and incentive conflict |
| A complex system | Feedback loops | Entropy, bottlenecks, second-order thinking | System diagram and intervention points |

If an AI answer feels smart but not useful, the problem is often routing. You gave it a task, but not the right lens.
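
The routing table compresses naturally into a lookup. A sketch with my own illustrative names, nothing more:

```python
# The routing table above as data; ROUTES and lenses_for are
# illustrative names.

ROUTES = {
    "decision": ["inversion", "expected value", "margin of safety"],
    "product": ["first principles", "jobs to be done", "switching costs"],
    "strategy": ["circle of competence", "moats", "five forces"],
    "writing": ["pyramid principle", "SCQA", "Feynman technique"],
    "learning": ["pareto", "deliberate practice", "spaced repetition"],
    "team": ["incentives", "DRI", "radical candor"],
    "system": ["feedback loops", "bottlenecks", "second-order thinking"],
}

def lenses_for(problem_type: str) -> list[str]:
    """Route first, then prompt; fall back to the general-purpose pair."""
    return ROUTES.get(problem_type, ["first principles", "inversion"])

print(lenses_for("decision"))
```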

5. A five-step prompt pattern

Here is the prompt shape I use most often:

I need to decide: [decision].

Use these models:
1. First principles: separate facts from assumptions.
2. Inversion: list the most likely failure paths.
3. Expected value: estimate upside, downside, and probability.
4. Circle of competence: say what I actually know and what I am guessing.
5. Pareto: identify the few variables that matter most.

Output:
- Model-by-model analysis
- Contradictions between models
- The smallest test I can run this week
- What would change your recommendation

The last line is the most important one.

AI systems are good at producing a clean recommendation. You need them to tell you what would make the recommendation false.
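
If you reuse this shape often, it is worth keeping as a template. A sketch, assuming hypothetical names like DECISION_PROMPT:

```python
# The five-step pattern as a reusable template; DECISION_PROMPT and
# decide are hypothetical names.

DECISION_PROMPT = """I need to decide: {decision}.

Use these models:
1. First principles: separate facts from assumptions.
2. Inversion: list the most likely failure paths.
3. Expected value: estimate upside, downside, and probability.
4. Circle of competence: say what I actually know and what I am guessing.
5. Pareto: identify the few variables that matter most.

Output:
- Model-by-model analysis
- Contradictions between models
- The smallest test I can run this week
- What would change your recommendation
"""

def decide(decision: str) -> str:
    """Fill the template with a concrete decision."""
    return DECISION_PROMPT.format(decision=decision)

print(decide("whether to ship the paid tier this quarter"))
```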

6. The anti-abuse checklist

The most common failure mode for first-time mental-model users is collecting all 201 and using none. Pick three you already understand. Apply them daily for a week. Add one a week after that. Names without practice are decoration, not thinking tools.

Mental models can make you wiser.

They can also make you sound wiser while becoming more rigid.

Before you accept an AI answer based on models, ask these questions:

| Check | Why it matters |
|---|---|
| Does the model actually fit the problem? | Not every question needs first principles or a moat analysis. |
| Did the agent separate facts from assumptions? | Confident structure can hide weak evidence. |
| Did it use more than one lens? | One model can become a hammer. |
| Did it name the tradeoff? | A recommendation without a tradeoff is marketing. |
| Did it include verification? | The answer should produce an action you can test. |
| Did it soften uncertain claims? | Opinion should not be dressed up as fact. |
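
These checks can live at the end of every prompt, which is the verification layer from §2. A sketch, again with illustrative names:

```python
# The checklist as a verification suffix; CHECKS and with_verification
# are illustrative names.

CHECKS = [
    "Say whether each model actually fits this problem; drop any that do not.",
    "Separate facts from assumptions.",
    "Use at least two lenses and note where they disagree.",
    "Name the tradeoff behind the recommendation.",
    "End with one action I can test.",
    "Mark uncertain claims as uncertain.",
]

def with_verification(prompt: str) -> str:
    """Append the anti-abuse checks so the agent must self-audit."""
    return prompt + "\n\nBefore you answer:\n" + "\n".join(
        f"- {check}" for check in CHECKS
    )

print(with_verification("Should we enter this market?"))
```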

This is where many prompt libraries go wrong. They turn Munger, Musk, Kahneman, or any other thinker into a costume.

Do not ask AI to "be Charlie Munger."

Ask it to apply inversion, incentives, base rates, opportunity cost, and circle of competence to your problem, then show the reasoning and the limits.

7. Where this series goes next

This article gives you the map.

The next article turns the map into a Claude Code workflow: where to put the active set, when to use CLAUDE.md, when to move the large library into a Skill, how to design triggers, and how to keep context from exploding.

The third article is more personal. It explains why I keep around 200 models, why I use only a small set daily, and why AI finally made the big library practical.

The key is simple:

Your brain is the commander. AI is the staff room. Mental models are the operating doctrine.

Give the staff room better doctrine.

Hand off early. Ship confidently.

— Leo

FAQ

What are AI mental models?

AI mental models are thinking frameworks you can use yourself and encode into AI workflows so the agent reasons with clearer lenses instead of producing generic advice.

Do I need to memorize all 201 mental models?

No. Start with a small daily set, keep the rest as a reference library, and ask AI to select the relevant models for each situation.

How should beginners use mental models with AI?

Use three steps: name the problem type, select three to five models, then ask the AI to show assumptions, risks, and verification checks.

What is the biggest risk of using mental models with AI?

The main risk is forcing a model onto the wrong problem. Always ask whether the model fits and what it might hide.

Your billing was not updated.