70 prompt tips for Claude Code are floating around. 7 actually moved the needle for me. Real before/after terminal sessions, a copy-paste cheat sheet, and the 3 patterns I dropped.
70+ "prompt tips for Claude Code" are floating around. I tested most of them
7 actually moved the needle. 3 were cargo culting. The rest were neutral
Below: the 7 ranked by ROI with real before/after terminal sessions, the 3 I dropped and why, and a copy-paste cheat sheet
Works whether you're a programmer or not — examples are from writing, research, and content work
Here's the thing nobody says about prompt engineering for Claude Code: most of the advice you'll read is from 2024. We're in 2026. Claude is smarter. Half the old tricks are now either automatic or actively wasted tokens.
I ran every tip through three filters:
Filter #1: it had to show a measurable before/after. Task finishes faster, fewer retries, less wasted context. If I couldn't point to a concrete improvement, the pattern didn't make the list.
Filter #2: non-programmers had to be able to use it. If a pattern assumed you could read a stack trace or rewrite a regex, it was out. I test everything in my content workflow first: files, folders, drafts, research notes.
Filter #3: it had to still apply in 2026. Claude 4-series is a different animal from Claude 2 or 3. Some of the most-shared 2024 tricks are now redundant because Claude does them automatically.
Most of the 70 "tips" online fail filter #1 — they're repeated because they sound clever, not because anyone showed the numbers.
The 7 Claude Code Prompts That Actually Work
Ranked by return on effort. #1 costs nothing and saves the most. #7 is the smallest, but it compounds over time.
#1 — State the goal, not the steps
The pattern: Tell Claude what you want. Don't tell it how to get there.
Before (the wrong way):
Go through each file in /research/. Read the metadata.
Group them by date. Then group by topic. Then pick the
top 10. Then write a summary for each. Then combine the
summaries into one outline.
Claude does exactly that. Mechanically. Seven steps. If any step misfires, the rest go sideways.
After:
> Turn /research/ into an outline for a long essay on
> AI workflows. Rank sections by what's most interesting
> to a non-technical reader.
That's it. Claude reads the folder, decides the grouping logic on its own, and gives you an outline in one pass.
Why it works: Claude is good at deciding how. It's less good at guessing what — because it doesn't know what you want. Over-specifying steps constrains Claude to a worse plan than the one it would have picked on its own.
When to override: If the task has a hard constraint ("output must be in this JSON format"), state that at the end. Constraints are fine. Steps are usually noise.
#2 — Give one example, not five
The pattern: One concrete before/after beats five abstract rules.
Before:
Rewrite these emails following our brand voice.
Be clear. Be warm. Be concise. Match the tone of a
thoughtful friend. Don't use corporate language.
Don't be too formal but also not too casual.
Include a greeting. Sign off with first name only.
Eight rules. Claude follows about five of them, picks two at random, and ignores the last.
After:
> Rewrite these emails to match this one:
> [paste your best email, ~100 words]
> Same tone, same rhythm, same sign-off style.
One complete example. Claude pattern-matches against it and produces consistent output.
Why it works: LLMs generalize from examples better than from rules. A good example carries ten rules worth of signal in half the tokens. Anthropic's own prompting guide calls this "show, don't tell" — and it's the #1 trick they recommend.
#3 — Use the thinking triggers (`think`, `think hard`, `ultrathink`)
The pattern: For complex tasks, put a thinking trigger word in your prompt. Claude allocates more reasoning tokens before replying.
Before:
> How should I restructure my research folder? There
> are 200 files and they're a mess.
Claude gives you a generic three-bucket suggestion in 20 seconds.
After:
> Think hard about how to restructure my research folder.
> Look at the actual filenames, find natural groupings,
> and propose a structure that matches my working style.
Claude now spends 60 seconds reasoning before answering. The suggestion is specific, references actual filenames, and considers patterns the default response missed.
When not to use it: Simple tasks. "Rename this file" doesn't need think hard — you'll just burn tokens for no quality bump. Use it for planning, refactoring, analysis, and decisions you'd want a colleague to actually think about.
#4 — `@` reference, don't paste
The pattern: Instead of pasting a file into the prompt, use @path/to/file to reference it.
Before:
> Follow this format for the next article: [pastes 400-line template]
Every session pays for those 400 lines. The template gets stale. When you update it, old sessions don't know.
After:
> Follow the format in @templates/article-standard.md
> for the next article.
Claude reads the file when relevant. Your prompt stays short. The template stays canonical.
Why it works: @ is a pointer. Claude loads the referenced file lazily, on demand. That's the same principle your CLAUDE.md should follow — and Anthropic's docs push this directly: "Use @ to reference files, paste screenshots/images, or pipe data directly."
Extra trick: You can point at specific line ranges — @src/utils/format.ts#L15-40 — and Claude loads just that slice. The VS Code extension adds a keybinding that inserts the reference from your current selection automatically.
#5 — Stage after every approved step
The pattern: After each change Claude makes that you're happy with, run git add . — don't commit yet. Staging becomes your safety net.
Flow:
> Rename the files in /notes/ to kebab-case.
[Claude renames 12 files]
> Looks good.
$ git add .
> Now consolidate the duplicates.
[Claude merges files. One merge is wrong.]
> Undo that last merge.
$ git restore .   # discards the unstaged damage, keeps the staged checkpoint
If the next step breaks something, staged changes are your rollback point. Commits are too permanent; staging is cheap. One Hacker News thread on this pattern has over 500 upvotes for a reason — it's the single biggest "oh, that's why" moment most new users have.
Why it works: Claude's completion summary is not a diff. Claude will tell you it succeeded while something drifted. Your eyes plus staging are truth; the summary is vibes.
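The checkpoint rhythm above fits in a few lines of shell. A minimal sketch run in a throwaway repo — the file name and contents are illustrative, not from a real session:

```shell
# Stage-as-checkpoint: stage the good state, let the next step go
# wrong, then roll the working tree back to what the index holds.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q

printf 'clean draft\n' > notes.md
git add .                        # checkpoint: the good state now lives in the index

printf 'bad merge\n' > notes.md  # the next step goes sideways
git restore .                    # discard the unstaged damage, keep the checkpoint

cat notes.md                     # prints: clean draft
```

`git restore .` (like the older `git checkout .`) restores the working tree from the index, which is exactly why staging without committing works as a cheap rollback point.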
#6 — Reverse the question: ask Claude to ask you
The pattern: When you're about to give Claude a big ambiguous task, don't. Instead, have Claude interrogate you first.
Before:
> Build me a personal CRM in a folder structure. I'll
> use it to track the 200 people I know professionally.
Claude makes seven assumptions about the schema you never specified. You end up re-doing half of it.
After:
> I want to build a personal CRM as a folder structure.
> Before you propose anything, ask me the 5 questions
> you most need answered to design this well.
Claude comes back with questions like: "Do you need tags, or categories, or both? Will this sync across devices? Should interactions be separate files or appended?" You answer. Claude now has a 10x better brief than you would have written freehand.
Why it works: You don't know what you don't know. Claude is better at spotting the questions than you are at anticipating them. This pattern alone has saved me probably 20 hours of rework.
Template file you can reuse — save this as @templates/reverse-ask.md:
Before you start on the task below, ask me up to 5
questions you need answered to do this well.
Prioritize questions that would prevent rework.
Wait for my answers before proposing anything.
Task: {paste your task here}
#7 — Update-as-you-go
The pattern: Ask Claude to log what you learned at the end of each session into a running file.
At the end of a task:
> Before we close this out: add one line to
> @notes/lessons.md describing what we learned today
> that future-me should know.
Over weeks, lessons.md becomes a compressed history of your work. Next session, you can point Claude at it with @notes/lessons.md and it'll pick up context faster than re-explaining.
Why it works: Auto Memory handles some of this, but it's opaque — you don't always know what it saved. A plain lessons.md is explicit, diffable, and yours. A Reddit user described it well: "the real trick is making the write explicit in your instructions, because the model skips it when context gets heavy."
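The explicit write can be as small as one shell line. A sketch using the article's @notes/lessons.md path — the lesson text itself is a made-up placeholder:

```shell
# Append one dated lesson line, pattern #7 style, in a scratch directory.
dir=$(mktemp -d); cd "$dir"
mkdir -p notes
echo "$(date +%F): goal-level prompts beat step lists" >> notes/lessons.md
tail -n 1 notes/lessons.md
```

Whether you append it yourself or ask Claude to, the point is the same: the write is explicit, so nothing silently skips it when context gets heavy.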
The 3 Claude Code Prompt Patterns I Dropped
Dropped #1 — "Act as a senior engineer / PhD researcher / world-class editor"
This pattern had real juice in 2023. Telling GPT-3.5 "act as a senior engineer" genuinely changed its output.
It doesn't anymore.
I tested this on Claude 4.6 and 4.7. I compared:
"Refactor this function"
"As a senior software engineer with 20 years of experience, refactor this function"
The outputs were indistinguishable. Modern instruction-tuned models already default to competent-professional tone. The prefix adds tokens without adding signal.
What to do instead: Specify the output you want (format, length, audience), not the role you want Claude to play.
Dropped #2 — "Take a deep breath" / "Step by step" / "Let's think carefully"
These classic 2023 "chain of thought" primers were powerful when models didn't reason by default. Claude 4-series has native extended thinking built in — saying think or think hard triggers it explicitly and more reliably.
"Take a deep breath" now does nothing measurable. It's a fossil from an earlier era. Use the thinking triggers (pattern #3) instead.
Dropped #3 — Stuffing system-level rules into every user prompt
I used to start every prompt with something like:
> You are an expert writing assistant. Output must be
> plain English, no corporate jargon. Use short sentences.
> Don't use em-dashes. Don't add emojis. Don't summarize
> what you did at the end. Now: [my actual request]
That's what CLAUDE.md is for. If you're repeating the same 200 tokens in every prompt, move them into ~/.claude/CLAUDE.md or the project ./CLAUDE.md — they'll load automatically, once, and every prompt stays lean.
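A minimal sketch of what those repeated rules look like once they live in CLAUDE.md — the rules are lifted from the example prompt above, the heading is mine:

```markdown
# Writing rules (loaded automatically every session)

- Plain English, no corporate jargon. Short sentences.
- No em-dashes, no emojis.
- Don't summarize what you did at the end of a response.
```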
The dropped version of that same session:
> Rewrite this paragraph for clarity.
Eight tokens instead of sixty. Same result, because CLAUDE.md already set the voice rules.
A Quick Word on Slash Commands
Prompts aren't everything. Two slash commands matter more than any prompt pattern for day-to-day flow:
| Command | What it does | When I use it |
| --- | --- | --- |
| /clear | Wipes the conversation, fresh context | Between unrelated tasks — not mid-task |
| /compact | Summarizes the conversation, keeps key state | When context is down to around 20% remaining and I want to keep going |
Multiple users on r/ClaudeAI report the same rhythm: /compact when the context window is getting full but the task isn't done, /clear when you're starting something unrelated. The official slash commands reference has the full list, but these two cover 90% of the value.
The Claude Code Prompt Cheat Sheet
One screen, copy-paste, edit the bracketed parts.
# 7 prompts that actually work (2026)
1. Goal, not steps
"Turn /{folder}/ into {output}. Rank by {criteria}."
2. One example, not five
"Match this: [paste one complete example]"
3. Thinking trigger
"Think hard about {complex question}. Consider {dimensions}."
4. @ reference
"Follow @{path/to/file.md}. Update {specific part}."
5. Stage, then continue
After each good step: git add .
Next prompt: "Now do {next step}."
6. Reverse-ask
"Before you start on {task}, ask me the 5 questions
you most need answered to do this well."
7. Update-as-you-go
"Add one line to @notes/lessons.md about what we
learned today that future-me should know."
Tape that above your keyboard for a week. You won't need it after that.
When a Pattern Graduates Into a Skill
Here's the honest progression nobody draws a diagram of: a prompt pattern you use once or twice is a pattern. A prompt pattern you use weekly is a habit. A prompt pattern you use daily is a candidate for a Skill — Claude Code's lightweight templates that load on demand.
Pattern #6 (reverse-ask) is the clearest example. I ran the full template three times a week for a month before realizing I was pasting the same boilerplate each time. I converted it into a Skill — a reverse-ask.md file Claude could invoke with a short trigger — and the boilerplate vanished from every prompt thereafter. I kept 5 Skills and deleted 3; this one survived because the pattern behind it was already proven.
Rule of thumb: don't build a Skill for a prompt you've used twice. Build it for the one you've used fifteen times and still catch yourself retyping.
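For reference, a graduated reverse-ask Skill can be a single SKILL.md file. A sketch assuming the standard name/description frontmatter that Claude Code Skills use, with wording adapted from the reverse-ask template above:

```markdown
---
name: reverse-ask
description: Ask clarifying questions before starting a large, ambiguous task
---

Before you start on the task the user gives you, ask up to 5
questions you need answered to do this well. Prioritize questions
that would prevent rework. Wait for answers before proposing anything.
```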
If you're curious about the cost side of running with these patterns every day — exact dollar amounts, plan choices, and the number of sessions I logged — my six-month retrospective has the full breakdown.
Key Takeaways
State the goal, not the steps — Claude is better at picking how than you are at prescribing it. Over-specifying steps constrains Claude to a worse plan
One complete example beats five abstract rules — a single pasted "write like this" reference out-performs 10 rules with the same intent
The thinking triggers work at different levels — think (~4k tokens), think hard / megathink (~10k), ultrathink (~32k). Only use for planning, analysis, and deep reasoning
Classic 2023 prompt tricks are now dead weight — "act as a senior engineer" and "take a deep breath" add zero value on Claude 4.x. System-level prompt stuffing is what CLAUDE.md is for
/clear between unrelated tasks, /compact when a single task runs long — context management is the #1 skill for Claude Code output quality
A prompt you type 15+ times is a Skill candidate — the graduation path is prompt → habit → Skill
FAQ
Do prompts still matter now that Claude Code has CLAUDE.md?
Yes, but differently. CLAUDE.md handles the things that are true every session — voice, rules, file conventions. Prompts are how you brief Claude on today's task. A good CLAUDE.md plus a good prompt is maybe 3-4x better than either one alone. If your CLAUDE.md is dialed in, your prompts can be much shorter.
What's the difference between `think`, `think hard`, and `ultrathink`?
They're escalating triggers for extended thinking. think allocates around 4,000 tokens of reasoning; think hard (or megathink) goes to about 10,000; ultrathink caps near 32,000. More thinking = better answers on complex tasks, but also more tokens consumed. For simple tasks, skip them entirely.
Should I use `/clear` or `/compact` between tasks?
/clear when the next task is unrelated to the current one — Claude starts fresh with just CLAUDE.md loaded. /compact when you want to continue the same thread but the context is getting heavy. A good rule: /compact around 20% remaining, /clear when switching domains entirely.
What if English isn't my first language?
The patterns work regardless. If anything, patterns #2 (one example) and #6 (reverse-ask) are more useful for non-native speakers — you don't have to phrase things perfectly, you just have to provide the shape of what you want or let Claude ask the right questions. I use these constantly, and English is my second language too.
Do I need to memorize the exact trigger words?
No. think and think hard are the two I use weekly; the others are there if you need them. Claude recognizes natural phrasing — "think carefully about X" and "really think through Y" both activate extended thinking. Don't stress the exact words.
Which pattern should I start with?
1 (Goal, not steps) — it costs nothing, works everywhere, and most people are already doing the opposite. Try it for a week. Every task. You'll feel the difference within three or four sessions.