Your prompts are a personal engineering playbook
Every prompt that produces good output encodes a decision. How you debug. How you refactor. What you check during review. That's institutional knowledge. Most of it evaporates when you close the terminal.
Debugging: save what worked, find it when it matters
Tuesday afternoon. You're staring at a connection pool that drops requests under load. You write a prompt that gets Claude to trace the exact issue:
```
Trace the race condition in the connection pool. Check thread safety of the shared state in pool.py. Show me the execution timeline when two requests hit acquire() simultaneously. Focus on the lock ordering between _get_conn and _return_conn.
```

Claude nails it. Points to the exact line where the mutex isn't held during the state transition. You fix it, ship it, move on.
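The fix that kind of prompt surfaces is usually small. Here's a minimal sketch of the pattern, not the article's actual pool.py: `Pool`, `acquire`, and `release` are illustrative names, and the point is that the entire check-then-mutate sequence runs while the mutex is held. Releasing the lock between the check and the state transition is the race.

```python
import threading

class Pool:
    def __init__(self, size):
        self._lock = threading.Lock()
        self._free = list(range(size))   # connection ids standing in for real connections
        self._in_use = set()

    def acquire(self):
        # Hold the lock across the whole state transition:
        # check availability AND move the connection in one critical section.
        with self._lock:
            if not self._free:
                return None              # caller retries or queues
            conn = self._free.pop()
            self._in_use.add(conn)
            return conn

    def release(self, conn):
        with self._lock:
            self._in_use.discard(conn)
            self._free.append(conn)

def hammer(pool, iterations):
    for _ in range(iterations):
        conn = pool.acquire()
        if conn is not None:
            pool.release(conn)

# Eight threads hammering acquire()/release() concurrently.
pool = Pool(size=4)
threads = [threading.Thread(target=hammer, args=(pool, 10_000)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(pool._free))  # 4 — every connection returned, none lost or duplicated
```

Drop the `with self._lock:` in `acquire()` and two threads can pop the same connection, which is exactly the dropped-requests-under-load symptom described above.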
Three weeks later. Different project, same symptom. Requests dropping under concurrency. You open Prompt Cellar and search race condition pool.
There it is. The exact prompt. The project context. You copy it, adjust the file name, and send it. Two minutes instead of twenty.
The prompt isn't just text. It's a debugging methodology you've already validated. The specificity of "show me the execution timeline" and "focus on the lock ordering" is what made it work the first time. That's worth keeping.
Refactoring: consistent patterns across projects
You're converting a module from hard-coded imports to dependency injection. You've done this before. You know what prompt produces clean results:
```
Refactor this module to use dependency injection. Keep the public API identical. Add constructor parameters for the dependencies currently imported at module level. Default parameter values should preserve current behavior so existing callers don't break. Add a factory function that wires up the production dependencies.
```

That last sentence matters. Without it, Claude sometimes refactors the module but leaves callers stranded with no obvious way to instantiate the new version. You learned that the hard way on a previous project.
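The shape of the output that prompt asks for can be sketched in a few lines. This is a hedged illustration, not output from the tool: `Notifier`, `smtp_send`, and `make_notifier` are hypothetical names standing in for whatever the module actually imports.

```python
def smtp_send(to, body):
    """Stand-in for the dependency that used to be a hard-coded module-level import."""
    return f"sent to {to}"

class Notifier:
    # The default value preserves current behavior, so existing callers
    # that write Notifier() keep working unchanged.
    def __init__(self, send=smtp_send):
        self._send = send

    def notify(self, user, message):
        return self._send(user, message)

def make_notifier():
    """Factory that wires up the production dependencies."""
    return Notifier(send=smtp_send)

# Existing callers are unbroken:
assert Notifier().notify("ana@example.com", "hi") == "sent to ana@example.com"

# Tests can now inject a fake instead of patching the import:
fake = Notifier(send=lambda to, body: ("captured", to, body))
assert fake.notify("x", "y") == ("captured", "x", "y")
```

The factory is the piece the prompt's last sentence guards: without `make_notifier()`, callers know the constructor changed but not how production code is supposed to build one.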
With Prompt Cellar, you search dependency injection refactor and pull up the prompt that includes all the constraints you've iterated on. Not a rough approximation from memory. The actual prompt, with the guardrails that prevent bad output.
Why this matters for refactoring specifically
Refactoring prompts are deceptively hard to write well. The difference between "refactor to use DI" and the prompt above is the difference between a clean PR and three rounds of review comments. The good version encodes constraints you figured out through trial and error. Losing it means repeating those errors.
Code review: build a library of what to check
You've developed a mental checklist for reviewing backend PRs. At some point you turned it into a prompt:
```
Review this PR for: SQL injection via string concatenation, unvalidated input reaching database queries, missing error handling on external service calls, N+1 queries in ORM usage, and race conditions in concurrent access to shared state. For each issue found, show the file, line, and a concrete fix.
```

That prompt didn't start out this complete. The first version just said "review this code for security issues." Over a few weeks you added the specific categories that kept catching real bugs. The N+1 query check was added after a production incident. The race condition check after a postmortem.
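The N+1 check earns its place because the pattern is easy to miss in review. A minimal illustration in plain Python, with no real ORM: `fetch_orders`, `fetch_user`, and `fetch_users` are hypothetical stand-ins that just count the queries they would issue.

```python
QUERIES = []

def fetch_orders():
    QUERIES.append("SELECT * FROM orders")
    return [{"id": i, "user_id": i % 3} for i in range(6)]

def fetch_user(user_id):
    QUERIES.append(f"SELECT * FROM users WHERE id = {user_id}")
    return {"id": user_id}

def fetch_users(user_ids):
    QUERIES.append(f"SELECT * FROM users WHERE id IN {tuple(sorted(user_ids))}")
    return {uid: {"id": uid} for uid in user_ids}

# N+1: one query for the orders, then one query per order for its user.
QUERIES.clear()
for order in fetch_orders():
    order["user"] = fetch_user(order["user_id"])
print(len(QUERIES))  # 7

# Batched: one query for the orders, one IN-query for all users.
QUERIES.clear()
orders = fetch_orders()
users = fetch_users({o["user_id"] for o in orders})
for o in orders:
    o["user"] = users[o["user_id"]]
print(len(QUERIES))  # 2
```

Seven queries versus two on six rows; on six thousand rows the same loop is six thousand and one queries, which is the kind of concrete finding the review prompt asks Claude to report with file, line, and fix.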
Now you have variants. One for backend PRs. One for API endpoints. One for database migrations. Each tuned to catch the problems that actually happen in your codebase.
Without saved prompts
You write "review this PR" and get a generic response. You forget to check for N+1s. The issue ships. You add it to your mental list. Until you forget again.
With saved prompts
You search backend review, grab your battle-tested prompt, and get results that check for everything you've learned to look for.
Migrations: prompts that encode your safety checklist
Database migrations are high-stakes. You've learned what to check through painful experience. A good migration prompt encodes all of it:
```
Write an Alembic migration to add the audit_log table. Columns: id (uuid, primary key), user_id (foreign key to users), action (varchar 50), target_type (varchar 50), target_id (uuid), metadata (jsonb, nullable), created_at (timestamp with timezone, default now). Add an index on (user_id, created_at). Include a downgrade that drops the table. Verify the migration is safe to run on a table with existing traffic: no full table locks.
```

That last constraint, "no full table locks," is there because you once ran a migration that locked a 50M-row table for four minutes in production. You'll never forget it. But you might forget to mention it in your next migration prompt.
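A hedged sketch of what that prompt might produce, assuming Alembic with the PostgreSQL dialect. The revision ids are placeholders, and a real migration would live in its own versioned file generated by `alembic revision`.

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

revision = "abc123"          # placeholder revision id
down_revision = None         # placeholder parent revision

def upgrade():
    op.create_table(
        "audit_log",
        sa.Column("id", postgresql.UUID(as_uuid=True), primary_key=True),
        sa.Column("user_id", postgresql.UUID(as_uuid=True),
                  sa.ForeignKey("users.id"), nullable=False),
        sa.Column("action", sa.String(50), nullable=False),
        sa.Column("target_type", sa.String(50), nullable=False),
        sa.Column("target_id", postgresql.UUID(as_uuid=True), nullable=False),
        sa.Column("metadata", postgresql.JSONB, nullable=True),
        sa.Column("created_at", sa.TIMESTAMP(timezone=True),
                  server_default=sa.func.now(), nullable=False),
    )
    # Safe under traffic: the table is brand new, so creating the index here
    # locks nothing that existing queries touch. Adding an index to a table
    # that is already live would instead use postgresql_concurrently=True.
    op.create_index("ix_audit_log_user_id_created_at",
                    "audit_log", ["user_id", "created_at"])

def downgrade():
    op.drop_index("ix_audit_log_user_id_created_at", table_name="audit_log")
    op.drop_table("audit_log")
```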
When you save migration prompts, you're saving more than SQL generation instructions. You're saving your safety checklist. The one you built by making mistakes and learning from incidents.
```
# Three weeks later, different project, same need
# Search Prompt Cellar for your proven migration template
# Adjust the table name and columns. Keep the safety constraints.
```

The compound effect
One prompt saves 5 minutes. A hundred saved prompts make you measurably faster.
Week 1: a handful of prompts
You save a debugging prompt and a refactoring prompt. You reuse one of them. Feels nice. Marginal time savings.
Month 1: patterns start forming
You have 40 prompts. You notice you have three variants of your code review prompt, each tuned for a different type of PR. You stop writing review prompts from scratch. You search and adapt.
Month 3: it's a personal toolkit
You have prompts for debugging concurrency issues, generating test fixtures, writing migration scripts, reviewing PRs, drafting design docs, and analyzing performance traces. When you hit a task you've seen before, you reach for a prompt instead of starting from zero.
Month 6: institutional knowledge
Your prompt library reflects how you work. It encodes your debugging methodology, your review checklist, your refactoring approach. A new team member could learn your engineering patterns by reading your prompts. That's not an accident. That's the compound effect of capturing what works.
How it works in practice
You don't change how you work. Prompt Cellar captures everything in the background.
You use Claude Code normally
Write prompts. Debug. Refactor. Review. Every prompt is captured automatically by a native hook.
Search when you need something
Open Prompt Cellar. Search migration audit_log or race condition pool. Find the prompt. Copy it.
Reuse and adapt
Paste the prompt into your current session. Adjust the specifics. Keep the structure and constraints that make it work.
```
npm install -g @promptcellar/pc
pc login
pc setup
```

Works with Claude Code, Codex CLI, and Gemini CLI. Setup takes under a minute.
Start building your prompt playbook
Your first 100 prompts are free. No credit card. Setup takes one command.
Get Started Free