The gap between an LLM that produces acceptable code and one that produces code that passes code review almost never lies in the model. It lies in how you brief it.
Any decent model can write a function that compiles. Passing code review is another matter: the code has to fit the repo, the team conventions, the edge cases that were not in the prompt, and the context the human reviewer takes for granted. None of that comes from a vague prompt — it comes from a designed one.
Here are five practices that consistently reduce review rounds when working with Claude (or any serious LLM) on real code.
1. Paste repo context, not just the task
The most common mistake is writing "add an endpoint that does X" without showing the model what the rest of the project's endpoints look like. The result is code that works but uses a different folder structure, a different validation pattern, a different error-handling style. The reviewer flags it, you go fix it by hand, half an hour gone.
A useful prompt starts with a real fragment from the repo: a neighboring endpoint, the repository interface, how errors are returned. Pasting it costs 30 seconds and the model delivers code that already mirrors the style. The mental rule is simple: if the human reviewer needs to see file X to understand the change, the model needs it too.
This holds even for small changes. "Add logging here" without showing how logging works in the rest of the module ends in code with console.log or some invented wrapper.
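To make that concrete, here is the kind of fragment worth pasting, sketched as a hypothetical Express handler. Every name below is invented; the real value comes from pasting your repo's actual neighbor.

```ts
import express, { type Request, type Response } from "express";

// Invented stand-ins for the repo's shared pieces, so this sketch runs
// on its own. In a real prompt you would paste the actual imports.
const ordersRepo = {
  findById: async (id: string) => (id === "1" ? { id, total: 42 } : null),
};
const ok = (res: Response, body: unknown) => res.status(200).json(body);
const notFound = (res: Response, code: string) =>
  res.status(404).json({ error: code });

const app = express();

// A neighboring endpoint pasted verbatim into the prompt: it shows the
// model the routing, repository access, and error style the new endpoint
// is expected to mirror.
app.get("/orders/:id", async (req: Request, res: Response) => {
  const order = await ordersRepo.findById(req.params.id);
  if (!order) return notFound(res, "order_not_found");
  return ok(res, order);
});
```

With that fragment in front of it, "add GET /orders/:id/items" comes back in the same shape instead of a freshly invented one.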
2. Spell out the constraints the team takes for granted
Team conventions almost never live in a README. They live in heads. Things like "no any in TypeScript", "domain errors return as Result<T, DomainError> not as exceptions", "DB columns are snake_case, entity properties are camelCase, the ORM handles the mapping in this file".
The model cannot infer them, and when it misses them the code fails review for reasons that were obvious to everyone but the model. The investment is small: a "Rules" section with three to five bullets at the end of the prompt, and half the problems disappear.
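As an illustration, the Result convention named above might look like this. This is a minimal sketch; the exact shape is whatever your repo's rules actually point at.

```ts
// Minimal sketch of a Result convention like the one in the rules above.
// Your repo's real types may differ; this is illustrative only.
type DomainError = { code: string; message: string };

type Result<T, E = DomainError> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// "Domain errors return as Result, not as exceptions" then means:
function parseQuantity(raw: string): Result<number> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n <= 0) {
    return {
      ok: false,
      error: { code: "invalid_quantity", message: `not a positive integer: ${raw}` },
    };
  }
  return { ok: true, value: n };
}
```

Three bullets naming this type, the casing rule, and the ban on `any` take a minute to write and spare the reviewer the same three comments.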
3. Pass negative examples when there is a pattern you reject
Saying "I do not want a giant switch" is vague. Showing a snippet and saying "I would not accept this pattern because every new case forces a change in this file" is actionable. The model sees the shape of the problem and produces an alternative that already avoids that specific trap.
The same applies to libraries: "do not use moment" works; "we already migrated everything to date-fns, so show me anything in this change that would break that migration" works better, because it makes the model think about risk, not just about a banned-imports list.
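Here is what such a negative example can look like in a prompt, alongside the kind of alternative it tends to unlock. The notification domain is invented; the shape is what matters.

```ts
type Handler = (to: string, body: string) => void;

// The snippet you paste as "I would not accept this": every new channel
// forces another case in this switch, so this file changes on every addition.
function sendRejected(channel: string, to: string, body: string): void {
  switch (channel) {
    case "email":
      console.log(`email to ${to}: ${body}`);
      break;
    case "sms":
      console.log(`sms to ${to}: ${body}`);
      break;
    default:
      console.warn(`unknown channel: ${channel}`);
  }
}

// One alternative the model can land on once it has seen the rejected
// pattern: channels register a handler, the dispatch code never changes.
const handlers = new Map<string, Handler>();
const register = (channel: string, handler: Handler) =>
  handlers.set(channel, handler);

register("email", (to, body) => console.log(`email to ${to}: ${body}`));
register("sms", (to, body) => console.log(`sms to ${to}: ${body}`));

function send(channel: string, to: string, body: string): void {
  const handler = handlers.get(channel);
  if (!handler) {
    console.warn(`unknown channel: ${channel}`);
    return;
  }
  handler(to, body);
}
```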
4. Ask it to validate assumptions before coding
This is the practice that saves the most wasted time. Before generating the code, ask the model to state its three strongest assumptions so you can confirm or correct them.
It works because the LLM reveals on the spot whether it misunderstood the domain. "I am assuming the user is already authenticated on this endpoint" is an assumption worth confirming before writing a hundred lines that take it for granted. If you reply "no, this endpoint is also called from a webhook with no auth", the model rewrites the structure before generating, not after.
It is an extra chat round, but it saves the second round of human review.
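In practice this is one closing line in the prompt, something along the lines of:

```
Before writing any code, list your three strongest assumptions about this
task (auth, data shapes, who calls this) and wait for my confirmation.
```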
5. Close the prompt with the acceptance criteria
The closest thing an LLM has to a test is knowing how you will judge the output. End the prompt with two or three bullets stating what the code must satisfy for you to accept it:
- Compiles with no warnings under the repo's strict TS config.
- Explicitly handles the "entity not found" case (404, not exception).
- Adds no new dependencies.
The model prioritizes those criteria over everything else. If it does add a new dependency, it justifies it in its reply; if it cannot meet a criterion, it tells you. It is the closest thing to TDD you can do in a prompt.
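Taken one step further, a criterion can travel as a tiny test. This is a sketch using Node's built-in test runner; getUserHandler is a hypothetical stand-in for whatever you are asking the model to write, stubbed here only so the snippet runs on its own.

```ts
import { test } from "node:test";
import assert from "node:assert/strict";

// Stub with the signature of the handler the model is being asked to
// write; in the prompt itself you would supply only the test below.
type HandlerResult = { status: number; body?: unknown };
async function getUserHandler(req: {
  params: { id: string };
}): Promise<HandlerResult> {
  return { status: 404, body: { error: "user_not_found" } };
}

// The "404, not exception" acceptance criterion, pinned down exactly.
test("unknown entity returns 404 instead of throwing", async () => {
  const result = await getUserHandler({ params: { id: "does-not-exist" } });
  assert.equal(result.status, 404);
});
```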
The anti-pattern that ruins everything above
There is one pattern I see often that neutralizes every practice above: asking the model to "make this quick" or to produce "a simple solution". It sounds pragmatic, but it gives the LLM permission to skip the validations, error handling, and edge cases that the reviewer will flag immediately.
If the solution truly has to be simple, let the code prove it. Ask for the right shape and you will get something simple, because the problem is simple. Asking for "quick and simple" is asking for "skip the steps I will have to redo myself".
What sits underneath all these practices
The five practices reduce to one principle: treat the model as a collaborator who has read a lot but does not know this repo, this team, or this domain in particular. People with that exact mix exist in real life: a developer in their first week on a new project. Nobody would ask them to "implement the endpoint" without first showing them the module code, the team conventions, and the patterns the team avoids.
That same level of onboarding is what is missing from the prompts that produce code that fails review. And it is the only reason practices this obvious move the needle as much as they do.
No LLM will infer what your team takes for granted. But it can read and respect whatever you make explicit in thirty seconds.