
How Repo Rules, MCP, and Approvals Control AI Coding Agents

The three control surfaces that decide what an agent can read, run, and change inside your repo.

2 min read

Watch (3:09)




Full transcript (from the video)

Most AI coding demos skip the layer that shapes the result. They show the model, then jump to the diff. The real control comes earlier: repo rules, tool access, and proof steps shape the first edit.

When that layer is solid, the model looks disciplined. When it is weak, the model drifts. The instruction stack is not one file: it includes repo guidance, tools, and a final check. Those pieces shape the agent from the start. That is why two similar models can behave in different ways. One works inside a defined system; the other is improvising.

Repo rules shape the first pass. Without them, the agent searches the wrong part of the repo. Good rules keep the search tight. They name the safe paths, the paths to avoid, and the check that proves the change. That does not make the model smarter; it keeps the work focused.
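To make that concrete, here is a minimal sketch of such a rules file. The file name, paths, and test command are all hypothetical; use whatever guidance-file convention your agent reads:

```markdown
# Agent rules (hypothetical example)

## Safe paths
- Make changes under `src/payments/` and `tests/payments/`.

## Paths to avoid
- Never edit `migrations/` or generated code under `src/gen/`.

## Proof
- A change is done only when `pytest tests/payments/ -q` passes.
```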

This is why MCP matters. The goal is not more tools; the goal is the right evidence. A clean tool surface helps the agent stay grounded. A noisy tool surface creates more guessing. The best MCP setup is curated.

Approvals and sandboxing are the core. They decide what the agent can do first. If every action needs approval, the workflow slows down. If nothing is constrained, it gets risky. The middle ground is simple: let the agent read, search, and test by default, and ask for approval before destructive changes. This is the part more demos should show.

Retrieval can suggest likely files, but it does not prove the path is right. Terminal proof does. When the agent runs the test, checks the symbol, reviews, and reruns the same command, the loop becomes objective. That is how you catch wrong-file work early.

This is the comparison to remember: keep the model the same, change the stack around it, and the result changes fast. A weak setup leads to broad, shaky edits and wasted time. A grounded setup leads to tighter search, smaller diffs, and quicker proof. The model did not become magical. The working environment became disciplined.
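The approval split described above is easy to sketch. A minimal version, assuming the agent harness gates every shell command through one function (the allow-lists and function are hypothetical, not a real agent API):

```python
import shlex

# Commands the agent may run without asking: read, search, and test.
AUTO_ALLOWED = {"ls", "cat", "grep", "rg", "find", "pytest"}
# Git subcommands that rewrite history or touch the remote.
DESTRUCTIVE_GIT = {"push", "reset", "clean"}

def needs_approval(command: str) -> bool:
    """Return True when a shell command should pause for human approval."""
    argv = shlex.split(command)
    if not argv:
        return True  # nothing to inspect: fail closed
    program, *rest = argv
    if program == "git":
        return bool(rest) and rest[0] in DESTRUCTIVE_GIT
    # Anything outside the read/search/test allow-list asks first.
    return program not in AUTO_ALLOWED

assert needs_approval("rg TODO src/") is False
assert needs_approval("rm -rf build/") is True
assert needs_approval("git push --force") is True
```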

The practical setup is simpler than it looks. Start with one repo-wide guidance file. Add path-specific rules only where you need them. Keep the MCP list short and useful.
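Formats differ by client, but many MCP clients read a JSON map of servers. A curated list stays this short; the server names and packages below are hypothetical:

```json
{
  "mcpServers": {
    "docs": {
      "command": "npx",
      "args": ["-y", "internal-docs-mcp"]
    },
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "issue-tracker-mcp"]
    }
  }
}
```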

Give each task one clear check that gives the agent a real finish line.
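A minimal sketch of that finish line, assuming the check is one shell command attached to the task (the test path here is hypothetical):

```python
import subprocess

# Hypothetical per-task check; in practice it comes from the repo rules.
CHECK = ["pytest", "tests/payments/test_refunds.py", "-q"]

def check_passes() -> bool:
    """Run the task's single check; the exit code is the objective signal."""
    return subprocess.run(CHECK).returncode == 0

# Prove the failure first, let the agent edit, then rerun the same command.
print("before edit:", "pass" if check_passes() else "fail")
# ... agent applies its diff here ...
print("after edit: ", "pass" if check_passes() else "fail")
```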

The real takeaway is simple. AI coding agents are not just a prompt and a model. They need clear instructions, good tools, and a proof loop. That is why the same model can feel chaotic in one repo and steady in another. Strong teams build that setup.