The loose and the tight

A taxonomy of which decisions to tighten with AI and which to leave deliberately loose.

March 27, 2026 · 7 min

A claim I have come to hold strongly, after watching too many companies get this wrong: most decisions in a business should be either tight or loose, and the worst place a decision can sit is in the middle.

A tight decision is one that is made the same way every time, by a system, fast. The decision rule is explicit. The inputs are structured. The output is consistent. Examples in 2026: deciding whether a customer qualifies for a particular plan, deciding which of three SKUs to recommend in an email, deciding how to route an inbound issue to the right specialist queue. Tight decisions reward consistency. They lose value when they are made differently every time.

A loose decision is one that is made differently every time, by a person, slowly. The inputs are unstructured. The right answer depends on context that does not generalize. Examples: how to handle a high-value customer who is upset, what creative direction to take on a launch, whether to move a struggling executive to a different role. Loose decisions reward judgment. They lose value when they are forced into a template.

A middle decision is one that is partially formalized, partially intuitive, made inconsistently by different people in different moods on different days. Most decisions in most companies are middle decisions. They are also the ones AI is most often pointed at, and the ones where AI most often disappoints.

Here is why middle decisions disappoint. AI is good at consistency. If you point it at a decision that should be tight, it tightens it, and the company gets better. AI is bad, in the deployments most companies can run, at substituting for human judgment. If you point it at a decision that should be loose, it tightens it, and the company gets worse, often invisibly, in ways that show up two years later as customer churn or talent flight.

The right move with AI is not to deploy it indiscriminately. It is to first decide, decision by decision, which side of the line each decision belongs on. Then to tighten the ones that should be tight, and to explicitly defend the looseness of the ones that should be loose.

Most companies are doing the opposite of this. They are tightening decisions that should be loose (creative direction, escalation handling, performance management, hiring debate) because tightening looks productive and produces dashboards. They are leaving loose decisions that should be tight (pricing, eligibility, routing, returns) because the politics of touching them is hard and the workflow project never quite reaches them.

The result is a company where the decisions that should be predictable are random, and the decisions that should be human are bureaucratized. Customers feel both. Employees feel both. Performance suffers in ways that do not show up cleanly on any single dashboard but compound across many of them.

How do you tell which side of the line a decision belongs on? I have a working test. It has three parts.

The first part is the stakes test. Does the value of this decision come from being right on average across many instances, or from being right on this specific instance? Pricing, taken across thousands of customers, is an average-rightness decision. Whether to retain this specific senior hire is a specific-rightness decision. Average-rightness decisions tend to want to be tight. Specific-rightness decisions tend to want to be loose.

The second part is the legibility test. Are the inputs that should determine this decision legible to a system, or do they live in places a system cannot see? A pricing decision can be made on data that is structured (account size, usage, segment, history). A creative direction decision is made on context that is mostly tacit. The first is a candidate for tightening. The second is not, regardless of how much the team wants it to be.

The third part is the reversal test. If we get this decision wrong on a single instance, can we reverse it cheaply, or is it expensive to undo? Cheap-to-reverse decisions can be tightened without much risk. Expensive-to-reverse decisions need to stay loose, because the cost of a systematic error in a tight system is greater than the cost of variance in a loose one.

A decision that passes all three tests (stakes are average, inputs are legible, reversals are cheap) wants to be tight. A decision that fails all three (stakes are specific, inputs are tacit, reversals are expensive) wants to be loose. A decision that passes some and fails others wants careful design, often a hybrid where the loose human judgment lives at one layer and the tight system lives at the layer below.
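The three-part test reduces to a small decision rule, which can be sketched as code. This is my own illustration of the logic above, not anything from a real system; the type and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One decision to classify. Fields mirror the three tests."""
    stakes_are_average: bool   # right on average across instances, not on this one
    inputs_are_legible: bool   # inputs are structured data a system can see
    reversal_is_cheap: bool    # a wrong single instance is cheap to undo

def classify(d: Decision) -> str:
    """Tight if all three tests pass, loose if all fail, hybrid otherwise."""
    passes = [d.stakes_are_average, d.inputs_are_legible, d.reversal_is_cheap]
    if all(passes):
        return "tight"    # automate: consistent, fast, system-made
    if not any(passes):
        return "loose"    # defend: human judgment, case by case
    return "hybrid"       # careful design: loose judgment atop a tight layer

# Examples from the essay: pricing passes all three; creative direction fails all three.
pricing = Decision(stakes_are_average=True, inputs_are_legible=True, reversal_is_cheap=True)
creative = Decision(stakes_are_average=False, inputs_are_legible=False, reversal_is_cheap=False)
print(classify(pricing))   # tight
print(classify(creative))  # loose
```

The interesting cases are the mixed ones: a decision that is legible and cheap to reverse but specific in its stakes falls into "hybrid", which is exactly where the careful layered design the essay describes belongs.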

The strategic move that follows from this is uncomfortable. It says: do less AI work than you think you should, and do it on a smaller and more deliberate set of decisions. Most leaders react to this badly because their incentive is to look like they are doing a lot. The way to look like you are doing a lot is to have many AI projects. The way to make the company actually compound is to have fewer.

I will say this about the loose side of the line. The decisions you defend as loose, deliberately and with explanation, are usually the ones that turn into competitive advantage. The companies that retain creative judgment as a loose decision while their competitors automate it tend to win on brand. The companies that retain customer escalation handling as a loose decision tend to win on retention. The companies that retain executive judgment about marginal hires as a loose decision tend to win on culture.

These are not ROI-legible advantages. They are slow. They compound. They are the reason some companies feel different to work with, even when their products and prices are similar. Loose decisions made well are how that difference is generated. Automate them and the difference disappears, slowly enough that no single dashboard catches it.

So the move is: tighten what should be tight, defend what should be loose, refuse to live in the middle. This sounds simple. It is not. The middle is comfortable. The middle is where most decisions in your company currently live. Moving them to the tight side requires investment. Defending them on the loose side requires explanation. Either is harder than leaving them where they are.

But leaving them where they are is the move that has dominated the last two years of corporate AI deployments, and it is, in my view, the main reason so many of those deployments feel like nothing has changed.
