Against AI readiness
Most readiness assessments are an elaborate way to avoid making the hard call.
I have been asked to run "AI readiness assessments" four times in the last six months. Each time I have politely declined. I want to write down why, because I think the assessment is doing something other than what it advertises.
The pitch is that the company is unsure whether it is ready for AI, and would like a structured assessment that scores it on dimensions like data quality, organizational maturity, technical infrastructure, governance posture, and so on. At the end of the assessment, the company knows where it stands. Gaps are identified. A roadmap is produced. Everyone feels good about the rigor of the process.
The thing the assessment is doing, underneath the rigor, is delaying a decision the leadership team already knows it needs to make and would prefer not to face yet.
Here is the decision. The company has, somewhere in its operating model, one or two load-bearing activities that are about to be reshaped by AI. The leadership team senses this. They are not sure which activities. They are not sure how. The honest move is to name a hypothesis, place a directional bet, and learn fast. The dishonest move is to commission a readiness assessment.
The assessment is dishonest in a specific way. It pretends to ask "are we ready?" when the actual question, the one that matters, is "what are we going to do?" Readiness is downstream of the strategic choice. You are ready for the thing you have decided to do, less ready for the things you have not. Asking the question abstractly, in general, is asking nothing.
I want to be careful not to overstate this. There are situations where a real readiness exercise is useful. A regulated industry doing AI in patient-facing or trading-facing systems needs governance assessment. A company about to make an enormous capital commitment to a particular AI infrastructure needs technical due diligence. These are specific exercises, scoped to specific decisions. They have answers that bind.
The general readiness assessment, the kind that scores you on twelve dimensions and gives you a heat map, is not that. It is consulting theater. The output is too generic to bind any decision. The leadership team takes it, files it, and goes back to the question they were already not facing.
I have come to think the test of whether an AI readiness assessment is useful is to ask, before commissioning it, what specific decision will be made differently as a function of its findings. If the answer is "we will know where we stand," the assessment will be expensive and produce nothing. If the answer is "if it scores us low on data infrastructure, we will defer the inventory project for two quarters and invest in the data layer first," the assessment is potentially useful. The second answer is rare. The first is everywhere.
There is a related move I see often, which is the AI strategy offsite. A leadership team takes two days, hears from a futurist, sees demonstrations of three AI products, breaks into small groups, and produces a list of priorities. The list has somewhere between fifteen and thirty items. Each item is scored on impact and effort. The team commits, with great seriousness, to running the top quartile.
This is also consulting theater. The list is a workflow list. It has been generated through the workflow fallacy: mistaking an inventory of automatable tasks for a strategy. The strategic question, the one about which activities are about to be reshaped, did not come up at the offsite, because it would have required uncomfortable specificity that the offsite format does not support.
The pattern, in both cases, is that a process designed to look like rigor is being used to defer a strategic decision the leadership team is unwilling to make in the open.
I am sympathetic to why this happens. The strategic decision is hard. It involves naming a load-bearing activity in the company that is about to change shape, and committing to changing it before everyone agrees that change is necessary. The CFO will push back. The COO will push back. The team that owns the activity will push back hardest of all. The CEO has to be willing to spend political capital on a hypothesis that might be wrong.
The readiness assessment removes the need to spend that capital. It produces a roadmap that nobody disagrees with, because it is general enough to be unfalsifiable. It buys six months of activity that looks like progress. It costs the company, in my experience, about a year of compounding, relative to the leaders who skipped it and made the call.
If you are running a company and feeling the pull toward an AI readiness assessment, I would gently push back. Skip it. Spend the money on a focused, three-month engagement that names one decision in the company and reorganizes one quarter of work around it. You will learn more in those three months than you would in a year of assessments. You will also have something to point to that is concrete enough to argue with, which is the only kind of strategic artifact worth having.
I will admit there are companies where a structured assessment is the political prerequisite for any AI work to happen at all. The board needs to see the heat map. The CIO needs to be able to point to a process. In those cases, run the assessment, but treat it as a procurement exercise, not a strategic one. Spend a fixed and small amount on it. Do not let it absorb the strategic energy of the team.
The strategic energy belongs at the decision layer. The assessment lives at the readiness layer. Confusing the two is one of the most expensive mistakes I have seen leadership teams make in the last two years. It costs more than a bad pilot, because a bad pilot at least teaches you something. A readiness assessment teaches you almost nothing, beautifully.