One person. A working theory. A small practice.
I write about AI in business, teach builders to use Claude Code, ship the occasional small tool, and take a few consulting engagements a year. The four are the same work. The writing sharpens what I think. The teaching pressure-tests it against people who are actually trying to ship. The tools keep me honest about what is easy in 2026 and what is still hard. The consulting is where the thinking meets a real company with a real call to make.
I work as one person on purpose. The kind of work I want to do does not scale with headcount. It scales with attention, and the attention has to be mine.
What I am working on right now
A weekly-ish note on the quiet software thesis. A first course, The AI OS, four weekends for owner-operators who want a private system that already knows their business, with cohort one open for applications. A small Chrome extension called SaveToMD that turns any page into clean markdown. Two consulting engagements I cannot name. A few personal builds that may or may not become products.
What I believe about AI in business
The most economically valuable AI work over the next decade will not happen in the products most people associate with AI. It will happen inside companies, in software no one outside the company will ever see, sitting next to the few decisions that actually shape what the company produces.
Most companies are pointing AI at workflows. Workflows have a ceiling. Decisions do not. A 38 percent cycle-time reduction on a workflow is a workflow story. A pricing decision that moves from quarterly to weekly, made on richer signal, is a different company a year later. The first is fashionable. The second compounds.
Most decisions in a business should be either tight or loose, and the worst place a decision can sit is in the middle. AI is good at tightening. It is bad at substituting for judgment in the deployments most companies can actually run. The job of an operator in 2026 is not to deploy AI broadly. It is to decide, decision by decision, which side of the line each one belongs on, and to defend the loose ones as carefully as you tighten the tight ones.
None of this is settled. Each of these claims is something I am willing to be wrong about in writing. The notes are how I find out which ones survive contact with reality.
What I will not do
I will not run AI readiness assessments. I do not believe they answer the question they pretend to. I will not take an engagement whose goal is to legitimize a decision that has already been made. I will not produce thought leadership for hire. I will not write a memo I do not actually agree with.
I will not build customer-facing chatbots without a hard fallback, autonomous agents that take expensive irreversible actions, or anything I cannot put on the homepage. The homepage test is the simplest one I have. If a build cannot live in the open, the price changes or I pass.
Who I am probably wrong for
If you have already chosen what to build and need a vendor to execute, I am wrong for that. There are excellent shops for it.
If you want certainty about the future of AI, I am wrong for that too. The honest answer to most forward-looking questions in this field is "I do not know yet, but here is the working hypothesis and what would change my mind." If that is not the kind of answer you can use, the engagement will frustrate both of us.
If you are looking for a strategist who will tell you what you want to hear, I am the wrong person. The reason to work with me is specifically that I will tell you what I actually think.
Where I work
Based between Chennai and British Columbia. Mostly remote. I keep my evenings for reading, building, and a few projects that are not for sale.
If any of this is useful to you, the easiest way to follow along is the newsletter. The slowest path is also the most useful one.
Newsletter
A note every other Sunday.
Building agents in production, what Claude Code unlocks for solo operators, and the occasional teardown of a build that did or did not work.