What Looks Like Anthropic's Principles Is Actually Their Pricing
Anthropic has been one of the most written-about companies in tech this year.
The Pentagon walkaway. The Super Bowl ad. The Long-Term Benefit Trust pieces in the corporate governance journals. The Mythos preview and the Project Glasswing coalition that came with it. The Amazon deal. And, just this week, the announcement of a partnership with SpaceX. Each story has been reported, debated, and moved on from.
What I have not seen anywhere is anyone connecting them, or asking what the pattern says about Anthropic's business model.
I spent this week running Anthropic through Furze, the platform we are building at Firehills to generate insight into any organisation's business model and capabilities. What surprised me was not any single item in the model. It was what happened when we found the red thread running through the whole of it.
A stack of expensive promises
Anthropic has been quietly building a stack of commercially expensive, publicly visible, hard-to-reverse commitments. Most of them have been reported individually. None of them have been reported as a single architecture.
Walking away from a $200 million Pentagon contract rather than loosen its use policy, then getting designated a federal supply chain risk for the trouble. A Super Bowl advert promising Claude would never carry advertising — closing off the largest revenue model in the history of consumer software, the one Google and Meta are built on. A Long-Term Benefit Trust that prevents Anthropic's own investors — including Amazon, now in for up to 33 billion dollars — from voting to override safety decisions. A Responsible Scaling Policy, now on its sixth public version, with external review gates that can actually block a release. An 84-page Constitution published in January spelling out, in public, what the product will refuse to do.
Each of those, in isolation, looks like a cost.
Something the company is absorbing because it believes in the mission. Read together, they look like something else entirely.
They look like a product.
Why making the words expensive matters
Think about the person inside a large bank, a hospital system, or a pharmaceutical company whose job is to approve new technology vendors. They need to be able to answer a question from their own regulator that goes something like this: “How did you satisfy yourself that this vendor's behaviour will be predictable when money gets tight?”
Every frontier AI model company says the right things about safety. Every sales deck has the same slide. But words are cheap.
What Anthropic has done, whether by design or by accumulation, is make the words expensive. Each commitment is something the company cannot quietly walk back. The Pentagon one especially. If you are a procurement team at a bank, you just watched that commitment tested in public with a nine-figure cheque on the other side of it. It held.
And while this has been happening, enterprise revenue has done the opposite of what you might expect. The number of customers spending over one million dollars a year doubled in under two months. The cohort spending over 100,000 dollars a year grew sevenfold in a year. Eight of the Fortune 10 are now customers. The run rate moved from roughly nine billion dollars at the end of 2025 to over thirty billion by April 2026.
Safety positioning is not slowing that. It looks very much like it is driving it.
Mythos is the policy working in real time
The Mythos preview is what made me confident this reading is correct. For three years, the Responsible Scaling Policy has threatened to block releases if a model crossed certain capability thresholds. A reasonable critic was always entitled to ask whether it had ever actually bound. Whether the company had ever built something and not shipped it.
The answer arrived this April. The model crossed the threshold. The policy held. The company built the most powerful thing it had ever built and chose not to ship it.
Set beside the Pentagon refusal eight weeks earlier, that is not a coincidence. It is the same policy operating consistently across two very different situations. And Project Glasswing then turns the held-back capability into a commercial coalition with exactly the kind of organisations Anthropic most wants as long-term customers: every hyperscaler, every major cybersecurity incumbent, the Linux Foundation, a Fortune 10 bank. Anthropic is effectively saying: we built the thing that could break the internet, so we are now the natural party to lead the defence of it. Nobody else in frontier AI can credibly occupy that position, because nobody else has the safety architecture to be trusted with it.
Where Claude Code fits
The ethical architecture gets Anthropic past the compliance gate at a regulated organisation. That is not the same as getting adopted once you are inside.
Claude Code went from zero to 2.5 billion dollars in annualised revenue in nine months. More interesting is how it spreads. Developers install it on their own machines. Epic's Chief Information Officer recently noted that over half their Claude Code usage is from non-developers — finance staff, clinical implementation teams, operations people — using a terminal tool because a colleague showed them.
This is how Slack got into the enterprise. It is how Figma did. It is a product-led motion in a category where every analyst assumed adoption would be mandated from the top.
Two mechanisms, compounding. The ethical architecture gets Anthropic through the door at the procurement office. Claude Code gets onto the laptops of the people whose workflows the enterprise software will eventually be rebuilt around. Neither on its own is the whole story. Together, they explain quite a lot about why this company is growing faster than most of us modelled 18 months ago.
What this means more broadly
The reason this matters beyond Anthropic is that it shows you can build a frontier-tier business in which the apparent costs are the product. The Long-Term Benefit Trust, the Pentagon refusal, the no-advertising pledge, the Constitution, the Responsible Scaling Policy, the held-back Mythos release — these are not what Anthropic puts up with despite being a commercial company. These are what Anthropic sells.
Microsoft, Google and Salesforce can match Anthropic on compliance tooling at the application layer. They cannot retrofit an independent trust structure, a six-version public safety policy, a demonstrated willingness to lose a major government contract on principle, or a model deliberately withheld from market. That is a much harder moat to cross than a product moat.
One last thing
None of this is brand new information. Every fact in this post is available somewhere.
What I think is worth saying is the shape these facts make when you put them in one frame — the shape that suggests Anthropic's apparent principles are actually their pricing, sold to a buyer who values the constraint itself.
That is what we are building Furze to do. But running it on Anthropic was a useful reminder of why we started. Sometimes the story is not hidden. It just has not been put together yet.
All the best, Rob
The Business Model Canvas referenced in this post was generated by Furze on 21 April 2026. It is a 39-page document with sources throughout, drawing on Anthropic's own disclosures, Harvard Law Review, Lawfare, the Council on Foreign Relations, CNBC, Menlo Ventures, NBC News, Foreign Policy and others.
If you would like to see what Anthropic's full business model actually looks like, drop me a line: rob@firehills.io