Let me start with a confession that will surprise precisely nobody who has spent more than fifteen minutes in a technology strategy conversation with me: I have a visceral, almost physiological reaction to bullshit.
Not the garden variety sort – the innocent exaggeration of a CV, the polite fiction that yes, that presentation was very insightful, thank you Derek. That kind of low-grade dishonesty is the social lubricant that keeps civilisation from grinding to a halt, and I’ve made my peace with it in the way one makes peace with the weather or the persistent existence of LinkedIn motivational posts about ‘hustle culture’. I still think these things are annoying but they aren’t likely to bring down the world’s economy in isolation.
No, the bullshit I’m referring to operates at an altogether more impressive scale. It is the kind of bullshit that gets capitalised at astronomical valuations, the kind that attracts sovereign wealth fund investment, the kind that employs thousands of people whose job title might as well be ‘Professional Narrative Maintenance Engineer’. It is bullshit elevated to an art form, and – I say this with a degree of professional admiration for the sheer craftsmanship involved – the artificial intelligence sector has produced some of the finest examples the technology industry has ever witnessed.
(I should note that this article is going to be long, somewhat technical in places, and deeply unfashionable in its conclusions. If you’re looking for breathless optimism about how AI will cure cancer, end poverty, and finally teach your labrador to use the washing machine, I’d recommend the nearest LinkedIn feed. What follows is closer to a systems analysis of why the current moment feels less like the dawn of a new era and more like watching someone build a cathedral on quicksand whilst insisting the foundations are ‘disrupting traditional geology’.)
the epistemology of horseshit: a brief taxonomy of technological deception
The technology sector has always operated within a framework where excitement about potential breakthroughs runs several laps ahead of material reality. This is not, in itself, a problem. Optimism about future capability is what attracts capital, and capital is what funds the research that occasionally – yes, occasionally – produces genuine transformation. The internet was overhyped before it was underestimated. Mobile computing was dismissed before it was everywhere. The cycle of hype and correction is as predictable as our fine British weather and approximately as amenable to control.
What makes the current AI moment distinctive is not the presence of hype – we’ve had that since the 1950s, when the researchers behind the Dartmouth workshop cheerfully predicted that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’ and then spent the following seven decades discovering that they had, perhaps, been a touch optimistic.
(Forgive me while I have flashbacks to helping my then-girlfriend Becca with her Prolog homework in the early 2000s, during her time as a computer science undergraduate, before she rightly fucked the whole subject off and did biology and then genetics at Edinburgh. I appreciate the irony of a man who has spent the last twenty-five years in tech telling someone to run away from computer science at high speed, but she was far more interested in biology than in the numbers underpinning computer science, which ended up being my thing.)
What distinguishes this cycle is the velocity of capital deployment relative to the clarity of the value proposition.
In the dot-com era, at least the bullshit had a certain transparency to it. ‘We’ll sell dog food on the internet’ was a comprehensible business proposition, even if Pets.com’s execution suggested otherwise. The crypto boom had the decency to be obviously speculative – anyone buying a $300,000 blockchain-validated JPEG of a bored ape knew, on some level, that they were participating in a collective hallucination about value. They did, right?
The AI sector, by contrast, benefits from a peculiar form of what I’ll call Complexity Camouflage™ – the phenomenon whereby the genuine technical sophistication of the underlying systems provides near-perfect cover for the strategic deployment of impossible promises. When a vendor walks into your boardroom claiming their Cognitive Decision Engine® will deliver 80% efficiency gains across your operations, the sheer density of the jargon creates a kind of epistemic fog in which otherwise intelligent people find themselves nodding along because the alternative – admitting they don’t fully understand what’s being proposed – feels professionally dangerous.
(I’ve sat in meetings where this has happened. I’ve watched the nods. I’ve felt the gravitational pull of the collective pretence. It takes a particular kind of bloody-mindedness to raise one’s hand and say ‘I’m sorry, could you explain what that actually means in terms of things that happen in reality?’ – a bloody-mindedness I’ve cultivated over twenty-five years of consulting and which has made me simultaneously valued and occasionally unwelcome, depending on whether the person running the meeting wanted truth or theatre. As my writing will attest, I offer the truth, because honesty is at least actionable whereas bullshit is not.)
The real-world data tells a rather different story from the pitch deck. Genuine AI implementations typically deliver efficiency gains in the region of 15–30%, which is perfectly respectable and often worth the investment. Any claim significantly exceeding this threshold should be treated with the same scepticism one would apply to a man in a pub explaining that his uncle definitely knew the Queen.
the bubble mechanics: or, how we keep doing this to ourselves
Historical precedents for the current moment are not difficult to find, and their consistency would be almost comforting if the consequences weren’t so reliably catastrophic.
The expert systems boom of the 1980s promised to simulate professional-level human expertise and delivered what amounted to very expensive if-then statements that couldn’t handle the messy ambiguity of actual decision-making. The resulting ‘AI winter’ set the field back by a decade and destroyed careers with the thoroughness of a controlled demolition.
The dot-com crash demonstrated that ‘eyeballs’ and ‘burn rate’ are not, as it transpires, adequate substitutes for revenue. The fibre-optic backbone survived; the companies that laid it, largely, did not – some major winners aside. Amazon? Yes. Black Star? Not so much. Google? Yes. Webcrawler? Not really.
The cryptocurrency and NFT frenzy offered decentralisation and ‘smart utility’ to a user base that, in retrospect, was primarily composed of people hoping to get rich by selling digital assets to the next person hoping to get rich by selling digital assets. A magnificent perpetual motion machine of speculation that worked right up until the moment it stopped, which is rather the nature of these things.
Each cycle follows the same elegant choreography: genuine innovation attracts genuine interest; genuine interest attracts speculative capital; speculative capital inflates expectations beyond any reasonable connection to reality; reality eventually asserts itself with the subtlety of a freight train or a punch to the face by a heavyweight boxer; and then the infrastructure built during the mania remains, creating the actual foundation for the next wave of genuine innovation. The dot-com crash left us with e-commerce at the core of the internet. The crypto crash left us with distributed ledger experimentation, albeit largely as an echo of the banking-crisis paranoia that spawned it. The question for the current AI cycle is what the wreckage will leave behind that proves genuinely useful.
(For what it’s worth, my bet is ‘quite a lot’. The underlying technology is real and materially useful in ways the previous bubbles’ technologies often weren’t. The problem isn’t the technology. The problem is what happens when you pour several hundred billion dollars of venture capital onto something before anyone has worked out what it’s actually for beyond the general-purpose answer of ‘making things better, somehow, trust us, look at this demo’.)
the investor’s dilemma: or, FOMO as a fiduciary strategy
The current market is defined by what some analysts have termed ‘The Great Compression’ – a phenomenon whereby the traditional venture capital stages collapse into one another like a telescope being shut, forcing investors into what amounts to a ‘Winner Takes Most’ mindset at every entry point. This creates an acute dilemma: the pressure of missing the next Google or Microsoft compels professional capital to enter at valuations that everyone privately acknowledges are disconnected from any reasonable expectation of near-term revenue, whilst simultaneously recognising that the sector exhibits characteristics that any honest observer would describe as ‘bubbly’.
This is the investor’s version of the ancient philosophical paradox about the crocodile: you know it’s probably going to bite you, but you’ve been told there might be treasure in its mouth, and your limited partners are watching, and the fund down the road already has its arm in there, and – well, you can see how these things escalate.
What makes the current situation particularly intriguing from a systems perspective is the emerging bifurcation between pure-play AI laboratories and the diversified incumbents. OpenAI, Anthropic, and their peers are essentially one-trick ponies – extraordinarily sophisticated one-trick ponies, to be sure, but their entire existence is predicated on the continuing escalation of AI capability. If the technology plateaus or the market corrects, they have nothing to fall back on. They are the metaphorical equivalent of a restaurant that serves only one dish: spectacular when the dish is in fashion, catastrophically exposed when tastes change.
Microsoft, Alphabet, and Amazon occupy an entirely different structural position. These are the landlords of the AI revolution – they own the data centres, the cloud infrastructure, the distribution networks through which AI products reach users. Whether OpenAI succeeds or fails, Azure still gets paid. Whether Anthropic’s models prove transformative or merely adequate, Google Cloud still gets its cut. The platform providers win regardless, in much the same way that the people who sold shovels during the gold rush made rather more reliable returns than the people actually panning for gold.
(As someone who has spent years advising organisations on technology strategy, I find this is the bit that makes me simultaneously fascinated and slightly nauseated. The structural dynamics are genuinely elegant from an analytical perspective. They’re also a perfect illustration of how capital markets reward positioning over innovation, infrastructure over invention, and being in the right place over being the right person. Which is, I suppose, a lesson that extends well beyond technology. Techno-feudalism – love it or hate it – means many of the older generation of technology success stories are not only diversified but actively profiting from the current bubble, leaving them free to shrug at the outcome whilst stuffing their pockets regardless.)
the scaling wall: or, why more of the same stops working
Here is where things get properly interesting, and where the narrative that has sustained several hundred billion dollars of investment starts to develop some rather inconvenient cracks.
The entire AI revolution – or at least the current chapter of it – has been built on a single, elegant premise: scaling laws. Between 2020 and 2024, the empirical evidence seemed to demonstrate that making models bigger, feeding them more data, and throwing more compute at the training process would produce smooth, predictable improvements in capability. Bigger was better. More was more. The relationship between investment and intelligence appeared to be, if not linear, then at least logarithmically reliable.
This was a magnificent story for fundraising purposes. If intelligence scales predictably with compute, then intelligence becomes a simple function of capital deployment. Pour in more money, get out more intelligence. The implications were intoxicating: with sufficient investment, artificial general intelligence – whatever that means, and the inconsistency of the nomenclature should itself be a warning sign – was simply a matter of when, not whether.
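For the avoidance of jargon, that premise can be written down in a single line. The empirical scaling laws of the 2020-era literature take roughly the following power-law form – the constants shown are illustrative rather than gospel, and the precise fits vary from paper to paper:

```latex
% Illustrative form of a compute scaling law; constants are indicative, not fitted here.
% L = loss (lower is better), C = training compute, L_inf = irreducible loss.
L(C) \approx L_{\infty} + \left(\frac{C_{0}}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05
```

Loss falls as compute rises, smoothly and predictably – but because the exponent is tiny, halving the reducible part of the loss requires multiplying compute by roughly 2^(1/α), which for α ≈ 0.05 is about a million-fold. That is the small print beneath ‘pour in more money, get out more intelligence’.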
The problem, which has been quietly accumulating in the empirical literature since late 2024, is that the scaling laws appear to be hitting a wall.
Not a temporary wall. Not a ‘we need a bit more data’ wall. A ‘the fundamental architecture of these systems has inherent limitations that more compute cannot overcome’ wall. A problem that is less about how to convince investors to hand over cash and more about simple laws of mathematics that remain inconveniently in place, like the walls of a particularly well-cooled cell.
Research into the persuasiveness of frontier language models – the very largest, most expensive systems currently in operation – found that they were only marginally more effective than models an order of magnitude smaller. For tasks requiring coherence and consistency, we appear to be approaching what might be called an ‘effective ceiling’ on the returns to simply making transformers bigger. The brute-force approach to intelligence – the one that justified all those data centre investments and all those NVIDIA GPU purchases – is encountering a mathematical reality that no amount of fundraising narrative can negotiate away.
(This is why I so often end up in conversations across the tech space about large-scale LLMs versus smaller SLM designs – the latter frequently proving more capable in practice thanks to focused intent and a reduced exposure to what can colloquially be called ‘context rot’.)
This is, to use a technical term, a problem.
It is a problem because the valuations of pure-play AI labs are predicated on the assumption that capability will continue to scale with investment. If that assumption fails – if we’re approaching the point where doubling the compute budget produces a 0.5% improvement in capability rather than a 10% improvement – then the entire financial architecture of the sector needs to be reassessed. The emperor may not be entirely naked, but he’s certainly rather more exposed than the pitch deck suggested.
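To make the ‘0.5% rather than 10%’ point concrete, here is a minimal sketch – using entirely invented constants, chosen only to show the shape of the curve rather than fitted to any real model or benchmark – of what a power-law capability curve does to the return on each successive doubling of compute:

```python
# Illustrative only: invented constants, not fitted to any real model or benchmark.
# Capability is modelled as the inverse of a power-law loss curve: more compute
# always helps, but each doubling buys a smaller absolute improvement than the last.

def capability(compute: float, alpha: float = 0.05) -> float:
    """Toy capability score in [0, 1): 1 - compute^(-alpha)."""
    return 1.0 - compute ** -alpha

previous = capability(1.0)
for doubling in range(1, 11):
    current = capability(2.0 ** doubling)
    print(f"doubling {doubling:2d}: capability {current:.3f} (+{current - previous:.3f})")
    previous = current
```

Every doubling helps; every doubling helps less than the one before. That is the scaling wall expressed as arithmetic rather than narrative, and no amount of pitch-deck typography changes the exponent.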
the inference pivot: thinking harder because growing bigger stopped working
The industry’s response to the scaling wall has been characteristically ingenious and characteristically insufficient. Rather than acknowledge that the fundamental approach might have limitations – an admission that would be financially catastrophic for approximately everyone involved – the narrative has pivoted to ‘inference-time compute’. Instead of making the models bigger during training, you give them more time to ‘think’ when answering questions.
This is represented by systems like OpenAI’s o1 and o3 reasoning models, which are explicitly designed to spend longer processing a query before producing a response. It is, in essence, the AI equivalent of that advice your university tutor gave you about exam technique: ‘spend more time thinking before you start writing’.
The approach works, to a point. Giving these systems more computational budget at inference time does improve their performance on problem-solving tasks. This is genuine, measurable, and not bullshit.
What is bullshit – or at least a significant omission in the narrative – is the suggestion that this represents a fundamental breakthrough rather than a shift in where the costs accumulate. Training-time scaling pushed costs into building the model. Inference-time scaling pushes costs into running the model. The total bill doesn’t shrink; it just arrives on a different line of the invoice.
The implications are significant:
Higher inference budgets mean higher costs per query, which means the $20/month subscription that currently makes these tools accessible to consumers becomes progressively harder to sustain as the models are asked to do more complex work. There is a reason that enterprise AI pricing looks rather different from consumer pricing, and that reason is mathematics – sketched, roughly, a few paragraphs below.
Inference-time compute cannot be scaled exponentially without a corresponding exponential increase in chip production, which operates on manufacturing timelines measured in years rather than the software iteration cycles measured in weeks. You can write new code overnight. You cannot build a new semiconductor fabrication facility overnight, no matter how much venture capital you have or how persuasively you describe the opportunity to potential investors.
Perhaps most fundamentally, these models do not learn whilst working. Unlike a human professional who accumulates expertise through practice, an AI system running at inference time is essentially spending computational resources to extract the maximum value from its existing training. It can think harder, but it cannot think differently. The ceiling may be higher than with pure training-time scaling, but it is still a ceiling.
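On the first of those implications, the subscription arithmetic is worth sketching explicitly. Every figure below is invented purely for illustration – none of it comes from any vendor’s actual rate card or usage data – but the shape of the problem survives any reasonable substitution:

```python
# Back-of-the-envelope only: every number below is invented for illustration,
# not taken from any vendor's actual pricing, costs, or usage data.

SUBSCRIPTION_PER_MONTH = 20.00      # flat consumer price, in dollars
QUERIES_PER_MONTH = 600             # a moderately keen user, roughly 20 a day
COST_PER_MILLION_TOKENS = 2.00      # assumed blended inference cost to the provider

def monthly_margin(tokens_per_query: int) -> float:
    """Provider margin per user per month at a given inference budget per query."""
    cost_per_query = (tokens_per_query / 1_000_000) * COST_PER_MILLION_TOKENS
    return SUBSCRIPTION_PER_MONTH - QUERIES_PER_MONTH * cost_per_query

for tokens in (2_000, 10_000, 50_000, 200_000):
    print(f"{tokens:>7} tokens/query -> margin ${monthly_margin(tokens):8.2f} per user per month")
```

As ‘reasoning’ models spend ever more tokens thinking about each query, the same flat subscription slides from comfortably profitable to deeply underwater – which is why the enterprise rate card looks nothing like the consumer one.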
(This is one of those observations that seems obvious once stated but that I’ve found curiously absent from most mainstream technology commentary. The incentive structures of the media ecosystem reward breathless excitement rather than measured analysis, because ‘AI HITS FUNDAMENTAL LIMITS’ generates fewer clicks than ‘AI WILL TRANSFORM EVERYTHING’. I say this without bitterness – it’s simply the mathematical reality of attention economics, and it would be hypocritical of me to complain about market dynamics I’ve spent my career analysing.)
openai and the art of being too big to fail
OpenAI presents perhaps the most instructive case study in the dynamics of the current moment, and the leadership of Sam Altman provides a masterclass in what happens when Silicon Valley founder mythology collides with the material constraints of physics and finance.
Altman’s strategic playbook – informed by his early-career failure with Loopt and his subsequent years running Y Combinator – can be understood as a systematic application of the principle that in venture capital, narrative is as important as reality, and potentially more so. The ‘Loopt loop’, as one might call it, involves rapid iteration, aggressive fundraising, political manoeuvring, and the cultivation of an image of visionary inevitability that makes continued investment feel less like a choice and more like a historical obligation.
The numbers, viewed dispassionately, tell a story that the narrative works hard to contextualise:
OpenAI exited 2025 at a $20 billion revenue run rate. This sounds impressive until you learn that the company is reportedly burning through tens of billions annually and is, by its own optimistic forecasts, years away from profitability. The gap between revenue and expenditure is being bridged by a combination of investor capital, Microsoft’s infrastructure subsidy, and what can only be described as faith.
This faith is not entirely unreasonable – the technology is genuinely transformative in many applications, and first-mover advantages in platform markets can be decisive. The question is whether the faith is proportionate to the valuation, and at $150 billion, one begins to wonder what proportion of the global economy’s problems would need to be solved by large language models to justify the price tag.
(Rather a lot, is the answer. Rather more than seems plausible when you consider that the core technology still struggles with basic numerical reasoning, confidently produces incorrect information with the serene assurance of a politician, and has yet to demonstrate anything approaching the kind of generalised intelligence that the ‘AGI’ branding implies. Hopes of cancer being solved through the generative discipline of AI remain as unlikely as my dog winning the Nobel Prize – this doesn’t mean my dog is useless, just that her particular skills don’t readily align with solving world conflict.)
What is more concerning than the valuation itself is the trajectory it implies. OpenAI’s response to the scaling wall has been to expand aggressively into adjacent markets – healthcare, e-commerce, entertainment – whilst simultaneously planning for $1 trillion in data centre investment. The company has ramped its lobbying spending to $3 million in 2025, hired consultants across the political spectrum, and positioned AI as a matter of national security in competition with China.
This is the ‘too big to fail’ playbook, executed with considerable skill. By intertwining its fate with national interests and government contracts – including a $200 million Department of Defence deal – OpenAI is building a political architecture that makes its continued funding feel like a strategic necessity rather than a commercial decision. The Loopt loop has been extended from ‘iterate and fundraise’ to ‘iterate, fundraise, and make yourself politically indispensable’.
Brilliant. Absolutely fucking brilliant.
Whether it’s sustainable is an entirely different question, and one that the political architecture is specifically designed to make feel unpatriotic to ask.
anthropic: the quiet bet on being useful rather than revolutionary
Anthropic’s strategic positioning provides an instructive contrast to OpenAI’s maximalist approach, and – I should declare an interest here, as someone who works extensively with AI tools and has opinions about how they should be built – it appears to be the more intellectually honest of the two strategies, for whatever that’s worth in a market that rewards narrative over nuance.
Where OpenAI pursues broad artificial general intelligence as both a technical goal and a fundraising story, Anthropic has pivoted toward enterprise-grade ‘agentic’ AI – systems designed to be governed, audited, and trusted at scale within specific industries. The ‘Claude for Healthcare’ initiative and its focus on practical cognitive partnership represent a recognition that the ‘pilot era’ is over, and that the market is shifting from ‘isn’t this impressive’ to ‘does this actually work, and can you prove it, and what happens when it doesn’t’.
This is a narrower vision, to be sure. It lacks the messianic grandeur of ‘we’re building God’ that characterises certain Silicon Valley narratives with a fervour that would make a revivalist preacher envious. What it offers instead is the considerably less dramatic but potentially more durable proposition of ‘we’re building tools that do specific things reliably, in contexts where reliability actually matters’.
The strategic logic is sound: by targeting vertical AI sectors where domain expertise and proprietary data create defensible positions, Anthropic may avoid what I’ll call the General-Purpose Bullshit Trap – the tendency of broad capability claims to dissolve upon contact with the messy specificity of real-world problems. Healthcare, with its stringent regulatory requirements and genuine life-or-death stakes, is a domain where ‘approximately right most of the time’ is not an acceptable performance standard, and where the ability to be governed and audited is not a nice-to-have but a fundamental requirement.
(This reminds me of the old saying ‘operation successful, patient dead’ which, whilst having some benefit for a surgeon’s ongoing training, offers scant consolation to the relatives of the newly deceased. In matters of life or death, being right is non-negotiable and best kept away from probabilistic, technology-based roulette wheels.)
Whether this strategy generates the kind of returns that justify Anthropic’s own substantial valuation is another matter entirely, and one that depends on assumptions about market size and willingness to pay that I suspect are rather more uncertain than the pitch materials suggest.
microsoft: the landlord always wins
If you want to understand why Microsoft’s position in the AI landscape is structurally almost unassailable, consider the following thought experiment.
Imagine that tomorrow, every AI startup simultaneously discovered that large language models were a dead end. That the technology had fundamental limitations that could not be overcome. That the entire sector’s investment thesis was built on a misunderstanding of what these systems could achieve.
Microsoft would be fine.
Not ‘fine’ in the sense of ‘slightly disappointed’. Fine in the sense of ‘still generating $50 billion per quarter from cloud services, productivity software, and an enterprise ecosystem that is more deeply embedded in global business operations than any other technology platform in human history’.
This is the structural advantage that pure-play labs cannot replicate and that, candidly, most commentary about the AI ‘race’ systematically underestimates. Microsoft doesn’t need AI to work for its business model to function. AI is a growth accelerator for a company that was already growing at a rate most organisations would consider spectacular.
Yes, Copilot is anaemic in comparison to the front runners, but Azure generated over $75 billion in revenue in fiscal 2025–2026, and the AI component of that growth – whilst significant – sits within a diversified revenue base that includes productivity software, enterprise services, gaming, and cloud infrastructure.
The ‘Foundry’ approach – offering access to over 11,000 models from multiple providers including OpenAI, Meta, and DeepSeek – is particularly instructive. Microsoft has effectively commoditised the model layer, positioning itself as the platform through which AI is consumed regardless of which model provider succeeds. If OpenAI wins, Microsoft wins through its investment and Azure integration. If Anthropic wins, Microsoft wins through Foundry access. If some as-yet-unknown competitor emerges, Microsoft wins through platform distribution.
It is the infrastructure play in its purest form: own the pipes, and it doesn’t much matter what flows through them.
(As someone who has spent years advising organisations on technology strategy, I find this position simultaneously admirable from an analytical perspective and slightly depressing from a ‘wouldn’t it be nice if the most innovative companies captured the most value’ perspective. The market rewards structural advantage over technical brilliance with a consistency that should trouble anyone who believes in meritocracy, but which will surprise precisely nobody who has observed capital markets for more than a calendar quarter. The world may crave equality, but reality is quick to illustrate the naïveté of believing it to be anything other than a pipe dream beyond equality of opportunity.)
google: the silent predator with the hardware moat
If Microsoft wins through diversification, Alphabet wins through something potentially more formidable: vertical integration across the entire AI stack, from custom silicon to proprietary data to fundamental research.
The part of this story that receives insufficient attention – largely because chip design is less narratively exciting than chatbots – is Google’s Tensor Processing Unit programme. Whilst the rest of the industry pays what I’ll call the ‘NVIDIA Tax’ – the premium for hardware that NVIDIA can price aggressively due to its near-monopoly on AI training accelerators – Google has been quietly building its own custom silicon for the better part of a decade.
The economics are significant. The TPU v6e offers roughly four times better performance per dollar compared to NVIDIA’s H100 for large language model training and high-volume inference. Midjourney’s migration from NVIDIA clusters to TPU v6e resulted in an annualised saving of $16.8 million, which is the kind of number that makes CFOs sit up rather sharply. When you’re operating at the scale of Google’s AI ambitions, the unit-cost advantage of custom silicon compared to buying NVIDIA’s margin-rich products represents a structural competitive advantage that compounds over time.
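Taking the two headline figures at face value – the ‘roughly four times better performance per dollar’ claim and the reported $16.8 million annualised saving – the implied arithmetic looks something like the sketch below. This is arithmetic on the claims rather than independent benchmarking, and the implied spend figures are inferences, not audited accounts:

```python
# Arithmetic on the two headline claims above, nothing more: no independent data.

PERF_PER_DOLLAR_RATIO = 4.0        # claimed TPU v6e advantage over the H100
ANNUAL_SAVING = 16_800_000         # reported annualised saving, in dollars

# If the same workload costs 1/ratio as much on TPUs, the saving equals
# gpu_spend * (1 - 1/ratio), so the implied pre-migration GPU bill is:
implied_gpu_spend = ANNUAL_SAVING / (1.0 - 1.0 / PERF_PER_DOLLAR_RATIO)
implied_tpu_spend = implied_gpu_spend / PERF_PER_DOLLAR_RATIO

print(f"Implied GPU spend before migration: ${implied_gpu_spend:,.0f}")  # ~$22.4m
print(f"Implied TPU spend after migration:  ${implied_tpu_spend:,.0f}")  # ~$5.6m
```

Compound that kind of differential across a multi-year buildout at Google’s scale and the ‘NVIDIA Tax’ stops being a line item and becomes a structural disadvantage for everyone who still has to pay it.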
(In a secondary context, Apple – whilst doing the equivalent of a somersault off a diving board into an empty swimming pool – are also advantaged by diversification, and have quietly started to shift from OpenAI toward Gemini.)
The second moat – and this one is harder for competitors to replicate than anything involving silicon – is data.
Google has spent twenty-five years indexing the world’s information, processing search queries that reveal human intent at a scale no other entity can match, and – through YouTube – accumulating the largest repository of video data on Earth. Sam Altman may have used AI to hoover up Reddit as a gargantuan and growing data source, but in an era where AI models are trained on data and where the public web has been largely exhausted as a training resource, Google’s proprietary data assets represent a differentiation that cannot be replicated through engineering talent or capital deployment alone.
You can build a better chip with enough investment. You cannot retrospectively accumulate twenty-five years of search click-stream data and two decades of video uploads. The data moat is temporal as well as structural, and this is the kind of advantage that should keep competitors awake at night rather more than it appears to.
Then there is Waymo, which sits in the peculiar category of ‘moonshots that are quietly becoming real businesses’. Developing autonomous driving systems requires a decade-long commitment to data collection, safety validation, and regulatory navigation that few organisations have the patience or the balance sheet to sustain. Google has both, and the resulting dataset represents a physical-world intelligence moat that extends the company’s advantages beyond the purely digital.
the great decoupling: or, the moment reality reasserts itself
What we’re witnessing in 2026 is the beginning of what might be called the Great Decoupling – the divergence between the narrative of universal AI transformation and the material reality of who actually captures value from these technologies.
The initial phase of the AI hype cycle treated the sector as a monolith. ‘AI is the future’ implied that all participants in the AI ecosystem would benefit from the rising tide. Venture capital flowed with democratic enthusiasm into startups, labs, infrastructure providers, and application developers, driven by a FOMO so intense it had practically achieved sentience of its own.
The correction, which is now underway, involves the market developing opinions about which participants will actually capture durable value and which are destined to become cautionary tales in future business school case studies. The ‘decade-high down rounds’ now appearing in startup fundraising data represent the market beginning to distinguish between genuine innovation and ‘AI-washing’ – the superficial application of AI branding to products that are, at their core, doing what they always did, just with a more fashionable vocabulary.
(The parallels with the ‘digital transformation’ hype of the previous decade are almost painfully exact. Replace ‘AI-powered’ with ‘digitally transformed’ and you have approximately the same phenomenon: organisations spending significant sums to rebrand existing capabilities in the language of the moment, creating a magnificent illusion of progress that evaporates upon contact with the question ‘what specifically has changed?’)
The winners emerging from this decoupling share three characteristics:
Infrastructure ownership. Dominance in the physical layer – chips, data centres, networking – provides insulation against volatility in the model and application layers. You can swap models; you cannot easily swap data centres. Organisations citing multi-cloud resilience eventually discover, in less than ideal circumstances, that their code isn’t upping sticks from Azure to AWS without massive egress charges, downtime, and a complete refactor of the code base.
Data moats. Access to exclusive, high-quality datasets that cannot be replicated through web scraping or synthetic generation. These moats are self-reinforcing: the more users interact with your platform, the more data you accumulate, the better your models become, the more users you attract. It’s a virtuous cycle, or a monopolistic feedback loop, depending on one’s perspective and tolerance for market concentration. If you’re Google, it’s the former. If you’re OpenAI, it’s the wall that prevents you from beating Google even if your models succeed.
Integrated distribution. The ability to embed AI capabilities within existing, high-margin products that millions of people already use daily. Copilot in Office 365 doesn’t require users to adopt a new platform or change their workflow; it simply appears within the tools they’re already using, with all the gentle inevitability of moss growing on a north-facing wall. It may be the worst of all current AI tools but we’ve seen before that Microsoft can be far from the best and still be ubiquitous nonetheless.
Microsoft and Google possess all three characteristics in abundance. OpenAI and Anthropic possess none of them independently, which is why their long-term survival depends on either building them, buying them, or maintaining partnerships with entities that have them – partnerships in which the leverage increasingly tilts toward the infrastructure owners.
the geopolitical dimension: or, when your startup becomes a national security asset
The global adoption data from late 2025 reveals a dimension of the AI story that most technology commentary treats as peripheral but which is, in structural terms, rather more significant than the latest model benchmark: AI is becoming a geopolitical asset, and the countries that have invested early in digital infrastructure and institutional adoption are pulling ahead with a momentum that will be extremely difficult to reverse.
China, for example, is spending significant sums on bringing AI into military contexts, with trained robotics innovations that could see robot soldiers and drones at the front line – not necessarily operating as combatants but, at the very least, easing the challenge of feeding supply lines in theatre.
The UAE leads global adoption at 64%, followed by Singapore at 61%, with South Korea experiencing an 80% increase in adoption since late 2024. These figures represent not just consumer enthusiasm but institutional commitment – government services, educational systems, healthcare infrastructure built around AI capabilities that are rapidly becoming expectations rather than novelties.
This geopolitical reality is what makes the Sam Altman political turn both strategically rational and profoundly unsettling. By framing AI as a ‘technology race against China’ and securing government contracts and political alliances, Altman is executing a play that extends well beyond commercial strategy into the domain of national industrial policy. The $200 million Department of Defence contract is not primarily about revenue; it’s about making OpenAI’s continued funding feel like a matter of national interest rather than a commercial investment decision.
This is sophisticated statecraft disguised as corporate strategy, and it represents a structural shift in how frontier technology companies relate to government power that deserves rather more scrutiny than it currently receives. When a private company’s commercial interests become intertwined with national security narratives, the normal mechanisms of market discipline – the possibility of failure, the requirement for profitability, the accountability to customers rather than taxpayers – tend to weaken with a speed that should concern anyone who believes that markets function best when participants can actually fail.
The challenge of recent weeks is more complex, though: IT services organisations are seeing prices crash as a hype-driven consensus implies AI is about to make human consulting a thing of the past. The complexity spans multiple domains – the fall in large companies’ share prices through the indirect impact of AI hope ripples across the wider market, while the increasing concentration of revenue in hyped companies creates the potential for an economic crash if either OpenAI or Anthropic implodes, taking a massive amount of investor hope with it.
the make-or-break window: 2028–2032
For pure-play AI laboratories, the period between 2028 and 2032 represents what analysts are increasingly describing as a ‘crucial make-or-break window’. The scaling wall, the inference-cost challenge, and the structural advantages of diversified incumbents create a narrowing corridor through which companies like OpenAI and Anthropic must navigate to achieve long-term viability.
Survival likely requires three things happening simultaneously:
First, a successful pivot from general-purpose models to vertical AI – specialised tools with demonstrable return on investment in specific industries like healthcare, legal services, or biopharma. The era of ‘AI that does everything, sort of’ is ending; the era of ‘AI that does this specific thing very well, provably, with auditable results’ is beginning. This pivot requires domain expertise that most AI labs don’t currently possess, and acquiring it costs time and money that the scaling wall is making increasingly scarce. Narrow AI is far from a new thing – it is what most of us who have been in the technology industry have long cited as an indicator of success, far more than generalised intelligence. Tools that take care of systems through simple, deterministic machine learning models offer far more value than a flawed transformer architecture – one need only look at tools as diverse as the Apple Watch or an automated observability platform to see that machine learning remains a far stronger bet than generative AI.
Second, a dramatic improvement in inference efficiency that makes the per-query economics of AI services genuinely profitable at consumer price points. This is, fundamentally, a hardware problem as much as a software problem, and it’s a hardware problem that currently has ‘NVIDIA’ written on the bottleneck in rather large letters. Obviously, the closer we get to the brick wall, the more nervous Jensen Huang is likely to become about a stock price collapse once it’s recognised that we can’t simply do more.
Third, the successful execution of the political strategy – deep institutional alliances that ensure continued access to the capital, power (in the literal, electrical sense), and regulatory accommodation that frontier AI development requires. This is the Altman playbook, and its success depends on political conditions that are, by their nature, less predictable than engineering timelines.
Any one of these challenges would be formidable. All three simultaneously, against competitors with deeper pockets, broader distribution, and more diversified revenue bases, represents a degree of difficulty that should give pause to anyone pricing these companies as though success were inevitable.
the uncomfortable conclusion: or, what happens when the tide goes out
The philosopher Harry Frankfurt drew a useful distinction between lying and bullshit. A liar knows the truth and deliberately misrepresents it. A bullshitter is indifferent to truth altogether – the relationship between their statements and reality is, at best, coincidental. The distinction matters because it suggests that much of what passes for AI industry discourse isn’t deliberately deceptive so much as it is fundamentally unconcerned with whether its claims correspond to anything real.
The scaling laws will continue to improve AI capability, albeit at diminishing rates. The technology will find genuine applications that create genuine value. The infrastructure built during this era of excess will serve as the foundation for the next wave of innovation, as it always does.
What will not survive is the narrative – the story that intelligence scales predictably with investment, that artificial general intelligence is a few more billion dollars of compute away, that the companies promising transformation are all equally likely to deliver it, and that the market dynamics of frontier technology somehow exempt these companies from the ordinary constraints of physics, finance, and human organisational capacity.
The companies that will win are those that combine genuine technical capability with structural advantages in infrastructure, data, and distribution. Microsoft, with its diversified revenue base and commodity-model platform strategy. Google, with its vertical hardware-to-data integration and unit-cost advantages in custom silicon. Perhaps Amazon, with its AWS dominance for tools like Bedrock and Q, combined with decades of logistics data. Ultimately, the landlords, the infrastructure owners, the companies that get paid regardless of which application-layer narrative proves correct.
The companies that face existential risk are those whose entire proposition depends on the narrative continuing to hold – on scaling laws continuing to scale, on capital markets continuing to fund unprofitable growth, on political alliances continuing to provide the cover that commercial performance cannot. With growing political trouble in the US, and its president continuing with reckless abandon in trying to rewrite economics as purely governed by tariffs, merely convincing Trump may be insufficient to maintain the narrative. America may end up beaten by China not through some Far East innovation but through its own companies absorbing modern narcissistic marketing and engineering their own downfall.
This is not a prediction of failure. It is an observation about structural risk, and about the difference between a business proposition and a fundraising narrative.
As someone who has spent twenty-five years in technology consulting – someone whose actual job involves helping organisations distinguish between what technology can genuinely do and what someone with a pitch deck says it can do – I find the current moment simultaneously fascinating and slightly terrifying. Fascinating because the underlying systems analysis is genuinely complex and intellectually rewarding – navigating this territory is literally why my day rate is what it is. Terrifying because the capital deployed against these assumptions is of a scale where being wrong has consequences that extend well beyond the venture capital ecosystem into pension funds, sovereign wealth, and the broader financial system.
The tide will go out, as it always does. What matters is whether we’ve built something real whilst it was in, or merely arranged the deckchairs in a particularly impressive formation on an increasingly exposed beach.
(Rather more the latter than the former, I suspect. Though I’ve been wrong before, and the recursive nature of being someone who analyses systems for a living means I’m perpetually aware that my own analysis might itself be a form of pattern-matching that mistakes correlation for insight. The snake eats its tail. Welcome to my brain.)
