As generative AI reshapes how content is created and distributed, publishers face a structural shift in value. This article outlines why incremental decision-making leaves businesses exposed and how to assess resilience in an AI-mediated market.
This article is part of an FT Strategies series exploring resilient content businesses in the Age of AI.
When content becomes a data derivative, small bets are the biggest risk of all
Much attention is paid to the difficulty of managing capex-intensive businesses with long investment cycles, where decisions will only yield results in a decade. Less attention is paid to the opposite problem: managing fast-moving, low-capital businesses where the ability to act quickly becomes a bias against structural investment.
This is the curse of small stakes - when individual decisions are low-risk, the accumulation of small actions becomes a de facto strategy. But a decentralised culture built on incremental optimisation and responsiveness to daily news is ill-equipped for structural disruption, in which content itself is being reclassified as a data derivative and value shifts both to the raw material that feeds AI systems and to the interfaces that deliver their outputs. Publishers risk capturing neither.
Most successful post-Internet content businesses test quickly, apply learnings immediately and fail cheaply. Speed pays: content businesses with a culture of experimentation have thrived, and executives who prospered in this environment are often quick-thinking and quick-to-act.
When GenAI arrived, publishers immediately tested the quickest, lowest-risk applications - those that increased efficiency or enabled nice-to-have, but not strategic, enhancements. Meanwhile, a generation of tech start-up competitors is being built from scratch around AI-native workflows unrecognisable to incumbents, organising into a new value chain that bypasses publishers entirely.
It is difficult to predict how exactly the imagination of thousands of talented entrepreneurs will reshape the content market. But being unable to predict the future does not mean it is impossible to prepare for it.
Content businesses need to build from foundations that are resilient to AI competition.
AI removes two important barriers to entry in the content market: the cost and the skill required to produce quality content at scale. The consequences run in both directions:
- Supply: Information is commoditised and global. Content will be micro-targeted and used for brand-building and community strategy. Publishers will compete for attention and talent against players for whom content is a marketing cost.
- Demand: Publishers are disintermediated. The gathering, curation, contextualisation and updating of information are being automated. Users, or the AI agents acting on their behalf, can connect directly to primary sources. Content will be filtered through AI aggregators, which will repurpose raw data, customise outputs and embed them into people’s routines and professional workflows without a publisher in the chain.
Publishers retain a key advantage: as volume becomes overwhelming and agents become a new category of audience, there will be a premium on provenance and trusted sources. And many have spent years calibrating products to user preferences and building trusted brands.
But that advantage does not always translate into monetisation. Our AI resilience testing shows that much (and sometimes most) of what publishers produce can already be replicated by AI with competent prompting plus fact-checking (AI or human); any piece of content derived from public information is at risk.
The window to establish differentiation is closing quickly, and publishers must choose: what to cover, what types of product to develop, what business models to adopt and what audiences - human and machine - they are targeting.
The starting point is an honest assessment of where GenAI poses the greatest threat to the current business. FT Strategies’ framework, combining our proprietary AI content-audit tool with audience analytics, identifies where value is most at risk by splitting resilience into two dimensions:
- Content Replicability: How easy is it for AI to generate your content?
Factors driving resilience include personality, content exclusivity, and format diversification.
- Platform Defensibility: Are users locked in?
Factors driving resilience include low churn, embedding into current workflows, community and network effects, and matching with specific needs.
Underpinning those categories are two preconditions of resilience: discoverability and trust. If users can't find you, differentiation is irrelevant; if they don't trust you, they won't engage with your content in a way that is defensible in the medium term.

Mapping products against this framework and quantifying how much revenue is in each quadrant helps make the risk concrete. Publishers with easily replicable content and no direct connections to users are most immediately exposed, especially if they depend on advertising revenue.

Moving right on the matrix means increasing engagement: making the product more present, more useful and more personalised to target specific “jobs to be done”. For B2B, it means becoming part of a professional workflow, with outputs tailored to different roles, sectors and geographies. This strategic direction requires:
- Investment in tools, features and integrations to support critical decisions. Interactivity, personalised analysis and partnerships will be essential.
- Nudging users to increase their switching cost: saving preferences, customising interfaces, collecting streaks and high scores, commenting or participating in communities.
This requires investment in acquiring and integrating multiple sources of data - not only robust first-party data but also third-party data for contextualisation, personalisation and recommendation engines.
Moving up means making content that AI cannot produce. AI’s ability to mimic style and provide multimodal outputs is improving quickly - anything that can be generated from public information will eventually be generated. Long-term resilience requires uniqueness: original reporting, sourced from exclusive relationships, written with a style that a safety-constrained AI could not replicate. Or proprietary datasets, opinion, analysis and information that competitors, human or artificial, cannot access.
The strategies available for publishers will depend on their position in the matrix:
The Wire is a volume game, better played by global players. Publishers in this quadrant with a wide reach can invest in personalised niche feeds to create communities and build utility by developing data-as-news services that can be licensed wholesale. An example is the Associated Press' strategy of becoming the authoritative source for public structured data, such as election results, which LLM developers find easier to license than to replicate.
Publishers in the Column quadrant have an audience and are often respected brands considered successful in their fields, but success can lead to complacency, and they must ensure their business model is defensible. With little control over how they reach users, monetisation can become difficult if they lose search referrals.
Players in the Terminal quadrant must defend their position in the workflow through partnerships and product investment. These are often profitable businesses providing critical intelligence, but they are at risk when workflows change. One strategy for players trying to move up in this quadrant is to become “content operating systems”: platforms that combine SaaS and content and integrate with businesses’ internal processes, choosing segments of the workflow where they can become indispensable and partnering to consolidate their position.
Publishers in The Club quadrant are often in a privileged position with direct engagement with users and exclusive content. Deep engagement generates data and feedback that improve the product further. However, publishers in this quadrant must ensure they extract full value from their unique data and user relationships. Their position can be eroded by competitors with deep pockets who are adept at capturing users’ attention and a place in corporate workflows.
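To make the mapping exercise concrete, here is a minimal sketch of how a product portfolio could be scored against the two dimensions and revenue totalled per quadrant. The product names, scores, the 0.5 threshold and the orientation of the matrix are illustrative assumptions for this sketch, not FT Strategies' actual methodology.

```python
# Illustrative sketch: place each product on the two resilience dimensions
# and total the revenue sitting in each quadrant of the matrix.
# All products, scores and thresholds below are hypothetical.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Product:
    name: str
    replicability: float   # 0 = unique content, 1 = trivially AI-replicable
    defensibility: float   # 0 = no user lock-in, 1 = embedded in workflows
    revenue: float         # revenue attributable to the product (GBPm)

def quadrant(p: Product, threshold: float = 0.5) -> str:
    """Assign a product to one of the four quadrants."""
    if p.replicability >= threshold:
        # Replicable content: resilience depends entirely on lock-in.
        return "Terminal" if p.defensibility >= threshold else "Wire"
    # Hard-to-replicate content: lock-in separates Club from Column.
    return "Club" if p.defensibility >= threshold else "Column"

def revenue_by_quadrant(portfolio: list) -> dict:
    """Quantify how much revenue sits in each quadrant."""
    totals = defaultdict(float)
    for p in portfolio:
        totals[quadrant(p)] += p.revenue
    return dict(totals)

portfolio = [
    Product("breaking-news feed",    0.9, 0.2, revenue=4.0),
    Product("star columnist",        0.3, 0.3, revenue=1.5),
    Product("market data terminal",  0.7, 0.8, revenue=6.0),
    Product("members-only research", 0.2, 0.9, revenue=2.5),
]

print(revenue_by_quadrant(portfolio))
```

In this toy portfolio the Wire bucket (replicable content, no lock-in) holds 4.0 of revenue: the share most immediately exposed, as described above, and the number that makes the risk concrete for a leadership discussion.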
All of these strategies also pose a fundamental question for publishers: what to stop doing.
The symptom of the curse of small stakes is a proliferation of initiatives without a clear guiding vision - many of them likely positive, but distracting, and propping up services that are likely to be doomed in the long term. The publishers who emerge in the strongest position will be those who deliberately choose to focus on resilient strategies.
FT Strategies has developed a set of products specifically designed to answer these questions:
- AI resilience tests, a 3-4 week review to quantify revenues at risk and opportunities for improvement.
- “AI North Star” workshops with executive and product teams to review their competitive position in the AI value chain.
- AI tech and governance maturity audits to identify priority tactical actions.
- Portfolio reviews, to support publishers in re-evaluating their value propositions.
FT Strategies’ AI Resilience Sprint has been designed to help you assess your exposure to AI, prioritise the most critical risks and opportunities, and define a clear plan for building a more resilient content and product strategy. Get in touch to explore how our targeted, high-impact sprints can support your organisation.
