Just over a year ago, the Financial Times made a deeply consequential decision. Instead of relying on fixed rules to govern access to articles, it launched a new AI-driven paywall that learns on its own, deciding who sees content freely, who receives an offer, and what that offer should be.
The impact was decisive. Conversion rates rose 92%. Progression through the subscription funnel increased 118%. Subscriber lifetime value jumped 78%. Just as importantly, the system reduced manual overhead, freed engineering capacity, and gave the FT confidence to pursue more ambitious reader-revenue strategies than static paywalls ever allowed.
But here’s the cool part. Those numbers matter not because they are exceptional, but because they are repeatable.
Tear Down the Wall
In the 90s, the vast majority of newspapers and magazines decided to give away all their online content for free, then hoped and prayed for the traffic and advertising to arrive. It didn’t pan out. At one point, people were talking about The New York Times going out of business.
We began to see early paywalls (Paywall 1.0), which offered a lifeline, albeit a rigid one. They were brute-force mechanisms that sorted readers into subscribers and non-subscribers, barred the latter from access entirely, and ignored all kinds of alternative revenue opportunities.
At the Guardian Media Conference in 2013, I gave a presentation about Paywall 2.0, an evolution of the model that prioritized reader relationships by making the right offer, at the right time, to the right person. It improved on the original by using user data to identify high-propensity subscribers.
Publishers began using detailed analytics to track user behavior and preferences, which allowed them to target high-potential subscribers with personalized offers and content. It was data-oriented and editorially supervised.
Two Fallacies
Unfortunately, Paywall 2.0 rested on two basic fallacies. The first was that people were the best managers of these systems (as opposed to the systems themselves). There’s simply no way a small team of people can have enough capacity and judgment to handle the daily flux of the news business. If something big happens, it can take days to react. Things move too fast.
The second misunderstanding was that reader data was relatively clean and interpretable. This turned out not to be the case. Media data is a notorious dumpster fire.
“The big problem in media is that the data is really bad,” notes Jonathan Harris, Senior Director of Product at Zuora. “It’s heavily imbalanced. A lot is unobservable. People switch devices, clear cookies, and look like new users when they’re not.”
His colleague Shahbaz Khan describes the task as “arranging random Lego pieces into something that makes sense. The patterns are there, but all the noise kills performance.”
Slow humans. Bad data. What to do?
Paywall 3.0
Paywall 3.0 is agentic. It reacts in real-time. It continuously learns from user interactions, dynamically adjusting its offers to maximize engagement and conversion. It’s the difference between static and dynamic.
“At scale,” Harris explains, “an AI paywall finds the relationship between utility and charge over time out of messy, imbalanced, and constantly moving data.”
Utility here isn’t an editorial judgment. It’s the lived value a reader derives from content in a given moment. “The credibility of the content defines utility,” Harris says, “but the utility itself is defined by the user.”
Static paywalls assume value is uniform, that all articles are equal. Paywall 3.0 challenges that assumption constantly, making millions of real-time decisions about when to block content, what offer to show, and how to guide readers toward subscription.
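To make the idea concrete, here is a deliberately minimal sketch of the kind of always-learning decision loop Paywall 3.0 implies: an epsilon-greedy selector that mostly shows the offer that has performed best so far and occasionally experiments with alternatives. The offer names, conversion probabilities, and exploration rate below are invented for illustration; they are not the FT’s or Zuora’s actual system, and a production paywall would condition on reader context and use far richer models.

```python
import random

# Illustrative sketch only: a minimal epsilon-greedy offer selector.
# Offer names, reward rates, and the 10% exploration rate are assumptions
# for demonstration, not any publisher's real configuration.

OFFERS = ["free_article", "trial_offer", "discount_offer", "hard_block"]
EPSILON = 0.10  # fraction of traffic reserved for exploring alternatives

# Running statistics: how often each offer was shown and how often it
# led to the desired outcome (e.g. a funnel progression or subscription).
shown = {offer: 0 for offer in OFFERS}
converted = {offer: 0 for offer in OFFERS}

def choose_offer() -> str:
    """Mostly exploit the best-performing offer; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(OFFERS)  # explore
    # exploit: pick the offer with the highest observed conversion rate
    return max(OFFERS, key=lambda o: converted[o] / shown[o] if shown[o] else 0.0)

def record_outcome(offer: str, did_convert: bool) -> None:
    """Feed the observed result back so the next decision improves."""
    shown[offer] += 1
    if did_convert:
        converted[offer] += 1

# Simulated traffic: every page view gets a decision, and the outcome is
# fed straight back into the running statistics (invented conversion rates).
for _ in range(10_000):
    offer = choose_offer()
    outcome = random.random() < {"free_article": 0.01, "trial_offer": 0.04,
                                 "discount_offer": 0.06, "hard_block": 0.02}[offer]
    record_outcome(offer, outcome)

print({o: round(converted[o] / max(shown[o], 1), 3) for o in OFFERS})
```

Even this toy loop captures the shift from static to dynamic: no human sets a rule about which offer wins; the system converges on it by testing against live behavior and keeps adjusting as that behavior moves.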
Beyond Media
Please note – this isn’t just a media story! Any business operating in a volatile environment with imperfect data and clear feedback loops can benefit from systems that learn faster than humans can configure rules: ride-sharing surge pricing, airline seats, cloud computing, etc.
The lesson from the FT is simple: growth no longer comes from people making assumptions; it comes from building systems that continuously test (and discard) those assumptions. You’re never going to outthink the data.
It’s the scientific method at scale. And if you have public-facing data and aren’t taking advantage of dynamic paywalls, you’re missing out.
Next week, we’ll take a look inside the curiosity engine of Paywall 3.0. Get ready for some quenched disorder and greedy actions!