Medium’s metric that matters: Total Time Reading

One million page views!
50,000 signups!
Five million posts!
165 million active users!

Web companies like metrics — especially when big numbers can be used to woo the tech media into writing about us.

Away from the publicity glare of the Valley tech blogs, every web company should have some not-so-bullshit metrics that guide the business and provide an indication of its health. Ideally, there is one number to rule them all. Josh Elman calls this The Only Metric That Matters.

At Medium, our number is Total Time Reading, or TTR.


The Only Metric That Matters

Let’s first take a step back. Why have a number at all? And if you accept that numbers are a good way to measure the success of a business, why have only one?

Away from internet-based companies, most businesses measure their success in dollars. But the media industry has always been a little different. Typically, advertisers pay based on the size of an audience. Various techniques have been used to measure audience size: Radio used diaries, in which listeners would write down what they listened to, and when. Print media added up the total number of copies that were distributed or sold, and then made a guess at how many people saw each copy.

When the web took hold (and e-commerce was just a glint in its eye), only events — like page views and, later, clicks — could be measured. With the widespread use of cookies (and Google Analytics), we progressed to talking about users. For non-revenue-generating start-ups, users were the only currency: registered users, sign-ups, and finally active users.

“Big data” has brought with it the luxury of being able to measure any (and every) interaction that a user has with an application. We can record what a user does, with what device, when, and for how long. The data is cheap to store and relatively easy to process.

We’ve crossed a point at which the availability of data has exceeded what’s required for quality metrics. Most data scientists that I meet tell me that they’re gathering way more data than they can ever hope to use. And yet, in many cases, they still don’t have useful metrics.

Businesses (those with revenue models) are still optimizing for money. Today’s wealth of data helps them better understand what drives their revenue. Data analysts can join the dots between the earliest user interactions (like marketing campaigns, referral sources, etc.) and end-of-funnel activities (such as spending money or clicking an ad). The data can also provide insight into product diversification or potential new revenue streams.

Companies that don’t have revenue still need to optimize for user behavior that is valuable. In Medium’s case, that valuable behavior is users engaging with our platform.

Engagement

Engagement has been the buzzword of growth marketers for a couple of years. When a user engages with your platform, you have their attention. And attention is the precious commodity of the super-connected era.

I think of competing for users’ attention as a zero-sum game. Thanks to hardware innovation, there is barely a moment left in the waking day that hasn’t been claimed by (in no particular order) books, social networks, TV, and games. It’s amazing that we have time for our jobs and families.

There’s no shortage of hand-wringing around what exactly “engagement” means and how it might be measured — if it can be measured at all. Of course, it depends on the platform, and how you expect your users to spend their time on it.

For content websites (e.g., the New York Times), you want people to read. And then come back, to read more.

A matchmaking service (e.g., OkCupid) attempts to match partners. The number of successful matches should give you a pretty good sense of the health of the business.

What about a site that combines both of these ideas? I sometimes characterize Medium as content matchmaking: we want people to write, and others to read, great posts. It’s two-sided: one can’t exist without the other. What is the core activity that connects the two sides? It’s reading. Readers don’t just view a page, or click an ad. They read.

At Medium, we optimize for the time that people spend reading.


Measuring reading time

TechCrunch’s Gregory Ferenstein wrote:

In fairness to news editors, we do know how much time readers spend on an article: We know that less than 60 percent will read more than half of an article, and a significant slice won’t read anything at all.

I think this is optimistic. It is true that Chartbeat’s analytics will tell you how deeply users engage with content. By their data, on average fewer than 60 percent of users read more than half an article. We see it differently: for us, there are no average users, and there are no average posts.

We measure every user interaction with every post. Most of this is done by periodically recording scroll positions. We pipe this data into our data warehouse, where offline processing aggregates the time spent reading (or our best guess of it): we infer when a reader started reading, when they paused, and when they stopped altogether. The methodology allows us to correct for periods of inactivity (such as having a post open in a different tab, walking the dog, or checking your phone).
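To make that inference concrete, here is a minimal sketch of how scroll pings might be rolled up into read time, assuming a simplified event schema (user_id, post_id, timestamp) and an illustrative 30-second idle cutoff; Medium’s actual pipeline and thresholds are not public.

```python
# Hypothetical sketch: estimate read time from scroll-ping timestamps.
# The event schema and the 30-second idle cutoff are assumptions.
from itertools import groupby
from operator import itemgetter

IDLE_CUTOFF_SECS = 30  # gaps longer than this count as "walked away"

def read_seconds(events):
    """Sum the gaps between consecutive pings on one post,
    discarding gaps long enough to suggest inactivity."""
    timestamps = sorted(e["timestamp"] for e in events)
    return sum(
        curr - prev
        for prev, curr in zip(timestamps, timestamps[1:])
        if curr - prev <= IDLE_CUTOFF_SECS
    )

def total_time_reading(all_events):
    """Aggregate read time across every (user, post) session."""
    key = itemgetter("user_id", "post_id")
    grouped = groupby(sorted(all_events, key=key), key=key)
    return sum(read_seconds(list(evts)) for _, evts in grouped)
```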

The aggregate Total Time Reading (TTR) is a metric that helps us understand how the Medium platform is doing as a whole. We can slice that number in lots of ways (logged-in vs. logged-out, new posts vs. old, etc.).
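As a toy illustration of that slicing, assuming the per-session read times land in a flat table (the column names and rows here are invented):

```python
# Hypothetical slicing of TTR with pandas; schema and data are invented.
import pandas as pd

sessions = pd.DataFrame([
    {"user_id": 1, "post_id": "a", "logged_in": True,  "post_age_days": 2,  "read_secs": 180},
    {"user_id": 2, "post_id": "a", "logged_in": False, "post_age_days": 2,  "read_secs": 95},
    {"user_id": 2, "post_id": "b", "logged_in": False, "post_age_days": 40, "read_secs": 310},
])

print(sessions["read_secs"].sum())                       # overall TTR
print(sessions.groupby("logged_in")["read_secs"].sum())  # logged-in vs. out
print(sessions.groupby(sessions["post_age_days"] <= 7)   # new vs. old posts
      ["read_secs"].sum())
```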

We’re thinking about other ways in which this data can be used to learn about Medium users — and their interactions with specific posts. For example:

  • How can we motivate users to increase the total time spent reading the posts that they’ve written?
  • We measure the length of posts in Expected Reading Time (a rough sketch of such an estimate appears below). So, which is better: a user spending three minutes reading half of a six-minute post, or a user spending two minutes reading a two-minute post?
  • If a user spends four minutes reading a six-minute post, did she skim it? Is she just a super-fast reader? Or is our time estimate wrong?
  • How long does it take the eye to register an image?
  • What’s the optimal length of a post if we want to maximize TTR?

And so many more.
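On the Expected Reading Time question in the list above: here is a rough sketch of how such an estimate is often computed. The 265-words-per-minute figure and the per-image allowance are assumptions drawn from commonly cited reading-speed heuristics, not Medium’s published formula.

```python
# Hypothetical Expected Reading Time estimate; parameters are assumptions.
WORDS_PER_MINUTE = 265
SECONDS_PER_IMAGE = 12

def expected_reading_secs(word_count, image_count=0):
    return word_count / WORDS_PER_MINUTE * 60 + image_count * SECONDS_PER_IMAGE

# A 1,600-word post with 3 images comes out to roughly 6.5 minutes:
print(round(expected_reading_secs(1600, 3)))  # ~398 seconds
```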

Maintaining perspective in a startup

The high startup failure rate and the increasing popularity of startup roles mean that young people entering the workforce are perhaps more likely to experience redundancy than previous generations.

This, so logic would have it, will be a traumatic experience that comes totally out of the blue. But should that really be the case? After all, the only startups that really go on to become the next Facebook or Google are Google and Facebook.

Writing on the wall

A few months back, I was made redundant from a startup (not the one in my profile tagline) that I had been working with on-and-off for the past two years. The company was pivoting its strategy towards (what will hopefully be) greener pastures and my entire team was laid off as a consequence. Seeing two years of work amount ultimately to nothing more than audience-building for the new product launch was a predictably disappointing experience.

In truth, the writing had been on the wall for a while — not least because the entire company was aware that we were shifting our business model and strategy. Funding can only stretch so far and we were planning to rebuild from the ground up; inevitably, heads had to roll.

Unlike my colleagues, I was in the unique position of only working part-time while I finished my studies in advance of joining a law firm. Accordingly, beyond the immediate disappointment, I was not plunged into the same insecurity as my peers. Above all, I did not have to go through the rigmarole of applying for new positions while worrying about how I was going to make next month’s rent.

The paradox

Herein seems to lie the unspoken paradox of working in a startup: you typically work long hours and accept low pay in order to scale a business that you probably do not have any meaningful equity in.

There are, of course, tremendous upsides too. In my two years at the company, I reckon I learned more than I would have during two years of a business or management degree — not least because the various vicissitudes of the business moved faster than a university syllabus ever could.

As clichéd as it sounds, when I joined the business there were just three of us in an office basement. I took a personal hiatus to intern at an investment bank for three months before rejoining the company to find that we had moved into a new office, tripled our headcount, and were on the verge of closing our first major funding round. I assisted with preparations for our first investor show, did my best to source new talent as we grew, and generally chipped in whenever I was needed.

Regardless, we all worked very hard — though it must be said, none more than our CEO/founder — despite many of us knowing that we could be making more elsewhere. My own team even helped to formulate our pivot, which, in the end, amounted to signing our own proverbial death warrants, before seemingly proceeding to forget what we had just done.

I think that most people who work in very young startups are eager to pitch in above and beyond their pay-grade, not because they are desperate for a promotion but merely because there is a very clear sense that the company they work for ultimately consists of the people around them. (Of course, the veracity of this belief ultimately boils down to whether one considers a company to be its employees, its shareholders, or a combination of both.)

In this environment, it’s easy to occasionally lose perspective as your own personal goals become intertwined with your employer’s.

Maintaining perspective

Ultimately, a job is a job — unless you own tangible equity in the startup you work for, this is a fact worth remembering. Business is risky, and none are more so than startups, particularly when even the faintest sliver of profit is beyond the horizon. Maintaining perspective is therefore paramount, since your job can be snuffed out by a change of strategy or a dearth of cash.

Working in a startup often involves squaring the paradox of accepting both long hours and (often) lower pay with what the numbers do not show: one hell of an experience, an insane learning curve, and the chance to actually build something. In my brief experience, at least, those stereotypes undeniably held true.

As such, I have just three words for my colleagues who remain: best of luck.

New technology: bad for radio?

Automation Killed The Radio Star, says the latest blog from Dick Taylor, a US radio writer.

Two things about this.

The first is the use of a lazy Buggles headline. Radio is still very much alive, with 9 out of 10 people in most large countries listening every week. Nothing has killed anything.

I collect lazy Buggles headlines. The song’s video was, of course, the first ever played on MTV, back in the days when it played music instead of vapid reality television shows. Amusingly, radio outlasted MTV.

Every time we repeat a “killed the radio star” headline, we reinforce the thought that radio is, in some way, in trouble. It isn’t. For parts of the US population, radio is more popular than television!

The other part of Dick’s blog post that I disagree with is the finger-pointing at technology — in this case, automation.

It takes people to use, or misuse, any form of technology. Technology, by itself, isn’t capable of being good or bad.

After all, the postal service is not a bad thing just because people occasionally send bad things through it.

Automation is capable of getting the best out of your programming. It’s capable of providing a warm, friendly voice overnight, instead of a tone or piped-in programming from the other side of the world.

Automation is capable of polish and tweaks that were impossible in the age of cart machines and turntables.

Poor automation is poor radio, granted — but we’d be foolish to claim that all automation is poor.

New technology, used well, has the potential to delight our audience and, out of that, to bring ratings and revenue. Used badly, it can have the opposite effect.

But, as is hopefully relatively clear, I’m a fan of what new technology can bring to radio. Including automation.

If anything killed the radio star, it’s the humans who used automation badly. Perhaps radio needs fewer of those humans.

Bloomberg Media is using text-to-audio to keep app users engaged

Bloomberg Media in May introduced a text-to-audio function in its app and online with the hunch that commuters would prefer to multitask while getting their news.

According to Julia Beizer, global chief product officer, adoption started slowly, particularly on mobile web: shortly after launch, people were listening to an average of two and a half stories per app session. That figure has since risen to six stories, and audio has become the second-most popular media type in the app (behind live TV).

“Audio is particularly interesting for our audience because of that multitasking utility; that is a real news use case,” said Beizer. “The delivery of journalism is changing to meet this moment, and audio for a multitasking audience is a huge tool in our toolkit.”

Publishers like the Financial Times, which has a similar audience segment of global business decision makers, have been converting text articles to audio since last year and are seeing that people come back regularly to listen.

Audio fits into the product team’s wider goal of driving utility for the Bloomberg audience, particularly a younger audience. According to Beizer, the Bloomberg audience is varied in age, skewing younger than expected in some areas. For the Markets area of the site, for instance, 48 percent of the audience is under 35 years old.

Studies show that podcast listeners tend to be younger: Research from U.K. radio trade body Radio Joint Audience Research in March found that two-thirds of new podcast listeners are aged between 16 and 35. And the audience keeps growing: 21 percent of podcast listeners started listening in the last six months.

Bloomberg has taken advantage of the renaissance in podcasting. The company said that audience downloads for its roughly 25 podcasts have increased 35 percent year over year, but it was unwilling to give exact numbers.

Bloomberg broadcasts a number of different podcast formats. One of the most recently launched, TicToc, an extension of its Twitter news network, covers the daily news. This summer Bloomberg ran its first mini-series podcast with The Pay Check, a six-episode series looking at the gender pay gap through sociological, financial and personal lenses. Since its launch in May, the podcast has had 200,000 downloads. That success, said Beizer, is encouraging Bloomberg to create more mini-series this year, including one on the new economy, covering the challenges facing the world economy, and one on navigating the productivity industry.

Bloomberg was early to develop skills for Amazon’s audio-focused Echo devices and has two to three people who work on getting content like its Market Minute onto other smart speakers, such as the Apple HomePod and Google Home.

But the scale isn’t there on smart speakers for Bloomberg to create platform-specific content. Bloomberg’s Twitter show, TicToc, which is distributed on Amazon’s Echo Show, a device that features a screen, is performing well, according to Beizer, because both social content and Echo Show content are experienced with the sound off.

“The rise of smart speakers is particularly remarkable in an era when every one of us would rather get a text message than talk on the phone,” she said.


WHAT LIES BEYOND PAYWALLS


“We can combine machine learning, predictive, and anticipatory analytics to optimize the value exchanged from this reader, on this device, coming from this platform, on this article, at this exact moment in time.”

For over a decade, digital publishers have been wrestling with an existential strategic question: Should we pursue consumer or advertising revenue as our primary revenue stream? In 2017, that question, and the tradeoff it implies, will be rendered obsolete by the widespread adoption of machine learning and of predictive and anticipatory analytics. In creating a dynamic meter among publishers, their readers, and their advertisers, these algorithms have the potential to transform how the publishing industry generates revenue.

One exciting side effect of building a dynamic meter is that it puts the entire organization’s emphasis on each individual story. If the story isn’t of high journalistic or engagement value, then it becomes much harder to build a business model around it. This moves publishers away from the all-or-nothing pursuit of scale at the expense of depth for advertising models, or loyalty at the expense of reach for consumer revenue models. Each article has to stand on its own. This dramatically changes the calculation that reporters and editors make in determining whether to cover a story.

All of it makes the notion of having binary on-or-off paywalls and press releases touting “10 free articles a month” seem antiquated.

Machine learning

Most publishers still identify their articles through the traditional tagging mechanisms of People, Places, and Topics. While these tags are helpful for categorizing the “what” of our journalism, they are unsophisticated tools for categorizing the “why” of our journalism. For example, when we tag a story as People: Donald Trump, Places: Washington, D.C., and Topic: Politics, we now know how we can present that article to our reader — but we don’t know why that story exists.

By mining the text of the article using natural language processing and seeking out complex patterns, machine-learning tools like IBM’s AlchemyAPI can go deeper in describing the emotional drivers of that story. Perhaps the article is really about a reader feeling outrage, or about feeling like the underdog. So while it may seem at first glance to be another story about the Trump transition team, machine learning may reveal that the story has more in common with a sports underdog story about Cleveland’s baseball team.
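As a toy illustration of the idea (a stand-in for a service like AlchemyAPI, not its actual output), here is a sketch that tags an article’s dominant emotional driver using a hand-rolled lexicon; the word lists are invented for the example.

```python
# Hypothetical lexicon-based emotion tagger; vocabularies are invented.
EMOTION_LEXICON = {
    "outrage":  {"scandal", "betrayal", "corrupt", "outrageous"},
    "underdog": {"comeback", "longshot", "defied", "upset"},
    "hope":     {"recovery", "breakthrough", "promise", "optimism"},
}

def emotional_driver(text):
    """Score the text against each emotion by counting lexicon hits,
    and return the best-scoring label with the full score table."""
    words = set(text.lower().split())
    scores = {emo: len(words & vocab) for emo, vocab in EMOTION_LEXICON.items()}
    return max(scores, key=scores.get), scores

label, scores = emotional_driver(
    "a longshot comeback defied the odds in an upset for the ages"
)
print(label, scores)  # underdog, with the per-emotion counts
```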

It’s the Kurt Vonnegut theory of storytelling — that every story follows a consistent emotional pattern — come to life on every article.

Once publishers can identify the emotional response a given article elicits in readers, they can begin to understand reader patterns and how eliciting specific emotions could create measurable value for each unique visitor.

Predictive analytics

Predictive analytics have the potential to increase revenue for publishers on an article basis while reducing overall cost-per-acquisition. Analytics can also help better convert visitors to subscribers, and most importantly, increase readers’ satisfaction.

Imagine a reader browsing the web on her smartphone while on a train heading into work. She clicks on a link through Reddit and arrives on your news site, where she is served a paywall. Using predictive analytics, you can be quite certain that this Reddit mobile reader will not subscribe to your website. In fact, she may even post on Reddit just how much she despises your paywall. So, instead of wasting time trying to get that reader to subscribe, what other kinds of value could you exchange with her that could be of mutual benefit? Perhaps it’s an email newsletter signup form that could begin an inbound marketing relationship? Perhaps it’s a video preroll ad with a high CPM to generate maximum ad revenue? Perhaps it’s a prompt for the reader to “like” you on Facebook so that she can help expand your reach?

By looking at the data provided by past readers, publishers can predict what the ideal value exchange and conversion rate would be for any visitor arriving on any individual article from any referrer, any platform, at any time of day and then serve them a dynamic meter accordingly. Yes, achieving this will require an investment in data analysts, but there are already third-party tools on the market that could reduce the cost of implementation.
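A minimal sketch of that predictive step, assuming a logistic-regression model over a handful of context features and invented dollar values for each possible exchange; none of the numbers come from a real publisher.

```python
# Hypothetical conversion model + value-exchange chooser; all data invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

history = [  # (visitor context, subscribed?) from past traffic
    ({"referrer": "reddit",   "device": "mobile",  "hour": 8},  0),
    ({"referrer": "email",    "device": "desktop", "hour": 20}, 1),
    ({"referrer": "homepage", "device": "desktop", "hour": 12}, 1),
    ({"referrer": "reddit",   "device": "mobile",  "hour": 9},  0),
]
X, y = zip(*history)
model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(list(X), list(y))

def best_offer(visitor, subscription_ltv=120.0):
    """Pick the exchange with the highest expected value for this visitor."""
    p_subscribe = model.predict_proba([visitor])[0][1]
    offers = {
        "paywall":    p_subscribe * subscription_ltv,  # expected sub revenue
        "preroll_ad": 0.03,                            # assumed per-view value
        "newsletter": 0.50,                            # assumed lead value
    }
    return max(offers, key=offers.get), offers

print(best_offer({"referrer": "reddit", "device": "mobile", "hour": 8}))
```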

Anticipatory analytics

Predictive analytics have one fundamental flaw: They’re only based on historical data. By learning on the fly, anticipatory analytics are able to adapt in real-time to the conditions surrounding an article.

Remember that Reddit reader who, the historical data suggests, is unlikely to subscribe? Well, what if the article she is clicking on was an exclusive investigation directly related to Reddit itself? What if the article was just beginning to gain traction in the digital platform ecosystem but wasn’t yet being picked up by other news outlets? Would that Reddit reader be more likely to subscribe then?

Anticipatory analytics allow publishers to make value-exchange decisions in real time on each article. If they wished, this could allow them to wall off an article for subscribers only for precisely the window when it is of greatest value, and then “open it back up” when the value of an ad impression surpasses the value of potential subscription revenue.
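A sketch of that real-time rule, with invented multipliers standing in for whatever a production system would learn from live signals such as share velocity and exclusivity.

```python
# Hypothetical anticipatory wall decision; multipliers are invented.
def wall_decision(p_subscribe, subscription_ltv, ad_value_per_view,
                  share_velocity, is_exclusive):
    # Boost subscription odds while the story is exclusive and accelerating.
    boost = 1.0 + (0.5 if is_exclusive else 0.0) + min(share_velocity / 100, 1.0)
    expected_sub_value = min(p_subscribe * boost, 1.0) * subscription_ltv
    return "hard_wall" if expected_sub_value > ad_value_per_view else "open"

# The Reddit reader's odds look different once the story is an exclusive
# that is gaining traction:
print(wall_decision(p_subscribe=0.01, subscription_ltv=120.0,
                    ad_value_per_view=0.02, share_velocity=80,
                    is_exclusive=True))  # hard_wall
```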

Machine learning + predictive analytics + anticipatory analytics

Now that we understand how machine learning can identify emotional drivers within stories, how predictive analytics can identify the value exchange between publishers and their individual readers, and how anticipatory analytics can adjust the maximum value exchange on the fly, we can begin to envision how combining all three could become the holy grail for publishers.

For advertisers, a publisher can identify that an article eliciting hope generates higher interest from a Facebook visitor than an article eliciting fear. Knowing that hope results in greater advertiser satisfaction when a brand is placed next to such stories, publishers can raise the price of placing an advertisement next to that piece of journalism, or insert a relevant piece of sponsored content into the article, while the story is being amplified and accelerating in interest.

From a subscription standpoint, a publisher can adjust paywalls according to a visitor’s likelihood of subscribing. For example, if you identify that articles eliciting anger generate higher interest from a desktop homepage visitor than those eliciting sympathy, and that this anger results in a higher subscription conversion rate while the story is amplifying in interest, you could put a hard wall around that article for that unique moment in time.

None of these scenarios are mutually exclusive. We can combine machine learning, predictive, and anticipatory analytics to optimize the value exchanged from this reader, on this device, coming from this platform, on this article, at this exact moment in time. In other words, a dynamic meter.
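Reusing the functions from the three sketches above (all of them illustrative), a dynamic meter might be wired together like this.

```python
# Hypothetical dynamic meter combining the three illustrative sketches.
def dynamic_meter(article_text, visitor, live_signals):
    emotion, _ = emotional_driver(article_text)        # machine learning
    offer, values = best_offer(visitor)                # predictive
    wall = wall_decision(                              # anticipatory
        p_subscribe=model.predict_proba([visitor])[0][1],
        subscription_ltv=120.0,
        ad_value_per_view=values["preroll_ad"],
        share_velocity=live_signals["share_velocity"],
        is_exclusive=live_signals["is_exclusive"],
    )
    return {"emotion": emotion, "offer": offer, "wall": wall}
```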

A prediction caveat

What could prevent this prediction from coming true is not technology, but organizational culture. The technology needed to pursue the dynamic meter already exists. The challenges of implementation are less technical and more cultural, as they would require publishers to collaborate across departments. In order to maximize overall revenues, publishers may have to accept lower revenues in one department’s P&L or another. It also requires a serious investment in analytics, product, and technology at a time when budgets continue to shrink in newsrooms across North America. Finally, it requires publishers unifying around the common goal of putting their readers and their stories, not their own self-interest, first.

If collaboration, investment, and philosophy can align, then 2017 may be the year when a sophisticated, data-driven approach to revenue comes to fruition, and a dynamic value exchange among readers, advertisers, and publishers can be achieved.


Our 2019 challenges: Voice, Video and new Visual Journalism

Next year, our profession will have to overcome a VVV challenge. We value your help in shaping the upcoming GEN Summit, so please reach out to us if you have any suggestions for speakers, sessions and new formats of interaction with the audience.

Anticipating months in advance what the next hot topics of the news industry will be is becoming more and more difficult every year because of the speed of media innovation: how can we predict what will be relevant for an editor-in-chief in June 2019 during the fall of 2018? Nevertheless, at the Global Editors Network, we like to take risks and, so far, they have paid off: in 2017, we said that there was a platform crisis and in 2018, we focused the GEN Summit on AI, machine learning and blockchain.

  1. Voice and Voice AI are the obvious topic for 2019. All the editors I have met in the last six months have become obsessed with the smart speakers offered by Amazon, Google and Apple. There are already 47 million smart speakers next to people’s couches in the United States, a country with 220 million adults. According to Chartbeat, this number could double in the next six months, which means this mass phenomenon will very soon have a serious impact on how news is consumed. Who will be the winners of this new battle? Big conglomerates, local news providers, or infotainment companies? What will the speed of adoption be in Europe, Asia, Latin America, and Africa? It seems obvious that countries with strong public or private broadcasters will be at the forefront of this evolution, but what about new agile players?
  2. Video is the second priority for next year. Quartz editor Kevin Delaney has said that we’re entering ‘the golden age of video journalism’, an idea that I take very seriously, even if the “pivot to video” strategy failed for some media organisations. It means YouTube videos will lose their current monopoly in the coming years. Think about the new wave of live events, anticipate news shows on Facebook Watch and very soon on Netflix, imagine new formats based on vertical or square videos on Snap and Instagram, and you’ll immediately understand that we’re in a new era for video journalism. And it is only just beginning.
  3. Why are we talking about visual journalism as the third challenge? The term seems a bit outdated or overused, but 2019 will be the year of new storytelling methods based on a real integration of text, sound, video and data visualisations. New software will help us conceive new ways of telling stories, as will AR experiments on smartphones and on new devices such as smart glasses and smartwatches. The human brain is ready to welcome these new ways of telling stories, but we’re still waiting for a new generation of storytellers. 2019 will also be the year of the first experiments in monetising visual journalism, something that has never happened before.

Our industry is of course facing other issues, including two more V challenges: Verification and the Value of news. Journalists will continue to have to fight misinformation, propaganda, and rumours. Rather than just producing news, they will have to manage and control what happens on social media and closed messaging apps. In terms of value, new membership and subscription models are based on the idea that users are ready to pay for exclusive or personalised information and that micropayment systems can allow newsrooms to reach a very fragmented audience. While these issues are already well framed, we will continue to discuss them and strive to find new solutions at this year’s GEN Summit.