A new system uses AI to automatically find relevant broadcast-quality video clips from any broadcaster and embed them in an article in seconds.
You may have read articles that are accompanied by videos, usually of interviews with featured subjects, in which they delve deeper into the topic at hand. Or, in the case of many news sites, the video simply replicates the text underneath.
Oovvuu, which provides digital video capabilities to publishers, has been working to up the game on this text-video fusion, and lately it has been able to significantly increase the volume of video delivered as part of content, in real time, using artificial intelligence.
This is a key capability sought by online publishers seeking to deliver richer and better-targeted content to readers. Audiences now spend 88% more time on websites with video, along with the ads that accompany that content. Manually identifying and embedding the right video at the right time, given the huge amount of content now available, is a herculean task.
With major publishers “now writing and syndicating as many as 2,000 stories an hour, they simply can’t manually create, or find, enough relevant high-quality video content fast enough,” Ricky Sutton, founder and CEO at Oovvuu, explains in a case study published by IBM that discussed boosting digital publishing revenue up to 35 times with IBM Watson. “And yet they have to get the video somewhere.” The way this was typically done was to employ “a dozen or more producers to create or find relevant videos to embed in articles,” he continues. “At best, teams could insert perhaps 40 videos into articles daily, driving a few million views, which isn’t enough readership to compete for ad dollars with digital platforms like Facebook, YouTube, or Google.”
To help publishers address this challenge, Oovvuu updated its Compass video distribution platform with machine learning and natural language processing capabilities. The goal was to automate video selection, enabling editors to find the right broadcast-quality clip from any broadcaster in the world and embed it in an article within seconds.
Oovvuu employed IBM Watson technology to build an AI contextuality engine that matches videos with articles, even as news breaks, based on the focus and nuances of each reported story and an analysis of the video content. The service scans the metadata of the thousands of articles on a given topic, extracting from a pool of 26 million possible keywords and concepts, the case study reports. The system then weights each keyword based on its location in the article and instantly assigns each article a “contextuality index,” which is used to identify and rank the most appropriate videos to embed within the article.
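The case study does not disclose how Watson computes the contextuality index, but the described pipeline — extract keywords, weight them by where they appear in the article, then rank candidate videos by weighted overlap — can be sketched in a few lines. Everything below (the section names, the position weights, the function names) is an illustrative assumption, not Oovvuu's actual implementation:

```python
from collections import Counter

# Assumed position weights: a keyword in the headline signals the article's
# focus more strongly than one buried in the body.
POSITION_WEIGHTS = {"headline": 3.0, "lede": 2.0, "body": 1.0}

def contextuality_index(article_sections, video_tags):
    """Score how well a video's tags cover an article's weighted keywords.

    article_sections: dict mapping section name -> list of extracted keywords
    video_tags: set of keywords describing the video's content
    """
    weighted = Counter()
    for section, keywords in article_sections.items():
        w = POSITION_WEIGHTS.get(section, 1.0)
        for kw in keywords:
            weighted[kw] += w
    # Sum the weights of article keywords the video also covers.
    return sum(weight for kw, weight in weighted.items() if kw in video_tags)

def rank_videos(article_sections, video_library):
    """Return (video_id, score) pairs for a library of tagged videos, best first."""
    scores = [
        (vid, contextuality_index(article_sections, tags))
        for vid, tags in video_library.items()
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

In practice the matching would run over NLP-derived concepts rather than literal keyword overlap, but the shape — weight by location, score each candidate, rank — is the same.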
The platform also scans a cloud-based library populated with thousands of videos from more than fifty broadcasters, including the BBC, Bloomberg, AFP, Reuters, ITN, and ABC. With the system, editors can embed as many as 500 videos into articles in an hour instead of just 40 a day. “It’s like a global news editor being run by AI that helps human editors ensure the most relevant video will be in the article,” Sutton says. The AI system also scores embedded videos based on viewership data, and if the video is not well received, the system adjusts the video’s contextuality index score and automatically replaces it with another video.
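The feedback loop Sutton describes — score an embedded video on viewership, demote it if it underperforms, and swap in the next-best candidate — can be sketched as follows. The threshold, the completion-rate metric, and all names here are assumptions for illustration; the source only says the system adjusts the contextuality score and replaces the video:

```python
# Assumed performance threshold; the case study does not specify one.
MIN_COMPLETION_RATE = 0.25

def review_embed(embedded_id, completion_rate, ranked_videos, scores):
    """Return the video id to embed after a viewership check.

    ranked_videos: list of candidate video ids
    scores: dict of video id -> contextuality score (adjusted in place)
    """
    if completion_rate >= MIN_COMPLETION_RATE:
        return embedded_id  # performing well; keep it
    # Demote the underperformer so it ranks lower in future matches.
    scores[embedded_id] *= completion_rate
    # Replace it with the best-scoring remaining candidate.
    candidates = [v for v in ranked_videos if v != embedded_id]
    return max(candidates, key=lambda v: scores[v]) if candidates else embedded_id
```

The key design point is that the demotion persists: a video that fails in one article context scores lower the next time the engine considers it for a similar story.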