VIEQUES — For video to become more discoverable, its metadata needs to be more sophisticated, connecting it to text and to other related videos, says Tom Wilde, CEO of the content optimization platform RAMP, during an interview with Jason Pontin, Editor-in-Chief and Publisher of the MIT Technology Review, at the Beet.TV executive retreat.
“Videos are the next ‘text documents,’” Wilde says in a deep-dive discussion at the conference, and technologists are working on ways to ensure that video properly complements articles, text and other video via metadata.
“How do you make video contextual to something else?” Wilde asks. “How do you relate two videos together even if the user hasn’t expressed that? How do I create video training material that references printed manuals that go with it, for instance?” That’s the next level of video metadata RAMP is working on, he tells Pontin. The key is to “wrap” videos with additional content and context so they can better augment broadcast programming, photo slideshows and e-commerce.
“But I can’t do that unless I know what’s going on, so [it’s a matter of] how to associate the video with more rich content.” Doing so makes video assets more valuable to programmers: they become more discoverable, representing a bigger opportunity to make money with the video, he says.
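Wilde doesn’t detail RAMP’s internal format in the interview, but the idea he describes has a familiar public shape. The sketch below, in Python and purely illustrative (the video title, manual name and URL are hypothetical, and the use of schema.org’s VideoObject vocabulary is an assumption, not RAMP’s system), shows how a video might be “wrapped” in machine-readable metadata that ties it to a transcript and to the printed manual it references:

```python
import json

# A minimal, hypothetical "wrapper" for a training video, using schema.org's
# VideoObject vocabulary. The transcript gives search engines a text layer,
# and "mentions" relates the video to the manual that goes with it.
video_metadata = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Assembling the X200 workbench",            # hypothetical video
    "description": "Step-by-step assembly walkthrough.",
    "transcript": "First, attach the left leg panel ...",  # searchable text
    "keywords": "assembly, workbench, X200",
    "mentions": [
        {
            "@type": "CreativeWork",
            "name": "X200 Owner's Manual",               # hypothetical manual
            "url": "https://example.com/manuals/x200.pdf"  # placeholder URL
        }
    ],
}

# Embedded in a page as JSON-LD, metadata like this is what lets search and
# recommendation systems "know what's going on" inside the video.
print(json.dumps(video_metadata, indent=2))
```

Metadata of this kind is what turns a video from an opaque file into something that can be matched to articles, other videos and commerce pages.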
RAMP is hosting a Beet.TV Leadership Summit at NBC News on February 12.