What Can We Learn from Getty's New Free Embed Model?
Getty Images made an interesting content-usage model announcement last week. After years of playing whack-a-mole with everyone who's ever stolen one of their images, Getty decided to embrace the free model for a portion of their library. You'll find additional details on this here and here.
As a wise man once said, you can significantly reduce piracy if you make your content available at a reasonable price and in a convenient format.
OK, free is a pretty radical price and of course piracy evaporates when content becomes free. But it's important to note that Getty isn't just giving up and letting pirates have their way. They've introduced a model that I think could become a viable template for other types of content.
Note that Getty isn't saying everyone can now just copy and paste the images into their sites. Getty is instead providing a snippet of HTML code you'll use to legally embed the images on your site. This approach offers a number of benefits for Getty including tracking and, more importantly, a new potential revenue stream.
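To make the mechanism concrete: embed snippets of this kind are typically an iframe pointing back to the provider's servers. The markup below is illustrative only; the host, path, image ID, and dimensions are assumptions for the sake of example, not Getty's actual embed code.

```html
<!-- Hypothetical embed snippet. The URL and image ID are assumptions,
     not Getty's real markup. The key point is that the content is
     served from Getty's servers, not copied onto yours. -->
<iframe src="//embed.gettyimages.com/embed/123456789"
        width="594" height="396"
        frameborder="0" scrolling="no"></iframe>
```

Because every view of the image is a request to Getty's servers, Getty can log where and how often each image is displayed, and it retains the ability to change what renders inside that frame at any time.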
When you embed the code in your site, Getty can gather data about where their images are being viewed and who is viewing them. This data could eventually be valuable to Getty as they'll suddenly have access to plenty of metrics they knew nothing about before. Unless you've been hibernating the past few years you know that big data can be quite valuable, and it's easy to see how Getty's data will become big rather quickly.
What's even more intriguing to me is how Getty will be able to control how those images are rendered on your site. The images live on Getty's servers, much like YouTube videos live on Google's servers.
Do you remember when YouTube videos didn't have pre-roll ads? These days it's rare to watch any YouTube video without first having to sit through a short ad.
Will embedded Getty images soon have ads in them? Maybe. It makes sense for Getty to at least experiment with ads. They'll have plenty of opportunities to study all that data they're gathering to determine the viability of ads with images.
What about other types of content? Magazine and newspaper articles come to mind. How often is that content illegally copied and pasted onto someone's website? How often is the same content scraped off the publisher's website and dropped into an app? The app might give credit to the publisher, and even offer a link to where the content originally appears on the publisher's site, but how many times do readers get what they need from the ad-free scraped version and never click through to the publisher's site? How many ad impressions are lost as a result?
What if publishers offered a model like Getty's, so someone could grab a snippet of code to embed the article in their own site? That version would provide data and a possible advertising opportunity, just like Getty's.
OK, I think I know what would happen... Most publishers would resist, saying they want the traffic coming to their own site and threatening legal action against anyone who copies and pastes illegally. The smart publisher would instead embrace this for the data and the new, alternative revenue opportunities it represents.
I can't wait to see how Getty's model evolves and whether it will expand into other types of content.
Related story: Oyster Books: Disrupting the Disruptor
Joe Wikert is Publishing President at Our Sunday Visitor (www.osv.com). Before joining OSV Joe was Director of Strategy and Business Development at Olive Software. Prior to Olive Software he was General Manager, Publisher, & Chair of the Tools of Change (TOC) conference at O’Reilly Media, Inc., where he managed each of the editorial groups at O’Reilly as well as the Microsoft Press team and the retail sales organization. Before joining O’Reilly Joe was Vice President and Executive Publisher at John Wiley & Sons, Inc., in their P/T division.