Getty Images launched Generative AI by iStock earlier this year, a tool that offers small businesses, designers, and marketers “affordable and commercially safe” generative AI capabilities. Getty Images wants to power creativity through the visuals its customers create, and customers want to create those visuals quickly, cost-effectively, and with minimal risk. My interest in the technology came from that risk perspective; I previously wrote about mitigating legal risks when using generative AI. So I engaged with Grant Farhall, Chief Product Officer, and Bill Bon, Senior Director of Creative Operations, both of Getty Images, for a demo to get a few of my questions answered and to look at the additional features they are releasing.

Risk is a significant consideration in AI, especially with generative AI. Many large models are trained on content to which the model builder does not have sufficient IP rights. So when Grant told me that Getty Images’ AI technology was both commercially safe and legally protected, I was intrigued. How is he able to make this claim?

In partnership with NVIDIA, Getty Images says it trained its generative AI model solely on its own creative library. In doing so, Getty Images retains complete visibility into the training data set, knowing precisely what trained its model. Getty Images also ensures these data sets are not polluted; as Grant put it, “If you have complete control over what the model is being trained on, we know it cannot infringe on IP.” Getty Images strives to offer customers a service where they can experiment with AI confident that there will be no downstream challenges to using the outputs. Getty Images defines “commercially safe” as follows: customers can use the outputs commercially without exposing themselves to IP infringement claims or lawsuits. To back up this claim, Grant said the company provides $10,000 of legal protection for generated images on iStock, with higher levels and uncapped indemnification available on gettyimages.com and via its API.

An advantage of this approach is that high-quality ingredients can be expected to produce high-quality outputs. Getty Images’ high-quality creative content library gives its product a competitive advantage, and because that library is diverse, its model can also produce diverse images. Interestingly, the model avoids topics Getty Images deems problematic, such as famous people, landmarks, and trademarks. For example, the model does not know who Taylor Swift is, because there are no Taylor Swift images in its training data. However, it can create generic images of a blonde-haired singer on stage singing to a crowd. Similarly, it can generate running shoes, but not Nike running shoes. So, no Nike, no Taylor Swift, and no Darth Vader.

Getty Images has also implemented technical controls to manage this risk. When you try to create a Nike shoe, for instance, the prompt itself is blocked. As Grant explained to me, blocking potentially problematic prompts helps users identify what is wrong with the prompt, reducing confusion. The reasoning behind this protection is simple. As Grant put it, “‘If you are allowing me to do something, it must be OK.’ Well, that is one reason why we block the prompt.”
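To make the idea concrete, here is a minimal sketch of how a blocklist-style prompt filter could work. This is purely illustrative and is not Getty Images' actual implementation; the `BLOCKED_TERMS` list and the rejection message are hypothetical.

```python
# Illustrative sketch of a blocklist-style prompt filter.
# NOT Getty Images' actual implementation; BLOCKED_TERMS is a hypothetical list.
BLOCKED_TERMS = {"nike", "taylor swift", "darth vader"}

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message), rejecting prompts that mention protected IP."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            # Name the offending term so the user understands why the prompt failed
            return False, f"Prompt blocked: '{term}' refers to protected IP."
    return True, "Prompt accepted."

print(check_prompt("Nike running shoes on a track"))
print(check_prompt("generic running shoes on a track"))
```

A real system would be far more sophisticated (handling misspellings, paraphrases, and image-level checks), but the principle is the same: reject the request up front and tell the user why, rather than silently generating something unusable.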

Getty Images’ goal remains to create a high-quality AI model from transparent and licensed training data. I asked Grant whether Getty Images would claim any ownership interest in the generative AI output. He assured me that while prompts and images may be used to re-train the model, Getty Images does not claim ownership. The generated images are not incorporated into the creative library for sale to Getty Images’ other customers. Furthermore, Getty Images limits its own use of the generated images: because the company is concerned about self-consuming models, as Grant explained, it does not allow the training set to expand with generative AI output.

Getty Images wants its customers to know that they can use the tool safely and are assured of legal protection if needed.
