How I'm Using AI In Publishing: The Future of Product Photography with Generative AI
At Ulysses, generative AI is set to rapidly transform the way we market books to readers through images. And as book discoverability becomes more and more difficult, we are eager to embrace anything that might make our digital media assets more engaging. Take the example below: a simple product shot of our book, The Golf Bucket List. The original product image was shot quickly on an iPhone and dropped into Photoshop; the plain background was then masked out and a golf course generated in its place using Adobe's AI. (Unlike other image generators such as DALL-E and Midjourney, Photoshop's AI is trained on fully licensed images from Adobe's stock archives, although there may be some concerning hidden issues with that claim.)
These types of tools have huge implications for book marketers. For one, dynamic product photos can be generated extremely quickly (this one took a few minutes from start to finish). The speed and ease of these tools will allow marketing teams to build out realistic, on-brand settings without the need for expensive photo shoots. Marketers will be able to rapidly test creative, adjust campaigns around seasons, holidays, or major events, and personalize product photography to individual audiences.
Taking it a step further, AI is beginning to allow for a blend of still product photography and video movement. Myriad companies have sprung up offering AI wrappers that can transform still images into animations and videos. (For The Golf Bucket List, I tested out Appypie, with meh results.) But you can definitely see where this is headed over the months to come, and how it will transform the way marketing teams create imagery for and advertise books across the internet.