Adobe has announced a series of new AI features for its flagship graphics editing package Photoshop.
Among the new AI features being added to Photoshop are:
- The ability to generate AI images from a blank canvas within Photoshop for the first time
- The option to replace the background of images with AI-generated content
- The ability to supply Photoshop with reference images whose style you want the AI to copy
- An option to detect people in a photo (such as tourists around a landmark) and automatically remove them
Adobe is also adding generative AI features to other apps in its portfolio, including Lightroom and the publishing package InDesign.
The new features come alongside an upgrade to Adobe’s AI engine, now called Firefly Image 3, which the company claims does a much better job of rendering lines and structures and broadens the range of images the AI can deliver.
Photoshop AI Upgrades
Arguably the most striking new AI feature in Photoshop is the ability to generate images from scratch using text prompts. Previously, you could only add AI-generated elements to existing images; now Adobe is allowing customers to start with a blank canvas and simply type a text prompt describing the image they want the AI to generate.
Automatic background removal has been a feature of Photoshop for some time, but now customers can generate AI replacements. For example, you might have a photo of a dog lying on grass, select the option to remove the background, and generate a beach scene in its place.
Photoshop offers three alternative backgrounds to choose from, each adapted to the lighting, size and positioning of the subject. In the case of the dog, for instance, the sand on the beach should form around the dog’s paws and body.
One thing that hasn’t improved since the previous generation of Firefly AI is the resolution of the images created. Whether you’re generating images from scratch or generating new backgrounds, the output tops out at around 1,500 x 1,500 pixels, which could make it difficult to use generated images at full-page size in magazines, for example: at a typical print resolution of 300 dpi, 1,500 pixels covers only five inches. If the image whose background is being replaced is larger than 1,500 x 1,500, the generated background will effectively be stretched to fit.
Adobe’s chief technology officer, Ely Greenfield, told me the company is working to improve resolution, but said it becomes a question of balancing cost against processing time.
He said there are three ways the company could improve the resolution of generated images. The first would be with “more horsepower—just throw more data at it, more compute at it.” However, Greenfield admitted that extra compute power “gets very expensive, very quickly” and considerably increases the time it takes to generate images.
Alternatively, the company could apply upscaling. “We can separate the task of generating [images] from the task of making it really detailed,” he said.
The third method would be to generate images piece by piece instead of as one complete image. “We’re looking at all three of those [methods],” Greenfield claimed.
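To make the trade-offs concrete, here is a minimal Python sketch of the second and third approaches. It is purely illustrative: the `generate()` function is a hypothetical stand-in for a text-to-image model (Adobe has not published how Firefly works), and Pillow’s resampling stands in for the machine-learning upscaler a real pipeline would use.

```python
# Illustrative sketch only: generate() is a hypothetical stand-in for a
# text-to-image model call; Firefly's internals are not public.
# Requires Pillow: pip install pillow
from PIL import Image


def generate(prompt: str, size: tuple[int, int]) -> Image.Image:
    """Hypothetical stand-in for a text-to-image model."""
    return Image.new("RGB", size, "gray")  # placeholder output


# Approach 2: generate at the model's native resolution, then upscale as a
# separate step (a real pipeline would use a learned super-resolution model
# rather than plain resampling).
native = generate("a dog on a beach", (1500, 1500))
upscaled = native.resize((3000, 3000), Image.Resampling.LANCZOS)

# Approach 3: generate the image piece by piece as tiles and stitch them
# together into a larger canvas.
TILE = 1500
full = Image.new("RGB", (3000, 3000))
for x in range(0, 3000, TILE):
    for y in range(0, 3000, TILE):
        tile = generate("a dog on a beach", (TILE, TILE))
        full.paste(tile, (x, y))
```

The sketch glosses over the hard parts: a real tiled pipeline has to hide the seams between tiles and keep them globally consistent with one another, which helps explain why Greenfield describes all three as options still being weighed.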
Huge Leap In AI Image Quality
Even if image resolution requires further improvement, the sheer quality of the images generated by Firefly AI has improved almost beyond recognition since it was first unveiled a little over a year ago.
When Firefly was first released, faces were often a disfigured mush, hands looked deformed and text was impossible to render. All three have been vastly improved, along with the overall quality of generated images.
Greenfield explained that some improvements simply required better training data. Adobe insists that its AI has only been trained on “commercially safe” images, such as photos stored in its own stock library. That can create problems when it comes to generating specific types of images.
For example, Adobe’s stock image library has relatively few images of crowds of people, because photographers are required to get model release forms signed by everyone who appears in such images. “When we do have crowds, it’s people facing away from the camera,” said Greenfield. “So with the first version of Firefly, if you tried to get a general image of crowds, you could get it, but they were always facing away from the camera.”
Greenfield said the company has also put a lot of focus on “prompt adherence”, ensuring that the AI delivers what people are actually asking for. “In the original days of Firefly, if you asked for a hippo riding a boat, you might get a hippo on a boat, or you might get a boat on a hippo.”
“Firefly has got much better at that, but it’s still an area we keep investing in. Does the model understand prepositions? Does it understand associations with color? How deeply can it understand the description of the text?”
The AI improvements to Photoshop and Adobe’s other Creative Cloud apps are being announced at Adobe Max, which starts in London today.