DALL·E is a new AI that turns text descriptions into images. It’s just been launched publicly, but Spark Foundry gained beta access in April, giving us time to test the system and assess its potential.
The execution is simple: you type a description of the image you want, and DALL·E generates it for you. For example, we asked it for “a photo of knitted chocolate chip cookies on a yellow background”. It created the following image:
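For readers who want to see the mechanics, here is roughly what such a request looks like through OpenAI's Python SDK. This is a minimal sketch only: method names vary between SDK versions, and you would need your own API key.

```python
# A minimal sketch of generating an image from a text prompt via
# OpenAI's Images API (openai Python package, pre-v1 interface).
# Assumes your key is set in the OPENAI_API_KEY environment variable.
import openai

response = openai.Image.create(
    prompt="a photo of knitted chocolate chip cookies on a yellow background",
    n=1,                # how many images to generate
    size="1024x1024",   # DALL·E 2 supports 256x256, 512x512 and 1024x1024
)

# The API responds with a URL for each generated image.
print(response["data"][0]["url"])
```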
I won’t go into how it works or the commercialisation of the tool, but rather what opportunities this creates for media and advertising. We’ll look at three aspects of advertising, focusing on how AI will power media creativity and what it could improve, both right now and in the future.
People might be concerned that the immediate impacts could negatively affect jobs in the advertising industry, but we should make one thing clear early on: DALL·E is no substitute for media and creative teams. The images it produces are no match for properly produced work, but it will be brilliant at helping teams visualise and interpret briefs and ideas.
That said, here are three things that we can use it for right now along with some examples to showcase the effectiveness of the tool.
Idea mock-ups
It’s often difficult to convey an idea that hasn’t yet been worked up. Visual references are invaluable here, but producing them takes time and money. DALL·E mock-ups allow media agencies to visualise campaign ideas, such as partnerships or experiential activities.
If a picture paints a thousand words, I can now generate that thousand-word image using about 12. To illustrate, let’s say that my proposed creative would feature a teddy bear that went travelling after being stuck in the house for the last two years. I could quickly type, “a photo of a teddy bear reading a map in Times Square” and within seconds I have this:
Even more excitingly, this works in both directions: it helps brands and agencies get their heads around what is being proposed, and it acts as a better brief for media partners to build from.
Images lead to inspiration
The second immediate impact that DALL·E could have is on inspiration. At present, teams use existing work as inspiration and must sit at their desks trawling through mountains of content to find it. AI could help develop interesting inspiration and create content we’ve never seen before, even from seemingly random collections of words. For example, here’s, “an oil painting portrait of a capybara wearing mediaeval royal robes and an ornate crown on a dark background”:
As more people use DALL·E, the bank of imaginative images grows, making for a fantastic catalogue of wacky inspirational images. Media agencies already sit on a database of insights about which elements of the creative work best. Sometimes something as simple as a change of colour or background is enough to supercharge effectiveness. AI is allowing us to test these elements at speed and with minimal cost.
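To make that concrete, here is a rough sketch of what testing a simple colour change could look like, again via the openai package. The prompt template and colour list are purely illustrative.

```python
# Sketch: generating background-colour variants of one creative for a
# quick test. The prompt template and colour list are illustrative.
import openai

base_prompt = "a photo of knitted chocolate chip cookies on a {colour} background"

for colour in ["yellow", "teal", "coral", "charcoal"]:
    response = openai.Image.create(
        prompt=base_prompt.format(colour=colour),
        n=1,
        size="512x512",
    )
    print(colour, response["data"][0]["url"])
```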
Produce those reactive posts in record time
Finally, DALL·E could allow reactive posts to be produced in seconds. Let’s say I wanted to make an image of a resigning prime minister looking sad. I might type, “a photo of a sad fluffy white dog leaving number 10 Downing Street” and get this image:
A designer would ideally touch it up (and correct the door number to No 10), but I’ve created a bespoke image in seconds that conveys exactly the message I want. Reactive posts are more about speed than quality, so we can get away with images not being perfect, without having to spend a couple of days shooting a dog on a doorstep that might pass for 10 Downing Street.
This is important to media agencies as we’re often in charge of a brand’s social strategies and are the first to spot viral themes. AI allows us to react in real time, rather than depending on external teams to produce something new.
Longer term impact
So, we have at least three immediate effects that DALL·E could have on advertising, but let’s also look at two longer-term possibilities.
There’s almost no limit to how DALL·E could accelerate test-and-learn practices: not just within creative, but within new product development, in-store and online too. Given the speed with which we can now produce images, we could create multiple versions and test them early on for a fraction of the cost. Taking this a step further, there’s even the potential to enhance personalisation by tailoring ads to the user seeing them or to the context of the page.
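As a toy illustration of that last idea, an ad prompt could be assembled from signals about the page it will appear on. Everything in the sketch below (the categories and templates) is hypothetical, invented purely for illustration.

```python
# Toy sketch of contextual personalisation: pick a DALL·E prompt based
# on the category of the page an ad appears on. The categories and
# templates are hypothetical, invented purely for illustration.
PROMPT_TEMPLATES = {
    "travel": "a photo of {product} on a beach at sunset",
    "food": "a photo of {product} on a rustic kitchen table",
    "default": "a studio photo of {product} on a plain background",
}

def build_prompt(product: str, page_category: str) -> str:
    template = PROMPT_TEMPLATES.get(page_category, PROMPT_TEMPLATES["default"])
    return template.format(product=product)

print(build_prompt("a knitted chocolate chip cookie", "travel"))
# -> "a photo of a knitted chocolate chip cookie on a beach at sunset"
```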
And finally, we need to talk about AR and VR
DALL·E is not the only AI that will turn words into images. Meta’s Builder Bot AI looks to do the same, allowing users to build virtual worlds with voice commands alone. This would dramatically lower the barrier to AR and VR executions, opening the door to an explosion of UGC as well as significantly cheaper brand activations.
We’ve covered just a few applications of this new technology and we haven’t even spoken about DALL·E’s ability to edit or create variations of existing images. Hopefully it’s clear that we’re barely scratching the surface of possibilities here. We’re constantly testing new tech like this with our clients to ensure that they’re at the forefront of effectiveness, and I can’t wait to see how we can start using this to help solve brand challenges in the future.
Will McMahon is head of adtech at Spark Foundry