
Simon Says: Could AI write this article?

Feb 20, 2023

Welcome to Simon Says – a new series of articles written by ITG Group CEO & founder, Simon Ward. In this first episode, Simon takes the plunge into the world of Artificial Intelligence, examining its role in the marketing landscape as a content creator. Could we see a world where imagery and text for major campaigns are generated purely by AI? What are the risks, and what does it all mean for marketers?

We’ve just gone through a massive 12 months for generative AI, with major strides and releases in automated image creation and text generation.

But have we reached the point where the thoughts I’m expressing here could have been written by a machine? Could I have saved myself a whole load of time and effort by employing AI to write an article about AI, instead of writing it myself?

These questions aren’t trivial for marketers. Every day, we create a vast amount of content, including text, imagery and video. We want it to be engaging – to foster everything from awareness of our offerings to brand loyalty and advocacy.

That means a lot of creative types spending a lot of time producing content to support our marketing activity, often at considerable expense.

If AI gives you the opportunity to create that content quicker, with much greater variety and, of course, more cheaply, why wouldn’t you take advantage of it?

The only real question is: does it do what it says on the tin?

What is generative AI?

Artificial intelligence (AI) is of course just a cover-all term for the simulation of human intelligence in computers. Basically, algorithms take in and analyse data to make decisions and carry out novel actions.

With generative AI – the ability of computer algorithms to generate content – a text-writing algorithm produces original sentences, paragraphs and articles after being ‘trained’ to analyse vast amounts of existing text.

For imagery, AI programs create original images by analysing the huge number of existing photos and illustrations in their databases.

Generating images

“You don’t need to be able to draw to type a simple text prompt”

AI image generators made it big in 2022, with several platforms released that can create novel images from a simple text input (prompt). Type what you want using everyday language, and the AI does the rest.

Image generators reference their own databases, which contain huge numbers of tagged images, including recognisable objects and artistic styles. This reference material enables them to read your text prompts and create novel images from scratch.

Particularly useful for marketers, they can generate lots of high-quality image variations at the same time. They also produce images much faster than human creatives, and you don’t need to be able to draw to type a simple text prompt.

You can ask the AI to produce an image in a particular style – pop art or cubist, photoreal or anime. Some systems allow you to draw an initial outline sketch, from which it creates a complete image.

On OpenAI’s site, you’ll see how its Dall-E image generator came up with 30 different designs after being fed the simple prompt: “an armchair in the shape of an avocado”. Several of the designs could have sat comfortably in a 1950s Arne Jacobsen collection.


“Political commentators talk about threats to democracy from deepfakes, and they may be right. But this doesn’t necessarily mean it’s a massive issue for marketing…”

Generative AI text-to-image programs are not without controversy. Some of the concerns are easy to answer, others trickier.

For example, a lot of navel-gazers ask questions such as, “can computer-generated images be considered art?”

Let’s be brutal. Who in the world of marketing really cares? We’re trying to maximise leads, sales and loyalty, not win the Turner Prize.

There are certainly potential issues around fake content – faking images, video and audio is a lot easier with AI. Political commentators talk about threats to democracy from deepfakes, and they may be right. But this doesn’t necessarily mean it’s a massive issue for marketing.

Even if we marketers weren’t the noble citizens we clearly are, who among us would want to risk our brand credibility (and face costly lawsuits) by stealing someone’s IP or engaging in easily traceable fakery?

We’re not running St Petersburg troll farms – we have an incentive to stick to whatever rules are agreed.

More complicated is the area of unwitting copyright and IP infringement. There are lots of image-from-text generators on the market, and we can’t be sure what reference images are in their databases.

Yes, the AI is generating a novel image from references – it isn’t warping or morphing existing images. But there are documented examples of AIs generating supposedly original images that contain the Getty Images watermark. Clearly, there are risks in using images that draw on copyrighted references, but the legal waters remain murky.

How to use image AI

“You may find that the best approach is to employ creative designers, then give them the AI to greatly increase their productivity”

Current AI image generators also have some technical drawbacks. For example, if you want a simple image, the AI will come up with lots of variations until you find one you like. But for more complex illustrations, there’s no guarantee it will produce the exact details you’re looking for without some human intervention.

They are only as good as the images in their database. Ask a particular AI for a style not well represented in its image database, and you may be disappointed.

There is also some concern over the quality of photorealistic faces. Some look truly excellent, but quality does vary.

When it comes to judging the quality of AI-generated images, one of the key things to remember is that good designers have an eye for design and proportion. Probably a better eye than a junior member of your marketing team. That’s why they’re designers.

While generative AI may have the potential to replace creatives, I suspect the software will be better driven by a designer than a marketer or administrator. It is likely the final images produced will be the result of collaboration between a database-driven piece of software and a creative professional who knows what they want and understands what looks good.

As a tool for coming up with creative ideas that the designer can then work on, the technology offers the potential to rapidly speed up the creation of campaigns. In other words, you may find that the best approach is to employ creative designers, then give them the AI to greatly increase their productivity.

Text-based AI

“In marketing terms, its ability to write to a specific length and style offers the potential to generate masses of content”

Currently, the most talked-about generative AI for text is OpenAI’s GPT-3 language model (quietly upgraded in December 2022 to GPT-3.5), which underpins the ChatGPT dialogue-based chat interface.

Far more data and computing power went into training GPT-3 than any previous model. This included analysing 45TB of text from multiple sources (at a cost of $12 million for a training run).

At its most basic level, this training makes GPT-3 exceptionally good at predicting the next word (or punctuation mark) in any text, including the sentences it generates itself. In practical terms, it means you only need to give it a bit of contextual text and simple instructions (a prompt), and it can generate all kinds of content, from haikus to technical papers, or ad copy to video scripts.
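To make “predicting the next word” concrete: here is a deliberately tiny sketch of the idea using a bigram model – counting which word most often follows each word in a training text. This is a toy illustration only; GPT-3 itself is a transformer with billions of parameters, not a bigram counter.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Scale that counting up by many orders of magnitude, add far richer context than a single preceding word, and you get a sense of how a language model can produce fluent text without understanding any of it.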

It can also summarise and translate text (although it translates better into English, its primary language, than from English).

GPT-3 is undoubtedly impressive. It can generate original copy that could have been written by a human being, which is no small feat. It can also have a convincing conversation with you.

In marketing terms, its ability to write to a specific length and style offers the potential to generate masses of content, particularly short-form social media posts, product descriptions and captions, including multiple variations.
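As a sketch of how that variation-generation might slot into a workflow, a marketer could fan a single brief out into multiple prompts – one per tone of voice – each of which would then be sent to a text-generation model. The helper below is hypothetical, not any vendor’s actual API:

```python
def copy_prompts(product, tones, max_words=30):
    """Build one generation prompt per tone; each would be sent to a text model."""
    template = ("Write a product description for {product} "
                "in a {tone} tone, in no more than {n} words.")
    return [template.format(product=product, tone=t, n=max_words) for t in tones]

for prompt in copy_prompts("a recycled-canvas tote bag",
                           ["playful", "premium", "practical"]):
    print(prompt)
```

The point is less the code than the pattern: the human decides the brief, the tones and the constraints; the AI supplies volume.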

It can write cleanly, usually with a reasonable grasp of spelling and grammar, and can also be used to generate long-form content – including white papers and blog posts. Early tests of GPT-3.5 seem to suggest an even better style and fewer grammatical quirks than GPT-3.

But before you disband your writing teams and turn over everything to AI, a few words of caution.

Issues with AI

“Currently, even the best AI doesn’t possess the critical faculties required for you to risk letting it generate your content unsupervised”

First, Google currently treats GPT-3 text as automatically generated content, which is against its webmaster guidelines. This means that if it detects such content, Google’s webspam team could treat it as spam and take actions such as lowering the page rankings.

Although it can create original content, GPT-3 tends to use specific phrases and structures that Google can detect. So, if you are thinking that AI has the potential to boost your SEO, it has. But it also has the potential to mess it up, depending on whether the AI’s algorithm can outsmart Google’s.

Another issue is that generative AI has absolutely no understanding of what it is writing. It is merely using its vast database of written content to predict or calculate what its next word should be.

Give it a prompt to write an article entitled “The use of AI in creating marketing content”, and the AI can access tons of data on this subject and present you with a reasoned article pretty quickly.

But you have to ask, where is it getting its information? Typically, the source material for a blog of this kind would be from people who have already written about generative AI, many of whom are commercially involved or tech-loving journalists and bloggers.

In short, no matter how much data the AI can analyse compared to a human writer, if that data is skewed, the AI-written article is likely to share the same bias. In addition, GPT-3 is still capable of creating extremely convincing articles that have no basis in fact – something referred to as “hallucinating”.

While this is something developers will continue to address in future iterations, you would hope a trained human writer, well fed and rested, would have the critical faculties to detect obvious bias and hype.

More disturbing is when the source data contains deliberately fake or discriminatory information. You may recall that when Microsoft launched its Tay AI chatbot to interact with people on Twitter, it only took a few hours for Twitter users to corrupt the bot with racist and misogynistic messages.

The AI happily incorporated these into its own conversation, eventually tweeting that feminists should all “burn in hell” and that “Hitler was right”.

The more recent ChatGPT has a lot more safeguards built in to avoid this, but malicious users are already finding ways to get round it. And ChatGPT is still capable of hallucinating extremely convincing fake articles.

Currently, even the best AI doesn’t possess the critical faculties required for you to risk letting it generate your content unsupervised. Is this a fatal flaw in AI-generated content? Not at all.

Human–machine collaboration

“The AI has plenty of knowledge, but no passion, no personal experiences or anecdotes to draw on”

AI can search a vast database of information on a topic and quickly create grammatically accurate text of any length and make a relatively coherent argument. This makes it an extremely useful tool.

However, you need to compensate not only for bias, but for style. The AI attempts to mimic the style of the initial text you give it. But that text has to be consistent and well written. If it contains a mix of formal and casual tones, the AI article will have an inconsistent voice.

It can do some nice tricks, such as write something in the style of a 1940s film gangster or serve you copy in iambic pentameter, but many of the test blogs that are out there feel a bit flat and lacking in nuance – more like wiki entries than blogs.

After all, the AI has plenty of knowledge, but no passion, no personal experiences or anecdotes to draw on. It’s worth doing a search yourself and making your own judgment on the merits of what’s currently out there.

Just as image generation may be best driven by designers, text generation may be best driven by writers. As a research tool, AI has access to far more data than a typical writer could research.

Ask the AI to generate an article (or several articles) on a topic, and it will come up with information that even the best researcher might have missed. It might even stumble on an interesting angle you hadn’t thought of.

However, facts and sources need checking, gibberish removing, arguments editing and honing – or creating from scratch. Most of all, if you don’t get a human writer to stamp their own style on it, what happens when all your rivals invest in the same software and are all happily generating variations on the same topic created by the same AI?

Not only will you fall foul of Google’s webspam team, you’ll also risk losing a lot of potential readers through lack of originality or personal connection. Most commentators, podcasters and columnists have people who follow them personally, because they like what they’ve read and heard before.

That’s not to say an AI can’t generate interesting content (or that they won’t find followers who like their algorithm’s quirky take on the world), but when rival marketers employ the same AI, might our voice lose its differentiation and individuality without a little bit of human input?

Got any questions about this article? You can contact us at – and look out for the next in the Simon Says series, when Simon will be looking at the risk vs reward of counter-intuitive thinking, and why data can’t always make sense of human behaviour.
