Generative AI is disrupting numerous sectors throughout academia and business. One of the industries that seems poised to take particularly energetic advantage of the explosive interest in generative AI is marketing. Birgitte Rasine looks at the use of generative AI by marketing professionals.
A tsunami has hit the Internet. It’s called generative AI, the technology built atop large language models, or LLMs if you prefer the acronym. It generates text, code, song lyrics, photorealistic images, and digital art. Its rockstar, ChatGPT, gained over 100 million users within two months of its launch.
Riding in on this tsunami is new terminology, such as “hallucination,” “prompt engineering,” and “latent space.” Generative AI is disrupting numerous sectors throughout academia and business. One of the industries that seems poised to take particularly energetic advantage of the explosive interest in generative AI is marketing. The marketing sector is known for producing mountains of content: ad copy, taglines, hashtags, blog posts, social media posts, web pages, ebooks, case studies, battle cards, sales sheets, not to mention presentations and infographics and truckloads more. Given that generative AI works well with text and images, it should come as no surprise that marketing professionals have taken to the likes of ChatGPT, Bard, Jasper, and other generative text AI tools, like buffalo to water. As we are fast discovering, generative AI is to text what earthquakes are to ocean waves.
All that content used to require human effort to produce: human creativity, strategy, and insight; human writers, editors, and designers. It takes hours to produce a case study: the interviews, the meetings, the drafts, the edit rounds, the design and production sessions. Today, ChatGPT can spit out paragraphs of text in seconds, and Midjourney can create images and designs just as quickly. You barely have time to blink. The text and the images are good enough, sometimes even impressive, that many of us find ourselves a little overwhelmed by the possibilities.
But it’s a little disingenuous to compare text generated in response to a prompt, or series of prompts, from one person with text that is the final product of a synthesis of conversations, thoughts, and exchanges among a group of human beings. As the saying goes, there’s no such thing as a free lunch, and marketing professionals would be well advised to understand the risks as well as the benefits of generative AI.
What the marketers say
In the interest of research, we’re running a survey to take the AI pulse of marketing professionals. (If your job involves marketing, you can take the survey here.) Early results show that ChatGPT is by far the most-utilised generative AI tool, and respondents are using it primarily for short-form marketing copy such as headlines and ads. Three-quarters of this group are small business owners or individual practitioners, which makes sense, as they hold decision-making power over their own enterprises. The 73% of respondents who are actively using generative AI overwhelmingly cite time, speed, cost savings, and volume as the primary benefits:
“[AI] speeds up your workflow by hours.”
Eugene Cheng, partner at a Singapore-based strategic consultancy
“Leveraging artificial intelligence has been a game changer for our content. It’s shifted our approach from spending time writing simple content/outlines to investing time in thought leadership and unique opinions.”
Ashley McAlpin, head of marketing in Knoxville, Tennessee
“It saves me hours of early drafts, and the need of a copywriter for non-creative content.”
David Gómez-Rosado, owner of a design agency in the San Francisco Bay Area
These professionals also appreciate the ‘new ideas’ that generative AI provides, and seem to be fairly satisfied with the overall results, although they do have a number of concerns.
- Accuracy & reliability: Respondents report that they have to fact-check the results, because the generated text can be incorrect, misleading, or quite simply fabricated, or ‘hallucinated’ (ChatGPT has been known to invent titles of non-existent articles and attribute them to real journalists, or to make up references outright). This points to a misguided use of generative AI: it is not a reliable search engine, and was not originally meant to be used as a research tool. For one thing, ChatGPT’s training data only extends through September 2021, so anything from 2022 or 2023 is, presumably, not included in the data sets.
- Print-readiness: Respondents report that they also have to review and edit the generated text, and, importantly, that they expected far less work of this kind. Again, this reflects a popular misconception about generative AI: that a bot can write text good enough to use straight out of the (black) box.
- Copyright & plagiarism: One of the most controversial issues with the data used to train generative AI is that it includes copyrighted text and images whose creators were neither asked for permission nor compensated. The respondents rightly worry that the output they’re using might constitute plagiarism. Ensuring the generated text is not plagiarised places yet another burden on the content creator, who has to run the text through search engines or plagiarism checkers.
“AI is transgressing on copyrights — not something I want to support.”
Anonymous survey respondent
These concerns reflect the general sentiment of generative AI users across sectors and use cases. If we venture a little further down the AI rabbit hole, we uncover a few other potential downsides of using generative AI in business, which merit at least a brief mention:
- Critical/strategic thinking and idea generation. Some respondents are concerned about outsourcing too much of their own personal agency and their ability to come up with fresh ideas and think critically. This is certainly a valid concern—is AI a tool or a digital crutch? Like any muscle that isn’t used often, will our own creativity atrophy over time? Perhaps the best approach is to be strategic about how we fold AI into our workflows. Offload the repetitive, the administrative, the time-consuming work so you can focus on strategy, planning, and thought leadership. For example, many designers and marketers are using AI to create rough drafts of designs, layouts, and templates.
“What I find to be alarming and worth worrying about is the frequency at which we are now able to produce content regardless of its worth or merit,” says Youssef Hani, a tech marketer in Cairo, Egypt. “I feel executives and managers … have a really important role to play when it comes to this wave of AI; they shouldn’t prioritise instant or close gains at the cost of quality work and thought leadership in general.”
- (Non) Compensation for creators. Don Litzenberg, a fractional CRO (Chief Revenue Officer) based in San Diego, shares his “concerns about generative AI because of the artists not being compensated for the training models.” It’s one thing if you’re a sole proprietor and do not have the budget for design—in this case generative AI might be the thing that connects those dots for you. But if you’re part of a larger organisation, don’t fire your design team just yet. They still have more experience and a better eye for design than any AI.
- Bias and suppression of diverse voices. The topic of bias in AI has been covered extensively. In the case of generative AI specifically, we also need to consider the language and content. If the LLMs have been trained primarily, if not exclusively, on Western European and American content in the English language, then the content now being generated by ChatGPT, Bard, and other AI tools is a remoulded, reformed iteration of that training data. It therefore stands to reason that non-English or non-Western content will fade away into the digital background. This is a key concern for non-English-speaking cultures and societies around the world. The digital divide now has the potential to grow exponentially.
- Environmental factors. We’ve had this conversation before, with crypto, which was, and still is, infamous for gobbling up more energy than the entire country of Ireland. Not surprisingly, since AI runs on the same kind of power-hungry hardware, generative AI tools are also very energy-hungry. A recent article in Bloomberg cites 2021 research estimating that training GPT-3 consumed 1.287 gigawatt-hours, the equivalent of the annual electricity usage of 120 US homes, and produced 502 tons of carbon emissions. That’s not good news for the climate.
Asked whether marketing professionals should be held accountable for the content and messaging they produce with the help of generative AI, Gary Marcus, a renowned AI expert, author of Rebooting AI, and an outspoken voice of reason in the AI debates, gives an emphatic yes: “They should always be held accountable, and they should resist the tendency to use LLMs to produce garbage for which they are not accountable.”
Katrina Ingram, CEO of Ethically Aligned AI, agrees. “I think there should be some professional obligation to let clients know the degree to which generative AI tools are being used to create content outcomes,” she says, adding there are mechanisms already in place that could be leveraged for content generated by AI. “Professional marketers and advertisers have standards. Broadcast platforms are governed by legal and ethical obligations. Marketing copy itself is subject to legal standards, and those vary by product category.”
Lawmakers are often left scrambling to catch up with the breakneck speed of technological evolution. This time, however, the potential risks and dangers of generative AI appear significant enough that lawmakers have jumped on the case early. In the US, Senate Majority Leader Chuck Schumer (D-N.Y.) is spearheading the congressional effort to introduce legislation regulating AI, as reported by Axios. In Europe, lawmakers are at work on similar legislation, provisionally called the European Union AI Act. China has already drafted a set of AI-oriented regulations.
In the meantime, the AI tools will continue to evolve, and people will continue to use—and misuse—them. We are likely to see this systemic push-pull effect for the foreseeable future. One thing is certainly clear: artificial intelligence has set sail, and it is up to us humans to make sure there’s a human captain aboard this ship.