The Risks of Over-Reliance on AI in B2B Content Creation
AI is now a great tool in B2B marketing, but the human touch is as crucial as ever
LLMs in B2B content in 2024: risks and benefits
In this post, we take a serious look at the benefits and risks of using LLMs in content creation and spotlight Open Strategy Partners’ (OSP) approach to AI in marketing, where AI complements rather than replaces human ingenuity.
Because AI is not only a tool but increasingly also a competitor to human-made content, we emphasize that content with humans at the helm remains, for now, markedly superior to synthetically produced content.
2023 was a year of magical advances in AI technology, and a year of magical thinking about what AI can do. Just the first few months of 2023 saw the release of several new large language models (LLMs), each able to produce texts and images on nearly any theme at the stroke of a key. Many thought it might spell the end of, or at least pose a real threat to, jobs in content. A study by the forecasting firm Oxford Economics predicted the loss of 20 million jobs to automation by 2030. As if the economic worries were not enough, some even claimed ChatGPT, by far the most popular but certainly not the only powerful LLM, was soul-destroying. For renowned figures like Nick Cave and Stephen Fry, writing with it meant “participating in the erosion of the world’s soul and the spirit of humanity itself.” But contrary to fears of AI rendering human creativity obsolete, the reality has unfolded differently. For B2B marketers, LLMs and generative AI have emerged as tools that augment but cannot replace human skill, care, and creativity.
Equipped with fine-tuned inputs and careful oversight, tools like ChatGPT and DALL-E can prove useful for specific tasks, speeding up or liberating content creators from some mundane aspects of the work and allowing them to refocus their energies on the creative parts. However, these tools’ limitations—for example in crafting long-form, insightful texts—underscore the role of human creativity, diversity, and expertise in producing high-quality marketing content.
Across industries, it was widely expected even in early 2023 that AI’s impact would continue to grow. A McKinsey survey from April 2023 revealed that 75% of professionals anticipated that AI would significantly disrupt their industry within three years. In marketing and sales specifically, 14% of respondents were already using AI technology for tasks like drafting initial content, personalized marketing, and document summaries, outpacing other industries in AI adoption. Although the McKinsey report emphasizes that 2023 did not see any major shifts in adoption, as “the percent of organizations adopting AI tools has held steady since 2022,” the practical applications in certain tasks are undeniable.
Understanding AI’s role in content marketing
2023 was not only a year of advancements in AI technology, but also a period of recalibration in our understanding of AI’s role in content creation and B2B marketing. LLMs like OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing, and Anthropic’s Claude 2 have their own specific strengths and weaknesses. On the whole, they excel in short writing genres like outlines and summaries, as well as in simple research tasks, and do best even at these when given clear input and prompts. Generally, the narrower the task and the more specific and original the information available to the model, the better an LLM will do at producing something that saves you time and isn’t worthless.
It’s best not to think of generative AI as something that can give you a finished product, whether you are a student, content writer, or artist. If you do try that route, the result will be lengthy, vapid prose that may be flawlessly grammatical, but will also be exceptionally forgettable and full of obvious, embarrassing errors. Trained on vast quantities of freely available online content, LLMs are able to produce convincingly correct, seemingly logically connected sentences for the same reason they cannot produce interesting original content: they are essentially sophisticated probability machines. They are “stuck in the box” of whatever subject you give them and are not designed to step outside of the statistical average; in other words, innovation is unavailable to them.
Here’s a short list of content tasks generative AI can perform, with some comments on the advantages and drawbacks, from our experience at OSP, of the results you typically see.
| Generative AI Tasks | Advantages | Drawbacks |
| --- | --- | --- |
| Create images | Instantaneous, often interesting | Mostly impractical and low-quality unless guided closely with detailed input |
| SEO/SERP analysis, competitive research | Faster and better than manual research | Few, apart from keyword research |
| Draft content briefs and outlines | Great for brainstorming and structure | Formulaic, repetitive |
| Summarize a document | Excellent summaries | Zero insight, little nuance |
| Summarize a website | Excellent summaries | Zero insight |
| Research | Good basic overview | Cannot evaluate sources; “hallucinations” (makes things up) |
| Content drafting | Gets something on the page | Mostly low-quality and impractical, with occasionally useful formulations |
| Content editing | Streamlines writing; helps with continuity and staying on point | Formulaic, monotonous, repetitive |
What generative AI can do brilliantly is help with tasks that are already routine and sometimes tedious. Research is crucial for good content today, and an LLM with browsing capabilities can take care of this in moments. For SEO (Search Engine Optimization) and SERP (Search Engine Results Page) research, the AI can do most of the legwork for you, and you can then review the most important articles it provides. OSP has long advocated the usefulness of focusing your writing with a content brief, and, with the right input, an LLM can produce one for you in very little time. That input can then be refined and reused as a template for creating future briefs.
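To make “the right input” concrete, here is a minimal Python sketch of how a reusable content-brief prompt might be assembled before being sent to an LLM. The field names and instructions here are hypothetical illustrations, not OSP’s actual templates:

```python
def content_brief_prompt(topic, audience, goal, keywords, sources):
    """Assemble a reusable content-brief prompt for an LLM.

    Hypothetical template for illustration only -- not OSP's actual inputs.
    """
    lines = [
        "You are a B2B content strategist. Draft a content brief.",
        f"Topic: {topic}",
        f"Audience: {audience}",
        f"Goal: {goal}",
        "Target keywords: " + ", ".join(keywords),
        "Ground every claim in the sources below; do not invent facts:",
    ]
    lines += [f"- {source}" for source in sources]
    lines.append(
        "Output: working title, thesis statement, outline with H2/H3 headings, "
        "suggested expert interviews, and three SERP-informed angles."
    )
    return "\n".join(lines)

# Example usage: the same template can be reused for every future brief.
prompt = content_brief_prompt(
    topic="Risks of over-reliance on AI in B2B content creation",
    audience="B2B marketing leads",
    goal="Show why human-led content still outperforms synthetic content",
    keywords=["AI content marketing", "LLM risks"],
    sources=["Interview notes", "OSP website", "SERP research"],
)
print(prompt)
```

Keeping the template in a versioned file is what makes the input “perfectible”: each round of refinement compounds, which is where much of the time savings comes from.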
Still, generative AI requires careful management to succeed. Ethan Mollick, professor at the Wharton School of the University of Pennsylvania and author of the largely AI-focused blog One Useful Thing, likens working with these AI tools to onboarding interns. It’s about discovering their strengths and refining your approach to harness their capabilities effectively. Omar El Sabh, OSP Content Marketing Strategist, echoes this sentiment: “Don’t think of it as a thing coming to replace you. If you give it a lot, AI has huge potential. LLMs are not here to make you lazy—they’re here to make you faster, leaner.” El Sabh shared how, after a year of experimenting with inputs, he has been able to reduce time spent on some tasks from six hours down to an hour and a half, freeing him to devote more of his energy to less routine, more creative work. “The human part is really the innovation part from 0 to 1,” says Dr. Li Jiang, director of Stanford’s AIRE program (AI, Robotics, and Education). “That’s where AI cannot do a good job.” Whatever task machines can now be trained to do, they should do, argues Jiang, thus allowing humans to do the irreplaceable work of innovation.
Integrating AI into content marketing isn’t about handing over the reins to technology. Instead, it’s about figuring out the degree of integration appropriate to the task. Mollick proposes the models of the centaur and the cyborg. In the centaur model, more appropriate for data scientists like himself, there is a clear separation between human tasks and AI tasks, like the clear distinction between human torso and horse body in the creature of Greek mythology. The cyborg model, on the other hand, which Mollick recommends particularly for writing with AI, is a more intertwined approach, where human expertise and creativity are used in tandem with AI efficiency. The key lies in leveraging AI to enhance efficiency while maintaining a firm grip on the creative process and the strategic goals of communication.
The risks of over-reliance on AI Content: garbage in, garbage out
Even though AI, and generative AI and LLMs specifically, offer many new benefits in content marketing, there are tangible risks in over-reliance. A recent study by scientists from Rice University and Stanford describes what they call Model Autophagy Disorder (MAD), in which the quality of AI output dramatically decreases as models are fed more and more source material that was itself produced by AI. The study illustrates the broad negative effect of LLM over-use: the internet could become pervaded with low-quality, synthetically generated content, which in turn would undermine the continued usefulness of generative AI itself. Understanding the more granular limitations of LLMs in performing the tasks we’ve outlined above is crucial for productive usage, and will also help to ensure that MAD does not overwhelm AI tools.
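The feedback loop behind MAD can be illustrated with a toy numerical sketch (our own simplified illustration, not the Rice and Stanford experimental setup): repeatedly fit a simple Gaussian “model” to its own synthetic output and resample from the fit. Generation after generation, the distribution narrows until almost all diversity is gone.

```python
import random
import statistics

def synthetic_generations(n_samples=50, generations=1000, seed=0):
    """Refit a Gaussian to its own samples and track how its spread collapses."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
    spread = [sigma]
    for _ in range(generations):
        # Train the next "model" only on the previous model's synthetic output.
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)
        spread.append(sigma)
    return spread

spread = synthetic_generations()
print(f"spread of generation 0:    {spread[0]:.4f}")
print(f"spread of generation 1000: {spread[-1]:.4f}")
```

In this sketch the collapse is a statistical artifact of sampling error compounding across generations; real LLM autophagy is more complex, but the qualitative effect, shrinking diversity in the output, is the same.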
Quality, originality, and ethical concerns
You don’t have to go far in experimenting with generative AI before you discover its limitations. The first is quality: a formulaic approach to any topic, bland ideas, an oddly unremarkable but instantly recognizable syntax, and phrases it particularly loves to repeat no matter the subject, such as “rapidly evolving landscape.” There is no sidestepping the quality issue. Like machine translation, which can save a translator oodles of typing but requires line-by-line correction, LLM writing needs a supervisor. Prompted without clear input, generative AI will produce somewhat clumsy results, as in the image below.
The derivative nature of generative AI also raises ethical concerns about originality and intellectual property. AI systems, by their nature, remix existing content, potentially infringing on the originality of human creators.
Data privacy, bias, and transparency
Think before you share data with an AI. AI systems often rely on vast datasets, and it’s crucial to handle your data responsibly, ensuring compliance with privacy laws like Europe’s GDPR (General Data Protection Regulation) and California’s CCPA (California Consumer Privacy Act). This adherence not only safeguards consumer information but also builds trust. Data privacy is not a major issue in many content creation tasks, like research or summarizing documents, but be wary before feeding private or customer data into an LLM.
AI systems, including those used in content marketing, learn from existing data, which can sometimes perpetuate existing biases. A recruiting AI that Amazon trained to score software engineering résumés systematically downgraded female applicants: because the historical hiring data was dominated by men, the AI had “learned” that female applicants were ineligible. While such datasets are not typically used in content marketing, it remains true that AI can only give back to you what you have already given it, including your own blind spots. As Kate Crawford writes in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, “There is no neutral ground for language, and all text collections are also accounts of time, place, culture, and politics.”
Lastly, transparency in AI-driven content creation involves clearly communicating the use of AI tools to stakeholders and audiences. It's about being open about the extent to which AI is used in content generation and how human oversight is integrated into the process. This transparency and honesty aligns with OSP’s commitment to authentic communication and, we believe, is a crucial part of building and maintaining the trust of clients and audiences.
Best practices for integrating AI responsibly
Guidelines for ethical AI use in content creation
Using AI ethically in content creation means formulating and adhering to principles that prioritize transparency and accountability. Above all, content creators should approach AI according to the maxim “trust, but verify,” rigorously checking and editing any material generated by an LLM. Content creators should then disclose the use of AI where applicable, maintaining a clear boundary between human-generated content and AI-assisted content. These guidelines ensure that AI is used as a tool for enhancement rather than as a means of cutting corners on ethical and quality standards.
This article, for example, was an experiment in AI-assisted content. The content brief for this article was drafted by ChatGPT-4 in partnership with the OSP authoring team. We combined information from interviews we conducted, the OSP website, and SERP research carried out with the SEO Core AI and WebPilot ChatGPT extensions. We applied a set of fine-tuned inputs and prompts we’ve developed to ensure consistent and useful AI responses. The resulting outline was then modified extensively in the process of writing. Even with a refined set of inputs, the LLM suggested redundant sections that overlapped incoherently. For several sections, the writer asked ChatGPT to come up with text and reviewed the results. Some sentences were edited and woven into the article, while many others contained errors, inaccuracies, and meaningless fluff. This was after the chatbot had reviewed the company website and knew the article’s thesis and much of the source material. ChatGPT also carried out research to find relevant statistics for this article. We checked every statistic against its original source, and many of the chatbot’s suggestions turned out to be low-value surveys or statistics. Even when an original source found by ChatGPT was reliable and offered useful statistics, the writer discovered that the chatbot would often invent numbers of its own. We verified all information from ChatGPT and corrected it in the (many) cases where it was necessary. If this workflow fits the cyborg model somewhat, it is still a long way from a seamless integration of human and machine.
AI’s impact on SEO and Google
At the time of writing in early 2024, we still don’t fully know the impact of generative AI on SEO and Google search rankings. The difficulty many people already have navigating Google results is partly due to those results being ranked by an AI-powered algorithm. It seems only natural, then, that LLMs would be perfectly suited to help game Google’s algorithm. And yet, Google’s algorithm continues to emphasize qualities in search results that favor human-generated content, what it calls E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness. It rewards original content that draws on multiple sources and is referred and linked to by other pages, all to help ensure that top-ranked articles provide real value to users and not just SEO word salad.
AI as a tool, not a replacement
In our experience at OSP, which aligns very closely with other industry experts and scientists like Mollick, AI works best at short-form content and tasks that demand a minimum of creative input and insight. To create good long-form content, and also to ensure that generative AI does not go completely MAD, it is crucial for experienced writers to produce original work, bringing in interviews and quotes from diverse human perspectives with various kinds of expertise. This kind of content will continue to be something only humans can produce for the foreseeable future, even with significant improvements in generative AI capabilities. Founding partner Jeffrey A. “jam” McGuire sums up OSP’s strategic recommendations in the age of competition with AI content:
- Have something to say. Publish longer, more opinionated things.
- Differentiate through quality and depth.
- AI gets lost in longer texts.
- AI is currently unable to produce real insight.
- Interview and quote subject-matter experts.
- Put new information into the world. AI still cannot.
- New information adds value that is legible to both human readers and Google’s algorithm.
OSP's approach to AI in content marketing
At OSP, the core of our content strategy work is encapsulating our clients’ technical truth in impactful communication. The OSP Value Map is our methodology for integrating technical specifics and business value into every level of messaging. The strength of the OSP Value Map lies in how it brings together a multiplicity of perspectives and insights within a company to define the value of products and services in technically accurate language. Capturing these three aspects (diverse points of view, insight grounded in working experience, and technical accuracy) remains beyond AI’s current abilities. This will continue to distinguish content produced according to our values and strategies from LLM content.
At OSP, we use algorithm-powered spelling and grammar tools, but we do not yet use AI in our other processes. Will that change in the future? OSP founding partner Jeffrey A. “jam” McGuire thinks there’s potential: “We are experimenting with adding AI to our processes where it makes some sense and maybe to help us work a little faster or a little easier. Working on the article you are reading here, using the AI tools was part of the research. While it didn’t make the process ‘easier’ per se, it shows us how it might help us work smarter in the future.” It is possible that AI can be strategically deployed to enhance efficiency, especially in tasks suited to automation, without compromising the unique perspectives that only human intelligence can bring. Time will tell.
AI can produce SEO-optimized text better than any human can, but the result, for now, is largely useless to any human reader. It is better, in our view, to partner with generative AI to improve efficiency, while relying on human expertise, experience, creativity, and attention to detail to produce consistent, truly valuable content.
Explore how OSP’s full array of creativity, tools and strategic vision can revolutionize your content marketing — contact us today.