November 9, 2023

Generative AI Risks and Countermeasures

AI has enabled so many people to quickly scale their output, from marketers and sales reps at Fortune 100 companies to entrepreneurs just starting out. Within a matter of seconds, businesses can produce work that used to take hours or days. 

And they can do it all without sacrificing quality (or their morals). Turns out, it’s possible to do more with less. 

Now, while generative AI promises major efficiency gains, as with any technology, we need to use it responsibly. 

Let’s walk through three common risks when using generative AI and the countermeasures we put in place to battle them.

Risk #1: AI hallucinates

As we explore the many ways we can use AI in our daily lives, it’s important to acknowledge that AI models, like all technologies, have their limitations. 

One such limitation is what we call "hallucinations": the AI can make things up. 

Some common hallucinations include:

  • Changing details about current and historical world events (e.g., claiming that the New England Patriots won the Super Bowl in 2023)
  • Fabricating sources, including the links it claims to be pulling information from
  • Generating irrelevant or random information (e.g., when prompted to “describe the city of London,” the output might be “London is a city in England. Dogs need to be walked at least three times per day.”)

Hallucinations typically happen when the AI misinterprets the patterns it’s learned during training. Increasing the temperature setting of the AI model you’re using can also increase the likelihood of hallucinations in your generated text.

Not sure what temperature is? You’re not alone. Let’s walk through it real quick. 

A quick lesson on temperature in AI

Within the world of AI, temperature is a setting that tells a large language model (LLM) how much randomness to allow when it predicts the next word, kind of like how you might use the temperature outside to decide what to wear.

Using a puzzle as our analogy, a high temperature means the AI is more likely to choose an unexpected or creative next piece. So, if you’re just starting your puzzle, it might try to build it from the inside out. 

A low temperature, on the other hand, means the AI will stick to the most likely or predictable next piece, like starting with the border pieces. By adjusting the temperature, we can control how creative or predictable the AI’s language generation is.
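
If you want to see where temperature actually lives, most LLM APIs expose it as a single number on each request. Here’s a minimal sketch using the OpenAI Python SDK (the model name and prompt are just placeholders): values near 0 keep the output predictable, while values around 1 or higher give it more room to get creative.

    # Minimal sketch: the same request at two temperature settings.
    # Assumes the OpenAI Python SDK; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from your environment

    def describe_london(temperature: float) -> str:
        """Ask for the same content at a given temperature."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": "Describe the city of London in two sentences."}],
            temperature=temperature,  # 0 = predictable, ~1+ = more creative
        )
        return response.choices[0].message.content

    print(describe_london(0.2))  # sticks to the border pieces
    print(describe_london(1.2))  # more likely to surprise you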

Countermeasures for AI hallucination risks

Let’s walk through a few ways you can counteract hallucinations when using AI to generate content:

Fact check generated content

This feels obvious, but just as you would when reviewing a freelance writer’s work, or reading the news, you should fact-check the information that’s being presented to you. 

It can also be as simple as clicking (and reading through) any links the AI cites in its output.

Add more context in your original prompt 

AI isn’t a mind reader. If you find that the AI is hallucinating facts like your company name, product appearance or description, or job location, add that information to your original prompt.

Pro tip: You can create referenceable information snippets with Infobase by Copy.ai.
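
If you’re assembling prompts in code rather than typing them by hand, the same idea can be as simple as keeping those facts in one place and prepending them to every prompt. Here’s a rough sketch; the company and product details are made up for illustration.

    # Rough sketch: prepend known facts to every prompt so the AI
    # doesn't have to guess (all details below are hypothetical).
    COMPANY_FACTS = """
    Company: Acme Outdoors (hypothetical)
    Product: TrailLight 2, a 180g packable camp lantern
    Job location: Denver, CO (remote-friendly)
    """

    def build_prompt(task: str) -> str:
        """Combine the reference facts with the task so the model pulls from them."""
        return f"Use only the facts below when specifics are needed.\n{COMPANY_FACTS}\nTask: {task}"

    print(build_prompt("Write a two-sentence product description for our lantern."))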

Use prompt chaining and include reasoning steps

Just as you would break down a big project into smaller chunks, you should do the same when using generative AI platforms. Instead of making one big ask in your original prompt, try breaking it down into steps. 

You can do this through prompt chaining: breaking down your generations into smaller, more manageable tasks for the AI to complete.

On top of prompt chaining, a great way to counteract hallucinations is to add a reasoning step within your chain of prompts. Let’s look at an example:

Say you wanted to generate a cold email to a specific prospect. First, think through the steps you’d take without AI: 

  1. Research the individual and summarize relevant information that you’d want to include in your pitch.
  2. Think through how you’d pitch the product, what Jobs To Be Done or features you’d lead with, and personal details you’d want to reference in the pitch.
  3. Take all of the information you’ve compiled and write the pitch.

Given that process, let’s look at the chain of prompts we’d use to generate that content with AI:

  1. Extract detailed information about this prospect: [link to LinkedIn profile]. Include detailed information about their work history, the types of posts they make, their education, and any personal interests they may have mentioned. 
  2. Given what you know about [your company name] and the prospect, reason through what may be compelling about our product [product details] to them and explain your rationale. 
  3. Using your rationale, generate a brief email that starts with a personalized icebreaker and then pitches our product and why they should care about it.
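
If you’re running that chain through an API, here’s what it might look like. This is a loose sketch, not a prescribed workflow: it assumes the OpenAI Python SDK, a placeholder model name and company, and that you’ve already pasted the prospect’s public profile text into the script (the API won’t browse a LinkedIn URL for you).

    # Loose sketch of prompt chaining: each step's output feeds the next prompt.
    # Assumes the OpenAI Python SDK; model, company, and product are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        """Send a single prompt and return the reply text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    prospect_profile = "...paste the prospect's public profile text here..."

    # Step 1: extract and summarize relevant details about the prospect.
    summary = ask(
        "Summarize this prospect's work history, recent posts, education, "
        f"and personal interests:\n\n{prospect_profile}"
    )

    # Step 2: the reasoning step -- connect the prospect to the product and explain why.
    rationale = ask(
        "Our company, Acme Outreach (hypothetical), sells an AI-powered sales assistant. "
        "Given this prospect summary, reason through what would be most compelling "
        f"to them about the product and explain your rationale:\n\n{summary}"
    )

    # Step 3: use the rationale to write the actual email.
    email = ask(
        "Using the rationale below, write a brief cold email that opens with a "
        f"personalized icebreaker and then pitches the product:\n\n{rationale}"
    )

    print(email)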

Risk #2: AI-generated content can be biased

When we look at the internet from the past 20 years, it’s hard to dispute that bias exists everywhere, across gender, sexual orientation, race, age, etc. And, knowing that LLMs—like GPT—were trained on datasets pulled from the internet, it’s easy to connect the dots. 

When it comes to LLM technology, some biases can include:

  • Gender Bias: The model may associate certain professions, roles, or behaviors more with one gender than another. For instance, the model might incorrectly suggest that 'nurses' are typically female and 'engineers' are usually male.
  • Racial and Ethnic Bias: The AI might stereotype or generate inappropriate content regarding certain racial or ethnic groups. 
  • Age Bias: The model may exhibit bias towards certain age groups, associating particular behaviors or abilities based on age.
  • Ideological Bias: The AI might lean towards certain political, religious, or social ideologies, reflecting the opinions and biases in the data it was trained on.

Understanding these limitations and implementing countermeasures is critical as we continue to adopt AI in our lives. Anyone using AI needs to be aware of the potential for bias in the content they generate. This is especially true for marketers, because it’s our duty to be inclusive across the content that we create. 

But, just because bias exists doesn’t mean we can’t put processes in place to tackle and minimize this risk when using generative AI. 

Countermeasures for bias in AI-generated content

To counteract bias when using AI to generate content, it’s imperative to involve human oversight in the content creation process.

This can include: 

  • Making sure that a human edits and has the final say on 100% of the content that’s published
  • Building internal documents that educate your team on a wide range of DEIB initiatives and best practices
  • Approaching content generation with a DEIB lens, including additional context and resources for the AI to pull from when you write your prompt

By integrating human judgment and perspective, we can mitigate the potential biases introduced by AI algorithms. Humans can evaluate and refine AI-generated output to ensure it aligns with ethical and inclusive standards. By adopting a DEIB lens, marketers can uphold a high level of inclusivity in all marketing materials.

Risk #3: AI can generate generic content

LLMs learn from a vast amount of data available on the internet, which involves understanding patterns, relationships, and sequences of words to generate text. However, sometimes, the model might produce text that closely resembles parts of its training data.

This might happen unintentionally when the model generates a common or popular phrase, sentence, or paragraph it has seen during its training. Think of the phrases we’ve seen over and over in the SaaS space in the past five years, like:

  • Unlock your productivity with [Product name]!
  • Do more with less
  • Level up your [whatever you want to be leveled up]

Another factor is the lack of creativity and originality in AI systems. While AI can analyze and mimic existing content, it may struggle to generate unique ideas or think outside the box. This limitation can lead to the production of generic content lacking creativity, subtlety, and nuance.

Additionally, AI-generated content is often optimized for speed and efficiency, particularly in industries like marketing and content creation. AI can generate content quickly, but that focus on speed can compromise the quality and uniqueness of the output, resulting in generic content.

Countermeasures for generating generic content

While it won’t eliminate generic content entirely, there are things we can do to minimize how generic the content will be. Let’s walk through some no-brainer solutions. 

  • Involve people in the process: Just as we discussed tackling bias, it’s important to include humans in the process through roles like editing. Never publish AI-generated content that hasn’t been vetted by a human. 
  • Use plagiarism detection software: Educational institutions have had these safeguards in place long before generative AI was cool. If you’re concerned about copyright and plagiarism with any of the generated content you’re producing, test out plagiarism detection software to get some peace of mind. 
  • Write more specific prompts: Just as you would want to brief a writer on everything they need to know to write a specific, engaging, and on-brand piece, you should do the same with AI. Include relevant context whenever possible so that the AI can pull from your specific needs, rather than whatever it’s learned from the internet. 

Pro tip: Copy.ai’s Infobase and Brand Voice features will make it easier for you to be more specific with every prompt you write, giving you the ability to input easy-to-reference snippets of information in your prompts and generate content that’s true to your brand's voice.

It's important to note that AI technology is constantly evolving, and improvements are being made to address the issue of generic content generation. Researchers and developers are exploring techniques such as transfer learning, fine-tuning, and reinforcement learning to enhance the creativity and originality of AI-generated content.

Wrapping up

With the right safeguards and collaboration between humans and AI, the risks from generative AI can be mitigated. Used responsibly, generative AI can be a powerful tool that will help you scale fast and work more efficiently. 
