5 Generative AI Pitfalls and How to Avoid Them

TL;DR: With respect to generative AI: 1) you can’t ignore it, 2) you can’t fully trust it, 3) you need to maintain “cognitive vigilance,” 4) your own words are still more powerful, and 5) you need to lean in and educate yourself right now.

Isn’t it interesting how, as we walk through life, the landscape underneath our feet keeps changing? In our era, one of the main causes of such change is technology. The personal computing revolution of the 1980s was quickly followed by the spread of the internet in the 1990s, and then mobile phones and social media sprang up after the turn of the millennium. And now we have AI changing everything all over again.

These changes can be exciting, as new technologies bring with them new possibilities and new opportunities. But they can also be terrifying, because a shifting environment means we have to continually learn, adjust, and let go of how things used to be. 

Another main source of anxiety caused by technological change is the awareness that we can sometimes get it wrong. New technologies have many benefits, but they also have many negative, unintended consequences. The shifting landscape of technology is full of pitfalls that can be hard to spot and avoid. These pitfalls can be costly. Heck, companies can even get AI insurance that “pays for lawsuits associated with your AI product, so your team can move forward with confidence.” 

I have focused on technology pitfalls in my writing before. In my 2019 book, Avoiding Data Pitfalls, I warned my readers about the ways that we can get data wrong, from spurious correlations to poorly designed charts. In this article, I’d like to describe five pitfalls related to the adoption and use of generative AI, along with my advice for how business leaders – and everyday users alike – can steer clear of them.

1. Avoiding or Forbidding Generative AI Entirely

At one end of the technology adoption lifecycle, you’ll find laggard organizations that are telling their employees that they cannot use generative AI for work purposes at all. The understandable concern is that employees might enter sensitive information like company secrets or customer records into a publicly available AI chatbot, resulting in costly breaches of data privacy or security.

This isn’t a hypothetical scenario; just ask Samsung. In April 2023, an engineer copied and pasted proprietary source code directly into ChatGPT, resulting in a temporary company-wide ban of the platform. Other companies followed suit.

Here’s the problem, though: if you ban it, your employees will use it anyway. Sure, you can have your IT team block certain platforms on company devices. But employees can always access them on their own devices, like personal laptops or mobile phones. And research backs up that this is exactly what has been happening. A 2023 Salesforce study found that over half of global users of generative AI were using it “without the formal approval of their employers.”

The good news is that popular AI chatbots like OpenAI’s ChatGPT and Anthropic’s Claude have evolved since the Samsung snafu so that users can now opt out of having their chat session data used for model training purposes. They also now offer enterprise-grade versions with enhanced security, data privacy, and administrative control features. But if using a cloud service is still too inherently risky for your business, it’s possible to integrate the foundation models themselves (OpenAI’s GPT-4.1, Anthropic’s Claude 3.7 Sonnet) or deploy open-source models like Llama 4 within your own infrastructure.
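
To make the self-hosting option concrete, here’s a minimal sketch of what querying a locally deployed open model can look like. It assumes you’ve already stood up a local inference server that exposes an OpenAI-compatible endpoint (a common pattern with tools like Ollama or vLLM); the URL and model name below are placeholders for your own setup, not a specific recommendation.

```python
# Minimal sketch: querying a self-hosted open model through an
# OpenAI-compatible endpoint, so prompts never leave your own network.
# Assumes a local inference server (e.g., Ollama or vLLM) is already
# running at the base_url below; the model name is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # your local server, not a cloud API
    api_key="unused",  # local servers typically ignore this, but the client requires a value
)

response = client.chat.completions.create(
    model="llama4",  # placeholder: match the model name in your deployment
    messages=[{"role": "user", "content": "Summarize the attached incident report."}],
)
print(response.choices[0].message.content)
```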

Either way, generative AI isn’t going away, and your career and your business will suffer if you ignore or ban it.

How to avoid sticking your head in the sand: Develop an Acceptable Use Policy for generative AI, and then train your employees on it, educating them about the risks of violating it. Deploy enterprise-ready versions of today’s state-of-the-art AI chatbots that keep the contents of your employees’ chats in-house.

2. Accepting Generative AI Outputs Uncritically

At the other end of the technology adoption lifecycle, you’ll find innovative organizations whose employees are racing ahead to adopt generative AI as fast as possible. Often, in their rush, they fail to critically evaluate its outputs. This is a huge problem, of course, because LLMs are susceptible to so-called “hallucinations,” or outputs that are factually incorrect, nonsensical, or otherwise out of touch with reality. When such outputs get used or implemented without correction, there can be very real consequences.

No profession has fallen into this pitfall more publicly than the legal profession. In June 2023, two New York lawyers were sanctioned by a U.S. district judge and their law firm was fined $5,000 because they submitted a legal brief to the court that included six fictitious case citations generated by ChatGPT. Their excuse at the time was that they simply didn’t know the technology was capable of “making up cases out of whole cloth.” They found out the hard way that it is, indeed, capable of doing so.

This may have been the first such high-profile legal case (pun intended), but it was by no means the last. Similar incidents have since come to light in Australia in October 2024, in Wyoming in February 2025, and in Alabama in April 2025, and we can only imagine that we’ll see more in the months and years ahead.

Lawyers aren’t the only ones to fall into this pitfall, and text isn’t the only kind of AI-generated output susceptible to flaws. In April 2025, the Malaysian Chinese-language newspaper Sin Chew Daily printed an AI-generated image of the Malaysian flag, the Jalur Gemilang, on its front page. The only problem was that the image omitted the flag’s signature crescent moon, a shape that’s critical from a political, cultural, and religious perspective.

To make matters worse, that very same month, Malaysia’s own Education Ministry made a similar error with a faulty depiction of their country’s flag in an official report. In both cases, investigations, suspensions, and much hand-wringing about a general lack of human scrutiny ensued.

How to avoid accepting faulty outputs: Triple-check everything that generative AI creates for you. Create checklists to inspect for common types of errors, such as misattribution, dropped attribution, over-generalizations, or bogus statistics. If generative AI creates mission-critical Python or SQL for you, review the code line by line. If you’re not a coder, find someone who is and have them check it for you.
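
Parts of such a checklist can even be automated as a first pass before human review. The sketch below is purely illustrative (the patterns and the sample text are made up for demonstration): it flags case-name citations, percentage statistics, and attribution phrases in a draft so a reviewer knows exactly which claims to verify.

```python
import re

# Illustrative first-pass checker for AI-generated text: it flags the kinds
# of claims a human should verify (citations, statistics, attributions).
# The patterns are deliberately simplistic; this prompts scrutiny, it does
# not replace it.
PATTERNS = {
    "case citation": re.compile(r"\b[A-Z][A-Za-z]+ v\. [A-Z][A-Za-z]+"),
    "statistic": re.compile(r"\b\d+(?:\.\d+)?%"),
    "attribution": re.compile(r"\baccording to\b", re.IGNORECASE),
}

def flag_claims(text: str) -> list[tuple[str, str]]:
    """Return (claim type, matched text) pairs for manual verification."""
    return [(label, m.group(0))
            for label, rx in PATTERNS.items()
            for m in rx.finditer(text)]

# Made-up example draft, for demonstration only.
draft = "According to a recent survey, 54% of firms cite Smith v. Jones."
for label, match in flag_claims(draft):
    print(f"VERIFY [{label}]: {match}")
```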

3. Over-Reliance on Generative AI Outputs

It’s hardly controversial to say that we should catch and correct erroneous AI-generated outputs. That’s only stating the obvious, and very few would argue that we should turn a blind eye to such mistakes. It turns out, however, that it’s even possible to find ourselves at the bottom of a pitfall after relying on accurate AI-generated outputs. How is this possible?

One major concern is known as “deskilling,” which is what happens when automation lowers the level of skill required for humans to do a job. This phenomenon pre-dates the rise of generative AI, as machines have been taking over tasks of skilled laborers since the Industrial Revolution. But deskilling has come into increased focus recently as even knowledge workers are relying more and more on AI. What happens when we no longer have to solve problems ourselves or engage in deep, critical thinking to get the job done? There is a very real risk that we will become less capable than we used to be.

The concern doesn’t just apply to workers; it also applies to students, who are at risk of “over-reliance [on generative AI] leading to reduced cognitive engagement,” according to recent research studying the effects of AI on learning. When students copy an entire essay from an AI chatbot instead of writing it themselves, they turn an assignment involving deep learning into one involving shallow learning.

In one 2024 study, students were found to fall prey to “cognitive offloading” and even “metacognitive laziness,” in which over-reliance on generative AI led to a “habitual avoidance of deliberate cognitive effort.” If you reflect on your own experiences in the classroom, you’ll recall that it was the difficulties and challenges you encountered that forced you to engage more critically with the learning materials.

How to avoid over-reliance on generative AI: Force yourself to turn away from generative AI applications and “go it alone” sometimes, or at least jot down what you’d decide, create, or say before or while AI tools generate their outputs for you. In short, exercise cognitive vigilance. You don’t want your critical thinking skills to deteriorate, so keep using them.

4. Outsourcing Genuine Human Sincerity

It’s one thing to turn to generative AI for analytical outputs or technical deliverables such as instruction manuals or summaries of meeting minutes. But when we turn to generative AI to draft personal messages or to articulate weighty matters of the heart, we’re treading on dangerous ground indeed.

Social media is full of screenshots of people who obviously used AI for this kind of sentimental communication, such as a boss who expressed concern about an employee’s health but accidentally pasted the chatbot’s follow-up question asking whether they wanted “a slightly more casual or more formal version.” When we see a glaring faux pas like this one, we laugh, we cringe, we shake our heads – and many of us secretly hope we don’t get caught making the same mistake.

Mark Wilson, global design editor of Fast Company, wrote an article in April 2025 titled “If you use AI to write me that note, don’t expect me to read it.” In it, he argues that since it takes less time to copy and paste an AI-generated message than it takes the recipient to read it, the sender must care more about their own time than the reader’s. Even if that’s not always the case, a message full of obvious signs that it was generated by AI can seem rude, or at least off-putting, to the person on the receiving end.

How to avoid outsourcing sincerity: I recommend avoiding outsourcing personal sentiments altogether. Obviously, there’s a wide spectrum of sentimentality, and we each need to decide where we draw the line. But when you’re trying to say something genuine and heartfelt, in almost every case you should just use your own words. Even if your words aren’t as eloquent, the person on the receiving end will likely appreciate hearing from you, not your favorite LLM.

5. Failing to Develop AI Literacy

There’s a reason that AI literacy is such a hot topic right now. In its inaugural Skills on the Rise report, LinkedIn named AI literacy the fastest-growing skill of 2025 across most regions and disciplines. It’s even mandated by law in Europe, with Article 4 of the EU AI Act requiring companies to “ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.”

The thing is, we’re being asked to use a technology that is still very nascent, that’s evolving very rapidly, and that’s both powerful and dangerous. It’s no surprise, then, that we all need to become more fluent in what this technology is, how to use it, and how not to use it. That goes for the executives in an organization, and it goes for the team members at every level of the organizational chart.   

How to avoid being illiterate in AI: We need to really lean into generative AI right now, and learn the best way we know how: by reading books, by listening to podcasts, by taking online courses, and, above all, by rolling up our sleeves and trying it out for ourselves. If you’re designing an AI literacy program for a company, I wrote an article about what the best programs have in common.

Conclusion

When it comes to generative AI, pitfalls abound. In this article, I’ve shared five pitfalls that I’m painfully aware of, along with my humble advice for how to avoid them. I haven’t always avoided these pitfalls myself, but I’m trying very hard to avoid them going forward. My hope is that you’ll be a little safer navigating the road ahead, too. Perhaps our awareness of these pitfalls, along with an open dialogue about them, can lead to improvements in the technologies themselves.

That’s certainly a better approach than pretending that these generative AI pitfalls, and many others, simply aren’t there. 

Contact us if you’re ready to take your investment in AI literacy to the next level.
