People with Lower AI Literacy Are More Receptive to AI

A still from The Wizard of Oz (1939).

This article first appeared in the Harvard Business Review here: https://hbr.org/2025/07/why-understanding-ai-doesnt-necessarily-lead-people-to-embrace-it

Why Understanding AI Doesn’t Necessarily Lead People to Embrace It

by Chiara Longoni, Gil Appel, and Stephanie M. Tully

July 11, 2025

Summary.   

New research has uncovered a paradoxical relationship between AI literacy and receptivity: Individuals with lower AI literacy are more likely to embrace AI, despite perceiving it as less capable and more ethically concerning. This enthusiasm stems from a sense of “magic” associated with AI’s capabilities. Conversely, those with higher AI literacy, who understand the mechanics behind AI, tend to lose interest as the mystique fades. The findings challenge the assumption that increased education leads to greater adoption and have significant implications for business strategies and marketing decisions. While the research focused on consumers’ AI interest and adoption, its results have implications for a broad range of business decisions, including organizational-adoption strategies, product design, and marketing.

Artificial intelligence has become an invisible assistant, quietly shaping how we search, scroll, shop, and work. It drafts our emails, curates our feeds, and increasingly guides decisions in education, healthcare, and the workplace. As companies increasingly integrate AI into their products and services, a critical but often overlooked question emerges: Why do some people embrace AI enthusiastically while others seem more hesitant?

In a new paper published earlier this year in the Journal of Marketing, we uncovered a surprising pattern: The more knowledge people have about AI and how it works, the less likely they are to embrace it. This pattern emerged when we combined two datasets: one measuring cross-country AI literacy (based on levels of “AI talent” assessed by Tortoise Media) and another measuring country-level interest in using AI (from Ipsos). People in countries with lower average AI literacy tended to be more open to adopting AI compared to those in countries with higher literacy levels. Then, across six additional studies involving thousands of U.S.-based participants—including undergraduate students and online samples selected to be representative of the U.S. in terms of age, gender, ethnicity, and regional distribution—we consistently found that lower AI literacy predicts greater receptivity to AI.

Our studies found that the greater interest in AI wasn’t because people with less knowledge thought AI was more capable or more ethical. Quite the opposite: People with lower AI literacy saw AI as less capable and more ethically concerning. Yet, they were more likely to have used it themselves and to want it used by others.

What explains this surprising finding? It comes down to the way people perceive AI. For those who know less about AI, envisioning AI completing tasks feels magical and awe-inspiring. This sense of “magic” fuels enthusiasm. But for those with higher AI literacy, who understand the mechanics—algorithms, data training, computational models—AI loses its mystique. Much like learning how a magic trick works, this knowledge strips away the wonder. With it, the interest in using AI fades.

The gap in interest in AI usage is more pronounced when AI tackles tasks we typically see as uniquely human, such as writing a poem, composing a song, cracking a joke, or giving advice. In these creative and emotional domains, people with lower AI literacy are particularly likely to see AI as magical and more willing to hand over control to it. But when it comes to tasks rooted in logic, like number crunching or data processing, where it is more obvious how AI can do the tasks and the magic is gone, this pattern fades. In some cases, it even reverses.

These findings challenge a core assumption in tech adoption: that more education will naturally lead to greater adoption. In reality, as knowledge about AI grows, interest in AI-powered products and services may diminish.

While our studies focused on consumers’ AI interest and adoption, understanding who embraces AI—and why—has implications for a broad range of business decisions, including organizational-adoption strategies, product design, and marketing. Here’s how you can apply them.

Assess Managers’ and Employees’ AI Literacy

Managers’ and employees’ attitudes toward AI may be shaped by their level of AI literacy. Low AI literacy can make them more open to using AI across business functions like hiring, accounting, product design, and marketing, even when it may not be the optimal solution. In contrast, those with higher AI literacy may have a more informed, less-emotionally-driven view of AI, which can lead to greater caution or even disinterest—not because they think AI is worse but because it feels less novel or transformative.

By understanding both their own AI literacy and that of their teams, managers can better calibrate how they approach AI adoption so they avoid both overenthusiasm and underutilization. That’s why we launched a free tool designed to help leaders assess their AI literacy and surface blind spots before they affect critical business choices like strategy, staffing, or customer trust. (The data collected through the tool is used strictly for academic research purposes and is fully anonymized.)

Don’t Assume Your Most Tech-Savvy Users Are Your Most Receptive

If you’re building or marketing AI-powered tools, our findings should give you pause. They suggest that the people in your target market who are the most technically sophisticated, such as those with AI-related degrees, may not be your most receptive ones. Especially in domains like creativity or coaching, target customers who are the least AI literate may be your most enthusiastic adopters.

Tailor Your Marketing to Your Audience’s Literacy Level

To tailor messaging effectively, companies need to first assess their audience’s AI literacy. This can be done through surveys, customer interviews, or behavioral proxies (e.g., technical forums visited, prior product usage patterns). Tools like ours can also help gauge AI literacy quickly and offer guidance for segmentation.

Some AI use cases are naturally a better fit for AI-savvy consumers—such as software engineers using generative AI tools like GitHub Copilot or Cursor to write better code, or Google’s Vertex AI to help build AI agents. If your target customers are AI-savvy, don’t rely on the “wow” factor to increase adoption. Instead, highlight the product’s capability, performance, or ethicality. In contrast, if the target audience for your AI product is the average consumer and your value proposition includes generating awe, don’t demystify it by providing loads of detailed technical explanations.

Design Products with Different Literacy Levels in Mind

You may assume that your users have a solid understanding of technology and can navigate sophisticated UX designs or that your customers want maximum autonomy in using AI. But many users just want simplicity, clarity, and guidance. Effective onboarding and intuitive UX are key. ChatGPT’s success, for instance, had less to do with its back end and more to do with how accessible it felt to everyday users.

Be Transparent and Honest

Don’t interpret our findings as a call to keep consumers uninformed. Sustainable and responsible use of AI requires informing consumers of the tradeoffs involved when AI is used to support or replace human judgment, especially in high-stakes domains like hiring, healthcare, or education. This includes knowing that AI systems can reflect or amplify existing biases, that their outputs are shaped by the data they’re trained on, and that “automated” doesn’t mean infallible or neutral. Overreliance on intuitive impressions of AI can lead to misuse, misplaced trust, and ethical lapses. Businesses should ensure consumers are educated about any factors that could impact their welfare.

While the sense of magic can fuel initial enthusiasm, it is likely to backfire if AI doesn’t truly benefit the consumers it serves. When AI is marketed as magical but doesn’t provide real benefits, users will feel disappointed or manipulated, and trust will erode.

The Bottom Line

AI is reshaping how we learn, work, and make decisions. But our relationship with it is driven not only by what it can do, but also by what we think of it. As AI becomes embedded in everyday tools, understanding how different people—consumers, employees, and managers—perceive it, and how those perceptions differ across groups, may be one of the most important steps we can take.

Chiara Longoni is an associate professor of marketing at Bocconi University in Milan. Follow her on Twitter @longoni_chiara.    

Gil Appel is an assistant professor of marketing at the George Washington University School of Business.

Stephanie M. Tully is an associate professor of marketing at the University of Southern California’s Marshall School of Business.
