Fair or Flawed? Generative AI Bias Perceptions from Tweets to Teams

In their recent study, Evans Uhunoma and Maryann Asemota delve into how users perceive bias in generative AI outputs across two distinct contexts: the workplace and public discourse on social media. Drawing on survey data from UK-based employees and sentiment analysis of posts on X (formerly Twitter), the authors examine the extent to which generative AI systems—commonly regarded as innovative tools for content creation—may in fact reproduce or amplify existing societal biases.

Generative AI has rapidly transitioned from novelty to necessity, churning out emails, images, and entire reports at breakneck speed. But as more of our thinking is outsourced to machines, a critical question looms: are these tools truly neutral, or do they simply reflect—and sometimes exaggerate—our own societal biases? This study tackles that question head-on, comparing how two distinct groups—the general public on social media and employees in UK workplaces—perceive bias in AI outputs.

In the workplace, where generative AI is now used for everything from marketing to customer support, the study found surprisingly modest levels of perceived bias. Employees were asked whether AI-generated outputs seemed skewed against women, people of colour, or neurodiverse individuals. Across the board, the average response hovered near disagreement, suggesting users generally trust the fairness of their workplace AI tools. There was one notable exception: people of colour were more likely to be perceived as targets of subtle bias, with the mean bias score for this group sitting slightly higher than for the others.
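
The paper's exact survey instrument isn't reproduced here, but the kind of per-group summary it reports can be sketched in a few lines. The sketch below assumes a 1–5 Likert scale (1 = strongly disagree that outputs are biased) and hypothetical column names:

```python
# Minimal sketch: summarising perceived-bias Likert items by target group.
# Assumes a 1-5 scale (1 = strongly disagree, 5 = strongly agree) and
# hypothetical column names; the study's actual instrument may differ.
import pandas as pd

responses = pd.DataFrame({
    "bias_women":        [2, 1, 3, 2, 1],
    "bias_poc":          [3, 2, 4, 3, 2],
    "bias_neurodiverse": [1, 2, 2, 1, 2],
})

# A mean below the scale midpoint (3) indicates overall disagreement
# that the AI outputs are biased against that group.
means = responses.mean().round(2)
print(means)
# "bias_poc" sitting highest would mirror the pattern reported above.
```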

Public sentiment, on the other hand, especially among users on X, told a more cynical story. A text-mining analysis of nearly 2,000 tweets revealed a heavy lean toward negative perceptions of AI bias, particularly regarding gender and race. From generated images that reinforce stereotypes to text outputs that privilege masculine traits in leadership descriptions, users online expressed real scepticism. They saw these biases not as occasional flukes but as systemic flaws rooted in the very data that trains these models.
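
The study doesn't name its text-mining toolkit, but a common way to score short social-media posts is NLTK's VADER analyser. The sketch below is a minimal illustration under that assumption; the example tweets and the conventional ±0.05 neutrality cut-off are illustrative, not the authors' pipeline:

```python
# Minimal sketch of tweet sentiment classification, assuming NLTK's VADER
# analyser; the study's actual pipeline and thresholds may differ.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [
    "Asked for 'a CEO' and got five near-identical images of white men.",
    "This image model handled a diverse office scene surprisingly well.",
]

# VADER's compound score runs from -1 (most negative) to +1 (most positive);
# +/-0.05 is the conventional cut-off for the neutral band.
for tweet in tweets:
    score = sia.polarity_scores(tweet)["compound"]
    label = "negative" if score <= -0.05 else "positive" if score >= 0.05 else "neutral"
    print(f"{label:>8}  {score:+.3f}  {tweet}")
```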

Interestingly, neurodivergent users offered a counterpoint. Both online and in workplaces, people with conditions such as dyslexia or ADHD often praised AI tools as communication aids. Far from marginalising them, AI was viewed as empowering, helping them structure their thoughts, streamline emails, and boost confidence in professional settings. This stark contrast adds a layer of complexity to the debate: AI might exclude some, but for others it can be a rare bridge to clarity.

One of the core insights here is that bias isn’t just a data issue—it’s a perception issue. While workplace tools often benefit from curated datasets and structured use cases, public tools are exposed to the wild variability of online content. That means public users are more likely to encounter those amplified stereotypes researchers warn about. But perception is also shaped by familiarity. People who had only recently started using AI tools reported seeing more bias than long-time users—suggesting that trust, like skill, may grow with exposure.
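
That exposure effect is, at heart, a simple association between length of use and perceived bias. As a hedged illustration (with invented numbers standing in for the survey data), such a relationship could be checked with a Pearson correlation:

```python
# Sketch: testing whether longer AI use is associated with lower perceived
# bias. The numbers are invented for illustration, not the study's data.
from scipy.stats import pearsonr

months_using_ai = [1, 2, 3, 6, 9, 12, 18, 24]
perceived_bias  = [3.8, 3.5, 3.4, 2.9, 2.7, 2.4, 2.2, 2.0]  # 1-5 scale

r, p = pearsonr(months_using_ai, perceived_bias)
# A strongly negative r would match the pattern reported above.
print(f"r = {r:.2f}, p = {p:.3f}")
```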

The researchers also uncovered a critical truth often glossed over in tech hype: AI’s output is only as good as the prompt that feeds it. Some users found that more inclusive or specific instructions led to better, fairer results—while vague or generic prompts produced generic (and often biased) content. This feedback loop, where users shape AI and are in turn shaped by it, calls into question the notion of AI as an objective third party. It’s more like a mirror—one that sharpens or distorts depending on who's looking and what they ask.
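
To make that feedback loop concrete, the sketch below sends the same task with two levels of prompt specificity through the OpenAI Python client. The client is an assumption made for illustration (the study itself is tool-agnostic), and the model name and prompt wording are invented:

```python
# Sketch: same task, two levels of prompt specificity. Assumes the OpenAI
# Python client (pip install openai) and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

prompts = {
    "vague":     "Describe a great leader.",
    "inclusive": ("Describe a great leader. Avoid gendered language and "
                  "stereotypes; draw traits from a range of cultures, ages, "
                  "and working styles."),
}

# Comparing the two outputs side by side shows how much of the "bias"
# in a response can be steered by the specificity of the instruction.
for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```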

Crucially, the study underscores the importance of better training data. Users from both groups urged developers to build more representative datasets, not just to avoid lawsuits or bad press, but to ensure AI serves everyone equitably. As AI systems are increasingly embedded in decision-making—from hiring to healthcare—the call for diverse, inclusive training inputs becomes not just ethical, but urgent.

In the end, the paper doesn’t offer easy answers—but it does offer a clear takeaway: the myth of neutral AI is just that—a myth. Perceptions of fairness vary widely depending on context, usage, and user identity. If generative AI is to truly enhance human potential, it must be designed with humanity’s diversity at its core. Until then, we must remain both hopeful and watchful, knowing that our digital assistants are still learning—mostly, from us.

Read the full paper