The echo chamber you hired
What happens to diversity of thought when AI joins the team?
Walk into almost any department store in the world and wander through the kids’ clothes section. What do you see? Boys get blue clothes with dinosaurs and race cars. Girls get pink dresses and hearts. Even at an early age, we’re telling children what their roles are. We’re encoding expectations into fabric.
These biases don’t stay in the clothing aisle. They follow us into schools, into hiring panels, into the data we collect and the systems we build. AI is trained on human data, and human data is soaked in the assumptions of the societies that produced it. This isn’t a new observation. What is newer, and less examined, is what happens when that biased system becomes a participant, or even a driver, in how teams think.
The invisible colleague
Most product teams are using at least one AI tool. It may be generating code, summarising research, drafting copy, suggesting approaches, or a combination of all of these. In many teams, it has become a de facto team member: one that contributes more lines of code than some of the humans, and whose suggestions carry weight precisely because they arrive fast and fully formed. As agents take on more complex tasks, that influence will only grow.
Every person on a team brings a perspective shaped by where they grew up, what they studied, who they’ve worked with, what they’ve failed at. AI brings none of that. Its perspective is an aggregate: the statistical centre of gravity of everything it was trained on. And the internet is not a balanced dataset. It over-represents certain languages, cultures, and viewpoints, and under-represents others. When a PM asks AI to draft user stories, it draws on those patterns. When an engineer asks it to review an approach, it suggests whatever is statistically most likely.
Alex ‘Sandy’ Pentland, writing in Harvard Business Review, argued that individual reasoning and talent contribute far less to team success than we tend to assume; the best way to build a great team is to study how its members communicate and to shape the team around successful communication patterns. So what happens when the loudest voice in the room has no pattern of its own, only an echo of someone else’s? That echo comes from a skewed source: an internet that over-indexes on English-speaking, technically literate perspectives, with the cultures that built these tools baked in. That’s the echo your team is working with.
Whose voice does the product hear?
When a PM drafts a product brief with AI, they don’t start from scratch any more. They start from the AI’s version of scratch. The tool has opinions about what a good brief looks like, and they’re easy to accept because they’re close enough to right. But the struggle with the blank page is lost.
Over time, the risk is that the PM’s instincts start to bend toward the tool’s defaults. Not dramatically. Not in ways that announce themselves. But the range of ideas they consider narrows, because the first draft is no longer theirs.
We talk about AI as an amplifier, but what if it isn’t the human’s biases and opinions being amplified? What if the AI’s biases and opinions are gradually reshaping the human perspective?
Now multiply that across the team. If everyone is leaning on the same tool for ideation, research, and decision support, the rough edges get smoothed out. The unusual perspectives, the culturally specific insights that make products resonate with people in real contexts, are quietly filed down. Diversity of thought is hard to build and easy to lose.
Organisations adopt AI to be more productive, more innovative, more competitive. But if the tool is compressing the range of perspectives that inform the work, then the organisation is becoming more homogeneous in its responses to an increasingly diverse and atomised market.
It’s what you do with it that counts
Addressing this starts with deliberate choices about how AI is integrated. There’s no question that it’s changing work, and will continue to do so. But successful teams won’t treat AI as an oracle whose first answer is good enough. They will establish ground rules and voice profiles, encoding their taste as explicit instructions about what to challenge and what to preserve. The best teams will teach the AI what good looks like for their team, their product, their users. The poor ones will accept the statistical average.
Consider a product team at a mid-sized SaaS company. Two PMs, both using AI to draft briefs and synthesise research. The first accepts the defaults. Her briefs are clean, well-structured, and indistinguishable from every other AI-assisted brief in the industry. The second has spent time teaching the tool what her team values: how they frame problems, what questions they ask before committing to a solution, where they’ve been burned before by assumptions that went untested. She’s written ground rules that tell the AI to challenge her first instinct rather than validate it, to flag when a brief lacks a clear hypothesis, to push back when a proposed solution doesn’t account for her team’s specific users. Her briefs are messier. They’re also better, because they carry the team’s accumulated judgment rather than the internet’s statistical average.
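As a rough illustration of what those ground rules might look like in practice, here is a minimal sketch in Python. The rule text, the `GROUND_RULES` constant, and the `build_brief_prompt` helper are all hypothetical, not any particular tool’s API; the point is simply that the team’s standards are written down and sent with every request, rather than left to the tool’s defaults.

```python
# A minimal, hypothetical sketch: the team's "ground rules" are kept as an
# explicit artefact and prepended to every drafting request, so the AI starts
# from the team's standards rather than its statistical defaults.

GROUND_RULES = """\
You are drafting for a product team with these standing instructions:
- Challenge the first framing of the problem before accepting it.
- Flag any brief that lacks a clear, testable hypothesis.
- Push back when a proposed solution ignores our specific users
  (assumption for this sketch: small-business admins in non-English markets).
- Surface at least one alternative the team has not considered.
"""

def build_brief_prompt(request: str) -> str:
    """Combine the team's ground rules with a PM's drafting request.

    `request` is the PM's raw ask; the returned string is what would be
    sent to whichever AI tool the team uses.
    """
    return f"{GROUND_RULES}\nTask:\n{request}"

if __name__ == "__main__":
    print(build_brief_prompt(
        "Draft a product brief for improving onboarding completion rates."
    ))
```

The design choice matters more than the code: the rules live in one shared, reviewable place, so the whole team can argue about them, version them, and change them as their judgment evolves.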
We know that the best teams are designed for diversity. How do we bake that diversity into our tools, as well as our humans? How do we shape AI so that it doesn’t shape us? The teams that thrive will be the ones that treat AI the way they treat any new hire: with clear expectations and honest feedback. Continuing to encode diversity of thought into the fabric of our teams is not a nice-to-have. It’s the difference between success and failure.