When the Conversation Becomes Harder Than the Problem

I’ve been noticing something about how we talk about AI, particularly in government and policy spaces. Much of the writing in this area is surprisingly difficult to understand.

This isn’t necessarily because the ideas themselves are unclear. In many cases, they are strong and well considered. But the way they are expressed often introduces an additional layer of effort for the reader.

The language tends to be abstract, dense, and filled with terminology that assumes a high level of familiarity with the field. If you work in this space every day, that style of communication can start to feel normal. Over time, you build fluency in it. But when you pause and read it the way a newcomer would, it becomes clear how much interpretation is required just to follow the core point.

I find myself translating as I read. I’ll come across a phrase, stop, and mentally rephrase it into something more concrete to make sure I understand what is actually being said. That extra step is easy to overlook, but it matters more than we might think. It introduces friction into something that is meant to inform, guide, or influence real decisions.

The audience for this work is broader than the people writing it. It includes program managers, caseworkers, designers, policymakers, and others responsible for applying these ideas within real systems. Many of them are already operating under pressure, balancing constraints, and making decisions that directly affect people’s lives. When the language itself becomes another obstacle to navigate, it creates distance between the idea and its application.

This is where something important can get lost. The ideas may still hold value, but they become less accessible. They require more time to process, more background knowledge to interpret, and more effort to translate into action. As a result, they may not travel as far as they could, or reach the people who would benefit from them most.

What stands out to me is that this often happens in writing that is trying to advocate for clarity, transparency, and more human-centered systems. There is a disconnect between the intent of the work and the experience of reading it. We talk about reducing complexity in systems, but sometimes replicate that complexity in the way we describe them.

None of this means that the work should become less rigorous or less precise. It means being more intentional about how ideas are communicated. Clear language does not dilute complex ideas; it makes them more accessible. It allows more people to engage with an idea, challenge it, and apply it in meaningful ways.

If the goal is to improve systems that serve the public, then the conversation around those systems should be reachable. Not simplified to the point of losing meaning, but expressed in a way that invites understanding rather than requiring translation. Otherwise, we risk creating a body of work that is thoughtful in theory but limited in impact, simply because it is harder to access than it needs to be.

This is one version of a pattern that shows up in AI systems themselves.

I’ve been writing about these patterns over the past few weeks—where friction shows up, and how easily people disengage.

I put together a short checklist that captures the most common points where AI systems lose people. If you’re working on or evaluating an AI tool, you might find it useful.
