Welcome
There are moments in horror when everything appears normal at first glance, yet something beneath the surface feels unmistakably wrong. The lighting is steady, the environment is familiar, and nothing obvious signals danger. And still, there is a quiet tension, an unshakable sense that the system you are moving through is not behaving the way it should. That feeling, as it turns out, is not unique to horror. It shows up in the systems we rely on every day.
I have spent the past couple of years working at the intersection of content strategy, user experience, and artificial intelligence within government and social service systems. These systems are often designed with the best of intentions. They are meant to help people access healthcare, find housing, apply for benefits, and navigate some of the most critical and vulnerable moments in their lives. On the surface, they appear functional. The forms are available, the tools are live, and the policies are documented. Despite this, people frequently struggle to move through them. They abandon applications midway, misunderstand key questions, or receive responses that are technically accurate but practically unhelpful. The result is often confusion, frustration, and a quiet erosion of trust.
This blog is an attempt to examine that gap between what systems are designed to do and how they are actually experienced. I am particularly interested in the role artificial intelligence plays within these environments, not as an abstract concept or a source of hype, but as a real component embedded within complex, policy-driven systems. When AI is introduced into these spaces, it does not operate in isolation. It interacts with existing rules, constraints, and assumptions. Sometimes it clarifies processes and supports users in meaningful ways. Other times, it amplifies confusion or reinforces existing gaps in ways that are difficult to detect at first glance.
These questions are not purely technical. They are fundamentally human. They involve language, interpretation, trust, and power. They require us to consider not only whether a system works, but for whom it works, under what conditions, and at what cost. In many ways, understanding AI in government systems is less about understanding the technology itself and more about understanding how people experience and respond to it.
From time to time, I will approach these ideas through a lens that might seem unexpected: horror. Horror, at its core, often explores systems that appear stable until they are not. It examines environments that trap people in loops, rules that exist but are never fully explained, and situations where individuals are forced to navigate structures that do not respond in predictable or humane ways. These themes are not far removed from the experiences many people have when interacting with complex bureaucratic systems. The parallels are subtle, but they can be revealing.
The goal is not to arrive at definitive answers, but to develop a deeper awareness of how these systems behave and how they might be improved. By paying closer attention to where breakdowns occur, we can begin to design systems that are more transparent, more responsive, and ultimately more humane. In a landscape where so much is at stake for the people relying on these tools, that work feels not only worthwhile, but necessary.