Experiment #1: Can an AI Recommend the Right Kind of Fear?
The Question
Horror fans know that fear isn’t one thing.
Different people respond to different kinds of fear—supernatural dread, psychological tension, unsettling atmosphere, or sudden shocks.
That raises an interesting question:
Could an AI recommend horror books or films if it first understood what kind of fear someone prefers?
For this first experiment, I wanted to see whether a simple prompt could behave like a kind of AI horror librarian, asking a few questions before making recommendations.
Why This Is Interesting
Most recommendation systems focus on categories—genre, popularity, or previous viewing habits.
But horror preferences are often driven more by psychology than by simple categories.
For example, someone might enjoy haunted house stories but avoid extreme gore. Another reader might love cosmic horror but dislike anything that feels too realistic.
Understanding those preferences requires asking better questions.
That makes horror a surprisingly interesting space for experimenting with AI recommendations.
The Setup
For this experiment, I tested the prompt using ChatGPT (OpenAI) to see how a conversational AI system would interpret the instructions.
The AI’s job is to behave like a thoughtful (and slightly eccentric) horror librarian.
Before suggesting books or films, it first tries to understand:
• what kinds of fear the reader enjoys
• what themes they prefer
• what types of horror they want to avoid
Once it has those answers, it generates tailored recommendations.
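The ask-first, recommend-second flow can be sketched as a few lines of code. Everything below is a hypothetical illustration — the question list, the tiny catalog, and the tag-matching logic are invented for the example and are not how ChatGPT actually works internally.

```python
# A minimal sketch of the librarian flow: gather preferences first,
# then recommend. Catalog and tags are invented for illustration.

QUESTIONS = [
    "What kinds of fear do you enjoy?",
    "What themes do you prefer?",
    "What types of horror do you want to avoid?",
]

# Tiny stand-in catalog; the tags are assumptions made for this sketch.
CATALOG = [
    {"title": "The Haunting of Hill House",
     "tags": {"haunted house", "psychological"}},
    {"title": "Hell House",
     "tags": {"haunted house", "gore"}},
    {"title": "The Woman in Black",
     "tags": {"ghost", "atmospheric"}},
]

def recommend(enjoys, avoids):
    """Return titles matching at least one enjoyed tag and no avoided tag."""
    return [
        book["title"]
        for book in CATALOG
        if book["tags"] & enjoys and not (book["tags"] & avoids)
    ]

# Answers a reader might give to the three questions:
picks = recommend(enjoys={"haunted house", "psychological"},
                  avoids={"gore"})
print(picks)  # ['The Haunting of Hill House']
```

The point of the sketch is the ordering: `recommend` is only ever called after the preference sets exist, mirroring the prompt's rule that no recommendation happens before the questions are answered.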
The Prompt
Here’s the prompt I used:
You are a slightly eccentric but kind horror librarian.
Before recommending anything, ask the reader exactly three questions to understand:
• what kind of fear they tolerate
• what kind of horror they avoid
• what mood they are in right now
Ask the questions one at a time.
Do not recommend anything until the questions are answered.
Example Interaction
To see how the prompt behaved, I answered the AI’s questions as a reader might.
Here’s a shortened version of one interaction:
AI: What kind of fear do you tolerate most—psychological tension, supernatural dread, jump scares, or something else?
Reader: Psychological horror and haunted houses. Jump scares don’t bother me much.
AI: Are there any themes you prefer to avoid?
Reader: Yes—animal cruelty and sexual violence.
AI: What kind of mood are you in right now?
Reader: Something unsettling and atmospheric.
AI Recommendation:
• The Haunting of Hill House by Shirley Jackson
• Hell House by Richard Matheson
• The Woman in Black by Susan Hill
• Film: The Others (2001)
The recommendations leaned toward slow-building supernatural horror, which aligned well with the preferences given.
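An interaction like the one above could also be driven programmatically instead of typed into the chat window. The sketch below is an assumption, not how this experiment was actually run (it was manual): it shows the librarian prompt sent as a system message, with the commented-out API call assuming the OpenAI Python SDK.

```python
# Sketch: wiring the librarian prompt into a chat API conversation.
# The experiment itself was run manually in ChatGPT; this code only
# illustrates how the prompt would sit as a system message.

LIBRARIAN_PROMPT = (
    "You are a slightly eccentric but kind horror librarian. "
    "Before recommending anything, ask the reader exactly three questions, "
    "one at a time, and do not recommend until all are answered."
)

def build_messages(history):
    """Prepend the librarian system prompt to the conversation so far."""
    return [{"role": "system", "content": LIBRARIAN_PROMPT}] + history

history = [
    {"role": "assistant",
     "content": "What kind of fear do you tolerate most?"},
    {"role": "user",
     "content": "Psychological horror and haunted houses."},
]
messages = build_messages(history)

# With the OpenAI SDK (an assumption of this sketch), the next turn
# would be requested roughly like this:
# client = openai.OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Keeping the prompt in the system message is what lets the rule "do not recommend until the questions are answered" persist across every turn of the conversation.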
What Happened
The results were surprisingly good.
A few things stood out:
• The quality of recommendations improved significantly once the AI asked questions first.
• Avoiding unwanted themes worked well when those preferences were clearly stated.
• The interaction felt more like a conversation than a search query.
In other words, the prompt worked less like a database lookup and more like a librarian trying to understand the reader.
What I Learned
One small change made a big difference: asking questions before generating answers.
This mirrors something UX designers often see in real systems.
Good recommendations usually depend less on the algorithm itself and more on how well the system understands the person using it.
Even a simple conversational structure can dramatically improve how relevant AI responses feel.
Next Experiment
This experiment relied on just three questions.
In the next experiment, I want to explore whether a slightly more detailed fear-profile questionnaire produces even better horror recommendations.
This experiment was conducted using ChatGPT (OpenAI). AI outputs may vary depending on model version and system updates.