Not every idea is a use case
How to separate real use cases from noise by looking for proof, not just opinions.
Sooner or later, someone is going to rope you into an “AI project.”
It will sound exciting. Maybe the execs will talk about an AI dashboard that “predicts customer churn” or an AI assistant that “takes tickets off our backlog.” They’ll turn to you and say, “Can you capture that as a requirement?”
This is the moment to pause. Because here’s the thing: most “AI requirements” aren’t requirements at all. They’re assumptions. They’re ideas. They’re wish lists. If you take them at face value, you’ll end up writing stories for something nobody actually needs.
The thinking you can’t skip
When you’re handed an AI “requirement,” don’t just write it down. First, ask:
How real is this problem?
Sometimes, all you have is a vibe: someone’s opinion that “AI could help.” That’s not a use case. That’s a wish.
Other times, you’ll hear the same complaint from multiple people. That’s a pattern. Better, but still fuzzy.
Stronger still is when you can measure it: how many hours it burns, how many errors it causes, how often it repeats. That’s when you know you’re onto something.
Best of all is when someone has already tried a scrappy workaround (a shortcut or a manual test) and proven there’s actually time saved or quality gained. That’s evidence.
Until you can point to something measured or tested, you don’t have a real AI use case. You just have a story.
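One quick way to move a problem from “vibe” to “measured” is a back-of-envelope cost calculation. The sketch below is a minimal example of that arithmetic; every number in it is an invented placeholder you would replace with figures from your own team:

```python
# Back-of-envelope check: is this problem costly enough to matter?
# All numbers below are hypothetical placeholders -- swap in your own.

occurrences_per_week = 40      # how often the repetitive task happens
minutes_per_occurrence = 15    # time burned each time it happens
loaded_hourly_rate = 60        # rough fully-loaded cost per hour (USD)

# Convert to annual hours, then to an annual dollar cost.
hours_per_year = occurrences_per_week * minutes_per_occurrence / 60 * 52
annual_cost = hours_per_year * loaded_hourly_rate

print(f"{hours_per_year:.0f} hours/year, ~${annual_cost:,.0f}/year")
```

Even a rough estimate like this gives you something to compare against the cost of building and running an AI solution, which is exactly the evidence a wish list lacks.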
Why this discipline matters
AI projects fail for one of two reasons:
We automate things that weren’t problems.
We solve problems that weren’t painful enough.
Both sound obvious in hindsight. Both are avoidable if you slow down long enough to ask:
Do we have proof this problem is real, costly, and worth fixing?
If the answer is “no,” you’ve just saved months of wasted effort.
If the answer is “yes,” now you know you’re working on something that matters.
Look for wasted judgment
AI isn’t magic. It doesn’t belong everywhere. If the task needs deep expertise or carries high risk when wrong, AI may not be the right fit (or at least not without a human in the loop).
But where people are re-reading documents, re-keying data, reconciling versions, or drafting the same summaries over and over… that’s wasted judgment. That’s the kind of repetitive decision-making where AI can actually add leverage.
So even if two problems feel equally real, prioritize the one where people are burning brainpower on tasks they never signed up for in the first place.
How this plays out in practice
Imagine a team pushing hard for “AI-powered triage” in their ticketing system. The pitch sounds promising:
It’ll cut resolution time in half.
Now pause and ask the only question that matters:
Where’s the proof?
If no one has measured how long triage actually takes, they may discover it’s only a small slice of the delay. The real bottleneck is customers taking hours or days to reply. In that case, automating triage with AI would shave minutes off a process that wasn’t the real blocker.
Without that check, the project could easily burn months and budget while leaving the real problem untouched.
What to do in the room
When you’re in the meeting and someone drops an “AI idea,” don’t kill the energy. Capture it. But also ask:
Have we seen this problem happen more than once?
Can we measure how often or how costly it is?
Has anyone tried a workaround that proved value?
If all you get is silence, you know where you stand: it’s an idea, not a use case.
The takeaway
When AI shows up on your desk, don’t start with requirements. Start with proof.
Sort the assumptions from the problems.
Prioritize the problems that are measured (not just muttered).
Focus on places where human judgment is wasted, not where it’s essential.
That’s the real work of a BA in this moment: not to feed the AI hype cycle, but to ground it. To be the one who says, “Let’s check if this is real before we build.”
Do that, and you’ll not only protect your team, you’ll build trust. Because everyone else is busy chasing shiny ideas. You’ll be the one making sure they don’t drive a fancy car down a broken road.
Until next time,
Pragati
Liked this piece? Go deeper:
See it live: Join a free Proof Lab to catch one of the tools in action.
Read more stories: Subscribe to the newsletter.
Explore the method: Visit proofsprint.com for the full toolkit.