Why Doesn’t Great Support Always Feel Great?
The dashboards say we’re doing well:
✓ Response times are tight
✓ AI deflection is up
✓ The team is moving fast
✓ CSAT hasn’t dropped
So why does support still feel... off?
That’s the question I’ve been wrestling with, and I know I’m not alone.
📉 When the metrics say “great,” but it doesn’t land that way
On paper, we’re hitting targets. But zoom in and the cracks show:
Fast replies, without real relevance
Resolved tickets, where nothing’s actually resolved
Help that shows up, but doesn’t help
And if you lead this work, you know the trap: Metrics look clean. But the experience isn’t.
👉 Run this audit: it changed how I think
Here’s what I did: Pulled 10 resolved tickets at random. No filters. No cherry-picking.
Then I asked:
“Would I feel good if this landed in my inbox?”
“Was this complete and clear, or just closed?”
“Did it feel like we showed up?”
It was... humbling.
Some were technically accurate but lacked warmth. Some didn’t quite follow through. Others felt like handoffs rather than genuine help.
Here’s what we need to change
Short-term:
We should update our rubrics to score tone, clarity, and effort required by the customer, not just speed and handle time.
Long-term:
Let's redefine “quality” across AI and humans. Fast is fine, but it has to be useful. Automated is great, but it still needs to feel understood.
For CX/Care/Service leaders: A real question
AI can scale. Ops can optimize. But if your bar is just “resolution,” you’ll miss what matters.
Relevance is the bar.
So I’ll leave you with this: What signal tells you that support actually felt good?
Let’s build from there.
Best, Guneet
Disclaimer: The views expressed in this newsletter are solely mine. I am not a spokesperson for my employer, nor do I represent my employer's opinion.