All Things CSAT
Hey All -
I work as a CSM for an incredible startup. Right now we're still very much a grassroots company.
To gain insight into what works best for measuring and improving CSAT, I'd love to hear from you - the community. We're looking for your input on the following questions:
- Which questions get you the most useful answers when measuring CSAT?
- Who do you poll to get the most accurate CSAT data? Is it typically your main point of contact, or other employees at the account?
- How do you incentivize clients to respond to CSAT surveys?
- What strategies have you found to be effective in improving customer satisfaction?
Any insights you have would be incredibly helpful.
Giving Thanks,
Kese
Comments
-
In my experience, CSAT questions work best when they're short and sweet, sent immediately after an interaction, and ask the client to rate that specific interaction. I'll answer each question more specifically below:
- Simple questions such as 'Was our team able to satisfactorily help solve your issue today?' or 'Were your needs addressed to your satisfaction?' were very effective. Each was followed by a second, optional question depending on the answer: if we got a score of 1 out of 5, the follow-up asked what we could improve. We also gave clients options to select, such as Issue Was Unresolved, Ongoing Bug, Slow Response Time, Other, etc., as well as an open-text field for additional context and feedback.
- The surveys were sent at the conclusion of a ticket or issue our team was handling. This was automated, so as soon as the ticket was resolved the survey went out to everyone involved in it. We used this process for both our support team and our 1:many Customer Success team.
- The best 'tactic' for getting clients to respond to CSAT was sending it immediately after the interaction, as mentioned above. The best incentive for getting them to keep providing feedback was following up with those who responded.
- Strategies to improve customer satisfaction all depend on what the root causes of low scores are. When you get responses back, whether it's CSAT, NPS, or something else, tag them with categories such as product bug, unresolved request, poor product-market fit, etc. Then analyze that data to see what is driving your score up or down; that tells your organization where to focus in order to move the dial on CSAT. You can get some quick wins with very timely follow-ups, but from a long-term perspective it's all about identifying and then solving the root causes of poor scores (see the sketch after this list for one way to run that analysis).
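To make the tagging-and-analysis step concrete, here's a minimal sketch in Python. The scores, category labels, and field names are invented for illustration; in practice they'd come from whatever your survey tool exports.

```python
from collections import Counter

# Hypothetical CSAT responses: a 1-5 score plus the follow-up category the
# respondent selected on a low score. Field names are illustrative only.
responses = [
    {"score": 1, "category": "Ongoing Bug"},
    {"score": 2, "category": "Slow Response Time"},
    {"score": 5, "category": None},
    {"score": 4, "category": None},
    {"score": 1, "category": "Issue Was Unresolved"},
]

# Overall CSAT: share of satisfied responses (4 or 5 on a 5-point scale).
satisfied = sum(1 for r in responses if r["score"] >= 4)
print(f"CSAT: {100 * satisfied / len(responses):.0f}%")

# Count which tagged categories show up behind the low scores (3 or below),
# so the team knows which root causes to go after first.
low_score_drivers = Counter(
    r["category"] for r in responses if r["score"] <= 3 and r["category"]
)
for category, count in low_score_drivers.most_common():
    print(f"{category}: {count} low-score responses")
```

The point is simply to surface which categories appear most often behind low scores, so follow-ups and roadmap conversations target the real drivers.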
Lastly, if you're looking to gauge client sentiment at specific points in the customer journey, such as post-onboarding, I'd look to other survey methodologies for that purpose and keep CSAT focused on specific interactions.
Hope this helps!
-
Elizabeth pretty much explained it better than I could. The only thing I'd add is to make sure we're sharing the same definition and purpose of "CSAT".
As Elizabeth implies, it specifically measures satisfaction with a particular transaction (support case, sales process, etc.). If we share that definition, then the rest is fairly straightforward: simple, short questions to get a rating and a why.
-
Without knowing your product or business, it's tough. But Elizabeth gave a great summary.
For CSAT, I like measuring transactional things, NOT NPS! As Elizabeth mentions, the best process for gathering data like this is through customer interactions (ideally tickets of some sort). This can be email tickets, chats, phone calls, training sessions, QBRs, etc. I've had CSMs enter tickets from QBRs before to keep data rolling in.
A few basic questions I like to use as a starting point...
- Was your issue resolved? (Y/N)
- Was this the first time you contacted us about this issue in the past X days/weeks/months? (Y/N)
- How easy was it to interact with us today? (1-5, Very easy to Very difficult)
- How satisfied are you with our interaction today? (1-5, Very satisfied to Very dissatisfied)
From here, you can start measuring things like top box on overall CSAT, bottom box on overall CSAT, first contact resolution, and issue escalations. Question 3 is helpful for understanding the customer's journey up to the interaction.
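As a rough illustration of how those metrics fall out of the four questions above, here's a small Python sketch; the field names and sample data are made up, so treat it as a starting point rather than a prescribed implementation.

```python
# Hypothetical responses to the four questions above. Field names are made up;
# adapt them to whatever your ticketing or survey tool actually exports.
responses = [
    {"resolved": True,  "first_contact": True,  "ease": 5, "satisfaction": 5},
    {"resolved": True,  "first_contact": False, "ease": 3, "satisfaction": 4},
    {"resolved": False, "first_contact": True,  "ease": 2, "satisfaction": 1},
]

n = len(responses)

# Top box / bottom box on overall satisfaction (question 4, 1-5 scale).
top_box = 100 * sum(1 for r in responses if r["satisfaction"] == 5) / n
bottom_box = 100 * sum(1 for r in responses if r["satisfaction"] == 1) / n

# First contact resolution: the issue was resolved and it was the first time
# the customer reached out about it (questions 1 and 2).
fcr = 100 * sum(1 for r in responses if r["resolved"] and r["first_contact"]) / n

print(f"Top box: {top_box:.0f}%  Bottom box: {bottom_box:.0f}%  FCR: {fcr:.0f}%")
```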
As for your fourth question, the biggest one is setting customer expectations properly. Often customers don't mind a wait; it's the unknown duration of the wait that frustrates them. If someone submits a ticket, setting an expectation of 24-48 hours may still prompt a call when the need is urgent, but if you then don't respond within 48 hours, you've set yourself up to fail. This is one example, but there are plenty of others. If you can set an expectation of what the next step looks like for the customer, that helps a great deal in driving CSAT.