CSAT Strategy
Comments
Hi @Michele McKenna - I'm late to the party in answering, but I'll throw in my two cents.
I developed a comprehensive VOC program at a company I worked for back in the early 2000s - we measured CSAT on a transactional level (support cases), NPS (from the end users as well as executive buyers), and did a semi-annual customer loyalty survey. I've saved all my old assets I used to pitch the program internally as well as the survey questions - I'd be happy to share those if they'd help.
@David Ellin makes two great points about the readability of the survey, as well as knowing what you want to measure. That's spot on - you have to know what you want to get out of a survey so you can structure your questions correctly.
I did a presentation last week on building a CX program from scratch - this link will take you to the recap and to a recording of the event...I hope this sparks some ideas for you.
If you'd like to chat 1-1, please shoot me a message and I'll be happy to share any knowledge I can.
Make sure the surveys are written well so that customers understand what you're asking about. And more importantly, make sure you're asking about the things you really want to measure. I've talked with too many companies that did a survey and then realized the survey didn't measure what they wanted.
Here's an example. I worked with a client recently who had done a survey about the customer's experience with their support team. The survey asked the customers to rate the friendliness, responsiveness, and professionalism of the customer support reps. The reps got rave reviews but the company's retention was suffering greatly. The company realized that while their reps were very nice, no one was solving the customer's issues. They were measuring the wrong thing and never asked whether the company solved the problem.
Focus on the intent of what you want to measure and why...and then make sure your surveys get at the heart of your intent.
Great topic - thank you for getting this conversation started! I agree with others that there is a ton to discuss when it comes to CSAT and overall customer survey strategy. Here are some initial thoughts based on my experience:
- There are a few standard surveys in the industry: Customer satisfaction (CSAT), Customer effort score (CES), and Net Promoter Score (NPS). As you design your overall strategy, think about the type of survey you are sending out and what makes the most sense based on the timing. It's also not a bad idea to use different types of surveys for different aspects of the customer lifecycle, so you are not just asking the same thing over and over. For example, you might send CSAT after a new customer officially goes live with your software to gauge initial satisfaction, then you might send CES surveys after a customer uses/deploys/configures a critical feature for the first time, and then on a recurring quarterly basis you might send out NPS.
- CSAT vs CES vs NPS vs Custom Surveys - There are so many options! Do your research and make sure you are sending the right survey at the right time to your customer. Here's a really helpful article on the difference between CSAT, CES, and NPS.
- Timing is everything. Targeting too. - I caution my customers against sending a survey too early or to the wrong people. All that does is invite uninformed and/or negative responses. Companies seem eager to send out NPS every quarter to everyone. Think about that for a second. Do you want a user who became a customer only yesterday to give you a CSAT or NPS score? I don't! It's too soon. I want to survey only users who can give informed responses, so my company can really use that data to improve, and so when I, as the CSM, follow up on a negative response, I can really dig in and understand it. The same concept applies to targeting. Do you want CES or CSAT feedback about the implementation of your software from someone who was not involved in the implementation? Nope! You get where I'm going here. Time your surveys strategically and target them to the right people - this ensures the best feedback possible.
- Don't bother asking if you aren't going to follow up - I know this sounds harsh, but I really believe this is critical. Why ask for a score and comments about that score, if you aren't going to do something with it? At my company, we have automation set up to follow up with every single survey response we receive - the good, the bad, and the ugly. We have separate messaging that goes out to our promoters, passives, and detractors, so it is a tailored follow-up message but no matter what we always follow up! And, we also use automation to send ALL survey responses to company-wide Slack channels so the entire company has visibility into customer feedback.
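The follow-up automation described above could be sketched, for example, as a small routine that buckets NPS scores into the standard promoter/passive/detractor segments and picks the matching tailored message. The segment cutoffs (9-10 promoter, 7-8 passive, 0-6 detractor) are the standard NPS definition; the template text and function names here are purely illustrative, not any particular company's tooling:

```python
def nps_segment(score: int) -> str:
    """Bucket an NPS score using the standard definition:
    9-10 = promoter, 7-8 = passive, 0-6 = detractor."""
    if not 0 <= score <= 10:
        raise ValueError(f"NPS score must be 0-10, got {score}")
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Hypothetical per-segment templates, so every response - the good,
# the bad, and the ugly - gets a tailored follow-up.
FOLLOW_UP_TEMPLATES = {
    "promoter": "Thanks so much! Would you be open to a reference or review?",
    "passive": "Thanks for the feedback - what would make this a 9 or 10?",
    "detractor": "Sorry we missed the mark - your CSM will reach out today.",
}

def follow_up_message(score: int) -> str:
    """Return the tailored follow-up for a given NPS score."""
    return FOLLOW_UP_TEMPLATES[nps_segment(score)]
```

In practice the same segment lookup can also drive the routing step, e.g. posting every response with its segment label to a company-wide Slack channel so the whole company sees the feedback.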
Anyway, sorry for the long post but I hope this is helpful to you.
All the best,
Naomi
- https://waypointgroup.org/stop-chasing-renewals-heres-how-to-keep-customers-engaged-so-renewals-and-more-will-just-come/
- https://waypointgroup.org/you-are-responsible-for-customer-success-but-are-you-driving-customer-success/
And without the benefit of knowing "the problem to be solved," here are a few thoughts in response to your questions:
1. There's a strong difference between transactional/touchpoint feedback and "overall relationship" feedback, where the latter is much more than the sum of the parts. It takes a whole company to create a customer advocate/promoter, and when a customer's expectations aren't being met, the root cause often isn't where you think it is. A customer's expectations might have been set by sales, marketing, or implementation, so even when the individual touchpoints are performing well, what was communicated or committed to the customer may not be obvious.
2. You *MUST* have a unified feedback strategy. Both because of #1 above and because the "customer feedback" process must itself exemplify a positive experience (you're adding yet another touchpoint), the goal is to make it easy AND valuable for the customer to share (if they perceive no WIIFM, why would they participate?). I've never seen a siloed program work -- mostly you get people forced to be political with the data, spinning a "see how great we are" story from bad data, and no real change ever happens, creating yet another missed opportunity (and likely more missed customer expectations).
I'm happy to have a direct phone call if you want to discuss (this goes for anyone in the community... having done this work for 20+ years, and *not* being a salesperson that wants to push an agenda or solution... just here to network and help with best practices and templates).
/Steve