Agent CSAT, Customer Effort Score, or both after a recent support interaction?

Tanuj Diwan Member Posts: 30 Expert
edited October 2023 in Metrics & Analytics

I had an interesting conversation with a colleague today about how to gather valuable feedback after a recent customer support interaction.

Agent CSAT: should it be "How satisfied were you with the agent?" with an open-ended follow-up?

CES: should it be "How easy was it for you to get the issue resolved?" with a follow-up?

Agent CSAT gives you a read on agent quality and the interaction, whereas CES gives you the ease of the transaction but might miss the agent part: the agent might be good while the wait time or the IVR is the real frustration.

If we ask both with open-ended follow-ups, it might be too much for the customer.
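For concreteness, the two versions I'm debating could be captured as rough survey definitions like this sketch (scales, field names, and wording are just illustrative, not any tool's schema):

```python
# The two candidate post-interaction microsurveys, as rough definitions.
# Scales, field names, and wording are illustrative, not any tool's schema.
AGENT_CSAT = {
    "metric": "agent_csat",
    "question": "How satisfied were you with the agent who handled your case?",
    "scale": (1, 5),  # 1 = very dissatisfied, 5 = very satisfied
    "follow_up": "What could the agent have done better?",  # open-ended
}

CES = {
    "metric": "ces",
    "question": "How easy was it for you to get the issue resolved?",
    "scale": (1, 7),  # 1 = very difficult, 7 = very easy
    "follow_up": "What made it easy or difficult?",  # open-ended
}
```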

What do you suggest? CES, CSAT, both or something else?

Comments

  • Matt Vadala Member Posts: 47 Expert
    edited July 2020

    A company should be collecting multiple points of data after an interaction. Collecting on the agent alone can lead to agents receiving negative feedback they aren't responsible for, when the real cause was the product/software. Asking about both the agent and the software should give a truly complete sense of where the customer was during that interaction.

  • Tanuj Diwan Member Posts: 30 Expert
    edited July 2020

    Absolutely, so asking both is definitely an option. Have you tried, or heard of, any other way of asking these questions?

  • Alex Tran Member Posts: 38 Expert
    edited July 2020

    If I had to choose, I'd choose CES, because agent scores are more personal. What's most important is the content discussed, and getting customers' feedback to the right teams, like Product.

  • Matt Vadala Member Posts: 47 Expert
    edited July 2020

    @Tanuj Diwan one could send a single open-ended question out after each interaction and ask: How would you rate your service overall to date with "company"?

    This would allow the customer to freely voice opinions not only on their interaction with the agent they worked with, but also on the overall service. It also achieves the minimalist approach of asking a single question.

  • Daryl Colborne Member Posts: 50 Expert
    edited July 2020

    Right now we are using CSAT after Support tickets are closed, but this may change. Wootric is a really cool tool that we are very likely to invest in to aid us in improving our customer journey. It offers microsurveys for CES, CSAT, and NPS. Like others above have said, it's important to gather data on more than one area so we have a wider view of where we can improve: not just a Support agent's performance, but also the product itself. Rallying around effort, loyalty, and satisfaction will get us there :)

  • [Deleted User] Posts: 260 Expert
    edited July 2020

    @Steve Bernstein Do you have any thoughts on the best way to word some of these survey questions after an interaction with a customer?

  • Tanuj Diwan Member Posts: 30 Expert
    edited July 2020

    That's a great point, @Matt Vadala. One question can give an overall answer for the brand; the one place agent CSAT really matters is when it's part of the agents' KPIs.

  • Tanuj Diwan Member Posts: 30 Expert
    edited July 2020

    I agree, @Daryl Colborne. Measuring product feedback, NPS, and the rest are all important, but for service we wanted to understand what will get the better insights. I believe we have to use them according to the objective we have for the survey.

  • Steve Bernstein Member Posts: 133 Expert
    edited July 2020

    Having been doing this for 20+ years, and assuming we're all talking about B2B SaaS, we've yet to find that "effort" is a key driver of the support experience. Admins and end-users expect prompt resolution far more than low effort... @Jeff Breunsbach asked me below to comment on the questions to consider, so I'll give more detail on that lower in this thread.

    Also, be deliberate about the tool you select. Surveys are easy -- there are a ton of tools out there -- but you'll want a platform that can provide more, so you aren't wasting your time trying to extract meaning from your data:

    1. You'll need key-driver analytics that show you what the true drivers of customer sentiment are... remember that just because something is rated poorly doesn't mean it's the best bang for the buck. Don't waste time and energy on things that aren't going to move the needle; use some stats and financial linkage to understand the optimal priorities (see the sketch after this list).
    2. Longitudinally track and trend the same account and contact over time to make sure sentiment is trending in the right direction. You'll also want to combine relationship feedback with transactional feedback to get the complete picture of the health of the journey, the account, and the contact(s).
    3. Provide a closed-loop follow-up system to ensure that you are learning and improving, capturing root cause, and converting detractors into promoters.
    4. Tell you who's engaged and who isn't -- participation rate is an incredibly strong indicator of the health of the relationship, and just focusing on responses probably means you are leaving out the majority.
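    As a rough illustration of point 1, key-driver analysis at its simplest correlates each experience attribute with an overall-sentiment anchor question. A minimal sketch with made-up data and attribute names (a real platform would add regression, significance testing, and the financial linkage mentioned above):

    ```python
    # Minimal key-driver sketch: correlate each experience attribute with an
    # overall-sentiment question to see which attributes track sentiment most.
    # Data and attribute names are illustrative.
    from statistics import correlation  # Python 3.10+

    responses = [
        # each dict is one survey response (0-10 ratings)
        {"timeliness": 9, "expertise": 8, "ease_of_contact": 6, "overall": 8},
        {"timeliness": 4, "expertise": 7, "ease_of_contact": 3, "overall": 4},
        {"timeliness": 8, "expertise": 9, "ease_of_contact": 7, "overall": 9},
        {"timeliness": 5, "expertise": 6, "ease_of_contact": 2, "overall": 3},
        {"timeliness": 7, "expertise": 8, "ease_of_contact": 5, "overall": 7},
    ]

    overall = [r["overall"] for r in responses]
    drivers = {
        attr: correlation([r[attr] for r in responses], overall)
        for attr in ("timeliness", "expertise", "ease_of_contact")
    }

    # Rank attributes by how strongly they track overall sentiment; a poorly
    # rated attribute with a weak correlation is a poor investment candidate.
    for attr, r in sorted(drivers.items(), key=lambda kv: -abs(kv[1])):
        print(f"{attr}: r = {r:+.2f}")
    ```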
  • Steve Bernstein Member Posts: 133 Expert
    edited July 2020

    You know I do, @Jeff Breunsbach! I wrote a reply above with our research on Customer Effort Score (CES), and assuming we're all talking about B2B SaaS, here are the questions we recommend you consider:

    1. Confirm their persona -- since we're talking about Support, you'll want to understand whether the person is an end-user of the app, an administrator, etc. -- and then use that information to drive the right logic so you are only asking questions that are relevant to that person (see the sketch after this list).
    2. The extent to which the case is resolved (fully, partially, not at all) and, for the latter two options, whether the case should be reopened. How appropriate is the solution -- can they implement it, or are there other issues?
    3. Be sure that feedback about the Customer Support Engineer (CSE, or whatever you call those folks) is separable from the issue itself, since the CSE often has little control over the end-to-end experience. So ask about the CSE's handling of the incident, and separately ask about their perception of your company's handling of the case overall. To what extent does the customer feel the issue was resolved when needed?
    4. The tactics -- timeliness (proper sense of urgency), ease of getting in touch with the right person to address the issue, proper use of the customer's time, effectiveness of the communications and status updates, completeness of the solution, expertise, etc.
    5. CRITICAL is to understand how the interaction impacted the relationship: for example, did the interaction make this person more/less/equally likely to recommend you, IF that person is in a situation to be able to recommend your company.
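    To make point 1 concrete, here is a minimal sketch of persona-driven question routing; the personas, question IDs, and wording are hypothetical:

    ```python
    # Sketch of persona-driven survey logic: ask only the questions that are
    # relevant to the respondent. Personas and question IDs are hypothetical.
    QUESTIONS = {
        "resolution_status": "To what extent was your case resolved?",
        "cse_handling": "How well did the support engineer handle your case?",
        "company_handling": "How well did we handle the case overall?",
        "admin_rollout": "Were you able to roll the fix out to your users?",
    }

    PERSONA_FLOWS = {
        "end_user": ["resolution_status", "cse_handling", "company_handling"],
        "administrator": ["resolution_status", "cse_handling",
                          "company_handling", "admin_rollout"],
    }

    def questions_for(persona: str) -> list[str]:
        """Return the question texts this persona should see."""
        return [QUESTIONS[qid] for qid in PERSONA_FLOWS.get(persona, [])]

    print(questions_for("administrator"))
    ```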

    Some “optional” questions that can be insightful:

    1. You’ll want at least 1 free-text comment question so the customer can clarify. Remember that free-text questions are very “expensive” in time to answer and also time for you to digest. You’ll never get to root-cause in a survey so leave some questions for your follow-up process to make best use of everyone’s time.
    2. Did the user go to the community (or other resources) before opening the case?  Why or why not?
    3. What’s the customer’s perception of the product overall? 
    4. How’s the documentation and support website?
    5. Do they need to speak with anyone regarding other issues or questions?  This is a GREAT way to engage the customer proactively, and helps drive customer participation even if they don’t have anything else pressing.

    You MUST consider the rating scale… here’s an article on that:  https://waypointgroup.org/why-a-0-10-scale-is-your-best-option/

    This may feel like a lot of questions, but you'll use logic to ask ONLY the right questions, keep the questionnaire to no more than ~2-3 minutes, and communicate that in your outreach. Give your customer a clear "what's in it for me" to respond: why should they give you the gift of feedback if it's just going to go into a black hole? Communicate and set expectations around give-and-take, and remember that NON-RESPONDERS are telling you something by not participating… ENGAGE them, because we know that silent accounts are up to 14x (!) more likely to churn than accounts that participate in your feedback program.
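    One simple way to act on the non-responder point is to track participation rate per account and flag the silent ones for proactive outreach. A minimal sketch, with made-up data and an assumed cutoff:

    ```python
    # Participation rate per account, flagging "silent" accounts for outreach.
    # Data and the threshold are illustrative.
    invites = {"acme": 12, "globex": 8, "initech": 10}   # surveys sent
    answered = {"acme": 6, "globex": 0, "initech": 1}    # surveys completed

    SILENT_THRESHOLD = 0.15  # assumed cutoff for "disengaged"

    for account, sent in invites.items():
        rate = answered.get(account, 0) / sent
        status = "SILENT -- engage proactively" if rate < SILENT_THRESHOLD else "engaged"
        print(f"{account}: participation {rate:.0%} ({status})")
    ```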

    Lastly, please remember that you'll want more than just a survey. Your technology to automate this should include, at minimum:

    1. A platform that can drive the right workflows and analytics to prioritize the optimal improvements for you, with financial linkage and key-driver analytics... just because something scores low doesn't mean that addressing it will give you the best bang for the buck
    2. Workflow that re-opens a case when necessary (or opens a new case, when appropriate) -- a sketch follows this list
    3. Capture of root-cause information so you can address customer issues at the source (which we consistently find is NOT Product- or Support-related, but often comes from Sales and Marketing setting the wrong expectations!)
    4. A holistic view of the account's sentiment and engagement (i.e. relationship strength)
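    For point 2, the closed-loop rule can be as simple as the sketch below; the helpdesk client and its methods are hypothetical stand-ins for whatever your ticketing tool's API provides:

    ```python
    # Sketch: reopen a case (or trigger a human follow-up) based on the
    # survey response. The helpdesk object and its methods are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class SurveyResponse:
        case_id: str
        resolution: str  # "fully" | "partially" | "not_at_all"
        score: int       # 0-10 overall rating

    def route_response(resp: SurveyResponse, helpdesk) -> None:
        if resp.resolution in ("partially", "not_at_all"):
            # Customer says the issue is still open: put it back in the queue.
            helpdesk.reopen_case(resp.case_id, reason="survey: unresolved")
        elif resp.score <= 6:
            # Resolved but unhappy: route to a human instead of auto-closing.
            helpdesk.create_followup_task(resp.case_id, reason="detractor feedback")

    class FakeHelpdesk:  # stand-in so the sketch runs end to end
        def reopen_case(self, case_id, reason): print("reopen", case_id, reason)
        def create_followup_task(self, case_id, reason): print("task", case_id, reason)

    route_response(SurveyResponse("CS-1042", "partially", 5), FakeHelpdesk())
    ```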

    Hope this helps! I'd L-O-V-E to address any questions or concerns, as I've been doing this work for over 20 years and have learned a few things along the way about how to make all this work while providing ROI on your effort!

  • Tanuj Diwan Member Posts: 30 Expert
    edited July 2020

    @Steve Bernstein Love your opinion, as always. My questions on some of the points you mentioned: how can you get information to the support team about whether the customer checked the community or knowledge base, and about the customer's perception of the product? Is it all through integrations with the right systems, getting this information into the help desk tools?

    Some of the things we need to understand can't come from a small 2-3 minute survey, but they can come from follow-ups.

    How do you prioritize which people to follow up with to dig deeper and find the root cause?

  • Steve Bernstein Member Posts: 133 Expert
    edited July 2020

    Your first question, about data: the community and support technologies really can't tell you whether the customer checked the docs or community first, as there could be other people involved, and/or searches in those areas might not have been about the issue in question. As a result, we include this question (optionally) in the questionnaire; make sure you provide some additional insight behind the answer options, such as "No, the incident was time-critical," "No, I didn't know about the community," and "Yes, but my issue wasn't addressed," etc. You'll want to share all the resulting data with your Support team (another reason why a VoC platform makes more sense than just a survey tool -- you'll want everyone on the same page with insights, not in Excel running pivot tables), and it's best to have them involved in the questionnaire design from the beginning so they are hands-on, showing commitment and engagement internally to drive the right improvements.

    Your second question about follow-ups:

    1. At minimum, follow up with 100% of detractors to address their concerns (even if the follow-up is just to reset expectations). There is gold in detractor sentiment. If you lack the bandwidth to follow up individually, then maybe you're stretching too thin with the wide outreach and should consider only getting feedback from high-value accounts (but you'd be missing out, and you can't extrapolate feedback from high-touch accounts to infer for low-touch).
    2. Once the issue is addressed, you should have permission to ask a few questions about what they experienced exactly, compared to what they expected instead, and where those expectations came from. Record the gap in expectations in your Voice-of-Customer platform so you can shine a light on those gaps.
    3. Once you've done your key-driver analysis (which identifies the improvement areas most likely to deliver high bang for the buck), you'll want to select a sampling of accounts that scored that area low, and some that scored it high. Why did one person experience a gap while another didn't -- what were the differences in expectations, and why did they exist? (A rough sketch of this sampling follows below.)
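    For point 3, the sampling itself is simple once the key-driver analysis has named a top driver. A rough sketch with made-up scores and cutoffs:

    ```python
    # Pick follow-up interview candidates: accounts that scored the top
    # driver low, plus a few that scored it high, for contrast. Data and
    # cutoffs are illustrative.
    import random

    scores = {  # account -> score on the top driver (e.g., timeliness)
        "acme": 3, "globex": 9, "initech": 2, "umbrella": 8,
        "hooli": 4, "stark": 10, "wayne": 5, "wonka": 9,
    }

    low = [a for a, s in scores.items() if s <= 5]
    high = [a for a, s in scores.items() if s >= 8]

    random.seed(7)  # reproducible sampling for the sketch
    sample = (random.sample(low, k=min(2, len(low)))
              + random.sample(high, k=min(2, len(high))))
    print("Interview candidates:", sample)
    ```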

    Does this help clarify? Any additional detail I can provide?
    /Steve