How do you measure the quality of support agents?

Ido Barnoam
edited September 2020 in Metrics & Analytics
Hi everyone 

Our support team currently uses KPIs that are pretty standard for support teams. Namely:
  • 1st response time
  • 2nd+ response times
  • CSAT score from customers
We also monitor the number of tickets each agent handles, although it's not a KPI we measure performance against.

We are a very technical company, and some support tickets take a while to solve. Since our engineers have the freedom to go anywhere and talk to anyone in the company, we see cases where an engineer works on one ticket for a few days, while other tickets take only a few minutes to handle.

This can skew an agent's apparent performance in terms of the number of tickets they handle in a given time. Because of that, we feel it might be beneficial to also measure the effort, or quality, put into a support response.

I've tried looking around for KPIs to measure that but didn't find anything useful.

I'm wondering: do you measure the quality or effort put into a support ticket? If so, how are you measuring it?

Thanks!


Comments

  • Sunil Nair
    edited September 2020
    So how do you tell good from great? One idea might be to measure the amount of change an agent can effect - are they able to make enhancement recommendations to engineering? And you could measure the number of recommendations that make it through the validation cycle for inclusion in roadmap.
  • Ido Barnoam
    edited September 2020
    Thanks, Sunil!

    That's a great direction to explore. We'll look into it.
  • Star Hofer
    edited September 2020
    Hi Ido, 

    One idea could be to create a QA rubric for your support tickets, where a manager scores a selection of tickets per rep each week or month. Rubrics are also a great way to give quality feedback back to the rep. You could use the QA score to assess how well you are delivering on your commitments to a customer from a service-quality perspective.

    I have used the tool MaestroQA to help facilitate the process.
  • Ido Barnoam
    edited September 2020
    Thanks, @Star Hofer!
    We are currently investigating this tool based on your response.

    Much appreciated.
  • Ed Powers
    edited September 2020
    Hi @Ido Barnoam--

    At one time I ran the Quality function at a BPO contact center with about 3,200 agents doing customer support for B2B and B2C clients. My team evaluated quality by reviewing phone and screen recordings and scoring the interactions against set criteria, including how the agent opened, managed and closed the communications, how they used their resources, how they solved the problem, and how they treated the customer throughout. We would then coach the agents, reviewing the recordings and ticket records, describing how they could do better next time. We would track and report statistics to our clients and then execute continuous improvement projects to improve performance overall. I've used this approach subsequently when I've run Customer Support operations at software companies, the only change being today's much greater volumes of e-mail and chat vs. phone. 

    Some recommendations: 
    • First Contact Resolution (FCR)--this metric has the most significant impact on customer satisfaction and retention, and it should be at the top of your list. Customers want their problems solved quickly, without being transferred to someone else and without having to contact you again. 
    • Volume--this is obvious, but you must also track it by arrival time (day and time) and severity. 
    • Service Level (SL)--this is typically stated as a % time to resolution (e.g. 80% closed in 1 hour or less) by severity, or stated as a response time, such as Average Speed of Answer (ASA, e.g. 80% of calls answered in 20 seconds or less). This is a key customer specification, and customers are often willing to pay more for faster response times (e.g. Bronze, Silver, Gold support) given levels of severity or urgency. 
    • Handle Time (HT)--this is the time it takes to close a ticket and includes after-contact documentation work. As you note, handle times are exponentially distributed with most tickets being quickly resolved while others take much longer because of greater complexity. This metric is critical, however, because it drives your staffing model: the faster you turn tickets, the higher your Service Level and the fewer staff (and less cost) you need. In large contact centers, workforce management software uses models such as Erlang C to staff shifts to meet SLs given HTs and volumes by day and hour of operations. You also must be MANAGING HANDLE TIMES with a queue manager or supervisor to provide technical support and escalate, as needed, continuously. If this doesn't happen, tickets will start stacking up quickly, which blows your SL and FCR, lowers CSAT/NPS and increases your costs. 
    • Closure Rate--one way or the other, all tickets must be closed within a set time period by each tier, and again, this must be actively managed. 
    • Online Rate--support reps must spend a minimum % of their time live, online, answering chats, calls and emails. This is typically 70%-80% and it must be managed to meet SLs. Sometimes conformance to staffing schedule is used instead. 
    • Quality--as described above, we used a standard form and a 100-point scale to grade the interaction. For long HT tickets, conformance to the customer update schedule (both automatic and manual) is also key because it manages customer expectations and keeps repeat contacts down. 
    • CSAT--customer feedback post transaction. I DON'T RECOMMEND USING NPS! Contrary to the hype, it's useless for service transactions. Better to ask specific questions and use sampling methods to drive your continuous improvement efforts.  
    • Avoidable Rate--this is an essential metric that drives continuous improvement: the best way to address a problem is to prevent it from happening in the first place. A great example is a "How do I..." question which is a training problem, not a support issue. These types of tickets are avoidable by doing things better upstream. That means you must characterize the nature of your tickets and capture the % that can be prevented. 
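    The Erlang C staffing calculation mentioned under Handle Time can be sketched in a few lines of Python. This is a minimal illustration, not workforce-management software: the function names are mine, and the "80% in 20 seconds" figures in the usage note are just the example target from the Service Level bullet. Offered traffic in Erlangs is arrival rate times average handle time, and the staffing search simply increments agents until the service-level target is met.

```python
import math

def erlang_c_wait_prob(traffic_erlangs, agents):
    """Probability an arriving contact has to wait (Erlang C formula)."""
    a, n = traffic_erlangs, agents
    # Sum of a^k / k! for k = 0 .. n-1
    s = sum(a**k / math.factorial(k) for k in range(n))
    top = (a**n / math.factorial(n)) * (n / (n - a))
    return top / (s + top)

def agents_needed(contacts_per_hour, avg_handle_minutes,
                  target_seconds, target_fraction):
    """Smallest agent count meeting a service level such as
    'target_fraction of contacts answered within target_seconds'."""
    aht_hours = avg_handle_minutes / 60.0
    a = contacts_per_hour * aht_hours      # offered traffic in Erlangs
    n = math.ceil(a) + 1                   # queue is only stable when n > a
    while True:
        pw = erlang_c_wait_prob(a, n)
        # Fraction answered within target_seconds (standard Erlang C SL formula)
        sl = 1 - pw * math.exp(-(n - a) * (target_seconds / 3600.0) / aht_hours)
        if sl >= target_fraction:
            return n
        n += 1
```

    For example, `agents_needed(100, 6.0, 20, 0.80)` asks how many agents are needed to answer 80% of contacts within 20 seconds, given 100 contacts per hour at a 6-minute average handle time (10 Erlangs of offered traffic). As Ed notes, the answer is driven by handle time: shave the AHT and the required headcount drops.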
    Keep in mind these are PROCESS metrics, not individual KPIs--I've found using statistical analysis that 94% of the variation in results is due to factors outside the agent's control. Managers only need to address individuals when their performance is consistently +/-2 standard deviations from the mean. The rest of the time, the manager should be leading the team on continuous improvement projects.
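    The "+/-2 standard deviations" screen above can be sketched as a simple control-limit check. This is an illustrative sketch only (the function name, data shape, and threshold default are mine): agents inside the limits are treated as common-cause process variation, and only those consistently outside them warrant individual attention.

```python
from statistics import mean, stdev

def flag_outliers(scores_by_agent, sigmas=2.0):
    """Return agents whose average metric (QA score, handle time, etc.)
    falls outside mean +/- `sigmas` standard deviations of the team.
    Expects a dict mapping agent name -> list of per-ticket values."""
    avgs = {agent: mean(vals) for agent, vals in scores_by_agent.items()}
    m = mean(avgs.values())
    s = stdev(avgs.values())
    lo, hi = m - sigmas * s, m + sigmas * s
    # Keep only agents outside the control limits
    return {agent: avg for agent, avg in avgs.items() if not (lo <= avg <= hi)}
```

    Everything the function does not flag is, per Ed's point, a process problem to be addressed with continuous improvement rather than individual coaching.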

    Hope this manifesto helps. Feel free to contact me to talk more.