Compensation metrics for Tech-Touch CS team

Jenny Leman Member Posts: 15 Thought Leader
edited August 2023 in CS Org Conversations

Any thoughts on how you'd measure and comp a CSM team that runs a 1:many, very tech-driven model for low-ADS clients? I want to drive and incentivize the right behaviors, and this is a new model for us to transition to. Thanks for your input!

Comments

  • Will Pagden Member Posts: 99 Expert
    edited July 2020

    @Katrina Coakley - thought you may have some input to give here? 

  • Ed Powers Member Posts: 180 Expert
    edited July 2020

    Contrary to popular belief, you can't measure people based on outcomes. If the results are normally distributed (and they almost always are), then according to the Central Limit Theorem at least three independent variables are combining to produce the outcome: the CSM plus at least two others. So just because you can arrange your data by person doesn't mean the results are caused by that person. And that means pay-for-performance incentives are based more on chance than on merit.
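
    For illustration, here's a rough simulation (every number and name below is made up): each CSM has identical skill and renewals are driven entirely by factors outside their control, yet a "best" and a "worst" performer still emerge every quarter, purely by chance.

    ```python
    # Hypothetical sketch: every CSM has the SAME true renewal probability,
    # so any ranking of them by results reflects luck, not merit.
    import random

    random.seed(42)
    num_csms, accounts_per_csm, quarters = 8, 60, 8

    for quarter in range(1, quarters + 1):
        results = []
        for csm in range(num_csms):
            # Identical underlying probability for everyone; variation comes
            # only from product fit, budget cycles, and plain luck.
            renewals = sum(random.random() < 0.85 for _ in range(accounts_per_csm))
            results.append((csm, renewals / accounts_per_csm))
        results.sort(key=lambda r: r[1], reverse=True)
        best, worst = results[0], results[-1]
        print(f"Q{quarter}: 'best' CSM #{best[0]} ({best[1]:.0%}), "
              f"'worst' CSM #{worst[0]} ({worst[1]:.0%})")
    ```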

    Also contrary to popular belief, pay-for-performance systems undermine motivation, which makes them counterproductive. Self-Determination Theory has shown that extrinsic motivation makes employees less engaged, persistent, curious, proactive, creative, flexible, happy, and teamwork-oriented. In fact, pay-for-performance has only been shown to work when employees are doing simple, repetitive tasks, short-term rather than long-term gains are desired, volume is more important than quality, and adequate safeguards are in place to prevent cheating and abuse, which otherwise run rampant.

    My suggestion is to measure the outcome, study your process, and engage your team in continuous improvement. In a tech-touch environment, CSMs will have little individual impact on results, but there may be times when collectively they do, perhaps during (abbreviated) onboarding or when managing customer "moments of truth." Use regression analysis to estimate the cause-and-effect impact of key factors, and improve your processes to improve the results. Track progress and consider paying a team bonus on a sustainable behavioral change, assuming something they do contributes to it. Consider using a spot bonus to reward and recognize outstanding individual contributions, but don't make them contingent (see above). Unexpected appreciation of the value a person contributes does not impair intrinsic motivation.
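
    As a rough sketch of what that regression could look like (the column names and data below are hypothetical placeholders, and it assumes per-account data you can export plus the statsmodels package; it's an illustration, not a prescribed implementation):

    ```python
    # Hypothetical logistic regression of renewal on a few candidate factors.
    # All data here is simulated; swap in your own per-account export.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    onboarding_done = rng.integers(0, 2, n)    # finished abbreviated onboarding?
    logins_per_week = rng.poisson(3, n)        # product-usage signal
    tickets_open    = rng.poisson(1, n)        # friction signal

    # Fabricated outcome purely for illustration: renewal leans on onboarding and usage.
    logit = -1.0 + 1.2 * onboarding_done + 0.3 * logins_per_week - 0.5 * tickets_open
    renewed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([onboarding_done, logins_per_week, tickets_open]))
    model = sm.Logit(renewed, X).fit(disp=False)
    print(model.summary(xname=["const", "onboarding_done", "logins_per_week", "tickets_open"]))
    ```

    With observational data the coefficients only show association, so treat them as pointers to which process steps deserve a closer look rather than proof of cause and effect.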

  • Jenny Leman Member Posts: 15 Thought Leader
    edited July 2020

    Thanks for your time and input here, Ed. I like the idea of focusing on the important moments in the lifecycle and working backward from there to see the cause and effect. It will take some work and time to understand, but it's important to wrap some data around it in order to tell the right story and understand the factors correctly.

  • Russell Bourne Member Posts: 61 Expert
    edited July 2020

    I 100% recommend listening to @Ed Powers on this; he's the master of explaining how comp plans can drive behavior that undermines the plan.

    Jenny, do you have a higher-touch (like Mid-market or Strategic) customer segment too? I had great outcomes when I assigned seasoned CSMs to high-touch customers and a junior "bench" of entry-level CSMs to the tech-touch customers. The junior team wasn't on a quota, so I was able to create a KSO-based bonus for them that allowed me to be nimble with what we measured in the KSOs.

    Those junior reps learned a lot in a short time because of the volume, and it really groomed them to earn promotions as our team grew. An upward career arc within the team helped keep turnover low for me as a leader. Happy to dive into more details if you need them.

  • Jeff Wayman Member Posts: 4 Navigator
    edited October 2020
    I love this, @Ed Powers. Do you have any recommended reading someone could use to dive into this?

    I've encountered what you are describing firsthand, arriving at the same conclusion, only without the proper way to articulate it.

    Specifically, where I've encountered this is in trying to show direct causation between a customer taking training or consuming some piece of educational content and that directly impacting renewal or expansion. I think it's easy to draw a correlation if you have user (customer)-identifiable information (which we don't always have). However, I have pulled back on trying to say x caused y, which is what tends to be desired during an interview or for a performance review.

    I know that we can show educational content consumption as a factor, and improve on various facets of how the content performs, but I truly believe it's near impossible to ever say, "Yes, because a customer consumed x content, we know they purchased." Unfortunately, I don't think this is always what people want to hear, even if you're leading a part of the org that's treated as a cost center and doesn't generate P&L.
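
    Here's a toy illustration of that trap (everything below is simulated, not real data): a hidden "engagement" trait drives both content consumption and renewal, so the two correlate even though the training has zero causal effect in this made-up dataset.

    ```python
    # Simulated confounding: an unobserved "engagement" trait causes both
    # training consumption and renewal, producing a misleading raw gap.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 2000
    engagement = rng.random(n)                        # unobserved customer trait
    took_training = rng.random(n) < engagement        # engaged customers train more...
    renewed = rng.random(n) < 0.5 + 0.4 * engagement  # ...and also renew more

    print(f"Renewal rate, took training:    {renewed[took_training].mean():.1%}")
    print(f"Renewal rate, skipped training: {renewed[~took_training].mean():.1%}")
    # The gap looks like "training drives renewal", yet training does nothing here.
    ```
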
  • Ed Powers Member Posts: 180 Expert
    edited October 2020
    Sure, @Jeff Wayman. One that made a major impact on my life is W. Edwards Deming's Out of the Crisis, a seminal work. Deming is revered by Quality professionals around the world, and he got my head spinning about the dangers of "pay for results" many years ago.

    Self-Determination Theory by Richard M. Ryan and Edward L. Deci is a terrific academic work that summarizes nearly 40 years of their research. Highly recommended. Daniel Pink's excellent TED Talk on motivation refers extensively to their work.

    I also gave a presentation at our Colorado Customer Success Meetup last year called "Rethinking CSM Incentives," applying all of this (including Deming's Red Bead Experiment) to Customer Success. The discussion at the end is really good.

    As you point out, quantifying the impact and economic value of training is a big challenge. Intuitively we know that it helps, but isolating the contribution of any individual factor in a complex system takes designed experiments (e.g., A/B testing). Circumstances rarely permit that, but a well-constructed pilot test can demonstrate the impact.
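
    A bare-bones sketch of how such a pilot could be read out (the group sizes and renewal counts below are invented for illustration; it assumes you can randomly hold some accounts out of the new content):

    ```python
    # Two-proportion z-test comparing renewal rates for pilot vs. holdout accounts.
    from statistics import NormalDist

    def renewal_lift(renewals_pilot, n_pilot, renewals_holdout, n_holdout):
        """Return the observed lift in renewal rate and a two-sided p-value."""
        p1, p2 = renewals_pilot / n_pilot, renewals_holdout / n_holdout
        pooled = (renewals_pilot + renewals_holdout) / (n_pilot + n_holdout)
        se = (pooled * (1 - pooled) * (1 / n_pilot + 1 / n_holdout)) ** 0.5
        z = (p1 - p2) / se
        return p1 - p2, 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical pilot: 400 accounts got the training content, 400 did not.
    lift, p_value = renewal_lift(332, 400, 304, 400)
    print(f"Observed lift in renewal rate: {lift:+.1%} (p = {p_value:.3f})")
    ```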