Customer Health Scoring: a topic hugely important to a mature Customer Success practice, and one as “bludgeoned to death” by public discourse as Pepsi vs. Coke. Don’t get me wrong: the resources expounding the importance of leveraging usage data and thoughtful color schemes are enormously valuable, but they only scratch the surface of the challenges of building out a reliable indicator of customer renewal status. Below, we’ll dive into six less-often-discussed aspects of creating a truly insightful customer health score, by way of what to avoid.
If more than 25% of your health measures are manually driven, you’re doing something wrong
It’s undeniable that some component of a reliable customer health score should be driven by aspects of the customer relationship that analytics can’t capture; you could argue it’s one of the core reasons CSMs are invaluable to SaaS enterprises. However, having worked at organizations that relied on anywhere from 50% to 100% of health indicators being driven by the account owner, I can say with nightmare-induced confidence that it’s not the way to run and scale a CS organization.
There are four major reasons for this being an awful idea:
- Usage data is a much better indicator of actual product adoption. You can only gather so much info about product stickiness from monthly check-ins and EBRs.
- There are so many ways to interpret different “gut check” signals. Your CSMs may not be, and often aren’t, using the same frame of reference when deciding what values to attribute to each manual health score.
- Past a certain threshold, there are so many manual scores that it becomes difficult for a CSM to keep them distinct. Your CSMs will spend more time wondering “is this ‘red’ score more of a renewal risk or an implementation risk?” than is actually valuable.
- It’s such a pain to manage. Does the idea of updating five manual scores for a portfolio of 40 customers every Friday afternoon sound pleasant?
If your usage data pipeline isn’t up to snuff, this obviously becomes a very different conversation. But if that’s the case, then reliable customer usage reporting should be your next quarter’s top priority, plain and simple. Otherwise, your CS organization will become nothing more than a team of well-trained tea-leaf readers.
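If you want a quick gut check on that 25% rule of thumb, a minimal sketch like the following can flag a scorecard that leans too heavily on manual input. The measure names and `source` tags are purely illustrative:

```python
# Hypothetical scorecard: each measure is tagged by how its value is produced.
MEASURES = {
    "feature_adoption": "automated",      # derived from usage data
    "support_ticket_trend": "automated",
    "login_rate": "automated",
    "exec_relationship": "manual",        # CSM gut-check
    "renewal_sentiment": "manual",
}

MANUAL_SHARE_LIMIT = 0.25  # the rule of thumb from the heading above

manual_share = sum(1 for src in MEASURES.values() if src == "manual") / len(MEASURES)
if manual_share > MANUAL_SHARE_LIMIT:
    print(f"{manual_share:.0%} of measures are manual -- rebalance toward usage data")
```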
Healthy login rates do not equate to healthy usage
Staying on the topic of usage data, it’s common practice to treat login rates, in some capacity, as a measure of healthy usage. Sure, a login rate of 75% is significantly better than 5%, but what does it actually tell you about the stickiness of your tool in your customer’s organization?
This warrants a much deeper discussion, but in summary: good usage-based health indicators are based on either key differentiating features in your product or critical usage moments in a user’s adoption (e.g., “once a user is able to do x, they’ve really indicated high maturity/stickiness”). The ability to enter a username and password and click ‘Submit’ represents neither of those things.
Tangentially, “seat usage” falters in a similar fashion, assuming “using a seat” means assigning a license to an email address. Even if your pricing model is based purely on the number of seats used, seat usage is completely useless at predicting churn if none of those warm seats are actually using the product in a healthy way.
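To make that concrete, here’s a minimal sketch (the event names and thresholds are hypothetical) of scoring usage against differentiating features and critical adoption moments instead of raw logins:

```python
# Hypothetical per-account event counts pulled from a usage pipeline.
events = {"login": 420, "dashboard_created": 12, "alert_rule_configured": 3, "api_query_run": 57}

# Score only the events that signal real adoption; logins are deliberately excluded.
ADOPTION_SIGNALS = {
    "dashboard_created": 5,      # key differentiating feature
    "alert_rule_configured": 2,  # critical adoption moment
    "api_query_run": 25,         # depth-of-usage signal
}

def usage_health(events: dict[str, int]) -> float:
    """Fraction of adoption signals meeting their healthy threshold (0.0-1.0)."""
    met = sum(events.get(name, 0) >= threshold for name, threshold in ADOPTION_SIGNALS.items())
    return met / len(ADOPTION_SIGNALS)

print(f"usage health: {usage_health(events):.0%}")  # healthy despite ignoring 420 logins
```

The point isn’t the exact thresholds; it’s that those 420 logins contribute nothing here unless they translate into the moments that actually predict stickiness.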
All health measures can be alerts, but not all alerts can be health measures
It’s not unusual to think of your alerting workflows as somehow related to health scores; the signals that warn us about potential renewal risks could be similar to, or exactly the same as, those describing the overall health of the customer. But that’s not to say that all alerts (or rather, all signals that trigger alerts) should impact our health scoring schema.
Let’s consider an example. Reducing time-to-value (TTV) is a tenet of Customer Success, and when there are signals indicating that TTV may be lagging, it’s definitely something a CSM would want to be alerted on. Expecting customers to have built x within y days, and it’s y + 10 days? Sound the alarm! However, imagine this delay is due to a larger-scale effort on the customer’s side to train their users and increase adoption first. The alert is still of value, because it’s a signal all parties should be apprised of, but it’s not indicative of a health concern, because the delay is in service of overall adoption.
Obviously not all edge cases can or should be considered when building out a health schema, but they do show that not every alertable signal warrants sticking a health measure on top of it.
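One way to encode that distinction is to tag each signal separately for alerting and for health impact, so a TTV slip can page the CSM without dragging the score down. A minimal sketch, with hypothetical signal names:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    triggers_alert: bool    # should a CSM be notified?
    affects_health: bool    # should it roll up into the health score?

# Every health measure can alert, but not every alert feeds the score.
SIGNALS = [
    Signal("feature_adoption_drop", triggers_alert=True, affects_health=True),
    Signal("ttv_milestone_overdue", triggers_alert=True, affects_health=False),  # the example above
    Signal("champion_departed", triggers_alert=True, affects_health=True),
]

alertable = [s.name for s in SIGNALS if s.triggers_alert]
health_inputs = [s.name for s in SIGNALS if s.affects_health]
print(f"alerts: {alertable}\nhealth inputs: {health_inputs}")
```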
Be cognizant of measure sprawl
When you begin building or iterating on a health scoring schema, the tendency can be to measure and track every conceivable indicator of customer health, from usage to business health to relationships. But keep in mind the simple math of spreading your scorecard too thin: the more measures that roll up to your overall health, the more likely truly valuable signals will get averaged out and lost. With ten equally weighted measures, for example, a single indicator collapsing from a perfect score to zero only moves the overall score by a tenth.
The exercise of choosing which health measures should contribute to overall health, and by how much, should go as follows:
- List all potential indicators (large and small) of customer health
- Choose 3-4 core indicators that you can say, beyond a reasonable doubt, have strong correlation to renewal/expansion likelihood
- Choose an additional small set of health indicators (aim for no more than 5-6) that also represent some aspect of the customer’s health, but serve as accessory indicators rather than core ones
As for the remaining indicators on your list that weren’t chosen as core or accessory, don’t hesitate to keep them around. The message here is not that those non-core signals are useless, but rather that they shouldn’t roll up to, or impact, the overall health score.
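Putting the core/accessory split together, a rollup might look like the following sketch; the measure names, scores, and 70/30 weighting are assumptions for illustration, not a prescription:

```python
# Hypothetical scores, each normalized to 0-100.
core = {"feature_adoption": 85, "renewal_sentiment": 60, "support_health": 90}        # 3-4 core
accessory = {"login_rate": 40, "nps": 70, "ticket_volume": 80, "exec_engagement": 55}  # <= 5-6
tracked_only = {"webinar_attendance": 20}  # kept around, but never rolls up

CORE_WEIGHT, ACCESSORY_WEIGHT = 0.7, 0.3  # assumption: core dominates the rollup

def rollup(core: dict, accessory: dict) -> float:
    """Weighted average of the core and accessory group averages."""
    core_avg = sum(core.values()) / len(core)
    accessory_avg = sum(accessory.values()) / len(accessory)
    return CORE_WEIGHT * core_avg + ACCESSORY_WEIGHT * accessory_avg

print(f"overall health: {rollup(core, accessory):.0f}/100")
```

If the accessory measures ever start outnumbering or outweighing the core ones, that’s usually a sign of the sprawl described above.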
Do you actually want a separate scorecard for onboarding and maturity?
This is likely the most controversial stance here, but it’s a conversation that needs to be had nonetheless. At first glance, it may seem reasonable to maintain a completely separate health score for two different stages of the customer journey, since the signals one looks for generally differ based on how mature the customer is (or, better put, how far away from renewal they are).
But that last point is a key one – renewal date is the north star for proving a health score’s value. The ultimate value of measuring customer health is predicting what happens at renewal. If the customer ends up churning or contracting, we’d expect to see a lower health score; if they renew and expand their contract, we’d hope our health score returns a glowing grade. So with that in mind, how does tracking customer health under a different schema for the first 2-6 months of a contract aid us here?
The other point worth making: if we did decide to follow this pattern, wouldn’t we want to follow a similar line of thinking for each stage of the customer journey? In other words, where does this separate-schema-per-stage approach end? Most mature CS orgs have at least four stages in their customer journey, and since signals fluctuate all along those stages, consistency would demand the same treatment throughout. As you can see, this gets out of hand quickly and dilutes the entire point of the exercise – a simple, singular view of customer health.
You don’t need to get it right the first time
There’s not much more to this final point than its face value. Customer health scoring is an iterative task by nature, so no matter how accurate your first version is, there will always be improvements to make, since your tool and your customers evolve over time as well. In addition, as you develop a backlog of renewal data, you’ll have more fodder for comparison when validating the accuracy of your health scorecard, allowing more data-driven improvements to be made.
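As a sketch of that validation loop (the historical data here is made up), compare the scores you had assigned at renewal time against what actually happened, and look for separation between the renewed and churned groups:

```python
# Hypothetical backlog: (health score at renewal time, did the account renew?)
history = [(82, True), (91, True), (45, False), (70, True), (38, False), (55, True)]

renewed = [score for score, outcome in history if outcome]
churned = [score for score, outcome in history if not outcome]

avg_renewed = sum(renewed) / len(renewed)
avg_churned = sum(churned) / len(churned)

# A well-calibrated scorecard should show a clear gap between the two averages;
# a narrow gap is a cue to reweight or swap out measures.
print(f"avg score, renewed: {avg_renewed:.0f} | churned: {avg_churned:.0f}")
```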
Simply put, don’t hold yourself to an unachievable standard of accuracy right out of the gate. Time and experience are your friends, so don’t be afraid to leverage them.
---
Josh Levin is a Manager, Customer Success at Honeycomb.io