If you have an easy time attributing the business impact of your loyalty program, chances are your measurement is wrong. When it comes to loyalty, there's no easy measure!

Without getting into the deep end of why companies run loyalty programs, I'm going to dive right into the problem: the way program impact is attributed. The methods used most often, and the problems associated with each, are:
| Measure | Looks Like | The Problem |
| --- | --- | --- |
| % Program Contribution | "Our members contribute 60% of our sales, excluding the enrollment transaction." | Contribution is not attribution. It doesn't answer the question of what the program delivers. |
| Member vs. non-member metrics | "Members transact 3.5 times a year, against non-members who transact only 1.8 times a year." | Self-selection bias. Only good customers become members; conversely, those not interested in coming back choose to stay non-members. |
| Redeemer vs. non-redeemer metrics | "Redeemers transact 7.3 times a year, against non-redeemers who transact 3 times a year, clearly showing redemption leads to better frequency." | It doesn't show causality. Perhaps the 7.3 would have happened without the redemption. |
| Campaign lift metrics | "Members who got 20 communications in the year showed better frequency than those who got 3." | Most brands send more (and better) communication to active, high-value members, so there's a bias here too. |
In short, NONE of the above methods attributes impact to a loyalty program. They all show that there is potentially value in the program, but will fall flat in the face of determined cynicism.
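Part of why these naive measures persist is that they are trivial to compute. A minimal sketch of the member vs. non-member comparison, using a toy transactions list (all data here is hypothetical, for illustration only):

```python
from collections import defaultdict

# Toy transactions: (customer_id, is_member). Hypothetical data.
transactions = [
    ("c1", True), ("c1", True), ("c1", True),   # member, 3 visits
    ("c2", True), ("c2", True),                 # member, 2 visits
    ("c3", False),                              # non-member, 1 visit
    ("c4", False), ("c4", False),               # non-member, 2 visits
]

visits = defaultdict(int)
membership = {}
for cust, is_member in transactions:
    visits[cust] += 1
    membership[cust] = is_member

def avg_frequency(member_flag):
    freqs = [v for c, v in visits.items() if membership[c] == member_flag]
    return sum(freqs) / len(freqs)

member_freq = avg_frequency(True)       # 2.5
non_member_freq = avg_frequency(False)  # 1.5
# The gap (2.5 vs. 1.5) looks like program impact, but self-selection
# means it proves nothing about causality.
```

The ease of producing this number is exactly the trap: it measures who chose to enroll, not what the program caused.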
To solve for the problem, you have to think like the opposition, with a view that you will accept no halfway proofs, that you will be equally determined in your cynicism. This means:
- Get as close to proper attribution as possible
- Whilst maintaining sufficient support (sample size)
- Without losing your mind
Let’s focus on the “Good Attribution” end of the spectrum today and why it’s needed. Every program manager who needs to secure budgets for the program will find it hard to defend, because nobody can answer the question: what would have happened if we didn’t have the program?
Self-Selection Bias
If your program actively enrolls only your best customers, how do you attribute better customer metrics to the program? Even if you enroll everyone who wants to become a member, perhaps only customers who already know they will keep coming back choose to enroll. Conversely, if those who choose not to enroll are customers uninterested in coming back, doesn’t that weaken non-member metrics?

The answer is, again, hard to pin down. Self-selection biases are dogged and near impossible to remove; the best one can do is strip out the more obvious effects and then make decisions on reasonably good data.
Setting up Lookalikes
The first problem here is that you need the transaction history of customers who are not members. You need to know everything a member did as a non-member before enrolling. This in itself can be hard for most retailers.

If you do have non-member identification, however, go ahead and create lookalikes. The watch-outs here are:

- Are they truly lookalikes? Establishing that is a hard job: it needs customers with some history before becoming members, compared against others with the same history who chose not to enroll.
- You’ll have to establish a notional “enrollment date” even for non-members to show pre/post behavior, i.e. before this date is Pre, after is Post.
- What constitutes a “lookalike”? Is it frequency? Profile? Value? Ticket size? The outlet they go to? EOSS vs. non-EOSS? The month in which they were acquired? With too many variables your comparison set gets too small. Relax the criteria, and the lookalikes will not really look alike!
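One way to navigate the trade-off above is nearest-neighbour matching on a few pre-period features with a distance tolerance. A sketch, where the profiles, the feature weighting, and the tolerance are all assumptions for illustration:

```python
# Hypothetical pre-enrollment profiles: (pre_frequency, avg_ticket).
members = {"m1": (3.0, 50.0), "m2": (1.0, 120.0)}
non_members = {"n1": (2.9, 52.0), "n2": (1.1, 118.0), "n3": (8.0, 30.0)}

def distance(a, b):
    # Scale ticket size so both dimensions weigh comparably (assumed weighting).
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) / 40.0

MAX_DIST = 0.5  # tolerance: relax it and the matches stop "looking alike"

lookalikes = {}
available = dict(non_members)
for m_id, m_profile in members.items():
    best = min(available, key=lambda n: distance(m_profile, available[n]))
    if distance(m_profile, available[best]) <= MAX_DIST:
        lookalikes[m_id] = best
        del available[best]  # match without replacement

# n3 is left unmatched: no member looks like it within tolerance.
```

Tightening `MAX_DIST` or adding features shrinks the matched base; loosening them weakens the "lookalike" claim, which is exactly the tension described above.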
Once you do set it up and create lookalike segments for sets of members, you can start comparing metrics. The “Pre” enrollment metrics for members vs. lookalikes should be very similar.
Chances are, however, in trying to find lookalikes you’re now concentrating on a very small base of customers.
For every 100 members you start with, plenty will have enrolled on their first transaction, so no Pre/Post analysis is possible there. Others just don’t have enough Pre data to build up a “look” from which to find “lookalikes”! Finally, when you do find lookalikes, some segments will simply not have enough members in them for any meaningful analysis (statistical significance et al.).

Once you establish members and lookalikes, you can evaluate the Pre/Post change for members vs. non-members, i.e.:
| 6-month Frequency | Pre | Post | Change |
| --- | --- | --- | --- |
| Lookalike non-members | 2.87 | 2.10 | -27% |
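This comparison can be read as a simple difference-in-differences: the lookalikes' decline estimates what members would have done without the program. A sketch using the lookalike figures from the table; the member figures here are hypothetical, for illustration only:

```python
# Pre/post 6-month frequency. Lookalike figures from the table above;
# member figures are assumed, for illustration.
lookalike_pre, lookalike_post = 2.87, 2.10
member_pre, member_post = 2.90, 3.20  # hypothetical

lookalike_change = (lookalike_post - lookalike_pre) / lookalike_pre  # ~ -27%
member_change = (member_post - member_pre) / member_pre

# Difference-in-differences: member change net of the background decline
# the lookalikes showed over the same period.
did = member_change - lookalike_change
```

A positive `did` is the number to defend in a budget conversation, since it nets out what "identical" non-members did anyway.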
What this shows is that, despite looking identical prior to membership, members did better frequency in the six months post-membership than non-members did. This is pretty solid ground on which to argue the case for program impact, but alas, even this is not perfect. There’s just no getting around the argument that non-members did not become members precisely because they knew they were not committed to the brand!

It is easy to tie oneself up in knots when it comes to loyalty program impact attribution. As analytics folks, our duty is to give sufficient evidence of impact without confounding the business decision.
To me, sufficient evidence would be a combination of:
A last word: please don’t get swayed by anecdotal experiences as evidence of a program working or not working. You’re a sample size of ONE!