Performance insights for an app user retention campaign

Why did our retention campaign results fall short of our expectations, reactivating fewer users than predicted?

Problem statement

Our app exists so members of the loyalty program can easily view all the offers available to them and activate those offers before they shop. Some offers are available on a regular basis (a new offer each week), while other offers are triggered by the member's behaviour.

While the app is a convenient way to access value from the program, some members haven't built a habit of using it, and need to be reminded that they have the app. As part of our marketing strategy, we run retention campaigns every few months, to bring these 'lapsed' users back and get them activating offers.

Background


Our loyalty app was two years old and we had just run a retention campaign. Users were offered bonus points if they came back into the app to activate the offer and then shopped (to successfully earn the points).


However, compared to previous campaigns of the same nature, this campaign did not quite meet the performance we'd been hoping for. Lower customer engagement meant we did not end up spending the budget we'd allocated to it (i.e. we didn't end up rewarding as many points as we had forecast).


Given our loyalty program exists to reward customers, we wanted to understand the reasons behind this lower than expected performance. What could we learn from this campaign, and how could we improve the next one?

Approach

(1) Understand channel performance at a high level and determine scope

The campaign used our owned channels: email and push notifications. I ran high-level diagnostics on both channels to understand how many customers had interacted with email (opened, clicked) and how many with push (tapped to come into the app).
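As a rough illustration, these diagnostics boil down to simple funnel counts and rates per channel. A minimal sketch in pandas, assuming a hypothetical extract with one row per targeted member (the column names are illustrative, not our actual schema):

```python
import pandas as pd

# Hypothetical campaign-interaction extract: one row per targeted member.
interactions = pd.DataFrame({
    "member_id": [1, 2, 3, 4, 5, 6],
    "channel":   ["email", "email", "email", "push", "push", "email"],
    "opened":    [1, 1, 0, 0, 0, 1],   # email opens
    "clicked":   [1, 0, 0, 0, 0, 0],   # email click-throughs
    "tapped":    [0, 0, 0, 1, 0, 0],   # push taps into the app
})

# High-level funnel counts and rates per channel
summary = interactions.groupby("channel").agg(
    targeted=("member_id", "nunique"),
    opened=("opened", "sum"),
    clicked=("clicked", "sum"),
    tapped=("tapped", "sum"),
)
summary["open_rate"] = summary["opened"] / summary["targeted"]
summary["click_rate"] = summary["clicked"] / summary["targeted"]
summary["tap_rate"] = summary["tapped"] / summary["targeted"]
print(summary)
```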

Given the short time-frame we had for the analysis, and the much higher volume of users brought in via email (email had been our original channel, so many members were used to hearing from us there), I focused the rest of the analysis on customers interacting with email.


(2) Identify hypotheses to guide the analysis

Next, I formed a few hypotheses to test against the data. This involved drawing on my own experience of being involved in past campaigns, as well as input from the rest of our team: marketers, our data scientist and our BI analyst.

We knew that the three past campaigns had not varied in terms of structure:


Our expected performance for this campaign had been based on the previous two campaigns. So what could be driving the lower than expected results? 

We discussed some hypotheses. Could it be:

(3) Choose one hypothesis to deep dive into

We had learnt from our acquisition campaigns that audience changes over time were leading to a softening acquisition rate, so I decided to dive into the “change in audience base” hypothesis first. 

Breaking down the starting audience by one key segmentation the business had pre-defined (known to capture different types of rewards members and rewards behaviours), I realised that while the starting audience skewed towards less engaged members, the customers who responded skewed towards already more engaged members.
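As a rough sketch of that comparison (the segment labels and the 'responded' flag below are hypothetical stand-ins for the business's pre-defined segmentation and our actual response definition):

```python
import pandas as pd

# Hypothetical campaign audience: one row per targeted member, with their
# pre-defined engagement segment and whether they activated the bonus offer.
audience = pd.DataFrame({
    "member_id": range(1, 11),
    "segment":   ["more engaged"] * 3 + ["moderately engaged"] * 3 + ["less engaged"] * 4,
    "responded": [1, 1, 0, 1, 0, 0, 1, 0, 0, 0],
})

# Segment mix of the full starting audience vs the members who responded
starting_mix = audience["segment"].value_counts(normalize=True)
responder_mix = (audience.loc[audience["responded"] == 1, "segment"]
                 .value_counts(normalize=True))

comparison = pd.DataFrame({
    "starting_audience": starting_mix,
    "responders": responder_mix,
}).fillna(0)
print(comparison)  # responders skew towards the more engaged segments
```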

Comparing this same breakdown against the prior two retention campaigns showed that the campaign audience was gradually being made up of a larger share of less engaged members.
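Extending the same comparison across campaigns only requires a campaign label and a pivot. A rough sketch, with made-up rows standing in for the real audience tables:

```python
import pandas as pd

# Hypothetical per-campaign audience rows; segments are placeholder labels.
audiences = pd.DataFrame({
    "campaign": ["campaign_1"] * 4 + ["campaign_2"] * 4 + ["campaign_3"] * 4,
    "segment":  ["less engaged", "less engaged", "more engaged", "more engaged",
                 "less engaged", "less engaged", "less engaged", "more engaged",
                 "less engaged", "less engaged", "less engaged", "less engaged"],
})

# Share of each segment in the starting audience, per campaign
mix_by_campaign = (audiences.groupby("campaign")["segment"]
                   .value_counts(normalize=True)
                   .unstack(fill_value=0))
print(mix_by_campaign)  # the 'less engaged' share grows campaign over campaign
```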

(Another point supporting this argument was that another rewards team had recently released a new technology that had improved the user experience and, as a result, was naturally doing a better job of engaging lapsing users.)

From prior analysis we knew it was generally harder to engage this particular segment of members, for a number of reasons such as:

Outcome

Strong evidence of a changing audience base resulted in the recommendation to begin a test-and-learn program.

Based on the quantitative evidence of a changing audience, I recommended we review the campaign design. It was clear that performance was decreasing because the current design, unchanged for the last three executions, was not sufficient to engage the growing segment of less loyal members.

Our team brainstormed ways we could update the campaign, and decided to go with a 'test-and-learn' approach. The first test was to change the body copy of the email to give more information on why the offers are valuable.

The next campaign was set up to A/B test two email versions, the existing copy and the updated copy, so we could isolate whether any change in performance was indeed due to the copy change, or whether it had no effect and we needed to test something else.
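For completeness, one common way to read out a split like this is a two-proportion z-test on the activation rate per variant; a minimal sketch with placeholder counts (not real campaign figures):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Placeholder results: how many members received each email variant,
# and how many went on to activate the offer.
activated = np.array([420, 465])      # existing copy, updated copy
sent      = np.array([10000, 10000])  # members emailed per variant

# Two-proportion z-test: did the updated copy change the activation rate?
stat, p_value = proportions_ztest(count=activated, nobs=sent)
print("activation rates:", activated / sent)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```

If the difference is significant, the updated copy becomes the new baseline; if not, we move on to testing a different element of the campaign.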