Churn leading indicators — what the last six weeks really show
We looked at 43,000 cancellations across our design partners. Here's what predicted them — and what didn't.
Dr Helena Kato
Head of data, Prism
In the 12 months to March 2026, design partners on Prism saw 43,214 paid members cancel. We analysed the 42 days leading up to each cancellation to understand what predicted it — and, just as importantly, what didn't.
What predicted churn
- Check-in cadence dropping 40%+ vs the member's 12-week baseline (AUC 0.81)
- Two or more missed booked classes in 14 days (AUC 0.74)
- App session count falling below one per week (AUC 0.71)
- Any direct-debit failure in the prior 30 days (AUC 0.68)
- Change in preferred class time (proxy for life-event) (AUC 0.62)
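The strongest of these signals can be computed from raw check-in timestamps alone. As a minimal sketch — the function name, data shape, and two-week "recent" window are assumptions for illustration, not Prism's actual pipeline — the cadence-drop flag compares a member's recent weekly check-in rate against their 12-week baseline:

```python
from datetime import date, timedelta

def cadence_drop_flag(checkins: list[date], as_of: date,
                      baseline_weeks: int = 12, recent_weeks: int = 2,
                      drop_threshold: float = 0.40) -> bool:
    """True if the member's recent weekly check-in cadence has fallen
    40%+ versus their 12-week baseline (the article's strongest
    signal, AUC 0.81). Window sizes here are illustrative."""
    baseline_start = as_of - timedelta(weeks=baseline_weeks)
    recent_start = as_of - timedelta(weeks=recent_weeks)
    baseline = [d for d in checkins if baseline_start <= d < recent_start]
    recent = [d for d in checkins if recent_start <= d <= as_of]
    # Per-week rates; the baseline window excludes the recent window
    baseline_rate = len(baseline) / (baseline_weeks - recent_weeks)
    recent_rate = len(recent) / recent_weeks
    if baseline_rate == 0:
        return False  # no baseline to compare against
    return (baseline_rate - recent_rate) / baseline_rate >= drop_threshold
```

The other flags (missed bookings, app sessions, direct-debit failures) follow the same shape: a boolean per member per day, computed over a rolling window.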
What didn't
- Tenure — surprisingly, it was a weak predictor past month 6
- Demographic fields — age, gender, postcode all had negligible predictive power
- Class attendance volume — the absolute number matters much less than the change
- Referral source — acquisition channel didn't predict retention past month 3
The practical implication
A simple two-signal model — check-in drop plus one missed class — catches 62% of churn events 28 days before they happen. Adding the app-session signal takes it to 71%; adding the direct-debit-failure signal takes it to 77%. Those numbers make intervention possible.
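The article gives recall figures but not the exact combination rule, so the sketch below is one plausible reading: the two-signal pair acts as the core trigger, and each optional signal widens it. Signal names are hypothetical booleans computed upstream:

```python
def flag_at_risk(signals: dict[str, bool],
                 use_app: bool = True, use_dd: bool = True) -> bool:
    """Layered rule model: the check-in-drop + missed-class core,
    optionally widened by the app-session and direct-debit signals.
    How the signals actually combine is an assumption here."""
    core = signals["cadence_drop_40pct"] and signals["missed_2_classes"]
    # Extra signals only matter alongside the cadence drop in this sketch
    extra = ((use_app and signals.get("low_app_sessions", False)) or
             (use_dd and signals.get("dd_failure", False)))
    return core or (signals["cadence_drop_40pct"] and extra)
```

Because each layer only ever adds members to the flagged set, recall can only go up as signals are added — which matches the 62% → 71% → 77% progression, at the cost of a larger outreach list.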
The harder question is what to do with the signal. A generic 'we miss you' email is measurably worse than silence. A personalised outreach with a specific alternative class recommendation — based on the member's historical preferences — saves 38% of the at-risk cohort. Targeting matters more than volume.
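The "specific alternative class recommendation" step can be very simple. A minimal sketch, assuming attendance history is available as (class type, time slot) pairs — the data shape and function are illustrative, not Prism's recommender:

```python
from collections import Counter

def recommend_class(history: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the member's most-attended (class_type, time_slot)
    combination, as the anchor for a personalised outreach message."""
    return Counter(history).most_common(1)[0][0]
```

Even this frequency-based anchor gives the outreach something concrete to name — which is the property the 38% save rate depends on, versus a generic message.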