Summary of Product
- User-level LTV predictions up to 1 year leveraging historical attribution, in-app revenue and engagement data
- User-level LTV predictions (day 30, 60, 90, 180 and 365) updated daily for every user from day 0
- Churn predictions at the user-level (churn definition supplied by client)
- Conversion predictions at the user-level (probability user will pay within the next 30 days)
- IAP, Ads, Subscription, eCommerce models
- Models project expected revenue from non-paying users, which often represents a significant portion of future revenue
How can I use it?
- Setting CPI bids for all ad networks
- Campaign ad network optimization
- Setting channel-level budgets
- Financial forecasting
There are two main ways to assess the performance of an LTV model: bias and user-level error. Accuracy can be measured in many different ways, and we're happy to engage with you further on how to analyze it, but we recommend starting with the following:
Bias: shows whether predictions are over or under on average. To calculate bias, simply aggregate the mean pLTV and compare it to the actual ARPI for a cohort. Typically we look at bias by platform to ensure both iOS and Android cohorts are correctly captured, as well as for major ad networks with significant volume for that app. To get an accurate estimate of bias it's also important not to use small cohorts, whose errors are dominated by random variation at the user level. It may seem like 1,000 users is a "large" cohort, as our minds are naturally trained to think of data in terms of normal distributions, but for most apps a cohort of 1,000 means the LTV ends up depending on the roughly 10-20 users who actually make purchases.
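The bias check above can be sketched in a few lines of Python. Column names and revenue figures here are illustrative, not from the product:

```python
import pandas as pd

# Hypothetical user-level data: one row per user with a predicted
# LTV (pLTV) and the revenue actually observed over the same horizon.
users = pd.DataFrame({
    "platform": ["ios", "ios", "android", "android"],
    "pltv": [4.0, 0.5, 2.0, 0.1],
    "actual_revenue": [5.0, 0.0, 1.5, 0.0],
})

# Bias per platform cohort: mean predicted LTV vs actual ARPI.
cohorts = users.groupby("platform").agg(
    mean_pltv=("pltv", "mean"),
    arpi=("actual_revenue", "mean"),
    n_users=("pltv", "size"),
)
# Positive means the model over-predicts, negative means it under-predicts.
cohorts["bias_pct"] = (cohorts["mean_pltv"] / cohorts["arpi"] - 1) * 100
print(cohorts)
```

The same groupby key can be swapped for ad network (or platform x network) to reproduce the per-channel bias checks described above, keeping the small-cohort caveat in mind.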
User-level error: captures how well the model predicts individual users' LTVs, and therefore small cohorts' LTVs. Here are two methods of viewing user-level performance:
Conversion accuracy - predicted conversion is a subset of LTV with less random variation than the dollar amount, so it shows how well the model distinguishes potential payers. A good metric for conversion accuracy is the ROC curve, which measures the true positive rate vs the false positive rate at different thresholds, and the corresponding AUC. A "perfect" model would give an AUC of 1, while simply assuming a constant conversion probability would give an AUC of 0.5. Typically we want to see an AUC of 0.7 or above for predicted conversion rate.
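AUC can be computed with any standard ML library; as a sketch, here is a self-contained version based on the Mann-Whitney U statistic, which is equivalent to the area under the ROC curve (the labels and scores are made-up examples):

```python
def roc_auc(labels, scores):
    """AUC = probability that a randomly chosen payer (label 1) is
    scored above a randomly chosen non-payer (label 0)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # Find the run of tied scores and give them their average rank.
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_rank_sum = sum(r for r, y in zip(ranks, labels) if y == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical predicted conversion probabilities vs actual payer labels.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A constant score for every user yields 0.5, matching the "no information" baseline described above.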
MSE (mean squared error). This captures the error in the full dollar amount, but MSE (or RMSE, its square root) gives a number that isn't very interpretable on its own. It is only really useful with another model as a benchmark; otherwise we recommend limiting the analysis to bias and AUC for conversions.
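To illustrate the benchmark point: RMSE values only become meaningful when two models are compared on the same users. The predictions below are invented for illustration:

```python
import math

# Actual day-N revenue for a handful of hypothetical users.
actual  = [0.0, 0.0, 12.0, 1.5]
model_a = [0.2, 0.1, 8.0, 1.0]  # hypothetical pLTV predictions
model_b = [0.0, 0.0, 3.0, 0.5]  # e.g. a naive baseline model

def rmse(pred, true):
    """Root mean squared error, in the same dollar units as the revenue."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# Neither number is interpretable alone, but the comparison is:
print(rmse(model_a, actual), rmse(model_b, actual))
```

Here model A beats the baseline, which is the only conclusion RMSE supports; the absolute values depend heavily on a few large payers.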
Because AlgoLift makes a prediction each day for each user, the analysis above should be restricted to one specific "age" (measured in days after install), otherwise the same users will be counted multiple times. We recommend running the analysis separately at D1, D3 and D7 to see how much error decreases over those three horizons.
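A minimal sketch of that filtering step, assuming a hypothetical prediction log with one row per user per prediction day:

```python
import pandas as pd

# Hypothetical daily prediction log: a fresh prediction is made for
# every user every day, so users repeat across ages.
preds = pd.DataFrame({
    "user_id":  [1, 1, 1, 2, 2, 3],
    "age_days": [1, 3, 7, 1, 3, 1],
    "pltv":     [2.0, 2.4, 2.2, 0.1, 0.0, 0.6],
})

# Evaluate at a single prediction age so each user contributes
# exactly one row; repeat per age to compare D1 vs D3 vs D7 error.
for age in (1, 3, 7):
    snapshot = preds[preds["age_days"] == age]
    assert snapshot["user_id"].is_unique  # no repeated users
    print(age, len(snapshot))
```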
An accurate LTV model is essential, but to get the most value from it, the model must also solve an advertiser's core use cases in an automated way.
AlgoLift leverages its user-level model for:
- Campaign bid / budget automation (on Facebook, Google, Apple Search, Unity, Ironsource, Vungle and Applovin)
- Optimal monthly channel budget automation
- iOS 14.5 measurement