LTV FAQ

Summary of Product

  • User-level LTV predictions up to 1 year leveraging historical attribution, in-app revenue and engagement data
  • User-level LTV predictions (day 30, 60, 90, 180 and 365) updated daily for every user from day 0 (an example record is sketched after this list)
  • Churn predictions at the user-level (churn definition supplied by client)
  • Conversion predictions at the user-level (probability user will pay within the next 30 days)
  • IAP, Ads, Subscription, eCommerce models
  • Models project expected revenue from non-paying users, which often represents a significant portion of future revenue
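
To make the deliverable concrete, here is a hypothetical sketch of a single user's daily record. The field names and values are illustrative only, not AlgoLift's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UserProjection:
    """One user's daily-updated projection record (hypothetical schema)."""
    user_id: str          # non-PII identifier supplied by the client
    install_date: date
    as_of: date           # date the projection was generated
    pltv_d30: float       # predicted cumulative revenue at day 30
    pltv_d60: float
    pltv_d90: float
    pltv_d180: float
    pltv_d365: float
    p_churn: float        # probability of churn (client-defined churn event)
    p_convert_30d: float  # probability of a purchase in the next 30 days

example = UserProjection(
    user_id="u_123", install_date=date(2022, 1, 5), as_of=date(2022, 2, 1),
    pltv_d30=1.20, pltv_d60=1.85, pltv_d90=2.30,
    pltv_d180=3.10, pltv_d365=4.05,
    p_churn=0.62, p_convert_30d=0.04,
)
```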

Data requirements

What data is required for AlgoLift to predict LTV?

We need user-level attribution data and revenue data to model LTV. Providing engagement data is strongly recommended, as it supplies additional signals that can be indicative of user value.
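
As a hedged illustration, the three inputs might look like the following minimal event rows. The field names here are ours, not AlgoLift's specification:

```python
# Hypothetical minimal input rows; field names are illustrative only.
attribution = {"user_id": "u_123", "install_ts": "2022-01-05T10:00:00Z",
               "network": "ExampleAdNetwork", "campaign": "ua_campaign_1",
               "country": "US", "platform": "ios"}

revenue = {"user_id": "u_123", "event_ts": "2022-01-07T18:30:00Z",
           "type": "iap", "product_id": "gem_pack_small", "usd": 4.99}

engagement = {"user_id": "u_123", "event_ts": "2022-01-06T09:15:00Z",
              "event": "level_complete", "value": 12}
```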

What value does AlgoLift offer if there is limited data history in the case of a soft launch or unestablished app?

AlgoLift provides the most accurate LTV projections when we have 1 year+ of data.

  • This enables our models to observe temporal changes in revenue and user behavior and to fully understand how cohorts mature.
  • We can provide projections with less historical data, but this restricts the projection windows based on the amount of data available to train our models.
  • This table gives an overview of how AlgoLift releases new projection windows to clients based on the historical data available for modeling.

LTV Modeling Methodology

Can you share more about your general modeling framework and how you incorporate machine learning?

Our models use machine learning techniques to do the heavy lifting of finding complex patterns and relationships between user demographic information, engagement and purchase behavior, and value. Secondary models then extend those predictions further into the future without relying on outdated cohort LTV curves or similar methods.
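
The following is a minimal two-stage sketch of this idea, not AlgoLift's actual pipeline: a gradient-boosted model learns an early-horizon value (D30) from early user signals, and a secondary model maps that prediction to a longer horizon (D180). All features and data below are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000

# Synthetic early-user features: sessions in first 3 days, D3 revenue, country tier.
X = np.column_stack([
    rng.poisson(5, n),        # d3_sessions
    rng.exponential(1.0, n),  # d3_revenue
    rng.integers(0, 3, n),    # country_tier
])
ltv_d30 = X[:, 1] * 2.0 + 0.1 * X[:, 0] + rng.exponential(0.5, n)
ltv_d180 = ltv_d30 * 1.8 + rng.exponential(0.5, n)

# Stage 1: learn early-horizon LTV from early behavioral signals.
stage1 = GradientBoostingRegressor().fit(X, ltv_d30)
pred_d30 = stage1.predict(X)

# Stage 2: extend the horizon using the stage-1 prediction as an input feature,
# rather than scaling by a fixed historical cohort LTV curve.
stage2 = GradientBoostingRegressor().fit(np.column_stack([X, pred_d30]), ltv_d180)
pred_d180 = stage2.predict(np.column_stack([X, pred_d30]))
```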

How does the size of inputs affect the efficacy of your models (e.g. install count, number of customers)?

Predictions do typically become more accurate with higher input volume. To reach our typical level of accuracy we need approximately 20,000 installs per week and a D7 payer conversion rate of 1-2%.

How do your models handle users who are active, but who only have one purchase?

We understand some existing LTV models use an RFM (recency, frequency, monetary) approach. We do not use RFM-based models, so we do not require a minimum number of purchases to operate. We forecast LTV for every user regardless of their purchase behavior to date, including users who have not yet made a purchase (“non-payers”).

How do your models account for changes to the product, competition, marketing strategy, etc?

This is dependent on the effect of the change. Updates whose effects are quickly reflected in early user behavior are better picked up by our models than those whose effects are recognized over a longer period of time.

For example, a change that decreases retention after D30, but has little effect on earlier user behavior will only be picked up by our models after D30 and only if the new user behavior persists. Furthermore, AlgoLift is able to ingest app version at the user-level as an input as well as any in-app engagement.

Do you need to be notified of these types of upcoming product changes?

No, we do not need to be notified. Our models are developed to adapt to changes in user behavior regardless of whether we're notified of the change.

It’s important to understand that while our models adapt to changes in user behavior, it still takes time for them to learn the realized impact of a given change. The biggest improvements to projections come after the models see updated conversion and retention behavior; the time required may vary by app category, revenue type, etc.

My game has specific gameplay / is a social casino game / is a 4X game / has unique monetization - do you have any experience with this type of game? How do you model LTV differently between my type of game and others?

AlgoLift has a generalized LTV modeling approach that uses attribution, revenue and engagement data to model LTV for your specific application, and the model retrains daily on the most recent day's data from your application.

We use the same IAP LTV model for mid/hardcore games as we do for casual games, with the same level of accuracy. Genre differences, unique gameplay mechanics, live ops events and hyper-whale-driven economies are all reflected in the underlying revenue and engagement data used to model LTV. With this data we’re able to accurately predict the underlying LTV without needing to understand the exact specifics of the gameplay.
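
As a rough illustration of the daily retrain described above, a job might look like the sketch below. The callables are hypothetical placeholders, not AlgoLift's API:

```python
from datetime import date, timedelta

def daily_retrain(load_training_data, fit_model, publish):
    """Refit on all data through yesterday so product or behavior changes
    show up in the model as soon as they appear in the data.

    load_training_data / fit_model / publish are hypothetical callables.
    """
    cutoff = date.today() - timedelta(days=1)
    X, y = load_training_data(end_date=cutoff)  # includes the most recent day's data
    model = fit_model(X, y)
    publish(model, trained_through=cutoff)
```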

Projection Accuracy

How do you ensure the quality of your projections?

We use a threefold approach to ensuring we are delivering the best projections possible:

  1. Periodic back-testing of our models across our entire client portfolio. Back-testing means making projections using only data up to a certain date in the past, then comparing them to actuals (a minimal sketch follows this list).

  2. We maintain secondary models that provide cohort-level LTV projections. While not as flexible or useful as user-level models, these still tend to be quite accurate and serve as effective QA tools to which we can compare our aggregated user-level projections and find any discrepancies. This mitigates the risk of more granular user-level modeling.

  3. We maintain a suite of anomaly detection tools that prevent questionable projections, or projections based on erroneous input data, from being delivered.
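
Here is a minimal back-test sketch under our own assumptions: `projections` and `actuals` are hypothetical mappings from user ID to predicted and realized D90 LTV, with the projections generated at least 90 days before the comparison.

```python
def backtest_error(projections: dict, actuals: dict) -> float:
    """Cohort-level relative error of summed predicted vs. realized LTV."""
    pred_total = sum(projections.values())
    actual_total = sum(actuals[u] for u in projections)
    return abs(pred_total - actual_total) / actual_total

# Example: projections made as of a past cutoff vs. matured actuals.
err = backtest_error({"u1": 2.0, "u2": 0.5}, {"u1": 1.8, "u2": 0.7})
print(f"cohort-level relative error: {err:.1%}")
```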

How long does a cohort need to mature for the LTV projections to be actionable?

When using our LTV projections for Intelligent Automation we leverage these projections immediately rather than wait for a cohort to mature. We take the approach that it’s better to use projections earlier and be directionally accurate than wait for a cohort to mature and lose the benefit of the early feedback on recent cohort behavior. However, for the most accurate predictions we’d recommend waiting 7 days for a cohort to mature. For high-level long-term ROAS reporting or other less time-sensitive use cases, wait 14 to 30 days.

At what cohort age(s) are projections updated?

Each user receives an updated projection daily.

Can you provide prediction intervals for your LTV projections?

We do not provide prediction intervals. The main reason is that they would not properly aggregate up to the cohort level the way a single expected value of LTV does, and so they are of little practical value.

Internally, when estimating LTV for a specific dimension (e.g. sub publisher) we do incorporate estimates of the range of uncertainty in the pLTV of future acquisitions.
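
A quick simulation illustrates why intervals do not aggregate: by linearity of expectation, the sum of per-user mean LTVs matches the mean of the cohort total, but the sum of per-user 90th-percentile bounds badly overstates the cohort's 90th percentile. The distributions below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_sims = 1000, 5000
ltv = rng.exponential(2.0, size=(n_sims, n_users))  # simulated LTV draws per user

per_user_mean = ltv.mean(axis=0)
per_user_p90 = np.percentile(ltv, 90, axis=0)
cohort_totals = ltv.sum(axis=1)

print(per_user_mean.sum(), cohort_totals.mean())             # nearly equal (~2000)
print(per_user_p90.sum(), np.percentile(cohort_totals, 90))  # ~4600 vs. ~2080
```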

Can you provide ongoing reporting of historical projection accuracy?

The preferred way to view historical errors varies by client. We deliver the daily user-level projections to clients through Amazon S3 so that they can measure accuracy with their preferred approach.
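
For example, a client-side accuracy check might look like the sketch below; the bucket, file layout and column names are hypothetical, not AlgoLift's actual delivery format:

```python
import pandas as pd

# Reading s3:// paths with pandas requires the s3fs package.
proj = pd.read_csv("s3://example-client-bucket/projections/2022-01-05.csv")
actuals = pd.read_csv("actual_revenue_d90.csv")  # your matured actuals, keyed by user_id

df = proj.merge(actuals, on="user_id")
cohort_bias = df["pltv_d90"].sum() / df["actual_d90"].sum() - 1
print(f"cohort-level bias: {cohort_bias:+.1%}")
```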

Churn Predictions

What types of churn can you predict?

As long as we receive the event types that define retention, we can predict various types of churn (see the labeling sketch after this list), including:

  • Payer churn: Probability of no further purchases in the next X days
  • Subscriber retention: Probability of someone renewing a subscription at the next opportunity
  • Explicit churn: Probability the user will complete a known account closing or cancellation event
  • Custom: login retention, app retention, etc.
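
As a hedged illustration of how such a label can be constructed from client-supplied events (not AlgoLift's implementation), here is the "payer churn" case:

```python
from datetime import datetime, timedelta

def payer_churn_label(purchase_times, as_of, horizon_days=30):
    """1 = churned (no purchase in (as_of, as_of + horizon]), else 0."""
    window_end = as_of + timedelta(days=horizon_days)
    future = [t for t in purchase_times if as_of < t <= window_end]
    return 0 if future else 1

label = payer_churn_label(
    [datetime(2022, 1, 3), datetime(2022, 1, 20)],
    as_of=datetime(2022, 1, 10), horizon_days=30,
)  # -> 0, because a purchase occurred on Jan 20
```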

Data Security

What data security protections do you have?

  • User data is shared against a non-PII user ID, so AlgoLift cannot determine who the installs are in the real world
  • Data is stored in S3 and in AWS Databases
  • For S3, we protect data through standard role and access control that allows only AlgoLift and the client to read/write data
  • For databases, the servers are maintained within an AWS VPC, which is logically isolated from the network and accessible only through VPN. The databases are further protected by password and access control
  • We also maintain a web portal which is encrypted by HTTPS and protected by password/access control