Q: What's the minimum spend level per channel or overall to use AlgoLift IA?
We don't have a minimum spend requirement on our IA product, but we do recommend a minimum level of spend for two reasons:
- Ad network best practices: Google and Facebook in particular have relatively stringent best practices on the number of conversion events needed to fuel their algorithms
- Clients getting value from Intelligent Automation: if you only have a couple of ad sets on Facebook spending $100 each, it may be challenging to see value in automating your UA through our platform
Q: We're currently optimizing toward D7 ROAS for our UA campaigns. What goal should we use for IA?
We always recommend clients bid toward their true, long-term business goals. Many companies use short-term metrics like D7 ROAS because they lack an accurate LTV model to assess long-term performance. There is not a 1:1 relationship between D7 ROAS and D180/D365 ROAS, so we always encourage clients to buy at their true long-term business target.
Facebook and Google
Q: Why is [a campaign with good recent pROAS] not receiving a higher % of spend?
- Predicted future pROAS is not as good as performance to date.
- The campaign is not able to scale efficiently (diminishing returns).
- The campaign’s budget has been increased recently, but started off at a low % of spend and is still being scaled up.
- Adding more spend to the campaign would violate a portfolio constraint.
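To illustrate the diminishing-returns point above, here is a toy greedy allocator (purely illustrative, not AlgoLift's actual model; the square-root revenue curves and all parameters are hypothetical). Spend increments go to whichever campaign has the best *marginal* return, so a campaign with strong performance to date can still stop receiving additional budget once its marginal ROAS falls below that of other campaigns:

```python
# Illustrative sketch only: a toy greedy allocator showing why a campaign
# with good average ROAS may not get more spend under diminishing returns.
# Revenue curves and parameters are hypothetical, not AlgoLift's.

def revenue(a, spend):
    # Concave (square-root) revenue curve => diminishing marginal returns
    return a * spend ** 0.5

def allocate(curve_params, total_budget, step=10.0):
    spend = {c: 0.0 for c in curve_params}
    budget = total_budget
    while budget >= step:
        # Give the next increment to the campaign with the best marginal gain
        best = max(
            spend,
            key=lambda c: revenue(curve_params[c], spend[c] + step)
            - revenue(curve_params[c], spend[c]),
        )
        spend[best] += step
        budget -= step
    return spend

# Campaign A performs well at low spend but saturates faster (smaller curve)
alloc = allocate({"A": 20.0, "B": 40.0}, total_budget=1000.0)
print(alloc)  # B ends up with the larger share despite A's good history
```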
Q: Why is [a campaign with poor recent pROAS] not being scaled down or paused?
- The campaign’s future pROAS is uncertain and the system wants more data before pausing or reducing to a lower volume.
- Future predicted pROAS is better than recent performance would suggest.
- The campaign is already at the minimum allowed daily spend.
- Other better-performing campaigns are not able to scale efficiently, so there is nowhere to better allocate the spend.
Q: What criteria does AlgoLift use to pause campaigns?
AlgoLift pauses campaigns when doing so is optimal relative to portfolio objectives and constraints. See the User Guide for additional information on automated campaign pausing.
Q: How is Facebook view-through attribution (VTA) handled by AlgoLift's IA system?
In April 2020, Facebook ceased to provide user-level attribution for view-through installs (as opposed to click-through installs), which results in some installs missing campaign-level attribution. We ask clients to label the source of these installs as "Unattributed Facebook installs".
Ad/adset/campaign-level data reported by Facebook still includes VTA installs and conversions, so we are still able to capture the full numbers at this level. In ROAS reporting, projected revenue from "Unattributed Facebook" installs is included in ROAS projections for Facebook as a whole (but not in projections for any individual campaign).
To make sure projected ROAS from VTA users is properly accounted for, ensure that the user data you pass to AlgoLift includes the users with "Unattributed Facebook" as a source, and that campaign_id and related fields are NULL for these users.
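As a sketch of the expected shape of that user-level data (the "Unattributed Facebook" source label and NULL `campaign_id` are what the answer above asks for; the other field names here are illustrative, not a required schema):

```python
# Sketch of user-level records for view-through Facebook installs that lack
# campaign attribution. The "Unattributed Facebook" source and NULL
# campaign_id follow the guidance above; other field names are illustrative.

attributed_user = {
    "user_id": "u_123",
    "source": "Facebook",
    "campaign_id": "238471",
    "adset_id": "991823",
}

vta_user = {
    "user_id": "u_456",
    "source": "Unattributed Facebook",
    "campaign_id": None,   # NULL: no campaign-level attribution available
    "adset_id": None,      # related fields are NULL as well
}

def is_unattributed_facebook(user):
    return user["source"] == "Unattributed Facebook" and user["campaign_id"] is None

print(is_unattributed_facebook(vta_user))  # True
```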
Q: The sum of daily adset and campaign budgets set by AlgoLift exceeds my daily spend goal, why is this the case?
- Certain types of campaigns -- for example, Facebook VO campaigns with high minROAS constraints -- may not spend their budgets in full. When this is the case, our automation system may set higher budgets while still expecting total spend to stay within the allotted daily amount.
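The arithmetic behind this can be sketched as follows (a toy illustration, not AlgoLift's actual logic): if a campaign historically delivers only a fraction of its set budget, the budget can sit above the desired spend while expected spend stays on target.

```python
# Toy illustration (not AlgoLift's actual logic): budgets are scaled up by the
# campaign's historical fill rate so that *expected* spend hits the target.

def budget_for_target(target_spend, expected_fill_rate):
    # expected_fill_rate: fraction of the set budget the campaign typically delivers
    return target_spend / expected_fill_rate

# A Facebook VO campaign with a high minROAS constraint might only deliver
# ~50% of its budget, so the set budget exceeds the desired daily spend.
budget = budget_for_target(target_spend=300.0, expected_fill_rate=0.5)
print(budget)  # 600.0
```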
SDK Ad Networks
Q: How does AlgoLift generate sub-publisher CPI bids?
AlgoLift generates sub-publisher CPI bids leveraging the following approaches:
- LTV Projection: aggregating pLTV by sub-publisher provides a base for subpub bids. However, this produces noisy estimates, and sample sizes are often too low for it to be an effective strategy on its own
- Hierarchical Modeling: AlgoLift uses a separate bidding algorithm to produce subpub bids based on the pLTV from installs through that subpub backed up by prior data from larger groupings (e.g. geo, channel). For subpubs with a low number of installs, these robust estimates can look much different than the “raw” average pLTV
- Optimal Bid Exploration: i.e. the ability to gather more data on newer publishers to find potentially untapped value, is also considered in the bidding algorithm
- Desired Spend Recoup: CPI bids are set based on a client-specified % recoup at a defined horizon
AlgoLift will generate bids for all subpubs that have generated installs within the last 60 days.
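The hierarchical-modeling idea above can be sketched with an empirical-Bayes-style shrinkage estimate (an illustrative stand-in, not AlgoLift's production algorithm; the prior-strength parameter is hypothetical): a subpub's estimate is pulled toward its parent grouping's mean pLTV, with the pull weakening as the subpub's install count grows.

```python
# Empirical-Bayes-style sketch of the hierarchical idea (illustrative only):
# a subpub's pLTV estimate is shrunk toward its parent grouping's (e.g.
# geo/channel) mean, with shrinkage controlled by the subpub's install count.

def shrunk_pltv(subpub_mean, subpub_installs, group_mean, prior_strength=50.0):
    # With few installs the estimate stays close to the group mean;
    # with many installs it converges to the subpub's own raw average.
    w = subpub_installs / (subpub_installs + prior_strength)
    return w * subpub_mean + (1.0 - w) * group_mean

group_mean = 2.00  # $ pLTV across the geo/channel grouping
print(shrunk_pltv(8.00, 5, group_mean))     # low volume: pulled near 2.00
print(shrunk_pltv(8.00, 5000, group_mean))  # high volume: close to 8.00
```

This is why, for subpubs with few installs, the robust estimate can look very different from the "raw" average pLTV.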
Q: How does AlgoLift exploration work?
There are two components to our exploration algorithm:
- Exploring higher / lower CPI bids at the sub-publisher level to determine the optimal bid to hit the portfolio ROAS target
- Exploring new publishers within a campaign's targeting criteria (e.g. geo)
- AlgoLift uses a Bayesian approach to generate bids at the sub-publisher level regardless of volume, using priors from larger aggregates. Any subpub with little or no volume will receive a bid that’s informed by the performance of the others within the same groupings. The risk of over-reacting to performance over a low sample size is handled well by this approach.
- The exploration piece of the algorithm is designed to gather more data on low-volume subpubs, by bidding “optimistically” within certain groups, but in a way that does not jeopardize the overall portfolio goals (i.e. hitting 100% recoup on average).
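The "optimistic" bidding described above can be sketched with a UCB-style uncertainty bonus (a stand-in for illustration, not AlgoLift's actual algorithm; the bonus scale and functional form are assumptions): low-volume subpubs get a small premium on top of their estimate so the system gathers data on them without committing large spend.

```python
# Hedged sketch of "optimistic" exploration bidding (a UCB-style stand-in,
# not AlgoLift's actual algorithm): low-volume subpubs receive an uncertainty
# bonus on top of their pLTV estimate, so they get data-gathering bids while
# established subpubs are bid close to their estimate.

import math

def exploration_bid(pltv_estimate, installs, recoup_target=1.0, bonus_scale=0.5):
    uncertainty_bonus = bonus_scale / math.sqrt(installs + 1.0)
    # Bid toward recouping spend at the target horizon, plus optimism
    return (pltv_estimate / recoup_target) * (1.0 + uncertainty_bonus)

print(exploration_bid(2.0, installs=3))     # new subpub: noticeably above estimate
print(exploration_bid(2.0, installs=5000))  # established subpub: near the estimate
```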
Q: Why does performance experience some volatility after starting automation? How long does it usually take campaigns to ramp up and become stable?
There will usually be some calibration and adjustment during this process, and it takes time for the algorithm to learn about the performance of new campaigns, so it is normal to see some volatility. In some cases, a "drop" in volume can also mean that the initial bids or budgets were too high relative to ROAS targets and therefore had to be reduced. Stability for a campaign depends on many factors specific to the ad network: inventory availability, audience saturation, and competitors bidding more aggressively. That said, our algorithm is always doing some form of "exploration" throughout the process.
Q: Any criteria AlgoLift suggests to measure performance after automation starts?
Related to the question above, AlgoLift suggests looking at high level pROAS / ROAS six to eight weeks after automation starts.
Q: On SDK networks, how does AlgoLift treat whitelist campaigns vs RON?
AlgoLift generates a bid for every geo/sub-publisher combination with 1 install or more. These bids can be applied to both whitelist and RON campaigns.
Q: Why does the bid suggestion from AlgoLift differ from the raw LTV of aggregated users for a sub-publisher, even for a large sample size?
Our algorithm doesn't simply bid the expected value (which would converge toward raw LTV for very large sample sizes); we also take into account the probability of recoup. Our approach is built to quantify the uncertainty in the LTV of future installs per asset (whether at the sub-publisher or campaign/geo level).
The CPI bid may be lower than the raw LTV of past installs for the following reasons:
- Our models are not confident enough in the raw LTV to bid at or above that value. This can happen when the raw LTV is high but there’s volatility around the LTV of purchasers or uncertainty in conversion rates. Even for a sample size of ~10K installs, there is still significant uncertainty in LTV, because only a couple hundred of those users might end up purchasing, and their purchase amounts can still show lots of volatility.
- Bidding the expected value of the LTV would result in a ~50% chance to recoup, or actually less since the posterior distributions tend to be right-skewed (see illustration below). Some positive margin on higher-volume assets is necessary to balance out the “cost” of exploring new, lower-volume assets so that the portfolio as a whole is meeting performance goals.
Q: How does AlgoLift ensure large sub-publishers don’t deliver all the daily budget?
- While there are no explicit constraints on subpub spend, the exploration logic in the bidding algorithm is meant to ensure that volume is spread out adequately. Large sub-publishers may still end up receiving a large percentage of spend if performance justifies it.
Q: How does AlgoLift treat a sub-publisher who previously performed poorly but has started to deliver higher value installs?
- The recency of the data will affect the bid. If the performance at the sub-publisher level has changed significantly, that sub-publisher should receive a higher / more exploration-oriented bid, despite the older poor performance.
Q: How does AlgoLift manage daily / weekly budgets for an ad network as part of a larger portfolio of AlgoLift managed channels?
- For clients who rely on AlgoLift to manage spend across several or all channels, we maintain a separate set of algorithms and infrastructure to allocate spend between them. The resulting allocations are then passed to our social-channel automation algorithms, and to our bidding algorithms for SDK networks and Apple Search, which execute at their respective lower levels.
- The algorithm that manages channel allocations re-optimizes frequently to account for changes in performance and scalability, rather than doing a one-time monthly or quarterly allocation.
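The frequent re-optimization can be sketched as follows (a toy illustration, not AlgoLift's allocator: a real system would also model diminishing returns and per-channel constraints, and the ROAS figures here are invented). Each cycle, channel shares are recomputed from the latest predicted performance rather than fixed once per month or quarter:

```python
# Toy sketch of frequent cross-channel re-allocation (illustrative only):
# each cycle, channel budget shares are recomputed from the latest predicted
# ROAS instead of being fixed monthly or quarterly.

def reallocate(total_budget, predicted_roas):
    # Weight channels by predicted ROAS (a real allocator would also model
    # diminishing returns and per-channel constraints).
    total = sum(predicted_roas.values())
    return {ch: total_budget * r / total for ch, r in predicted_roas.items()}

week1 = reallocate(10_000, {"facebook": 1.2, "google": 0.9, "sdk_net": 0.6})
week2 = reallocate(10_000, {"facebook": 0.8, "google": 1.1, "sdk_net": 0.7})
print(week1["facebook"], week2["facebook"])  # allocation shifts with performance
```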