Assess prediction

A comparison of facts and forecasts helps you assess prediction quality. You can build a report in the myTracker interface and calculate the weighted average error as described below.

We recommend assessing the dimensions you usually work with. This way you get an accurate assessment of the decisions you make, find dimensions with serious fluctuations, and learn the weak spots of the predictive models.

Comparison report

Build a comparison report in myTracker:

  1. In the Constructor, select a report period for which you have factual data.
  2. Add predictive and factual metrics, for example, Prediction LTV30 and LTV30d, and press the Calculate button. The report will be built.
  3. Open the graph using the button in the table header. In the Compare with field, select the metrics to compare (Prediction LTV30 for LTV30d and vice versa).

Add the Revenue type dimension to assess a forecast for in-app payments, subscriptions, and in-app ads.
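
If you prefer to check the numbers outside the interface, you can export the comparison report data and process it with a short script. Below is a minimal sketch in Python; the file name report.csv and the column names Project, Prediction LTV30, and LTV30d are illustrative assumptions and depend on how you export and name the metrics.

    import csv

    # Compare predicted and factual LTV row by row in an exported report.
    # Assumed columns: Project, Prediction LTV30, LTV30d (names are illustrative).
    with open("report.csv", newline="") as f:
        for row in csv.DictReader(f):
            predicted = float(row["Prediction LTV30"])
            actual = float(row["LTV30d"])
            print(f'{row["Project"]}: predicted {predicted:.2f}, '
                  f'actual {actual:.2f}, deviation {predicted - actual:+.2f}')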

Calculate the weighted average error

Prediction models can operate on data that is no longer representative: game and app mechanics change often, which influences user behavior and leads to prediction errors. But if you know the historical error, you can be more confident in the prediction and choose a forecast horizon that fits your current strategy.

Assess LTV Prediction with the Revenue type dimension to calculate the weighted average error for App LTV (revenue from in-app payments), Subscription LTV, and Ads LTV individually.

Method

  1. Select a time frame for calculating the error. It should be a period for which you have both factual data and predictions.
  2. Divide all installs for the selected period into cohorts.
  3. Choose cohort dimensions depending on the revenue type. For App LTV (revenue from in-app payments):
    • Project+Date+Partner
    • Project+Date+Country+Partner
    • Project+Date
    • Project+Month
    • Project+Date+Campaign (additionally split small campaigns into cohorts: build the payment allocation on the 8th day for the selected period and form three groups: <50%, 50-75%, 75-100%)
    For Ads LTV (revenue from in-app impressions):
    • Project+Date
    • Project+Month
    • Project+Date+Ad network (admob, applovin, facebook, etc.)
    • Project+Date+Country
    • Project+Date+Campaign
    For Subscription LTV:
    • Project+Date
    • Project+Month
    • Project+Date+Country
    • Project+Month+Country
    • Project+Month+Traffic type

  4. Calculate the sum of the predictions on the 30th, 60th, 90th, and 180th day individually for the selected cohorts.
  5. Example for the Project+Date cohort:
    • Prediction LTV30 Project1Date1 + ... + Project1DateN
    • Prediction LTV60 Project1Date1 + ... + Project1DateN
    • Prediction LTV90 Project1Date1 + ... + Project1DateN
    • Prediction LTV180 Project1Date1 + ... + Project1DateN
    • Prediction LTV30 Project2Date1 + ... + Project2DateN
    • ...

  6. Calculate the sum of the factual data on the 30th, 60th, 90th, and 180th day individually for the selected cohorts.
  7. Example for the Project+Date cohort:
    • LTV30d Project1Date1 + ... + Project1DateN
    • LTV60d Project1Date1 + ... + Project1DateN
    • LTV90d Project1Date1 + ... + Project1DateN
    • LTV180d Project1Date1 + ... + Project1DateN
    • LTV30d Project2Date1 + ... + Project2DateN
    • ...

  8. Calculate the error for each cohort on the 30th, 60th, 90th, and 180th day individually:
    Cohort error = |(sum_of_prediction + $1) / (sum_of_factual_data + $1) – 1|,
    where $1 is added to avoid division by zero.
  9. Example for the Project+Date cohort:
    • Error for Project1 = |(Prediction LTV30 + $1) / (LTV30d + $1) – 1|
    • Error for Project2 = |(Prediction LTV30 + $1) / (LTV30d + $1) – 1|
    • ...
    • Error for Project1 = |(Prediction LTV60 + $1) / (LTV60d + $1) – 1|
    • Error for Project2 = |(Prediction LTV60 + $1) / (LTV60d + $1) – 1|
    • ...

  10. Calculate the weighted average error across all cohorts, individually for the 30th, 60th, 90th, and 180th day (see the sketch after this list):
    Weighted average error = (Error_for_cohort_1 * Sum_of_factual_data_for_cohort_1 + ... + Error_for_cohort_N * Sum_of_factual_data_for_cohort_N) / Sum_of_factual_data_for_all_cohorts
  11. Example for the Project+Date cohort:
    • Weighted average error = (Error_for_Project1 * LTV30d_for_Project1 + ... + Error_for_ProjectN * LTV30d_for_ProjectN) / (LTV30d_for_Project1 + ... + LTV30d_for_ProjectN)
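
The calculation from steps 8 and 10 can also be scripted. The following is a minimal sketch in Python for one forecast horizon; the data structure and the sample figures are assumptions for illustration, not exported myTracker data.

    # Weighted average error for one forecast horizon (for example, the 30th day).
    # Each cohort is a (sum_of_prediction, sum_of_factual_data) pair in dollars.

    def cohort_error(prediction_sum: float, factual_sum: float) -> float:
        """Step 8: |(prediction + $1) / (fact + $1) - 1|; $1 avoids division by zero."""
        return abs((prediction_sum + 1.0) / (factual_sum + 1.0) - 1.0)

    def weighted_average_error(cohorts: list[tuple[float, float]]) -> float:
        """Step 10: cohort errors weighted by each cohort's factual revenue."""
        total_fact = sum(fact for _, fact in cohorts)
        weighted = sum(cohort_error(pred, fact) * fact for pred, fact in cohorts)
        return weighted / total_fact

    if __name__ == "__main__":
        # Two illustrative Project+Date cohorts: (predicted LTV30 sum, factual LTV30d sum).
        cohorts_ltv30 = [(1200.0, 1100.0), (800.0, 900.0)]
        print(f"Weighted average error: {weighted_average_error(cohorts_ltv30):.2%}")

Repeat the same calculation for each cohort dimension and each horizon (30, 60, 90, and 180 days) to get error tables like the ones in the examples below.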

Result

With the weighted average error, you can assess the prediction quality and decide whether to rely on it, keeping the model restrictions in mind. If the errors are large, send the results, along with data on sales and updates, to our support team so the model can be adjusted.

Example of the weighted average error for App LTV Prediction

Cohort                        App LTV30  App LTV60  App LTV90  App LTV180
Project+Date+Partner          11.46%     14.05%     15.81%     20.65%
Project+Date+Campaign         16.26%     20.06%     22.68%     28.63%
Project+Date+Country+Partner  26.21%     32.92%     36.92%     43.72%
Project+Date                  10.41%     12.55%     14.15%     18.72%
Project+Month                 5.74%      8.41%      10.17%     14.27%

Example of the weighted average error for Ads LTV Prediction

Cohort                   Ads LTV30  Ads LTV60  Ads LTV90  Ads LTV180
Project+Date             11.77%     17.09%     18.80%     22.35%
Project+Month            8.80%      16.42%     17.28%     20.54%
Project+Date+Ad network  18.81%     25.93%     30.42%     32.53%
Project+Date+Country     16.01%     24.71%     29.64%     33.44%
Project+Date+Campaign    16.34%     26.63%     29.81%     31.11%

Example of the weighted average error for Subscription LTV Prediction

Cohort                      Subscription LTV30  Subscription LTV60  Subscription LTV90  Subscription LTV180
Project+Date                13.10%              18.40%              20.20%              26.30%
Project+Month               6.20%               9.70%               11.80%              16.50%
Project+Date+Country        17.60%              20.20%              22.70%              32.30%
Project+Month+Country       15.70%              19.04%              21.60%              27.60%
Project+Month+Traffic type  11.20%              13.60%              17.70%              22.60%