Predictive analytics

myTracker's predictive analytics is an advanced set of analytics tools that expands your knowledge of the app audience by enriching the collected data with "data from the future".

Forecasts of various metrics, scoring, event correlation, user quality assessment, prediction of important financial metrics and audience churn – these are all predictive analytics tools.

Prediction enables you to make more informed decisions about app promotion without having to wait for actual data to accumulate. For example, you can cut inefficient channels before they become a real problem, or rebalance your budget to reduce costs and improve user acquisition.

How it works

Predictive analytics is based on the large amount of data collected by myTracker. A combination of several groups of continually evolving and updated predictive models works together to give you the most accurate forecast possible.

Below are the common stages of predictive analysis in myTracker:

  1. Gathering the data. The embedded SDK collects data on your app. The more data it has, the more accurate the forecast.
  2. Learning. All app users are divided into cohorts based on shared characteristics. The models analyze historical data for each cohort and calculate ratios for future forecasts. The learning time depends on the model group.
  3. Using the models. Once a model’s learning process completes, myTracker starts making forecasts. Every day, all incoming traffic is divided into small cohorts, and each cohort gets its own forecast in myTracker. Forecasts are updated every 24 hours during the first several days after the app’s installation as new data flows in. On the eighth or ninth day (depending on the prediction metric), the results are fixed for each cohort, and the final, seventh or eighth forecast is made.
  4. Making decisions. The first several days provide a solid enough foundation for strategic decisions such as buying traffic. This is why we update our predictions during the first seven or eight days and give the final forecast on the eighth or ninth day (depending on the prediction metric).

  5. Prediction assessment. Predictions should be compared with facts. You can do this once the amount of tracked data goes beyond the forecast horizon (for example, on August 23 you can compare "LTV 6m" with "LTV Prediction 6m" for the cohort that installed the app on February 22). If the forecasts match the facts, the decisions made on their basis paid off, and it’s safe to keep using them. If a forecast considerably deviates from the facts, please contact our support team. We will carefully review each case and improve the predictive models.
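The assessment step boils down to a relative-error check between the forecast and the observed value. A minimal sketch (the figures and the 20% review threshold are illustrative assumptions, not myTracker's actual criteria):

```python
# Compare a cohort's predicted LTV with the actual LTV observed
# once the forecast horizon has passed.

def forecast_deviation(predicted_ltv: float, actual_ltv: float) -> float:
    """Relative deviation of the forecast from the observed value."""
    if actual_ltv == 0:
        return float("inf") if predicted_ltv else 0.0
    return abs(predicted_ltv - actual_ltv) / actual_ltv

# Cohort that installed on February 22, assessed on August 23:
predicted = 4.80   # "LTV Prediction 6m" made shortly after install
actual = 5.10      # "LTV 6m" computed from tracked revenue

deviation = forecast_deviation(predicted, actual)
print(f"deviation: {deviation:.1%}")   # → deviation: 5.9%

# A hypothetical acceptance rule: flag cohorts whose forecast
# deviates by more than 20% for a closer look.
needs_review = deviation > 0.20
```

A small deviation like this confirms the forecast; a cohort flagged by the rule would be a candidate to report to support.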

Predictive models

LTV Prediction

myTracker LTV forecasts are based on a combination of predictive models that can be distilled into three groups:

  • Models that make decent predictions as early as the day after users install your app. Although they don’t need much data, the forecast is more accurate if some app history is available.
  • Models that make good predictions only based on a large amount of current data.
  • Models that give an accurate forecast using a large amount of historical data.

Every prediction combines these three groups. This mixed model makes its first prediction as early as the day after app installation, works with little data, and provides an accurate forecast with a history of just 30 days.
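The mixed model can be pictured as a weighted blend of the three groups, with weight shifting from the quick-start models toward the data-hungry ones as history accumulates. This is only an illustrative sketch; the weighting scheme is an assumption, not myTracker's actual algorithm:

```python
# Blend forecasts from three model groups, weighting the
# data-hungry models more heavily as app history grows.

def blend_ltv_forecasts(early: float, current: float, historical: float,
                        history_days: int) -> float:
    """Weighted average of the three model groups' forecasts.

    early      - model group that works from day one with little data
    current    - model group that needs a lot of current data
    historical - model group that needs a long app history
    """
    # Maturity grows linearly and caps at a 30-day history,
    # the point where the forecast is most accurate.
    maturity = min(history_days, 30) / 30
    w_early = 1.0 - 0.8 * maturity          # shrinks as history grows
    w_current = 0.4 * maturity              # grow with available data
    w_historical = 0.4 * maturity
    total = w_early + w_current + w_historical  # always 1.0 here
    return (w_early * early + w_current * current
            + w_historical * historical) / total
```

With no history the blend simply returns the early model's forecast; after 30 days the data-driven groups dominate.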

For more details, see the LTV prediction section.

Churn prediction

The churn forecast is based on a dedicated model, which ideally has an app history covering the past six weeks.

This model makes its first prediction one day after app install and provides an accurate forecast from the fifth day onward.
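To make the churn metric concrete, here is a minimal sketch of how observed churn could be computed for an install cohort, which is what the model's forecast is later checked against. The inactivity window and data shapes are assumptions for illustration only:

```python
from datetime import date

# Illustrative rule: a user counts as churned if they have been
# inactive for `inactivity_days` or more.

def cohort_churn_rate(last_active: list[date], today: date,
                      inactivity_days: int = 7) -> float:
    """Share of the cohort inactive for `inactivity_days` or more."""
    if not last_active:
        return 0.0
    churned = sum(
        1 for day in last_active
        if (today - day).days >= inactivity_days
    )
    return churned / len(last_active)

# Three users from one install cohort, checked on March 14:
users_last_active = [date(2024, 3, 1), date(2024, 3, 10), date(2024, 3, 12)]
rate = cohort_churn_rate(users_last_active, date(2024, 3, 14))
# Only the first user (inactive for 13 days) counts as churned.
```

The predictive model estimates this rate days in advance, before the inactivity window has actually elapsed.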

For more details, see the Churn prediction section.