myTracker's predictive analytics is a set of advanced analytics tools
that expands your knowledge of the app audience, enriching gathered information with "data from the future".
Forecasts of various metrics, scoring, event correlation, user quality assessment, prediction of important financial metrics, and more – these are all predictive analytics tools.
Prediction enables you to make a more informed decision on app promotion without having to wait for actual data to accumulate. For example, you can remove inefficient channels before they become a real problem, or rebalance your budget to reduce costs and improve user acquisition.
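As a minimal sketch of the idea above: if each acquisition channel has a cost per install and a predicted LTV, channels whose predicted return does not cover the install cost can be cut early. The channel names, CPI figures, and predicted LTVs below are invented for illustration; in practice they would come from your myTracker reports.

```python
# Hypothetical data: in a real workflow these figures would come from
# myTracker reports, not hard-coded values.
channels = [
    {"name": "channel_a", "cpi": 1.20, "predicted_ltv": 2.10},
    {"name": "channel_b", "cpi": 0.90, "predicted_ltv": 0.70},
    {"name": "channel_c", "cpi": 1.50, "predicted_ltv": 1.80},
]

def roi(channel):
    """Predicted return per dollar spent on an install."""
    return channel["predicted_ltv"] / channel["cpi"]

# Keep only channels whose predicted LTV covers the cost of an install.
efficient = [c["name"] for c in channels if roi(c) >= 1.0]
print(efficient)  # ['channel_a', 'channel_c']
```

Here channel_b is dropped before its losses accumulate, which is exactly the kind of decision a forecast lets you make ahead of the actual data.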
How it works
Predictive analytics is based on a large amount of data collected by myTracker.
A combination of several groups of continually evolving and updated
predictive models works to give you the most accurate forecast possible.
Below are the common stages of predictive analysis in myTracker:
- Gathering the data. The embedded SDK collects data on your app. The more data it has,
the more accurate the forecast.
- Learning. All app users are divided into cohorts based on shared characteristics.
The models analyze historical data for each cohort and calculate ratios for future forecasts.
The learning time depends on the model group.
- Using the models. Once a model’s learning process completes,
myTracker starts making its forecast.
Every day all incoming traffic is divided into tiny cohorts,
and each cohort is given its own forecast in myTracker.
Forecasts are updated every 24 hours during the first eight days
after the app's installation as new data flows in.
On the ninth day, the results are fixed for each cohort and the final, eighth forecast is made.
In other words, you can see a forecast the very next day after users install your app
and watch it become more accurate by the day.
The first eight days provide a solid enough foundation to make a strategic decision on things like buying traffic. This is why we update our predictions during the first eight days and give the final forecast on the ninth day.
- Assessing predictions.
Predictions should be compared with facts. You can do this when the amount of tracked data goes beyond the forecast horizon (for example, on 23 August, you can compare "LTV180d" and "Prediction LTV180" for the cohort that installed the app on 22 February). If the decisions made based on the forecasts paid off, it's safe to continue using them. If a forecast considerably deviates from the facts, please contact our support team. We will carefully review each case and improve the predictive models.
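The horizon check in the example above is simple date arithmetic: a cohort's actual LTV180d only exists once 180 days have passed since its install date. The helper below is a hypothetical sketch, not part of the myTracker API; the dates follow the 22 February / 23 August example from the text.

```python
from datetime import date, timedelta

def can_compare(install_date, today, horizon_days=180):
    """True once 'today' lies beyond the forecast horizon for the cohort."""
    return today >= install_date + timedelta(days=horizon_days)

cohort_install = date(2023, 2, 22)
print(can_compare(cohort_install, date(2023, 8, 23)))  # True: 180+ days elapsed
print(can_compare(cohort_install, date(2023, 6, 1)))   # False: horizon not reached
```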
myTracker forecasts are based on a combination of predictive models
that can be distilled into three groups:
- Models that make decent predictions as early as the day after users install your app.
Although they don’t need much data, a forecast will be more accurate if there is some app history.
- Models that make good predictions only based on a large amount of current data.
- Models that give an accurate forecast using a large amount of historical data.
Any prediction is based on a combination of these three groups.
This mixed model makes a prediction as early as the day after app installation,
works with little data, and can provide an accurate
forecast based on a history of just 30 days.
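One simple way to picture such a mixed model is a weighted blend in which the history-hungry models gain weight as data accumulates. The weighting scheme and numbers below are assumptions for illustration only; myTracker's actual ensemble logic is not public.

```python
def blend_forecast(day_one_pred, current_data_pred, historical_pred,
                   days_of_history):
    """Blend three model groups, weighting history-based models more
    as more data accumulates (illustrative weights, not myTracker's)."""
    # Share of the 30-day window already observed, capped at 1.0.
    maturity = min(days_of_history / 30, 1.0)
    w_history = 0.5 * maturity          # grows with accumulated history
    w_current = 0.3 * maturity          # grows with current data volume
    w_day_one = 1.0 - w_history - w_current
    return (w_day_one * day_one_pred
            + w_current * current_data_pred
            + w_history * historical_pred)

# With no history, only the day-one models contribute.
print(blend_forecast(5.0, 5.0, 5.0, days_of_history=0))   # 5.0
# With a full 30 days of history, the history-based model dominates.
print(round(blend_forecast(1.0, 2.0, 3.0, days_of_history=30), 2))  # 2.3
```

The design point is the one made in the text: the blend can answer on day one, yet converges toward the more accurate history-based models within about 30 days.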