Over the last couple of decades a disparity has opened up between the growth of data storage and the growth of processing power. Storage capacity has roughly doubled every year, while processing power has doubled only every year and a half. This disparity creates huge pools of data – often called big data – that cannot be processed efficiently. The question for organisations is no longer how much data they have, but what they can effectively and efficiently do with it.
The goal of predictive analytics is to use this data to predict the outcome of future business decisions, such as how much capital to invest in new hotel receptions; whether to offer buy-one-get-one-free (BOGOF) promotions in certain categories; whether to keep bank branches open later; or whether to increase prices on starters in a restaurant.
Historically, one of the most common ways of using data to inform these decisions was correlation-based analysis: predicting future performance from the relationship between various factors (e.g. price) and financial outcomes (e.g. sales). While such analysis generally produces interesting hypotheses, a number of confounding factors make the resulting answers unreliable in predicting the true incremental impact of a business action.
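To make this concrete, a minimal sketch of a correlation-based approach might look like the following, where a straight line is fitted to historical price and sales figures. All numbers and variable names here are invented for illustration, not drawn from any real data set.

```python
import numpy as np

# Illustrative weekly observations: price charged and units sold.
# (Hypothetical figures; a real analysis would use historical transaction data.)
prices = np.array([3.50, 3.50, 3.25, 3.25, 3.00, 3.00, 2.75, 2.75])
units  = np.array([ 410,  430,  520,  540,  660,  700,  820,  850])

# Fit a straight line: units ≈ slope * price + intercept.
slope, intercept = np.polyfit(prices, units, 1)

# Use the fitted relationship to "predict" demand at a new, lower price.
proposed_price = 2.50
predicted_units = slope * proposed_price + intercept
print(f"Predicted weekly units at £{proposed_price:.2f}: {predicted_units:.0f}")

# Note: the fitted slope only describes an association in past data;
# it says nothing about whether cutting the price would *cause* extra sales.
```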
Take the example of a restaurant chain that drops the price of peppermint mochas in December. Unit sales would skyrocket, and correlation analysis would suggest that restaurants across the network should drop their prices too. However, much of that increase may have occurred anyway at the original price, simply because of seasonal demand in December. In this case, a price decrease may actually be the wrong decision, as it would give money away rather than drive incremental purchases. So how can executives accurately predict the cause-and-effect relationship between business decisions and profits?
The best way to make such decisions is to try them out first and measure the effects – in this case, reducing the price of peppermint mochas in a subset of stores to understand the incremental impact before rolling the change out more broadly. Unfortunately, the business world is far less controlled than the scientific laboratory, where such empirical analysis has historically been conducted.
Volatility in weekly financial metrics, demographic differences, weather changes, and competitor actions all make it challenging to find comparable test and control situations and to isolate the impact of a programme. So even with all the data available, management is still left asking: “Did my action lead to higher profits, or did something else cause them?”
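In principle, comparing stores that made the change against comparable stores that did not is what answers that question. A minimal sketch of the underlying arithmetic – a simple difference-in-differences calculation on invented store-level figures – might look like this:

```python
import numpy as np

# Hypothetical average weekly unit sales per store, before and during December.
# "Test" stores dropped the price; "control" stores kept it unchanged.
test_before,    test_during    = np.array([500, 480, 530]), np.array([760, 720, 790])
control_before, control_during = np.array([510, 470, 520]), np.array([690, 650, 710])

# Raw uplift in the test stores mixes the price effect with seasonal demand.
raw_lift = test_during.mean() - test_before.mean()

# Difference-in-differences: subtract the uplift the control stores saw anyway.
seasonal_lift = control_during.mean() - control_before.mean()
incremental_lift = raw_lift - seasonal_lift

print(f"Raw December uplift:       {raw_lift:.0f} units/store/week")
print(f"Uplift in control stores:  {seasonal_lift:.0f} units/store/week")
print(f"Incremental impact of cut: {incremental_lift:.0f} units/store/week")
```

In this made-up example, most of the December jump shows up in the control stores as well, so the true incremental effect of the price cut is far smaller than the raw uplift suggests.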
Using advanced technologies and sophisticated algorithms, over 100 leading organisations globally, including giants such as ASDA, Boots, Hilton, and Costa, are now able to scientifically test business ideas.
To overcome the challenges of performing tests in the real world, these organisations use specialised test-and-learn software that enables robust test design capabilities, rigorous control group matching, and automated segmentation and targeting capabilities.
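As a rough illustration of what control group matching involves at its simplest, the sketch below pairs each test store with the candidate control store whose pre-test sales history it resembles most closely. Store names and figures are hypothetical, and commercial test-and-learn software applies far more sophisticated matching than this:

```python
import numpy as np

# Hypothetical pre-test weekly sales histories (one array per store, 6 weeks).
test_stores = {
    "T1": np.array([480, 510, 495, 520, 505, 515]),
    "T2": np.array([300, 310, 295, 320, 305, 315]),
}
control_candidates = {
    "C1": np.array([470, 505, 500, 515, 500, 510]),
    "C2": np.array([310, 305, 300, 325, 300, 320]),
    "C3": np.array([620, 640, 615, 650, 630, 645]),
}

# Match each test store to the control candidate with the most similar
# pre-period sales pattern (smallest sum of squared weekly differences).
for name, history in test_stores.items():
    best = min(control_candidates,
               key=lambda c: np.sum((control_candidates[c] - history) ** 2))
    print(f"Test store {name} matched to control store {best}")
```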
The test-and-learn function is typically owned either by a centralised analytics team, which serves as a resource for the whole organisation, or by analysts within individual functions such as merchandising, marketing, and pricing.
These teams objectively measure the true impact of each new strategy, ensuring that test-and-learn does not simply validate the current strategy but instead encourages innovation. Testing teams also provide a standardised framework by which all decisions are made, significantly reducing the internal debate about the impact of each new programme.
For these reasons, companies across numerous sectors are starting to industrialise an empirical approach to experimentation and business decision-making. Organisations can adopt bold new ideas, since the test-and-learn process will provide the evidence for doing so – there’s no longer a need to take a leap of faith.