FORECASTING UK INFLATION BOTTOM UP

New research uses disaggregated Consumer Prices Index (CPI) item-level data – for example, price indices for ‘light bulbs’, ‘cinema tickets’, etc. – directly to forecast aggregate measures of UK inflation.

The study by Andreas Joseph, Galina Potjagailo, Eleni Kalamara and George Kapetanios finds that exploiting this high granularity of information strongly and significantly improves forecast accuracy, with gains of up to 30-70% relative to an autoregressive process and a dynamic factor model, over and above the gains from traditionally used macroeconomic predictors.

Using about 600 item series, the study compares a wide range of models in a pseudo real-time out-of-sample forecasting exercise between 2015 and 2019. The set of models includes dimensionality reduction techniques, penalised regressions, and commonly used machine learning methods, such as random forests and artificial neural networks.
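
To make the setup concrete, here is a stylised sketch of such an expanding-window, pseudo real-time exercise on synthetic data, comparing an autoregressive benchmark with a penalised (ridge) regression on item-level predictors. The 600 synthetic item series, the 12-month horizon and the penalty value are illustrative assumptions, not the study's exact choices.

```python
# Stylised pseudo real-time exercise: at each forecast origin, models are re-estimated
# on data available up to that point and used to forecast inflation h months ahead.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n_months, n_items, h = 240, 600, 12                      # monthly data, direct h-step forecasts
items = pd.DataFrame(rng.normal(size=(n_months, n_items)),
                     columns=[f"item_{i}" for i in range(n_items)])
weights = rng.dirichlet(np.ones(n_items))                # stand-in expenditure weights
inflation = (items @ weights).to_numpy() + 0.3 * rng.normal(size=n_months)

errors = {"AR benchmark": [], "Ridge on items": []}
for t in range(180, n_months - h):                       # expanding estimation window
    y_train = inflation[h:t + 1]                         # y_{s+h} for s = 0, ..., t-h
    ar = LinearRegression().fit(inflation[:t + 1 - h].reshape(-1, 1), y_train)
    ridge = Ridge(alpha=10.0).fit(items.values[:t + 1 - h], y_train)

    actual = inflation[t + h]
    errors["AR benchmark"].append(ar.predict([[inflation[t]]])[0] - actual)
    errors["Ridge on items"].append(ridge.predict(items.values[[t]])[0] - actual)

for name, e in errors.items():
    print(f"{name}: out-of-sample RMSE = {np.sqrt(np.mean(np.square(e))):.3f}")
```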

Ridge regression – a shrinkage model that penalises coefficients on predictors that contribute little to model performance, without fully discarding them – achieves the highest forecast accuracy across horizons, targets and specifications. This suggests that dealing effectively with this large information set is more important than the ability to fit complex functional forms.
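
The contrast with a lasso penalty, which does discard predictors entirely, can be illustrated on synthetic data; the penalty values below are purely illustrative.

```python
# Shrinkage behaviour: ridge pulls coefficients towards zero as the penalty grows but
# almost never sets them exactly to zero, whereas lasso drops predictors entirely.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 600))                          # many item-level predictors, few observations
beta = np.zeros(600)
beta[:20] = rng.normal(size=20)                          # only a handful truly matter
y = X @ beta + rng.normal(size=200)

for alpha in (1.0, 100.0):
    ridge = Ridge(alpha=alpha).fit(X, y)
    n_zero = int(np.sum(ridge.coef_ == 0))
    print(f"ridge (alpha={alpha}): {n_zero} of 600 coefficients exactly zero, "
          f"mean |coef| = {np.abs(ridge.coef_).mean():.4f}")

lasso = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)
print(f"lasso (alpha=0.1): {int(np.sum(lasso.coef_ == 0))} of 600 coefficients exactly zero")
```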

But there is tentative evidence that machine learning models capture turning points and changes in volatility in aggregate inflation dynamics – this makes them a promising tool for crisis episodes and when longer micro-based data sets become available.

The use of disaggregated price data makes it possible to draw on information that is directly relevant for these aggregate outcomes, even though the distributional moments of the item indices do not necessarily translate straightforwardly to the aggregate level.

This matters because inflation forecasts play a central role in economic policy choices as well as business decisions in the wider economy. Central banks regularly publish consumer price inflation forecasts that form the basis for policy decisions, but that also affect decisions in the real and financial sectors.

The approach comes with challenges in interpreting forecast outcomes, often referred to as the 'black box' critique. This arises not only from the non-linear form of the machine learning models, but also from the high dimensionality of the input space. Both complicate the interpretability and communication of the findings.

To interpret results and to derive potential policy conclusions, three questions are of interest. What is the contribution of a predictor to the forecast? Are certain sectoral divisions of CPI items more relevant than others for forecasts of a specific target variable? And conditional on its explanatory power in a model, does a component statistically co-move with inflation?

The study addresses these questions via a model-agnostic and flexible approach that measures the contribution of individual items to each forecast using Shapley values. In particular, the researchers decompose each model prediction into Shapley components attributed to each predictor. They then partially re-aggregate this intermediate output into contributions from intuitively understood item categories, such as Clothing & footwear, or Recreation & culture.
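
A minimal sketch of this decomposition and re-aggregation step is given below. For a linear model under feature independence, the Shapley value of an item has a simple closed form (its coefficient times the item's deviation from its mean), which is computed directly here; for non-linear models a general Shapley estimator would be needed instead. The item-to-category mapping is invented for illustration and is not the actual CPI classification.

```python
# Decompose each prediction into one Shapley component per item, then re-aggregate
# the item-level components into broader, intuitively understood categories.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_obs, n_items = 200, 600
X = pd.DataFrame(rng.normal(size=(n_obs, n_items)),
                 columns=[f"item_{i}" for i in range(n_items)])
y = X.iloc[:, :20].sum(axis=1) + rng.normal(size=n_obs)

model = Ridge(alpha=10.0).fit(X, y)

# Closed-form Shapley values for a linear model: coef_i * (x_it - mean_i).
# Each prediction decomposes as a constant baseline plus the sum of these components.
phi = (X - X.mean()) * model.coef_
baseline = model.intercept_ + X.mean() @ model.coef_
assert np.allclose(model.predict(X), baseline + phi.sum(axis=1))

# Partially re-aggregate the item-level contributions into broader categories.
categories = ["Food & drink", "Clothing & footwear", "Housing & energy",
              "Transport", "Recreation & culture", "Other"]
item_to_category = {col: categories[i % len(categories)] for i, col in enumerate(X.columns)}
phi_by_category = phi.T.groupby(item_to_category).sum().T    # forecast dates x categories
print(phi_by_category.head())
```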

Finally, they statistically test the association of these categories with future aggregate inflation using an auxiliary regression analysis. This makes it possible to assess and communicate the results in a widely understood way.
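
In a similar spirit, the auxiliary regression step might look as follows, with stand-in data replacing the re-aggregated Shapley contributions; the Newey-West (HAC) standard errors and lag length are illustrative assumptions, not necessarily the study's exact specification.

```python
# Auxiliary regression: regress realised future inflation on the category-level
# Shapley contributions and test whether each category co-moves with the target.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_obs = 200
categories = ["Food & drink", "Clothing & footwear", "Housing & energy",
              "Transport", "Recreation & culture", "Other"]
phi_by_category = pd.DataFrame(rng.normal(size=(n_obs, len(categories))), columns=categories)
future_inflation = 0.8 * phi_by_category["Recreation & culture"] + rng.normal(size=n_obs)

aux = sm.OLS(future_inflation, sm.add_constant(phi_by_category)).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12})
print(aux.summary().tables[1])            # coefficient, t-statistic and p-value per category
```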

For example, Recreation & culture serves as a robust predictor of aggregate CPI inflation across horizons and models. This may be of particular interest when investigating the impact of Covid-19 going forward, as this sector has been particularly affected by the pandemic.