
Analyzing Forecast Differences for the Arizona Model

Jan. 23, 2026

Results for the past 50 forecasts


As a general rule, forecasts differ from what actually happens. This article analyzes forecast differences for the EBRC Arizona model since 2012. That encompasses 50 quarterly forecasts from the fourth quarter of 2012 through the first quarter of 2025.

Forecasting and Model Background

Forecast differences arise from many sources, including revisions to data history, changing econometric relationships captured at a point in time, U.S. forecast differences, less-than-perfect model specification, judgmental adjustments that miss the mark, and many more.

Let’s take each one of these in turn. The EBRC modeling strategy employed for the Arizona model relies on econometric methods (regression analysis) to pin down relationships between dependent variables (left-hand side of the equation) and right-hand side variables (aka explanatory variables). These right-hand side variables can be taken as either exogenous (U.S. variables, where we further assume that what happens in Arizona does not materially impact the U.S. economy) or they can be lagged endogenous variables from other equations within the model. Here, think of Arizona income used as an explanatory variable in an Arizona retail sales equation.

These regression equations can then be combined with a forecast of the U.S. variables to drive a forecast for Arizona. Both the regression estimation and the forecast itself assume that we know past history perfectly. But this is not the case: economic data are constantly revised. This creates problems both for the regression estimation and for the forecast, and those problems show up as forecast differences.

For Arizona nonfarm jobs, revisions have sometimes been quite large. For instance, preliminary data for 2024 suggested total jobs (seasonally adjusted) for the year were 3,272,900. Just three months later that was revised down to 3,238,200. That translated into a downward revision to history of 1.1%, which impacted forecast differences for 2024 and will impact differences in the future for two-year ahead and three-year ahead forecasts. The key takeaway is that when we know the recent past with little precision, that translates into less precision in forecasts.
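As a quick check of that revision figure, the percentage change can be computed directly from the job numbers cited above (a minimal sketch):

```python
# Preliminary vs. revised Arizona nonfarm jobs for 2024 (seasonally adjusted),
# figures as cited above.
preliminary = 3_272_900
revised = 3_238_200

# Revision to history as a percentage of the preliminary figure
revision_pct = (revised / preliminary - 1.0) * 100
print(f"Revision to history: {revision_pct:.1f}%")  # prints -1.1%
```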

Next up is the U.S. forecast. During the past 12 years, I have used U.S. forecasts from S&P Global to drive the Arizona forecasts. Obviously, S&P Global faces the same forecast challenges. If the U.S. forecast is incorrect, then that flows down to forecast differences in the Arizona forecast.

Even if we knew perfectly (and without revision) the history of economic data used to estimate the regressions that make up the Arizona model, there will still be uncertainty in the estimated relationships, creating another source of difference between forecasts and actuals. Furthermore, an econometric relationship that was correct at the time of estimation may have changed one, two, or three years later, creating forecast differences.

All forecasts in the Arizona model are impacted to some extent by judgmental adjustments added by the model operator (me). Although these adjustments are intended to improve forecast performance, there is no guarantee that they do.

Finally, there are many other sources of forecast differences. The above is not intended as an exhaustive list of all the things that might happen in the future that we cannot know at the time the forecast is generated.

What follows is an analysis of the aggregate impact of these sources of forecast differences.

Evaluating Forecast Differences

The future is uncertain. There is no getting around that. However, in many cases, the far future is more uncertain than the near future. We should incorporate that information into our analysis of forecast differences. In other words, we should standardize our analysis by forecast horizon.

For the Arizona model, each year I generate three quarterly forecasts with 10-year horizons and one with a 30-year horizon. Each forecast then contains a one-year ahead forecast, a two-year ahead forecast, a three-year ahead forecast, and so on. In this analysis, we will focus on the overall model performance at the one-year, two-year, and three-year ahead horizons.

We have 50 one-year ahead forecasts for the variables in the Arizona model. How should we summarize the forecast differences for those forecasts? One standard way to do it is to calculate the percentage difference between forecast and actual and then average the percentage differences across forecasts. This is called the Mean Percentage Difference.

The percentage difference for each forecast is calculated by the following formula:

Percentage Difference = (Forecast / Actual − 1.0) × 100

In this case, a positive value indicates that the forecast exceeded the actual. A negative value indicates that the forecast fell short of the actual.

Averaging the percentage differences for all forecasts gives us the Mean Percentage Difference (MPD). We hope that the average is small, indicating that the forecasts have been close to actuals. 

You will note that in this case, positive percentage differences cancel against negative values in the process of taking the average. We are therefore also interested in a version of the measure that removes this canceling. We do this by taking the absolute value of the percentage differences before averaging. This is the Mean Absolute Percentage Difference (MAPD).
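The two summary measures can be sketched in a few lines of Python. The forecast and actual values below are hypothetical, chosen only to show how errors cancel in the MPD but not in the MAPD:

```python
def percentage_difference(forecast, actual):
    """Percentage difference: (forecast / actual - 1.0) * 100."""
    return (forecast / actual - 1.0) * 100

def mpd(forecasts, actuals):
    """Mean Percentage Difference: positive and negative errors cancel."""
    diffs = [percentage_difference(f, a) for f, a in zip(forecasts, actuals)]
    return sum(diffs) / len(diffs)

def mapd(forecasts, actuals):
    """Mean Absolute Percentage Difference: absolute values, no cancellation."""
    diffs = [abs(percentage_difference(f, a)) for f, a in zip(forecasts, actuals)]
    return sum(diffs) / len(diffs)

# Hypothetical one-year ahead forecasts vs. actuals
forecasts = [102.0, 98.0, 101.0, 99.0]
actuals = [100.0, 100.0, 100.0, 100.0]
print(mpd(forecasts, actuals))   # approximately 0.0 -- errors cancel
print(mapd(forecasts, actuals))  # approximately 1.5 -- average absolute error
```

Note how the symmetric errors wash out of the MPD entirely, while the MAPD reports the typical size of the miss regardless of direction.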

Forecast Differences for the Arizona Model

The results for the key variables for the Arizona model are summarized in Exhibit 1. Note that for the MPD and the MAPD, the forecast differences often grow with the forecast horizon. For instance, on average across forecasts, the MAPD for Arizona nonfarm jobs was 0.8% at the one-year ahead horizon, 1.3% at the two-year ahead horizon, and 1.7% for the three-year horizon. This is as expected: the further out we forecast, the more uncertain is the result.

Also as expected, results for the MAPD are generally larger than for the MPD, and the MAPD shows the same pattern of growth with the forecast horizon.

One way to translate these mean absolute percentage differences into levels is to multiply the percentage difference by current forecast data. This will give us a general idea of the confidence interval to expect going forward, based on past model performance. For instance, the one-year ahead MAPD for nonfarm employment was 0.8%. In the Fourth Quarter 2025 projections, Arizona nonfarm jobs were forecast to be 3,263,100 in 2025. Thus, these results suggest that the actual value may well be 25,600 jobs higher or lower than that forecast, once the final benchmark revisions are released.
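That back-of-the-envelope translation can be reproduced directly. Note that the rounded 0.8% MAPD gives a band of about 26,100 jobs; the 25,600 figure in the text presumably reflects an unrounded MAPD:

```python
# Translate a MAPD into an approximate band around a forecast level.
mapd_pct = 0.8               # rounded one-year ahead MAPD for nonfarm jobs, percent
forecast_level = 3_263_100   # Fourth Quarter 2025 forecast of 2025 jobs

band = forecast_level * mapd_pct / 100
print(f"Expected band: +/- {band:,.0f} jobs")  # about +/- 26,100 with the rounded MAPD
```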

Exhibit 1: Forecast Evaluation for the Arizona Model, Percent
