## 7.7 Forecasting with ETS models

Point forecasts are obtained from the models by iterating the equations for \(t=T+1,\dots,T+h\) and setting all \(\varepsilon_t=0\) for \(t>T\).

For example, for model ETS(M,A,N), \(y_{T+1} = (\ell_T + b_T )(1+ \varepsilon_{T+1})\). Therefore \(\hat{y}_{T+1|T}=\ell_{T}+b_{T}\). Similarly, \[\begin{align*} y_{T+2} &= (\ell_{T+1} + b_{T+1})(1 + \varepsilon_{T+2})\\ &= \left[ (\ell_T + b_T) (1+ \alpha\varepsilon_{T+1}) + b_T + \beta (\ell_T + b_T)\varepsilon_{T+1} \right] ( 1 + \varepsilon_{T+2}). \end{align*}\] Therefore, \(\hat{y}_{T+2|T}= \ell_{T}+2b_{T}\), and so on. These forecasts are identical to the forecasts from Holt’s linear method, and also to those from model ETS(A,A,N). Thus, the point forecasts obtained from the method and from the two models that underlie the method are identical (assuming that the same parameter values are used).
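The iteration is easy to carry out numerically. The base-R sketch below uses made-up values for \(\ell_T\), \(b_T\), \(\alpha\) and \(\beta\) (they are illustrative assumptions, not estimates from any data set), iterates the ETS(M,A,N) state equations with all future errors set to zero, and recovers the point forecasts \(\hat{y}_{T+h|T} = \ell_T + h b_T\), exactly as for Holt's linear method.

```r
# Hypothetical final states and parameters (for illustration only)
ell <- 100; b <- 2; alpha <- 0.3; beta <- 0.1
h <- 4
fc <- numeric(h)
for (i in seq_len(h)) {
  eps <- 0                                    # future errors set to zero
  fc[i] <- (ell + b) * (1 + eps)              # y_{T+i} = (l + b)(1 + eps)
  ell_new <- (ell + b) * (1 + alpha * eps)    # level update
  b_new   <- b + beta * (ell + b) * eps       # slope update
  ell <- ell_new; b <- b_new
}
fc
#> [1] 102 104 106 108
```

With the errors zeroed out, the \(\alpha\) and \(\beta\) terms drop away, which is why the same forecasts arise from ETS(A,A,N) and from Holt's method.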

ETS point forecasts are equal to the medians of the forecast distributions. For models with only additive components, the forecast distributions are normal, so the medians and means are equal. For ETS models with multiplicative errors, or with multiplicative seasonality, the point forecasts will not be equal to the means of the forecast distributions.

To obtain forecasts from an ETS model, we use the `forecast()` function. The R code below shows the possible arguments that this function takes when applied to an ETS model. We explain each of the arguments in what follows.

```
forecast(object, h=ifelse(object$m>1, 2*object$m, 10),
  level=c(80,95), fan=FALSE, simulate=FALSE, bootstrap=FALSE,
  npaths=5000, PI=TRUE, lambda=object$lambda, biasadj=NULL, ...)
```

`object`

- The object returned by the `ets()` function.

`h`

- The forecast horizon: the number of periods to be forecast.

`level`

- The confidence level for the prediction intervals.

`fan`

- If `fan=TRUE`, then `level=seq(50,99,by=1)`. This is suitable for fan plots.

`simulate`

- If `simulate=TRUE`, prediction intervals are produced by simulation rather than using algebraic formulae. Simulation will also be used (even if `simulate=FALSE`) where there are no algebraic formulae available for the particular model.

`bootstrap`

- If `bootstrap=TRUE` and `simulate=TRUE`, then the simulated prediction intervals use re-sampled errors rather than normally distributed errors.

`npaths`

- The number of sample paths used in computing simulated prediction intervals.

`PI`

- If `PI=TRUE`, then prediction intervals are produced; otherwise only point forecasts are calculated.

`lambda`

- The Box-Cox transformation parameter. This is ignored if `lambda=NULL`. Otherwise, the forecasts are back-transformed via an inverse Box-Cox transformation.

`biasadj`

- If `lambda` is not `NULL`, the back-transformed forecasts (and prediction intervals) are bias-adjusted.

```
fit %>% forecast(h=8) %>%
autoplot() +
ylab("International visitor nights in Australia (millions)")
```

### Prediction intervals

A big advantage of the models is that prediction intervals can also be generated, something that cannot be done using the methods alone. The prediction intervals will differ between models with additive and multiplicative errors.

For most ETS models, a prediction interval can be written as \[ \hat{y}_{T+h|T} \pm k \sigma_h \] where \(k\) depends on the coverage probability, and \(\sigma_h^2\) is the forecast variance. Values for \(k\) were given in Table 3.1. For ETS models, formulas for \(\sigma_h^2\) can be complicated; the details are given in Chapter 6 of Hyndman et al. (2008). In the table below, we give the formulas for the additive ETS models, which are the simplest.
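The multiplier \(k\) is simply a quantile of the standard normal distribution, so the values in Table 3.1 can be reproduced directly in R:

```r
# Multipliers k for 80% and 95% prediction intervals
# (two-sided coverage, so we use the 90th and 97.5th normal percentiles)
k <- qnorm(c(0.90, 0.975))
round(k, 2)
#> [1] 1.28 1.96
```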

Model | Forecast variance: \(\sigma_h^2\) |
---|---|
(A,N,N) | \(\sigma_h^2 = \sigma^2\big[1 + \alpha^2(h-1)\big]\) |
(A,A,N) | \(\sigma_h^2 = \sigma^2\Big[1 + (h-1)\big\{\alpha^2 + \alpha\beta h + \frac16\beta^2h(2h-1)\big\}\Big]\) |
(A,A\(_d\),N) | \(\sigma_h^2 = \sigma^2\biggl[1 + \alpha^2(h-1) + \frac{\beta\phi h}{(1-\phi)^2} \left\{2\alpha(1-\phi) +\beta\phi\right\} - \frac{\beta\phi(1-\phi^h)}{(1-\phi)^2(1-\phi^2)} \left\{ 2\alpha(1-\phi^2)+ \beta\phi(1+2\phi-\phi^h)\right\}\biggr]\) |
(A,N,A) | \(\sigma_h^2 = \sigma^2\Big[1 + \alpha^2(h-1) + \gamma h_m(2\alpha+\gamma)\Big]\) |
(A,A,A) | \(\sigma_h^2 = \sigma^2\Big[1 + (h-1)\big\{\alpha^2 + \alpha\beta h + \frac16\beta^2h(2h-1)\big\} + \gamma h_m \big\{2\alpha+ \gamma + \beta m (h_m+1)\big\} \Big]\) |
(A,A\(_d\),A) | \(\sigma_h^2 = \sigma^2\biggl[1 + \alpha^2(h-1) +\frac{\beta\phi h}{(1-\phi)^2} \left\{2\alpha(1-\phi) + \beta\phi \right\} - \frac{\beta\phi(1-\phi^h)}{(1-\phi)^2(1-\phi^2)} \left\{ 2\alpha(1-\phi^2)+ \beta\phi(1+2\phi-\phi^h)\right\} + \gamma h_m(2\alpha+\gamma) + \frac{2\beta\gamma\phi}{(1-\phi)(1-\phi^m)}\left\{h_m(1-\phi^m) - \phi^m(1-\phi^{mh_m})\right\}\biggr]\) |

Here \(m\) denotes the seasonal period, and \(h_m = \lfloor(h-1)/m\rfloor\).
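As a concrete illustration, the (A,N,N) row of the table can be coded directly. The base-R sketch below uses made-up values for \(\sigma^2\), \(\alpha\) and the final level \(\ell_T\) (which, for this model, is the point forecast at every horizon); they are assumptions for illustration, not estimates.

```r
# 95% prediction intervals for ETS(A,N,N) from the variance formula above
sigma2 <- 4; alpha <- 0.5; ell <- 50            # hypothetical values
h <- 1:4
sigma2_h <- sigma2 * (1 + alpha^2 * (h - 1))    # forecast variance at horizon h
lower <- ell - 1.96 * sqrt(sigma2_h)
upper <- ell + 1.96 * sqrt(sigma2_h)
cbind(h, sigma2_h, lower, upper)
```

Note that the variance, and hence the interval width, grows with the horizon \(h\), as it should.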

For a few ETS models, there are no known formulas for prediction intervals. In these cases, the `forecast.ets` function uses simulated future sample paths, and computes prediction intervals from the percentiles of these simulated future paths.
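A minimal base-R sketch of this idea, shown here for the ETS(A,N,N) case with hypothetical values of \(\ell_T\), \(\alpha\) and \(\sigma\) (the `forecast.ets` function handles the general case, and resamples residuals when `bootstrap=TRUE`):

```r
set.seed(123)
ell0 <- 50; alpha <- 0.5; sigma <- 2   # hypothetical final level and parameters
h <- 8; npaths <- 5000
paths <- matrix(NA_real_, npaths, h)
for (j in seq_len(npaths)) {
  ell <- ell0
  for (i in seq_len(h)) {
    eps <- rnorm(1, 0, sigma)     # with bootstrap=TRUE, resample residuals instead
    paths[j, i] <- ell + eps      # y_{T+i} = l_{T+i-1} + e_{T+i}
    ell <- ell + alpha * eps      # level update: l_t = l_{t-1} + alpha * e_t
  }
}
# 80% prediction intervals from the 10th and 90th percentiles of the paths
pi80 <- apply(paths, 2, quantile, probs = c(0.1, 0.9))
```

The simulated intervals widen with the horizon, matching the behaviour of the algebraic (A,N,N) formula up to simulation error.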

### Bibliography

Hyndman, Rob J., Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder. 2008. *Forecasting with Exponential Smoothing: The State Space Approach*. Berlin: Springer-Verlag. www.exponentialsmoothing.net.