Autoregressive (AR) Model

As the name suggests, autoregressive (AR) models regress a time series on its own past values: the current value is modeled as a linear combination of previous values plus a noise term.

Table of contents
  1. Univariate autoregressive model
    1. $AR(1)$
    2. Lag operator
    3. $AR(p)$

Univariate autoregressive model

$AR(1)$

The simplest AR model is the $AR(1)$, where the $1$ indicates a lag of one time step:

\[y_t = \phi_0 + \phi_1 y_{t-1} + \varepsilon_t\]

where $\varepsilon_t$ is a white noise term with zero mean and constant variance $\sigma_\varepsilon^2$.
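To build intuition, here is a minimal simulation sketch of an $AR(1)$ process. The parameter values $\phi_0 = 0.5$, $\phi_1 = 0.8$, $\sigma_\varepsilon = 1$ are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

phi0, phi1 = 0.5, 0.8  # intercept and AR coefficient (illustrative)
sigma = 1.0            # standard deviation of the white noise
T = 200                # number of time steps to simulate

y = np.zeros(T)
eps = rng.normal(0.0, sigma, size=T)
for t in range(1, T):
    # y_t = phi_0 + phi_1 * y_{t-1} + eps_t
    y[t] = phi0 + phi1 * y[t - 1] + eps[t]
```

With $|\phi_1| < 1$ the simulated series fluctuates around the long-run mean $\phi_0 / (1 - \phi_1)$ (here $2.5$).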

Since $\varepsilon_t$ has zero mean and is independent of $y_{t-1}$, the conditional expectation and variance of $y_t$ given $y_{t-1}$ are:

\[\begin{gather*} \E[y_t \mid y_{t-1}] = \phi_0 + \phi_1 y_{t-1} \\[1em] \Var[y_t \mid y_{t-1}] = \Var[\varepsilon_t] = \sigma_\varepsilon^2 \end{gather*}\]
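As a quick sanity check, these conditional moments can be estimated by Monte Carlo: fix $y_{t-1}$, draw many realizations of $y_t$, and compare the sample moments to the formulas (parameter values are again illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

phi0, phi1, sigma = 0.5, 0.8, 1.0  # illustrative parameters
y_prev = 2.0                       # a fixed value of y_{t-1}

# Draw 100,000 realizations of y_t conditional on y_{t-1} = y_prev
eps = rng.normal(0.0, sigma, size=100_000)
y_t = phi0 + phi1 * y_prev + eps

print(y_t.mean())  # ~ phi0 + phi1 * y_prev = 2.1
print(y_t.var())   # ~ sigma**2 = 1.0
```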

Lag operator

The lag operator (denoted by $L$) is a convenient notation for autoregressive models: applying it shifts a series back one time step, so $L y_t = y_{t-1}$.

It is also called a backshift operator and is denoted by $B$ in some literature.

The lag operator can be raised to an arbitrary power $k$ to indicate a time series lagged by $k$ time steps:

$$ L^k y_t = y_{t-k} $$
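As a concrete illustration, pandas' `Series.shift` behaves like the lag operator; the series below is an arbitrary example:

```python
import pandas as pd

# An example series y_t indexed by t = 0, 1, 2, 3, 4
y = pd.Series([10.0, 11.0, 12.5, 13.0, 14.2])

# L^2 y_t = y_{t-2}: shift(2) places the value from t-2 at index t.
# The first two entries are NaN because y_{-2} and y_{-1} do not exist.
print(y.shift(2))
# 0     NaN
# 1     NaN
# 2    10.0
# 3    11.0
# 4    12.5
```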

$AR(p)$

The general form of $AR(p)$ (read “AR model of order $p$”), with the constant term omitted for simplicity, is:

$$ y_t = \sum_{i=1}^p \phi_i y_{t-i} + \varepsilon_t $$
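Extending the earlier $AR(1)$ sketch, simulating an $AR(p)$ process only requires a dot product over the previous $p$ values (the coefficients below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

phi = np.array([0.5, -0.3, 0.1])  # phi_1, ..., phi_p (illustrative, p = 3)
p = len(phi)
sigma, T = 1.0, 200

y = np.zeros(T)
eps = rng.normal(0.0, sigma, size=T)
for t in range(p, T):
    # y_t = sum_{i=1}^p phi_i * y_{t-i} + eps_t
    # y[t-p:t][::-1] is (y_{t-1}, ..., y_{t-p}), matching phi's order
    y[t] = phi @ y[t - p:t][::-1] + eps[t]
```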

We often write this more compactly using lag polynomial notation:

$$ \phi(L) y_t = \varepsilon_t $$

where $\phi(L) = 1 - \sum_{i=1}^p \phi_i L^i$.

Derivation of the polynomial notation:

\[\begin{gather*} y_t = \sum_{i=1}^p \phi_i y_{t-i} + \varepsilon_t \\[1em] y_t - \sum_{i=1}^p \phi_i y_{t-i} = \varepsilon_t \\[1em] \Big(1 - \sum_{i=1}^p \phi_i L^i\Big) y_t = \varepsilon_t \\[1em] \phi(L) y_t = \varepsilon_t \end{gather*}\]
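As a numeric check of this identity, we can apply the lag polynomial to a simulated $AR(2)$ series and confirm that it recovers the generating white noise (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

phi1, phi2 = 0.5, -0.3  # illustrative AR(2) coefficients
sigma, T = 1.0, 500

# Simulate y_t = phi_1 * y_{t-1} + phi_2 * y_{t-2} + eps_t
y = np.zeros(T)
eps = rng.normal(0.0, sigma, size=T)
for t in range(2, T):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

# Apply phi(L) = 1 - phi_1 L - phi_2 L^2 to the series:
# phi(L) y_t = y_t - phi_1 * y_{t-1} - phi_2 * y_{t-2}
recovered = y[2:] - phi1 * y[1:-1] - phi2 * y[:-2]

print(np.allclose(recovered, eps[2:]))  # True: phi(L) y_t = eps_t
```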