
Multiplicative autocorrelation in stationary Markov processes

E. Castedo Ellerman
2021-04-10

Abstract

DOCUMENT TYPE: Open Study Answer

QUESTION: For any stationary Markov process, is the autocorrelation over an interval the product of the autocorrelations over its subintervals?

Summary

A stationary process \(Z_t\) has multiplicative autocorrelation when \[ \operatorname{Cor}\!\left[{ Z_t}, { Z_r}\right] = \operatorname{Cor}\!\left[{ Z_t}, { Z_s}\right] \operatorname{Cor}\!\left[{ Z_s}, { Z_r}\right] \] for all \(t \le s \le r\). Autocorrelation is defined as \[ \operatorname{Cor}\!\left[{ Z_t}, { Z_s}\right] := \frac{ \operatorname{Cov}\!\left[{ Z_t}, { Z_s}\right] }{\sigma^2} \] with \(\sigma^2 = \operatorname{Var}({ Z_t})\).

A stationary autoregressive process has multiplicative autocorrelation [1]. However, not all stationary Markov processes have multiplicative autocorrelation. See the section below about a real-valued 3-state Markov chain for a counterexample.

Conversely, among real-valued discrete-time stationary processes, only autoregressive processes have multiplicative autocorrelation: the next section shows that multiplicative autocorrelation lets the process be written as an AR(1) recursion with uncorrelated "white noise" innovations. Some Markov processes are autoregressive even though not obviously so. For example, every stationary real-valued two-state Markov chain has multiplicative autocorrelation (and is thus autoregressive).
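As an illustrative numerical check (a sketch, not part of the argument), the Python code below simulates a stationary Gaussian AR(1) process and compares a sample autocorrelation against the product of the autocorrelations of its subintervals. The parameter values, seed, and function names are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ar1(rho, n_steps, sigma_eps=1.0):
    """Simulate a stationary AR(1) process Z[t] = rho * Z[t-1] + eps[t]."""
    z = np.empty(n_steps)
    # Draw Z[0] from the stationary distribution: variance sigma_eps^2 / (1 - rho^2).
    z[0] = rng.normal(scale=sigma_eps / np.sqrt(1.0 - rho**2))
    eps = rng.normal(scale=sigma_eps, size=n_steps)
    for t in range(1, n_steps):
        z[t] = rho * z[t - 1] + eps[t]
    return z

def autocor(z, lag):
    """Sample autocorrelation of z at the given lag."""
    z = z - z.mean()
    return (z[: len(z) - lag] * z[lag:]).sum() / (z * z).sum()

z = sample_ar1(rho=0.6, n_steps=200_000)
print(autocor(z, 5))                  # close to 0.6**5 = 0.07776
print(autocor(z, 2) * autocor(z, 3))  # close to the same product
```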

Multiplicative autocorrelation implies autoregression

Consider any real-valued discrete-time stationary Markov process \(Z'_t\) with multiplicative autocorrelation, and translate it to \(Z_t := Z'_t - {\operatorname{E}\!\left[{ Z'_t}\right]}\). This is without loss of generality, since translation changes neither covariances nor correlations.

Let \[ \begin{aligned} \sigma^2 & := \operatorname{Var}({ Z_t}) \\ \rho & := \operatorname{Cov}\!\left[{ Z_t}, { Z_{t+1}}\right] / \sigma^2 \end{aligned} \]

Applying multiplicative autocorrelation repeatedly across unit time steps gives \[ \begin{aligned} \operatorname{Cor}\!\left[{ Z_t}, { Z_{t+n}}\right] & = \rho^n \\ \operatorname{Cov}\!\left[{ Z_t}, { Z_{t+n}}\right] & = \rho^n \sigma^2 \end{aligned} \]

Define what will be shown to be the "white noise" of \(Z_t\) as an autoregressive process: \[ \epsilon_t := Z_t - \rho Z_{t-1} \] Thanks to the convenient translation, \[ \begin{aligned} {\operatorname{E}\!\left[{ Z_t}\right]} & = 0 \\ {\operatorname{E}\!\left[{ \epsilon_t}\right]} & = 0 \\ \operatorname{Cov}\!\left[{ Z_t}, { Z_s}\right] & = {\operatorname{E}\!\left[{ Z_t Z_s}\right]} \\ {\operatorname{E}\!\left[{ Z_t^2}\right]} & = \sigma^2 \\ {\operatorname{E}\!\left[{ Z_t Z_{t+1}}\right]} & = \rho \sigma^2 \end{aligned} \]

Consider any \(n > 0\). \[ \begin{aligned} {\operatorname{E}\!\left[{ \epsilon_t \epsilon_{t+n}}\right]} & = {\operatorname{E}\!\left[{ (Z_t - \rho Z_{t-1})(Z_{t+n} - \rho Z_{t+n-1})}\right]} \\ & = {\operatorname{E}\!\left[{ Z_t Z_{t+n}}\right]} + \rho^2 {\operatorname{E}\!\left[{ Z_{t-1} Z_{t+n-1}}\right]} - \rho ({\operatorname{E}\!\left[{ Z_t Z_{t+n-1}}\right]} + {\operatorname{E}\!\left[{ Z_{t-1} Z_{t+n}}\right]}) \\ & = (1 + \rho^2) \rho^n \sigma^2 - \rho (\rho^{n-1} \sigma^2 + \rho^{n+1} \sigma^2) \\ & = 0 \end{aligned} \] thus \(\epsilon_t\) satisfies the uncorrelated "white noise" condition needed to express \(Z_t\) as the autoregressive process \[ Z_t = \rho Z_{t-1} + \epsilon_t \] QED
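The cancellation above can also be checked symbolically. A minimal sketch, assuming the sympy library is available; `cov(k)` stands in for \(\operatorname{Cov}\!\left[{ Z_t}, { Z_{t+k}}\right] = \rho^k \sigma^2\):

```python
import sympy as sp

rho, sigma2 = sp.symbols("rho sigma2", positive=True)
n = sp.symbols("n", integer=True, positive=True)

# Under multiplicative autocorrelation, Cov[Z_t, Z_{t+k}] = rho**k * sigma2.
def cov(k):
    return rho**k * sigma2

# E[eps_t eps_{t+n}] expanded exactly as in the derivation above.
expr = cov(n) + rho**2 * cov(n) - rho * (cov(n - 1) + cov(n + 1))
print(sp.simplify(expr))  # prints 0
```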

Real-valued 2-state Markov chain

For any stationary two-state Markov chain [1] \(Z_t\) taking real values \(a_0 \neq a_1\), \[ \operatorname{Cor}\!\left[{ Z_t}, { Z_0}\right] = \operatorname{Cor}\!\left[{ Z_t}, { Z_u}\right] \operatorname{Cor}\!\left[{ Z_u}, { Z_0}\right] \] for all \(0 \le u \le t\).

Proof. Let \[ \begin{aligned} q_1 & := \operatorname{P}({ Z_t = a_1 }) \\ q_0 & := \operatorname{P}({ Z_t = a_0 }) \end{aligned} \] and map \(Z_t\) to the more convenient \[ Y_t := \frac{Z_t - a_0}{a_1 - a_0} \] Since \(Y_t\) only equals \(0\) or \(1\): \[ {\operatorname{E}\!\left[{ Y_t}\right]} = {\operatorname{E}\!\left[{ Y_t^2}\right]} = q_1 \] and thus \[ \operatorname{Var}({ Y_t}) = q_1 - q_1^2 = q_1 q_0 \]

For convenience let \[ \begin{aligned} p_0 & := \operatorname{P}({ Y_1 = 0 \mid Y_0 = 1 }) \\ p_1 & := \operatorname{P}({ Y_1 = 1 \mid Y_0 = 0 }) \\ s & := p_0 + p_1 \end{aligned} \] Since \(Y_t\) is stationary, it follows that \(q_i = p_i/s\) for \(i \in \{0,1\}\).

For induction on \(t\), assume \[ \begin{aligned} \operatorname{P}({ Y_t = 1 \mid Y_0 = 1 }) & = q_1 + q_0 (1-s)^t \\ \operatorname{P}({ Y_t = 1 \mid Y_0 = 0 }) & = q_1 - q_1 (1-s)^t \end{aligned} \] Conditioning on \(Y_1\) and using the Markov property together with stationarity, \[ \begin{aligned} \operatorname{P}({ Y_{t+1} = 1 \mid Y_0 = 1 }) & = \operatorname{P}({ Y_{t+1} = 1 \mid Y_1 = 1 }) (1-p_0) + \operatorname{P}({ Y_{t+1} = 1 \mid Y_1 = 0 }) p_0 \\ & = [q_1 + q_0 (1-s)^t] (1-p_0) + [q_1 - q_1 (1-s)^t] p_0 \\ & = q_1 + [q_0 (1-p_0) - q_1 p_0](1-s)^t \\ & = q_1 + [q_0 (1-p_0) - (1- q_0) p_0](1-s)^t \\ & = q_1 + [q_0 - p_0](1-s)^t \\ & = q_1 + [q_0 - q_0 s](1-s)^t \\ & = q_1 + q_0 (1-s)^{t+1} \\ \end{aligned} \] and \[ \begin{aligned} \operatorname{P}({ Y_{t+1} = 1 \mid Y_0 = 0 }) & = \operatorname{P}({ Y_{t+1} = 1 \mid Y_1 = 1 }) p_1 + \operatorname{P}({ Y_{t+1} = 1 \mid Y_1 = 0 }) (1-p_1) \\ & = [q_1 + q_0 (1-s)^t] p_1 + [q_1 - q_1 (1-s)^t] (1-p_1) \\ & = q_1 + [q_0 p_1 - q_1 (1-p_1)](1-s)^t \\ & = q_1 + [(1 - q_1) p_1 - q_1 (1-p_1)](1-s)^t \\ & = q_1 + [p_1 - q_1](1-s)^t \\ & = q_1 + [q_1 s - q_1](1-s)^t \\ & = q_1 - q_1 (1-s)^{t+1} \\ \end{aligned} \] which completes the induction; the base case \(t = 0\) holds by inspection.

Due to the convenient mapping to \(Y_t\), \[ \begin{aligned} {\operatorname{E}\!\left[{ Y_t Y_0}\right]} & = \operatorname{P}({ Y_t = 1 \mid Y_0 = 1 }) \operatorname{P}({ Y_0 = 1}) \\ & = (q_1 + q_0 (1-s)^t) q_1 \\ & = q_1^2 + q_0 q_1 (1-s)^t \end{aligned} \] thus \[ \begin{aligned} \operatorname{Cov}\!\left[{ Y_t}, { Y_0}\right] & = {\operatorname{E}\!\left[{ Y_t Y_0}\right]} - {\operatorname{E}\!\left[{ Y_t}\right]} {\operatorname{E}\!\left[{ Y_0}\right]} \\ & = q_1^2 + q_0 q_1 (1-s)^t - q_1^2 \\ & = q_0 q_1 (1-s)^t \\ \operatorname{Cor}\!\left[{ Y_t}, { Y_0}\right] & = (1-s)^t \end{aligned} \] Correlation is invariant under the affine map between \(Z_t\) and \(Y_t\), so \(\operatorname{Cor}\!\left[{ Z_t}, { Z_0}\right] = (1-s)^t\), and \((1-s)^t = (1-s)^{t-u} (1-s)^u\) for any \(0 \le u \le t\), which is exactly the claimed multiplicativity. QED
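The closed forms derived above are easy to confirm by powering the transition matrix. A minimal numerical sketch assuming numpy, with arbitrary illustrative values for \(p_0\) and \(p_1\):

```python
import numpy as np

p0, p1 = 0.3, 0.2          # illustrative transition probabilities
s = p0 + p1
q1, q0 = p1 / s, p0 / s    # stationary probabilities of states 1 and 0

# Transition matrix of Y_t with states ordered (0, 1).
P = np.array([[1 - p1, p1],
              [p0, 1 - p0]])

t = 7
Pt = np.linalg.matrix_power(P, t)
print(Pt[1, 1], q1 + q0 * (1 - s)**t)  # P(Y_t = 1 | Y_0 = 1) vs closed form
print(Pt[0, 1], q1 - q1 * (1 - s)**t)  # P(Y_t = 1 | Y_0 = 0) vs closed form

# Cov[Y_t, Y_0] = E[Y_t Y_0] - q1**2 and Var[Y_t] = q1 * q0.
cov = Pt[1, 1] * q1 - q1**2
print(cov / (q1 * q0), (1 - s)**t)     # both equal (1 - s)**t
```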

Counterexample: a real-valued 3-state Markov chain

Let \(Z_t\) be a stationary Markov process such that \[ \operatorname{P}({ Z_t = -1}) = \operatorname{P}({ Z_t = 0}) = \operatorname{P}({ Z_t = 1}) = 1/3 \] \[ \begin{aligned} \operatorname{P}({ Z_{t+1} = 0 \mid Z_t = -1}) & = 1/2 \\ \operatorname{P}({ Z_{t+1} = 1 \mid Z_t = 0}) & = 1/2 \\ \operatorname{P}({ Z_{t+1} = -1 \mid Z_t = 1}) & = 1/2 \end{aligned} \] and for all \(i \in \{-1, 0, 1\}\), \[ \operatorname{P}({ Z_{t+1} = i \mid Z_t = i}) = 1/2 \]

Conveniently \({\operatorname{E}\!\left[{ Z_t}\right]} = 0\), thus \(\operatorname{Cov}\!\left[{ Z_t}, { Z_s}\right] = {\operatorname{E}\!\left[{ Z_t Z_s}\right]}\) and \(\operatorname{Var}({ Z_t}) = {\operatorname{E}\!\left[{ Z_t^2}\right]} = 2/3\). For one time step, \[ \begin{aligned} \operatorname{P}({ Z_1 = 1 \wedge Z_0 = 1 }) & = (1/3)(1/2) \\ \operatorname{P}({ Z_1 = -1 \wedge Z_0 = 1 }) & = (1/3)(1/2) \\ \operatorname{P}({ Z_1 = 1 \wedge Z_0 = -1 }) & = 0 \\ \operatorname{P}({ Z_1 = -1 \wedge Z_0 = -1 }) & = (1/3)(1/2) \end{aligned} \] so the autocovariance over one time step is positive: \[ {\operatorname{E}\!\left[{ Z_1 Z_0}\right]} = (1 \cdot 1) \frac{1}{6} + (-1 \cdot 1) \frac{1}{6} + (-1 \cdot -1) \frac{1}{6} = \frac{1}{6} \] For two time steps, \[ \begin{aligned} \operatorname{P}({ Z_2 = 1 \wedge Z_0 = 1 }) & = (1/3) (1/2)^2 \\ \operatorname{P}({ Z_2 = -1 \wedge Z_0 = 1 }) & = (1/3) [ (1/2)^2 + (1/2)^2 ] \\ \operatorname{P}({ Z_2 = 1 \wedge Z_0 = -1 }) & = (1/3) (1/2)^2 \\ \operatorname{P}({ Z_2 = -1 \wedge Z_0 = -1 }) & = (1/3) (1/2)^2 \end{aligned} \] so the autocovariance over two time steps is negative: \[ {\operatorname{E}\!\left[{ Z_2 Z_0}\right]} = (1 \cdot 1) \frac{1}{12} + (-1 \cdot 1) \frac{2}{12} + (1 \cdot -1) \frac{1}{12} + (-1 \cdot -1) \frac{1}{12} = - \frac{1}{12} \] Therefore \(\operatorname{Cor}\!\left[{ Z_1}, { Z_0}\right] = \frac{1/6}{2/3} = \frac{1}{4}\) and \(\operatorname{Cor}\!\left[{ Z_2}, { Z_0}\right] = \frac{-1/12}{2/3} = -\frac{1}{8}\), while stationarity gives \(\operatorname{Cor}\!\left[{ Z_2}, { Z_1}\right] = \operatorname{Cor}\!\left[{ Z_1}, { Z_0}\right] = \frac{1}{4}\), so \[ \operatorname{Cor}\!\left[{ Z_2}, { Z_0}\right] = -\frac{1}{8} \neq \frac{1}{16} = \operatorname{Cor}\!\left[{ Z_2}, { Z_1}\right] \operatorname{Cor}\!\left[{ Z_1}, { Z_0}\right] \]
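The same numbers fall out of an exact computation with the chain's transition matrix. A small self-contained sketch in exact rational arithmetic (the helper names are illustrative):

```python
from fractions import Fraction as F

states = (-1, 0, 1)
# Transition matrix with rows/columns ordered as states = (-1, 0, 1).
P = [[F(1, 2), F(1, 2), F(0)],
     [F(0), F(1, 2), F(1, 2)],
     [F(1, 2), F(0), F(1, 2)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def lag_cor(n):
    """Exact Cor[Z_n, Z_0] under the uniform stationary distribution."""
    Pn = [[F(int(i == j)) for j in range(3)] for i in range(3)]  # identity
    for _ in range(n):
        Pn = mat_mul(Pn, P)
    e_zz = sum(F(1, 3) * states[i] * states[j] * Pn[i][j]
               for i in range(3) for j in range(3))
    return e_zz / F(2, 3)  # Var[Z_t] = E[Z_t^2] = 2/3

print(lag_cor(1))               # 1/4
print(lag_cor(2))               # -1/8
print(lag_cor(1) * lag_cor(1))  # 1/16, not equal to -1/8
```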

References

1. Hamilton JD (1994) Time Series Analysis. Princeton University Press, Princeton, NJ