Summary

A stationary process $Z_t$ has multiplicative autocorrelation when $$\operatorname{Cor}\!\left[Z_t, Z_r\right] = \operatorname{Cor}\!\left[Z_t, Z_s\right] \operatorname{Cor}\!\left[Z_s, Z_r\right]$$ for all $t \le s \le r$. Autocorrelation is defined as $$\operatorname{Cor}\!\left[Z_t, Z_s\right] := \frac{\operatorname{Cov}\!\left[Z_t, Z_s\right]}{\sigma^2}$$ with $\sigma^2 = \operatorname{Var}(Z_t)$.

A stationary autoregressive process has multiplicative autocorrelation [1]. However, not all stationary Markov processes have multiplicative autocorrelation. See the section below about a real-valued 3-state Markov chain for a counterexample.

Conversely, among discrete-time stationary processes, only autoregressive processes have multiplicative autocorrelation; a proof is given below. Some Markov processes are autoregressive even though this is not obvious. For example, every stationary real-valued two-state Markov chain is autoregressive (and therefore has multiplicative autocorrelation).

Multiplicative autocorrelation implies autoregression

Consider any real-valued discrete-time stationary process $Z'_t$ with multiplicative autocorrelation, and translate it to $Z_t := Z'_t - \operatorname{E}\!\left[Z'_t\right]$. This loses no generality, since translation changes neither correlations nor the autoregressive form.

Let $$\begin{aligned} \sigma^2 & := \operatorname{Var}(Z_t) \\ \rho & := \operatorname{Cov}\!\left[Z_t, Z_{t+1}\right] / \sigma^2 \end{aligned}$$

Multiplicative autocorrelation implies $$\begin{aligned} \operatorname{Cor}\!\left[Z_t, Z_{t+n}\right] & = \rho^n \\ \operatorname{Cov}\!\left[Z_t, Z_{t+n}\right] & = \rho^n \sigma^2 \end{aligned}$$

Define what will be shown to be the “white noise” of $Z_t$ as an autoregressive process: $$\epsilon_t := Z_t - \rho Z_{t-1}$$ By the convenient translation, $$\begin{aligned} \operatorname{E}\!\left[Z_t\right] & = 0 \\ \operatorname{E}\!\left[\epsilon_t\right] & = 0 \\ \operatorname{Cov}\!\left[Z_t, Z_s\right] & = \operatorname{E}\!\left[Z_t Z_s\right] \\ \operatorname{E}\!\left[Z_t^2\right] & = \sigma^2 \\ \operatorname{E}\!\left[Z_t Z_{t+1}\right] & = \rho \sigma^2 \end{aligned}$$

Consider any $n > 0$. $$\begin{aligned} \operatorname{E}\!\left[\epsilon_t \epsilon_{t+n}\right] & = \operatorname{E}\!\left[(Z_t - \rho Z_{t-1})(Z_{t+n} - \rho Z_{t+n-1})\right] \\ & = \operatorname{E}\!\left[Z_t Z_{t+n}\right] + \rho^2 \operatorname{E}\!\left[Z_{t-1} Z_{t+n-1}\right] - \rho \left(\operatorname{E}\!\left[Z_t Z_{t+n-1}\right] + \operatorname{E}\!\left[Z_{t-1} Z_{t+n}\right]\right) \\ & = (1 + \rho^2) \rho^n \sigma^2 - \rho (\rho^{n-1} \sigma^2 + \rho^{n+1} \sigma^2) \\ & = 0 \end{aligned}$$ Thus $\epsilon_t$ satisfies the “white noise” condition for expressing $Z_t$ as the autoregressive process $$Z_{t+1} = \rho Z_t + \epsilon_{t+1}$$ QED
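The result can also be checked numerically. A minimal sketch (assuming NumPy is available; the values of $\rho$ and the simulation length are arbitrary) simulates a stationary AR(1) process and confirms that its lag-$n$ sample autocorrelation is close to $\rho^n$, i.e. multiplicative:

```python
import numpy as np

# Simulate Z_{t+1} = rho * Z_t + eps_{t+1} with Gaussian white noise,
# then compare the lag-n sample autocorrelation against rho**n.
rng = np.random.default_rng(0)
rho = 0.6
n_steps = 200_000
eps = rng.standard_normal(n_steps)

z = np.empty(n_steps)
z[0] = eps[0] / np.sqrt(1 - rho**2)  # draw Z_0 from the stationary distribution
for t in range(1, n_steps):
    z[t] = rho * z[t - 1] + eps[t]

def lag_corr(x, n):
    """Sample correlation between x_t and x_{t+n}."""
    return np.corrcoef(x[:-n], x[n:])[0, 1]

for n in (1, 2, 3):
    print(n, lag_corr(z, n), rho**n)
```

The sample and theoretical values agree to within sampling error, which shrinks as the simulation length grows.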

Real-valued 2-state Markov chain

For any stationary real-valued two-state Markov chain [1] $Z_t$ taking values $a_0$ and $a_1$, and any times $0 \le u \le t$, $$\operatorname{Cor}\!\left[Z_t, Z_0\right] = \operatorname{Cor}\!\left[Z_t, Z_u\right] \operatorname{Cor}\!\left[Z_u, Z_0\right]$$

Proof. Let $$\begin{aligned} q_1 & := \operatorname{P}(Z_t = a_1) \\ q_0 & := \operatorname{P}(Z_t = a_0) \end{aligned}$$ Map $Z_t$ to a more convenient $$Y_t := \frac{Z_t - a_0}{a_1 - a_0}$$ Since $Y_t$ only equals $0$ or $1$: $$\operatorname{E}\!\left[Y_t\right] = \operatorname{E}\!\left[Y_t^2\right] = q_1$$ and thus $$\operatorname{Var}(Y_t) = q_1 - q_1^2 = q_1 q_0$$ For convenience let $$\begin{aligned} p_0 & := \operatorname{P}(Y_1 = 0 \mid Y_0 = 1) \\ p_1 & := \operatorname{P}(Y_1 = 1 \mid Y_0 = 0) \\ s & := p_0 + p_1 \end{aligned}$$ Since $Y_t$ is stationary, it follows that $q_i = p_i / s$ for $i \in \{0, 1\}$. In preparation for induction, assume $$\begin{aligned} \operatorname{P}(Y_t = 1 \mid Y_0 = 1) & = q_1 + q_0 (1-s)^t \\ \operatorname{P}(Y_t = 1 \mid Y_0 = 0) & = q_1 - q_1 (1-s)^t \end{aligned}$$ It must follow that $$\begin{aligned} \operatorname{P}(Y_{t+1} = 1 \mid Y_0 = 1) & = \operatorname{P}(Y_{t+1} = 1 \mid Y_1 = 1)(1 - p_0) + \operatorname{P}(Y_{t+1} = 1 \mid Y_1 = 0) p_0 \\ & = [q_1 + q_0 (1-s)^t](1 - p_0) + [q_1 - q_1 (1-s)^t] p_0 \\ & = q_1 + [q_0 (1 - p_0) - q_1 p_0](1-s)^t \\ & = q_1 + [q_0 (1 - p_0) - (1 - q_0) p_0](1-s)^t \\ & = q_1 + [q_0 - p_0](1-s)^t \\ & = q_1 + [q_0 - q_0 s](1-s)^t \\ & = q_1 + q_0 (1-s)^{t+1} \end{aligned}$$ and $$\begin{aligned} \operatorname{P}(Y_{t+1} = 1 \mid Y_0 = 0) & = \operatorname{P}(Y_{t+1} = 1 \mid Y_1 = 1) p_1 + \operatorname{P}(Y_{t+1} = 1 \mid Y_1 = 0)(1 - p_1) \\ & = [q_1 + q_0 (1-s)^t] p_1 + [q_1 - q_1 (1-s)^t](1 - p_1) \\ & = q_1 + [q_0 p_1 - q_1 (1 - p_1)](1-s)^t \\ & = q_1 + [(1 - q_1) p_1 - q_1 (1 - p_1)](1-s)^t \\ & = q_1 + [p_1 - q_1](1-s)^t \\ & = q_1 + [q_1 s - q_1](1-s)^t \\ & = q_1 - q_1 (1-s)^{t+1} \end{aligned}$$ which completes the induction, noting the base case of $t = 0$ is true.

Due to the convenient mapping to $Y_t$, $$\begin{aligned} \operatorname{E}\!\left[Y_t Y_0\right] & = \operatorname{P}(Y_t = 1 \mid Y_0 = 1) \operatorname{P}(Y_0 = 1) \\ & = (q_1 + q_0 (1-s)^t) q_1 \\ & = q_1^2 + q_0 q_1 (1-s)^t \end{aligned}$$ thus $$\begin{aligned} \operatorname{Cov}\!\left[Y_t, Y_0\right] & = \operatorname{E}\!\left[Y_t Y_0\right] - \operatorname{E}\!\left[Y_t\right] \operatorname{E}\!\left[Y_0\right] \\ & = q_1^2 + q_0 q_1 (1-s)^t - q_1^2 \\ & = q_0 q_1 (1-s)^t \\ \operatorname{Cor}\!\left[Y_t, Y_0\right] & = (1-s)^t \end{aligned}$$ Since correlation is invariant under the affine map between $Z_t$ and $Y_t$, $\operatorname{Cor}\!\left[Z_t, Z_0\right] = (1-s)^t$, and an exponential in $t$ is multiplicative across any intermediate time. QED
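The closed form $(1-s)^t$ can be verified exactly from powers of the transition matrix. A minimal sketch (assuming NumPy; the values of $p_0$ and $p_1$ below are arbitrary):

```python
import numpy as np

# Two-state chain on {0, 1} with the transition probabilities from the proof:
# p0 = P(Y1=0 | Y0=1), p1 = P(Y1=1 | Y0=0).
p0, p1 = 0.3, 0.2
s = p0 + p1
P = np.array([[1 - p1, p1],   # row = current state, column = next state
              [p0, 1 - p0]])
q = np.array([p0 / s, p1 / s])  # stationary distribution (q0, q1)
var = q[0] * q[1]               # Var(Y_t) = q0 * q1

def corr(t):
    """Cor[Y_t, Y_0] computed from the t-step transition matrix."""
    Pt = np.linalg.matrix_power(P, t)
    e = q[1] * Pt[1, 1]  # E[Y_t Y_0] = P(Y_t = 1 and Y_0 = 1), since Y is 0/1
    return (e - q[1] ** 2) / var

for t in (1, 2, 5):
    print(t, corr(t), (1 - s) ** t)  # the two columns match
```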

Counterexample: real-valued 3-state Markov chain

Let $Z_t$ be a stationary Markov process such that $$\operatorname{P}(Z_t = -1) = \operatorname{P}(Z_t = 0) = \operatorname{P}(Z_t = 1) = 1/3$$ $$\begin{aligned} \operatorname{P}(Z_{t+1} = 0 \mid Z_t = -1) & = 1/2 \\ \operatorname{P}(Z_{t+1} = 1 \mid Z_t = 0) & = 1/2 \\ \operatorname{P}(Z_{t+1} = -1 \mid Z_t = 1) & = 1/2 \end{aligned}$$ and for all $i \in \{-1, 0, 1\}$, $$\operatorname{P}(Z_{t+1} = i \mid Z_t = i) = 1/2$$
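As a quick consistency check, a short sketch (assuming NumPy) confirms that this transition matrix is stochastic and that the uniform distribution is indeed stationary under it:

```python
import numpy as np

# Transition matrix with rows/columns ordered as states -1, 0, 1.
P = np.array([[0.5, 0.5, 0.0],   # from -1: stay, or move to 0
              [0.0, 0.5, 0.5],   # from  0: stay, or move to 1
              [0.5, 0.0, 0.5]])  # from  1: stay, or move to -1
q = np.full(3, 1 / 3)            # the claimed stationary distribution

print(P.sum(axis=1))  # each row sums to 1
print(q @ P)          # equals q again, so uniform is stationary
```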

Conveniently $\operatorname{E}\!\left[Z_t\right] = 0$, thus $\operatorname{Cov}\!\left[Z_t, Z_s\right] = \operatorname{E}\!\left[Z_t Z_s\right]$. For one time step we have $$\begin{aligned} \operatorname{P}(Z_1 = 1 \wedge Z_0 = 1) & = (1/3)(1/2) \\ \operatorname{P}(Z_1 = -1 \wedge Z_0 = 1) & = (1/3)(1/2) \\ \operatorname{P}(Z_1 = 1 \wedge Z_0 = -1) & = 0 \\ \operatorname{P}(Z_1 = -1 \wedge Z_0 = -1) & = (1/3)(1/2) \end{aligned}$$ thus the autocovariance at one time step is positive: $$\operatorname{E}\!\left[Z_1 Z_0\right] = (1 \cdot 1)\frac{1}{6} + (-1 \cdot 1)\frac{1}{6} + (-1 \cdot -1)\frac{1}{6} = \frac{1}{6}$$ For two time steps $$\begin{aligned} \operatorname{P}(Z_2 = 1 \wedge Z_0 = 1) & = (1/3)(1/2)^2 \\ \operatorname{P}(Z_2 = -1 \wedge Z_0 = 1) & = (1/3)\left[(1/2)^2 + (1/2)^2\right] \\ \operatorname{P}(Z_2 = 1 \wedge Z_0 = -1) & = (1/3)(1/2)^2 \\ \operatorname{P}(Z_2 = -1 \wedge Z_0 = -1) & = (1/3)(1/2)^2 \end{aligned}$$ thus the autocovariance at two time steps is negative: $$\operatorname{E}\!\left[Z_2 Z_0\right] = (1 \cdot 1)\frac{1}{12} + (-1 \cdot 1)\frac{2}{12} + (1 \cdot -1)\frac{1}{12} + (-1 \cdot -1)\frac{1}{12} = -\frac{1}{12}$$ A product of a positive correlation with itself cannot be negative, thus $$\operatorname{Cor}\!\left[Z_2, Z_0\right] \neq \operatorname{Cor}\!\left[Z_2, Z_1\right] \operatorname{Cor}\!\left[Z_1, Z_0\right]$$ Concretely, with $\sigma^2 = 2/3$: $\operatorname{Cor}\!\left[Z_1, Z_0\right] = 1/4$ and $\operatorname{Cor}\!\left[Z_2, Z_0\right] = -1/8 \neq (1/4)^2$.
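The same quantities can be reproduced mechanically. A minimal sketch (assuming NumPy) computes $\operatorname{E}\!\left[Z_n Z_0\right]$ from powers of the transition matrix and exhibits the sign flip that breaks multiplicativity:

```python
import numpy as np

states = np.array([-1.0, 0.0, 1.0])
# Transition matrix with rows/columns ordered as states -1, 0, 1.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
q = np.full(3, 1 / 3)  # uniform stationary distribution

def e_zz(n):
    """E[Z_n Z_0] under the stationary distribution, via the n-step matrix."""
    Pn = np.linalg.matrix_power(P, n)
    return sum(q[i] * states[i] * Pn[i, j] * states[j]
               for i in range(3) for j in range(3))

var = e_zz(0)         # sigma^2 = 2/3
rho1 = e_zz(1) / var  # one-step autocorrelation, positive
rho2 = e_zz(2) / var  # two-step autocorrelation, negative
print(e_zz(1), e_zz(2), rho1, rho2)
```

A positive one-step correlation squared can never equal a negative two-step correlation, which is exactly the failure of multiplicativity.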

References

1. Hamilton JD. Time series analysis. Princeton, NJ: Princeton University Press; 1994.