$ \newcommand{\Ha}{\mathcal H} \newcommand{\p}[1]{\left(#1\right)} \newcommand{\tr}{\mathrm{tr}} \newcommand{\vect}[1]{\underline{#1}} \newcommand{\mR}{\mathcal{R}} \newcommand{\Lf}{\mathcal{L}} \newcommand{\mE}{\mathcal{E}} \newcommand{\mG}{\mathcal{G}} \newcommand{\mA}{\mathcal{A}} \newcommand{\mP}{\mathcal{P}} \newcommand{\mQ}[1]{\mathcal{Q}\p{#1}} \newcommand{\fLf}[1]{ \frac{\Lf\p{#1} } {\Lf\p{\mE^n}}} \newcommand{\Esp}[1]{\left<#1\right>} \newcommand{\bra}[1]{\left<#1\right|} \newcommand{\ket}[1]{\left|#1\right>} \newcommand{\as}[1]{ \overset{\mathrm{a.s}} {#1} } \newcommand{\conv}[1] {\underset{#1 \rightarrow +\infty} {\rightarrow} } \newcommand{\convAs}{ \as{\conv{n}} } \newcommand{\R}{\mathbb{R}} \newcommand{\Prob}{\mathbb{P}} \newcommand{\Dist}[1]{\overset{\mathcal{D}}{#1}} \newcommand{\convD} {\Dist{ \conv{n} } } \newcommand{\Lc}{L_c} \newcommand{\Lg}{L_{\Gamma}} \newcommand{\lNormal}[2]{\mathcal{N}_{#1,#2}} \newcommand{\iid}{\mathrm{i.i.d.}} \newcommand{\cpath}{\mapsto} \newcommand{\wcon}{\leftrightarrow} \newcommand{\scon}{\leftrightarrows} \newcommand {\card} {\mathrm{card}} $
Random vectors with a matrix representation:
a framework for out-of-equilibrium systems
Florian Angeletti
13 November 2013

At equilibrium:

  • Microcanonical ensemble: $ p(x_1, \dots, x_n ) = \mathrm{cst} $
  • Canonical ensemble: $ p(x_1, \dots, x_n ) = e^{-\beta \Ha(x_1,\dots, x_n)} $

Out-of-equilibrium:

  • Constant flow of heat or particles
  • Dynamic description
  • Correlation
  • Stationary distribution?

A simple and famous out-of-equilibrium system

How do we describe the stationary distribution?

Aim

  • Theoretical framework for stationary distribution with correlation
  • Inspired by the matrix product ansatz
\[ p(x_1,\dots,x_n) = \fLf{ \mR(x_1) \dots \mR(x_n)} \]
  • $ d> 1 $: Non-commutativity $\implies$ Correlation
  • Product structure: $p(x_1,\dots,x_n) = \fLf{ \mR(x_1) \dots \mR(x_n)}$
  • Moment matrix: $ \mQ{q} = \int x^q \mR(x)\, dx $, with $\mE = \mQ{0} = \int \mR(x)\, dx$
\[ \Esp{X_k^p} = \frac{\Lf\p{\mE^{k-1} \mQ{p} \mE^{n-k}}}{\Lf\p{\mE^n} } \] \[ \Esp{X_k X_l} = \frac{\Lf\p{\mE^{k-1} \mQ{1} \mE^{l-k-1} \mQ{1} \mE^{n-l}}}{\Lf\p{\mE^n} } \] \[ \Esp{X_k X_l X_m} = \frac{\Lf\p{\mE^{k-1} \mQ{1} \mE^{l-k-1} \mQ{1} \mE^{m-l-1} \mQ{1} \mE^{n-m}}}{\Lf\p{\mE^n} } \] \[ \dots \]
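The moment formulas above are directly computable. A minimal numpy sketch, assuming the concrete linear form $\Lf(M) = \tr(\mA^T M) = \sum_{i,j} A_{i,j} M_{i,j}$ (a choice consistent with the commutation condition used later) and a toy $2\times 2$ example where the conditional means $m_{i,j}$ are attached to each hidden transition:

```python
import numpy as np

def L(A, M):
    """Linear form L(M) = tr(A^T M) = sum_ij A_ij M_ij (an assumed
    concrete choice, consistent with the condition [A^T, E] = 0)."""
    return float(np.sum(A * M))

def moment(A, E, Qp, k, n):
    """<X_k^p> = L(E^{k-1} Q(p) E^{n-k}) / L(E^n)."""
    mp = np.linalg.matrix_power
    num = mp(E, k - 1) @ Qp @ mp(E, n - k)
    return L(A, num) / L(A, mp(E, n))

def two_point(A, E, Q1, k, l, n):
    """<X_k X_l> = L(E^{k-1} Q(1) E^{l-k-1} Q(1) E^{n-l}) / L(E^n), k < l."""
    mp = np.linalg.matrix_power
    num = mp(E, k - 1) @ Q1 @ mp(E, l - k - 1) @ Q1 @ mp(E, n - l)
    return L(A, num) / L(A, mp(E, n))

# Toy example: doubly stochastic E (so [A^T, E] = 0 holds for A = ones)
# and transition-wise means m_ij; Q(1)_ij = m_ij E_ij.
A = np.ones((2, 2))
E = np.full((2, 2), 0.5)
m = np.array([[0.0, 1.0], [1.0, 0.0]])
Q1 = m * E

print(moment(A, E, Q1, 2, 5))      # <X_2> for n = 5
print(two_point(A, E, Q1, 2, 4, 6))  # <X_2 X_4> for n = 6
```

For this rank-one $\mE$ the variables decouple, so the two-point moment factorizes; richer $\mE$ matrices below produce genuine correlation.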
Translation invariance: \[ p(X_{k_1}=x_1,\dots, X_{k_l}= x_l ) = p(X_{c+k_1}=x_1,\dots, X_{c+k_l}= x_l ) \]

Sufficient condition

\[ [\mA^T,\mE] = \mA^T \mE - \mE \mA^T = 0 \] \[ \forall M,\quad \Lf(M\mE) = \Lf(\mE M) \]
\[ p(x_1,\dots,x_n) = \fLf{ \mR(x_1) \dots \mR(x_n)} \] How do we generate a random vector $X$ for a given triple $(\mA,\mE,\mP)$?

Markov chain

A correlated random process with short-term memory, characterized by:

  • an initial distribution
  • a transition matrix

Hidden Markov Chain $\Gamma$

\[ p( \Gamma_{k+1}=j | \Gamma_k=i, \Gamma_{n+1}=f ) = \frac{\mE_{i,j} \p{\mE^{n-k}}_{j, f}}{ \p{ \mE^{n-k+1}}_{i,f}} \] \[ p(\Gamma_1=i,\Gamma_{n+1}=f) = \frac{A_{i,f} \p{\mE^n}_{i,f} }{\Lf\p{\mE^n}} \]

Conditional pdf $(X|\Gamma)$

\[ p( X_k=x | \Gamma_k=i, \Gamma_{k+1}=j) = \mP_{i,j}(x) \]
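The two conditional laws above yield a direct synthesis algorithm: draw the end points of the hidden chain, propagate it forward transition by transition, then draw each $X_k$ from the conditional pdf attached to $(\Gamma_k, \Gamma_{k+1})$. A minimal sketch, with the hidden chain indexed $\Gamma_1,\dots,\Gamma_{n+1}$, and assuming the linear form $\Lf(M)=\tr(A^T M)$ and Gaussian conditionals $\mP_{i,j} = \lNormal{m_{i,j}}{1}$ (both illustrative choices, not fixed by the framework):

```python
import numpy as np

def sample_hmm(A, E, m, n, rng):
    """Draw (Gamma, X): hidden chain Gamma_1..Gamma_{n+1}, then X_k from
    the conditional pdf attached to the transition (Gamma_k, Gamma_{k+1})."""
    d = E.shape[0]
    En = np.linalg.matrix_power(E, n)
    # End-point law: p(Gamma_1 = i, Gamma_{n+1} = f) ∝ A_if (E^n)_if
    w = (A * En).ravel()
    i, f = np.unravel_index(rng.choice(d * d, p=w / w.sum()), (d, d))
    gamma = [i]
    for k in range(1, n + 1):
        # p(Gamma_{k+1} = j | Gamma_k = i, Gamma_{n+1} = f)
        #   ∝ E_ij (E^{n-k})_jf
        Em = np.linalg.matrix_power(E, n - k)
        p = E[gamma[-1], :] * Em[:, f]
        gamma.append(rng.choice(d, p=p / p.sum()))
    gamma = np.array(gamma)
    # Conditional layer: X_k ~ N(m_ij, 1), an illustrative choice of P_ij
    x = rng.normal(m[gamma[:-1], gamma[1:]], 1.0)
    return gamma, x

rng = np.random.default_rng(0)
A = np.ones((2, 2))
E = np.array([[0.8, 0.2], [0.2, 0.8]])
m = np.array([[1.0, 0.0], [0.0, -1.0]])
gamma, x = sample_hmm(A, E, m, n=50, rng=rng)
```

The cost is dominated by the matrix powers, which can be precomputed once, so synthesis is linear in $n$ for fixed dimension $d$.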

Matrix representation

  • Algebraic properties
  • Statistical properties computation

Hidden Markov Model

  • $2$-layer model: correlated layer $+$ independent layer
  • Efficient synthesis
\[ \Esp{X_k X_l} = \fLf{ \mE^{k-1} \mQ{1} \mE^{l-k-1} \mQ{1} \mE^{n-l} } \] \[ \mE = B^{-1} \begin{pmatrix} J_{1,1} & 0 & \cdots & \cdots & 0 \\ 0 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & J_{k,l} & \ddots& \vdots \\ \vdots & & \ddots & \ddots& 0 \\ 0 & \cdots & \cdots & 0 & J_{r,m} \end{pmatrix} B, \quad J_{k,l} = \begin{pmatrix} \lambda_k& 1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & \ddots & 1 \\ 0 & \cdots & \cdots & 0 & \lambda_k \end{pmatrix} \]

Case 1: Short-range correlation

  • More than one distinct eigenvalue $\lambda_k$: Short-range exponential correlation
    • $\Esp{X_k X_l} - \Esp{X_k}\Esp{X_l} \approx \sum \alpha_k \exp\p{- |l-k| [ \frac{1}{\tau_k} + 2\imath \pi \omega_k ] }$
    • $\tau_k = 1/(\ln \lambda_1 - \ln |\lambda_k|)$
    • $\omega_k = \mathrm {Arg}(\lambda_k)/(2\pi)$
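The eigenvalue ratio fixes the correlation time. A numerical check on a toy $2$-state symmetric chain (an assumed example, with $A = $ ones so that the stationarity condition holds, and $X_k$ equal to the $\pm 1$ label of the hidden state): the second eigenvalue is $\lambda_2 = 0.6$, so the covariance should decay by a factor $\lambda_2/\lambda_1 = e^{-1/\tau}$ per step.

```python
import numpy as np

def L(A, M):
    return float(np.sum(A * M))  # assumed linear form L(M) = tr(A^T M)

def cov(A, E, Q1, k, l, n):
    """Cov(X_k, X_l) from the matrix moment formulas."""
    mp = np.linalg.matrix_power
    Z = L(A, mp(E, n))
    m1k = L(A, mp(E, k - 1) @ Q1 @ mp(E, n - k)) / Z
    m1l = L(A, mp(E, l - 1) @ Q1 @ mp(E, n - l)) / Z
    m2 = L(A, mp(E, k - 1) @ Q1 @ mp(E, l - k - 1) @ Q1 @ mp(E, n - l)) / Z
    return m2 - m1k * m1l

A = np.ones((2, 2))
E = np.array([[0.8, 0.2], [0.2, 0.8]])  # eigenvalues 1 and 0.6
s = np.array([1.0, -1.0])               # X_k = label of the hidden state
Q1 = s[:, None] * E                     # m_ij = s_i
n = 40
c2 = cov(A, E, Q1, 10, 12, n)
c3 = cov(A, E, Q1, 10, 13, n)
# ratio c3/c2 ≈ 0.6 = |lambda_2|/lambda_1 = exp(-1/tau)
print(c2, c3, c3 / c2)
```

Here the covariance at lag $d$ is exactly $0.6^d$, matching $\tau = 1/(\ln \lambda_1 - \ln|\lambda_2|)$.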

Case 2: Constant correlation

  • More than one block $J_{1,k}$: Constant correlation term

Case 3: Long-range correlation

  • At least one block $J_{1,k}$ with size $p > 1$: Polynomial long-range correlation
    • $\Esp{X_k X_l} - \Esp{X_k}\Esp{X_l} \approx P(\frac{k}{n}, \frac{k-l}{n}, \frac{l}{n} ), \quad P \in \R[X,Y,Z]$
\[ \mE = \begin{pmatrix} I_1 & * & T_{k,l} \\ & \ddots & * \\ 0 & & I_r \\ \end{pmatrix} \]

Necessary but not sufficient conditions

(Figure: realizations compared with the prescribed marginal, correlation, and squared correlation.)

Sum

\[ S(\vect X) = \frac{1} {n} \sum_{i=1}^n X_i \]
For correlated random variables, condition on the hidden variables $\vect \nu$ and combine two ingredients:
\[ p( S(\vect X) = s ) = \sum_{\vect \nu} p(\vect \nu) p( S(\vect X | \vect \nu ) = s ) \]

Limit distribution for $S(\vect X|\vect \nu)$

\[ p( S(\vect X | \vect \nu ) = s ) \]
+

Limit distribution for $\vect \nu$

\[ p( \vect \nu ) \]
\[ \Downarrow \]

Limit distribution for $S(\vect X)$

\[ p( S(\vect X) = s ) \]
Three important subclasses, corresponding to the three correlation cases above (short-range, constant, long-range):

Sum

\[ S(\vect X) = \frac{1} {n} \sum_{i=1}^n X_i \]

Large deviation principle

\[ P( S(\vect X) = s ) \approx e^{-n I(s) } \]
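This scaling is easy to check in the simplest independent case before asking the question for correlated $X$. For $\iid$ Bernoulli$(1/2)$ variables, Cramér's theorem gives the explicit rate $I(s) = s \ln(2s) + (1-s)\ln(2(1-s))$; a sketch comparing the exact tail probability with $e^{-n I(s)}$:

```python
import math

def rate_bernoulli(s, p=0.5):
    """Cramer rate function for i.i.d. Bernoulli(p)."""
    return s * math.log(s / p) + (1 - s) * math.log((1 - s) / (1 - p))

def tail_prob(n, s, p=0.5):
    """Exact P(S >= s) for S = (1/n) * sum of n Bernoulli(p) variables."""
    k0 = math.ceil(n * s)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k0, n + 1))

n, s = 200, 0.7
empirical = -math.log(tail_prob(n, s)) / n
# close to I(s), up to O(log n / n) prefactor corrections
print(empirical, rate_bernoulli(s))
```

The two numbers agree up to the usual subexponential prefactor; the question below is whether such a rate function still exists for the correlated matrix-represented vectors.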
Do we have a large deviation principle?

Gärtner-Ellis Theorem

If $\Phi(w) = \lim_{n \rightarrow +\infty} \frac{1}{n} \ln \Esp{e^{n w S(\vect X)}}$ exists and is differentiable, then $I(s)$ exists and is given by the Legendre transform \[ I(s) = \sup_{w} \{w s - \Phi(w) \} \]

Large deviation principle:

\[ I(s) = \sup_{w} \{w s - \Phi(w) \} \]

Large deviation principle for short-range correlation:

  • Short-range correlation: the largest eigenvalue $\lambda_1(w)$ of the tilted matrix $\int e^{w x} \mR(x)\, dx$ is differentiable near $0$
\[ I(s) = \sup_{w} \{w s - \ln \lambda_1(w) \} \]
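This rate function can be evaluated numerically. A sketch, assuming (by analogy with transfer matrices) that $\Phi(w) = \ln \lambda_1(w)$ where $\lambda_1(w)$ is the largest eigenvalue of $\int e^{wx} \mR(x)\, dx$; for the illustrative $\pm 1$ two-state chain used earlier (where $X_k$ is the label $s_i$ of the hidden state) the tilted matrix is simply $\mE_{i,j} e^{w s_i}$:

```python
import numpy as np

def lambda1(E, s, w):
    """Largest eigenvalue of the tilted matrix E(w)_ij = E_ij exp(w s_i)
    (assumed form for the toy chain where X_k is the hidden-state label)."""
    Ew = E * np.exp(w * s)[:, None]
    return max(abs(np.linalg.eigvals(Ew)))

def rate(E, s, x, ws=np.linspace(-5.0, 5.0, 2001)):
    """I(x) = sup_w { w x - ln lambda_1(w) }, by grid search over w."""
    return max(w * x - np.log(lambda1(E, s, w)) for w in ws)

E = np.array([[0.8, 0.2], [0.2, 0.8]])
s = np.array([1.0, -1.0])
print(rate(E, s, 0.0))  # ≈ 0: the mean value is not exponentially rare
print(rate(E, s, 0.5))  # > 0: deviations are exponentially suppressed
```

The grid search stands in for the exact Legendre transform; since $\ln \lambda_1(w)$ is smooth here, the supremum is well behaved, which is precisely what fails in the long-range and constant-correlation cases below.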

Long-range or constant correlation

$\Phi(w)$ is not differentiable at $0$.

Perspective

  • Physical model
  • Infinite dimension