Stochastic Discrete-Time Update Operator

Prerequisites

Description

In the case of a stochastic discrete-time system obeying the Markov property, a Markov transition matrix can be used as the update operator.
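
For instance, a hypothetical two-state system with state space \( \{ \mathbf{x}_1, \mathbf{x}_2 \} \) might have the transition matrix

\[ T = \begin{pmatrix} 0.9 & 0.1 \\ 0.3 & 0.7 \end{pmatrix} \]

where the entry in row \( i \) and column \( j \) gives the probability of moving from \( \mathbf{x}_i \) to \( \mathbf{x}_j \), so each row sums to one.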

Equation

\[p\left( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-2iter}{j}} \,\vert\, \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-iter}{i}} \right) = \htmlId{tooltip-updateOperator}{T}(\htmlId{tooltip-iter}{i}, \htmlId{tooltip-2iter}{j})\]
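
As a minimal, non-authoritative sketch of this equation (assuming NumPy and a hypothetical three-state chain), row \( i \) of the matrix below is treated as the conditional distribution of the next state given that the current state is \( \mathbf{x}_i \), and the next state is sampled from it.

```python
import numpy as np

# Hypothetical transition matrix for a three-state chain.
# Row i holds p(X_{n+1} = x_j | X_n = x_i); each row sums to 1.
T = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.0, 0.5, 0.5],
])

rng = np.random.default_rng(seed=0)

def step(i: int) -> int:
    """Sample the index j of the next state, given the current state index i."""
    return rng.choice(len(T), p=T[i])

# One transition: the chain, currently in state x_1 (index 0), moves to state x_j.
j = step(0)
print(f"next state index: {j}")
```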

Symbols Used

\(X\)

This symbol represents a random variable. It is a measurable function that maps a sample space of possible outcomes to a measurable space.

\(j\)

This is a secondary symbol for an iterator, a variable that changes value to refer to a sequence of elements.

\(i\)

This is the symbol for an iterator, a variable that changes value to refer to a sequence of elements.

\(T\)

This is the symbol for a dynamical system's update operator.

\(\mathbf{x}\)

This symbol represents a state of the dynamical system at some time point.

\(n\)

This symbol represents any given whole number, \( n \in \htmlId{tooltip-setOfWholeNumbers}{\mathbb{W}}\).

Derivation

  1. Consider a stochastic system with discrete time and a finite number \( N = \vert \htmlId{tooltip-stateSpace}{\mathcal{X}} \vert \) of possible states (using \( N \) rather than \( \htmlId{tooltip-wholeNumber}{n} \), which already denotes the time index):
    \[ \htmlId{tooltip-stateSpace}{\mathcal{X}} = \{ \htmlId{tooltip-systemState}{\mathbf{x}}_1, \htmlId{tooltip-systemState}{\mathbf{x}}_2, \ldots, \htmlId{tooltip-systemState}{\mathbf{x}}_N \} \]
  2. Assume the system obeys the Markov property:
    \[p\left( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} = \htmlId{tooltip-systemState}{\mathbf{x}}_j \,\vert\, \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_i, ... , \mathbf{\htmlId{tooltip-randomVar}{X}}_{1} = \htmlId{tooltip-systemState}{\mathbf{x}}_1 \right) = p\left( \mathbf{X}_{\htmlId{tooltip-wholeNumber}{n}+1} = \htmlId{tooltip-systemState}{\mathbf{x}}_j \,\vert\, \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_i \right)\]
  3. The transition from the current state \( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-iter}{i}} \) to the following state \( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} \) takes place as dictated by the system's transition matrix:
    \[\left[ \htmlId{tooltip-markovMatrix}{T} \right]_{\htmlId{tooltip-iter}{i},\htmlId{tooltip-2iter}{j}} = p\left( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-2iter}{j}} \,\vert\, \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-iter}{i}} \right)\]
  4. Given the distribution of \( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} \), the distribution of \( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} \) can then be computed by summing over the possible current states, and a realization of \( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} \) can be obtained by e.g. sampling from that distribution (see the sketch after this derivation):
    \[ p\left( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-2iter}{j}} \right) = \sum_{\htmlId{tooltip-iter}{i}=1}^{N} p\left( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-iter}{i}} \right) \left[ \htmlId{tooltip-markovMatrix}{T} \right]_{\htmlId{tooltip-iter}{i},\htmlId{tooltip-2iter}{j}} \]
  5. Thus the transition matrix functions as an update operator with values given by:
    \[ p\left( \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}+1} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-2iter}{j}} \,\vert\, \mathbf{\htmlId{tooltip-randomVar}{X}}_{\htmlId{tooltip-wholeNumber}{n}} = \htmlId{tooltip-systemState}{\mathbf{x}}_{\htmlId{tooltip-iter}{i}} \right) = \htmlId{tooltip-updateOperator}{T}(\htmlId{tooltip-iter}{i}, \htmlId{tooltip-2iter}{j}) \]
    as required.
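
The sketch below (again only an illustration, assuming NumPy and a hypothetical three-state transition matrix) carries out steps 4 and 5: multiplying the distribution of \( \mathbf{X}_{n} \) by the transition matrix yields the distribution of \( \mathbf{X}_{n+1} \), from which a realization can then be sampled.

```python
import numpy as np

# Hypothetical transition matrix; row i is p(X_{n+1} = x_j | X_n = x_i).
T = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.0, 0.5, 0.5],
])

# Distribution of X_n over the three states (here: the chain is certainly in x_1).
p_n = np.array([1.0, 0.0, 0.0])

# Step 4: the transition matrix acts as the update operator on distributions,
# p(X_{n+1} = x_j) = sum_i p(X_n = x_i) * T[i, j], i.e. a row-vector product.
p_next = p_n @ T

# Step 5: a realization of X_{n+1}, obtained by sampling from its distribution.
rng = np.random.default_rng(seed=1)
j = rng.choice(len(p_next), p=p_next)

print("p(X_{n+1}):", p_next)              # [0.7, 0.2, 0.1]
print("sampled next state index:", j)
```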

References

  1. Jaeger, H. (n.d.). Neural Networks (AI) (WBAI028-05) Lecture Notes BSc program in Artificial Intelligence. Retrieved May 17, 2024, from https://www.ai.rug.nl/minds/uploads/LN_NN_RUG.pdf
