
Consider an irreducible Markov chain with a finite set of states \(\{1,2, \ldots, N\}\). Let \(\left\|P_{ij}\right\|\) be the transition probability matrix of the Markov chain and denote by \(\left\{\pi_{j}\right\}\) the stationary distribution of the process. Let \(\left\|P_{ij}^{(m)}\right\|\) denote the \(m\)-step transition probability matrix. Let \(\varphi(x)\) be a concave function on \(x \geq 0\) and define $$ E_{m}=\sum_{j=1}^{N} \pi_{j} \varphi\left(P_{jl}^{(m)}\right) \quad \text{with } l \text{ fixed.} $$ Prove that \(E_{m}\) is a nondecreasing function of \(m\), i.e., \(E_{m+1} \geq E_{m}\) for all \(m \geq 1\).

Short Answer

Expert verified
In short, \(E_{m}\) is nondecreasing in \(m\) because of three facts: the Chapman–Kolmogorov equation \(P_{jl}^{(m+1)}=\sum_{i} P_{ji} P_{il}^{(m)}\), Jensen's inequality for the concave function \(\varphi\), and the stationarity of \(\left\{\pi_{j}\right\}\). Chaining them gives \[E_{m+1}=\sum_{j=1}^{N} \pi_{j} \varphi\left(\sum_{i} P_{ji} P_{il}^{(m)}\right) \geq \sum_{j=1}^{N} \pi_{j} \sum_{i} P_{ji} \varphi\left(P_{il}^{(m)}\right)=\sum_{i=1}^{N} \pi_{i} \varphi\left(P_{il}^{(m)}\right)=E_{m}.\] Therefore \(E_{m+1} \geq E_{m}\) for all \(m \geq 1\), and \(E_{m}\) is a nondecreasing function of \(m\).

Step by step solution

01

Introduction to the proof

In order to prove that \(E_{m}\) is a nondecreasing function of \(m\), we need to show that \(E_{m+1} \geq E_{m}\) for all \(m \geq 1\). Let's first write down the expressions for \(E_{m+1}\) and \(E_{m}\): \[E_{m+1} = \sum_{j=1}^{N} \pi_{j} \varphi\left(P_{jl}^{(m+1)}\right)\] \[E_{m} = \sum_{j=1}^{N} \pi_{j} \varphi\left(P_{jl}^{(m)}\right)\] The plan is to bound \(E_{m+1}\) from below by \(E_{m}\), using the Chapman–Kolmogorov equation, the concavity of \(\varphi\), and the stationarity of \(\left\{\pi_{j}\right\}\).
02

Using the transition probability properties

By the Chapman–Kolmogorov equation, the \((m+1)\)-step transition probabilities factor through a single step: \[P_{jl}^{(m+1)} = \sum_{i=1}^{N} P_{ji} P_{il}^{(m)}.\] Since \(P_{ji} \geq 0\) and \(\sum_{i=1}^{N} P_{ji} = 1\), this exhibits \(P_{jl}^{(m+1)}\) as a convex combination of the numbers \(P_{1l}^{(m)}, \ldots, P_{Nl}^{(m)}\) — exactly the setting in which concavity can be applied. A short numerical sketch of this factorization follows below.
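To make this concrete, here is a minimal numerical sketch (not part of the proof) checking the Chapman–Kolmogorov factorization on a hypothetical 3-state chain; the matrix `P` is invented purely for illustration:

```python
import numpy as np

# A small illustrative transition matrix (hypothetical 3-state chain,
# chosen only for this sketch; any stochastic matrix works).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

m = 4
P_m = np.linalg.matrix_power(P, m)       # m-step matrix P^(m)
P_m1 = np.linalg.matrix_power(P, m + 1)  # (m+1)-step matrix P^(m+1)

# Chapman-Kolmogorov: P^(m+1)[j, l] = sum_i P[j, i] * P^(m)[i, l]
assert np.allclose(P_m1, P @ P_m)

# The weights P[j, i] are nonnegative and each row sums to 1, so each
# entry of P^(m+1) is a convex combination of entries of P^(m).
assert np.allclose(P.sum(axis=1), 1.0)
```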
03

Applying the concave function

Jensen's inequality states that for a concave function \(\varphi\), weights \(w_{i} \geq 0\) with \(\sum_{i} w_{i} = 1\), and points \(x_{i} \geq 0\): \[\varphi\left(\sum_{i} w_{i} x_{i}\right) \geq \sum_{i} w_{i} \varphi\left(x_{i}\right).\] Taking \(w_{i} = P_{ji}\) and \(x_{i} = P_{il}^{(m)}\), and using the Chapman–Kolmogorov representation from the previous step, we obtain for each \(j\): \[\varphi\left(P_{jl}^{(m+1)}\right) = \varphi\left(\sum_{i} P_{ji} P_{il}^{(m)}\right) \geq \sum_{i} P_{ji} \varphi\left(P_{il}^{(m)}\right).\]
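A quick numeric spot-check of Jensen's inequality, using \(\varphi(x)=\sqrt{x}\) as a stand-in concave function (any concave \(\varphi\) would do; the weights and points here are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

phi = np.sqrt  # sqrt is concave on [0, inf); stands in for a generic phi

# Random convex weights w_i >= 0 with sum 1, and points x_i in [0, 1]
w = rng.random(5)
w /= w.sum()
x = rng.random(5)

# Jensen for a concave function: phi(sum_i w_i x_i) >= sum_i w_i phi(x_i)
assert phi(w @ x) >= w @ phi(x)
```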
04

Rearranging the terms

Multiplying the inequality from the previous step by \(\pi_{j} \geq 0\) and summing over \(j = 1, 2, \ldots, N\) gives \[\sum_{j=1}^{N} \pi_{j} \varphi\left(P_{jl}^{(m+1)}\right) \geq \sum_{j=1}^{N} \pi_{j} \sum_{i=1}^{N} P_{ji} \varphi\left(P_{il}^{(m)}\right).\] Since both sums are finite, we may interchange the order of summation on the right-hand side: \[\sum_{j=1}^{N} \pi_{j} \sum_{i=1}^{N} P_{ji} \varphi\left(P_{il}^{(m)}\right) = \sum_{i=1}^{N}\left(\sum_{j=1}^{N} \pi_{j} P_{ji}\right) \varphi\left(P_{il}^{(m)}\right).\]
05

Using the stationary distribution property

The stationary distribution \(\left\{\pi_{j}\right\}\) satisfies \[\pi_{i} = \sum_{j=1}^{N} \pi_{j} P_{ji}, \quad i = 1, 2, \ldots, N,\] so the inner sum on the right-hand side collapses: \[\sum_{i=1}^{N}\left(\sum_{j=1}^{N} \pi_{j} P_{ji}\right) \varphi\left(P_{il}^{(m)}\right) = \sum_{i=1}^{N} \pi_{i} \varphi\left(P_{il}^{(m)}\right) = E_{m}.\]
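For completeness, a short sketch computing the stationary distribution of the same hypothetical chain and verifying the identity \(\sum_{j} \pi_{j} P_{ji} = \pi_{i}\); the left-eigenvector computation is one standard way to obtain \(\pi\), assumed here for illustration:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])  # same hypothetical chain as above

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# i.e. pi solves pi = pi P, normalized so its entries sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

# Stationarity in the form used above: sum_j pi_j * P[j, i] = pi_i
assert np.allclose(pi @ P, pi)
```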
06

Completing the proof

Now, let's go back to the definition of \(E_{m}\) and chain the results of the previous steps: \[E_{m+1} = \sum_{j=1}^{N} \pi_{j} \varphi\left(P_{jl}^{(m+1)}\right) \geq \sum_{i=1}^{N}\left(\sum_{j=1}^{N} \pi_{j} P_{ji}\right) \varphi\left(P_{il}^{(m)}\right) = \sum_{i=1}^{N} \pi_{i} \varphi\left(P_{il}^{(m)}\right) = E_{m}.\] Thus \(E_{m+1} \geq E_{m}\) for all \(m \geq 1\), which completes the proof that \(E_{m}\) is a nondecreasing function of \(m\). Note that irreducibility is used only to guarantee that the stationary distribution \(\left\{\pi_{j}\right\}\) exists and is unique; the inequality itself needs only stationarity and concavity.
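Finally, a self-contained numerical sanity check of the conclusion: it computes \(E_{m}\) for \(m = 1, \ldots, 10\) on the hypothetical chain above, with \(\varphi(x)=\sqrt{x}\) and \(l\) fixed, and confirms the sequence is nondecreasing. This is an illustration under assumed example data, not a substitute for the proof:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])  # hypothetical irreducible chain from above

# Stationary distribution pi (pi = pi P, entries summing to 1)
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

phi = np.sqrt  # a concrete concave phi; any concave function on [0,1] works
l = 0          # the fixed target state l

# E_m = sum_j pi_j * phi(P^(m)[j, l]) for m = 1, ..., 10
E = []
P_m = P.copy()
for m in range(1, 11):
    E.append(pi @ phi(P_m[:, l]))
    P_m = P_m @ P

# Monotonicity established by the proof: E_{m+1} >= E_m
assert all(E[k + 1] >= E[k] - 1e-12 for k in range(len(E) - 1))
print(np.round(E, 6))
```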


