Chapter 3: Problem 1
Consider the following random walk: $$ P_{i, i+1}=p \quad \text { with } \quad 0<p<1 $$
Consider a Markov chain with the \(N+1\) states \(0,1, \ldots, N\) and transition probabilities $$ \begin{aligned} &P_{i j}=\binom{N}{j} \pi_{i}^{\,j}\left(1-\pi_{i}\right)^{N-j}, \quad 0 \leq i, j \leq N, \\ &\pi_{i}=\frac{1-e^{-2 a i / N}}{1-e^{-2 a}}, \quad a>0. \end{aligned} $$ Note that 0 and \(N\) are absorbing states. Verify that \(\exp \left(-2 a X_{t}\right)\) is a martingale [or, equivalently, prove the identity \(E\left(\exp \left(-2 a X_{t+1}\right) \mid X_{t}\right)=\exp \left(-2 a X_{t}\right)\)], where \(X_{t}\) is the state at time \(t\) \((t=0,1,2, \ldots)\). Using this property, show that the probability \(P_{N}(k)\) of absorption into state \(N\) starting at state \(k\) is given by $$ P_{N}(k)=\frac{1-e^{-2 a k}}{1-e^{-2 a N}} $$
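The martingale identity can be checked numerically: given \(X_t = i\), the next state is binomial\((N, \pi_i)\), so the conditional expectation is the binomial sum \(\sum_j \binom{N}{j}\pi_i^j(1-\pi_i)^{N-j}e^{-2aj}\). The sketch below (not part of the problem text; the values of \(N\) and \(a\) are illustrative) verifies this equals \(e^{-2ai}\) for every state, and implements the resulting absorption probability:

```python
import math

def pi_i(i, N, a):
    """Success probability of the binomial transition out of state i."""
    return (1 - math.exp(-2 * a * i / N)) / (1 - math.exp(-2 * a))

def conditional_expectation(i, N, a):
    """E[exp(-2a*X_{t+1}) | X_t = i], summed over the binomial pmf."""
    p = pi_i(i, N, a)
    return sum(math.comb(N, j) * p**j * (1 - p)**(N - j) * math.exp(-2 * a * j)
               for j in range(N + 1))

def P_N(k, N, a):
    """Absorption probability at N starting from k (the stated answer)."""
    return (1 - math.exp(-2 * a * k)) / (1 - math.exp(-2 * a * N))

# martingale property: E[exp(-2a*X_{t+1}) | X_t = i] == exp(-2a*i)
N, a = 10, 0.3  # illustrative parameters
for i in range(N + 1):
    assert abs(conditional_expectation(i, N, a) - math.exp(-2 * a * i)) < 1e-12
```

Optional stopping then gives \(e^{-2ak} = P_N(k)\,e^{-2aN} + (1-P_N(k))\cdot 1\), which rearranges to the stated formula.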
Consider a discrete time Markov chain with states \(0,1, \ldots, N\) whose transition matrix has elements $$ P_{i j}=\begin{cases} \mu_{i}, & j=i-1, \\ \lambda_{i}, & j=i+1, \\ 1-\lambda_{i}-\mu_{i}, & j=i, \\ 0, & |j-i|>1, \end{cases} \qquad i, j=0,1, \ldots, N . $$ Suppose that \(\mu_{0}=\lambda_{0}=\mu_{N}=\lambda_{N}=0\), all other \(\mu_{i}\)'s and \(\lambda_{i}\)'s are positive, and the initial state of the process is \(k\). Determine the absorption probabilities at 0 and \(N\).
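A sketch of the standard answer (an assumption here, not quoted from the text): with \(\rho_j = \mu_j/\lambda_j\), the probability \(u_k\) of absorption at \(N\) starting from \(k\) is \(u_k = \bigl(\sum_{i=0}^{k-1}\rho_1\cdots\rho_i\bigr)\big/\bigl(\sum_{i=0}^{N-1}\rho_1\cdots\rho_i\bigr)\), with the empty product equal to 1; absorption at 0 has probability \(1-u_k\). The code checks that this \(u\) satisfies the harmonic equation \(u_i = \lambda_i u_{i+1} + \mu_i u_{i-1} + (1-\lambda_i-\mu_i)u_i\) at interior states:

```python
def absorption_at_N(k, lam, mu):
    """Probability of absorption at N from state k, for a birth-death chain
    whose interior state i (i = 1..N-1) has rates lam[i-1], mu[i-1]."""
    N = len(lam) + 1
    # prods[i] = rho_1 * ... * rho_i, with prods[0] = 1 (empty product)
    prods = [1.0]
    for i in range(1, N):
        prods.append(prods[-1] * mu[i - 1] / lam[i - 1])
    return sum(prods[:k]) / sum(prods)

# illustrative rates for N = 6 (states 0..6, interior states 1..5)
N = 6
lam = [0.3, 0.2, 0.4, 0.1, 0.3]
mu = [0.2, 0.3, 0.1, 0.4, 0.2]
u = [absorption_at_N(k, lam, mu) for k in range(N + 1)]

assert u[0] == 0.0 and abs(u[N] - 1.0) < 1e-12  # boundary conditions
for i in range(1, N):  # u is harmonic at interior states
    rhs = lam[i-1] * u[i+1] + mu[i-1] * u[i-1] + (1 - lam[i-1] - mu[i-1]) * u[i]
    assert abs(u[i] - rhs) < 1e-12
```

In the symmetric case \(\lambda_i = \mu_i\) this reduces to the gambler's-ruin answer \(u_k = k/N\).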
If \(\mathbf{P}\) is a finite Markov matrix, we define \(\mu(\mathbf{P})=\max _{i_{1}, i_{2}, j}\left(P_{i_{1} j}-P_{i_{2} j}\right)\). Suppose \(\mathbf{P}_{1}, \mathbf{P}_{2}, \ldots, \mathbf{P}_{k}\) are \(3 \times 3\) transition matrices of irreducible aperiodic Markov chains. Assume furthermore that for any set of integers \(\alpha_{i}\) \(\left(1 \leq \alpha_{i} \leq k\right)\), \(i=1,2, \ldots, m\), \(\prod_{i=1}^{m} \mathbf{P}_{\alpha_{i}}\) is also the matrix of an aperiodic irreducible Markov chain. Prove that, for every \(\varepsilon>0\), there exists an \(M(\varepsilon)\) such that \(m>M\) implies $$ \mu\left(\prod_{i=1}^{m} \mathbf{P}_{\alpha_{i}}\right)<\varepsilon \quad \text { for any set } \alpha_{i}\left(1 \leq \alpha_{i} \leq k\right), \quad i=1,2, \ldots, m $$
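The quantity \(\mu(\mathbf{P})\) is a contraction coefficient: it measures the largest column-wise spread between rows, and it shrinks under products of well-mixing matrices. The illustration below (matrices chosen by us, not from the text) computes \(\mu\) along a product drawn from a family of two \(3\times 3\) chains:

```python
def mu(P):
    """max over columns j and row pairs (i1, i2) of P[i1][j] - P[i2][j]."""
    n = len(P)
    return max(P[i1][j] - P[i2][j]
               for j in range(n) for i1 in range(n) for i2 in range(n))

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# two irreducible aperiodic 3x3 chains (illustrative)
P1 = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]
P2 = [[0.6, 0.2, 0.2], [0.1, 0.7, 0.2], [0.2, 0.2, 0.6]]

prod = P1
mus = [mu(prod)]
for P in [P2, P1, P1, P2, P2, P1]:  # an arbitrary choice of alpha_i's
    prod = matmul(prod, P)
    mus.append(mu(prod))

# mu of the product eventually drops below any epsilon
assert mus[-1] < 0.01
assert mus[-1] < mus[0]
```

This is the numerical content of the claim: \(\mu\) of the \(m\)-fold product tends to 0 uniformly over the choice of \(\alpha_i\).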
Consider an irreducible Markov chain with a finite set of states \(\{1,2, \ldots, N\}\). Let \(\left\|P_{i j}\right\|\) be the transition probability matrix of the Markov chain and denote by \(\left\{\pi_{j}\right\}\) the stationary distribution of the process. Let \(\left\|P_{i j}^{(m)}\right\|\) denote the \(m\)-step transition probability matrix. Let \(\varphi(x)\) be a concave function on \(x \geq 0\) and define $$ E_{m}=\sum_{j=1}^{N} \pi_{j} \varphi\left(P_{j l}^{(m)}\right) \quad \text { with } l \text { fixed. } $$ Prove that \(E_{m}\) is a nondecreasing function of \(m\), i.e., \(E_{m+1} \geq E_{m}\) for all \(m \geq 1\).
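The key step is Jensen's inequality: since \(P^{(m+1)}_{jl} = \sum_k P_{jk} P^{(m)}_{kl}\) and \(\varphi\) is concave, \(\varphi(P^{(m+1)}_{jl}) \geq \sum_k P_{jk}\,\varphi(P^{(m)}_{kl})\); weighting by \(\pi_j\) and using stationarity gives \(E_{m+1} \geq E_m\). A numerical check (our own illustration, with \(\varphi(x) = -x\log x\), which is concave, and a small chain not taken from the text):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def phi(x):
    """Concave test function, phi(0) = 0."""
    return 0.0 if x == 0 else -x * math.log(x)

P = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]

# stationary distribution by power iteration (chain is irreducible, aperiodic)
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

l = 0  # the fixed target state
Pm = P
E = []
for m in range(1, 8):
    E.append(sum(pi[j] * phi(Pm[j][l]) for j in range(3)))
    Pm = matmul(Pm, P)

# E_m is nondecreasing in m
assert all(b >= a - 1e-12 for a, b in zip(E, E[1:]))
```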
Let \(\{X(t), t \geq 0\}\) and \(\{Y(t), t \geq 0\}\) be two independent Poisson processes with parameters \(\lambda_{1}\) and \(\lambda_{2}\), respectively. Define $$ Z(t)=X(t)-Y(t), \quad t \geq 0 . $$ This is a stochastic process whose state space consists of all the integers (positive, negative, and zero). Let $$ P_{n}(t)=\operatorname{Pr}\{Z(t)=n\}, \quad n=0, \pm 1, \pm 2, \ldots $$ Establish the formula $$ \sum_{n=-\infty}^{\infty} P_{n}(t) z^{n}=\exp \left(-\left(\lambda_{1}+\lambda_{2}\right) t\right) \exp \left(\lambda_{1} z t+\left(\lambda_{2} / z\right) t\right), \quad z \neq 0, $$ and compute \(E(Z(t))\) and \(E\left(Z(t)^{2}\right)\).
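By independence, \(P_n(t) = \sum_k \Pr\{Y(t)=k\}\Pr\{X(t)=n+k\}\) (a convolution of Poisson laws), and expanding the generating function yields \(E(Z(t)) = (\lambda_1-\lambda_2)t\) and \(E(Z(t)^2) = (\lambda_1-\lambda_2)^2t^2 + (\lambda_1+\lambda_2)t\). A numerical check with truncated sums (parameter values are ours, not from the text):

```python
import math

def pois(mean, k):
    """Poisson pmf at k."""
    return math.exp(-mean) * mean**k / math.factorial(k)

def P_n(n, t, l1, l2, kmax=80):
    """Pr{Z(t) = n} as a (truncated) convolution of the two Poisson laws."""
    return sum(pois(l1 * t, n + k) * pois(l2 * t, k)
               for k in range(kmax) if n + k >= 0)

l1, l2, t, z = 1.0, 2.0, 0.5, 0.7  # illustrative parameters

# generating function identity, truncated to |n| <= 40
lhs = sum(P_n(n, t, l1, l2) * z**n for n in range(-40, 41))
rhs = math.exp(-(l1 + l2) * t) * math.exp(l1 * z * t + (l2 / z) * t)
assert abs(lhs - rhs) < 1e-9

# moments: E[Z(t)] = (l1-l2)t and E[Z(t)^2] = (l1-l2)^2 t^2 + (l1+l2) t
mean = sum(n * P_n(n, t, l1, l2) for n in range(-40, 41))
second = sum(n * n * P_n(n, t, l1, l2) for n in range(-40, 41))
assert abs(mean - (l1 - l2) * t) < 1e-9
assert abs(second - ((l1 - l2)**2 * t**2 + (l1 + l2) * t)) < 1e-9
```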