
Consider again Example 2.5.2. (a) Verify the expressions for \(\left(n^{1 / 2} M\right)_{\epsilon}(s)\) and \(\left\langle M_{\epsilon}\right\rangle(s)\). (b) Show that \(\left\langle M_{\epsilon}\right\rangle(s) \stackrel{P}{\rightarrow} 0\) using Gill's condition and that $$ \lim _{n \rightarrow \infty} \int_{A_{n}} X d P=0 $$ where \(X\) is a random variable with \(E|X|<\infty, A_{n}\) is measurable and \(A_{n} \searrow \emptyset\).

Short Answer

In summary, we verify the expressions for \((n^{1 / 2} M)_{\epsilon}(s)\) and \(\left\langle M_{\epsilon}\right\rangle(s)\) from Example 2.5.2, show that \(\left\langle M_{\epsilon}\right\rangle(s) \stackrel{P}{\rightarrow} 0\) using Gill's condition, and prove that \(\lim _{n \rightarrow \infty} \int_{A_{n}} X d P=0\) whenever \(E|X|<\infty\) and \(A_{n} \searrow \emptyset\).

Step by step solution

01

(a) Verifying expressions

First, write down the expression given in Example 2.5.2: $$ M_\epsilon(s) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left(\chi_{[\epsilon, 1]}(s\text{ mod }1)-\epsilon\right). $$ Multiplying by \(n^{1/2}\) cancels the normalization, since \(n^{1/2} \cdot n^{-1/2} = 1\): $$ (n^{1 / 2} M)_{\epsilon}(s) = \sum_{i=1}^n \left(\chi_{[\epsilon, 1]}(s\text{ mod }1)-\epsilon\right). $$ For the predictable variation, recall that it scales quadratically in a deterministic factor, \(\langle cM\rangle = c^2 \langle M\rangle\). Hence $$ \left\langle M_{\epsilon}\right\rangle(s) = \frac{1}{n} \left\langle \sum_{i=1}^n \left(\chi_{[\epsilon, 1]}(s\text{ mod }1)-\epsilon\right)\right\rangle = \frac{1}{n} \sum_{i=1}^n \left\langle \chi_{[\epsilon, 1]}(s\text{ mod }1)-\epsilon\right\rangle, $$ where the second equality uses the orthogonality of the martingale increments. These expressions agree with those stated in Example 2.5.2.
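The quadratic scaling \(\langle cM\rangle = c^2\langle M\rangle\) used above can be sanity-checked numerically. The sketch below is a Monte Carlo illustration, not the book's construction: it uses i.i.d. increments \(\chi_{[\epsilon,1]}(U_i) - (1-\epsilon)\) with \(U_i\) uniform on \((0,1)\) (centering at the mean \(1-\epsilon\) is our assumption, to get a mean-zero toy version), and checks that the variance of the \(n^{-1/2}\)-normalized sum equals the single-increment variance \(\epsilon(1-\epsilon)\):

```python
import random

def normalized_sum_variance(n, eps, reps=20000, seed=0):
    """Empirical variance of n**-0.5 * sum_i (chi_[eps,1](U_i) - (1 - eps))."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        # each increment: indicator that U_i lands in [eps, 1], centered at its mean
        s = sum((1.0 if rng.random() >= eps else 0.0) - (1 - eps) for _ in range(n))
        vals.append(s / n ** 0.5)
    mean = sum(vals) / reps
    return sum((v - mean) ** 2 for v in vals) / reps

eps = 0.3
single_increment_var = eps * (1 - eps)  # Var of one centered indicator
print(abs(normalized_sum_variance(200, eps) - single_increment_var) < 0.02)
```

The \(1/\sqrt{n}\) normalization keeps the variance at exactly the single-increment level, which is the same quadratic cancellation exploited in the predictable-variation computation.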
02

(b) Proving Gill's condition and calculating the limit

First, recall the martingale strong law underlying Gill's condition as used here: if $$ \sum_{n=1}^\infty \frac{\mathrm{E}(\Delta M_n)^2}{n^2} < \infty, $$ then \(M_n / n \rightarrow 0\) almost surely, and hence in probability. In our case each increment is bounded, \(\left|\chi_{[\epsilon, 1]}(s\text{ mod }1)-\epsilon\right| \le 1\), so \(\mathrm{E}(\Delta M_n)^2 \le 1\) and the series is dominated by \(\sum_{n=1}^\infty n^{-2} < \infty\). The condition therefore holds, and by Gill's condition \(\left\langle M_{\epsilon}\right\rangle(s) \stackrel{P}{\rightarrow} 0\). Next, we prove that \(\lim _{n \rightarrow \infty} \int_{A_{n}} X d P=0\). Write $$ \int_{A_{n}} X \, dP = \int X \mathbf{1}_{A_n} \, dP. $$ Since \(A_{n} \searrow \emptyset\), the indicators decrease pointwise, \(\mathbf{1}_{A_n} \downarrow \mathbf{1}_{\cap_{n \in \mathbb{N}} A_n} = \mathbf{1}_{\emptyset} = 0\), and the integrands are dominated: \(|X \mathbf{1}_{A_n}| \le |X|\) with \(E|X| < \infty\). The Dominated Convergence Theorem therefore gives $$ \lim _{n \rightarrow \infty} \int_{A_{n}} X d P = \int_{\emptyset} X \, dP = 0. $$ Thus, we have proved that \(\lim _{n \rightarrow \infty} \int_{A_{n}} X d P=0\).
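The dominated convergence step can be made concrete. Assuming, for illustration only, \(X \sim \mathrm{Exp}(1)\) and \(A_n = \{X > n\}\) (so \(A_n \searrow \emptyset\) and \(E|X| = 1 < \infty\)), the tail integral has the closed form \(\int_{A_n} X\,dP = \int_n^\infty x e^{-x}\,dx = (n+1)e^{-n}\), which tends to 0. A Monte Carlo estimate reproduces the closed form:

```python
import math
import random

def tail_integral_mc(n, reps=100000, seed=1):
    """Monte Carlo estimate of E[X * 1{X > n}], i.e. the integral of X over A_n,
    for X ~ Exp(1) and A_n = {X > n}."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = rng.expovariate(1.0)  # sample X ~ Exp(1)
        if x > n:
            total += x
    return total / reps

for n in range(5):
    exact = (n + 1) * math.exp(-n)  # closed form of the tail integral
    print(n, round(exact, 4), round(tail_integral_mc(n, seed=n), 4))
```

The printed pairs shrink toward 0 as \(n\) grows, exactly as the lemma predicts.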


Most popular questions from this chapter

(Right-censoring by the same stochastic variable) Let \(T_{1}^{*}, \ldots, T_{n}^{*}\) be \(n\) i.i.d. positive stochastic variables with hazard function \(\alpha(t)\). The observed data consist of \(\left(T_{i}, \Delta_{i}\right)_{i=1, \ldots, n}\), where \(T_{i}=T_{i}^{*} \wedge U, \Delta_{i}=I\left(T_{i}=T_{i}^{*}\right)\). Here, \(U\) is a positive stochastic variable with hazard function \(\mu(t)\), and assumed independent of the \(T_{i}^{*}\) 's. Define $$ N \cdot(t)=\sum_{i=1}^{n} N_{i}(t), \quad Y \cdot(t)=\sum_{i=1}^{n} Y_{i}(t) $$ with \(N_{i}(t)=I\left(T_{i} \leq t, \Delta_{i}=1\right)\) and \(Y_{i}(t)=I\left(t \leq T_{i}\right), i=1, \ldots, n\). (a) Show that \(\hat{A}(t)-A^{*}(t)\) is a martingale, where $$ \hat{A}(t)=\int_{0}^{t} \frac{1}{Y \cdot(s)} d N \cdot(s), \quad A^{*}(t)=\int_{0}^{t} J(s) \alpha(s) d s . $$ (b) Show that $$ \sup _{s \leq t}\left|\hat{A}(s)-A^{*}(s)\right| \stackrel{P}{\rightarrow} 0 $$ if \(P\left(T_{i} \geq t\right)>0\). (c) Is it also true that \(\hat{A}(t)-A(t) \stackrel{P}{\rightarrow} 0 ?\)
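A quick simulation makes part (b) plausible. The sketch below (illustrative only; the rates and horizon are our choices, not from the text) draws exponential lifetimes with hazard \(\alpha\) and exponential censoring with hazard \(\mu\), computes the Nelson-Aalen estimator \(\hat{A}(s) = \sum_{T_i \le s,\, \Delta_i = 1} 1/Y{\cdot}(T_i)\), and reports \(\sup_{s \le t}|\hat{A}(s) - \alpha s|\), which should be small for large \(n\):

```python
import random

def nelson_aalen_sup_error(n, alpha=1.0, mu=0.5, t=1.0, seed=2):
    """sup_{s <= t} |A_hat(s) - alpha*s| for right-censored exponential data."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        t_star = rng.expovariate(alpha)  # true lifetime, hazard alpha
        u = rng.expovariate(mu)          # independent censoring time, hazard mu
        data.append((min(t_star, u), t_star <= u))
    data.sort()
    at_risk = n       # Y.(s): number of subjects with T_i >= s
    a_hat = 0.0
    sup_err = 0.0
    for time, event in data:
        if time > t:
            break
        # A_hat is piecewise constant, so check the error just before the jump...
        sup_err = max(sup_err, abs(a_hat - alpha * time))
        if event:
            a_hat += 1.0 / at_risk  # Nelson-Aalen increment dN./Y.
        # ...and just after it
        sup_err = max(sup_err, abs(a_hat - alpha * time))
        at_risk -= 1
    return max(sup_err, abs(a_hat - alpha * t))

print(nelson_aalen_sup_error(5000))  # small for large n
```

Because \(\hat{A}\) is a step function and \(\alpha s\) is increasing, the supremum is attained at jump times or at \(t\), which is all the loop checks.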

Let \(M=N-\Lambda\) be the counting process local martingale. (a) Show that \(\mathrm{E} N(t)=\mathrm{E} \Lambda(t)\) (hint: use the monotone convergence theorem). (b) If \(\mathrm{E} \Lambda(t)<\infty\), then show that \(M\) is a martingale by verifying the martingale conditions. (c) If \(\sup _{t} \mathrm{E} \Lambda(t)<\infty\), then show that \(M\) is a square integrable martingale.
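Part (a) can be checked numerically in the simplest case, a homogeneous Poisson process with rate \(\lambda\), where \(\Lambda(t)=\lambda t\) is deterministic, so \(\mathrm{E}\Lambda(t)=\lambda t\). The rate and horizon below are arbitrary illustrative choices:

```python
import random

def mean_counts(lam, t, reps=50000, seed=3):
    """Monte Carlo estimate of E N(t) for a Poisson process with rate lam."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        s, count = 0.0, 0
        while True:
            s += rng.expovariate(lam)  # exponential inter-arrival times
            if s > t:
                break
            count += 1
        total += count
    return total / reps

print(mean_counts(2.0, 3.0))  # should be close to E Lambda(t) = 2 * 3 = 6
```

The simulated mean of \(N(t)\) matching \(\lambda t\) is exactly the identity \(\mathrm{E}N(t)=\mathrm{E}\Lambda(t)\) in this special case.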

Consider the time interval \([0, \tau]\). Let \(U(t)\) be a Gaussian martingale with covariance process \(V(t), t \in[0, \tau]\). Show that $$ U(t) V(\tau)^{1 / 2}[V(\tau)+V(t)]^{-1} $$ has the same distribution as $$ B^{0}\left(\frac{V(t)}{V(\tau)+V(t)}\right) $$ where \(B^{0}\) is the standard Brownian bridge.
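Both sides of the claimed identity are mean-zero Gaussian, so matching variances is the key computation: \(\mathrm{Var}\,U(t)=V(t)\) gives \(V(t)V(\tau)/(V(\tau)+V(t))^2\) on the left, while \(\mathrm{Var}\,B^{0}(u)=u(1-u)\) with \(u=V(t)/(V(\tau)+V(t))\) gives the same expression. A small numeric check of this algebra, using arbitrary sample values for \(V(t)\) and \(V(\tau)\):

```python
def lhs_variance(vt, vtau):
    """Var of U(t) * V(tau)^{1/2} * (V(tau) + V(t))^{-1}, using Var U(t) = V(t)."""
    return vt * vtau / (vtau + vt) ** 2

def rhs_variance(vt, vtau):
    """Var of B0(u) = u(1-u) at u = V(t) / (V(tau) + V(t))."""
    u = vt / (vtau + vt)
    return u * (1 - u)

# check the variance identity at a few arbitrary points
for vt, vtau in [(0.2, 1.0), (0.5, 0.5), (1.3, 2.7)]:
    print(abs(lhs_variance(vt, vtau) - rhs_variance(vt, vtau)) < 1e-12)
```

Of course, equality of one-dimensional variances is only a necessary condition; the full exercise requires matching the covariance structure of the two processes.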

Let \(M_{1}\) and \(M_{2}\) be the martingales associated with the components of the multivariate counting process \(N=\left(N_{1}, N_{2}\right)\) with continuous compensators. Show that $$ \left\langle M_{1}, M_{2}\right\rangle=\left[M_{1}, M_{2}\right]=0. $$

Let \(N(t)=\left(N_{1}(t), \ldots, N_{k}(t)\right), t \in[0, \tau]\), be a multivariate counting process with respect to \(\mathcal{F}_{t}\). It holds that the intensity $$ \lambda(t)=\left(\lambda_{1}(t), \ldots, \lambda_{k}(t)\right) $$ of \(N(t)\) is given (heuristically) as $$ \lambda_{h}(t)=P\left(d N_{h}(t)=1 \mid \mathcal{F}_{t-}\right), $$ where \(d N_{h}(t)=N_{h}((t+d t)-)-N_{h}(t-)\) is the change in \(N_{h}\) over the small time interval \([t, t+d t)\). (a) Let \(T^{*}\) be a lifetime with hazard \(\alpha(t)\) and define \(N(t)=I\left(T^{*} \leq t\right)\). Use the above (2.29) to show that the intensity of \(N(t)\) with respect to the history \(\sigma\{N(s): s \leq t\}\) is $$ \lambda(t)=I\left(t \leq T^{*}\right) \alpha(t) . $$ (b) Let \(T^{*}\) be a lifetime with hazard \(\alpha(t)\) that may be right-censored at time \(C\). We assume that \(T^{*}\) and \(C\) are independent. Let \(T=T^{*} \wedge C\), \(\Delta=I\left(T^{*} \leq C\right)\) and \(N(t)=I(T \leq t, \Delta=1)\). Use the above (2.29) to show that the intensity of \(N(t)\) with respect to the history $$ \sigma\{I(T \leq s, \Delta=0), I(T \leq s, \Delta=1): s \leq t\} $$ is $$ \lambda(t)=I(t \leq T) \alpha(t) $$
