
Mean Time Between Failures (MTBF), Mean Time To Replacement (MTTR), and Mean Time To Failure (MTTF) are useful metrics for evaluating the reliability and availability of a storage resource. Explore these concepts by answering the questions about devices with the following metrics.

MTTF: 3 years
MTTR: 1 day

(5.8.1) Calculate the MTBF for each of the devices in the table.

(5.8.2) Calculate the availability for each of the devices in the table.

(5.8.3) What happens to availability as the MTTR approaches 0? Is this a realistic situation?

(5.8.4) What happens to availability as the MTTR gets very high, i.e., a device is difficult to repair? Does this imply the device has low availability?

Short Answer

Expert verified

(5.8.1) The MTBF of the device is 1096 days.

(5.8.2) The availability of the device is 1095/1096 ≈ 0.999.

(5.8.3) The availability approaches 1. With cheap, quickly replaceable hardware, this is close to a realistic situation.

(5.8.4) MTTR plays a significant role in determining availability, but a high MTTR does not by itself imply low availability.

Step by step solution

01

Define MTTF and MTTR

MTTF is a reliability measure. Delivered service alternates between two states: service accomplishment and service interruption. Reliability measures continuous service accomplishment, so MTTF is the average time until the service fails. MTTR measures the average length of a service interruption, i.e., the time needed to restore service.

02

Calculate MTBF

(5.8.1)

MTBF is short for Mean Time Between Failures. It is calculated using the following formula:

MTBF = MTTF + MTTR

In the given problem, the MTTF of the device is 3 years, that is, 1095 days, and the MTTR is 1 day. Thus, the value of MTBF is

MTBF = 1095 days + 1 day = 1096 days

03

Calculate Availability

(5.8.2)

Availability measures the fraction of time the device accomplishes service relative to the Mean Time Between Failures. The formula to calculate availability is:

Availability = MTTF / (MTTF + MTTR)

For the device given in the question, the availability is:

Availability = 1095 / (1095 + 1) = 1095/1096 ≈ 0.999

Thus, the availability is approximately 0.999.
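The MTBF and availability formulas can be checked numerically. This is a minimal sketch using the table's values (3 years taken as 1095 days, ignoring leap years):

```python
# Reliability metrics for the device in the table (MTTF = 3 years, MTTR = 1 day).
mttf_days = 3 * 365   # 1095 days
mttr_days = 1

# MTBF = MTTF + MTTR
mtbf_days = mttf_days + mttr_days
print(mtbf_days)  # 1096

# Availability = MTTF / (MTTF + MTTR)
availability = mttf_days / mtbf_days
print(round(availability, 3))  # 0.999
```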

04

Effect of MTTR approaching 0 on Availability

(5.8.3)

If the value of MTTR approaches 0, repairs are effectively instantaneous. The availability becomes

Availability = MTTF / (MTTF + 0) = 1

With the advancement of technology, hardware devices are now cheap and easy to install, so a near-zero replacement time for hardware devices is feasible.

05

Effect of high MTTR on Availability

(5.8.4)

Increasing MTTR decreases availability, so MTTR plays a significant role in determining availability. However, a high MTTR does not by itself imply low availability: if MTTF is large compared to MTTR, availability remains high, because availability depends on the ratio MTTF / (MTTF + MTTR).
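Both limiting cases can be illustrated numerically. This sketch assumes the availability formula Availability = MTTF / (MTTF + MTTR); the example MTTF/MTTR values are illustrative, not from the exercise:

```python
def availability(mttf, mttr):
    """Availability = MTTF / (MTTF + MTTR), with both times in the same unit."""
    return mttf / (mttf + mttr)

# As MTTR approaches 0, availability approaches 1.
print(availability(1095, 0))  # 1.0

# A high MTTR lowers availability, but a much larger MTTF keeps it high:
# a 30-day repair time is "hard to repair", yet availability stays above 99%
# when the MTTF is 10 years.
print(availability(10 * 365, 30) > 0.99)  # True
```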


Most popular questions from this chapter

This exercise examines the impact of different cache designs, specifically comparing associative caches to the direct-mapped caches from Section 5.4. For these exercises, refer to the address stream shown in Exercise 5.2.

(5.7.1) Using the sequence of references from Exercise 5.2, show the final cache contents for a three-way set associative cache with two- word blocks and a total size of 24 words. Use LRU replacement. For each reference identify the index bits, the tag bits, the block offset bits, and if it is a hit or a miss.

(5.7.2) Using the references from Exercise 5.2, show the final cache contents for a fully associative cache with one-word blocks and a total size of 8 words. Use LRU replacement. For each reference identify the index bits, the tag bits, and if it is a hit or a miss.

(5.7.3) Using the references from Exercise 5.2, what is the miss rate for a fully associative cache with two-word blocks and a total size of 8 words, using LRU replacement? What is the miss rate using MRU (most recently used) replacement? Finally, what is the best possible miss rate for this cache, given any replacement policy?

Multilevel caching is an important technique to overcome the limited amount of space that a first level cache can provide while still maintaining its speed. Consider a processor with the following parameters:

Base CPI, No Memory Stalls: 1.5
Processor Speed: 2 GHz
Main Memory Access Time: 100 ns
First Level Cache Miss Rate per Instruction: 7%
Second Level Cache, Direct-Mapped Speed: 12 cycles
Global Miss Rate with Second Level Cache, Direct-Mapped: 3.5%
Second Level Cache, Eight-Way Set Associative Speed: 28 cycles
Global Miss Rate with Second Level Cache, Eight-Way Set Associative: 1.5%

(5.7.4) Calculate the CPI for the processor in the table using: 1) only a first level cache, 2) a second level direct-mapped cache, and 3) a second level eight-way set associative cache. How do these numbers change if main memory access time is doubled? If it is cut in half?
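One way to sanity-check 5.7.4 is a direct calculation. This is a sketch under two common assumptions (not stated explicitly in the exercise): the main-memory penalty is 100 ns × 2 GHz = 200 cycles, and the second-level access time is paid on every first-level miss:

```python
# Parameters from the table above.
base_cpi = 1.5
clock_ghz = 2.0
mem_ns = 100.0
l1_miss = 0.07                           # L1 misses per instruction
l2_dm_cycles, l2_dm_global = 12, 0.035   # direct-mapped L2
l2_8w_cycles, l2_8w_global = 28, 0.015   # eight-way set associative L2

mem_cycles = mem_ns * clock_ghz          # 200-cycle main-memory penalty

# 1) L1 only: every L1 miss goes to main memory.
cpi_l1_only = base_cpi + l1_miss * mem_cycles
# 2) L2 added: L1 misses pay the L2 access; global misses pay main memory.
cpi_l2_dm = base_cpi + l1_miss * l2_dm_cycles + l2_dm_global * mem_cycles
cpi_l2_8w = base_cpi + l1_miss * l2_8w_cycles + l2_8w_global * mem_cycles

print(round(cpi_l1_only, 2), round(cpi_l2_dm, 2), round(cpi_l2_8w, 2))
# 15.5 9.34 6.46
```

Doubling or halving the main-memory access time only changes `mem_cycles`, so the same script answers the follow-up questions.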

(5.7.5) It is possible to have an even greater cache hierarchy than two levels. Given the processor above with a second level, direct-mapped cache, a designer wants to add a third level cache that takes 50 cycles to access and will reduce the global miss rate to 1.3%. Would this provide better performance? In general, what are the advantages and disadvantages of adding a third level cache?

(5.7.6) In older processors such as the Intel Pentium or Alpha 21264, the second level of cache was external (located on a different chip) from the main processor and the first level cache. While this allowed for large second level caches, the latency to access the cache was much higher, and the bandwidth was typically lower because the second level cache ran at a lower frequency. Assume a 512 KiB off-chip second level cache has a global miss rate of 4%. If each additional 512 KiB of cache lowered global miss rates by 0.7%, and the cache had a total access time of 50 cycles, how big would the cache have to be to match the performance of the second level direct-mapped cache listed above? Of the eight way-set associative cache?

Cache coherence concerns the views of multiple processors on a given cache block. The following data shows two processors and their read/write operations on two different words of a cache block X (initially X[0] = X[1] = 0). Assume the size of integers is 32 bits.

P1: X[0]++; X[1]=3;
P2: X[0]=5; X[1]+=2;

5.17.1 List the possible values of the given cache block for a correct cache coherence protocol implementation. List at least one more possible value of the block if the protocol doesn't ensure cache coherency.

5.17.2 For a snooping protocol, list a valid operation sequence on each processor/cache to finish the above read/write operations.

5.17.3 What are the best-case and worst-case numbers of cache misses needed to execute the listed read/write instructions?

Memory consistency concerns the views of multiple data items. The following data shows two processors and their read/write operations on different cache blocks (A and B initially 0).

P1: A=1; B=2; A+=2; B++;
P2: C=B; D=A;

5.17.4 List the possible values of C and D for an implementation that ensures both consistency assumptions on page 470.

5.17.5 List at least one more possible pair of values for C and D if such assumptions are not maintained.

5.17.6 For various combinations of write policies and write allocation policies, which combinations make the protocol implementation simpler?

5.2 Caches are important to providing a high-performance memory hierarchy to processors. Below is a list of 32-bit memory address references, given as word addresses.
3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253


5.2.1 [10] <§5.3> For each of these references, identify the binary address, the tag, and the index given a direct-mapped cache with 16 one-word blocks. Also list if each reference is a hit or a miss, assuming the cache is initially empty.
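The hit/miss behavior in 5.2.1 can be checked with a short simulation. This is a sketch, not the book's worked answer; it assumes the usual direct-mapped layout for 16 one-word blocks (index = address mod 16, tag = address / 16):

```python
# Word-address reference stream from Exercise 5.2.
refs = [3, 180, 43, 2, 191, 88, 190, 14, 181, 44, 186, 253]
num_blocks = 16
cache = {}  # index -> tag currently stored in that block

hits = 0
for addr in refs:
    index, tag = addr % num_blocks, addr // num_blocks
    if cache.get(index) == tag:
        hits += 1
    else:
        cache[index] = tag  # miss: fill (or replace) the block

print(hits, len(refs) - hits)  # 0 12  (every reference misses)
```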


5.2.2 [10] <§5.3> For each of these references, identify the binary address, the tag, and the index given a direct-mapped cache with two-word blocks and a total size of 8 blocks. Also list if each reference is a hit or a miss, assuming the cache is initially empty.


5.2.3 [20] <§§5.3, 5.4> You are asked to optimize a cache design for the given references. There are three direct-mapped cache designs possible, all with a total of 8 words of data: C1 has 1-word blocks, C2 has 2-word blocks, and C3 has 4-word blocks. In terms of miss rate, which cache design is the best? If the miss stall time is 25 cycles, and C1 has an access time of 2 cycles, C2 takes 3 cycles, and C3 takes 5 cycles, which is the best cache design?


There are many different design parameters that are important to a cache's overall performance. Below are listed parameters for different direct-mapped cache designs.


Cache Data Size: 32 KiB


Cache Block Size: 2 words


Cache Access Time: 1 cycle


5.2.4 [15] <§5.3> Calculate the total number of bits required for the cache listed above, assuming a 32-bit address. Given that total size, find the total size of the closest direct-mapped cache with 16-word blocks of equal size or greater. Explain why the second cache, despite its larger data size, might provide slower performance than the first cache.


5.2.5 [20] <§§5.3, 5.4> Generate a series of read requests that have a lower miss rate on a 2 KiB 2-way set associative cache than the cache listed above. Identify one possible solution that would make the cache listed have an equal or lower miss rate than the 2 KiB cache. Discuss the advantages and disadvantages of such a solution.


5.2.6 [15] <§5.3> The formula shown in Section 5.3 shows the typical method to index a direct-mapped cache, specifically (Block address) modulo (Number of blocks in the cache). Assuming a 32-bit address and 1024 blocks in the cache, consider a different indexing function, specifically (Block address [31:27] XOR Block address [26:22]). Is it possible to use this to index a direct-mapped cache? If so, explain why and discuss any changes that might need to be made to the cache. If it is not possible, explain why.

This exercise examines the single error correcting, double error detecting (SEC/DED) Hamming code.

(5.9.1) What is the minimum number of parity bits required to protect a 128-bit word using the SEC/DED code?

(5.9.2) Section 5.5 states that modern server memory modules (DIMMs) employ SEC/DED ECC to protect each 64 bits with 8 parity bits. Compute the cost/performance ratio of this code to the code from 5.9.1. In this case, cost is the relative number of parity bits needed while performance is the relative number of errors that can be corrected. Which is better?

(5.9.3) Consider a SEC code that protects 8-bit words with 4 parity bits. If we read the value 0x375, is there an error? If so, correct the error.
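For 5.9.1 and 5.9.2, the parity-bit counts follow from the standard Hamming condition 2^p ≥ d + p + 1 for SEC, plus one extra bit for DED. A minimal sketch of that calculation:

```python
def sec_parity_bits(data_bits):
    """Smallest p with 2**p >= data_bits + p + 1 (Hamming SEC condition)."""
    p = 0
    while 2 ** p < data_bits + p + 1:
        p += 1
    return p

# SEC/DED adds one extra parity bit for double-error detection.
print(sec_parity_bits(128) + 1)  # 9 parity bits for a 128-bit word
print(sec_parity_bits(64) + 1)   # 8, matching the per-64-bit DIMM scheme in 5.9.2
```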

To support multiple virtual machines, two levels of memory virtualization are needed. Each virtual machine still controls the mapping of virtual address (VA) to physical address (PA), while the hypervisor maps the physical address (PA) of each virtual machine to the actual machine address (MA). To accelerate such mappings, a software approach called "shadow paging" duplicates each virtual machine's page tables in the hypervisor, and intercepts VA-to-PA mapping changes to keep both copies consistent. To remove the complexity of shadow page tables, a hardware approach called nested page table (NPT) explicitly supports two classes of page tables (VA→PA and PA→MA) and can walk such tables purely in hardware.

Consider the following sequence of operations: (1) create process; (2) TLB miss; (3) page fault; (4) context switch.

(5.14.1) What would happen for the given operation sequence for shadow page table and nested page table, respectively?

(5.14.2) Assuming an x86-based 4-level page table in both guest and nested page table, how many memory references are needed to service a TLB miss for native vs. nested page table?

(5.14.3) Among TLB miss rate, TLB miss latency, page fault rate, and page fault latency, which metrics are more important for shadow page table? Which are important for nested page table?

Assume the following parameters for a shadow paging system:

TLB Misses per 1000 instructions: 0.2
NPT TLB Miss Latency: 200 cycles
Page Faults per 1000 instructions: 0.001
Shadowing Page Fault Overhead: 30,000 cycles

(5.14.4) For a benchmark with native execution CPI of 1, what are the CPI numbers if using shadow page tables vs. NPT (assuming only page table virtualization overhead)?
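One reading of 5.14.4, sketched below, assumes each virtualization overhead is simply additive per instruction: shadow paging pays its overhead on intercepted page faults, while NPT pays the longer hardware walk on TLB misses.

```python
# Parameters from the table above; base CPI of 1 is given in the question.
base_cpi = 1.0
tlb_misses_per_1000 = 0.2
npt_tlb_miss_cycles = 200
faults_per_1000 = 0.001
shadow_fault_cycles = 30_000

# Shadow paging: overhead is paid on (intercepted) page faults.
cpi_shadow = base_cpi + faults_per_1000 * shadow_fault_cycles / 1000
# NPT: overhead is the longer hardware page walk on TLB misses.
cpi_npt = base_cpi + tlb_misses_per_1000 * npt_tlb_miss_cycles / 1000

print(round(cpi_shadow, 2), round(cpi_npt, 2))  # 1.03 1.04
```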

(5.14.5) What techniques can be used to reduce page table shadowing induced overhead?

(5.14.6) What techniques can be used to reduce NPT induced overhead?
