Chapter 6: Q44E (page 500)
Implement the logic equations of Exercise B.43 as a PLA.
Short Answer
[Figure: PLA diagram implementing the logic equations of Exercise B.43 (image not reproduced in this extract).]
Question: Consider the following portions of two different programs running at the same time on four processors in a symmetric multicore processor (SMP). Assume that before this code is run, both x and y are 0.
Core 1: x = 2;
Core 2: y = 2;
Core 3: w = x + y + 1;
Core 4: z = x + y;
6.7.1 [10] What are all the possible resulting values of w, x, y, and z? For each possible outcome, explain how we might arrive at those values. You will need to examine all possible interleavings of instructions.
6.7.2 [5] How could you make the execution more deterministic so that only one set of values is possible?
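As a starting point for 6.7.1, the interleavings can be enumerated mechanically. The sketch below is my own illustration (not the textbook's solution) and assumes each C statement executes atomically; it runs the four cores' statements in every possible order and collects the distinct final values.

```python
from itertools import permutations

# Each core's statement, modeled as an atomic update to shared state.
def core1(s): s["x"] = 2
def core2(s): s["y"] = 2
def core3(s): s["w"] = s["x"] + s["y"] + 1
def core4(s): s["z"] = s["x"] + s["y"]

outcomes = set()
for order in permutations([core1, core2, core3, core4]):
    state = {"x": 0, "y": 0, "w": 0, "z": 0}
    for stmt in order:
        stmt(state)
    outcomes.add((state["w"], state["x"], state["y"], state["z"]))

for w, x, y, z in sorted(outcomes):
    print(f"w={w} x={x} y={y} z={z}")
```

Under this atomicity assumption, x and y always end at 2, while w can be 1, 3, or 5 and z can be 0, 2, or 4 depending on when cores 3 and 4 read the shared variables. For 6.7.2, forcing the writes to complete before the reads (e.g., with a barrier) would leave only one outcome.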
Question: Consider the following piece of C code:
for (j = 2; j < 1000; j++)
    D[j] = D[j-1] + D[j-2];
The MIPS code corresponding to the above fragment is:
      addiu $s2, $zero, 7992
      addiu $s1, $zero, 16
loop: l.d   $f0, -16($s1)
      l.d   $f2, -8($s1)
      add.d $f4, $f0, $f2
      s.d   $f4, 0($s1)
      addiu $s1, $s1, 8
      bne   $s1, $s2, loop
Instructions have the following associated latencies (in cycles):
add.d | l.d | s.d | addiu
  4   |  6  |  1  |   2
6.4.1 How many cycles does it take for all instructions in a single iteration of the above loop to execute?
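As a back-of-the-envelope check for 6.4.1, the per-iteration latencies from the table above can simply be summed under the assumption that the instructions execute serially with no overlap. This is my own sketch, not the book's solution; the table gives no latency for bne, so 1 cycle is assumed here.

```python
# Latencies from the table; the bne value is an assumption (not given).
latencies = {"l.d": 6, "add.d": 4, "s.d": 1, "addiu": 2, "bne": 1}

# The six instructions of one loop iteration, in program order.
iteration = ["l.d", "l.d", "add.d", "s.d", "addiu", "bne"]

total = sum(latencies[op] for op in iteration)
print(total)
```

A real pipeline would overlap independent instructions, so this serial sum is an upper bound for one iteration rather than a definitive answer.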
6.4.2 When an instruction in a later iteration of a loop depends upon a data value produced in an earlier iteration of the same loop, we say that there is a loop-carried dependence between iterations of the loop. Identify the loop-carried dependences in the above code. Identify the dependent program variable and assembly-level registers. You can ignore the loop induction variable j.
6.4.3 Loop unrolling was described in Chapter 4. Apply loop unrolling to this loop and then consider running this code on a 2-node distributed-memory message-passing system. Assume that we are going to use message passing as described in Section 6.7, where we introduce a new operation send(x, y) that sends the value y to node x, and an operation receive() that waits for the value being sent to it. Assume that send operations take a cycle to issue (i.e., later instructions on the same node can proceed on the next cycle) but take 10 cycles to be received on the receiving node. Receive instructions stall execution on the node where they are executed until they receive a message. Produce a schedule for the two nodes assuming an unroll factor of 4 for the loop body (i.e., the loop body will appear 4 times). Compute the number of cycles it will take for the loop to run on the message-passing system.
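One reading of the send/receive timing in 6.4.3 can be captured in a few lines. This is a toy model of my own (not the book's schedule): a send occupies the sender for 1 issue cycle, the value becomes receivable 10 cycles after the send is issued, and a blocking receive completes at the later of its own start and the delivery time.

```python
SEND_ISSUE = 1      # cycles the sender spends issuing a send
NETWORK_DELAY = 10  # cycles before the value can be received

def receivable_at(send_cycle):
    # Cycle at which a value sent at `send_cycle` can be received.
    return send_cycle + NETWORK_DELAY

def receive_completes(send_cycle, receive_reached):
    # A blocking receive finishes at the later of reaching the receive()
    # and the message's delivery time (stalling in between).
    return max(receive_reached, receivable_at(send_cycle))

# Example: a value sent at cycle 20 reaches a receiver that arrived at its
# receive() at cycle 15, so the receiver stalls until cycle 30.
print(receive_completes(20, 15))
```

Accounting for these stalls on each node, iteration by iteration, is what building the schedule in 6.4.3 amounts to; whether delivery counts from the issue cycle or the cycle after is an assumption baked into this model.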
6.4.4 The latency of the interconnect network plays a large role in the efficiency of message-passing systems. How fast does the interconnect need to be in order to obtain any speedup from using the distributed system described in Exercise 6.4.3?
A.1 [5] Section A.5 described how memory is partitioned on most MIPS systems. Propose another way of dividing memory that meets the same goals.
Implement the four functions described in Exercise B.11 using a PLA.
Question: 6.16 Refer to Figure 6.14b, which shows an n-cube interconnect topology of order 3 that interconnects 8 nodes. One attractive feature of an n-cube interconnection network topology is its ability to sustain broken links and still provide connectivity.
6.16.1 [10] Develop an equation that computes how many links in the n-cube (where n is the order of the cube) can fail while we can still guarantee that an unbroken link will exist to connect any node in the n-cube.

6.16.2 [10] Compare the resiliency to failure of the n-cube to a fully connected interconnection network. Plot a comparison of reliability as a function of the added number of links for the two topologies.
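As a starting point for the comparison in 6.16.2, the link counts of the two topologies can be computed directly. The sketch below is my own illustration, not the book's solution: each n-cube node has degree n, so disconnecting a node requires cutting all n of its links, which is why up to n-1 arbitrary link failures still leave the n-cube connected.

```python
def ncube_links(n):
    # 2**n nodes, each of degree n; every link joins two nodes.
    return n * 2 ** (n - 1)

def full_links(num_nodes):
    # A fully connected network has one link per node pair.
    return num_nodes * (num_nodes - 1) // 2

for n in range(2, 6):
    nodes = 2 ** n
    print(f"n={n}: nodes={nodes}, n-cube links={ncube_links(n)}, "
          f"fully connected links={full_links(nodes)}")
```

For the order-3 cube of Figure 6.14b this gives 12 links against 28 for a fully connected network over the same 8 nodes; the fully connected topology buys its extra resiliency with a quadratic rather than n·2^(n-1) link count.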