You are encouraged to work on the problem set in groups and turn in one problem set for the entire group. Remember to put all your names on the solution sheet. Also remember to put the name of the TA in whose discussion section you would like the problem set returned to you.
Frame 1: Page 13
Frame 2: Page 5
Frame 3: Page 10
Frame 5: Page 1
Frame 7: Page Table
References (to pages): 14, 14, 13, 14, 6, 6, 13, 14, 15, 14, 15, 13, 4, 3.
At the end of this sequence, what three entries are contained in the TLB?
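A hand trace of the reference string is easy to slip up on, so it can help to replay it in a few lines of code. The sketch below is only illustrative: it assumes a fully associative, 3-entry TLB with true LRU replacement that starts empty. Check those assumptions (capacity, initial contents, replacement policy) against the actual problem setup before trusting the result.

```python
# Hypothetical LRU TLB replay. Assumed: 3 entries, fully associative,
# true LRU, initially empty -- verify against the problem parameters.
from collections import OrderedDict

def simulate_tlb(refs, capacity=3):
    tlb = OrderedDict()              # key order tracks recency: last = MRU
    for page in refs:
        if page in tlb:
            tlb.move_to_end(page)    # hit: mark most recently used
        else:
            if len(tlb) == capacity:
                tlb.popitem(last=False)  # miss: evict least recently used
            tlb[page] = True
    return list(tlb)                 # surviving entries, LRU first

refs = [14, 14, 13, 14, 6, 6, 13, 14, 15, 14, 15, 13, 4, 3]
print(simulate_tlb(refs))
```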
Assume that the virtual memory system uses a one-level translation scheme and that the page table is always resident in physical memory.
An instruction is said to generate a page fault if a page fault occurs at any time during the processing of that instruction.
a. How many page faults would occur if the elements of the array are stored in column order in the virtual memory?
b. How many page faults would occur if the elements are stored in row order?
c. Estimate the total time needed to perform this normalization for both arrangements (a) and (b). Assume that it takes 2 ns to do a comparison, 20 ns to do a divide, and 100 ns to do a load/store to memory.
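The qualitative effect of storage order can be sanity-checked with a small simulation. The sketch below uses illustrative numbers only, not the problem's actual parameters: a 64x64 array, 64 elements per page, 8 page frames, and LRU replacement. The point it demonstrates is that a row-by-row scan touches one page at a time when the layout matches the scan order, but cycles through many pages when it does not.

```python
# Hypothetical page-fault count for a row-by-row scan of an N x N array.
# Assumed parameters (NOT the problem's): N=64, 64 elements/page, 8 frames, LRU.
from collections import OrderedDict

N, ELEMS_PER_PAGE, FRAMES = 64, 64, 8

def count_faults(column_major):
    frames = OrderedDict()           # resident pages, LRU first
    faults = 0
    for i in range(N):               # scan the array row by row
        for j in range(N):
            addr = j * N + i if column_major else i * N + j
            page = addr // ELEMS_PER_PAGE
            if page in frames:
                frames.move_to_end(page)
            else:
                faults += 1
                if len(frames) == FRAMES:
                    frames.popitem(last=False)  # evict the LRU page
                frames[page] = True
    return faults

print(count_faults(False), count_faults(True))  # matching vs mismatched layout
```

With the matching (row-major) layout each page faults once; with the mismatched (column-major) layout the scan cycles through more pages than there are frames, so every access faults.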
200, 204, 208, 20C, 2F4, 2F0, 200, 204, 218, 21C, 24C, 2F4. This pattern is repeated four times.
a. Show the contents of the cache at the end of each pass through this loop if a direct-mapped cache is used. Compute the hit rate for this example. Assume that the cache is initially empty.
b. Repeat part (a) for a fully-associative cache that uses the LRU-replacement algorithm.
c. Repeat part (a) for a four-way set-associative cache that uses the LRU replacement algorithm.
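A few lines of code make the hand trace easy to double-check. The geometry below (direct-mapped, 16 blocks of 4 bytes) is an assumption for illustration; substitute whatever geometry the problem actually specifies.

```python
# Hypothetical direct-mapped cache trace checker.
# Assumed geometry (verify against the problem): 16 blocks x 4 bytes.
NUM_BLOCKS, BLOCK_BYTES = 16, 4

def hit_rate(trace):
    cache = {}                       # set index -> tag
    hits = 0
    for addr in trace:
        block = addr // BLOCK_BYTES
        index, tag = block % NUM_BLOCKS, block // NUM_BLOCKS
        if cache.get(index) == tag:
            hits += 1
        else:
            cache[index] = tag       # miss: fill, replacing on conflict
    return hits, len(trace)

pattern = [0x200, 0x204, 0x208, 0x20C, 0x2F4, 0x2F0,
           0x200, 0x204, 0x218, 0x21C, 0x24C, 0x2F4]
hits, total = hit_rate(pattern * 4)  # the pattern repeats four times
print(hits, total)
```

Extending this to the fully associative and set-associative cases of parts (b) and (c) only requires tracking an LRU order per set.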
a. Give a sequence of addresses that results in a different block being replaced from the cache if the replacement policy is the priority-based pseudo-LRU policy Professor Patt discussed in class instead of a true LRU replacement policy. Remember that this scheme gives priority to one half of the cache on a cache hit, so that the block that happens to be in the same half as the accessed block is marked as the second most recently used block in the same set. Assume that the cache is initially empty. The cache is 4-way set-associative, its size is 256 bytes, and each block is 4 bytes. Assume that the address space is 10 bits and memory is byte-addressable.
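One reading of the scheme can be prototyped to hunt for such sequences. In the sketch below, ways 0-1 and 2-3 are assumed to form the two halves of a 4-way set, and on each access the half-partner of the touched way is promoted to second most recently used. Both that pairing and the block names are illustrative assumptions; check them against the policy as presented in class.

```python
# Hypothetical comparison of true LRU vs the priority-based pseudo-LRU
# described above, for a single 4-way set. Assumption: ways (0,1) and
# (2,3) are the two halves, so the partner of way w is w ^ 1.

def run(trace, pseudo):
    ways = [None] * 4            # way -> cached block (None = empty)
    order = []                   # ways ordered LRU ... MRU
    victims = []                 # blocks evicted, in order
    for block in trace:
        if block in ways:
            w = ways.index(block)            # hit
        else:
            empty = [i for i, b in enumerate(ways) if b is None]
            w = empty[0] if empty else order[0]
            if not empty:
                victims.append(ways[w])      # evict the LRU way
            ways[w] = block
        if w in order:
            order.remove(w)
        order.append(w)                      # accessed way becomes MRU
        if pseudo:
            p = w ^ 1                        # partner way in the same half
            if ways[p] is not None:
                order.remove(p)
                order.insert(len(order) - 1, p)  # partner becomes 2nd MRU
    return victims

trace = ["A", "B", "C", "D", "A", "E"]
print("true LRU evicts:  ", run(trace, pseudo=False))
print("pseudo-LRU evicts:", run(trace, pseudo=True))
```

Under these assumptions, after filling the set and re-touching block A, the two policies already disagree on which block the next miss evicts.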
b. Here is an LC-3b assembly language program that performs array accesses. Show the address trace produced by the load instructions. Suppose that we have a 1024-byte, 4-way set-associative cache with 4-byte blocks. Calculate the data-cache hit ratio assuming a perfect LRU replacement scheme.
        .ORIG x5000
        AND R0, R0, #0      ; R0: array sum
        AND R3, R3, #0
        AND R1, R1, #0      ; R1: array index
        ADD R3, R3, #2      ; R3: outloop index
        LEA R6, DATA
OUTLOOP LDW R4, R6, #0
        AND R2, R2, #0
        ADD R2, R2, #5      ; R2: inloop index
INLOOP  ADD R1, R4, R3
        LDB R7, R1, #0
        ADD R0, R7, R0
        ADD R4, R4, R4
        ADD R2, R2, #-1
        BRP INLOOP
        ADD R3, R3, #-1
        BRZP OUTLOOP
        HALT
DATA    .FILL x400
        .END
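An address trace read off the assembly by hand can be sanity-checked by replaying the loops. The sketch below reproduces only the LDB data accesses: it assumes the word at DATA holds x400 (per the .FILL), and it deliberately omits the LDW accesses to DATA itself and all instruction fetches, so it is not the complete trace the problem asks for.

```python
# Hypothetical replay of the LDB address trace from the program above.
# Assumptions: R4 is reloaded with x400 on each outer pass via LDW;
# LDW data accesses and instruction fetches are NOT included here.
trace = []
for r3 in (2, 1, 0):           # outer loop: R3 = 2, 1, 0 (BRZP OUTLOOP)
    r4 = 0x400                 # LDW R4, R6, #0 reloads the base each pass
    for _ in range(5):         # inner loop: R2 = 5 down to 1 (BRP INLOOP)
        trace.append(r4 + r3)  # ADD R1, R4, R3 ; LDB R7, R1, #0
        r4 += r4               # ADD R4, R4, R4 doubles the base
print([hex(a) for a in trace])
```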
1. What is the average access time experienced by the CPU if the main memory uses 4-way interleaving? Assume that the memory is built with DRAM chips that allow the first word to be accessed in 8 cycles, while subsequent words of the block are delivered in 4 clock cycles per word. Also, one clock cycle is required to send one word to the cache.
2. What is the average access time if the main memory is not interleaved (i.e. there is a single DRAM chip)?
3. What is the improvement obtained with interleaving?
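The miss penalties can be tallied directly from the timing parameters. The arithmetic below assumes a 4-word cache block (the block size is not restated here, so treat it as an assumption) and one common accounting in which DRAM accesses and transfers to the cache do not overlap; the problem's intended model may differ.

```python
# Hypothetical miss-penalty tally, assuming a 4-word block and no overlap
# between DRAM access and word transfer.
WORDS = 4
FIRST, NEXT, SEND = 8, 4, 1   # cycles: first access, later accesses, transfer

# 4-way interleaved: all four banks work in parallel, each producing one
# word after FIRST cycles; the four words are then sent one per cycle.
interleaved = FIRST + WORDS * SEND

# Single chip: words come out of the DRAM sequentially, and each word
# still costs one cycle to send to the cache.
single = FIRST + (WORDS - 1) * NEXT + WORDS * SEND

print(interleaved, single, single / interleaved)
```

Under these assumptions interleaving roughly halves the miss penalty; the average access time then follows once the hit time and miss rate from the problem are folded in.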
* Associativity (1, 2, or 4 ways)
* Block size (1, 2, 4, 8, 16, or 32 bytes)
* Total cache size (256B or 512B)
* Replacement policy (LRU or FIFO)

Assumptions: All memory accesses are one-byte accesses. All addresses are byte addresses.
AND, for those who cannot get enough of this stuff, the following problem:
(Note: You can get full credit without doing this problem)
You will be given a cache simulator (just the executable) with a hard-coded configuration. Your job is to determine the configuration of the cache. The simulator takes a trace of memory addresses as input and provides a hit ratio as output. Find the following:
* Associativity (1, 2, 4, or 8 ways)
* Block size (1, 2, 4, 8, 16, or 32 bytes)
* Total cache size (256B, 512B, or 1024B)
* Replacement policy (LRU or Pseudo-LRU)

Show the traces you used to determine each parameter of the cache. Assumptions: All memory accesses are one-byte accesses. All addresses are byte addresses.
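As one example of the kind of probe that works here, block size can often be inferred from the hit ratio alone using stride probes: access address 0 and then address s; while s is smaller than the block size, the second access hits the block fetched by the first. The sketch below demonstrates the idea against a toy stand-in for the black-box simulator (the stand-in's geometry is an arbitrary assumption; the probe itself never relies on knowing it).

```python
# Hypothetical stride probe for block size. The simulator below is a toy
# stand-in (assumed direct-mapped, 256 B, 16 B blocks) for the provided
# black-box executable, which reports only a hit ratio.
CACHE_BYTES, BLOCK = 256, 16

def hit_ratio(trace):
    sets = CACHE_BYTES // BLOCK
    cache, hits = {}, 0
    for a in trace:
        idx, tag = (a // BLOCK) % sets, a // BLOCK // sets
        if cache.get(idx) == tag:
            hits += 1
        else:
            cache[idx] = tag
    return hits / len(trace)

# While the stride is below the block size, the second access hits; the
# smallest stride whose hit ratio drops to 0 reveals the block size.
for s in (1, 2, 4, 8, 16, 32):
    print(s, hit_ratio([0, s]))
```

Similar probes (long strided sweeps for total size, repeated conflict sets for associativity and replacement policy) cover the remaining parameters.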