1. If you answered (e) you are watching too much TV! ;-)

2. False. The kernel knows only about VPs. A VP may have multiple user-level
   threads in it, but the kernel is unaware of them.

3. True. Due to memory hierarchy effects when a processor has to move from one
   address space to another.

4. To minimize the ill effects of inopportune preemption, including
   thread-related cleanup, and to avoid acquiring a spinlock on behalf of a
   thread that is likely to hold it for longer than the time remaining until
   preemption.

5. 1) T1 and T2 get enqueued on q1; the predicate is not true for either.
   2) T3 gets enqueued on q2; it is resumed immediately since its predicate is
      true and it is the only runnable thread in the serializer. T3 completes
      execution in the serializer and returns.
   3) After T3 leaves the serializer, T1 and T2 both have their predicates
      satisfied and get scheduled in some order.
   4) Depending on the order in which the serializer schedules T1 and T2, and on
      any preemptions of T1 and T2 by the underlying operating system (which is
      beyond the control of the serializer), T1 and T2 can return (11, 11),
      (11, 12), (12, 11), or (12, 12). This is because both T1 and T2 can be
      simultaneously active inside the join(c1) statement, and the value each
      returns depends on when that thread reads the variable 'readers'.

6a. (10 pts)
    1) A writer is allowed into the resource even when other readers are active.
    2) When a writer completes (write_end), exactly one waiting reader is
       signalled even if there are multiple readers waiting.
    Both of these points violate the stated problem specification.

6b. (20 pts)
    NOTE: The following is one possible solution. Several variants are possible
    depending on the desired priority between readers and writers (which is left
    unspecified in the question). A rough pthreads rendering of this monitor is
    sketched after answer 7.

    monitor mon (read_start, read_end, write_start, write_end)
    {
        bool busy = false;      /* flag to indicate if a writer is active */
        int readers = 0;        /* concurrent readers count */
        queue rq;               /* readers wait on this queue */
        queue wq;               /* writers wait on this queue */

        read_start()
        {
            if (busy)
                wait(rq);           /* wait if a writer is active */
            /* awakened */
            readers++;              /* increment concurrent reader count */
            if (queue(rq))
                signal(rq);         /* wake up other readers, if any */
        }

        read_end()
        {
            readers--;              /* decrement concurrent reader count */
            if (readers == 0) {     /* all concurrent readers done */
                if (queue(wq))
                    signal(wq);     /* wake up a waiting writer, if any */
            }
        }

        write_start()
        {
            if (busy || (readers > 0))
                wait(wq);           /* writer needs exclusive access */
            /* awakened */
            busy = true;
        }

        write_end()
        {
            busy = false;
            if (queue(rq))
                signal(rq);         /* wake up reader(s), if any */
            else if (queue(wq))
                signal(wq);         /* else wake up a writer, if any */
        }
    } /* end monitor */

7.  1. At t0, switch from U3 to U2:    0.02 ms   (same process)
       running time for U2:            1.00 ms
    2. switch from U2 to U1:           0.001 ms  (same LWP)
       running time for U1:            3.00 ms
    3. switch from U1 to U4:           0.03 ms   (different process)
       memory hierarchy overhead:      0.5 ms
       running time for U4:            5.00 ms
    4. switch from U4 to U5:           0.02 ms   (same process)
       running time for U5:            5.00 ms
                                      -------------
       Total elapsed time:            14.571 ms
                                      -------------
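Supplementary sketch for 6b (not part of the required answer): the monitor above
is written in Hoare-style pseudocode. Below is one way the same reader/writer
policy might look with POSIX threads, assuming Mesa-style condition variables
(so the monitor's "if ... wait" checks become while loops) and illustrative
names (struct rw, rw_read_start, etc.) that are not taken from the question.
Since pthreads cannot test whether a condition-variable queue is non-empty,
write_end simply broadcasts to readers and signals one writer; the while loops
make any spurious wake-up harmless.

    /* Minimal pthreads sketch of the 6b monitor; names are illustrative. */
    #include <pthread.h>
    #include <stdbool.h>

    struct rw {
        pthread_mutex_t lock;   /* plays the role of the monitor lock   */
        pthread_cond_t  rq;     /* readers wait here (monitor queue rq) */
        pthread_cond_t  wq;     /* writers wait here (monitor queue wq) */
        bool busy;              /* true while a writer is active        */
        int  readers;           /* number of concurrent readers         */
    };

    void rw_init(struct rw *s)
    {
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->rq, NULL);
        pthread_cond_init(&s->wq, NULL);
        s->busy = false;
        s->readers = 0;
    }

    void rw_read_start(struct rw *s)
    {
        pthread_mutex_lock(&s->lock);
        while (s->busy)                     /* wait while a writer is active */
            pthread_cond_wait(&s->rq, &s->lock);
        s->readers++;
        pthread_cond_broadcast(&s->rq);     /* let other waiting readers in */
        pthread_mutex_unlock(&s->lock);
    }

    void rw_read_end(struct rw *s)
    {
        pthread_mutex_lock(&s->lock);
        if (--s->readers == 0)              /* last reader wakes one writer */
            pthread_cond_signal(&s->wq);
        pthread_mutex_unlock(&s->lock);
    }

    void rw_write_start(struct rw *s)
    {
        pthread_mutex_lock(&s->lock);
        while (s->busy || s->readers > 0)   /* writer needs exclusive access */
            pthread_cond_wait(&s->wq, &s->lock);
        s->busy = true;
        pthread_mutex_unlock(&s->lock);
    }

    void rw_write_end(struct rw *s)
    {
        pthread_mutex_lock(&s->lock);
        s->busy = false;
        pthread_cond_broadcast(&s->rq);     /* prefer waiting readers ...   */
        pthread_cond_signal(&s->wq);        /* ... but also wake a writer in
                                               case no readers are waiting  */
        pthread_mutex_unlock(&s->lock);
    }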