1. Rupiah

2. True. At low lock contention you want to grab the lock as soon as it is released, to reduce lock-acquisition latency. Dynamic delay, which attempts to get the lock immediately upon release, has this advantage over static delay. (A C sketch of the dynamic-backoff idea follows these answers.)

3. False. If the multiprocessor is an NCC (non-cache-coherent) shared-memory multiprocessor, then spin-on-read and spin-on-test-and-set both have to traverse the ICN on every spin cycle. (Both spin strategies are sketched in C below.)

4. False. Multithreading allows overlapping I/O with computation even on a uniprocessor.

5. False. Pre-emption can be implemented using timer interrupts from the kernel, for example.

6. Kernel threads: the unit of scheduling in the kernel; the kernel schedules the "ready" kernel threads pre-emptively.
   LWPs: the vehicle for implementing user-level computation. There is a one-to-one correspondence between an LWP and a kernel thread, and an LWP has a PCB and other user-level accounting information associated with it. A Unix process can have several LWPs in it, and switching among LWPs of the same process is cheaper than switching between LWPs of different processes.
   User-level threads: any number of these can be created. They are not recognized by the kernel and have to be associated with a particular LWP; switching among them is non-pre-emptive, done by a user-level scheduler. If a user-level thread blocks, the associated LWP and the kernel thread that LWP is mapped to block as well. (The bound/unbound distinction survives in POSIX contention scope; see the sketch below.)

7. PPC (protected procedure call) is an RPC-like mechanism in Psyche. It is directed at an address space and is a blocking call from the point of view of the client thread that made it.
   Implementation: the kernel does an upcall, using software interrupts, to any virtual processor in the target address space to effect the PPC.
   Blocking semantics: if the client and server VPs are on different physical processors, the kernel delivers a "blocked in the kernel" interrupt to the user-level scheduler in the client's VP. If the client and server VPs are on the same physical processor, the server's VP is given a chance to complete the PPC, and the kernel delays the "blocked in the kernel" interrupt to the client VP.
   Rationale for these semantics: the former scenario increases parallelism; the latter reduces the number of context switches when the client and server are on the same physical processor.

8. Path {A; B} end
   Some number of A's and some number of B's can be executing concurrently, subject to two conditions: an A can be started at any time; a B can be started only if (#B's completed + #B's currently running) < (#A's completed). (A one-semaphore realization is sketched below.)

   Path {A} + {B} end
   Any number of A's or any number of B's (but not both) can execute concurrently.

9. Fixes to the A and B functions:

   A {
       if queue(y)   /* P2 has already arrived */
           signal(y)
       else
           wait(x)
   }

   B {
       if queue(x)   /* P1 has already arrived */
           signal(x)
       else
           wait(y)
   }

   (A pthreads version of this rendezvous, without the queue() primitive, is sketched below.)

10. serializer printer {
        queue waiting_q;
        crowd printer_crowd[M];
        int free_printers = M;

        print_file(int printer_number, FILE *fp) { ....... }

        entry_point check_printer_q(FILE *my_file) {
            enqueue(waiting_q) until (free_printers > 0);
            for (i = 0; i < M; i++) {
                if (empty(printer_crowd[i]))
                    break;
            }
            free_printers--;
            join_crowd(printer_crowd[i]) {
                print_file(i, my_file);
            }
            free_printers++;
        }
    }

    (A pthreads equivalent is sketched below.)
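Sketch for answer 2: one way to see the dynamic-delay advantage is exponential backoff with C11 atomics. The very first exchange is attempted immediately, so at low contention the lock is grabbed as soon as it is released; the delay grows only after failed attempts. The names lk and lock_backoff, and the 1024 cap, are illustrative choices, not from the answer itself.

    #include <stdatomic.h>

    static atomic_int lk = 0;

    void lock_backoff(void) {
        unsigned delay = 1;
        while (atomic_exchange(&lk, 1)) {      /* first try is immediate */
            for (volatile unsigned i = 0; i < delay; i++)
                ;                              /* crude pause loop */
            if (delay < 1024)
                delay <<= 1;                   /* dynamic: back off under contention */
        }
    }

    void unlock_backoff(void) { atomic_store(&lk, 0); }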
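Sketch for answer 3: the two spin strategies in C11 atomics. On a cache-coherent machine the plain load in spin_on_read spins in the local cache, but on an NCC machine that load also crosses the ICN every iteration, which is why the claimed advantage disappears. Function and variable names are illustrative.

    #include <stdatomic.h>

    static atomic_int lock_word = 0;

    /* spin-on-test-and-set: every iteration is an atomic RMW,
       which always traverses the ICN. */
    void spin_on_ts(void) {
        while (atomic_exchange(&lock_word, 1))
            ;                                  /* retry the test-and-set itself */
    }

    /* spin-on-read (test-and-test-and-set): spin on an ordinary load,
       attempting the atomic exchange only when the lock looks free. */
    void spin_on_read(void) {
        for (;;) {
            while (atomic_load(&lock_word))
                ;                              /* local spinning only if caches are coherent */
            if (!atomic_exchange(&lock_word, 1))
                return;                        /* got the lock */
        }
    }

    void release(void) { atomic_store(&lock_word, 0); }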
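Sketch for answer 6: the bound/unbound distinction shows up in POSIX as thread contention scope. PTHREAD_SCOPE_SYSTEM roughly means "bind this thread to its own kernel-scheduled entity (LWP)", while PTHREAD_SCOPE_PROCESS means "multiplex it over the process's LWPs". This is only an approximation of the Solaris model, and many kernels support just one of the two scopes.

    #include <pthread.h>
    #include <stddef.h>

    static void *worker(void *arg) { return arg; }

    int make_bound_thread(pthread_t *t) {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        /* Ask for a kernel-scheduled (bound) thread; may fail with ENOTSUP. */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        int rc = pthread_create(t, &attr, worker, NULL);
        pthread_attr_destroy(&attr);
        return rc;
    }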
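Sketch for answer 8: the counting condition for Path {A; B} end falls out of a single counting semaphore initialized to 0. The semaphore's value is always (#A completed) minus (#B started), so sem_wait enforces exactly the stated constraint on B. The do_A/do_B wrappers and the stand-in bodies are illustrative, not part of any real path-expression compiler.

    #include <semaphore.h>
    #include <stdio.h>

    static sem_t a_done;                /* sem_init(&a_done, 0, 0) before use */

    static void A(void) { puts("A"); }  /* stand-in bodies */
    static void B(void) { puts("B"); }

    void do_A(void) {
        A();                            /* an A may start at any time */
        sem_post(&a_done);              /* each completed A enables one B */
    }

    void do_B(void) {
        sem_wait(&a_done);              /* blocks until #A completed > #B started */
        B();
    }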
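Sketch for answer 9: the queue() primitive (testing whether anyone is blocked on a semaphore) is not a standard operation. The same lost-wakeup race can be avoided portably by keeping the "has the other side arrived?" state explicitly under a mutex, as in this pthreads rendezvous; all names here are illustrative.

    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int arrived = 0;             /* how many of P1/P2 have reached the point */

    void rendezvous(void) {             /* called once each by P1 and P2 */
        pthread_mutex_lock(&m);
        if (++arrived == 2)
            pthread_cond_signal(&cv);   /* partner already waiting: wake it */
        else
            while (arrived < 2)
                pthread_cond_wait(&cv, &m);  /* first to arrive: wait */
        pthread_mutex_unlock(&m);
    }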
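Sketch for answer 10: the serializer maps naturally onto a mutex plus a condition variable. The waiting_q becomes threads blocked on the condition, and the crowd becomes the region where the mutex is not held while the long print operation runs. M, busy[], and the stand-in print_file body mirror the pseudocode but are assumptions.

    #include <pthread.h>
    #include <stdio.h>

    #define M 4                              /* number of printers (assumed) */

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  printer_free = PTHREAD_COND_INITIALIZER;
    static int busy[M];                      /* 1 if printer i is in use */
    static int free_printers = M;

    static void print_file(int printer, FILE *fp) {
        (void)fp;
        printf("printing on %d\n", printer); /* stand-in body */
    }

    void check_printer_q(FILE *my_file) {
        int i;
        pthread_mutex_lock(&m);
        while (free_printers == 0)           /* enqueue ... until (free_printers > 0) */
            pthread_cond_wait(&printer_free, &m);
        for (i = 0; i < M; i++)
            if (!busy[i]) break;             /* find a free printer */
        busy[i] = 1;
        free_printers--;
        pthread_mutex_unlock(&m);            /* leave the "serializer" */

        print_file(i, my_file);              /* join_crowd: long operation
                                                done without holding the lock */

        pthread_mutex_lock(&m);              /* re-enter to release the printer */
        busy[i] = 0;
        free_printers++;
        pthread_cond_signal(&printer_free);
        pthread_mutex_unlock(&m);
    }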