Thread management paper review

From: Reid Wilkes (reidwilkes_at_hotmail.com)
Date: Tue Jan 27 2004 - 19:01:19 PST

  • Next message: Prasanna Kumar Jayapal: "Review of Thread management implications paper - Anderson et al..."

    This paper was an interesting look at some of the underlying issues in the design of thread management mechanisms, particularly in a shared-memory multiprocessor environment. I found it much easier to read than some of the others we have read; the technical material is no less dense, but it is presented in a relatively clear manner. Another reason is that, at least at the interface level, I am familiar with threading libraries and have used them in projects I have been a part of. In fact, one thing this paper made me aware of is that the extensive use of threads in common user applications is not necessarily an old idea: at the time the paper was written (1989) the authors state that it is common, but the implication is that it was still a relatively new development, in the sense that the design of the threading system itself remained a topic of research interest. This was, for me, one of the first in-depth looks at how a thread management system might actually be implemented.

    The paper explores two primary issues in the design of a threading system: how to organize and protect potentially shared data in a multiprocessor environment, and how to implement the locks that force the processors to synchronize their access to that shared data. The discussion naturally centers on the performance of the various alternatives, and the authors consider two different measures of performance. The first is latency, the time to complete a single thread operation when there is no other contention on the system. The second is throughput, the total number of operations the system can process in a given amount of time.

    Several options are considered for structuring the queues and lists the system must maintain to provide thread scheduling. The most primitive, and worst performing, is to protect all of the data structures with one master lock. The most advanced option proposed is to keep as much data as possible local to each processor, so that very little locking is needed in the "normal" case. The drawback of this approach is that the data (whether lists of ready threads to run or lists of memory buffers pre-allocated for thread control blocks) may not end up distributed evenly across the processors, and extra work (and therefore extra time) must then be spent attempting to restore balance. A small sketch of this local-data idea appears after this review.

    The ideas surrounding how best to implement spin locking were also quite interesting to study in detail. Again, the simplest design (simply having each processor spin on a test-and-set instruction) does not necessarily yield the best performance, because bus contention from the memory accesses of the various processors produces a cumulative slow-down; a second sketch below contrasts the simple spin lock with a less bus-hungry variant. However, as with the previous discussion, there are seemingly few good options for improving performance, since even a small increase in the complexity of the algorithms used can greatly decrease the overall performance of the system. That last point seems to be the main take-away from the paper: the design of a good thread management system is very difficult and requires a deep understanding of the hardware/software interactions that take place.
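    The following is a minimal sketch (not the paper's code) of the per-processor local-data idea mentioned above: each processor keeps its own free list of thread control blocks and only falls back to a locked global pool when its local list runs dry. The names, sizes, and structure here are hypothetical, chosen just to illustrate why the common case needs no locking and why imbalance forces extra work.

        /* Hypothetical per-processor free lists with a locked global pool. */
        #include <stddef.h>
        #include <pthread.h>

        #define NCPU 8

        typedef struct tcb { struct tcb *next; } tcb_t;

        typedef struct {
            tcb_t *free_list;          /* touched only by this processor: no lock */
        } per_cpu_t;

        static per_cpu_t cpu[NCPU];

        static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
        static tcb_t *global_free_list;    /* shared: protected by global_lock */

        /* Common case: pop a control block from the local list with no locking.
           Only when the local list is empty do we pay for the global lock --
           this is the "extra work to restore balance" mentioned above. */
        tcb_t *alloc_tcb(int me) {
            tcb_t *t = cpu[me].free_list;
            if (t) {
                cpu[me].free_list = t->next;
                return t;
            }
            pthread_mutex_lock(&global_lock);
            t = global_free_list;
            if (t)
                global_free_list = t->next;
            pthread_mutex_unlock(&global_lock);
            return t;                      /* may be NULL if the pool is exhausted */
        }

        /* Freeing goes back to the local list; in a real system some policy
           would occasionally return blocks to the global pool to rebalance. */
        void free_tcb(int me, tcb_t *t) {
            t->next = cpu[me].free_list;
            cpu[me].free_list = t;
        }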

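    And here is a minimal sketch of the spin-lock point, written with C11 atomics rather than anything taken from the paper. The first routine is the simplest design, spinning directly on an atomic test-and-set; the second is a commonly used refinement (often called test-and-test-and-set) that spins on an ordinary cached read and only attempts the expensive atomic operation when the lock looks free, which is one way to cut down the bus traffic described above.

        /* Two spin-lock variants; names are mine, not the paper's. */
        #include <stdatomic.h>
        #include <stdbool.h>

        typedef struct { atomic_bool held; } spinlock_t;

        /* Simplest form: every waiting processor repeatedly performs an
           atomic read-modify-write, generating bus traffic on each iteration. */
        void lock_test_and_set(spinlock_t *l) {
            while (atomic_exchange(&l->held, true)) {
                /* spin */
            }
        }

        /* Test-and-test-and-set: spin on a plain read served from the local
           cache, and only try the atomic exchange when the lock appears free. */
        void lock_test_test_and_set(spinlock_t *l) {
            for (;;) {
                while (atomic_load(&l->held)) {
                    /* spin locally; no bus traffic until the cache line changes */
                }
                if (!atomic_exchange(&l->held, true))
                    return;   /* acquired */
            }
        }

        void unlock(spinlock_t *l) {
            atomic_store(&l->held, false);
        }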


    This archive was generated by hypermail 2.1.6 : Tue Jan 27 2004 - 19:01:29 PST