From: Reid Wilkes (reidwilkes_at_verizon.net)
Date: Sun Jan 11 2004 - 23:15:00 PST
The paper on Programming Semantics is the oldest one we've read yet, having
been published in 1966. It's quite amazing to see what the state of the art
in computing was at that time. I actually found this paper to be the most
difficult to read, and I believe that might be because the terminology and
basic concepts of computing were so different from what they are today; the
Unix paper, for instance, was much easier to digest because it describes
essentially the same basic concepts that make up a modern OS today.

As
stated in the introduction, the purpose of the paper is to introduce a
formal language to specify behavior of programs running on a multiprogrammed
computer system. (It is telling as to the antiquity of these ideas that
Microsoft Word's spellchecker does not recognize the word
"multiprogrammed".) The formal language, according to the paper, has a
sophistication ".midway between an assembly language and an advanced
algebraic language." If this doesn't get the blood pumping, I don't know
what will.

From what I can tell, the problem being addressed was that at the
time there was very little knowledge of how to construct systems where
multiple independent programs could be running on the machine at the same
time. It seems to me that this concept is so ingrained in the minds of
modern computer and software engineers that it is a little hard to fully
appreciate the state of affairs when there was a void of such knowledge. The
paper proposes (although by way of a reference to previous work) the idea of
"fork", as well as "join" and what is essentially a critical section (the
"lock"/"unlock" instruction pair); a rough sketch of how these primitives
map onto modern code appears below.
Some other ideas proposed by the paper are less familiar to a modern reader
- although the term "process" is introduced (with maybe a more ambiguous
meaning than it would have today) - the term "computation" is also
introduced, which is something that seems to have little parallel in the
modern OSes I am familiar with. A
computation is described as being a collection of processes cooperating on
some job or task. Either there is no formal equivalent to this in today's
operating systems, or this concept comes closer to, say, a Windows process
while the process concept described in the paper is really more like a
Windows thread.

Aside from the synchronization and execution control issues,
the bulk of the paper deals with a sort of security system built around a
concept called a "C-list" (or "Capability List"). Every computation has a
"C-List" and essentially it is a list of resources the processes in the
computation have the right to use, and in some cases in what ways they may
be used (read/write/execute). In many ways this almost seems like the
opposite concept of the ACLs described in the Multics paper. With Multics
ACLs, the protection list is on the resource, and the ID of the principal
is checked against the resource's list. With the C-List, however, the
protection list is carried by the computation, and the ID of the resource
is checked against the computation's list. A small sketch of the two checks
side by side appears below.
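
Here is a minimal sketch of that contrast, again in Python. The class
names, field names, and IDs are all invented for illustration - neither the
paper nor Multics looks anything like this - but the direction of the
lookup is the point.

    # ACL style (Multics): the protection list lives on the resource;
    # we look the principal's ID up in the resource's list.
    class AclResource:
        def __init__(self, acl):
            self.acl = acl  # principal ID -> set of rights

        def allows(self, principal_id, right):
            return right in self.acl.get(principal_id, set())

    # Capability style (the paper's C-list): the list lives on the
    # computation; we look the resource's ID up in the computation's list.
    class Computation:
        def __init__(self, c_list):
            self.c_list = c_list  # resource ID -> set of rights

        def allows(self, resource_id, right):
            return right in self.c_list.get(resource_id, set())

    # The same question, asked in opposite directions:
    segment = AclResource({"reid": {"read", "write"}})
    print(segment.allows("reid", "write"))      # ACL check: True

    job = Computation({"segment-7": {"read"}})
    print(job.allows("segment-7", "write"))     # C-list check: False
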
Overall, my main impression from this paper was a sense of surprise at how
foreign the entire system seemed - unlike with any of the other papers, I
had a great deal of difficulty trying to envision the problem space and the
solution being proposed.