Summary: The main point of the talk is to emphasize the need for and usefulness of conceptual -- dare I say ontological -- clarity in developing system specifications. The machine is not the world. The world is not the machine. There is, however, a useful partial mapping from the machine to the world. The machine models some aspects of the world. This partial mapping or model is the specification. Jackson calls this model the shared phenomena. The specification is exactly that which can be mapped both onto the world and onto the machine -- and nothing more. Because we are typically interested in moving from phenomena in the world to implementation in a machine -- we want the machine to reflect the world, not the world to reflect the machine -- it is also useful to distinguish moods of phenomena in the world that are indicative (are the case) from those that are optative (are desired). These are the basic points of the presentation; Jackson elaborates on them by discussing 4 facets of the relationship between the world and the machine, 4 kinds of denial, and 4 principles for accepting the world -- no need to repeat them here.
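To make the shared-phenomena idea concrete, here is a toy sketch in Python; all the phenomenon names are hypothetical examples of my own, not Jackson's:

    # Toy sketch: a specification may refer only to phenomena that are
    # shared by the world and the machine -- nothing more.
    world_phenomena   = {"pilot_intent", "cloud_cover",
                         "transponder_reply", "advisory_output"}
    machine_phenomena = {"transponder_reply", "advisory_output",
                         "timer_interrupt", "memory_state"}

    # The legitimate vocabulary of the specification:
    shared = world_phenomena & machine_phenomena
    assert shared == {"transponder_reply", "advisory_output"}
    # "pilot_intent" is private to the world and "memory_state" private
    # to the machine; neither may appear in the specification.

In the same vocabulary, an indicative statement might be "transponder replies arrive within one second of interrogation" (a fact about the world), while an optative one might be "an advisory is output whenever a reply indicates a threat" (a desire to be realized by the machine).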
Assessment: In The Human Condition, a work of political philosophy, Hannah Arendt proposes to "think what we are doing." At the most general level, this is also what Jackson is doing -- thinking through what it is that software designers and developers are doing in the world. But being a technologist first, and a philosopher second, his primary aim is utilitarian -- to use this clarity of thought to build better software systems.
So, is this conceptual clarity of any use? Or is it just too vague and abstract? My guess is that most of the time the distinctions Jackson makes do not add much value to designers' and developers' understanding of the systems they are working with, because practitioners share the common background knowledge to implicitly, automatically, and effectively switch back and forth between the world and the machine without confusing the two. But there are two cases where the clarity might be quite useful. The first is when the phenomena in the world are sufficiently complex or subtle that the distinction between the world and the machine might become confusing. For (a bad) example, consider three airplanes, A, B, and C, each with a hypothetical TCAS-like system that interactively negotiates safe collision avoidance with intruder aircraft. Each of the three onboard systems represents all three airplanes. A will represent B and C in its system. But will it also represent the fact that B and C are representing each other and negotiating? Is B's representation of C something in the world, or something in the machine? Will A's representation of B include B's representation of A? In situations like this, it is important to think carefully. The second case where clarity might help is in discussing system requirements with people who know little about software, and more importantly, do not think in terms of software systems. It is difficult for the engineer to communicate with such people because they do not share the implicit background knowledge that normal technical communication depends upon. In such cases it is useful to be able to make an explicit distinction between what these people expect to happen in the world and what they expect the software to do.
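The nesting in the airplane example can be made concrete with a small sketch (the types and fields here are hypothetical illustrations of my own, not an actual TCAS data model):

    # Toy sketch of the representation question: does A's model of B
    # include B's model of A?
    from dataclasses import dataclass, field

    @dataclass
    class AircraftModel:
        ident: str
        # First-order view: what this system believes about each intruder.
        intruders: dict = field(default_factory=dict)        # ident -> state
        # Second-order view: what this system believes each intruder
        # believes. Here the world/machine line starts to blur: B's model
        # of C is machine phenomena aboard B, but world phenomena to A.
        intruder_models: dict = field(default_factory=dict)  # ident -> AircraftModel

    a = AircraftModel("A", intruders={"B": "climbing", "C": "level"})
    # If A also tracks the B-C negotiation, it must represent B's model:
    a.intruder_models["B"] = AircraftModel("B", intruders={"C": "level"})

Whether to continue the recursion (A's model of B's model of A, and so on) is exactly the kind of question the world/machine distinction helps to pose sharply.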
Incidentally, one important point that Jackson does not pursue in depth is technological reverse adaptation. It is true and empirically obvious that the world adapts itself to technological artifacts that are introduced into it. The most common example of reverse adaptation is highway construction. Highways not only meet the needs of existing and projected traffic loads in and between communities (requirements), but also shape the evolution of these loads by affecting the geographical distribution of residences (urban sprawl), commerce, industry, and recreation. Similarly, computing resources are affecting the way organizations work, people communicate, socialize, and so forth. This observation is a little far afield from delimiting software specifications, but it does fit into the broader sketch of the relationship between the world and the machine: the machine not only models and controls some phenomena in the world, but also dynamically recreates the world.
I thoroughly enjoyed this paper for philosophical reasons (I confess),
but also for big-picture utilitarian reasons.
Summary: This paper is a survey of the state of the art and future directions for formal methods -- big surprise, given the title. The emphasis in the "state of the art" section is on notable examples of (i.e., successes with) the use of formal methods. The "future directions" section is programmatic (what should be done), not speculative (what will likely be done), and is organized under the headings: fundamental concepts, methods and tools, integration of methods, education and technology transfer.
Assessment: In my assessment of the Computer
Roundtable, I note that the opinions expressed there collectively amount
to a fair amount of finger-pointing and hand-wringing. I also suggest
that whether formal methods are widely used or not is almost entirely a
question of how they are packaged and delivered to industry in terms of
their cost, usability, and time and manpower requirements.
The "future directions" section of this paper is a much more articulate
and detailed statement of this same basic point: If formal methods are
to be used more widely and more effectively, this is what has to happen.
Very nicely done.
Summary: Diverse views on and assessments of formal methods are represented -- which is the point of the Computer roundtables. It is nonetheless impressive -- in some sense of the word -- that the roundtable manages to assemble 10 mutually orthogonal perspectives on formal methods. Some of the divergent opinions represented include: (1) formal methods are / are not viable tools for use in industry; (2) formal methods are / are not cost effective; (3) formal methods are mathematically rigorous methods / are just somewhat more formal than seat-of-the-pants design; (4) software engineers need more and better mathematical education / formal methods proponents need to adapt their methods and tools to the skill sets of real-world engineers. A real cacophony.
Assessment: Apparently, formal methods is one of those academic research agendas that has not quite lived up to the potential that early and passionate proponents foresaw. Happens all the time in all disciplines. Such agendas often degenerate into finger-pointing factionalism, soul-searching, and hand-wringing as they fade into irrelevance. Often, but not always. In this collection of opinions I think I detect the finger-pointing, soul-searching, and hand-wringing. It is intellectually reassuring, but almost embarrassing, to read this roundtable. It's like eavesdropping on a private family squabble. Too much dirty linen being aired and too much whining for the casual visitor.
I do get the strong impression, though, that there is an important, if
circumscribed, place for formal methods in industrial practice. (This
basic point is made much better by [Clarke, Wing, et al.]) The key
issues are the following. (1) Formal methods are most useful
and necessary in complex, subtle, and mission- or safety-critical software
systems. Unless the industrial organization of software development
is transformed in the years ahead, they just will not find application
in domains where bug patches suffice. (2) If formal methods are to
be used, they must be usable. Either the methods must become more
usable, or the users more sophisticated or specialized, or some combination
of the two. (3) They must be justified as cost-effective for a particular
application domain . . . which is a stronger condition than being justifiable
as cost-effective. Two closely related alternatives are: (a) they
are required by fiat, for example, government regulations, or (b) it becomes
conventional wisdom that they are worth employing. (4) A demonstrably
correct system specification is always preferable to an uncertain one,
other things being equal. This is the promise of formal methods.
But other things are not equal -- cost, time, ability, manpower.
The squabble is about whether and how to influence these other things.
Summary: The paper describes an approach to writing requirements specifications for process-control systems, with TCAS II as the testbed for the approach. Stated less academically, the paper describes the development of the specifications for TCAS II, and the lessons learned when some basic methodological convictions in the area of formal methods were carried beyond a research project to a commercial-quality product. Most of the paper focuses on the development of the specification language (RSML), an extension of statecharts, and on the system requirements specification.
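For orientation, here is a minimal sketch of the hierarchical-state style that statecharts (and hence RSML) build on; the state and event names are my own generic illustration, not RSML syntax and not the actual TCAS II specification:

    # Toy statechart-style machine: nested states plus event-driven
    # transitions. (Generic illustration only -- not RSML.)
    from dataclasses import dataclass, field

    @dataclass
    class State:
        name: str
        substates: list = field(default_factory=list)    # nesting = hierarchy
        transitions: dict = field(default_factory=dict)  # event -> target name

    monitoring = State("Monitoring", transitions={"threat_detected": "Advisory"})
    advisory   = State("Advisory",   transitions={"threat_cleared": "Monitoring"})
    operating  = State("Operating",
                       substates=[monitoring, advisory],
                       transitions={"power_off": "Standby"})
    standby    = State("Standby",    transitions={"power_on": "Operating"})

The hierarchy is what keeps large specifications manageable: a transition out of "Operating" applies no matter which substate is active, so the specification need not flatten into an enormous single-level state machine.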
The key conclusion of the work reported in the paper is that formal methods can be successfully and usefully applied to large, complex industrial systems. From what I gather about the status of formal methods in computer science and software engineering, this demonstration is quite noteworthy in its own right.
The key lessons from the work reported in this paper are two: (1) Applying formal methods to a pseudocode specification that had evolved over 15 years was hard -- harder than starting from scratch -- for the usual reasons: incrementally increasing design complexity, lack of (or degraded) conceptual coherence, poor documentation, and poor design decisions that had to be respected; (2) Involvement of users in the development of the formal specification and its presentation greatly aided the production of a correct, and more importantly, an intelligible and useful product.
Assessment: This paper describes the TCAS II specification
project in considerable detail. Taken as a case study, the paper
makes the case (or creates one empirical fact supporting the assertion)
that formal methods can be very useful. But that's all it
is -- an interesting case study. It does not attempt to generalize
from the particulars of the case.