Shamik Basu

CSE 584 Software Engineering Autumn '98

Paper 3: Requirements

Requirements Specification for Process-Control Systems by Nancy Leveson et al.

Many software requirements validation techniques involve building prototypes or executable specifications, or waiting until the software is constructed and then testing the entire system. In either case, confidence that the system has certain properties is limited to the test cases that were actually executed. This paper's approach models the required software blackbox behavior using a state-based model and applies formal analysis to that model. The authors' long-term goal is to chalk out a formal process for the entire software product cycle, but this paper is limited to the requirements phase. The approach is applied to a real-world system, which demonstrates that building a formal requirements model for a complex process-control system is possible and that such a model can be readable and reviewable by people who are not computer scientists. The

requirements specification language presented in this paper was designed for process-control systems, where the goal is to maintain a relationship or function over time between the input to the system and the output from the system in the face of disturbances. Tradeoffs between functional goals and constraints must be identified early and resolved. A blackbox, behavioral specification uses only the current process state as inferred from measurements of the controlled variables, past process states that were measured or inferred, past corrective actions, and predictions of future states. A formal

requirements model makes possible mathematical verification and generation

of standard system engineering and safety analyses such as fault trees. The

criteria for designing the specification language are that it should specify blackbox behavior only, without internal design information, and that it should be minimal, simple, readable, reviewable, and usable.

Readability and writability are often conflicting goals, and in such cases the authors chose readability. Feedback from domain experts helped

overcome the designers' individual preferences. The authors also discovered

that abstraction hindered readability. The language designed by the authors

was called RSML. This language is loosely based on an older language called

Statecharts. The basic state machine is composed of states connected by

transitions. Default or start states are the targets of transitions that have no source. Superstates, parallelism, arrays, conditional connectives, directed

communications, events, macros, transition buses, temporal functions and

step semantics are interesting properties of the language. Important points

made by the authors are the difficulty in abstracting away from the design,

the necessity of maintaining an audit trail of decisions and the reasons they were made, and the way legacy code makes the final design more complex.
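
To make the flavor of such a specification concrete, here is a minimal sketch in Python. This is my own illustration, not RSML syntax: a hypothetical thermostat controller with a default state and guarded transitions, echoing the process-control goal of maintaining a relationship between an input (measured temperature) and an output (the machine's state).

    # A minimal state machine in the spirit of RSML (illustration only).
    # States are connected by guarded transitions; the default (start)
    # state is the one the machine begins in, as if it were the target
    # of a transition with no source.

    class StateMachine:
        def __init__(self):
            self.transitions = []   # entries: (source, event, guard, target)
            self.state = None

        def set_default(self, state):
            self.state = state      # the state whose transition has no source

        def add_transition(self, source, event, guard, target):
            self.transitions.append((source, event, guard, target))

        def step(self, event, value):
            for source, ev, guard, target in self.transitions:
                if source == self.state and ev == event and guard(value):
                    self.state = target
                    return
            # no enabled transition: remain in the current state

    # Hypothetical thermostat: heat exactly when the temperature is low.
    m = StateMachine()
    m.set_default("Off")
    m.add_transition("Off", "sample", lambda t: t < 18, "Heating")
    m.add_transition("Heating", "sample", lambda t: t >= 20, "Off")
    m.step("sample", 15)   # cold reading  -> Heating
    m.step("sample", 21)   # warm reading  -> Off
    print(m.state)         # prints: Off

Real RSML of course adds the superstates, parallelism, and other features listed above; this sketch shows only the basic machine.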

This paper was an interesting and thorough presentation of the project the

authors worked on. The process seems entirely necessary for a safety

critical system but I would be interested in exploring how much of it

carries over into other systems. If one were developing a new system, a fair number of the above guidelines could be put to use. How about taking on a legacy system - would the principles still apply? For example, the project presented in

the paper already had a 300-page design document. Would the cost involved in

creating a document of that magnitude still make this approach worthwhile?

Would the approach scale to an even larger project?

Deriving specifications from requirements by Jackson and Zave

A requirement states desired relationships in the environment that will be

brought about or maintained by the machine. The requirement is entirely

about the environment. The specification describes the behavior of the

machine at its interface with the environment. It is also expressed entirely

in terms of environment phenomena. Given a requirement, we progress to the

specification by purging the requirement of all features - such as

references to environment phenomena not accessible to the machine - that

would preclude implementation. The paper presents its ideas with the help of

an example - that of the control of a turnstile at the entry to a zoo. The

first step is to decide what environment phenomena are of interest - this is

called the designation set. It is important to clearly define terms by which

phenomena will be described in the documents. If the machine is to interact

with the environment, both must share certain phenomena. Control of each

shared phenomenon resides either in the environment or in the machine

depending on what initiates the phenomenon. Control of an event is the power

to perform it spontaneously but only when it is not precluded by other

constraints on its occurrence. The authors present two kinds of environment

description. The optative mood describes properties they would like the

machine to bring about or maintain. The indicative mood describes properties

the environment has, or will have, regardless of the behavior of the

machine. Mixed mood descriptions must be avoided because this would cause

confusion and also when a system has been built the optative properties

should become indicative. Requirements are determined by the customer. To

become a specification, the requirement must satisfy three rules: all environment phenomena mentioned must be shared; all phenomena required to be constrained must be directly controlled by the machine; and all required constraints on events must be expressed in terms of preceding events, or states in preceding intervals. Real-time constraints are captured by statements such as "a particular delay cannot be more than 250 msec."
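
As a rough sketch of how these rules play out for the turnstile - my own illustration with invented event names, not the paper's notation - the shared phenomena can be partitioned by who controls them, and the machine's behavior can be expressed purely in terms of preceding shared events:

    # Sketch of the zoo turnstile (illustration only; names invented).
    # The visitor (environment) initiates Coin and Push; the machine
    # initiates Lock and Unlock. Entry itself is not a shared phenomenon,
    # so the specification may not mention it.

    ENV_CONTROLLED = {"Coin", "Push"}
    MACHINE_CONTROLLED = {"Lock", "Unlock"}

    def machine(history):
        # Machine-controlled events are constrained only in terms of
        # preceding events, as the third rule requires.
        if history and history[-1] == "Coin":
            return "Unlock"    # admit one visitor per coin
        if history and history[-1] == "Push":
            return "Lock"      # relock after each entry
        return None            # nothing to constrain

    # A real-time rule such as "Unlock must follow Coin within 250 msec"
    # would be stated the same way, over timestamped events.
    trace = []
    for event in ["Coin", "Push", "Push"]:
        trace.append(event)
        action = machine(trace)
        if action:
            trace.append(action)
    print(trace)   # ['Coin', 'Unlock', 'Push', 'Lock', 'Push', 'Lock']

The third rule is what forces the shift from the requirement "no one enters without paying", which mentions unshared environment phenomena, to a specification that constrains only Lock and Unlock in terms of Coin and Push.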

Formal methods by Clarke et al.

The first part of this report assesses the state of the art in specification

and verification. For verification, advances in model checking and theorem

proving are presented. In the past, the practical use of formal methods

seemed hopeless because notations were too obscure, techniques did not scale, and tool support was inadequate. Formal specification uses a language with a

mathematically defined syntax and semantics. System properties described include functional behavior, timing behavior, performance characteristics, and internal structure. Different specification languages, each able to handle a different aspect of analysis, are sometimes combined. The main

benefit of writing things down precisely is gaining a deeper understanding

of the system. It helps uncover design flaws. The spec itself can be

formally analyzed. Successful examples of formal specification can be found in projects such as CICS, CDIS, and TCAS. Model checking is a popular formal method of

verification. This technique relies on building a finite model of a system

and checking that a desired property holds in that model. Two general

approaches to model checking are used today. In temporal model checking,

specs are expressed in a temporal logic and systems are modeled as finite

state transition systems. In the second approach, the spec is given as an automaton, and the system, also modeled as an automaton, is checked for conformance to it. Model checking is very powerful because it produces

counterexamples that represent subtle design errors. Its main disadvantage

is state explosion.
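
To make the idea concrete, here is a toy explicit-state checker in Python - my own sketch of the general technique, applied to a hypothetical (and deliberately buggy) mutual-exclusion model; real tools use far more sophisticated representations precisely to cope with state explosion:

    # Toy explicit-state model checker (illustration only). It explores
    # the reachable states of a finite transition system breadth-first
    # and, if it finds a state violating a safety property, returns the
    # path to that state as a counterexample.
    from collections import deque

    def check(initial, successors, safe):
        frontier = deque([(initial, [initial])])
        visited = {initial}
        while frontier:
            state, path = frontier.popleft()
            if not safe(state):
                return path                  # counterexample trace
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None                          # property holds in the model

    # Two processes with no locking at all (the bug). States are (pc0, pc1).
    def successors(state):
        pc0, pc1 = state
        moves = []
        if pc0 == "idle": moves.append(("crit", pc1))
        if pc1 == "idle": moves.append((pc0, "crit"))
        if pc0 == "crit": moves.append(("idle", pc1))
        if pc1 == "crit": moves.append((pc0, "idle"))
        return moves

    safe = lambda s: s != ("crit", "crit")   # mutual exclusion property
    print(check(("idle", "idle"), successors, safe))
    # prints the counterexample
    # [('idle', 'idle'), ('crit', 'idle'), ('crit', 'crit')]

Theorem proving is a technique by which both the system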

and its desired properties are expressed as formulas in some mathematical

logic. This logic is given by a formal system that defines a set of axioms

and inference rules.
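
In the same spirit, a toy "prover" can show what axioms and an inference rule look like in action. This is my own minimal sketch - forward chaining with modus ponens over invented facts - and nothing like a production theorem prover:

    # Toy theorem prover (illustration only). Axioms are atomic facts,
    # rules are Horn clauses (premises -> conclusion), and the single
    # inference rule is modus ponens, applied until a fixed point.

    def proves(axioms, rules, goal):
        derived = set(axioms)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)   # one modus ponens step
                    changed = True
        return goal in derived

    axioms = {"coin_inserted"}
    rules = [({"coin_inserted"}, "barrier_unlocked"),
             ({"barrier_unlocked"}, "entry_possible")]
    print(proves(axioms, rules, "entry_possible"))   # prints: True

Further work needs to be done in composition,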

decomposition, abstraction, reusable models and theories, combination of

mathematical theories, and data structures and algorithms. To be attractive to practitioners, tools must meet the following criteria: early payback, incremental gain for incremental effort, multiple use, integrated use, ease of use, efficiency, ease of learning, orientation toward error detection, focused analysis, and evolutionary development.

This paper is in sharp contrast with the previous one by Jackson and Zave.

The previous one clearly laid down its terminology and presented its ideas

in a very clear manner that could be understood by audiences at all levels.

This paper, on the other hand, assumes that the audience already has a keenly developed sense of logic manipulation. I understood that there are two types of formal verification - model checking and theorem proving - but very little beyond that. For example, model checking is defined as a technique that relies on building a finite model of a system and checking whether a desired property holds in that model. How does one build such a model? What is the point of defining the concept if there is no way of figuring out how to actually use the method? The paper is targeted squarely at formal methods researchers and not at anyone in the outside world. Consider this line, which

describes how model checking and theorem proving can be combined to provide

an exciting new technique: "A sufficiently expressive logic can be used to define temporal operators over finite state transition systems in terms of maximal or minimal fixed points." I think this paper amply illustrates the

point repeatedly emphasized in the next set of papers - researchers need to

be able to make industry understand what they are talking about. The fault

is not always on the side of the listeners.

Formal Methods: Point-Counterpoint

The first paper by Hinchey et al. identifies four reasons for industry's

reluctance to take formal methods to heart. Common misconceptions on both

sides are that formal methods can guarantee correctness and that formal

methods are difficult to use and delay the development process. Standards bodies should make formal verification necessary for certification. Tools

need to improve significantly. Industry needs to be educated and have access

to expert advice. The next paper by Glass claims that there is a huge gap between academics' and practitioners' points of view. Formal methods is

one area in which this chasm becomes apparent. Glass says formal methods are

underdefined, underevaluated and perhaps not the right direction to proceed

in. Cliff Jones says that in most cases a less-than-completely-formal approach is much more fruitful. His approach places minimal emphasis on notation, and the central idea is representing an abstraction of the system. Specs and

design inspections based on formal descriptions result in catching

potentially expensive errors. Jackson and Wing agree that a lightweight

formal method is the right way to go in most cases other than safety

critical work because the cost of full verification is prohibitive. The specification language needs to be more focused; only the portions of the project that are most important to get absolutely right should be formally modeled; analysis should be limited to finding errors rather than doing complete verification; and for a large system it is better to compose partial specs. Anthony Hall says that it is cheaper to produce even

noncritical software using formal methods. He cites examples from projects

he has worked on. Dill and Rushby next say that even though the impact of

formal methods on software development is disappointing, the impact on

hardware verification is a lot more positive. Traditional validation

techniques are not keeping up with the design complexity of hardware. The

main reason for this success is that applications of formal methods to

hardware have become cost effective. Benefits include an improved product,

reduced validation costs and reduced time to market. Hardware has some

intrinsic advantages over software as a target for formal methods because it

has no pointers, no potentially unbounded loops, no recursion and no

dynamically created processes. There are also a relatively small number of

major design elements, like pipelining and cache coherence. Some lessons that can be transferred to the software world are: build better tools, use formal methods to find bugs rather than to prove correctness, use a scaled-down system to simplify the model, focus on targeted problems, and get researchers to work on real-world problems. Holloway and Butler emphasize the point that

formal methods are not used in industry because tools lack functionality and are buggy, because industrially relevant problems have not yet been solved by formal methods, and because researchers have not done the things needed to ensure industry professionals actually use their ideas. Zave then argues that just

educating software professionals about formal methods is not enough. Finding

really good models in any domain is very hard work; it is actually research in itself. Researchers should pick particular domains, as the traditional engineering disciplines do, and develop models for those domains.

Lutz then says that engineers use mathematics as a tool and rarely fall back on first principles to solve a problem, so formal methods researchers should try to come up with some kind of handbook that the engineer can then apply. Parnas makes the same point, saying that formal mathematical

methods are not practical enough. The notation used is cumbersome.

Researchers are advocating ideas that have not been fully developed yet.

Gries believes formal methods have been avoided because undergrads often

fear mathematics and notation. He says logic has never been taught correctly

and that people are not familiar with the concept of logic as a tool.

A common theme is that industry professionals need to be educated in the use of formal methods, and particularly in abstraction. The need for better tools is

also repeatedly stressed.