State Machine Oriented Specifications
1) One important method for specifying key aspects of some software systems is to use state machine descriptions.
2) You’ve seen state machines used frequently before in computer science
a) Using regular expressions to define token structures in compilers
i) These regular expressions were often converted to state machines to drive lexing
ii) Regular expressions have precisely the same computational power as finite state machines
(1) One can convert a regular expression to a finite state machine, and vice versa
b) Basic digital logic design is done using state machines
c) Many protocols (caching, communication, etc.) can be specified using state machines
d) The “path expressions” I mentioned in class to control invocation of operations on ADTs can be represented as state machines (e.g., open (read | write)* close)
e) Computers themselves can be viewed as state machines: the data (registers, memories, and inputs) are the state and the instructions define transitions between states
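The path expression mentioned above, open (read | write)* close, can be sketched directly as a small deterministic machine. The state and event names below are illustrative, not from any particular library:

```python
# A minimal DFA sketch for the path expression: open (read | write)* close.
# The transition table maps (state, event) pairs to successor states;
# any missing pair means the call sequence is illegal.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "read"): "opened",
    ("opened", "write"): "opened",
    ("opened", "close"): "closed",
}

def accepts(events, start="closed", accepting=("closed",)):
    """Return True if the event sequence is a legal usage of the ADT."""
    state = start
    for ev in events:
        key = (state, ev)
        if key not in TRANSITIONS:
            return False  # no transition defined: illegal call sequence
        state = TRANSITIONS[key]
    return state in accepting

accepts(["open", "read", "write", "close"])  # → True
accepts(["read"])                            # → False (read before open)
```

The same table-driven structure works for any of the state machine uses listed above; only the alphabet and states change.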
3) State machines can be either finite-state or infinite-state
a) We’ll worry mostly about finite-state ones, although we’ll keep an eye on infinite-state ones.
4) State machines are usually defined as a graph
a) A finite alphabet of symbols (think of these as input events)
b) A finite set of states (represented as circles or other similar objects)
i) One state is defined as the starting state
ii) One or more states may be defined as terminating (or accepting) states
(1) Some state machines are intended to terminate, indicating whether an input sequence is “accepted”
(2) Others are not intended to terminate but rather continue making transitions given an infinite input sequence
c) A set of transitions from one state to another (represented as arrows), taken as symbols from the alphabet are seen on the input
i) If there is at most one transition from a given state for a given symbol, then the machine is a deterministic machine
ii) If there is at least one state that has more than one transition on the same symbol, then the machine is a non-deterministic machine
iii) Deterministic and non-deterministic finite state machines have precisely the same computational power
(1) But converting a non-deterministic machine to a deterministic machine may cause a large (exponential, in theory) blowup in the size of the state space
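The conversion mentioned above is the subset construction: each DFA state is a set of NFA states, which is where the potential exponential blowup comes from. A sketch, using a small hypothetical NFA that accepts strings over {a, b} ending in "ab":

```python
from itertools import chain

# Hypothetical NFA: state -> symbol -> set of successor states.
# It accepts strings ending in "ab" (state 2 is the accepting state).
NFA = {
    0: {"a": {0, 1}, "b": {0}},
    1: {"b": {2}},
    2: {},
}
START = 0

def determinize(nfa, start):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    dfa, worklist = {}, [frozenset({start})]
    while worklist:
        S = worklist.pop()
        if S in dfa:
            continue  # already expanded
        dfa[S] = {}
        for sym in {s for q in S for s in nfa[q]}:
            T = frozenset(chain.from_iterable(nfa[q].get(sym, ()) for q in S))
            dfa[S][sym] = T
            worklist.append(T)
    return dfa

dfa = determinize(NFA, START)
# len(dfa) can be as large as 2**len(nfa) in the worst case -- the blowup.
```

For this particular NFA only three subsets are reachable, but families of NFAs are known where all 2^n subsets appear.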
5) Classic examples
a) Accept all alphanumeric strings that are legal identifiers (a letter followed by letters and digits)
b) Accept all binary strings with even parity
c) Accept coins in a vending machine; determine when enough money has been inserted for a product and produce the correct change
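The parity example needs only two states, which makes it a nice minimal illustration. A sketch (state names are illustrative):

```python
# Even-parity checker as a two-state DFA.
# The machine starts in "even", and "even" is the only accepting state.
def even_parity(bits):
    state = "even"
    for b in bits:
        if b == "1":
            state = "odd" if state == "even" else "even"
        # a "0" leaves the parity state unchanged
    return state == "even"

even_parity("1001")  # two 1s → even parity → True
even_parity("1011")  # three 1s → odd parity → False
```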
6) What does all this have to do with specifying software?
a) A state machine can act as a specification of a software system
7) Examples
a) Specify the operation of a digital watch with two function buttons. Pushing button 1 causes the watch to successively change from displaying the time, to setting the hours, to setting the minutes, and back to displaying the time. Pushing button 2 allows the user to increment either the hours or the minutes when the watch is in the appropriate state. There isn’t information about how to implement this, but it gives precise information about what the watch should do.
i) Of course, it really isn’t precise and complete
(1) What if you push button 2 when the watch isn’t in “an appropriate state”?
(2) When you push button 1 from displaying time to setting hours, can the time change?
(3) Is there a commitment to display the internally kept time on the watch accurately?
b) Voice mail directions
c) Registering your name, address, credit card, etc. on a web site from which you wish to purchase something
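The watch specification above can be sketched as a three-state machine. Note that what button 2 does in the display state is deliberately left as a no-op here, matching the gap noted in the specification (class and state names are invented for illustration):

```python
# A sketch of the two-button watch as a state machine.
class Watch:
    def __init__(self):
        self.state = "display"
        self.hours, self.minutes = 12, 0

    def button1(self):
        # Button 1 cycles: display -> set_hours -> set_minutes -> display
        self.state = {"display": "set_hours",
                      "set_hours": "set_minutes",
                      "set_minutes": "display"}[self.state]

    def button2(self):
        # Button 2 increments only in the setting states.
        if self.state == "set_hours":
            self.hours = (self.hours + 1) % 24
        elif self.state == "set_minutes":
            self.minutes = (self.minutes + 1) % 60
        # in "display" the spec says nothing; here it does nothing

w = Watch()
w.button1()  # display -> set_hours
w.button2(); w.button2()  # increment hours twice
```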
8) Problem: although the basic idea is just fine, the practice of specifying using state machines can be quite difficult, in large part because the size of the representation can grow large quickly
9) Example: let’s look at a simple example of using a state machine to track two processes trying to access a shared resource using a critical section
a) [Example based on the Clarke/Emerson example]
b) What would happen if there were three processes under consideration? N processes? The size of the representation would blow up
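A quick way to see the blowup: if each process is modeled with, say, three local states (the names below are illustrative), the naive product machine for N processes has 3^N states before any unreachable ones are pruned:

```python
from itertools import product

# Each process has three local states; the global state of N processes
# is one tuple drawn from the cross product of their local state sets.
LOCAL_STATES = ["noncritical", "trying", "critical"]

def product_states(n):
    """Enumerate all global states of n processes (before pruning)."""
    return list(product(LOCAL_STATES, repeat=n))

[len(product_states(n)) for n in (2, 3, 10)]  # → [9, 27, 59049]
```

Reachability analysis cuts this down in practice, but the worst case remains exponential in the number of processes.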
10) Communicating finite state machines
a) One approach to clarifying the machines and reducing their size is to build a machine out of a set of communicating finite state machines
b) [See web page example: http://www.owlnet.rice.edu/~comp481/examples/commdfa.html]
c) Changes to the model include the ability to attach conditions (guards) to the transitions (e.g., checks to see whether the other state machine is in a given state); a transition is taken only if the input event occurs and the condition holds
i) Although this example doesn’t show it, machines can also be defined to generate additional events when a transition is taken
ii) These events can, in turn, cause other transitions to be taken
d) By defining several state machines largely separately, one can simplify the representation of the machines and automatically compute a single state machine from them by taking the cross product
e) It is important to note that the communicating machines are, of course, not entirely separate: in this specific example, each has conditions that depend on the state in which the other machine is
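The guard idea can be sketched concretely for the mutual-exclusion setting: each machine may enter its critical section only if the other machine is not already critical. The machine structure and event names below are illustrative, not taken from the linked page:

```python
# Two communicating machines with guards. A global state is a pair
# (state of machine 0, state of machine 1); guards inspect the other machine.
def step(state, machine, event):
    me, other = state[machine], state[1 - machine]
    if event == "request" and me == "idle":
        me = "waiting"
    elif event == "enter" and me == "waiting" and other != "critical":
        me = "critical"  # guard: depends on the other machine's state
    elif event == "leave" and me == "critical":
        me = "idle"
    else:
        return state  # event not enabled (or guard false): no transition
    return (me, other) if machine == 0 else (other, me)

s = ("idle", "idle")
s = step(s, 0, "request")  # → ("waiting", "idle")
s = step(s, 0, "enter")    # → ("critical", "idle")
s = step(s, 1, "request")
s = step(s, 1, "enter")    # guard fails: machine 0 is critical, no change
```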
11) Hierarchical state machines
a) Harel invented yet a further improvement to communicating finite state machines: hierarchical state machines
b) Conventional state machines occupy a single state at a time
c) Communicating state machines occupy multiple states (one in each machine) at a time
d) Statecharts (Harel’s notation) does the same, but allows for hierarchies to be built
i) [Show example from TCAS II in RSML, a slight variant of Statecharts]
ii) It’s basically an and-or tree of state machines
iii) Machines separated by dotted lines are “and” machines, where each of the machines occupies exactly one state at a time; it’s easy to imagine taking the cross product to create a flattened machine
iv) Everything else is an “or” machine, essentially like a standard state machine (although they can in turn be nested “and” machines)
v) There are a zillion details
(1) What are the start states upon entering an “and” machine? These notations usually have an arrow with nothing at the tail pointing at the start states.
(2) Nested states are allowed to cause exits from the enclosing “and” machines (usually by showing a transition to the edge of the enclosed box)
vi) Perhaps the most complicated issue in statecharts is: what are the semantics?
(1) There are two central questions
(a) First, there is a question of synchrony: one approach (the “synchrony hypothesis”) states that you accept a single external event and then propagate all internal events until the machine stabilizes; then you accept another external event, etc. One model of this is to think of the machine as executing infinitely fast. The alternative is to allow external and internal events to interleave. The latter appears to be used in hardware specifications more frequently, and the former in software specifications (so we will consider the synchrony hypothesis as a rule)
(b) Second, there is a question of what to do when there are multiple events available: which of the enabled transitions should be taken? There are indeed dozens of (published) choices, with subtle distinctions. Some of the more theoretically pleasing semantics seem, unfortunately, to be less intuitive to people. It is, however, critical to have a well-defined semantics; after all, these are specification languages. At the same time, for most “normal” examples, the differences among the semantics are not substantive. (Probably the most common semantics are the “Statemate semantics”, Harel and Naamad, which define the formal semantics of statecharts in terms of the operational semantics defined by Statemate.)
vii) RSML: this is a variant of statecharts defined by Leveson et al. precisely for specifying the TCAS system
(1) It allows for some additional communication structures
(2) It defines the guards separately from the transitions, using and-or tables
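An and-or table can be represented very directly: rows are predicates, each column is a conjunction, and the guard holds if any column matches. A sketch, with invented predicate names (not from the actual TCAS specification):

```python
# RSML-style and-or table: "T" = predicate must be true, "F" = must be
# false, "*" = don't care. The guard is the OR of the columns, where each
# column is the AND of its entries.
PREDICATES = ["above_threshold", "mode_is_TA", "timer_expired"]
TABLE = [   # one row per predicate; two columns here
    ("T", "F"),
    ("T", "*"),
    ("*", "T"),
]

def guard_holds(values):
    """values: dict mapping predicate name -> bool."""
    num_cols = len(TABLE[0])
    for col in range(num_cols):
        column = [row[col] for row in TABLE]
        if all(entry == "*" or (entry == "T") == values[pred]
               for pred, entry in zip(PREDICATES, column)):
            return True  # this column's conjunction holds
    return False

guard_holds({"above_threshold": True, "mode_is_TA": True,
             "timer_expired": False})  # column 1 matches → True
```

Separating guards into tables like this keeps the transition diagrams readable even when the enabling conditions are complex.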
12) Validation: OK, assume we now have a terrific notation for defining hierarchical state machines: how do we validate them; that is, how do we ensure that they do what we want them to do?
a) Model checking
13) Aside on proofs
a) As I talked about with Hoare triples, etc., there has been an enormous amount of work in proving programs correct, in particular showing that an implementation satisfies a specification
b) Work in Z and in state machines has really shifted that focus to validating properties of the specification itself
i) Both: is it consistent, and is it what the specifier really wants?
c) For state machines, the view is that they are a good model of reactive systems: not of all software systems, of course
d) In this case, the validation takes place not in terms of state variables (like the birthday book), but in terms of the ongoing behavior of a program
i) Does it deadlock?
ii) Can we ever reach a particular state (like an error state)?
iii) Can a process be starved?
e) Amir Pnueli (and others) observed that one can describe many of these ongoing behaviors in terms of temporal logic
i) Temporal logic is essentially standard predicate logic plus temporal operators that allow you to say things like “always” and “in the next state”
ii) The basic idea is that you can write down a logic formula and then prove whether or not the (paths through the) state machine satisfy that formula
iii) Doing so increases your confidence in the specification
iv) Of course, these proofs can be difficult and error-prone
f) Model checking
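As a preview of model checking, the questions above (can we reach an error state?) can be answered for a finite machine by exhaustive search. A toy explicit-state checker, here verifying the invariant “always not (both processes critical)” for a guarded two-process model; the state names and guard are illustrative:

```python
from collections import deque

# Two mutual-exclusion processes; each may enter "critical" only if the
# other is not critical. successors() enumerates all enabled transitions.
def successors(state):
    for i in (0, 1):
        me, other = state[i], state[1 - i]
        nxt = None
        if me == "idle":
            nxt = "trying"
        elif me == "trying" and other != "critical":  # the guard
            nxt = "critical"
        elif me == "critical":
            nxt = "idle"
        if nxt is not None:
            yield (nxt, other) if i == 0 else (other, nxt)

def check_invariant(start, bad):
    """Breadth-first search of reachable states; report a bad state if any."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if bad(s):
            return False, s  # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None  # invariant holds on all reachable states

ok, cex = check_invariant(("idle", "idle"),
                          lambda s: s == ("critical", "critical"))
# ok is True: the guard makes mutual exclusion hold in this small model
```

Real model checkers are vastly more sophisticated (symbolic representations, richer temporal logics, counterexample traces), but the core idea of exhaustively exploring the reachable state space is the same.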