CSE 505 Lecture Notes:

Notes on Luca Cardelli, "A Semantics of Multiple Inheritance"

November 19, 1997


Cardelli identifies record subtyping as an important form of polymorphism in object-oriented languages. Two interpretations of objects:

A record is a finite association of values to labels, for example

(x=3, y=4, displayed=true)

Records have record types:

(x=3, y=4, displayed=true) : (x : int, y : int, displayed : bool)

There is a subtype relation on record types, corresponding to the subclass relation in Simula or Smalltalk. A record type T is a subtype of another record type T' (written T <= T') if T has all of the fields of T', and possibly more, and each field of T' has a like-named field in T that is a subtype of it.

Note that for any type T, T<=T, and in particular, int<=int, etc, for all of the basic types.

If a has type T and T<=T', then a also has type T'. For example,

(x : int, y : int, displayed : bool) <= (x : int, y : int)

and so

(x=3, y=4, displayed=true) : (x : int, y : int)
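To make this concrete, here is a rough OCaml sketch of the same example. OCaml's ordinary records do not have this kind of subtyping, but its object types do; the names point3 and use_point are just for illustration.

(* an "object" with three fields *)
let point3 = object
  method x = 3
  method y = 4
  method displayed = true
end

(* use_point only needs the x and y fields *)
let use_point (p : < x : int; y : int >) = p#x + p#y

(* the coercion :> witnesses the subtype relation *)
let seven = use_point (point3 :> < x : int; y : int >)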

Records can have functions as components; we can use this to model methods. The rule for subtyping among functional types is:

if s' <= s and t <= t' then s -> t <= s' -> t'

(Note that the subtype relation is inverted on the left side of the arrow.)
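To see why the left side is inverted, here is a hedged OCaml sketch, again using object types (get_x and f are illustrative names): a function that needs only an x field can safely stand in for a function on richer points.

let get_x (p : < x : int >) = p#x

(* < x:int; y:int > <= < x:int > and int <= int, so
   (< x:int > -> int) <= (< x:int; y:int > -> int) *)
let f = (get_x : (< x : int > -> int) :> (< x : int; y : int > -> int))

let three = f (object method x = 3 method y = 4 end)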

There is a notation for defining a subtype of a record type:

type t = t' and (x:int)

says that t has all of the fields of t' plus an x:int field.
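A loose OCaml analogue of this extension notation, sketched with class types (the names point and displayed_point are only illustrative):

class type point = object
  method x : int
  method y : int
end

class type displayed_point = object
  inherit point            (* all the methods of point ... *)
  method displayed : bool  (* ... plus one more *)
end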

Or we can mask off fields:

type s = s' ignoring x

We can also have variant types:

type int_or_bool = [a:int, b: bool]
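In OCaml terms, a variant type like this corresponds roughly to an ordinary sum type; the constructor names below are illustrative.

type int_or_bool = A of int | B of bool

let describe (v : int_or_bool) =
  match v with
  | A n -> "int: " ^ string_of_int n
  | B b -> "bool: " ^ string_of_bool b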

We can model private variables (or private functions) using lexical scoping:

type counter = (increment: int -> unit, fetch: unit -> int)

val make_counter(n: int) =
    let count = cell n in
        (increment = lambda k: int . count := (get count) + k,
         fetch = lambda nil: unit . get count)

Here the type counter has an increment field and a fetch field (both functions), which increment the value of the counter and fetch its value. The make_counter function has a lexically scoped variable count, which is shared by both of these functions but hidden from all other code by virtue of lexical scoping.
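Here is a minimal OCaml rendering of make_counter, assuming Cardelli's "cell" corresponds to an OCaml ref; the record type and field names mirror the counter type above.

type counter = { increment : int -> unit; fetch : unit -> int }

let make_counter (n : int) : counter =
  let count = ref n in    (* private: reachable only from the two closures *)
  { increment = (fun k -> count := !count + k);
    fetch = (fun () -> !count) }

(* example use *)
let c = make_counter 10
let () = c.increment 5
let fifteen = c.fetch ()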


Formal Model

Cardelli defines a simple applicative language (see page 8 in the paper). The semantics of the language, and the typechecking rules, are defined as follows.

The semantics of expressions is given in a domain V of values. V contains all possible values, and is the coalesced sum of the domains of basic values, functions, records, variants, and two special values: bottom and wrong. Bottom is used to model runtime exceptions, such as divide-by-zero or a non-terminating computation. Wrong is used to model type errors.

A "coalesced sum" means that the values in V are "labelled" by the domain from which the value came. This could be modelled, for example, by having V consist of tuples of (domain_name,domain_value).

The idea is that we will define some type rules, which can be checked at compile time. These will guarantee that we will never compute the value 'wrong' at run time.

A type is just a subset of V. (Not every possible subset is a type -- the subsets must obey certain technical properties; the subsets that qualify are called ideals. The precise definition of 'ideal' is beyond the scope of the instructor.) However, all types found in programming languages are ideals, so this restriction won't get in our way.

The environment is a function from identifiers to values. The semantics of expressions is defined using a semantic function E, which maps expressions to functions from environments to values. (This should be a script E but I can't get html to do this.)

E is in Exp -> Env -> V
Env = Id -> V

The double square brackets in the formal definition of the semantic function on page 10 separate elements of the simple applicative language from elements of the meta-language.

For example, consider the equation for the meaning of identifiers:

E[[x]]v = v[[x]]

This says that the meaning of an identifier x is simply the result of looking up that identifier in the environment v.
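To show the shape of E, here is a small sketch for a fragment with just identifiers, lambdas, and application, reusing the value type sketched above; the exp type and the extend helper are illustrative, not the paper's definitions.

type exp =
  | Id of string
  | Lam of string * exp
  | App of exp * exp

type env = string -> value          (* Env = Id -> V *)

let extend (v : env) (x : string) (d : value) : env =
  fun y -> if y = x then d else v y

let rec eval (e : exp) (v : env) : value =
  match e with
  | Id x -> v x                                   (* E[[x]]v = v[[x]] *)
  | Lam (x, body) -> Fun (fun d -> eval body (extend v x d))
  | App (f, a) ->
      (match eval f v with
       | Fun g -> g (eval a v)
       | _ -> Wrong)                              (* applying a non-function *)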

There are also equations that give the meaning of type expressions (page 11). The type inference rules (page 12) let us deduce the types of expressions, given information about e.g. the types of their constituent parts.

There is a typechecking function T, which maps expressions into functions from type environments to type expressions:

T is in Exp -> TypeEnv -> TypeExp
TypeEnv = Id -> TypeExp
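For the same small fragment (with annotated lambdas), T has roughly the following shape; the texp and type_exp definitions are illustrative and omit records, variants, and subtyping.

type type_exp =
  | TInt
  | TArrow of type_exp * type_exp

type texp =
  | TId of string
  | TLam of string * type_exp * texp      (* lambda x : t . e *)
  | TApp of texp * texp

type type_env = string -> type_exp        (* TypeEnv = Id -> TypeExp *)

exception Type_error

let rec typecheck (e : texp) (u : type_env) : type_exp =
  match e with
  | TId x -> u x
  | TLam (x, t, body) ->
      TArrow (t, typecheck body (fun y -> if y = x then t else u y))
  | TApp (f, a) ->
      (match typecheck f u with
       | TArrow (s, t) when typecheck a u = s -> t
       | _ -> raise Type_error)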

The bottom line is the corollary on page 15:

if T[[e]]u = t then E[[e]]v /= wrong

In other words, if an expression e can be successfully typechecked, then the value of that expression at runtime will never be 'wrong' (provided the runtime environment v is consistent with the type environment u, i.e. each identifier's value belongs to the type assumed for it).