Dan Grossman
CSE505: Concepts of Programming Languages
Lecture 14: More on References, Exceptions, Then on to OOP
(The beginning of this lecture is a modified version of lec13.txt that
we didn't get to. Specifically, we got to line 239 of that file,
which is after, "Corollary: Deep subtyping...")
Last time we gave the semantics and typing rules for ML-like
references. Although we'll see shortly why it gets you into trouble,
we can even give an abstract interface to references:
type 'a ref; (* ref is an abstract type constructor *)
ref : 'a -> 'a ref (* we construct references with ref *)
! : 'a ref -> 'a
:= : 'a ref -> 'a -> unit (* it's infix, but syntax doesn't matter *)
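The interface in action, in ordinary OCaml (a minimal sketch; variable names are mine):

```ocaml
let r : int ref = ref 0   (* construct a reference *)
let () = r := !r + 1      (* update; (:=) returns unit, so we match it against () *)
let v = !r                (* read; v is now 1 *)
```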
Several people found it odd that the return type of := was unit. That
just means there is no reason to look at the result, which makes
sense: we use := only for its effect. () is a value just as 34 is a
value, so there's no reason we'll get stuck.
When we do things for effect, we want sequences so we can evaluate an
expression after the effect takes place. Sure enough, ML has ;, and
e1;e2 has the semantics "evaluate e1, then evaluate e2, and the result
is the evaluation of e2". Now that we have a mutable heap, e1;e2 is
not equivalent to e2.
Is ; really special? Nope. It has type 'b -> 'a -> 'a. So does
/\'b. /\'a. \x:'b. \y:'a. y. In fact, in ML with CBV, left-to-right evaluation
order, e1;e2 and (fun x -> fun y -> y) e1 e2 are totally equivalent.
But we know O'Caml is CBV right-to-left and writing
(fun y -> fun x -> y) e2 e1 for e1;e2 looks really odd! (There are
also precedence differences between ; and function-application.)
A final practical difference: O'Caml warns you if e1 doesn't have type
unit in e1;e2. I often turn this warning off.
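A quick sketch of the encoding (an assumption: OCaml's argument evaluation order is officially unspecified, but in practice it is right-to-left, which is what the comment on b relies on):

```ocaml
let r = ref 0
let a = (r := 1; !r)                        (* a = 1: effect, then read *)
(* encoding ; as application, with arguments swapped for right-to-left order *)
let b = (fun y -> fun _ -> y) !r (r := 2)   (* r := 2 runs first, so b = 2 *)
```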
Okay, so none of this changes what references are or do. We were
seeing how they interact with polymorphism and we already decided that
sound subtyping requires:
t1 < t2 t2 < t1
------------------
t1 ref < t2 ref
Now let's consider parametric polymorphism and this well-known problem:
let x : forall 'a. (('a list) ref) = ref [] in
x [int] := 1::[];
match ! (x[string]) with
hd::tl -> hd ^ "gotcha!"
| _ -> ()
What happened? It doesn't make sense to have values of the type
forall 'a. (('a list) ref) because we instantiate _then_ mutate,
putting a non-polymorphic value where we expect a polymorphic one.
What would not be a problem:
(1) Giving [] the type forall 'a. 'a list. In fact, ML does.
(2) Giving ref [] the type (forall 'a. 'a list) ref. ML can't write
types like that.
(3) Giving /\'a. ref ([]:'a list) the type forall 'a. (('a list) ref).
But this thunks the reference creation, which changes the semantics.
(In our example, we would create two references because x is bound to
a /\-abstraction, not a reference.)
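Option (3) in OCaml clothing, using a unit-thunk in place of the /\-abstraction (a sketch; the names are mine):

```ocaml
(* Thunking the reference creation: each "instantiation" (application
   to ()) makes a fresh reference, so the two uses below share no state. *)
let x : unit -> 'a list ref = fun () -> ref []
let r1 = x ()        (* a fresh reference *)
let r2 = x ()        (* a second, distinct reference *)
let () = r1 := [1]   (* !r2 is still [] -- thunking changed the semantics *)
```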
So we want x to be bound to a reference and we don't want it to have a
polymorphic type. But naively, that's exactly what ML type inference
(which we didn't cover) would do, given the abstract interface above.
So type inference had to be tweaked...
What we really want: Mutable data doesn't get a polymorphic type.
Why that's hard to get:
(1) ref looks like any other type constructor
(2) we can't special-case ref because we could hide it:
type 'a mytype; (* implemented as 'a list or 'a list ref or ... *)
myfun : 'a -> 'a mytype
The old way: Distinguish "weak" type variables which can't be
made polymorphic. So ref has type '_a -> '_a ref where '_a is weak,
which means ref [] can't be made polymorphic. So 'a mytype can't be
implemented with ref but '_a mytype can. Have to change interface when
you make something mutable.
The "new" way (10 years now): To give e a polymorphic type, e must be
a value or a variable. Aha: ref looks like a function, so ref []
isn't a value. But [] is a value. No type variable distinction or
changes to interfaces. More conservative (e.g., (fun () -> []) ())
but we've lived with it for 10 years.
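The value restriction, as you can see it in an OCaml top-level (the '_weak spelling of weak variables is how recent OCaml prints them):

```ocaml
let l = []         (* 'a list: [] is a syntactic value, so it generalizes *)
let r = ref []     (* '_weak1 list ref: an application, so it does not *)
let () = r := [1]  (* ok: this use fixes the weak variable to int *)
(* r := ["gotcha"]    would now be rejected by the type checker *)
```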
The "new" "new" way: It used to be that you needed an explicit
(non-polymorphic) type to tell ML the type of ref [] or (fun () -> []) ().
Now it will infer it based on uses but:
(1) Will reject if you use it polymorphically.
(2) Will reject if you never use it.
Warning: This is tricky stuff. Over and over again, very smart people
screw up languages with mutation and type variables. Don't be next.
So far, we have seen that without care, mutation messes up evaluation
(order matters), subtyping (depth is unsound), and polymorphism
(polymorphic references are unsound). It also breaks termination
(STLC with references can diverge without fix):
let x : (int->int) ref = ref \y. y in
let f : (int->int) = \y. (!x) y in
x := f;
f 0
This isn't too surprising since under the hood, recursive functions are
implemented with a cyclic data structure (see hw3).
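The diverging example transcribed to real OCaml (this is Landin's knot: no rec anywhere, but the cycle through the heap gives us recursion):

```ocaml
let x : (int -> int) ref = ref (fun y -> y)
let f : int -> int = fun y -> !x y
let () = x := f
(* Evaluating f 0 now loops forever: !x is f, which dereferences x again. *)
```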
And we don't get "free theorems" we might expect...
In System F with references (and the value restriction), this is a
free theorem: If f has type forall 'a. int -> 'a -> int, then f [t1] c
v1 and f [t2] c v2 have the same behavior.
This is false: If f has type forall 'a. int ref -> 'a ref -> int, then
f [t1] r1 r2 and f [t2] r1 r3 have the same behavior.
(It's a really clever trick I can show you here or in office hours.)
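One guess at such a trick (an assumption on my part; it may not be the one meant here): with references, f can detect whether its two arguments alias, so its result can depend on the 'a ref argument after all.

```ocaml
let f : int ref -> 'a ref -> int = fun r1 r2 ->
  r1 := 0;
  let v = !r2 in      (* read the 'a value ...            *)
  r1 := 1;
  r2 := v;            (* ... and write it back            *)
  !r1                 (* 0 iff r1 and r2 alias            *)

let r = ref 7
let aliased  = f r r               (* r2 := v undoes r1 := 1, so 0 *)
let distinct = f r (ref "hello")   (* distinct references, so 1    *)
```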
===============
Last class, we briefly discussed that ML does references differently
than C, C++, Java, etc. where most variables (and fields) are
mutable. Because these languages encourage an imperative style, it
would be a pain to have to use ref or ! every time we wanted to
make or read mutable memory.
But we _can_ give an operational semantics to the C-style approach.
For some reason, it's unpopular, but I do it in my research, so you
get to see it. The key idea is to understand that left-expressions
and right-expressions are evaluated differently. We'll do just a
trivial subset of C and assume left-to-right evaluation order:
t ::= int | t*
e ::= c | x | e = e | *e | &e | e;e
v ::= c | &x
Large-step semantics (using L(H,e,H',x) for "e left-evaluates to x
producing H'" and R(H,e,H',v) for "e right-evaluates to v producing H'"):
                                  L(H,e1,H1,x)  R(H1,e2,H2,v)
----------   -------------        ---------------------------
R(H,c,H,c)   R(H,x,H,H(x))        R(H,e1=e2,H2[x->v],v)

R(H,e,H1,&x)       L(H,e,H1,x)      R(H,e1,H1,v1)  R(H1,e2,H2,v2)
----------------   -------------    ------------------------------
R(H,*e,H1,H1(x))   R(H,&e,H1,&x)    R(H,e1;e2,H2,v2)

                 R(H,e,H1,&x)
-----------      --------------
L(H,x,H,x)       L(H,*e,H1,x)
Understanding this made me a better C programmer, but maybe I'm weird.
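These rules transcribe directly into a small interpreter (my transcription, not from the lecture; the heap is a naive association list):

```ocaml
type exp = Const of int | Var of string | Assign of exp * exp
         | Deref of exp | AddrOf of exp | Seq of exp * exp
type value = VInt of int | VAddr of string
type heap = (string * value) list

(* R(H,e,H',v): right-evaluation, returning the new heap and a value *)
let rec r_eval (h : heap) (e : exp) : heap * value =
  match e with
  | Const c -> (h, VInt c)
  | Var x -> (h, List.assoc x h)
  | Assign (e1, e2) ->
      let (h1, x) = l_eval h e1 in
      let (h2, v) = r_eval h1 e2 in
      ((x, v) :: List.remove_assoc x h2, v)   (* H2[x->v] *)
  | Deref e ->
      (match r_eval h e with
       | (h1, VAddr x) -> (h1, List.assoc x h1)
       | _ -> failwith "not an address")
  | AddrOf e -> let (h1, x) = l_eval h e in (h1, VAddr x)
  | Seq (e1, e2) -> let (h1, _) = r_eval h e1 in r_eval h1 e2

(* L(H,e,H',x): left-evaluation, returning the new heap and a location *)
and l_eval (h : heap) (e : exp) : heap * string =
  match e with
  | Var x -> (h, x)
  | Deref e ->
      (match r_eval h e with
       | (h1, VAddr x) -> (h1, x)
       | _ -> failwith "not an address")
  | _ -> failwith "not a left-expression"

(* example: y = *(&x); y    evaluates to x's contents, VInt 3 *)
let (_, v) = r_eval [("x", VInt 3); ("y", VInt 0)]
               (Seq (Assign (Var "y", Deref (AddrOf (Var "x"))), Var "y"))
```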
================
Exceptions
Like references, exceptions make us change the definition of all our
other constructs and break parametricity (e.g., /\'a.\x:t. raise E).
But otherwise, they're pretty straightforward. For simplicity, we'll
raise integers.
e ::= ... | raise e | try e catch (c,e)
(no new types are needed, since for simplicity we raise integer constants)
We can define evaluation by "bubbling up" exceptions. New rules:
e --> e'
--------------------
raise e --> raise e'
--------------------------------- -------------------------
(try (raise c) catch (c,e)) --> e (try v catch (c,e)) --> v
------------------------ -----------------------
(raise c) e --> raise c v (raise c) --> raise c
... and so on for every other construct we have. In particular:
c not equal to c'
----------------------------------------
(try (raise c) catch (c',e)) --> raise c
An uncaught exception would be "stuck", so our statement of type-safety
would need a caveat for this (e produces a value or some raise c).
That's really all there is to exceptions. In practice there are
faster ways to raise them (see 501?), but "popping off a bunch of
stack in O(1) time" doesn't change asymptotic behavior because you
build the stack in linear time.
You can also "compile exceptions away" by having every expression
evaluate to a sum (inl for exception or inr for normal), so you can
think of exceptions as a programmer convenience, but it's a big
convenience.
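A sketch of the sum encoding (my names; the real translation would be done expression by expression by a compiler):

```ocaml
(* Every expression produces either an exception (inl) or a value (inr). *)
type 'a res = Raise of int | Value of 'a

(* sequencing that "bubbles up" the exception past later work: *)
let bind (r : 'a res) (f : 'a -> 'b res) : 'b res =
  match r with Raise c -> Raise c | Value v -> f v

(* try/catch: handle a matching exception, propagate anything else *)
let try_catch (r : 'a res) (c : int) (handler : 'a res) : 'a res =
  match r with Raise c' when c' = c -> handler | _ -> r

let ex = try_catch (bind (Raise 3) (fun x -> Value (x + 1))) 3 (Value 0)
(* ex = Value 0: the raise bubbles past the bind and is caught *)
```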
Type-checking is more interesting:
D;G |- e : int D;G |- e1 : t D;G |- e2 : t
------------------ ----------------------------------
D;G |- raise e : t D;G |- try e1 catch(c,e2) : t
A raise can have any type because it doesn't become a value that gets
used. You can think of "e has type t" as meaning "_if_ e becomes a
value, the value will have type t". For a raise, that holds
vacuously. That's why we can give infinite loops any type we want
(see midterm or ML's let rec f x = f x).
For a try/catch, it's like an if where both branches need the same
type. You may not have seen that before because Java's try/catch has
statements which "don't have types" (or as I like to say, always have
type unit).
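In OCaml, try/with is an expression, so the body and the handler really do have to agree in type, just like the two branches of an if (List.hd raises Failure on an empty list):

```ocaml
let hd_or_zero (l : int list) : int =
  try List.hd l with Failure _ -> 0   (* both branches have type int *)
```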
==================
Finally, OOP... let's go back to slides...