Dan Grossman
CSE505: Concepts of Programming Languages
Lecture 13: Polymorphism / Type-Variables Wrap-up, References
Meta-comments:
(1) We have covered a tremendous number of programming-language
features: control-flow, assignment, functions, lexical scope, pairs,
records, variants (i.e., datatypes as in ML), subtyping, parametric
polymorphism (i.e., universal quantification), recursive types, ...
(2) We have covered a tremendous number of programming-language
concepts: operational models (i.e., interpreters), denotational models
(i.e., translators), equivalence, systematic renaming (i.e.,
alpha-equivalence), encodings, type safety, subsumption, higher-order
subtyping (i.e., covariant records and contra/covariant functions),
free theorems, erasure, tensions between expressiveness and fast
implementations, ...
So it's okay that your head feels full! But it also seems that the
overhead projector is putting everyone (including me) in a trance, so
we won't use it today.
There are three things to finish up on this whirlwind tour of "life
after IMP":
(1) abstract data types
(2) mutable references
(3) exceptions
We'll then move on to our last major topic: objects. We'll be less
formal because our heads are full, _not_ because OOP is less amenable
to formal modeling (though lambda-calculus remains "more popular" to
many researchers).
(1) Abstract data types
We did some of this last time, but it's worth reviewing. If we want
to reason about _part_ of a program _soundly_, we need to restrict how
the rest of the program can interact with it. I think this is
probably the most important thing a language can help you do. There
are several approaches:
* hiding -- have code and data the rest of the program cannot name
* typing -- have code and data that is namable but usable in only
restricted ways, and enforce the restriction at compile-time
* run-time checking -- same as typing, but checked at run-time
(more expensive, more flexible, later error detection)
We were investigating the typing and hiding approaches with this
example intlist ADT from ML:
type myintlist; (* an abstract type *)
val mtlist : myintlist;
val cons : int -> myintlist -> myintlist
val decons : myintlist -> ((int * myintlist) option)
val length : myintlist -> int
...
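One way to realize this signature concretely is as an OCaml module; here is a minimal sketch (the representation chosen, a plain int list, is just one possibility):

```ocaml
(* A sketch of one possible implementation of the abstract intlist
   signature as an OCaml module.  The signature hides the
   representation type from clients. *)
module IntList : sig
  type myintlist                                    (* abstract *)
  val mtlist : myintlist
  val cons   : int -> myintlist -> myintlist
  val decons : myintlist -> (int * myintlist) option
  val length : myintlist -> int
end = struct
  type myintlist = int list
  let mtlist = []
  let cons i l = i :: l
  let decons = function [] -> None | h :: t -> Some (h, t)
  let length = List.length
end

let demo = IntList.(length (cons 1 (cons 2 mtlist)))   (* 2 *)
```

Clients can call the four operations but cannot discover (or depend on) the fact that myintlist is int list.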
We want to write clients that:
* cannot break the list-library functions
* do not break when we swap in an alternate list ADT
* allow multiple list ADTs at once, ideally even being able to
choose one at run-time, put them in a data structure, etc.
System F did pretty well:
(/\`b \x:t1. list-client) [t2] list-library
where t1 = {mt:`b, cons : int->`b->`b, decons: `b->unit+(int*`b), len:`b->int}
(i.e., a record-type for list-functions with the list-type abstracted)
and t2 = mu `a. unit + (int * `a)
(i.e., an encoding of int-lists)
Shortcomings:
* it's the client that does the abstraction, not the library
* different libraries would have different types, so can't choose
one at run-time.
But System F can do better:
(/\`c.\y:(forall `b.(t1->`c)). y [t2] list-library)
[t3] (/\`b. \x:t1. list-client)
where
t1 is still the abstracted record type for the library
t2 is still this library's encoding of int-lists
t3 is the client's return type
This is great:
* after 2 evaluation steps it's the same as the previous solution
* all list-libraries have the same type:
forall `c. (forall `b. (t1->`c)) -> `c
But:
* you can't even write this type in ML
* it's a painful structure inversion -- passing clients to libraries
just doesn't scale well when you have lots of abstractions in a
large system
So maybe a solution that just used hiding (no type variables) would be
a lot easier, especially for people used to OOP. The idea is to keep
all lists hidden from clients and only expose functions (methods) that
operate on those lists. I wrote two examples in OCaml, using just
functions and records... <>
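The elided examples were not preserved in these notes, but a sketch in the same spirit looks like this: hide the list inside closures and expose only a record of operations. The record type must be recursive because cons has to return another "list object". (The names ilist/make are mine, not from the lecture.)

```ocaml
(* Closure-based ADT: no type variables, just functions and records.
   The int list is captured in the closures' environment and is
   invisible to clients. *)
type ilist = {
  cons   : int -> ilist;
  decons : unit -> (int * ilist) option;
  len    : unit -> int;
}

let rec make (hidden : int list) : ilist = {
  cons   = (fun i -> make (i :: hidden));
  decons = (fun () ->
              match hidden with
              | []     -> None
              | h :: t -> Some (h, make t));
  len    = (fun () -> List.length hidden);
}

let mt = make []
let l  = (mt.cons 2).cons 1        (* represents [1; 2] *)
let n  = l.len ()                  (* 2 *)
```

A second implementation (say, one that logs every operation) would produce values of the very same type ilist, which is exactly the plus listed below.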
Plusses:
* It worked
* There were no type variables
* Different implementations have the same type
Minuses:
* A different interface (no big deal?)
* Inconvenient for strong binary (really (n>1)-ary) operations.
* Example: Have to write append in terms of cons and decons
* Example: Suppose we have
type t = {
cons : int -> t;
average : unit -> int;
append : t -> t
}
You _cannot_ implement this, so you end up exposing more than
you want.
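The first minus can be made concrete: with only the public operations in scope, append must tear one list apart element by element. A self-contained sketch (t here stands in for the hidden type; the names are hypothetical):

```ocaml
(* Writing append using only cons and decons, as a client must when
   the representation is hidden.  We cannot touch either list's
   internals, so we recurse through the public interface. *)
type t = int list                 (* stand-in for the hidden type *)
let cons (i : int) (l : t) : t = i :: l
let decons (l : t) = match l with [] -> None | h :: tl -> Some (h, tl)

let rec append (l1 : t) (l2 : t) : t =
  match decons l1 with
  | None         -> l2
  | Some (h, tl) -> cons h (append tl l2)

let r = append [1; 2] [3]         (* [1; 2; 3] *)
```

This works, but it is linear in cons/decons calls even when the hidden representation would support something faster, which is the cost of strong abstraction for n-ary operations.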
Okay, so we have encoded ADTs with System F and with closures, but
neither was perfect. Since ADTs are such an essential part of software
development, it's worth defining them directly.
It turns out _existential types_ are exactly what we need. <>
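Since the existential-types material is elided here, one concrete way to see the idea is OCaml's first-class modules (a later language feature, not part of the lecture): a value of type (module LIST) says "there exists some type t with these operations" without revealing t, so differently-represented libraries share one type and can be chosen at run-time.

```ocaml
(* Existential packaging via first-class modules (sketch). *)
module type LIST = sig
  type t                          (* the existentially hidden type *)
  val mt   : t
  val cons : int -> t -> t
  val len  : t -> int
end

let impl1 : (module LIST) =
  (module struct
     type t = int list
     let mt = []
     let cons i l = i :: l
     let len = List.length
   end)

let impl2 : (module LIST) =
  (module struct
     type t = int                 (* just track the length *)
     let mt = 0
     let cons _ n = n + 1
     let len n = n
   end)

(* Both packages have the same type, so a client can take either. *)
let use (l : (module LIST)) =
  let module L = (val l) in
  L.len (L.cons 1 (L.cons 2 L.mt))

let a = use impl1                 (* 2 *)
let b = use impl2                 (* 2 *)
```

Unpacking with (val l) is the elimination form: the client gets to use the operations but learns nothing about t.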
Polymorphism summary:
When we added simple types (ints, functions, records, sums, etc.),
we prevented getting stuck (e.g., reading a field from a function),
but we didn't have the ability to write generic code.
With subtyping, we could reuse code by passing it something more
specific than it needed (e.g., a record with more fields). This was
based on subsumption (implicit upcast) and extended to other types
like function types, nested records, etc.
Subtyping isn't really what you want for code like our list-library
functions, curry (see hw4), etc. Type variables and parametric
polymorphism are a better match. And type variables can enforce
strong abstractions, like in our file-handle example. But for ADTs,
existential types are a better description of what you want than
universal ("forall") types.
Neither subtyping nor polymorphism let you code up recursive data
structures. We need explicit support for them. Most languages do
so with named types that refer to themselves (perhaps indirectly).
We used mu and type variables instead and were able to reuse some of
our earlier work on subtyping (mu-types are subtypes and supertypes
of their unrolling) and type variables (defining unrolling in terms
of type substitution).
Meanwhile, back on "planet practical programmer," many are noticing
that our fancy types and functions haven't been able to do what IMP
could do in lecture 3: mutate heap locations. Even OCaml can do
that, so let's see how mutation interacts with functions and types.
Rather than make variables mutable (we could without much trouble),
we'll take the ML approach of distinguishing (mutable) references from
(immutable) variables:
ref e (* create a reference initialized to evaluation of e *)
!e (* return the current contents of the reference e evaluates to *)
e1 := e2 (* change the contents of the reference e1 evaluates to, to the
value e2 evaluates to *)
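In OCaml these three operations look like this (a minimal usage sketch):

```ocaml
(* The three reference operations. *)
let r  = ref 0          (* create: r : int ref, contents 0 *)
let v0 = !r             (* read the current contents: 0 *)
let () = r := !r + 1    (* write: contents are now 1 *)
let v1 = !r             (* 1 *)
```

Note that r itself is an immutable variable; only the cell it refers to changes.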
Operational semantics: We need some notion of reference (a pointer to
some memory) and a way to create a fresh reference (consult your
memory manager), even though actual references do not appear in source
programs. And we need a heap to hold references.
e ::= ... | ref e | !e | e1 := e2 | r
H ::= . | H,r->v
v ::= ... | r
All of our rules change because program-states now look like (H;e).
Examples:
(H;e1) -> (H';e1') (H;e) -> (H';e')
---------------------------- ----------------------- --------------------
(H;(\x. e) v) -> (H; e[v/x]) (H;e1 e2) -> (H';e1' e2) (H; v e) -> (H'; v e')
The "new rules" (omitting the rules for evaluating subterms of :=, !, and ref):
r not in dom(H)
------------------------ ------------------- -------------------------
(H;ref v) -> (H,r->v; r) (H; !r) -> (H; H(r)) (H;r:=v) -> (H[r->v]; ())
(Here H[r->v] updates r's existing binding, unlike the extension H,r->v.)
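These three rules can be modeled directly, representing the heap as an association list from reference names to values (all names below are mine, for illustration):

```ocaml
(* A tiny model of the heap rules. *)
type value = VInt of int | VRef of string | VUnit
type heap = (string * value) list

let counter = ref 0
let fresh () = incr counter; Printf.sprintf "r%d" !counter

(* (H; ref v) -> (H, r->v; r)   with r not in dom(H) *)
let alloc (h : heap) (v : value) : heap * value =
  let r = fresh () in ((r, v) :: h, VRef r)

(* (H; !r) -> (H; H(r)) *)
let read (h : heap) (r : string) : value = List.assoc r h

(* (H; r := v) -> (H[r->v]; ())   -- update, not extension *)
let write (h : heap) (r : string) (v : value) : heap * value =
  ((r, v) :: List.remove_assoc r h, VUnit)

let h0 : heap = []
let (h1, rv) = alloc h0 (VInt 1)
let r = (match rv with VRef r -> r | _ -> assert false)
let (h2, _) = write h1 r (VInt 2)
let got = read h2 r                  (* VInt 2 *)
```

The fresh-name counter plays the role of "consult your memory manager" from the prose above.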
Unlike IMP:
* We can have references hold references (interesting data structures)
* Dereference is explicit (via !) instead of implicit
These two things have everything to do with each other: With implicit
dereference, does "x := (ref (ref 1))" change the reference bound to x
to hold 1, a pointer to 1, or a pointer to a pointer to 1?
C/C++ just set the default the other way (implicit dereference unless
you use the address-of operator). Java has explicit dereference _and_
you must project a field or method at the same time.
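With explicit dereference the "ref of a ref" question has no ambiguity; each of these lines is a syntactically distinct program:

```ocaml
(* References holding references: every level of indirection is
   spelled out. *)
let x = ref (ref 0)        (* x : int ref ref *)
let () = x := ref 1        (* replace the inner reference itself *)
let () = !x := 2           (* mutate through both levels *)
let v = !(!x)              (* 2 *)
```

In a language with implicit dereference, all three of these intents would compete for the same surface syntax.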
References make order-of-evaluation matter, even for terminating programs.
Static semantics: Ref-types include the type of the contained value.
t ::= ... | t ref
To type-check source programs (in System F plus refs), we just need:
D,G |- e : t D,G |- e : t ref D,G |- e1 : t ref D,G |- e2 : t
--------------------- ---------------- ----------------------------------
D,G |- ref e : t ref D,G |- !e : t D,G |- e1 := e2 : unit
That is exactly what you implement in your compiler, but it's not
enough for proving type safety. After all, our Preservation Lemma
(evaluation preserves typing) will fail because any expression with a
reference r won't type-check.
So for the proof only, we define an extended system that type-checks
heaps and type-checks the expression e in (H;e) using the types of the
references in H. Key Weakening Lemma example: If e1 e2 typechecks and
e1 allocates some new references, then e2 still typechecks. Notice
that e1 does not change the type of any existing references e2
uses.
There isn't much more to references except how they interact (badly!)
with everything we've worked so hard to do...
For ADTs, see problem 3 on homework 4.
For subtyping, when should we allow t ref < t' ref?
Should we allow covariant subtyping (t < t')?
let x : {.l1:int, .l2:int} ref = ref {.l1=0, .l2=0} in
x := {.l1=0};
(!x).l2
Should we allow contravariant subtyping (t' < t)?
let x : {.l1:int} ref = ref {.l1=0} in
let y : {.l1:int, .l2:int} ref = x in
(!y).l2
Reference types are invariant!!!
t1 < t2 t2 < t1
------------------
t1 ref < t2 ref
Corollary: Deep subtyping on records had everything to do with fields
being immutable!
For type variables, we need to be at least as careful!
let x : forall `a. ((`a list) ref) = ref [] in
x [int] := 1::[];
match !(x [string]) with
  hd::tl -> hd ^ "gotcha!"
| _ -> ""
ML solution: No polymorphic references. You can give ref [] the type
int list ref or string list ref, but not forall `a. ((`a list) ref).
But that would be hard to implement because we don't know if abstract
types are mutable. So instead, ML implements something that happens to
be stronger: Given let x = e1 in e2, we can give x a polymorphic type
only if e1 is a variable or a value. This is the "value restriction"
from lecture 11 and now you know why it's there.
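You can watch the value restriction at work in OCaml (the weak type variable shown in the comment is what the top-level prints; id_list is a name I made up):

```ocaml
(* ref [] is an application, not a value, so x gets a weak
   monomorphic type, pinned down by its first use: *)
let x = ref []             (* : '_weak1 list ref *)
let () = x := [1]          (* now x : int list ref, forever *)

(* A function *is* a value, so eta-expanded code stays polymorphic: *)
let id_list : 'a. 'a list -> 'a list = fun l -> l
let a = id_list [1; 2]
let b = id_list ["x"]
```

Trying to use x at string list ref after the first line's assignment is a compile-time error, which is exactly the "gotcha" program above being rejected.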
There are other solutions:
A common one is to make sure the only polymorphic values are
functions (so if forall `a. t is a type then t is a forall type or a
function type). If functions are immutable (typically they are),
you're fine. But you can't give values like [] the type
forall `a. (`a list).
Segue to Dan's research you won't be tested on: A solution I advocate
for imperative languages (like C or Java + safe polymorphism) is to
type-check "left-expressions" (e1 in e1 := e2) (and
"right-expressions" (most everything else)) differently: It suffices
to forbid type instantiation (e [t]) as a left-expression. So
polymorphic references always hold polymorphic values. It's not
surprising this works. It's under-appreciated because ML likes to
type-check := as a "function" of type `a ref -> `a -> unit. But that
isn't really right, so they have to compensate with the value
restriction.
Warning: Over and over again, very smart people screw up languages
with mutation and type variables. Don't be next.
So far, we have seen that without care, mutation messes up evaluation
(order matters), subtyping (depth is unsound), and polymorphism
(polymorphic references are unsound). It also breaks termination
(STLC with references can diverge without fix):
let x : (int->int) ref = ref \y. y in
let f : (int->int) = \y. (!x) y in
x := f;
f 0
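This is Landin's knot, and it runs as advertised in OCaml. The sketch below adds a step counter and a cutoff, which are not in the original program, purely so it terminates; delete the guard and f 0 loops forever even though there is no let rec (or fix) anywhere:

```ocaml
(* Diverging without recursion, via a reference to a function. *)
let steps = ref 0
let x : (int -> int) ref = ref (fun y -> y)
let f : int -> int =
  fun y ->
    incr steps;
    if !steps > 5 then y      (* guard added here, not in the lecture *)
    else (!x) y
let () = x := f               (* tie the knot *)
let result = f 0
let taken = !steps            (* 6 calls before the guard fires *)
```

Each call to f looks up x, finds f again, and calls it: the recursion lives in the heap, not in the syntax, which is why STLC's termination proof breaks once references are added.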
And we don't get "free theorems" we might expect...
In System F with references (and the value restriction), this is a
free theorem: If f has type forall `a. int -> `a -> int, then f [t1] c
v1 and f [t2] c v2 have the same behavior.
This is false: If f has type forall `a. int ref -> `a ref -> int, then
f [t1] r1 r2 and f [t2] r1 r3 have the same behavior.
(It's a really clever trick I can show you in office hours.)
Next up: Some brief words on exceptions (they're easy) and then on to
objects.