Winter 2014

Robert R. Henry

Lecture Syllabus

Dates are approximate: I didn’t always finish a lecture’s material in one session, so topics sometimes bled into the next. I made this summary after the quarter, while reviewing and organizing my lecture notes for archival purposes.

06Jan2014: Compiler architecture: scan, parse, semantics; remove redundancies; resource allocation; instruction selection.  Separation of concerns.  Re-targetable compilers.  Different front ends coupled to different backends.  Role of regression tests.  Role of bootstrapping.

08Jan2014: More compiler architecture.  Differences between compile time (static) and run time (dynamic).  Role of libraries. Role of linking and loading.  Role of assembler.

10Jan2014: ??  Engineering precepts: use abstractions just complex enough for the job. Separation of responsibilities.

13Jan2014: Regular expressions to NDFA; NDFA to DFA via subset construction and fixpoint algorithm; FSM minimization.  Encoding the DFA as tables plus a fixed interpreter, or encoding the state machine directly in code with if and while statements, nesting, and gotos.
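
The subset construction can be sketched in a few lines of Python. This is an illustrative sketch (the NFA encoding and all names are mine, not from the lecture): the NFA is a dict from (state, symbol) to successor sets, with 'eps' marking epsilon moves, and each DFA state is an epsilon-closed frozenset of NFA states.

```python
def eps_closure(nfa, states):
    """All states reachable from `states` via epsilon moves alone."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in nfa.get((s, 'eps'), set()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def subset_construction(nfa, start, alphabet):
    """Build DFA transitions; the worklist loop is itself a fixpoint."""
    d0 = eps_closure(nfa, {start})
    seen, worklist, delta = {d0}, [d0], {}
    while worklist:
        d = worklist.pop()
        for a in alphabet:
            moved = set()
            for s in d:                       # all NFA moves on symbol a
                moved |= nfa.get((s, a), set())
            t = eps_closure(nfa, moved)
            delta[(d, a)] = t
            if t and t not in seen:           # new DFA state discovered
                seen.add(t)
                worklist.append(t)
    return d0, delta
```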

15Jan2014: RegExp as regular expressions on steroids, use in Java, Ruby, Python, etc. Rant about programmability issues of write-once, read-never complex regular expressions; better to use a CFG/BNF without embedded recursion to make regexps easier to program.

17Jan2014: Adding semantic analysis to lexers; Horner’s method for the semantics of integers; the 3rd rail of floating point conversion.  Context free grammars, ambiguity, leftmost vs rightmost derivation; rightmost derivation in reverse;  parsing vs derivations; parse trees; refactoring to avoid left recursion, as needed for LL(1) parsers.
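
Horner’s method as a lexer semantic action is small enough to show whole; a sketch (the function name is mine):

```python
def digits_to_int(lexeme):
    """Horner's method: fold the digits left to right as value*10 + digit."""
    value = 0
    for ch in lexeme:
        value = value * 10 + (ord(ch) - ord('0'))
    return value
```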

20Jan2014: ??

22Jan2014: Predictive parsing. Top Down parsing.  Formal LL(1) mechanisms.  The method of recursive descent. Railroad (aka syntax) diagrams. Pragmatically brushing away left recursion. Adding semantic analysis (AST building example) to recursive descent parsers.
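
A minimal recursive-descent sketch for a tiny expression grammar, with left recursion rewritten as iteration and an AST built as a semantic action; the grammar and the tuple-shaped AST nodes are illustrative, not the course’s.

```python
import re

def tokenize(src):
    return re.findall(r'\d+|[+*()]', src)

class Parser:
    def __init__(self, toks):
        self.toks, self.i = toks, 0
    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None
    def eat(self, t):
        assert self.peek() == t, f'expected {t}'
        self.i += 1
    def expr(self):                      # E -> T ('+' T)*
        node = self.term()
        while self.peek() == '+':
            self.eat('+')
            node = ('+', node, self.term())
        return node
    def term(self):                      # T -> F ('*' F)*
        node = self.factor()
        while self.peek() == '*':
            self.eat('*')
            node = ('*', node, self.factor())
        return node
    def factor(self):                    # F -> num | '(' E ')'
        if self.peek() == '(':
            self.eat('(')
            node = self.expr()
            self.eat(')')
            return node
        tok = self.peek()
        self.i += 1
        return int(tok)
```

Each nonterminal becomes one method; the `('+' T)*` loops are the pragmatic brushing-away of left recursion.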

24Jan2014: FIRST and FOLLOW construction using fixpoint algorithm; dealing with those troublesome irritating nullable productions and the complexities they cause.
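
The nullable/FIRST fixpoint can be sketched as follows, assuming the grammar is a dict mapping nonterminals to lists of right-hand sides (tuples of symbols; anything not a key is a terminal). The nullable productions are exactly where the care goes: a nullable symbol lets FIRST computation continue into the rest of the right-hand side.

```python
def first_sets(grammar):
    """Iterate nullable and FIRST together until nothing changes."""
    nullable = set()
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                           # fixpoint iteration
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                all_nullable = True
                for sym in rhs:
                    if sym in grammar:       # nonterminal: copy its FIRST
                        if not first[sym] <= first[nt]:
                            first[nt] |= first[sym]
                            changed = True
                        if sym not in nullable:
                            all_nullable = False
                            break            # can't see past this symbol
                    else:                    # terminal starts the rhs here
                        if sym not in first[nt]:
                            first[nt].add(sym)
                            changed = True
                        all_nullable = False
                        break
                if all_nullable and nt not in nullable:
                    nullable.add(nt)
                    changed = True
    return nullable, first
```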

27Jan2014: From predictive parser to bottom-up LR style parser, using an algorithm similar to that used in the NDFA to DFA construction.  Defer work for as long as possible.  More context means fewer chances for ambiguity.

27Jan2014: Using FIRST and FOLLOW sets to disambiguate conflicts to make an SLR parser. Resolution of reduce/reduce and shift/reduce conflicts in operator grammars, using precedence and associativity information.  Example LR parse of (a + a*a) showing stack, shifting and reducing. Handout of LR state machine, as produced by bison and graphviz (cool!).

29Jan2014: Encoding into the grammar: precedence; disambiguating if/then/else;  disambiguating the parenthesis language; differences between separators and terminators. Rant about JavaScript.

29Jan2014: Expression trees to postfix (LRN) representation.  K-ary operators. Operands on presumed infinite stack.
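
Flattening an expression tree to postfix (LRN) order is a one-function sketch; tree nodes here are tuples like ('+', left, right) or plain numbers (my encoding, not the lecture’s), and the child loop handles k-ary operators for free.

```python
def to_postfix(node, out=None):
    """Emit operands left to right, operator last (LRN order)."""
    out = [] if out is None else out
    if isinstance(node, tuple):         # operator node: children first
        op, *kids = node
        for k in kids:                  # works for k-ary operators too
            to_postfix(k, out)
        out.append(op)
    else:
        out.append(node)                # leaf operand: push on the stack
    return out
```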

31Jan2014: Java byte codes and VM model. 1-pass generation of stack machine code using implicit postfix walk of expression tree, coupled with recursive descent parser.  Unix memory model.  Why stacks grow down on Unix.  PDP-11 addressing modes. x86_64 register conventions, requirements and overlaps.

03Feb2014: General model for procedure interface.  Creation and destruction of data and control evaluation environments. Stack disciplines for LIFO.  Calling conventions and caller vs callee save registers. x86_64 calling conventions and prolog/epilog code to use. Need to support perturbations of simplicity: varargs; structure valued args and returns; setjmp/longjmp; stack alignment; static nesting; funargs; escapes; closures; threading; garbage collection roots; debugging; reflection; co-routines; generators; non contiguous stacks.

05Feb2014: Control flow.  Backpatching.  Code for lazy evaluation of && and ||. Application of DeMorgan’s theorem.  Branch destination manipulations.
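
A small backpatching sketch for lazy &&, assuming a flat instruction list where a conditional jump is emitted with a placeholder target and patched once the destination address is known. The opcodes are made up for illustration.

```python
code = []

def emit(op, arg=None):
    """Append an instruction; return its index for later patching."""
    code.append([op, arg])
    return len(code) - 1

def backpatch(holes, target):
    """Fill in the jump destinations once the label address is known."""
    for i in holes:
        code[i][1] = target

# Lazy 'a && b': if a is false, skip b entirely (short-circuit).
emit('load', 'a')
false_holes = [emit('jfalse', None)]     # target unknown yet
emit('load', 'b')
false_holes.append(emit('jfalse', None))
emit('push', True)
end_hole = emit('jump', None)            # skip over the false arm
false_label = len(code)
emit('push', False)
backpatch(false_holes, false_label)
backpatch([end_hole], len(code))
```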

07Feb2014: Midterm

10Feb2014: Supercomputers and the Tera/Cray MTA. Thread parallelism in hardware; VLIW instruction set; hardware context switches once per clock tick; latency tolerant memory system; tagged memory; no caches; memory lock bits. Trade offs in hardware and software to achieve easy parallel programming. Manufacturing and service issues with heavy, high current, hot, densely packed, toxic electronics. Fun of building a complete computer system.

12Feb2014: Uses of types to encode rules for operations.  Representation of types, and association with identifiers.  Consumption of type information to check things that can’t be done in CFGs, such as declaration rules, expression types, etc.

14Feb2014: Block structure and nested symbol tables. Type graph.  Representations of primitive and composite types in type graph. Collection of type information using several sweeps over AST, as required to solve forward declarations and get all type info in place before doing type checking.
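
Block-structured symbol tables can be sketched as a chain of scopes, with lookup walking outward from the innermost block; the class and method names are illustrative.

```python
class Scope:
    """One block's symbol table, chained to its enclosing block."""
    def __init__(self, parent=None):
        self.bindings = {}
        self.parent = parent
    def declare(self, name, info):
        self.bindings[name] = info          # shadows any outer binding
    def lookup(self, name):
        scope = self
        while scope is not None:            # walk outward through blocks
            if name in scope.bindings:
                return scope.bindings[name]
            scope = scope.parent
        raise KeyError(name)                # undeclared identifier
```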

17Feb2014: ??

19Feb2014: Type checking as a form of compile time constant folding.  Other type-like thingies: constness, tainting, locking, volatility, assignability, ranges.  Type info for code generation: sizeof, alignof, offsetof.  Type stack used during type checking, by analogy with the expression evaluation stack used at runtime. Example of type checking scenarios in MJ. Implications of imprecise knowledge about the runtime type of “this”, and the need for vtbls as a compiler-produced residual that is consulted at runtime to determine overridden functions and implement basic object oriented functionality.
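
The vtbl idea can be sketched with dicts standing in for the compiler-produced tables: each class carries a table from method name to implementation, and a call on “this” consults the table of the object’s runtime class, so overrides win. Purely illustrative, not the MJ implementation.

```python
# Each "class" is a vtbl; the derived table copies the base and overrides.
base_vtbl = {'speak': lambda self: 'base'}
derived_vtbl = dict(base_vtbl, speak=lambda self: 'derived')

def make_obj(vtbl):
    """An object is just data plus a pointer to its class's vtbl."""
    return {'vtbl': vtbl}

def call(obj, method):
    """Dynamic dispatch: look up the method in the runtime vtbl."""
    return obj['vtbl'][method](obj)
```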

21Feb2014: Example of vtbls and method overriding. Activation records on stack on x86_64. MJ function prolog and epilog. Multidimensional C or Fortran style arrays. Java arrays.  Pascal arrays with non-0 lower bound.

24Feb2014: Storage allocation disciplines. Automatics[sic] on stack.  Variable length arrays in C.  alloca(). Adding a layer of indirection to solve one’s problems.

24Feb2014: Pascal style nested procedures and static links. Dynamic scoping, and need to move symbol tables to runtime. Implementation of setjmp/longjmp, violation of C++ destructor requirements. Escapes and separation of control lifetime from data lifetime.

24Feb2014: Modern try/catch/finally and exception throwing.  Mapping pc ranges to scope of try/catch/finally. Reflective stack unwinders (program looking at itself). Challenges of keeping VM model and JIT code in sync to support reflection.

26Feb2014: Profiling.  Statement counting, naive and some improvements thereof. Uses of profile information in feedback loop to guide optimizations in the future.  Sampling profiling.  Call graph profiling and gprof. Profiling web page construction, and services used to make web pages; discussion about New Relic’s mechanisms for profiling and overall architecture.

28Feb2014:  Calling Conventions: CBV (call by value), CBR (reference), CBCICO (copy in, copy out); CBN (name) ala Algol, but now improved using closures or Ruby style blocks; Jensen’s device to make a concise representation of summation using CBN.
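
Jensen’s device, simulated with closures as suggested above: the summation re-evaluates the passed expression each iteration with i rebound, so the same call computes different sums depending on the thunk. The encoding via a shared environment dict is mine.

```python
def jensen_sum(lo, hi, expr, env):
    """Sum expr() for env['i'] = lo..hi; expr is a thunk (call-by-name)."""
    total = 0
    for v in range(lo, hi + 1):
        env['i'] = v             # the 'name' i is shared with the thunk
        total += expr()          # re-evaluated on every iteration
    return total

env = {}
# Sum of i*i for i = 1..3: the thunk re-reads i through the shared env.
result = jensen_sum(1, 3, lambda: env['i'] * env['i'], env)
```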

01Mar2014: Optimization[sic] overview. “An Inefficient Program” example. Examples of transformations: CSE; simplification; constant folding; dead code elimination; tail merging; exposing arithmetic; loop jamming and unrolling; induction variables; peephole optims; clever instructions and addressing modes; global alias analysis; references and aliasing; inline expansion; tail recursion; memoization.
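
Constant folding, one of the transformations listed above, can be sketched over tuple-shaped expression trees (my encoding): when both children of an arithmetic node fold to numbers, evaluate the operation at compile time.

```python
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def fold(node):
    """Bottom-up constant folding over ('op', left, right) tuples."""
    if not isinstance(node, tuple):
        return node                      # leaf: number or variable name
    op, l, r = node
    l, r = fold(l), fold(r)              # fold children first
    if isinstance(l, int) and isinstance(r, int) and op in OPS:
        return OPS[op](l, r)             # both constant: do it now
    return (op, l, r)
```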

03Mar2014: Peephole optimization: combine control or data flow adjacent instructions together.  Code generation. Instruction and addressing mode specification using a tree grammar.  Tree pattern matching.  Dynamic programming to determine the minimal cost pattern match (“parse” or “cover”) using a context-independent objective function, such as instruction count or instruction cycles.  Bottom Up Rewrite Systems (BURS).  3 pass algorithm (BU, TD then BU) to select and generate optimal cover.

05Mar2014: Data flow equations.  Facts about a basic block (what is created, what is destroyed), combined along flow paths going forward or backwards, doing union or intersection, as required by the problem to solve.  Another fixpoint algorithm: iterate sets of facts at each block until there’s no change.  Solution of “live variables” and “available expressions” using data flow equations.
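
The live-variables problem is a compact instance of this fixpoint scheme: facts flow backwards and merge by union. A sketch, assuming each block is given by its use set, def set, and successor list (my encoding).

```python
def liveness(blocks):
    """blocks: name -> (use, defs, succs). Returns live-in set per block."""
    live_in = {b: set() for b in blocks}
    changed = True
    while changed:                            # iterate until no change
        changed = False
        for b, (use, defs, succs) in blocks.items():
            live_out = set()
            for s in succs:                   # out[b] = union of in[succ]
                live_out |= live_in[s]
            new_in = use | (live_out - defs)  # in[b] = use + (out - def)
            if new_in != live_in[b]:
                live_in[b] = new_in
                changed = True
    return live_in
```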

07Mar2014: Static Single Assignment (SSA) form: each variable assigned to only once.  Merging of different values encoded in phi functions at entrance to block. Notion of dominance in the control flow graph; notion of dominance frontier where dominance ends, which is where phi functions need to be inserted to get smallest number of insertions. Animation example. Conversion of phi functions to assignments in the predecessor block.
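
Dominance can be computed with the same iterative fixpoint style as the data-flow equations: d dominates n if every path from the entry to n goes through d. A sketch over a CFG given as a dict from node to successor list (illustrative; a real SSA builder would go on to derive dominance frontiers from this for phi placement).

```python
def dominators(cfg, entry):
    """Iterative dominator sets: dom[n] = {n} + intersection over preds."""
    nodes = set(cfg)
    preds = {n: [p for p in cfg if n in cfg[p]] for n in nodes}
    dom = {n: set(nodes) for n in nodes}      # start optimistic: everything
    dom[entry] = {entry}
    changed = True
    while changed:                            # shrink to a fixpoint
        changed = False
        for n in nodes - {entry}:
            if not preds[n]:
                continue                      # unreachable node
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom
```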

07Mar2014: Construction of a conflict graph to encode when 2 values must live simultaneously.  Color the graph, one color for each available register.  Heuristics for choosing nodes and ordering nodes.  Preference information to handle realities of bizarre register files, instructions and calling conventions.
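
A greedy coloring sketch: visit nodes in some order and give each value the lowest register not already used by a conflicting neighbor. Real allocators layer node-ordering heuristics, spilling, and preference information on top of this core; names here are illustrative.

```python
def color(conflicts, k):
    """conflicts: node -> set of conflicting nodes. Returns node -> register.
    Raises StopIteration when k registers don't suffice (i.e., must spill)."""
    assignment = {}
    for node in sorted(conflicts):           # simple deterministic order
        taken = {assignment[n] for n in conflicts[node] if n in assignment}
        assignment[node] = next(r for r in range(k) if r not in taken)
    return assignment
```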

10Mar2014: Static analysis: doing whole program analysis to statically analyze “all paths” and “any path” conditions to look for possibilities of runtime errors at compile time, such as null pointer exceptions or out-of-range conditions.  Analyzers look for programming anti-patterns, such as resource allocation/deallocation mismatches.  Notion of a “source annotation language” used to provide more type information than is typically encoded in the language’s type system. Static analysis got a big boost from Y2K and the Ariane-5 1st launch catastrophe.

Dynamic analysis to check for conditions at run time.  Architecture of valgrind, as a compiler from object code to object code, to collect use information (as data tags) for every byte in the program.

12Mar2014: Garbage collection: mark/sweep; semi space; generational.  Trade off speed and flexibility against observed statistical behavior of the lifetime of objects (old get older, young point to old, clustered allocation).  Notion of root sets and transitive closure over data.
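
The mark phase of mark/sweep is exactly the transitive closure over the object graph from the root set; anything unmarked afterwards is garbage for the sweep. A sketch with the heap as a dict from object id to the list of ids it references (my encoding).

```python
def mark(heap, roots):
    """Transitive closure from the root set over outgoing references."""
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj in marked:
            continue                      # already visited (cycles ok)
        marked.add(obj)
        stack.extend(heap.get(obj, []))   # follow outgoing references
    return marked

def sweep(heap, marked):
    """Everything not reached from the roots is garbage."""
    return [obj for obj in heap if obj not in marked]
```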

14Mar2014: FFTW (Fastest Fourier Transform in the West): a domain specific compiler mapping a single integer N to C code. It manages intense bookkeeping, breaks problems down into smaller problems, and is full of domain specific knowledge used to strategic and tactical advantage.  Blows away the competition.

14Mar2014: Facebook’s PHP system “HipHop”: a compilation and runtime system for a dynamically typed scripting language, trading off interpretation against just-in-time compilation that produces blocks of type-safe code guarded by type guards.