Report on Assigned Reading - Tools

By - Anju Gupta

University of Washington, CSE584 – Software Engineering


Tools are programs that help programmers understand existing programs, understand the differences between several versions of a program, and create new programs by combining pieces of old programs. They thus make program development and maintenance easier, faster, and less error prone.

The Use of Program Dependence Graphs in Software Engineering, by Susan Horwitz and Thomas Reps

This paper describes a language-independent program representation, the program dependence graph, and discusses how program dependence graphs, together with operations such as program slicing, can form the basis for powerful programming tools. The problems such tools should address are categorized into three classes: slicing problems (one program), differencing problems (two programs), and integration problems (three or more programs). For each set of problems, the case of single-procedure programs is addressed first, and then the solutions are extended to multiple-procedure programs. The paper then discusses in detail the program dependence graph (for programs with no procedures or functions), the system dependence graph (for programs with multiple procedures and functions), and the program representation graph (a variant of the program dependence graph). Forward and backward slicing of programs are discussed. It is possible to devise algorithms that compute a safe approximation to the semantic differences between two versions of a program; techniques for solving three differencing problems are presented, first for single-procedure programs and then for multiple-procedure programs. The paper also discusses techniques for solving the two integration problems, covering program integration for single-procedure and for multiple-procedure programs. The paper concludes by describing the implementation of the slicing, differencing, and integration techniques in a prototype system called the Wisconsin Program Integration System.
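The core operation underlying the paper's tools, backward slicing, amounts to reachability over dependence edges. The following is a minimal sketch of that idea on a hypothetical toy dependence graph (not the paper's actual algorithm or data structures); node names and the example program are invented for illustration:

```python
# Toy program dependence graph as an adjacency map: each node maps to the
# nodes it depends on (data or control dependences). A backward slice from
# a slicing criterion is everything reachable along those edges.

def backward_slice(pdg, criterion):
    """Return the set of nodes the criterion depends on, including itself."""
    visited = set()
    worklist = [criterion]
    while worklist:
        node = worklist.pop()
        if node in visited:
            continue
        visited.add(node)
        worklist.extend(pdg.get(node, []))
    return visited

# Dependences for the toy program:  x = 1;  y = 2;  z = x + 1;  print(z)
pdg = {
    "print(z)": ["z = x + 1"],
    "z = x + 1": ["x = 1"],
    "y = 2": [],
    "x = 1": [],
}
print(sorted(backward_slice(pdg, "print(z)")))
```

The statement `y = 2` falls outside the slice because nothing in the criterion depends on it, which is exactly how slicing isolates the statements relevant to a computation.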

I found the paper easy to follow, and the problems that arise with one, two, or three or more programs were easy to understand. At my workplace I have generally worked with two or more programs, and I have often come across problems with versioning between two programs. The tool used by our team is Microsoft's Visual SourceSafe.

Dynamically Discovering Likely Program Invariants to Support Program Evolution, by Michael Ernst, Jake Cockrell, William Griswold, and David Notkin

Program invariants can protect a programmer from making changes that inadvertently violate assumptions upon which the program's correct behavior depends. Instead of expecting programmers to fully annotate code with invariants, an alternative is to automatically infer invariants from the program itself. This paper reports two results from research on dynamic techniques for discovering invariants from execution traces. The first result is a set of techniques, and an implementation, for discovering invariants from execution traces. The basic approach consists of instrumenting the source program to trace the variables of interest, running the instrumented program over a set of test cases, and inferring invariants over both the instrumented variables and derived variables that are not manifest in the original program. The second result is the application of the engine to two sets of target programs. On the first set of programs, the invariant detector successfully reports all the formally specified preconditions, postconditions, and loop invariants. The second set of programs, however, is not annotated with invariants, nor is there any indication that invariants were used in their construction. The derived invariants can help a programmer evolve a program that contains no explicitly stated invariants; this is demonstrated by evolving a program from the Siemens suite. Ways to accelerate invariant inference and to manage the number of invariants reported are suggested, and related work on dynamic and static inference is discussed. The lessons learned from the techniques developed are that the approach is fast when applied to larger programs, that the dynamically inferred invariants helped in understanding the programs, and that the approach should be applicable to the evolution of larger systems.
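The basic loop described above, instrument, run over test cases, then report properties that held on every trace, can be sketched in miniature. This is a hypothetical illustration, not the paper's engine; the traced function, variable names, and the small set of candidate invariants (constancy and bounds) are invented for the example:

```python
# Miniature trace-based invariant detection: collect variable values at one
# program point over many runs, then report candidate invariants that held
# on every observed trace (here: constancy, and lower/upper bounds).

def infer_invariants(traces):
    """traces: list of dicts mapping variable name -> value at the same point."""
    invariants = []
    for var in traces[0]:
        values = [t[var] for t in traces]
        if all(v == values[0] for v in values):
            invariants.append(f"{var} == {values[0]}")
        else:
            invariants.append(f"{var} >= {min(values)}")
            invariants.append(f"{var} <= {max(values)}")
    return invariants

# Values recorded at the exit of a routine that sums 1..n (illustrative).
traces = [{"n": 5, "total": 15}, {"n": 3, "total": 6}, {"n": 7, "total": 28}]
print(infer_invariants(traces))
```

As in the paper, the reported properties are only *likely* invariants: they are guaranteed to hold on the observed test cases, not on all possible executions.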

It was interesting to read how invariants were extracted from execution traces. The paper was very informative on how these techniques would help developers understand the code better.

Lackwit: A Program Understanding Tool Based on Type Inference, by Robert O'Callahan and Daniel Jackson

Lackwit is a tool built to demonstrate the feasibility of applying type-inference analyses to C programs for program-understanding tasks, and to experiment with the kind and quality of information such analyses make available. Type inference was chosen because it is fully automatic and handles complex features of rich source languages like C. Lackwit was used to analyze Morphin, a robot vehicle control program. The tool helped determine the program's data structures, highlighted some representation exposures, helped locate performance bottlenecks in a database, and helped characterize the sizes of query results. Lackwit also reports global and local variables that are never read, and when checking for memory leaks it identifies problem areas. Currently, Lackwit consists of C and C++ front ends, a sequential binary-file database, and a C-based query engine, and it uses the "dot" graph-drawing tool for graphs. Future work on Lackwit would be in the areas of improving its performance, improving the query interface, and presenting data better.
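The key idea behind using type inference for understanding is that values which flow into one another must share a representation, even when their declared C types (say, `int` everywhere) hide that relationship. A minimal sketch of that unification step, using an invented set of assignments and a plain union-find rather than Lackwit's actual analysis:

```python
# Variables that flow into each other are unified into one "representation
# class", so a query like "what could share a representation with b?" can be
# answered even when the declared types give no distinction.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Assignments observed in the (hypothetical) source: a = b; c = a; d = e
flows = [("a", "b"), ("c", "a"), ("d", "e")]
uf = UnionFind()
for lhs, rhs in flows:
    uf.union(lhs, rhs)

# Every variable in b's flow class; d and e form a separate class.
same_as_b = sorted(v for v in "abcde" if uf.find(v) == uf.find("b"))
print(same_as_b)
```

Answering such queries without running the program is what makes the approach attractive for exploring an unfamiliar codebase.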

I found Lackwit very useful, and the tool's results were very impressive. The tool I use to understand the structure of database applications and the relationships in table designs is ERwin. It has helped me not only to visually understand the database schema but also to implement it more easily.

Static Detection of Dynamic Memory Errors by David Evans

This paper discusses an efficient static checking tool, LCLint. The tool has been extended to detect a broad class of important errors, including misuses of null pointers, failures to allocate or deallocate memory, uses of undefined or deallocated storage, and dangerous or unexpected aliasing. Annotations are used to make assumptions about function interfaces, variables, and types explicit. The constraints necessary to satisfy these assumptions are checked at compile time; places where the constraints are violated are anomalies in the code, which typically indicate bugs in the program. In an analysis of buggy code, LCLint checks that the function implementation satisfies the external constraints, checks that all derivable storage is defined, and produces an error reporting an incomplete-definition anomaly. A toy employee database program is described to demonstrate how annotations can be added to an existing program, improving its documentation and maintainability and detecting errors in the process. The paper concludes that combining static checking using annotations with run-time checking and testing can help produce reliable code with less effort than traditional methods.
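The annotation-and-check cycle described above can be illustrated with a toy checker. This is a hypothetical miniature, not LCLint itself: the function name, the `notnull` annotation table, and the abstract values are all invented to show how declared interface assumptions turn call-site violations into compile-time anomaly reports:

```python
# Each function records which parameters must not be null (in LCLint this
# is written as a /*@notnull@*/ annotation in the declaration). A call site
# passing a possibly-null value to such a parameter is flagged as an anomaly.

annotations = {"strlen_like": {"s": "notnull"}}

def check_call(func, args):
    """args: dict of parameter name -> abstract value ('null' or 'nonnull')."""
    warnings = []
    for param, value in args.items():
        if annotations.get(func, {}).get(param) == "notnull" and value == "null":
            warnings.append(f"{func}: possibly null value passed for '{param}'")
    return warnings

print(check_call("strlen_like", {"s": "null"}))     # anomaly reported
print(check_call("strlen_like", {"s": "nonnull"}))  # no warnings
```

As in the paper, the annotation does double duty: it documents the interface assumption for human readers and gives the checker something concrete to enforce.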

LCLint is definitely a better approach than traditional methods, though I think a lot of work still needs to be done to improve the tool.