590N: Empirical Studies of Software Engineering
Location: EE1 042
Time: Tue 1:30 - 2:20 pm
This quarter, we will focus on empirical studies of software engineering methodologies and tools. Evaluations that involve human subjects pose a number of challenges. We plan to address the following questions this quarter: “Which evaluation method should I choose to evaluate my software engineering tools?”, “How should I form research questions to evaluate software engineering tools or methodologies?”, and “How should we design case studies and experiments when we have little control over human subjects?”
In addition, we will examine empirical studies conducted in the initial stages of research projects to gather requirements and verify the hypotheses on which the projects were based.
A case study is a good research strategy when an investigator asks “how / why” research questions and focuses on contemporary events, but does not have control over behavioral events. – Robert K. Yin [Case Study Research – Design and Methods, 2nd Edition, 1994]
· Week 1 (Mar 30th) – (Miryung Kim)
Design Pattern Rationale Graphs: Linking Design to Source
E. Baniassad, G. Murphy, C. Schwanninger
This paper introduces the Design Pattern Rationale Graph (DPRG) and presents case studies to demonstrate that a DPRG can help a developer identify design goals in a pattern and can improve a developer’s confidence about how those goals are realized in the code base.
- Three different claims about DPRG (Confidence/ Completeness/ Lightweightness)
- A different case study for each claim
- Observation of program investigation activities
- Key words: Design rationale, design goals, association of source code with design goals, instantiation of design patterns
· Week 2 (Apr 6th) – (Andrew Petersen)
Exploiting the Map Metaphor in a Tool for Software Evolution
W. G. Griswold, J. J. Yuan, Y. Kato
This paper describes the design of the Aspect Browser and discusses a case study of removing a feature from a 500,000-line program written in Fortran and C. The authors used pair programming and a talk-aloud strategy in their case study.
A structured demonstration is a hybrid evaluation technique that combines elements from experiments, case studies, and technology demonstrations.
· Week 3 (Apr 13th) – (Vibha Sazawal)
A Structured Demonstration of Program Comprehension Tools
Susan Elliott Sim, Margaret-Anne D. Storey
This paper describes a structured tool demonstration of program understanding tools. The demonstration was held as part of a workshop at CASCON ’99 and was followed by a workshop panel in which the development teams and the observers presented their results and findings from the experience.
· Week 5 (Apr 27th) - (Tao Xie)
WYSIWYT Testing in the Spreadsheet Paradigm: An Empirical Evaluation
Karen Rothermel, Margaret Burnett et al.
This paper presents empirical data on the effectiveness of an end-user testing methodology.
- key words: End user programming and testing, spreadsheet programs
- Controlled laboratory experiment with background questionnaire
- A tutorial was given to both the control group and the experimental group, but its content differed between the groups.
- The subjects were divided into two groups: the experimental group used Forms/3 with WYSIWYT; the control group used Forms/3 without testing support.
· Week 6 (May 4th) – Observational study in a laboratory setting (Will Portnoy and Evan)
How Software Tools Organize Programmer Behavior During the Task of Data Encapsulation.
R. W. Bowdidge, W. G. Griswold
Empirical Software Engineering 1997
This paper presents an exploratory study demonstrating how the Star diagram organized and affected programmers’ behavior when encapsulating a data structure. Subjects were divided into three teams, and each team used one of three environments: standard UNIX tools, a restructuring tool with a textual view of the source code, or a restructuring tool with the Star diagram view.
· Week 7 (May 11th) - (Tammy VanDeGrift)
Invariant Inference for Static Checking: An Empirical Evaluation
J.W. Nimmer, M. Ernst
This paper evaluates the effectiveness of two techniques for assisting the annotation process: inference via static analysis and inference via dynamic invariant detection. The authors present a quantitative and qualitative evaluation of the two invariant inference methods on a program verification task over three small programs.
Comparison (Case Study vs. Experiments)
· Week 8 (May 18th) – (Craig & Evan)
Evaluating Emerging Software Development Technologies: Lessons Learned from Assessing Aspect Oriented Programming
G. C. Murphy, R. J. Walker, E. Baniassad
UBC CS TR-98-10
This paper describes the lessons the authors learned in conducting case studies and experiments to evaluate AOP-related tools.
Inquisitive Study (Survey/ Interviews/ Questionnaires)
Focus Group – Gathering Requirements
· Week 9 (May 25th) – (Charles Reis & Katarzyna Wilamowska)
Studying Work Practices to Assist Tool Design in Software Engineering
J. Singer, T. Lethbridge
IWPC 1998
This paper presents work practice data on the daily activities of software engineers. Four separate studies are presented. The paper also includes requirements for a tool the authors developed as a result of the studies.
Classic Empirical Studies
· Week 10 (Jun 1st) – (Keunwoo Lee)
Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies
Adam Porter and Philip Johnson
IEEE Transactions on Software Engineering, March 1997