Readings in Human-Computer Interaction: Toward the Year 2000

 

Copyright © 1994, Ronald Baecker, Jonathan Grudin, William Buxton, Saul Greenberg. All rights reserved.

 

To be published by Morgan Kaufmann Publishers

 

NOT TO BE COPIED WITHOUT THE EXPRESS PERMISSION OF THE AUTHORS


Chapter 1

Historical and Intellectual Perspective

When thinking about technology, we tend to anticipate the future and forget the past. Consider today's graphical user interfaces (GUIs). We know that they are typically patterned after the two innovative systems to be discussed in Case B: the Xerox Star, introduced in 1981, and the Apple Macintosh, introduced in 1984. Yet the groundwork for these systems was laid in the 1960s and 1970s.

 

In this chapter, we provide an historical perspective, introducing some seminal ideas, contributors, and systems. However, our objective is not a detailed, historically complete recitation of names, places, and dates. Our emphasis is intellectual, with a focus on the history of ideas. Today's new systems can be traced to the excitement generated by yesterday's imaginative speculations. Today's speculative ideas may be the seeds of tomorrow's systems.

The MEMEX

Although the modern digital computer is grounded in ideas developed in the 18th and 19th centuries, important concepts and the technology required to implement the ideas only became available in the 1930s and the 1940s. The initial motivation was to speed the routine and laborious calculations required for ballistic (e.g., artillery) and atomic energy computations.

 

Perhaps the first person to see beyond these uses and conceive of the computer as a fundamental tool for transforming human thought and human creative activity was Vannevar Bush (1945). In his classic paper, "As We May Think," he described the increasing difficulty of managing and disseminating the results of research (pp. 101-2):

 

"Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month's efforts could be produced on call. Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

The difficulty seems to be not so much that we publish unduly in view of the extent and variety of present-day interests but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships."

 

To solve this problem, he sketched the outlines of a device he called the MEMEX (pp. 106-8):

 

"Consider a future device for individual use which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, "MEMEX" will do. A MEMEX is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard and sets of buttons and levers. Otherwise it looks like an ordinary desk.

In one end is the stored material. The matter of bulk is well taken care of by improved microfilm. Only a small part of the interior of the MEMEX is devoted to storage, the rest to mechanism. Yet if the user inserted 5,000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely.

Most of the MEMEX contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path. And there is provision for direct entry. On the top of the MEMEX is a transparent platen. On this are placed longhand notes, photographs, memoranda, all sorts of things. When one is in place, the depression of a lever causes it to be photographed onto the next blank space on a section of the MEMEX film, dry photography being employed.

There is, of course, provision for consultation of the record by the usual scheme of indexing. If the user wishes to consult a certain book, he taps its code on the keyboard, and the title page of the book promptly appears before him, projected onto one of his viewing positions. Frequently used codes are mnemonic, so that he seldom consults his code book; but when he does, a single tap of a key projects it for his use. Moreover, he has supplemental levers. On deflecting one of these levers to the right he runs through the book before him, each page in turn being projected at a speed which just allows a recognizing glance at each. If he deflects it further to the right, he steps through the book ten pages at a time; still further at one hundred pages at a time. Deflection to the left gives him the same control backward.

A special button transfers him immediately to the first page of the index. Any given book of his library can thus be called up and consulted with far greater facility than if it were taken from a shelf. As he has several projection positions, he can leave one item in position while he calls up another. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

All this is conventional, except for the projection forward of present day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the MEMEX. The process of tying two items together is the important thing.

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item.

Thereafter, at any time when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.....

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped in the MEMEX and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds and side trails to their physical and chemical behavior.

The historian, with a vast chronological account of a people, parallels it with a skip trail which stops only on the salient items, and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world's record, but for his disciples the entire scaffolding by which they were erected."

 

Bush's vision was remarkable. He foresaw the application of machines to information storage and retrieval, the value of associative indexing, and the multi-media nature of future computer use. He predicted the development of "a machine which types when talked to" and speculated on the possibility of some day establishing a path from the written word to the brain that is "more direct" than the senses—tactile, oral, and visual. With CD technology making a fully digital MEMEX possible and computer networks allowing us to build a distributed MEMEX, Bush's dream may become a reality.
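

 

Bush's trail mechanism maps naturally onto modern data structures. The fragment below is our minimal sketch in Python, not anything Bush specified: items are stored under codes, tied together pairwise into named trails, and reviewed in sequence, echoing the passages quoted above.

    # A minimal sketch (ours, not Bush's) of associative trails:
    # items stored under codes, tied together under a named trail.
    class Memex:
        def __init__(self):
            self.items = {}    # code -> content (a page, note, photograph, ...)
            self.trails = {}   # trail name -> ordered list of item codes

        def add_item(self, code, content):
            self.items[code] = content

        def join(self, trail, code_a, code_b):
            # "The process of tying two items together is the important thing."
            t = self.trails.setdefault(trail, [])
            for code in (code_a, code_b):
                if not t or t[-1] != code:
                    t.append(code)

        def review(self, trail):
            # Step through a trail as though turning the pages of a book.
            for code in self.trails.get(trail, []):
                yield code, self.items[code]

    m = Memex()
    m.add_item("gen1", "Mendel's 1866 paper")
    m.add_item("gen2", "a later commentary on the laws of genetics")
    m.join("genetics", "gen1", "gen2")
    for code, content in m.review("genetics"):
        print(code, content)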

Man-Computer Symbiosis

In the 1950s, others began to see the computer's potential use as a facilitator of aspects of human creativity and problem solving. Between 1960 and 1965 there was an outpouring of ideas and prototype systems. Much of what has been achieved since then has been to implement and expand these ideas, to convert prototypes to products. What was special about the early 1960s? One factor was the arrival of transistor-based "second-generation" computers in 1958. The use of vacuum-tube computers had been sharply limited by their size, speed, power requirements, and maintenance costs. With transistors, the constraints on what could be imagined melted away, a process that accelerated in the mid-1960s as "third-generation" integrated circuit computers began to appear.

 

One of the most compelling new visions of the computer's potential was that of J.C.R. Licklider, who conceived (1960) of a synergistic coupling of human and machine capabilities (p. 4):

 

"The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: The tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative "living together in intimate association, or even close union, of two dissimilar organisms" is called symbiosis.

"Man-computer symbiosis" is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."

 

Licklider asserted that the then-current generation of computers failed to facilitate this symbiosis (p. 5):

 

"Present-day computers are designed primarily to solve preformulated problems.....

However, many problems that can be thought through in advance are very difficult to think through in advance. They would be easier to solve, and they would be solved faster, through an intuitively guided trial-and-error procedure in which the computer cooperates, turning up flaws in the reasoning or revealing unexpected turns in the solution. Other problems simply cannot be formulated without computing-machine aid. Poincare anticipated the frustration of an important group of would-be computer users when he said, "The question is not ‘What is the answer?' The question is ‘What is the question?'" One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems.

The other main aim is closely related. It is to bring computing machines effectively into processes of thinking that must go on in "real time," time that moves too fast to permit using computers in conventional ways. Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun. To think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own will require much tighter coupling between man and machine than is suggested by the example and than is possible today."

 

Licklider suggested how computers could improve and facilitate thinking and problem-solving (pp. 5-6):

 

"..... In the spring and summer of 1957, therefore, I tried to keep track of what one moderately technical person actually did during the hours he regarded as devoted to work. Although I was aware of the inadequacy of the sampling, I served as my own subject.

It soon became apparent that the main thing I did was to keep records.....

About 85 per cent of my "thinking" time was spent getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it. Hours went into the plotting of graphs and other hours into instructing an assistant how to plot. When the graphs were finished, the relations were obvious at once, but the plotting had to be done in order to make them so. At one point, it was necessary to compare six experimental determinations of a function relating speech intelligibility to speech-to-noise ratio. No two experimenters had used the same definition or measure of speech-to-noise ratio. Several hours of calculating were required to get the data into comparable form. When they were in comparable form, it took only a few seconds to determine what I needed to know.

Throughout the period I examined, in short, my "thinking" time was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability.

The main suggestion conveyed by the findings just described is that the operations that fill most of the time allegedly devoted to technical thinking are operations that can be performed more effectively by machines than by men. Severe problems are posed by the fact that these operations have to be performed upon diverse variables and in unforeseen and continually changing sequences. If those problems can be solved in such a way as to create a symbiotic relation between a man and a fast information-retrieval and data-processing machine, however, it seems evident that the cooperative interaction would greatly improve the thinking process."

 

In a later paper, Licklider and Clark (1962) outline applications of human-computer communication to military command and control, mathematics, programming, war gaming and management gaming, planning and design, education, and scientific research. They report on some early experiments and prototype systems that demonstrate the potential of using computers in these applications. Then, showing remarkable foresight, they list ten problems whose solutions are prerequisites for true human-computer symbiosis. The first five they term "immediate," the next one "intermediate," and the last four "long-term":

 

• time sharing of computers among many users

 

• an electronic input-output surface for the display and communication of correlated symbolic and pictorial information (the earlier paper cited this and computer-driven wall displays as essential)

 

• an interactive, real-time system for information processing and programming

 

• systems for large scale information storage and retrieval designed to make possible concurrent, cooperative problem solving in many disciplines

 

• the facilitation of human cooperation in the design and programming of large systems

 

• combined speech recognition, hand-printed character recognition, and light-pen editing

 

• natural language understanding, including syntactic, semantic, and pragmatic aspects

 

• recognition of the speech of arbitrary computer users (the earlier paper stressed a need for automatic speech production, as well)

 

• the theory of algorithms—discovery, development, and simplification

 

• heuristic programming.

 

Time has proven Licklider and Clark to be remarkably accurate. Their immediate goals have been met. But despite some progress, their intermediate and long-term visions remain goals, thirty years later.

Time Sharing and Networking

Even before Licklider's papers, John McCarthy and Christopher Strachey independently proposed the development of a system of time-sharing, allowing the computer to work on several jobs simultaneously and giving each user the illusion of having a personal machine.

 

The realization of this idea was absolutely critical for interactive systems to succeed. Computers, which were very expensive, could run only one program at a time. It was too costly to allow such a computer to be idle waiting for a user to enter information, so "batch processing" was the rule: a program was read in, executed, and the result then printed out for the user to study. With time-sharing, an "inactive" user does not hold up the system—it can work on other jobs while waiting.
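

 

The underlying mechanism can be suggested in a few lines. In the toy Python sketch below (our construction, not the code of any historical system), each job yields control whenever it would otherwise wait, and a round-robin scheduler gives every runnable job a turn in quick succession, so each of several users sees the machine respond as though it were theirs alone.

    # A toy round-robin scheduler illustrating the time-sharing idea.
    # Each "job" is a generator that yields at points where it would wait.
    from collections import deque

    def job(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield  # give up the processor, e.g., while awaiting user input

    def time_share(jobs):
        ready = deque(jobs)
        while ready:
            current = ready.popleft()
            try:
                next(current)          # run one time slice
                ready.append(current)  # to the back of the queue
            except StopIteration:
                pass                   # job finished; drop it

    time_share([job("alice", 2), job("bob", 3)])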

 

The time-sharing concept was soon validated by experimental systems built in the early 1960s at Bolt Beranek and Newman, Inc., MIT, and System Development Corporation, among others (Davis, 1966; Fano and Corbato, 1966; Licklider, 1968). The viability and rapid development of time-sharing advanced the art of human-computer interaction significantly:

 

• It dramatically increased the accessibility of computers and the size of user communities.

 

• Because a user could now afford to think at "the terminal" and not simply carry out preconceived actions, designers of interactive programs could pay more attention to users' behaviors at the terminal and to methods for making them maximally productive. One result was the development of new "interaction languages" such as JOSS (Shaw, 1964), which facilitated on-line control and programming of the machine (Davis, 1966; Licklider, 1968).

 

• New applications such as computerized text processing began to flourish, eventually resulting in full-screen editors and word processors (Meyrowitz and van Dam, 1982).

 

• Because each time-sharing computer had a community of users who could communicate directly with each other through messages, or indirectly through a shared file system, computer-mediated human-human interaction could also be enriched.

 

Starting in the mid-1960s, the latter achievement was greatly extended by the development of wide area computer networks, linking machines and users in geographically distant sites (Roberts, 1986). The availability of these networks ultimately led to the virtual communities we see today (Chapter 14).

 

The challenges to realizing time-sharing and networking were primarily technical. Vision quickly became reality, providing the essential foundation for the development of interactive systems. Progress was slower, however, in implementing the concepts that facilitate fluid interaction between human and machine.

Sketchpad

Some computers of the 1950s, such as MIT's Whirlwind and the SAGE air-defense command and control system, had displays (Machover, 1978; Bell, 1986; Ross, 1986). Thus, by the middle 1950s it was obvious that the computer could be used to manipulate pictures as well as numbers and text. Researchers began exploring the potential for enhanced graphical communication between human and machine. The most successful was Ivan Sutherland (1963, 1963 video). His pioneering Sketchpad system, built at MIT Lincoln Laboratory, introduced powerful new ideas and concepts:

 

• hierarchic internal structure of computer-represented pictures, defined in terms of subpictures

 

• a master picture with instances that are transformed versions of the master; this concept helped lay the foundation for modern object-oriented programming

 

• constraints as a method of specifying details of the geometry of a picture; for example, a horizontal constraint applied to a line, or an equal distance constraint applied to pairs of points

 

• the ability to display and manipulate iconic representations of constraints

 

• the ability to copy as well as instance both pictures and constraints

 

• elegant techniques for picture construction using a light pen as an input device

 

• separation of the coordinate system in which a picture is defined from that in which it is displayed

 

• recursive operations such as "move" and "delete" applied to hierarchically defined pictures.

 

Sketchpad was tremendously influential, but years passed before most researchers and developers had access to systems powerful enough to apply these ideas.
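

 

Two of these ideas, the master-instance relationship and constraint satisfaction, translate readily into modern terms. The Python fragment below is our loose reconstruction, not Sketchpad's actual mechanism: instances store no geometry of their own, so editing the master, here by applying a "horizontal" constraint to a line, changes every displayed copy at once.

    # A rough sketch (ours, not Sketchpad's implementation) of two ideas:
    # instances as transformed views of a master picture, and a constraint
    # satisfied by adjusting the geometry of the master.
    class Master:
        def __init__(self, lines):
            self.lines = lines  # list of ((x, y), (x, y)) line segments

    class Instance:
        def __init__(self, master, dx, dy, scale=1.0):
            self.master, self.dx, self.dy, self.scale = master, dx, dy, scale

        def rendered(self):
            # An instance stores no geometry of its own, so editing the
            # master changes every instance.
            t = lambda p: (p[0] * self.scale + self.dx,
                           p[1] * self.scale + self.dy)
            return [(t(a), t(b)) for a, b in self.master.lines]

    def make_horizontal(line):
        # A "horizontal" constraint, satisfied by moving both endpoints
        # of a line to their mean y coordinate.
        (x1, y1), (x2, y2) = line
        y = (y1 + y2) / 2.0
        return ((x1, y), (x2, y))

    master_pic = Master([((0, 0), (10, 3))])
    copies = [Instance(master_pic, dx, 0) for dx in (0, 20, 40)]
    master_pic.lines[0] = make_horizontal(master_pic.lines[0])
    for inst in copies:
        print(inst.rendered())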

Interactive Computer Graphics

Others were actively exploring avenues for assisting design that were opened by the new interface technologies. Their excitement and intensity peaked at the same conference at which Sutherland unveiled Sketchpad, the 1963 Spring Joint Computer Conference. Work described there included a general outline of the requirements for a computer-aided design (CAD) system (Coons, 1963), a presentation of the requirements for CAD in terms of languages and data structures (Ross, 1963), a description of hardware requirements for CAD (Stotz, 1963), and a method of generalizing Sketchpad to allow input and manipulation of three-dimensional line drawings (Johnson, 1963).

 

But further advances in hardware and software were needed to realize the tremendous potential of computer graphics suggested by Sutherland and the early CAD pioneers. Early graphics hardware was very expensive, requiring costly memory to store image representations and costly circuitry to eliminate flicker from cathode ray tube (CRT) displays. Invented in the 1950s, the direct view storage tube (Preiss, 1978) had by the early 1970s made low-cost graphics possible, providing a tremendous stimulus. Work was also underway on enhancing the expressive potential of the technology. This led to two areas of innovation: powerful display processors (Myer and Sutherland, 1968) capable of real-time manipulations of simple line drawings, and input technologies that accepted real-time input of sketches and gestures, such as data tablets (Davis and Ellis, 1964).

 

The software front saw progress in two major directions. Investigators at Lincoln Laboratory and other sites developed operating systems capable of supporting interactive graphics under time-sharing, another step in making the technology more cost-effective (Sutherland, Forgie and Morello, 1969). Languages were developed with embedded graphics support that facilitated the production of graphics applications (Rovner and Feldman, 1968).

 

These advances led to a flourishing of new applications of computation to mathematics, science, engineering, and art. Culler and Fried, for example, pioneered the development of computer-aided mathematical laboratories (Culler, 1968). Typical of the new scientific applications was the use of 3D computer graphics in molecular modeling, begun by Levinthal (1966) and later refined by several groups (e.g., Brooks, 1977). In art, building on the pioneering work in computer-generated animation of Knowlton (1966) and others, Baecker (1969) provided a convincing example of how artists could specify and refine movies through an interactive language of sketches, direct actions, and real-time playback. Davis (1966), Licklider (1968), and a special issue of readings from Scientific American (1971) provide accounts of these early developments.

MEMEX Revisited: Two Visions of Augmenting Human Intellect

Also in the 1960s, two influential technological visionaries, Doug Engelbart (1963, 1982, 1986; Engelbart and English, 1968, 1968 video) and Ted Nelson (1965, 1973, 1981, 1987), elaborated Bush's concept of the MEMEX in imaginative ways. They envisioned computers building and manipulating richly structured complexes of interconnected, interlinked bodies of text, which Nelson termed hypertext. They realized, as Bush had not, that most information would be stored digitally rather than on microfilm.

 

The approaches of Engelbart and Nelson differed in substantive ways. Engelbart focused on defining an hierarchic structure for ordinary documents to enable computer support in their preparation; Nelson was more interested in lateral links and interconnections to create a text "space" unlike any that existed. Engelbart looked to support focused group creation and problem solving; Nelson was excited by individual exploration and extension of document structures that could combine contributions from people with no formal ties.

 

Engelbart conceived of his work as the "augmentation of man's intellect" (1963, pp. 1, 3-4):

 

"By "augmenting man's intellect" we mean increasing the capability of a man to approach a complex problem situation, gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: the comprehension can be gained more quickly; that better comprehension can be gained; that a useful degree of comprehension can be gained where previously the situation was too complex; that solutions can be produced more quickly; that better solutions can be produced; that solutions can be found where previously the human could find none. And by "complex situations" we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers — whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human "feel for a situation" usefully coexist with powerful concepts, streamlined terminology and notation, sophisticated methods, and highly-powered electronic aids.....

Our culture has evolved means for us to organize and utilize our basic capabilities so that we can comprehend truly complex situations and accomplish the processes of devising and implementing problem solutions. The ways in which human capabilities are thus extended are here called augmentation means, and we define four basic classes of them:

1. Artifacts — physical objects designed to provide for human comfort, the manipulation of things or materials, and the manipulation of symbols.

2. Language — the way in which the individual classifies the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts ("thinking").

3. Methodology — the methods, procedures, and strategies with which an individual organizes his goal-centered (problem-solving) activity.

4. Training — the conditioning needed by the individual to bring his skills in using augmentation means 1, 2 and 3 to the point where they are operationally effective.

The system we wish to improve can thus be visualized as comprising a trained human being together with his artifacts, language, and methodology....."

 

Twenty years later, in reviewing his progress, Engelbart (1982) asserted that he had been successful in facilitating "almost all phases of any simple to complex document production process," and in providing a "dialogue support system" consisting of electronic mail and remote shared screen capabilities. Echoing Licklider, he stressed the importance of synergy (Engelbart, 1982, pp. 306-307):

 

"It is extremely important to note the multiple levels of synergism at work here:

(a) The synergistic effect of integrating many tools into one coherent workshop makes each tool considerably more valuable than if it were used alone — for instance, the value of teleconferencing is very much greater when the participants are already doing a large proportion of their everyday work on line, so that any of the working material is available for selective citing and accessing, and when the users are already at home with the basic techniques of preparing and studying on-line material and of organizing and finding related passages.

(b) ... the synergistic effect of integrating many augmented individuals into one coherent community makes each element of augmentation considerably more valuable than if it were applied just to support its one individual — this is derived from the collaborative communication capabilities as applied through extended organizational methods to integrate the augmented capabilities of individuals into augmented teams and communities.

And finally, for any application of significant power — of which augmentation of an engineering project would be a good example — the adaptability and evolutionary flexibility of the computer-communication system is extremely important. The working methods of individuals will shift markedly as they settle into use of a comprehensive workshop, and with these new methods and skills will come payoff potential for changes and additions to their workshops — a cycle that will be significantly active for many years to come. A similar cycle will be even more dramatically evident at the organizational level."

 

Nelson (1965) contrasted his concept of hypertext to traditional text (p. 96):

 

"Systems of paper have grave limitations for either organizing or presenting ideas. A book is never perfectly suited to the reader; one reader is bored, another confused by the same pages. No system of paper — book or programmed text — can adapt very far to the interests or needs of a particular reader or student.

However, with the computer-driven display and mass memory, it has been possible to create a new, readable medium, for education and enjoyment, that will let the reader find his level, suit his taste, and find the parts that take on special meaning for him, as instruction or entertainment.

Let me introduce the word hypertext to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper. It may contain summaries, or maps of its contents and their interrelations; it may contain annotations, additions and footnotes from scholars who have examined it. Let me suggest that such an object and system, properly designed and administered, could have great potential for education, increasing the student's range of choices, his sense of freedom, his motivation, and his intellectual grasp. Such a system could grow indefinitely, gradually including more and more of the world's written knowledge....."

 

Doug Engelbart was early in recognizing the need for experiments to test various approaches to the human-computer interface (English and Engelbart, 1968). Nelson shares Engelbart's concerns about the quality of the interface, but takes a more passionate and less scientific approach. Typical of his style, for example, is his invention of the term "cybercrud," which he defines as "putting things over on people using computers" (1973), and the phrase "the psychic engineering of fantic fields," which anticipates what we now think of as the design of the conceptual models and metaphors through which users encounter and interpret interactive systems (pp. 21, 23, 26):

 

"The myth of technical determinism seems to hold captive both the public and the computer priesthood. Indeed, the myth is believed both by people who love, and by people who hate, computers. This myth, never questioned because never stated, holds that whatever is to come in the computer field is somehow preordained by technical necessity or some form of scientific correctness. This is cybercrud.

Computers do what people want them to do, at best. Figuring out what we should want, in full contemplation of the outspread possibilities, is a task that needs us all, laymen no less. There is something right about the public backlash against computers: things don't have to be this way, with our bank balances unavailable from computers, the immense serial numbers of our drivers' licenses generated by computers, the unstaunchable rivers of junk mail sent to us by computers. And it is the duty of the computer-man to help demythologize, to help the intelligent layman understand the specifics of systems he must deal with, and to help the public explore the question, what do we want?.....

I can now state what I believe to be the central problem of screenworld design, and indeed of design of man-machine anything — that is, psychic architecture.

By the psychic architecture of a system, I mean the mental conceptions and space structures among which the user moves: their arrangements and their qualities, especially clarity, integration and meshing, power, utility and lack of clutter.

It should be noted that these notions are much like those by which we judge regular architecture, and indeed the relationship would seem very close. An architectural grand design — say, of a capitol building — embraces the fundamental concepts a user will have to know to get around: main places, corridor arrangement (visualization and symmetries), access structure. These concepts are the very same in a screenworld or other complex man-made virtual structure: main places, corridors or transition rules (and their visualization and symmetries), access structure. It is a virtual space much like a building (though not confined to three "normally" connected dimensions), and susceptible to the same modes of spatial understanding, kinds of possible movement within, and potential appreciation and criticism.....

The psychic engineering of fantic fields — adult's hyperspaces of word and picture, child's gardens of verses — is our new frontier. We must look not to Asimovian robotics and the automated schoolmarm and librarian, but to the penny arcade and the bicycle, the clever diagram and the movie effect, to furnish this new realm."

 

Bravo! Happily, Nelson has had a profound effect on many young people interested in computers through his lively and deeply humanistic vision of the creative potential of appropriate computer technology.

 

Interestingly, two sub-disciplines of human-computer interaction have recently emerged, each claiming one of these two men as an inspiration. Computer-Supported Cooperative Work (CSCW) has applied Engelbart's insights into the importance of group and organizational activity (see Chapter 11). Hypertext has also become a booming field (see Chapter 13). Those chapters include the recent history of the two topics.

Human Factors, Psychology, and the Design of Human-Computer Dialogues

Recall Norman's suggestion that the design of human-computer interfaces be informed by the psychology and design of everyday artifacts. The discipline that focuses on enhancing the quality of our use of artifacts is called human factors or ergonomics (McCormick and Sanders, 1982; Sanders and McCormick, 1987; Van Cott and Kinkade, 1972). This field originated in studies of work practices early in the century. Human factors drew particular attention during and after World War II, when it was applied to the evaluation of complex new weaponry. Simple design flaws were found to have serious effects. For example, an escape hatch that was awkward for a fully equipped man to get through led to the unnecessary deaths of over 10,000 British bomber crew members (Dyson, 1979).

 

Human factors is therefore a branch of applied psychology devoted to guiding and enhancing the design of artifacts. Shackel's (1962) work on computer display design is an early example of empirically confirming design choices through quantitative validation of interface quality. Part IV discusses in greater depth the role of human factors and applied psychology in human-computer interaction. In the 1960s, psychological research began influencing the design of interactive computer systems and software in numerous ways, including:

 

• comparisons of batch processing and time sharing computer use, off-line and on-line problem solving (Grant and Sackman, 1967; Lampson, 1967; Nickerson, Elkind, and Carbonell, 1968; Gold, 1969)

 

• the relationship between the efficacy of problem solving in a time-shared environment and system response time (Miller, 1968; Grossberg, Wiesen and Yntema, 1976)

 

• human-computer cooperative problem solving (Yntema and Torgerson, 1961; Carbonell, Elkind, and Nickerson, 1968; Miller, 1969).

 

Yet, since most people who interacted directly with computers in the 1960s were programmers, human-computer interaction was at that time primarily programmer-computer interaction. The study of this interaction is typically known as the psychology of computer programming.

 

Sackman (1970) published an important book applying psychological research methods to programming. After reviewing past research on problem solving in programming, he presented detailed results and analyses of a new set of experiments. Of particular interest is his attention to individual differences and his recommendations for a "scientific study of man-computer problem solving."

 

Even more influential than Sackman's book was one published by Weinberg (1971). It dealt broadly with programming as human performance, as an individual activity, and as a social activity. More than any other single work up to that time, the book focused attention on the human factors of programming and described the actual behavior and thought processes of programmers as they carried out their daily activities. By encouraging programmers to improve the interfaces to their own computerized tools, thereby increasing their productivity and enhancing program reliability and maintainability, Weinberg fostered a general recognition of the importance of everyone's interface to the machine.

 

A third significant book, Shneiderman (1980), reviews the psychology of programming through the 1970s. As programmers became a smaller and smaller fraction of computer users in the 1980s and 1990s, the psychology of programming became less central to human-computer interaction. Research continues, however, with particular emphasis on cognitive models of programming, object-oriented programming, and visual programming (Soloway and Iyengar, 1986; Olson, Sheppard, and Soloway, 1987; Koenemann-Belliveau, Moher, and Robertson, 1991). Two useful survey papers are Curtis (1988) and Curtis, Soloway, Brooks, Black, Ehrlich, and Ramsey (1986); the latter is reprinted in Baecker and Buxton (1987), the first edition of this book.

 

By the 1970s, conditions had changed from the 50s and 60s, and more and more non-programmers were interacting directly with computers. Computer operators at terminals initiated batch processing jobs and handled the output. In industries such as insurance, banking, and air transportation, data entry personnel used terminals. These users had little discretion over their work conditions. Although optimizing the interfaces on these systems was not usually a management priority, psychologists and human factors specialists increasingly stressed the need and the potential for improving human-computer interaction. Shackel (1969), Nickerson (1969), and Bennett (1972) wrote early influential proposals and reviews in this regard.

 

A landmark event in the consolidation and popularization of human-machine interface issues was the publication of Design of Man-Computer Dialogues by James Martin (1973), a widely published and influential data processing consultant. The scope of his work includes the following:

 

• categorization of terminal operators

 

• alphanumeric dialogues, including natural language dialogues, programming dialogues, display techniques, and supporting hardware

 

• dialogues using sound and graphics, including the role of pictorial representations, and the technology of voice answerback systems

 

• psychological considerations, including response time requirements, human channel and buffer capacity considerations, display encoding, and the role of creativity

 

• training operators, including totally naive operators, computer assisted instruction, information control rooms, and terminals for management

 

• implementation considerations, including control of user errors, techniques for dealing with failures, "bullet proofing" systems, security and privacy, dialogue program generators, and simulation of human-machine interfaces.

 

Three other influential works of that period were "User Engineering Principles for Interactive Systems" (Hansen, 1971), chapters on interactive graphical techniques and command languages in Principles of Interactive Computer Graphics (Newman and Sproull, 1973), and "The Art of Natural Graphic Man-Machine Conversation" (Foley and Wallace, 1974). Hansen's paper presented what is probably the first, and certainly the shortest, enumeration of principles for the design of interactive systems:

 

• "Know the user."

 

• "Minimize memorization" by allowing selection of items rather than entry of data, using names instead of numbers, ensuring predictable behavior, and providing ready access to useful system information.

 

• "Optimize operations" by providing rapid execution of common operations, preserving display consistency, exploiting "muscle memory," and organizing and reorganizing commands based on observed system usage.

 

• "Engineer for errors" by providing good error messages, designing so common errors are not made, allowing actions to be reversible, and guaranteeing system integrity in the face of hardware or software failure.

 

Martin's book acquainted the commercial data processing world with the issues and importance of good interface design. The three works cited above raised the academic computer science community's awareness of human factors.

The Personal Workstation

We now return to system and technology development, turning our attention to Xerox Palo Alto Research Center (PARC), formed in 1971. An incredible concentration of computer science talent converged on PARC just as the evolution of memory and processor technology offered new opportunities in computer design and implementation. This produced several major contributions by the mid-70s, although most were not published until later (Pake, 1985; Perry and Wallich, 1985), and many were not exploited commercially by Xerox (Alexander and Smith, 1988).

 

• Xerox developed the Alto, a prototype of a new kind of computer called a "personal workstation" (Thacker et al., 1979; Thacker, 1986); intended for use by one individual, it had significant local processing power and memory, a high resolution bit-mapped display, a keyboard, and a mouse, the latter being an innovation developed by Engelbart's group at Stanford Research Institute.

 

• Xerox pioneered the development of congenial graphical interfaces to systems and to the applications that ran on them, such as text editing, creation of illustrations, document creation, and electronic mail (Lampson, 1986). These interfaces incorporated windows, menus, scroll bars, mouse control and selection mechanisms, and views of abstract structures, all presented and integrated in a consistent manner. One important innovation was WYSIWYG, "What You See Is What You Get," in which a user sees and manipulates on the screen a representation of a document that looks identical to the eventual printed page.

 

• Xerox pioneered methods for the local area networking of workstations (Metcalfe and Boggs, 1976; Lampson, 1986). This provided users with the advantages of a personal machine along with access to shared resources, such as a central file server and high-speed printers.

 

Note that other, even earlier computers, such as the Whirlwind, the TX-0, the TX-2, the DEC PDP-1, and the Linc (Clark, 1986), were often used as "personal computers" (Bell, 1986). Yet only with the Alto and its successors did the concept of a computer intended for use by a single individual become technologically and economically realizable, first in highly capitalized, advanced research and development laboratories, and years later "for the rest of us."

The Dynabook

Perhaps the most compelling visions amidst the excitement of the many new ideas and applications at Xerox were Alan Kay's concepts of the "Reactive Engine" (Kay, 1969) and the "Dynabook" (Kay and Goldberg, 1977; Kay, 1987 video) (pp. 31-2):

 

""Devices" which variously store, retrieve, or manipulate information in the form of messages embedded in a medium have been in existence for thousands of years. People use them to communicate ideas and feelings both to others and back to themselves. Although thinking goes on in one's head, external media serve to materialize thoughts and, through feedback, to augment the actual paths the thinking follows. Methods discovered in one medium provide metaphors which contribute new ways to think about notions in other media.

For most of recorded history, the interactions of humans with their media have been primarily nonconversational and passive in the sense that marks on paper, paint on walls, even "motion" pictures and television, do not change in response to the viewer's wishes. A mathematical formulation — which may symbolize the essence of an entire universe — once put down on paper, remains static and requires the reader to expand its possibilities.

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new "metamedium" is active — it can respond to queries and experiments — so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling.

A dynamic medium for creative thought: the Dynabook. Imagine having your own self-contained knowledge manipulator in a portable package the size and shape of an ordinary notebook. Suppose it had enough power to outrace your senses of sight and hearing, enough capacity to store for later retrieval thousands of page-equivalents of reference materials, poems, letters, recipes, records, drawings, animations, musical scores, waveforms, dynamic simulations, and anything else you would like to remember and change.

We envision a device as small and portable as possible which could both take in and give out information in quantities approaching that of human sensory systems. Visual output should be, at the least, of higher quality than what can be obtained from newsprint. Audio output should adhere to similar high-fidelity standards.

There should be no discernible pause between cause and effect. One of the metaphors we used when designing such a system was that of a musical instrument, such as a flute, which is owned by its user and responds instantly and consistently to its owner's wishes. Imagine the absurdity of a one-second delay between blowing a note and hearing it!

These "civilized" desires for flexibility, resolution, and response lead to the conclusion that a user of a dynamic personal medium needs several hundred times as much power as the average adult now typically enjoys from timeshared computing. This means that we should either build a new resource several hundred times the capacity of current machines and share it (very difficult and expensive), or we should investigate the possibility of giving each person his own powerful machine. We chose the second approach."

 

Hardware advances over the past two decades have enabled the realization of Kay's vision in terms of size and portability. Portable computers, then notebook computers, and most recently personal information managers have become significant segments of the computer marketplace. A challenge for the year 2000 is to develop software that will complete the realization of the vision.

The Personal Computer

About the time that Kay was building and elaborating his Interim Dynabook, an article in Popular Electronics (Roberts and Yates, 1975) triggered a veritable revolution in the computing industry by describing the Altair, the first home computer. Personal computers increased the availability of computer power at an incredible rate and significantly broadened the usefulness of computers. No longer were they primarily the purview of a technical and mathematical priesthood at one end and tightly managed data entry clerks at the other. Instead, rapidly growing numbers of computer users were doctors and lawyers, teachers and librarians, business people and shopkeepers.

 

Whether professional users, amateurs, or hobbyists, most of the new computer users desired more congenial, forgiving, "user-friendly" interfaces than were available. The strength of this phenomenon resulted in the eventual successes of the Apple Macintosh, the first commercially successful implementation of the Xerox-style human interface, and the many graphical interfaces that followed it, as described in Case B.

 

The history of personal computer and workstation hardware and software is covered in Levy (1985), Goldberg (1988), and Ranade and Nash (1994).

The Role of Artificial Intelligence

As Kay prototyped his Dynabook, Nicholas Negroponte (1969) in the Architecture Machine Group at MIT was developing a more radical view of the ultimate human-computer interaction (p. 1):

 

"Computer-aided design cannot occur without machine intelligence — and would be dangerous without it. In our area, however, most people have serious misgivings about the feasibility and more importantly, the desirability of attributing the actions of a machine to intelligent behavior. These people generally distrust the concept of machines that approach (and thus why not pass?) our own human intelligence. In our culture an intelligent machine is immediately assumed to be a bad machine. As soon as intelligence is ascribed to the artificial, some people believe that the artifact will become evil and strip us of our humanistic values. Or, like the great gazelle and the water buffalo, we will be placed on reserves to be pampered by a ruling class of automata.

Why ask a machine to learn, to understand, to associate courses with goals, to be self-improving, to be ethical — in short, to be intelligent?

The answer is the underlying postulate of an architecture machine. A design machine must have an artificial intelligence because any design procedure, set of rules, or truism is tenuous, if not subversive, when used out of context or regardless of context. It follows that a mechanism must recognize and understand the context before carrying out an operation. Therefore, a machine must be able to discern changes in meaning brought about by changes in context, hence, be intelligent. And to do this, it must have a sophisticated set of sensors, effectors, and processors to view the real world directly and indirectly."

 

Negroponte went on to describe the need for a rich language of interaction between human and machine, calling for dialogues specific to a particular person and particular machine that would evolve based on their shared history (pp. 9-13):

 

"..... Each [a designer and a machine] should track the other's design maneuvers, evoking a rhetoric that cannot be anticipated. "What was mere noise and disorder or distraction before, becomes pattern and sense; information has been metabolized out of noise" (Brodey and Lindgren, 1967). The event is circular inasmuch as the designer-machine unity provokes a dialogue and the dialogue promotes a stronger designer-machine unity. This progressively intimate association of the two dissimilar species is the symbiosis. It evolves through mutual training, in this case, through the dialogue.

Such man-machine dialogue has no historical precedent. The present antagonistic mismatch between man and machine, however, has generated a great deal of preoccupation for it. In less than a decade the term "man-machine communication" has passed from concept to cliché to platitude. Nevertheless, the theory is important and straightforward: in order to have a cooperative interaction between a designer of a certain expertise and a machine of some scholarship, the two must be congenial and must share the labor of establishing a common language. A designer, when addressing a machine, must not be forced to resort to machine-oriented codes. And in spite of computational efficiency, a paradigm for fruitful conversations must be machines that can speak and respond to a natural language.

With direct, fluid, and natural man-machine discourse, two former barriers between architects and computing machines would be removed. First, the designers, using computer-aided design hardware, would not have to be specialists. With natural communication, the "this is what I want to do" and "can you do it" gap could be bridged. The design task would no longer be described to a "knobs and dials" person to be executed in his secret vernacular. Instead, with simple negotiations, the job would be formulated and executed in the designer's own idiom. As a result, a vibrant stream of ideas could be directly channeled from the designer to the machine and back.

The second barrier overcome by such close communion is the potential for reevaluating the procedures themselves. In a direct dialogue the designer can exercise his proverbial capriciousness. At first a designer may have only a meager understanding of his specific problem and thus require machine tolerance and compatibility in his search for the consistency among criteria and form and method, between intent and purpose. The progression from visceral to intellectual can be articulated in subsequent provisional statements of detail and moment-to-moment reevaluations of the methods themselves.

But, the tête-à-tête must be even more direct and fluid; it is gestures, smiles, and frowns that turn a conversation into a dialogue. "Most Americans are only dimly aware of this silent language even though they use it every day. They are not conscious of the elaborate patterning of behavior which prescribes our handling of time, our spatial relationships, our attitudes towards work, play, and learning" (Hall, 1959). In an intimate human-to-human dialogue, hand-waving often carries as much meaning as text. Manner carries cultural information: the Arabs use their noses, the Japanese nod their heads. Customarily, in man-machine communication studies, such silent languages are ignored and frequently are referred to as "noise." But such silent languages are not noise; a dialogue is composed of "whole body involvement — with hands, eyes, mouth, facial expressions — using many channels simultaneously, but rhythmized into a harmoniously simple exchange" (Brodey and Lindgren, 1968).

Imagine a machine that can follow your design methodology and at the same time discern and assimilate your conversational idiosyncrasies. This same machine, after observing your behavior, could build a predictive model of your conversational performance. Such a machine could then reinforce the dialogue by using the predictive model to respond to you in a manner that is in rhythm with your personal behavior and conversational idiosyncrasies.

What this means is the dialogue we are proposing would be so personal that you would not be able to use someone else's machine, and he would not understand yours. In fact, neither machine would be able to talk directly to the other. The dialogue would be so intimate — even exclusive — that only mutual persuasion and compromise would bring about ideas, ideas unrealizable by either conversant alone. No doubt, in such a symbiosis it would not be solely the human designer who would decide when the machine is relevant."

 

An elaboration of this vision can be found in Negroponte (1975) and in a quarter century of work by the Architecture Machine Group, now the Media Laboratory, at MIT. Inspirational as this vision has been, little of the promise of AI has yet been realized. Current progress is documented in our presentations of speech and natural language interaction in Chapter 8 and of work on tailorable and adaptive systems in Chapter 12. Each decade promises to be the one in which AI technologies will find widespread application. Will it happen by the year 2000? Negroponte and others think so ("Speech more important," 1989; Straub and Wetherbe, 1989).

Modeling Users and Interfaces

The 1960s saw relatively few scientific and behavioral studies of interfaces, other than those at Bolt Beranek and Newman, System Development Corporation, and MIT Lincoln Laboratory, cited earlier. The pace of effort picked up in the 1970s. An influential group was formed by John Gould at IBM Research around 1971. One early contribution was to study the interface to Zloof's (1975, 1976) concept of "Query by Example" before any code was written (Thomas and Gould, 1975).
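
To give the flavor of the idea being evaluated: in Query by Example the user does not compose a query in a command language, but instead fills in example values and output markers in a skeleton of the table itself. The fragment below is our own illustrative sketch of that idea in Python, not Zloof's actual notation or any IBM implementation; the table, field names, and matching rules are hypothetical:

    # Illustrative sketch of the Query-by-Example idea (hypothetical data
    # and matching rules, not Zloof's notation): the "query" is a partially
    # filled-in example row; blank fields match anything, filled fields
    # must match, and "P." marks the fields to print.

    EMPLOYEES = [
        {"NAME": "Jones", "DEPT": "TOY",  "SALARY": 12000},
        {"NAME": "Smith", "DEPT": "SHOE", "SALARY": 14000},
        {"NAME": "Lee",   "DEPT": "TOY",  "SALARY": 16000},
    ]

    # Skeleton: print the NAME of every employee in the TOY department.
    skeleton = {"NAME": "P.", "DEPT": "TOY", "SALARY": ""}

    def query_by_example(table, skeleton):
        for row in table:
            if all(row[field] == value
                   for field, value in skeleton.items()
                   if value not in ("", "P.")):   # filled fields must match
                yield {field: row[field]
                       for field, value in skeleton.items() if value == "P."}

    for result in query_by_example(EMPLOYEES, skeleton):
        print(result)   # {'NAME': 'Jones'} then {'NAME': 'Lee'}

Part of the appeal, and what made a behavioral study possible before implementation, is that such skeletons can be presented and tested entirely on paper.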

 

Another influential group was the Applied Information Processing Psychology Project of Allen Newell, Stu Card, and Tom Moran, begun at Xerox PARC in 1974. Their approach drew from the field of "information processing psychology" (Lindsay and Norman, 1977). This field of psychology, described in Chapter 9, involves representing human information processing (perceiving, recognizing, learning, remembering, etc.) in formal terms, in some cases modeled by computer programs.

 

Newell, Card and Moran developed a model of a human user of computer systems called the Model Human Information Processor (see Chapter 9) and a methodology for calibrating the model and applying it as a predictive tool. Even though the resulting work, The Psychology of Human-Computer Interaction (Card, Moran and Newell, 1983), raises more questions than it answers, it is a landmark contribution, the first major body of work that attempts to develop an underlying applied science of human-computer interaction.
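
One concrete product of that methodology, the Keystroke-Level Model, predicts the time an expert needs to perform a routine task without errors by summing standardized operator times. The sketch below is our own minimal rendering of such a calculation in Python; the operator values are those published by Card, Moran and Newell (1983), but the function name and the encoded task are hypothetical:

    # Minimal Keystroke-Level Model (KLM) sketch. Operator times (seconds)
    # are the standard values from Card, Moran and Newell (1983); the task
    # encoding below is a hypothetical example.

    KLM_OPERATORS = {
        "K": 0.20,   # press a key (average skilled typist, 55 wpm)
        "P": 1.10,   # point at a target with the mouse
        "B": 0.10,   # press or release the mouse button
        "H": 0.40,   # home hands between keyboard and mouse
        "M": 1.35,   # mentally prepare for the next action
    }

    def predict_task_time(operators):
        """Predict expert, error-free task time by summing operator times."""
        return sum(KLM_OPERATORS[op] for op in operators)

    # Hypothetical task: move hand to the mouse, think, point at a word,
    # click it (press and release), return to the keyboard, type 4 letters.
    task = ["H", "M", "P", "B", "B", "H"] + ["K"] * 4
    print(f"Predicted time: {predict_task_time(task):.2f} seconds")  # 4.25

Predictions of this kind can be made from a task specification alone, before any system is built, which is precisely what made the model attractive as an engineering tool.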

Expanding Research Frontiers

Finally, on the applied side, the 1980s and 90s saw a veritable explosion of research on new paradigms of human-computer interaction. Although here also the ideas had typically been introduced in the 1960s and 70s, decades had to elapse before technology matured sufficiently for them to be explored. These paradigms include groupware and computer-supported cooperative work (Chapter 11); tailorable and adaptive systems (Chapter 12); hypertext and multimedia (Chapter 13); and cyberspace — virtual reality, ubiquitous computing, and global networks (virtual communities) (Chapter 14). The history of these areas appears in the chapters where they are discussed.

A Developing Community of Scholars

In the late 1970s and early 1980s, many corporations joined IBM and Xerox in mounting major efforts to study and improve the human factors of computing systems. Some academic interest had arisen earlier. In England, Professor Brian Shackel of the University of Loughborough, who in the late 1950s pioneered work on the ergonomics of computers (Shackel, 1962), founded the influential Human Sciences and Advanced Technology (HUSAT) research group in 1970. Other prominent scholars who helped expand the field in the late 70s and early 80s included Allen Newell at Carnegie-Mellon University, Don Norman at the University of California, San Diego, James Foley at George Washington University, Ben Shneiderman at the University of Maryland, and Phil Barnard, Nick Hammond, John Long, Patricia Wright, and Richard Young at the Medical Research Council Applied Psychology Unit in Cambridge, England.

 

This community grew rapidly in the United States and Western Europe. The ACM Special Interest Group on Computer-Human Interaction (SIGCHI), formed in 1983, became the fastest-growing ACM Special Interest Group. Local chapters of SIGCHI formed in several cities to schedule monthly technical presentations. The field has drawn increasing participation from Japan and other Asian countries, Australia, South America, and Eastern Europe.

 

We end this account of the intellectual roots of the field of human-computer interaction in the mid-1980s. The researchers and developers whose work comprises this volume take up these threads and introduce many new ideas and discoveries from the subsequent decade. Before placing you in their hands, we list several forums of communication among members of the field. These have helped us and may assist you in further exploration:

 

• International Symposium on Man-Machine Systems, held in Cambridge, England, in 1969 (IEEE, 1969)

 

• International Journal of Man-Machine Studies, now called The International Journal of Human-Computer Studies, begun in 1969

 

• Software Psychology Society, based in Washington, D.C., which has met monthly since 1976 (Shneiderman, 1986)

 

• Human Factors Society Technical Group on Computer Systems, formed in 1971

 

• Conference on Easier and More Productive Use of Computing, held at the University of Michigan in 1981

 

• annual ACM SIGCHI CHI Conferences, begun with an unexpectedly successful 1982 meeting in Gaithersburg, Maryland; the Proceedings are widely cited; the SIGGRAPH video series documents demonstrations shown at past conferences

 

• journal Behaviour and Information Technology (since 1982)

 

• IFIP INTERACT Conferences (every three years since 1984)

 

• British Computer Society HCI Conferences (annually since 1985)

 

• International Conference on Human-Computer Interaction (every two years since 1985)

 

• journal Human-Computer Interaction (since 1985)

 

• the Handbook of Human-Computer Interaction (Helander, 1988), a landmark 1100-page compendium of 52 excellent original overview chapters

 

• journal Interacting with Computers (since 1989)

 

• journal International Journal of Human-Computer Interaction (since 1989)

 

• journal ACM Transactions on Computer-Human Interaction (since 1994)

 

• magazine interactions (ACM) (since 1994)

 

There are also specialized conferences dealing with User Interface Software and Technology, Computer-Supported Cooperative Work, Hypertext, Multimedia, Intelligent Interfaces, and Virtual Reality.

 

Human-computer interaction has also taken a place in general treatments of the history of computing (e.g., the Annals of the History of Computing; Friedman, 1989; BBC, 1992 video). Friedman does not focus exclusively on human-computer interaction or even on interactive systems; his subject is systems in organizations, but his book includes a useful overview of the history of computer systems. In addition, more focused historical reviews have begun to appear (Gaines, 1985; Shackel, 1985; Grudin, 1990, 1991).

 

The study of human-computer interaction is underway. Is the field living up to its potential? You be the judge.

References

Smith, D. and Alexander, R. (1988). Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer. Morrow.

 

Baecker, R. (1969). Picture-Driven Animation. AFIPS Conference Proceedings 34, 273-288.

 

Baecker, R. and Buxton, W. (Eds.) (1987). Readings in Human-Computer Interaction: A Multidisciplinary Approach. Morgan Kaufmann Publishers.

 

Bell, G. (1986). Toward a History of Personal Workstations. Proc. Conference on the History of Personal Workstations, ACM, 1-17.

 

Bennett, J. (1972). The User Interface in Interactive Systems. Annual Review of Information Science and Technology 7, 159-196.

 

Brodey, W. and Lindgren, N. (1967). Human Enhancement Through Evolutionary Technology. IEEE Spectrum 4(9), 87-97.

 

Brodey, W. and Lindgren, N. (1968). Human Enhancement: Beyond the Machine Age. IEEE Spectrum 5(2), 79-93.

 

Brooks, F. (1977). The Computer "Scientist" as Toolsmith — Studies in Interactive Computer Graphics. IFIP Conference Proceedings, 625-634.

 

Bush, V. (1945). As We May Think. The Atlantic Monthly 176(1), 101-108.

 

Carbonell, J., Elkind, J., and Nickerson, R. (1968). On the Psychological Importance of Time in a Time Sharing System. Human Factors 10(2), 135-142.

 

Card, S., Moran, T., and Newell, A. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates.

 

Card, S. and Moran, T. (1986). User Technology: From Pointing to Pondering. Proc. Conference on the History of Personal Workstations, ACM, 183-198.

 

Clark, W. (1986). The LINC was Early and Small. Proc. Conference on the History of Personal Workstations, ACM, 133-155.

 

Coons, S. (1963). An Outline of the Requirements for a Computer-Aided Design System. AFIPS Conference Proceedings 23, 299-304.

 

Culler, G. (1968). Mathematical Laboratories: A New Power for the Physical and Social Sciences. Reprinted in Proceedings of the Conference on the History of Personal Workstations. ACM, 1986, 59-72.

 

Curtis, W. (1988). Five Paradigms in the Psychology of Programming. In Helander, M. (Ed.) (1988), Handbook of Human-Computer Interaction, North-Holland, 87-105.

 

Curtis, W., Soloway, E., Brooks, R., Black, J., Ehrlich, K., and Ramsey, R. (1986). Software Psychology: The Need for an Interdisciplinary Program. Proc. of the IEEE 74(8), 1092-1106.

 

Davis, M. and Ellis, T. (1964). The Rand Tablet: A Man-Machine Graphical Communication Device. AFIPS Conference Proceedings 24, 325-331.

 

Davis, R. (1966). Man-Machine Communication. In Cuadra, C.A. (Ed.) Annual Review of Information Science and Technology 1. Interscience, 221-254.

 

Dyson, F. (1979). Disturbing the Universe. Harper & Row.

 

Engelbart, D. (1963). A Conceptual Framework for the Augmentation of Man's Intellect. In Howerton and Weeks (Eds.), Vistas in Information Handling, Vol. 1. Spartan Books, 1-29.

 

Engelbart, D. (1982). Integrated, Evolutionary, Office Automation Systems. In Landau and Bair (Eds.), Emerging Office Systems. Ablex.

 

Engelbart, D. (1986). The Augmented Knowledge Workshop. Proc. Conference on the History of Personal Workstations. ACM, 73-83.

 

Engelbart, D. and English, W. (1968). A Research Center for Augmenting Human Intellect. AFIPS Conference Proceedings 33, 395-410.

 

English, W., Engelbart, D., and Berman, M. (1967). Display-Selection Techniques for Text Manipulation. IEEE Transactions on Human Factors in Electronics HFE-8(1), 5-15.

 

Fano, R. and Corbato, F. (1966). Time-Sharing on Computers. Scientific American 215(3), 129-140.

 

Foley, J. and Wallace, V. (1974). The Art of Natural Graphic Man-Machine Conversation. Proc. of the IEEE 62(4), 462-470.

 

Friedman, A. (1989). Computer Systems Development: History, Organization and Implementation. Wiley.

 

Gaines, B. (1985). From Ergonomics to the Fifth Generation: 30 Years of Human-Computer Interaction Studies. Proc. Interact '84. Elsevier (North-Holland), 3-7.

 

Gold, M. (1969). Time-Sharing and Batch-Processing: An Experimental Comparison of Their Values in a Problem-Solving Situation. Communications of the ACM 12(5), 249-259.

 

Goldberg, A. (Ed.) (1988). A History of Personal Workstations. Addison-Wesley. Material originally presented at the ACM Conference on the History of Personal Workstations, held in 1986.

 

Grant, E. and Sackman, H. (1967). An Exploratory Investigation of Programmer Performance Under On-Line and Off-Line Conditions. IEEE Transactions on Human Factors in Electronics 8(1), 33-48.

 

Grossberg, M., Wiesen, R. and Yntema, D. (1976). An Experiment on Problem Solving with Delayed Computer Responses. IEEE Transactions on Systems, Man, and Cybernetics, 219-222.

 

Grudin, J. (1990). The Computer Reaches Out: The Historical Continuity of Interface Design. Proc. CHI'90, ACM, 261-268.

 

Grudin, J. (1991). Interactive Systems: Bridging the Gaps Between Developers and Users. IEEE Computer 24(4), 59-69. Reprinted in this volume.

 

Hall, E. (1959). The Silent Language. Doubleday.

 

Hansen, W. (1971). User Engineering Principles for Interactive Systems. AFIPS Conference Proceedings 39, Fall Joint Computer Conference. AFIPS Press, 523-532.

 

Helander, M. (Ed.) (1988). Handbook of Human-Computer Interaction. Elsevier.

 

IEEE (1969). IEEE Transactions on Man-Machine Systems: Special Issue, 10 Part II (4).

 

Johnson, T. (1963). Sketchpad III: Three Dimensional Graphical Communication with a Digital Computer. AFIPS Conference Proceedings 23, 347-353.

 

Kay, A. (1969). The Reactive Engine. Ph.D. Thesis, University of Utah.

 

Kay, A. & Goldberg, A. (1977). Personal Dynamic Media. IEEE Computer 10(3), 31-42.

 

Knowlton, K. (1965). Computer-Produced Movies. Science 150, 1116-1120.

 

Koenemann-Belliveau, J., Moher, T. and Robertson, S. (Eds.) (1991). Empirical Studies of Programmers: 4th Workshop. Ablex.

 

Lampson, B. (1967). A Critique of "An Exploratory Investigation of Programmer Performance Under On-Line and Off-Line Conditions." IEEE Transactions on Human Factors in Electronics 8(1), 48-51.

 

Lampson, B. (1986). Personal Distributed Computing: The Alto and Ethernet Software. Proc. Conference on the History of Personal Workstations. ACM, 101-131.

 

Levinthal, C. (1966). Molecular Model-building by Computer. Scientific American 214(6), 42-52.

 

Levy, S. (1985). Hackers. Anchor Press/Doubleday.

 

Licklider, J. (1960). Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics HFE-1(1), 4-11.

 

Licklider, J. and Clark, W. (1962). On-Line Man-Computer Communication. AFIPS Conference Proceedings 21, 113-128.

 

Licklider, J. (1968). Man-Computer Communication. Annual Review of Information Science and Technology 3, 201-240.

 

Lindsay, P. and Norman, D. (1977). Human Information Processing: An Introduction to Psychology, Second Edition. Academic Press.

 

Machover, C. (1978). A Brief, Personal History of Computer Graphics. IEEE Computer 11(11), 38-45.

 

Martin, J. (1973). Design of Man-Computer Dialogues. Prentice-Hall.

 

McCormick, E. and Sanders, M. (1982). Human Factors in Engineering and Design, Fifth Edition. McGraw-Hill.

 

Metcalfe, R. and Boggs, D. (1976). ETHERNET: Distributed Packet Switching for Local Computer Networks. Communications of the ACM 19(7), 395-404.

 

Meyrowitz, N. and Van Dam, A. (1982). Interactive Editing Systems: Part 1. ACM Computing Surveys 14(3), 321-352.

 

Miller, R. (1968). Response Time in Man-Computer Conversational Transactions. AFIPS Conference Proceedings 33, 267-277.

 

Miller, R. (1969). Archetypes in Man-Computer Problem Solving. IEEE Transaction on Man-Machine Systems 10 Part II (4), 219-241.

 

Myer, T. and Sutherland, I. (1968). On the Design of Display Processors. Communications of the ACM 11(6), 410-414.

 

Negroponte, N. (1970). The Architecture Machine: Towards a More Humane Environment. MIT Press.

 

Negroponte, N. (1975). Soft Architecture Machines. MIT Press.

 

Nelson, T. (1965). A File Structure for the Complex, the Changing, and the Indeterminate. Proc. ACM National Conference, 84-100.

 

Nelson, T. (1973). A Conceptual Framework for Man-Machine Everything. Proc. National Computer Conference, M21-M26.

 

Nelson, T. (1981). Literary Machines. Project Xanadu, 8400 Fredericksburg, #138, San Antonio, TX 78229.

 

Nelson, T. (1987). Computer Lib/Dream Machines. Revised Edition. Tempus Books.

 

Newman, W. and Sproull, R. (1979). Principles of Interactive Computer Graphics, Second Edition (First Edition, 1973). McGraw-Hill.

 

Nickerson, R., Elkind, J., and Carbonell, J. (1968). Human Factors and the Design of Time Sharing Computer Systems. Human Factors 10(2), 127-134.

 

Nickerson, R. (1969). Man-Computer Interaction: A Challenge for Human Factors Research. Ergonomics 12 (4), 501-517.

 

Olson, G., Sheppard, S. and Soloway, E. (Eds.) (1987). Empirical Studies of Programmers: 2nd Workshop. Ablex.

 

Pake, G. (1985). Research at Xerox PARC: A Founder's Assessment. IEEE Spectrum 22(10), 54-61.

Perry, T. and Wallich, P. (1985). Inside the PARC: The 'Information Architects'. IEEE Spectrum 22(10), 62-75.

 

Preiss, R. (1978). Storage CRT Terminals: Evolution and Trends. IEEE Computer 11(11), 20-26.

 

Ranade, J. and Nash, A. (Eds.) (1994). The Best of Byte: Two Decades on the Leading Edge. McGraw-Hill.

 

Roberts, H. and Yates, W. (1975). ALTAIR 8800: The Most Powerful Minicomputer Project Ever Presented — Can Be Built For Under $400. Popular Electronics, 33-38.

 

Roberts, L. (1986). The Arpanet and Computer Networks. Proc. Conference on the History of Personal Workstations, ACM, 51-58.

 

Ross, D. (1986). A Personal View of the Personal Work Station: Some Firsts in the Fifties. Proc. Conference on the History of Personal Workstations, ACM, 19-48.

 

Ross, D. and Rodriguez, J. (1963). Theoretical Foundations for the Computer-Aided Design System. AFIPS Conference Proceedings 23, 305-322.

 

Rovner, P. and Feldman, J. (1968). The LEAP Language and Data Structure. IFIP Conference Proceedings, 579-585.

 

Sackman, H. (1970). Man-Computer Problem Solving: Experimental Evaluation of Time-Sharing and Batch Processing. Auerbach.

 

Sanders, M. and McCormick, E. (1987). Human Factors in Engineering and Design, Sixth Edition. McGraw-Hill.

 

Scientific American (1971). Readings from Scientific American: Computers and Computation. W. H. Freeman & Co.

 

Shackel, B. (1962). Ergonomics in the Design of a Large Digital Computer Console. Ergonomics 5, 229-241.

 

Shackel, B. (1969). Man-Computer Interaction — The Contribution of the Human Sciences. IEEE Transactions on Man-Machine Systems 10 Part II (4), 149-163.

 

Shackel, B. (1985). Ergonomics in Information Technology in Europe — A Review. Behaviour and Information Technology 4(4), 263-287.

 

Shaw, J. (1964). JOSS: A Designer's View of an Experimental On-Line Computing System. Proc. Fall Joint Computer Conference 26, 455-464.

 

Shneiderman, B. (1980). Software Psychology: Human Factors in Computer and Information Systems. Winthrop Publishers.

 

Shneiderman, B. (1986). No Members, No Officers, No Dues: A Ten Year History of the Software Psychology Society. SIGCHI Bulletin 18(2), 14-16.

 

Soloway, E. and Iyengar, S. (Eds.) (1986). Empirical Studies of Programmers. Ablex.

 

Speech more Important Interface than Graphics, Media Lab's Negroponte tells SIGGRAPH. (1989, November). Byte, p. 26.

 

Stotz, R. (1963). Man-Machine Console Facilities for Computer-Aided Design. AFIPS Conference Proceedings 23, 323-328.

 

Straub, D. and Wetherbe, J. (1989). Information Technologies for the 1990s: An Organizational Impact Perspective. Communications of the ACM 32, 1328-1339.

 

Sutherland, I. (1963). Sketchpad: A Man-Machine Graphical Communication System. AFIPS Conference Proceedings 23, 329-346.

 

Sutherland, W., Forgie, J., and Morello, M. (1969). Graphics in Time-sharing: A Summary of the TX-2 Experience. AFIPS Conference Proceedings 34, 629-636.

 

Thacker, C. (1986). Personal Distributed Computing: The Alto and Ethernet Hardware. Proc. Conference on the History of Personal Workstations. ACM, 87-100.

 

Thacker, C., McCreight, E., Lampson, B., Sproull, R., and Boggs, D. (1979). Alto: A Personal Computer. In Siewiorek, Bell, and Newell, Computer Structures: Principles and Examples, Second Edition. McGraw-Hill, 549-572.

 

Thomas, J. and Gould, J. (1975). A Psychological Study of Query By Example. AFIPS Conference Proceedings 44, 439-445.

 

Van Cott, H. and Kinkade, R. (Eds.) (1972). Human Engineering Guide to Equipment Design, Revised Edition. Washington, D.C.: American Institutes for Research.

 

Weinberg, G. (1971). The Psychology of Computer Programming. Van Nostrand Reinhold.

 

Williams, G. (1984). The Apple Macintosh Computer. Byte 9(2), 30-54.

 

Yntema, D. and Torgerson, W. (1961). Man-Computer Cooperation in Decisions Requiring Common Sense. IRE Transactions on Human Factors in Electronics, 20-26.

 

Zloof, M. (1975). Query By Example. AFIPS Conference Proceedings 44, 431-437.

 

Zloof, M. (1976). Query By Example—Operations on Hierarchical Data Bases. AFIPS Conference Proceedings 45, 845-853.

Videos

BBC (1992). The Machine That Changed the World. Five-hour video series.

 

Engelbart, D. and English, W. (1968). A Research Center for Augmenting Human Intellect. ACM SIGGRAPH Video Review 106. Originally filmed in 1968, published in 1994.

 

Kay, A. (1987). Doing With Images Makes Symbols: Communicating with Computers. University Video Communications, P.O. Box 5129, Stanford, CA 94309, USA.

 

Sutherland, I. (1963). Sketchpad. ACM SIGGRAPH Video Review 13. Originally filmed in 1963, published in 1983.