NSF Workshop on Human-Centered Systems Group 3 - Human Centered Design
THE CHALLENGE OF HUMAN-CENTERED DESIGN
T. Winograd and D. D. Woods
Report from Working Group 3
Version of April 9, 1997
>>>>>>> DRAFT: DO NOT CITE OR DISTRIBUTE <<<<<<<
Working Group Leaders:
T. Winograd (Stanford University)
D. D. Woods (Ohio State University)
J. Miller (Apple),
R. Jeffries (Sun Microsystems),
G. Fischer (University of Colorado),
O. Garcia (Wright State University),
G. McConkie (University of Illinois),
M. Holloway (Netscape),
P. Ehn (Malmo University),
V. De Keyser (University of Liege),
J. Grudin (University of California, Irvine),
P. Agre (University of California, San Diego),
S. J. Mountford (Interval Corp.)
Table of Contents
1. What Is A Human-Centered Approach?
1.1 Wide Interpretations of the Label “Human-Centered”
1.2 Technology-Driven Development
1.3 Why Are These Interpretations Insufficient?
1.4 The Strong Interpretation of Human-Centered Design
1.5 How Do We Foster (Strong) Human-Centered Design?
2. Research To Advance Strong Human-Centered Design
2.1 New Modes of Relating Design and Research: Complementarity
2.1.1 The "experimenter as designer" and the "designer as experimenter"
2.1.2 Data and theories about sets of people working with artifacts within a context
2.1.3 Supporting human-centered design and innovation
2.2 Developing Cognitive and Social Technologies to Complement Computational Technologies
2.3 Measures / Quality Metrics
2.4 Examples of Contexts and Challenges in Human-Centered Design and Research
2.4.1 Human-centered system integration
2.4.2 Integrated collaborative tools
2.4.3 Tools for "co-bots"
3. Models for Research and Education
3.1 Testbed-Style Research Projects within Situated Contexts
3.2 Human-Centered Reflection on Design Activities
3.3 Case-Based Research
3.4 Researcher Training in Conjunction with Apprenticeship
1. WHAT IS A HUMAN-CENTERED APPROACH?
To create truly human-centered systems, we need to shift the focus of research and design to put human actors, and the fields of practice in which they function, at the center of technology development. This shift will make a significant difference in our ability to harness the power of computers for an expanding variety of people and for the expanding range of activities in which those people use computers and computer-based technologies.
The term "human-centered" is used by many people in a variety of related but non-identical ways. It is important to understand the consequences of taking a "strong" interpretation of the term, which we recommend. It can be contrasted with "wide" interpretations that may be useful for other groups or contexts.
1.1 Wide Interpretations of the Label “Human-Centered”
One can identify areas of computer science research as being human-centered in several ways:
Wide Interpretation 1: The motivation for technology development is grounded in a statement about human needs.
Priority choices in research directions can be motivated either by the abstract logic of the discipline, or by a prediction of how the research results will be applied to meeting human needs. For example, research on medical technology has a clear need basis, while research on graph theory (though it may end up having medical and other applications) is "abstraction-driven" -- not directly motivated by considerations of how the results will be used. We might say that research is human-centered if it is need-driven: motivated by considerations of its applications.
Wide Interpretation 2: People are "in the loop" or part of the system to be developed.
For some computer systems and applications, the role of human-computer interaction is secondary -- there may be some human startup and interventions, but to a large extent, the "beef" is in the computing, not the interaction. For a large and growing class of systems at every level, human-computer interaction plays a central role, and attention to this dimension can be thought of as human-centered. By this definition, "human-centered computing" is another phrase for describing the field of Human-Computer Interaction.
Wide Interpretation 3: Technology that happens to be about interacting with or across people is human-centered.
Work to advance the development of computer-based visualizations, natural language capabilities of computers, intelligent software agents to digest and filter information autonomously, networking tools to link diverse people in diverse locations, and many other examples are human-centered in the sense that the technology under development is intended to interact with or to support interactions across people. The research and development work focuses on expanding the capabilities of the computer with the assumption that advancing these technologies will in and of itself produce benefits. These benefits are sometimes presumed to flow directly from technological advances. In other cases the developers may make allowance for a usability testing and refinement stage after the technology is sufficiently mature.
Wide Interpretation 4: Technology development and change is justified based on predicted improvements in human cognition, collaborations across people, or human performance.
Developments in new computational technologies and systems often are justified in large part based on their presumed impact on human cognition, collaboration and performance. The development or introduction of new technology is predicted to reduce practitioner workload, reduce errors, free up practitioner attention for important tasks, give users greater flexibility, hide complexity, automate tedious tasks, or filter out irrelevant information, among other claims. In effect, prototypes and designs embody hypotheses about how technology change will shape cognition, collaboration and performance. As a result, technology is often based on human-centered intentions, in the sense of changing cognition and collaboration. Whether those intentions are matched by human-centered practice is another question, one addressed by the strong interpretation of the label.
Making such predictions presumes some research base of evidence and models about how related technology changes have affected cognition, collaboration, and performance, and it implies empirical tests of whether the predictions embodied in systems match actual experience. These are one part of a stronger interpretation of what it means to be human-centered in system development.
1.2 Technology-Driven Development
All of the wide interpretations of human-centered design still leave the development process in the position illustrated in Figure 1. The diagram shows a sequence from left to right.
• First, technologies are developed which hold promise to influence human cognition, collaboration and activity. The primary focus is pushing the technological frontier or creating the technological system. The technologist is at the heart of this step.
• Eventually, interfaces are built which connect the technology to users. These interfaces typically undergo some usability testing and usability engineering to make the technology accessible to potential users. Human-computer interaction and usability specialists come into play at this stage.
• When the technologies are put into use, they have social and other larger consequences which can be studied by social scientists.
• Presumably, the human factors and social consequences from past developments have some influence on future development (the small arrows back towards the left).
Figure 1: A technology-driven approach
This sequential approach is fundamentally technology-driven because developing the technology is itself the primary activity around which all else is organized. Norman (1993) illustrates this by pointing to the original technology-centered motto of the 1933 Chicago World’s Fair: “Science Finds, Industry Applies, Man Conforms.”
1.3 Why Are These Interpretations Insufficient?
As the powers of technology explode around us, developers recognize the potential for benefits and charge ahead in pursuit of the next technological advance. Expanding the powers of technology is a necessary activity, but research results have shown that it is rarely sufficient in itself. Sometimes useful systems emerge from the pursuit of technological advances. However, empirical studies of the impact of new technology on actual practitioner cognition, collaboration and performance have revealed that new systems often have surprising consequences or even fail (e.g., Norman, 1988; Sarter, Woods and Billings, in press). Often the message from users -- a message carried in their voices, their performance, their errors, and their adaptations -- is one of complexity. In these cases technological possibilities are used clumsily: systems intended to serve the user turn out to add new burdens, often at the busiest times or during the most critical phases of a task, and to create new types of error traps.
For example, users can be surprised by new autonomous technologies that are strong but silent (Billings, 1996), asking each other questions like:
• What is it doing now?
• What will it do next?
• Why did it do this?
In other words, new technology transforms what it means to carry out activities within a field of practice -- changing what knowledge is required and how it is brought to bear to handle different situations, changing the roles of people within the overall system, changing the strategies they employ, changing how people collaborate to accomplish goals.
A large set of breakdowns in the human-computer interaction have been identified. These have been compiled (e.g., Norman, 1988) sometimes as “ways to design things wrong” from a human-centered point of view or as “classic” design errors in that they occur over and over again. These problems include:
Hard to Use, Hard to Learn
For every user at some time, and for some users at almost every time, computers are hard to use, hard to learn, and puzzling to work with. Even experienced users find that they do not remember how to do infrequent tasks, are unaware of capabilities the system has, and end up in frustrating breakdowns in which it is not clear how to proceed. Many potential users throw up their hands altogether because of the complexity (real or perceived) of the computer systems they encounter.
Data Overload
As computerization increasingly penetrates a field of activity, the power to collect and transmit data outstrips our ability to interpret the massive field of data available. This problem has expanded beyond technical fields of activity (an airplane cockpit or a power plant control room) to everyday areas of activity as access to, and the capabilities of, the Internet have grown explosively. The problem is rarely getting the needed data; rather, it is finding what is informative, given one's interests and needs, within a very large field of available data. From email overload to the thousands of “hits” returned by a web query, people find that they do not have the tools to cope with the huge quantities of information they must deal with.
Error and Failure
Computerization transforms tasks, eliminating some types of error and failure while creating new types of errors, sometimes with larger consequences (Woods et al., 1994). Some forms of error exist only in the interaction of people and computers -- for example, mode error. As Norman (1988) puts it, if you want to create errors, “... change the rules. Let something be done one way in one mode and another way in another mode.”
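The mechanism behind mode error can be made concrete with a minimal sketch. The code below is a hypothetical illustration (not from the report): a modal editor in which the same keypress maps to different operations depending on an invisible mode the user must track mentally, so acting while misremembering the mode produces an unintended result.

```python
# A minimal, hypothetical sketch of how a mode error arises: one user
# action, two meanings, selected by hidden state the user must remember.

class ModalEditor:
    def __init__(self):
        self.mode = "command"    # invisible state the user must track
        self.text = "hello"

    def toggle_mode(self):
        self.mode = "insert" if self.mode == "command" else "command"

    def press(self, key):
        if self.mode == "insert":
            self.text += key            # insert mode: keys add text
        elif key == "x":
            self.text = self.text[:-1]  # command mode: 'x' deletes

editor = ModalEditor()
editor.press("x")       # user intends to type 'x'...
print(editor.text)      # ...but command mode deletes a character: 'hell'
editor.toggle_mode()
editor.press("x")
print(editor.text)      # the same keypress now inserts: 'hellx'
```

The design lesson, in Norman's terms, is that the error is built into the interface: nothing in the interaction itself tells the user which rules are currently in force.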
Clumsy Automation
Computer systems intended to help users by reducing workload sometimes have a perverse effect. Studies have revealed clumsy automated systems (e.g., cockpit automation) -- that is, systems that make even easier what was already easy, while making more difficult the challenging aspects of the job. This clumsiness arises because designers have incomplete and inaccurate models of how workload is distributed over time and phase of task, and of how practitioners manage workload to avoid bottlenecks in particular fields of activity.
Fragmentation and Creeping Featurism
As we continually expand the range of activities that people do with computers, we also tend to increase the diversity of ways in which they interact. From the technical point of view, there is a plethora of "systems," "applications," "interfaces," and “options” which the user combines to get things done. From the human point of view, each individual has a setting of concerns and activities that is not organized according to the characteristics of the computing system, software application, or computerized device. The machine environment becomes more and more complex and confusing as new technologies overlap in the service of the user's spheres of activity.
More to Know and Remember
Computer systems, despite their information processing and display capabilities, seem to keep demanding that users know more and remember more. Enter a workplace and we almost always find that users keep paper notes as a kind of external memory to keep track of apparently arbitrary things that a new computer system demands they remember to be able to interact with the system. There seems to be a “conspiracy against human memory” in the typical way that computer systems are designed (Norman, 1988).
Poor User Experience
In the early days of computing, the point was to get done a job that could not have been done without the computer. The "user experience" was not a consideration -- if operators could be trained to do the ballistics or code calculations, that was sufficient. In today's computing world, the axis has shifted. People use computers at their discretion, not just because they need the capabilities, but because they find the experience to be positive. In many cases, however, they are bored, frustrated, or forced to operate in ways they do not find appropriate. The effect on how they respond is not just emotional; it has a direct impact on their ability to learn and use systems effectively. Concern with what is pleasing or displeasing to the user is not a "frill," but a key tool in creating effective systems and deploying them to the people who need them. The underlying principles of human-centered design apply to everything from weapons control systems to video games.
As computers become more pervasive in everyday life, people are increasingly confronted with interactions that are both important and difficult. As computing systems and networks move into a central position in many spheres of work, play, and everyday activity, they seem to take on more functions and increase in complexity. As a result, the kinds of breakdowns described above take on new urgency.
For example, there are an expanding variety of people using computers. Computers are no longer the exclusive province of science and business. We see computers in schools, in homes, in public spaces, and in every place where people lead their lives. A major goal of the government's efforts in developing the information infrastructure is to bring universal transparent affordable access to an information society. This universal reach magnifies the breakdowns, both in number and in consequences.
Systems have become more integrated and collaborative. Most early computer systems were designed to get some specific task done. Today, the "system" encompasses a wide variety of users and tasks within a closely linked collection of devices and activities. The Internet can be thought of as an extreme example of this integration. With it comes complexity and all the other problems mentioned above.
Increasingly, there is software that mediates the use of computer systems. In an attempt to deal with the breakdowns of computing, a number of researchers and software producers are developing programs that can be thought of as "agents," which mediate between a person and the computer systems that are useful to her or him. The motivation is admirable, and sometimes these agents can be quite effective. However, such mediators can create more of the complexity they are intended to reduce, as well as new forms of error and user frustration, if they are not designed with human-centered principles in mind.
Ultimately, technological advances are needed, but they are not sufficient to produce useful and successful systems. The actual impact of technology change on cognition, collaboration and performance varies depending on how the power and possibilities afforded by technology are integrated into ongoing fields of human activity. As Norman puts it, “technology can make us smart and technology can make us dumb” (Norman, 1993). Our central problem is often not whether we can develop it, but what we should develop. Our central problem is not less or more technology, but rather skillful or clumsy use of the wide range of technological possibilities available to us.