Transfer and Assay: Concerns and Commitments
Anker Helms Jørgensen and Annette Aboulafia
Department of Psychology
University of Copenhagen
21 June 1994
Amodeus Project Document: TA/WP27
The overall aim of this report is to ensure a sound basis for transfer and assay work in year 3. It is based on the experience gained in studies in the first two years. The main motivation is the touchy element of evaluation in assay work and the dependence between the parties involved in terms of commitment and quality. It is critical to develop a shared understanding of transfer and assay work, its assumptions and implications. The report is a step in this direction in that it aims at creating the grounds for establishing a complete, explicit and agreed basis for transfer and assay work. This will serve to meet the needs of the parties involved: the modelling teams, the assay team, and the design teams. The report addresses four main issues. Firstly, a model of the development of modelling approaches towards transfer and assay is presented. Secondly, a procedure for conducting particular transfer and assay studies is outlined. Thirdly, issues on collaboration with design teams are discussed. Finally, subjective issues regarding usability are addressed.
1. Introduction
In the first two years of the project we have gained substantial experience in doing transfer and assay studies. As the third year will see an increased number of focussed transfer and assay studies where the models come under scrutiny, the time is ripe to take stock of this experience. The approach taken in this report is rather general, as we have a wide range of models, some of which have been tested thoroughly and some of which have barely been out of the laboratory.
Transfer and assay is an integral part of the AMODEUS II project. A number of different and significant issues are in play here. Firstly, it involves the assay team (RP4) doing evaluations of models. Having a separate and independent assay team is a unique feature and a strength of AMODEUS II that in itself promotes evaluation on a sound scientific basis. However, it also has its pitfalls as evaluating others' work is a touchy business, not least within a project.
Secondly, the modelling teams, the assay team and design teams are highly dependent on each other in terms of quality and commitment. The assayers need something substantial to evaluate, something they believe in, something with clout for designers. The modelling teams are dependent on the quality of the work done by the assayers, e.g., having relevant feedback. The design teams involved are interested in timely, relevant and applicable input from modelling teams, who in turn supposedly are interested in realistic design issues and commitments from the designers.
Thirdly, as we all know, iteration is a crucial feature in the development of usable products. This is also true for models and encapsulations from the modelling teams. Therefore we have to address the iteration aspect by clarifying the expectations of the modelling teams and the assay team in this regard. Given a set of transfer and assay results meticulously collected by the assay team and fed back to a modelling team, how willing is the modelling team to iterate the model in order to make it more usable and useful?
Fourthly, a pertinent issue in transfer and assay of HCI models is what we have coined "usability in your own back yard". Even excellent HCI researchers are not necessarily excellent mediators of their research and may also be blinkered in terms of the expected utility and usability of their products. They may therefore have difficulties in being confronted with comments from practitioners on the utility and usability of their products. This problem will undoubtedly emerge in the future and it is therefore vital that we address it.
These are some reasons why it is critical to the project to develop a shared understanding of the nature of transfer and assay, its assumptions and implications - and how we actually go about doing it. This report is intended as a vehicle for establishing a complete, explicit and agreed basis for transfer and assay work. The time is ripe as we have already seen the issue popping up here and there.
As to the nature of evaluation, we have looked in the literature for frameworks for evaluation. They do exist in reports on Human Factors and Software Engineering evaluation studies (e.g., Card et al., 1987, and Bittner, 1992) and in standard reference handbooks (e.g., Helander, 1988, and Salvendy, 1987). However, these frameworks are fairly instrumental and of limited scope, as they merely address the measuring problem, not the wider issues of the nature and purpose of evaluation. We have therefore looked elsewhere and found two relevant approaches. One is social science, where a vast amount of literature exists on models of evaluation of programmes (e.g., Herman et al., 1987); another is assessment of teaching, where a comprehensive framework exists for setting up instructional goals and deriving evaluation criteria (Gronlund, 1985). This approach is described in section 3. In addition we have the "Gulfs framework" developed in RP4 (Buckingham Shum & Hammond, 1994) that centers around the notion of encapsulation gulfs, i.e., communication gulfs that can be bridged by appropriate means. These three approaches will serve as conceptual vehicles supporting the conduct of studies, both from the modelling perspective and from the transfer and assay perspective.
The report comprises four main parts. The first presents a model of the development of modelling approaches towards transfer and assay and the roles of the teams involved. The second part outlines a procedure for setting up and conducting transfer and assay studies. The third part addresses the issues of proper collaboration with design teams that on the one hand ensures a sound scientific basis for evaluation and on the other hand meets the needs of the practitioners. Finally, subjective issues regarding usability are addressed.
2. Towards Transfer and Assay
The modelling approaches differ widely in terms of maturity and robustness; some have been tested thoroughly while others have hardly been out of the laboratory. This implies that transfer and assay work will also differ widely. Thus, prescribing a particular manner of doing transfer and assay is almost impossible. Therefore this report takes a rather general view of transfer and assay. The trend in the third year of the project is towards focussed studies of individual modelling approaches with students. We will, however, in this report assume that collaboration with real design projects is still possible.
2.1 The development of Models/Encapsulations towards Transfer and Assay
A model of the transfer and assay process from an assay perspective is shown in figure 1 below. It aims at clarifying the states that the models and encapsulations move through towards transfer and assay. It also identifies intermediate states and suggests requirements for moving from one state to another.
Figure 1: Model of Modelling/Encapsulation Development
In the two "ends" we have a modeller (Michael O. Deller) and a designer (Don E. Signer) to remind us about the nature of the producer and the consumer of the models: human beings. In between we have the Model Pool that holds all the models. From some models, encapsulations are derived and, when sufficiently developed, moved to the Encapsulation Pool (transition A). When to make this transition is to be decided by the individual modelling teams. When an encapsulation is ready to be transferred and assayed, it is moved to the T&A Pool (transition B). This move is to be decided by the modelling team and RP4 jointly. Some models may not take the step via encapsulations and thus move directly from the Model Pool to the T&A Pool. From the T&A Pool, models are drawn for the actual T&A process. The issues concerned with transitions A and B are discussed in this report, while the issue in connection with transition C is a matter for the whole project to decide. We should mention here that we may well end up with more models in the T&A Pool than RP4 can possibly transfer and assay due to resource limits. Therefore the issue arises as to which models to actually transfer and assay (transition C); this is discussed elsewhere (Jørgensen, 1994).
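The pools and transitions in figure 1 can be read as a small state machine. The sketch below is purely illustrative: the pool and transition names follow the figure and the text above, while the data structure and function are hypothetical, not part of the project's actual tooling.

```python
# Illustrative sketch of the development states in figure 1.
# Pool and transition names follow the figure; everything else is hypothetical.

from enum import Enum

class Pool(Enum):
    MODEL = "Model Pool"
    ENCAPSULATION = "Encapsulation Pool"
    TA = "T&A Pool"

# Who decides each transition, per the description above.
TRANSITIONS = {
    ("A", Pool.MODEL, Pool.ENCAPSULATION): "modelling team",
    ("B", Pool.ENCAPSULATION, Pool.TA): "modelling team and RP4 jointly",
    # Some models skip encapsulation and move directly to the T&A Pool.
    ("B'", Pool.MODEL, Pool.TA): "modelling team and RP4 jointly",
    ("C", Pool.TA, "T&A Process"): "whole project",
}

def decision_maker(source, target):
    """Return who decides a move between two states, or None if no such move."""
    for (_label, src, dst), decider in TRANSITIONS.items():
        if src == source and dst == target:
            return decider
    return None

print(decision_maker(Pool.ENCAPSULATION, Pool.TA))
# prints: modelling team and RP4 jointly
```

The point of the sketch is simply that each transition has a distinct decision-maker, which is why the preconditions in section 3.1 attach to transition B specifically.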
In the following, the term model will be used for both models and encapsulations in the T&A Pool, in order to avoid the cumbersome phrase "model or encapsulation".
2.2 Roles of RP4 between modelling teams and design team
In order to be able to make the roles and responsibilities explicit we will outline three ways in which the collaboration between the modelling teams, the assay team and the design team can take place. Key issues here are what is being transferred (the modelling skill or the modelling results), who is doing the modelling (modelling teams or designers), and who is feeding the modelling results to the designers and in turn feeding back the designers' reactions to the modelling teams. Three modes are outlined.
In the transfer mode the modelling teams do the modelling, the assayers transfer the results to the designers and the designers' reaction back to the modelling teams. This mode has been employed in the work on the common exemplars, especially ISLE.
In the representative mode the assay team "represents" the modelling teams: presents the models to designers, conducts the transfer, assays the designers' work with the models, analyses the results and feeds them back to the modellers. This mode has been employed in the EnEx2-study (Shum, 1993) and the two recent Design Space Analysis studies in Copenhagen.
In the consulting mode, the assay team serves as facilitators between the modelling teams and designers. The modelling teams themselves run studies supervised by the assayers. This mode is employed in the PAC teaching study.
3. Procedures for Transfer and Assay
This section outlines four phases in doing transfer and assay studies. First, a number of preconditions have to be met regarding transition B in figure 1 above; next, the basis for the transfer and assay has to be established, including purpose, roles, tasks, responsibilities, expected outcome, feedback, etc. The last two phases are running the study and feeding back the results to the modelling teams.
3.1 Preconditions for Transfer and Assay
A number of preconditions have to be fulfilled before we can embark upon serious transfer and assay, in order to ensure proper utilization of RP4 resources. Referring to the process model above, this concerns transition B from the Encapsulation Pool to the T&A Pool. This transition is to be decided jointly by the particular modelling team and the assay team.
3.1.1 Are the Commitments Clear?
As noted earlier, we are dependent on each other in the project. Therefore a precondition for transfer and assay studies is a clear statement from the assayers that they are willing to do a proper evaluation of a model - see section 3.2 for details that have to be addressed. This is necessary in order that the modellers be confident that their model will be given a fair trial. Likewise, the assayers need a clear statement from the modelling team about their willingness to iterate their model based on the transfer and assay study. This is necessary in order to ensure that RP4 resources are utilized in a constructive manner in the project.
3.1.2 Is the Model Substantial and Documented?
A model has to be substantial before transfer and assay in order to make the best use of RP4 resources. It also has to be communicable and relevant to designers:
- Is the model reasonably "meaty", robust, applicable, coherent, mature and complete?
- Is the model communicable and documented in a form comprehensible to designers?
- In what way is the model relevant to designers?
3.1.3 What are the Benefits of the Model?
If we want to "sell" modelling techniques, we have to be very explicit about these benefits - besides having something "meaty" to sell. A recurrent question asked by designers is "What does this approach buy me?"
We therefore need statements from the modelling teams about benefits, potential and power of the model - as well as constraints and prerequisites. These come in two forms. One is the general form as in the Executive Summaries and Worked Examples (Buckingham Shum, 1994; Buckingham Shum et al, 1994). The other is specific to the particular design project at hand and this will - we envisage - develop as the contact with the design project evolves.
Here is a list of topical issues to be addressed:
- benefits in designers' general or specific skills
- benefits in the quality of resulting software
- benefits in design process facilitation
- how it relates to current design practice
- application areas
- the intended users of the model
- preconditions for use
- skills required to use it
- when to use it
- how to use it
- illustrative examples
The answers to the points are extremely important for design teams. This is really where the seeds for proper collaboration with designers are being laid.
We need concepts to handle the benefits achieved in designers' cognitive skills and their appreciation of the models. We need refinements of the three categories previously applied in RP4 work, namely whether the models are relevant, understandable, and applicable (Aboulafia et al., 1993), and of the distinctions made in the literature on analytic models: applicability, interpretability, and validity (de Haan et al., 1991, and Gugerty, 1993). Being explicit about this will sharpen our understanding of the benefits of the models. In addition to the "Gulfs framework" (Buckingham Shum and Hammond, 1994), we will here propose a framework for evaluation of teaching developed by Gronlund (1985), based on the cognitive and affective taxonomies by Bloom (1956). The idea is to make explicit the instructional goals in terms of the outcome, as well as making explicit the basis for an evaluation of the outcome.
The idea is that in transfer and assay studies, RP4 will draw upon these approaches in a dialogue with the model team involved in order to identify critical issues and sharpen the expectations. We certainly do not expect the modelling teams to come up with a complete set of instructional goals!
The cognitive taxonomy consists of six levels of comprehension and cognitive skills: knowledge, comprehension, application, analysis, synthesis, and evaluation. The affective taxonomy consists of five levels of affective integration: receiving, responding, valuing, organisation, and characterization by a value or value complex. The taxonomies are briefly illustrated below by an example from teaching biology. In appendix A the categories are described and a more thorough example is given.
This short example shows the instructional objectives in tenth-grade biology teaching. The table lists the main objectives. They can be further elaborated as shown under point 6.
Types of learning Instructional objectives
Knowledge 1. Knows common terms used in biology
2. Knows specific biological facts
3. Knows common laboratory procedures
Understanding 4. Understands general biological principles
Application 5. Applies biological facts and principles to new situations
Thinking Skills 6. Demonstrates skill in critical thinking
6.1 Distinguishes between facts and opinions
6.2 Draws valid conclusions from given data
6.3 Identifies assumptions underlying conclusions
6.4 Identifies limitations of given data
Laboratory skills 7. Uses the microscope skilfully
8. Performs basic operations of dissection skilfully
Communication skills 9. Writes clear and accurate reports of laboratory experiments
Study skills 10. Locates biological information
11. Interprets diagrams, graphs, and charts
Attitudes 12. Displays a scientific attitude towards biological phenomena
Adjustments 13. Works cooperatively with others
Figure 2: General instructional objectives for tenth-grade biology
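The step from instructional objectives like those in figure 2 to evaluation criteria can be illustrated with a small sketch. The data below transcribes a subset of figure 2; the function and the phrasing of the derived questions are hypothetical, meant only to show how explicit objectives give an evaluation a concrete checklist to work from.

```python
# Hypothetical sketch: a subset of the instructional objectives in figure 2,
# keyed by type of learning, used to derive yes/no evaluation questions.

objectives = {
    "Knowledge": [
        "Knows common terms used in biology",
        "Knows specific biological facts",
        "Knows common laboratory procedures",
    ],
    "Understanding": [
        "Understands general biological principles",
    ],
    "Application": [
        "Applies biological facts and principles to new situations",
    ],
    "Thinking skills": [
        "Distinguishes between facts and opinions",
        "Draws valid conclusions from given data",
        "Identifies assumptions underlying conclusions",
        "Identifies limitations of given data",
    ],
}

def evaluation_checklist(objs):
    """Turn each objective into a question an evaluator can answer yes or no."""
    return [f"Does the student demonstrate: {obj}?"
            for group in objs.values() for obj in group]

for question in evaluation_checklist(objectives)[:3]:
    print(question)
```

The same move, from explicit goals to derivable criteria, is what section 3.2 asks of the modelling teams when establishing the expected outcome of a study.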
3.2 Establishing a Basis for Transfer and Assay
The second major phase in doing a transfer and assay study is to establish a specific basis for a study, given the information addressed in the previous section has been provided. Readers well versed in research design will be familiar with this; more detail will be added in the particular cases. It is important to note that the issues discussed below will be addressed iteratively throughout a transfer and assay study.
The purpose is the driving force for any study. This is certainly also the case here - but here we have three quite distinct types of purpose, namely that of the modelling team, of the assay team, and of the design team. These can be quite different, as the modelling team for example may want to address specific model-related issues, the assay team may want to explore a novel transfer and assay method, and the design team may want input on certain aspects of the software. Being explicit here is crucial for the overall success of the study. In this process the literature on evaluation from social science (e.g., Herman et al., 1987) may prove beneficial, with distinctions between goal-oriented, decision-oriented, responsive, goal-free and utilization-oriented evaluations.
What is the focus of the study? Is it the learning of the model, the usability of the notation of the model, is it the benefit of applying it, or is it required knowledge?
What assay context will be used? What kinds of users (students or designers), what method to apply, what kinds of data collection and data analysis, etc. - a whole range of questions from research design.
It has to be made clear how the collaboration between the model team, the assay team and the design team should take place: transfer mode, representative mode, or consulting mode? It should also be clear who is supposed to do the modelling work: the modellers or the designers?
The general success criteria will also have to be addressed. These are fairly high-level general views of what constitutes the success of a study, probably fairly implicit. Are there any issues that are particularly touchy and that - if violated - will impede the study severely or ruin it altogether? These views will probably emerge in the course of the planning of the study.
What is the expected outcome? Given the information on the power and capabilities of the model, a refinement of this information homing in on the specific context of the study would be in order, for further clarification and sharpening of our knowledge of the power of the models. Here the "Gulfs framework" and the cognitive and affective taxonomies outlined earlier may again be helpful. If designers are involved it would also be necessary to clarify their expectations in terms of outcome.
What kind of feedback is desired? This is clarification of the feedback the modellers would like to get: the nature, the form, the extent, the granularity and the role. Clear statements on this early will both facilitate designing the study and also help clarify the issue of commitment. In addition the assayers will enhance their understanding of the way the modellers work if the role of feedback and the way it will inform the modelling approach is explicit.
3.3 Running the study
This is the third major element in a transfer and assay study. It is almost impossible to say anything sensible about this here, as it is completely specific to the particular set-up of the study.
3.4 Feeding back the results
This is the fourth major element of a study. Again we cannot say anything specific here, except that we should be aware that there are several parties involved and different perspectives. Therefore it may well prove beneficial to take time and effort to reflect on the whole process in addition to eliciting particular results. A distinction on testing borrowed from software engineering (Sommerville, 1992) might be useful here as a vehicle, where the "thing" can refer to model, modelling result, transfer mode, etc.:
Verification: "Are we doing the thing right?"
Validation: "Are we doing the right thing?"
4. Collaboration with design teams
It appears that the overall strategy in year 3 of the project will imply a shift away from large common design spaces to smaller focussed studies of individual modelling approaches, like the EnEx-2 study in year 1 and the Design Space Analysis studies in year 2 where students were used as "designers". This, however, does not necessarily imply that contact with designers is excluded. We may find opportunities for running smaller focussed studies with real design projects, perhaps only involving one or two modelling approaches. It is therefore still important to address the issues of designer involvement.
Especially the first common design space, ISLE (Buckingham Shum et al., 1993), but also to some extent the EuroCODE and CERD studies, have made us aware of a number of issues that need to be clarified before collaboration is established with a design project. In ISLE we had four main points of contact with the designers after collaboration had been established: 1) email exchanges on system behaviour while the modelling took place, 2) the modelling report, 3) a one-day workshop with the designers, and 4) an interview with one of the ISLE team members. All exchanges were successful - but for quite different reasons. The email exchanges were useful for the designers as they helped identify and clarify design issues; from the report and in the workshop we learned a lot about transferring results, relevance to designers, and the necessity of clarifying expectations, commitments and motivations; and in the subsequent interview some of the underlying mechanics were revealed. This is the basis for the following list of the most important issues that need to be addressed:
- What are the reasons the design project wants to collaborate with AMODEUS II?
- What kinds of input are designers interested in?
- What are the expectations towards the collaboration?
- Is the design team agreeing on collaboration, expectations, and commitments?
- What support is there from management towards collaboration?
- What are the commitments?
- What is the likelihood that the project will continue for a reasonable period of time?
- What are the factors that govern the fate of the project?
5. Usability in our backyard
A general issue in the transfer and assay of HCI models is what we have coined "usability in your own back yard", or the objective and the subjective approach to usability (Jørgensen, 1987). It is well known that even excellent HCI researchers are not necessarily excellent mediators of their research and may also be blinkered in terms of the utility and usability of their "products". This is not surprising, as their focus is research and not mediation, let alone application. In other fields, e.g., software engineering, professionals have strong feelings about their "products", as witnessed by the following quote about a menu-driven computer interface with one-item menus: "It was his baby and he didn't want to change it although it was clearly unusable" (Jørgensen, 1983). The same is undoubtedly true for researchers. The modellers (being the usability specialists) may therefore have a hard time being confronted with statements from designers (who are far less specialized in usability) on the lack of utility or usability of their "products". This problem will undoubtedly emerge in the future and it is therefore vital that we address it. Here are some initial points.
It is important that the modelling teams realise that transfer and assay is an integral part of the AMODEUS II project - and realise the implications for their modelling approach, e.g., by being ready to stand up for transfer and assay studies. The assayers must in turn realise that the modelling teams have many years of emotional investment in their models and that their perspective in general is research-orientated.
The assayers can illustrate a general point in learning and discovery processes: dramatic advances are often seen when one approaches application ("gets dirt on the hands"). The assayers can demonstrate this point through carefully conducted transfer and assay studies that provide timely, relevant and applicable feedback to the modelling teams.
Finally, the modelling teams and the assay team must jointly make explicit to the design teams involved their interests, backgrounds, and expectations of the collaboration. They must also be as explicit as possible as to the expected benefits of the models. Vice versa, design teams must also be explicit about their commitments and their needs. Do keep in mind that the time horizon for techniques and models in the software engineering business is around 10 years (Lauesen et al., 1993).
These issues should be taken seriously in order for the AMODEUS project to really exploit the potential to provide timely, relevant, and applicable evidence to designers - just like the simple thinking-aloud method (Jørgensen, 1990).
References
Aboulafia, A., Nielsen, J. and Jørgensen, A.H. (1993): Evaluation Report on the 'EnEx1' Design Workshop. Amodeus II report TA/WP2.
Bittner, A.C. (1992): Robust Testing and Evaluation of Systems: Framework, Approaches, and Illustrative Tools. Human Factors, vol. 34, pp. 477-484.
Bloom, B.S. (1956) (ed): Taxonomy of Educational Objectives: Handbook I, Cognitive Domain. New York, D. McKay.
Buckingham Shum, S. (ed) (1994): Executive Summaries of AMODEUS Modelling Approaches. Amodeus II report TA/WP12.
Buckingham Shum, S. and Hammond, N. (1994): Transferring HCI Modelling and Design Techniques to Practitioners: A Framework and Empirical Work. In Proc. HCI94: People and Computers IX (Glasgow, Scotland, 23-26 August 1994). Cambridge University Press.
Buckingham Shum, S., Hammond, N., Jørgensen, A.H. and Aboulafia, A. (1993): EnEx3: The ISLE Modelling Transfer Exercise. Amodeus II report TA/WP15.
Buckingham Shum, S., Jørgensen, A.H., Hammond, N. and Aboulafia, A. (eds) (1994): AMODEUS HCI Modelling and Design Approaches: Executive Summaries and Worked Examples. Amodeus II report TA/WP16.
Card, D.N., McGarry, F.E. and Page, G.T. (1987): Evaluating Software Engineering Technologies. IEEE Trans. Software Engineering, vol. SE-13, pp. 845-851.
de Haan, G., van der Veer, G. and van Vliet, J.C. (1991): Formal Modelling Techniques in Human-Computer Interaction. Acta Psychologica, vol. 78, pp. 27-67.
Gronlund, N.E. (1985): Measurement and Evaluation in Teaching. MacMillan, 5th ed.
Gugerty, L (1993): The use of analytical models in human-computer-interface design. Int. J. Man-Machine Studies, vol. 38, pp. 625-660.
Helander, M. (1988): Handbook of Human-Computer Interaction. North-Holland.
Herman, J.L., Morris, L.L. & Fitz-Gibbon, C.T. (1987): Evaluator's Handbook. Sage Publications.
Jørgensen, A.H. (1983): Design Practice and Interface Usability: Evidence from Interviews with Designers. Report 83/10, Dept. of Computer Science, Copenhagen University.
Jørgensen, A.H. (1987): Two Approaches to Usability: The Subjective and the Objective. In Knave, B. (ed): Proc. Work With Display Units, pp. 940-943.
Jørgensen, A.H. (1990): Thinking-aloud: A Method Promoting Cognitive Ergonomics. Ergonomics, vol. 33, pp. 501-507.
Jørgensen, A.H. (1994): Strategic Issues in Selecting Modelling Approaches for Transfer and Assay. Forthcoming Amodeus II report.
Lauesen, S. Pries-Heje, J. and Schroeder, B. (1993): Embedded Software: Industry versus Research. In Bansler, Boedker and Kensing (eds): Proc. 16th IRIS, Copenhagen, pp. 451-461.
Salvendy, G. (ed) (1987): Handbook of Human Factors. Wiley.
Shum, S. (1993) Analysis of the expert system modeller as a vehicle for ICS encapsulation. Amodeus II report TA/WP5.
Sommerville, I. (1992): Software Engineering. Addison-Wesley.
Appendix A: Setting instructional goals
The Gronlund (1985) approach to educational objectives as a basis for setting goals for teaching has been taken up here as a vehicle for making explicit the benefits of modelling approaches. The approach has been developed over many years as an instrument in teaching in general. It consists of three taxonomies: a cognitive, an affective, and a motor-skill taxonomy. Only the first two are listed here. They are based on Bloom's work in the 1950s (Bloom, 1956), which may seem a bit outdated. However, we have not been able to find anything newer, and as Gronlund's comprehensive book "Measurement and Evaluation in Teaching" is constantly being revised and reprinted, we have good reason to believe that Bloom's basis is sound and robust.
The cognitive taxonomy
The taxonomy of educational objectives consists of six classes of cognitive skills: Knowledge, comprehension, application, analysis, synthesis, and evaluation. They are described briefly here accompanied by an example from learning the theory behind motoring and driving.
Knowledge is defined as the remembering of previously learned material. This may involve the recall of a wide range of material, from specific facts to complete theories, but all that is required is the bringing to mind of the appropriate information.
Single phenomena: Be able to define words like lane, right of way, and crossing.
General abstractions: Be able to describe theories of how alcohol is consumed according to the weight of the body.
Comprehension is defined as the ability to grasp the meaning of material. This may be shown by translating material from one form to another (words or numbers), by interpreting material (explaining or summarizing), and by estimating future trends (predicting consequences or effects).
Translation: Be able to translate a table of the dependence of the braking distance on the speed and the number of brakes.
Extrapolation: Be able to calculate the braking distance at 45 mph when given a table of braking distances at 15, 40, 50, 60, and 90 mph.
Application refers to the ability to use learned material in new and concrete situations. This may include the application of such things as rules, methods, concepts, principles, laws, and theories.
Be able to specify how a driver of a motor vehicle should use brake and steering wheel in a particular situation.
Analysis refers to the ability to break down material into its component parts so that its organisational structure may be understood. This may include identification of its parts, analysis of the relationships between parts and recognition of higher principles involved.
Analysis of elements in a whole: Be able to find technical specifications in a test report relevant to the roadability of a car.
Analysis of organizational principles: Be able to figure out the principles by which a car test is done and described.
Synthesis refers to the ability to put parts together to form a new whole. This may involve the production of a unique communication (theme or speech), a plan of operations (research proposal), or a set of abstract relations (scheme for classifying information).
Work out a unique product: Be able to work out a description of ideal roadability features of a car.
Evaluation is concerned with the ability to judge the value of material (statement, novel, poem, research report) for a given purpose. The judgements are to be based on definite criteria. These may be internal criteria (organization) or external criteria (relevance to the purpose) and the student may determine the criteria or be given them.
Evaluation from external criteria: Be able to evaluate the techniques applied by a test team in data collection and upon which their conclusions are drawn.
The Affective Taxonomy
It may seem that the affective area is less important than the cognitive area; however, interest, appreciation and attitude are definitely important in all instruction and transfer, including the transfer of HCI modelling techniques to design practice! It is, however, probably only the first three categories that are relevant.
Receiving refers to the student's willingness to attend to particular phenomena or stimuli (classroom activities, textbook, music, etc.). From a teaching standpoint, it is concerned with getting, holding, and directing the students attention. Learning outcomes in this area range from simple awareness that a thing exists to selective attention on the part of the learner.
Responding refers to the active participation on the part of the student. At this level he not only attends to a particular phenomenon but also reacts to it in some way. Learning outcomes in this area may emphasise acquiescence in responding (reads assigned material), willingness to respond (voluntarily reads beyond assignment), or satisfaction (reads for pleasure or enjoyment).
Valuing is concerned with the worth or value a student attaches to a particular object, phenomenon or behaviour. This ranges in degree from the simpler acceptance of a value (desires to improve group skill) to the more complex level of commitment (assumes responsibility for the effective functioning of the group). Instructional objectives that are commonly classified under attitudes and appreciation would fall into this category.
Organization is concerned with bringing together different values, resolving conflicts between them, and beginning the building of an internally consistent value system. Learning outcomes may be concerned with the conceptualization of a value (recognizes the responsibility of each individual for improving human relations) or with the organization of a value system (develops a vocational plan that satisfies his need for both economic security and social service).
Characterisation by a value or value complex: At this level of the affective domain, the individual has a value system that has controlled his behaviour for a sufficiently long time for him to have developed a characteristic life style. Thus the behaviour is pervasive, consistent, and predictable.