Unit 3 Study Questions

Unit 3 - Reading
 In addition to reading this Study Guide, you should read the following textbook sections and articles from the Readings: 
 * 1) Owen, J. M., & Rogers, P. J. (1999). //Program evaluation: Forms and approaches// (pp. 1-62; 170-307). Thousand Oaks, CA: Sage.
 * 2) Chen, H. T. (1996). A comprehensive typology for program evaluation. //Evaluation Practice, 17//(2), 121-130.

Unit 3 - Issues to Think About

 * 1) Evaluators have a variety of educational backgrounds, but most have specialized in research methods and in evaluation theory and methods. They have rarely studied program or instructional design and development; in doing evaluations, their focus is for the most part on designing the evaluation itself. Do you think an evaluator with strong preparatory studies in program or instructional design and development - in addition to the usual research and evaluation preparatory studies, of course - would have an advantage?
 * 2) Owen's premise of five evaluation forms suggests that evaluations be designed and implemented in conjunction with the program's development stage. Considering the models that you have studied, can you think of any one model that might cover all five forms - if it were implemented in a timely fashion, of course?

Owen, 1999, pp. 1-85
1. What are the four knowledge products of evaluation? (p. 4)
2. What is the meaning of the phrase //logic of evaluation//? (p. 14)
3. According to Owen, who makes evaluation judgments?
4. List Owen's five evaluation forms. (A major part of each definition is from this [|website].)
5. Define each of the five evaluation forms. (p. 53/54)

Evaluation is the process of:
 * negotiating an evaluation plan;
 * collecting and analyzing evidence to produce findings;
 * disseminating the findings to identified audiences for use in (a) describing or understanding an evaluand or (b) making judgments and/or decisions related to that evaluand. (p. 4)
The four knowledge products of evaluation (p. 4):
 * //evidence//: the data which have been collected during the evaluation. This could be regarded as information.
 * //conclusions//: the synthesis of data and information. These are the interpretations or meanings made through analysis. Conclusions result from analytical processes involving data display, data reduction and verification.
 * //judgments//: in which values are placed on the conclusions. Criteria are applied to the conclusions, stating that the program is 'good' or 'bad' or that the results are 'positive', 'in the direction desired' or 'below expectations'.
 * //recommendations//: suggested courses of action - advice to policy-makers, program managers or providers about what to do in the light of the evidence and conclusions.
 * Owen (2007) describes the //logic of evaluation//: how people try to connect data to value judgments that the evaluand is good or bad, better or worse, passing or failing, or the like. Without this logic any review or evaluation will be found wanting. Evaluators must be familiar with the following basic issues:
 * 1) //Establishing criteria//: What is the underlying basis for the criteria used to judge the worth of what is being evaluated? On what dimensions must the evaluand do well? It is important to be explicit about the criteria being used to determine worth. (breakfast cereal should be nutritious)
 * 2) //Constructing standards//: What evidence and standards/criteria need to be used to make a judgment of worth? How well should the evaluand perform? (characteristics: fiber, fat, sugar, sodium; 'good' = derived from nutrient profiles of raw cereals)
 * 3) //Measuring performance//: What standards were applied, and how will conclusions be made and presented? How well did the evaluand perform? (e.g., highly recommended = <= 4% fat)
 * 4) //Synthesizing and integrating evidence into a judgment of worth//: Decision-making on the basis of the evaluation. What is the worth of the evaluand? (The cereals were categorized and some highly recommended, but the ultimate choice was left to the consumer.) (See the sketch after these notes.)
 * The "logic of evaluation" is important to evaluators who are concerned with determining //impact// - evaluations of this nature generally are summative (they report on what the program has achieved and should be undertaken on programs that are settled or stabilized). (p. 13)
 * The application of the logic of evaluation to real settings involves the evaluator, client or some other stakeholder holding a view about the worth of a given program based on a defensible empirical inquiry. (p. 18)
 * Worth = //extrinsic value// of an evaluand within a given context. (Cereal: the worth of the cereals might be regarded within decision-making about the most appropriate foods for people who are undertaking an extended nutritional regime.) (p. 14)
 * Merit = //intrinsic value//. (p. 14)
 * The generality of the logic is its strength: it gives us a base from which we can delve into different approaches to evaluation practice. (p. 14)
 * Refer to this lesson plan: [|Evaluating Chocolate Chip Cookies Using Evaluation Logic] (a Word document).
 * Clients and evaluators should determine, in advance of any evidence gathering, who is to make the judgment about the worth of the program. In some instances, the client of an evaluation is more than happy to let the evaluator do so; in other cases, the client prefers to take on this responsibility. The fact is that, in studies which use the logic of evaluation, some judgment of worth must be made by someone. (p. 20)
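To make the four steps concrete, here is a minimal Python sketch of the cereal example from the notes above. Apart from the "highly recommended = <= 4% fat" rule cited from p. 14, every cut-off value and the synthesis rule are invented for illustration - Owen describes a logic of judgment, not an algorithm.

```python
# A sketch of Owen's logic of evaluation applied to the breakfast-cereal
# example. All nutrient cut-offs except the <= 4% fat rule are hypothetical.

# Step 1 - Establishing criteria: the dimensions on which a cereal must do well.
CRITERIA = ["fiber", "fat", "sugar", "sodium"]

# Step 2 - Constructing standards: how well the evaluand should perform
# on each criterion (illustrative values, per 100 g).
STANDARDS = {
    "fiber": lambda v: v >= 6.0,     # more fiber is better (assumed threshold)
    "fat": lambda v: v <= 4.0,       # the <= 4% fat rule from the notes
    "sugar": lambda v: v <= 15.0,    # assumed threshold
    "sodium": lambda v: v <= 400.0,  # assumed threshold, in mg
}

def evaluate_cereal(profile: dict) -> str:
    # Step 3 - Measuring performance: apply each standard to the evidence.
    results = {c: STANDARDS[c](profile[c]) for c in CRITERIA}
    # Step 4 - Synthesizing evidence into a judgment of worth
    # (an invented synthesis rule; Owen leaves this to the evaluator).
    met = sum(results.values())
    if met == len(CRITERIA):
        return "highly recommended"
    return "recommended" if met >= len(CRITERIA) - 1 else "not recommended"

# The evaluation only categorizes; the ultimate choice stays with the consumer.
print(evaluate_cereal({"fiber": 9.0, "fat": 2.5, "sugar": 10.0, "sodium": 300.0}))
```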
 * **Proactive**: Guides early planning so that it incorporates the views of stakeholders and the accumulated knowledge from previous work in the field. Focuses on the actual need for a program. The main use of these data is to help planners determine what type of program would meet the identified social need or problem. This type of evaluation is carried out before a program is developed.
 * **Clarificative**: Clarifies both the program's processes and objectives and makes program assumptions explicit. Concerned with clarifying the underlying rationale of a program. Program developers use this information to think through, and make explicit, the logic that supports the program, including assumptions about how its components link together to produce the desired outcomes. While design evaluation would usually occur before the implementation of a program, it may also be carried out while a program is operating if it is not clear how the program was intended to be delivered. As such, it has a formative evaluation orientation.
 * **Interactive**: Think of this as evaluation designed to enable the program to make "mid-course corrections". Examines program implementation, including the extent to which a program is being delivered in the way it was intended to be delivered. The information gained is used to determine how the implementation of the program could be improved, and so it has a strong formative evaluation emphasis. Consequently, this form of evaluation is conducted as the program is being delivered within its various settings. The information is of particular use to those implementing the program.
 * **Monitoring**: Focuses on program outcomes and delivery for management decision-making and accountability purposes. These data are used primarily to account for the expenditure of program funds, including the extent to which key accountabilities have been met by program managers. This type of evaluation 'is appropriate when a program is well established and ongoing' (Owen, 1993: 24). It frequently involves keeping track of how the program is progressing. Real-time feedback to managers is an important feature of this type of evaluation. (Targets are already identified and implementation is taking place; managers need an indication of the performance of components of the program.)
 * **Impact**: Establishes the effects of a program once it has been implemented and settled for a period of time. This may involve determining the degree to which program objectives have been met, or documenting both intended and unintended outcomes. The main use of these data is to justify whether the program should continue to be implemented or be implemented in other settings and, if so, whether any modifications are required. Thus, it has a strong summative evaluation emphasis. Because of this, impact evaluation is usually conducted after some logical end point in the program has been reached, such as where a neighbourhood watch program has been fully operational for a year.
|| || Proactive || Clarificative || Interactive || Monitoring || Impact ||
|| Orientation || Synthesis || Clarification || Improvement || Justification/ fine-tuning || Justification/ accountability ||
|| Typical Issues || -Is there a need for the program? -What do we know about the problem that the program will address? -What is recognized as //best practice*// in this area? -Have there been other attempts to find solutions to this problem? -What does the relevant research or conventional wisdom tell us about this problem? -What could we find out from external sources to rejuvenate an existing policy or program? || -What are the intended outcomes & how is the program designed to achieve them? -What is the underlying rationale for this program? -What program elements need to be modified in order to maximize the intended outcomes? -Is the program plausible? -Which aspects of this program are amenable to a subsequent monitoring or impact assessment? || -What is this program trying to achieve? -How is this service going? -Is the delivery working? -Is the delivery consistent with the program plan? -How could delivery be changed to make it more effective? -How could this organization be changed so as to make it more effective? || -Is the program reaching the target population? -Is implementation meeting program benchmarks? -How is implementation going between sites? -How is implementation now, compared with a month ago? -Are our costs rising or falling? -How can we fine-tune the program to make it more efficient? -How can we fine-tune the program to make it more effective? -Is there a program site which needs attention to ensure more effective delivery? || -Has the program been implemented as planned? -Have the stated goals of the program been achieved? -Have the needs of those served by the program been achieved? -What are the unintended outcomes? -Does the implementation strategy lead to intended outcomes? -How do differences in implementation affect program outcomes? -Has the program been cost-effective? ||
|| Key Approaches || -Needs assessment -Research review -Review of best practice [benchmarking] || -Evaluability assessment -Logic/theory development -Accreditation || -Responsive -Action research -Quality review -Developmental -Empowerment || -Component analysis -Devolved performance assessment -Systems analysis || -Objectives based -Process-outcome studies -Needs based -Goal free -Performance audit ||
|| State of Program || None || Development || Development || Settled || Settled ||
|| Major Focus || Program context || All elements || Delivery || Delivery/ outcomes || Delivery/ outcomes ||
|| Timing (relative to program delivery) || Before || During || During || During || After ||
|| Assembly of evidence || Review of documents & databases, site visits & other interactive methods. Focus groups, nominal groups & Delphi technique useful for needs assessments. || Generally relies on a combination of document analysis, interview and observation. Findings include program plan & implications for organizations. Can lead to improved morale. || Relies on intensive onsite studies, including observation. Degree of data structure depends on approach. May involve providers & program participants. || Systems approach requires availability of Management Information Systems [MIS], the use of indicators & the meaningful use of performance information. || Traditionally required use of preordinate research designs, where possible the use of treatment & control groups, & the use of tests & other quantitative data. Studies of implementation generally require observational data. Determining all the outcomes requires use of more exploratory methods & the use of qualitative evidence. ||

*Remember Patton's comments on best practices: Patton, M. Q. (2001). [|Evaluation, knowledge management, best practices, and high quality lessons learned]. //American Journal of Evaluation, 22//(3), 329-336.

Page 50 Scenarios
 * Program Logic: describes the nature of social and educational programming - the causal mechanisms which are understood to link program activities with intended outcomes. (p. 43)
 * Evaluation Theory: a body of knowledge that conceptualizes, aids in understanding, and predicts action in the area of evaluation inquiry. (p. 43)
 * Program Logic Development: involves developing the program logic using a range of analytical methods, including documentation and interviews with program staff and other stakeholders, with a view to constructing a map of what the program is intended to do. (from clarificative evaluation, p. 43/44)
 * Scenario A: Clarificative (//Logic development//... clarify internal structure and function... causal mechanisms which link program activities with intended outcomes)
 * Scenario B: Interactive (//Needs-based evaluation//... judging the worth of a program... does the program meet the //needs of the participants//?) OR
 * Scenario B: Impact (it is an established program adopted from elsewhere... how do the students perform as a result of the program (outcome measures)?)
 * Scenario C: Proactive (//Research review//... use pure/applied research to impact on social and educational planning... takes place before a program is designed)
 * Scenario D: Impact (//Performance audit//... analysis of program efficiency and effectiveness... judgment on continuance or cancellation)
 * Scenario E: Monitoring (//Systems analysis//... setting up procedures by which central management institutes common evaluation procedures to be used uniformly across an organization???)
 * Scenario F: Interactive (//Empowerment evaluation//... involves helping program providers to develop and evaluate their own programs... action research)
 * Scenario G: Impact (//Devolved performance assessment//... the organization sets up evaluation procedures by which each component can report regularly on its progress)
6. What is the PEC and what are its implications for the five evaluation forms? (p. 55/56)
 * PEC = Program Evaluation Continuum
 * Highlights the need for evaluation to contribute to decision-making at every key point in program design: (a) pre-program, (b) during implementation, and (c) post-completion.
 * Pre-program stage: **Proactive** and **Clarificative** evaluations, for program identification and for program design and appraisal. They seek to identify worthwhile investments which address needs or tap developmental opportunities. This normally involves some form of situational or trend analysis, problem identification and comparison with a desired state... (p. 55)
 * Implementation stage: **Monitoring** evaluation is used to check that the program is on target in terms of its stated objectives. It seeks to provide up-to-date information on actual implementation progress as compared with targets, and to suggest corrective action. **Impact** evaluation is also involved, to assess the likelihood that the stated objectives can be attained, with a view to identifying changes. (p. 56)
 * Post-completion stage: **Terminal** evaluation focuses on immediate outcomes. It is seen as an end-of-project status statement and could be assembled by project management; it focuses on resource use and the actual outcomes to this point in time, and seeks to provide an end-of-implementation summary to interested parties, including the funding agency. **Impact** evaluation follows after sufficient time to allow the full effects of the program to be noted; it provides information about the 'final' outcomes of the program, both expected and unexpected, and is likely to be done by an outside evaluation consultant who reports to the funding agency. (p. 56)

7. Which type of question (who, what, when, where, why) do Owen's evaluation forms address?

8. What is a post hoc evaluation?
 * post hoc = Latin for //after this//.
 * Many times evaluators have a limited range of evidence on which to reach conclusions. Post hoc evaluations are those in which evaluators rely on data collected for a program that has already been completed or has already started. (p. 65)

9. What are the ten elements of an evaluation plan, as suggested by Owen? (p. 72/73)
 * 1) Specify the evaluand: What is the focus of the evaluation?
 * 2) Orientation or purpose(s) of the evaluation: Why is the evaluation being done?
 * 3) Clients/primary audiences: Who will receive and use the information?
 * 4) Evaluation resources: what human and material resources are available?
 * 5) Evaluation focus(es): Which element(s) of the program will need to be investigated -- program context, program design, program implementation, program outcomes, or a combination?
 * 6) Key evaluation issues/questions: //Assembly of evidence/data management// - What are the key questions, and how can we collect and analyze data to answer them? (For each question, outline the data management techniques to be used.) Key questions - To what extent does... ? Is there... ? In what way does... ? //Data management//: What are the most appropriate methods of data collection and data reduction? Collection - Is sampling important? Is anything known about this from other sources? How will the data be collected? Analysis and interpretation - How will the data be analyzed to address the key evaluation questions?
 * 7) Dissemination: What strategies for reporting will be used? When will reporting take place? What kinds of information will be included (findings, conclusions, judgments, recommendations)?
 * 8) Codes of behaviour: What ethical issues need to be addressed?
 * 9) Budget and timeline: Given the resources, what will be achieved at key time points during the evaluation?
 * 10) Other considerations which emerge in the course of the negotiation.
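One way to keep the ten elements in view while negotiating a plan is to treat them as a checklist. The sketch below renders them as a Python data structure; the field names paraphrase the list above, and the helper method is my own organizing device, not anything Owen prescribes.

```python
# A checklist rendering of Owen's ten evaluation-plan elements.
# Field names paraphrase the study-guide list, not Owen's wording.
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    evaluand: str = ""             # 1) focus of the evaluation
    purpose: str = ""              # 2) why the evaluation is being done
    clients: list = field(default_factory=list)        # 3) primary audiences
    resources: str = ""            # 4) human and material resources
    foci: list = field(default_factory=list)           # 5) context/design/implementation/outcomes
    key_questions: list = field(default_factory=list)  # 6) questions + data management
    dissemination: str = ""        # 7) reporting strategy, timing, content
    ethics: str = ""               # 8) codes of behaviour
    budget_and_timeline: str = ""  # 9) what is achieved at key time points
    other: list = field(default_factory=list)          # 10) issues from negotiation

    def unresolved(self) -> list:
        """Names of elements still left blank (element 10 is optional)."""
        return [k for k, v in vars(self).items() if not v and k != "other"]

plan = EvaluationPlan(evaluand="literacy program", purpose="impact evaluation")
print(plan.unresolved())  # elements still to be negotiated with the client
```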

Owen, 1999, pp. 170-307

 * 1) What is proactive evaluation concerned with?
 * 2) List the key approaches to proactive evaluation.
 * 3) What is clarificative evaluation concerned with?
 * 4) List the key elements of clarificative evaluation.
 * 5) What is interactive evaluation concerned with?
 * 6) List the key elements of interactive evaluation.
 * 7) What is monitoring evaluation concerned with?
 * 8) List the key elements of monitoring evaluation.
 * 9) What is impact evaluation concerned with?
 * 10) List the key elements of impact evaluation.

Chen, 1996, pp. 121-130
1. Why does Chen criticize the dichotomy of formative and summative evaluation?
2. Chen proposes a four-cell typology of evaluation, in lieu of the formative/summative dichotomy. Is the functions/stages typology more inclusive of all types of evaluation? (p. 3)
 * Chen believes that program evaluation has outgrown the formative-summative distinction.
 * The formative-summative dichotomy does not cover many relevant, important kinds of evaluations; as a result, there are discrepancies between the concepts as defined by Scriven and the actual usage of these concepts.
 * It is more inclusive: in Scriven's formative-summative dichotomy, the cook tasted the soup only to improve the soup, not to make a valuative conclusion (to see if it is good enough to serve to the guests).
 * When a cook tastes the soup, sometimes the soup is not good enough for the guests and/or improving it is thought to be beyond the cook's capabilities; therefore, the tasting is not just formative or summative. Scriven's formative-summative distinction is narrow. (p. 122)
 * Scriven considers it summative when the guests taste the soup - guests provide a conclusive opinion of the soup; however, the guests' opinions could be used to improve the soup in the future (= formative).
 * Scriven's formative-summative dichotomy and Stake's soup-tasting analogy lead to problems in classifying relevant evaluation activities, indicating a need for a broader conceptual framework that can provide a more complete classification of evaluation activities. (p. 123)

I made a little summary of Chen's typology. (p. 123-125)
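The summary table itself did not survive the export, but both of its dimensions are named in the notes under question 3 below: two program stages (process, outcome) crossed with two evaluation functions (improvement, assessment). A minimal sketch of the resulting four basic cells - the compound labels are my own shorthand, not Chen's headings:

```python
# Chen's basic four-cell typology as a stage x function cross-product.
from itertools import product

STAGES = ("process", "outcome")            # program stages
FUNCTIONS = ("improvement", "assessment")  # evaluation functions

for stage, function in product(STAGES, FUNCTIONS):
    print(f"{stage}-{function} evaluation")
# -> process-improvement, process-assessment,
#    outcome-improvement, outcome-assessment
```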

3. How does Chen's typology relate to Owen's five evaluation forms - or does it? I mashed up Chen's typology and Owen's evaluation forms. They complement each other nicely, as Owen lacks the dimension of function and Chen has only two program stages (process and outcome). I like that Owen's five evaluation forms let me communicate with others easily - they are more specific. But I like Chen's argument that at any stage the function of the evaluation can be assessment or improvement.

4. Chen expands his four-cell typology to include an additional six cells for mixed-purpose evaluations. What are the three circumstances that indicate the use of mixed-purpose evaluations? (Figure 2, p. 128, gives examples of mixed types of evaluation; //sequential integration// links different types of evaluations in sequential order. Examples of circumstances where mixed evaluation is used are on p. 129.)
 * Comprehensive Process Evaluation: used to strengthen the implementation process and to judge the merits of a program.
 * Comprehensive Outcome Evaluation: elaborates the causal mechanisms underlying a program, so that it examines not only whether the program has an impact, but why, and which mechanisms influence program success or failure.
 * Comprehensive Assessment Evaluation: first evaluates the merits of a program's implementation and then evaluates program effectiveness. Information from the implementation and outcome assessments provides a comprehensive merit judgment of the overall program.
 * 1) **When evaluation is not a one-shot study, but used as a mechanism for continuous feedback to program stakeholders.** As the program progresses and changes, the need for evaluation differs from time to time. Under such conditions, mixed types of evaluation allow the evaluator flexibility in meeting the changing needs.
 * 2) **When evaluation has to respond to multiple stakeholder groups with different needs.** Some stakeholders, such as clients and funding agencies, may require information on merit assessment; others, such as program implementers, may need information on program improvement. Under this condition, mixed-type evaluations provide information to meet the diversity of needs.
 * 3) **When stakeholders require comprehensive information about the program.** Chen (1994) argued that, due to the complexity of program implementation processes, the effectiveness of labor-intensive programs cannot be appropriately assessed without answering both of the following questions: "Does the program achieve its goals?" and "How does the program achieve its goals?" Mixed-type evaluations provide such comprehensive information.