Unit 2 Study Questions

=Unit 2 - Issues to Think About=


 * 1) Think about your work situation or experience. Has your institution or organization implemented any new educational or training programs in the past couple of years? If so, who has been affected? Who, according to Scriven's definition of consumer, are the consumers in your situation?
 * 2) Think about an aspect of a program, or a whole program, in which you have considerable experience and expertise. Can you judge if that aspect of the program (or the whole program) is successful or unsuccessful? How do you know? What qualities are you focusing on?
 * 3) About fifteen years ago, a new Dean of the Faculty of Education at Memorial University hired an external consultant (an Associate Dean from UBC) to review its six graduate programs. The external consultant performed an expertise-oriented evaluation. She recommended canceling all six graduate programs and reformulating four new programs - omitting two areas of specialization altogether. Faculty members were not pleased. If a participant-oriented evaluation had been performed (using, for example, Stake's Responsive Model), do you think the results might have been different? Better or worse? More acceptable or less acceptable to faculty members?
 * 4) You are asked to do an evaluation that rigorously measures the achievement outcomes of a program that is completely implemented. Another evaluator had been doing a process and product (or outcomes) evaluation throughout implementation, which required intensive site visits and participant consultation over a four-month period. Which report do you think you'd have the most faith in? Why? Or is this question too simplistic?
 * 5) You are an adult currently enrolled in a graduate course. Let's assume that this course is being rigorously evaluated by an external evaluator. Think about how you might feel about the evaluation report if nobody sent you a questionnaire or interviewed you about your course experiences. Would you have much faith in the evaluation report? Which model(s) or approach(es) might the evaluator have followed, if you were not consulted?

=Unit 2 - Readings=

In addition to reading this //Study Guide//, you should read the following textbook sections and articles from the //Readings//:
 * 1) Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). //Program evaluation: Alternative approaches and practical guidelines// (pp. 53-167). White Plains, NY: Longman.
 * 2) Patton, M. Q. (2001). Evaluation, knowledge management, best practices, and high quality lessons learned. //American Journal of Evaluation, 22//(3), 329-336.
 * 3) Morris, D. B. (2002). The inclusion of stakeholders in evaluation: Benefits and drawbacks. //Canadian Journal of Program Evaluation, 17//(2), 49-58.

=Notes from Course Notes=

 * || **Phase 1 Models** || **Phase 2 Models** ||
 * || Objectives-oriented, Management-oriented, Consumer-oriented || Expertise-oriented, Participant-oriented ||
 * || Early attempts to describe and provide guidelines for doing evaluation; they didn't break away from the scientific research/objectivist/quantitative measurement framework. || These models broke new ground and borrowed from other research paradigms in delineating guidelines for doing evaluation. They opened possibilities for use of an anthropological/humanist research/subjectivist/qualitative measurement framework. ||

=Unit 2 - Study Questions=

Fitzpatrick, Sanders & Worthen, 2004, pp. 57-167
Evaluators need to identify stakeholders for an evaluation and involve them early, actively, and continuously. (p. 54)
 * **Stakeholders**: individuals/groups who have a direct interest in and may be affected by the program being evaluated or the evaluation's results. They hold a stake in the future direction of that program and deserve to play a role in determining that direction by
 * 1) identifying concerns and issues to be addressed in evaluating the program.
 * 2) selecting the criteria that will be used in judging its value.


 * **Program**: activities that are provided on a continuing basis; typically what is evaluated.

**Epistemologies** (philosophies of knowing)
 * **Objectivism** requires evaluation to be "scientifically objective" - that is, use data-collection and analysis techniques that yield results reproducible and verifiable by others using the same techniques. The evaluation procedure is 'externalized' - existing outside of the evaluator. (social science base of empiricism; replicate) (p. 60) Logical Positivism is a 20th-century philosophical movement that holds characteristically that all meaningful statements are either analytic or conclusively verifiable, or at least confirmable by observation and experiment, and that metaphysical theories are therefore strictly meaningless. (p. 61) (Merriam-Webster)
 * **Subjectivism** is based on the experience of the evaluator rather than scientific method. Knowledge is understood as being largely tacit rather than explicit. The validity/accuracy of an evaluation depends on the evaluator's experience/background/qualifications/keenness of perceptions. The evaluation procedure is "internalized" - existing within the evaluator in ways that are not explicitly understood or reproducible by others. (experientially-based; tacit knowledge; phenomenology) (p. 60-61)


 * **Principles for assigning value** (parallel objectivism/subjectivism)
 * **Utilitarian Evaluation**: focus on group gains (avg scores); greatest good for the greatest number (p. 62)
 * **Intuitionist-Pluralist Evaluation**: value of impact of program on each individual; all who are affected by program are judges (stakeholders) (p. 62)
 * Are these principles mutually exclusive? No. The purist view that looks noble in print is impractical; it yields to practical pressures demanding that the evaluator use appropriate methods based on alternative epistemologies within the same evaluation. Choose the methods right for THAT evaluation and understand the assumptions/limitations of the different approaches.

 * Evaluation is a transdiscipline - it crosses many disciplines. "Law of the instrument" fallacy - with a hammer and nails, everything appears to need hammering. (p. 64)
 * Identify what is useful in each evaluation approach, use it wisely, and avoid being distracted by approaches designed to deal with different needs. (p. 66)
 * **Quantitative**: numerical - statistics
 * **Qualitative**: non-numerical - narratives and verbal descriptions


 * **Practical Considerations**
 * 1) Evaluators disagree whether/not intent of evaluation is to render a value judgment. Some are concerned only with the usefulness of the evaluation to decision makers and believe that they, not the evaluator, should render the value judgment. Others believe the evaluator's report to the decision maker is complete only if it contains a value judgment. (decision-makers or evaluator render judgment?) (p. 66)
 * 2) Evaluators differ in views of evaluation’s political role. Who has the authority? Responsibility? These will dictate evaluation style.
 * 3) Evaluators are influenced by their prior experience.
 * 4) Evaluators differ in who they think should conduct the evaluation and in the nature of expertise needed to do so.
 * 5) Evaluators differ in their perception of whether it is desirable to have a wide variety of evaluation approaches.



 * || //Objectives-oriented// || //Management-oriented// || //Consumer-oriented// || //Expertise-oriented// || //Participant-oriented// ||
 * 1. //Some proponents// || Tyler, Provus, Metfessel & Michael, Hammond, Popham, Taba, Bloom, Talmage || Stufflebeam, Alkin, Provus, Wholey || Scriven, Komoski || Eisner, accreditation groups || Stake, Patton, Guba and Lincoln, Rippey, MacDonald, Parlett and Hamilton, Cousins and Earl ||
 * 2. //Purpose of evaluation// || Determining the extent to which objectives are achieved || Providing useful info to aid in making decisions || Providing info about products to aid decisions about purchases or adoptions || Providing professional judgments of quality || Understanding & portraying the complexities of programmatic activity; responding to an audience's requirements for info ||
 * 3. //Distinguishing characteristics// || Specifying measurable objectives; using objective data; looking for discrepancies between objectives and performance || Serving rational decision making; evaluating at all stages of program development || Using criterion checklists to analyze products; product testing; informing consumers || Basing judgments on individual knowledge and experience; use of consensus standards; team/site visitations || Reflecting multiple realities; use of inductive reasoning and discovery; firsthand experience on site; involvement of intended users; training intended users ||
 * 4. //Past uses// || Program development; monitoring participant outcomes; needs assessment || Program development; institutional management systems; program planning; accountability || Consumer reports; product development; selection of products for dissemination || Self-study; blue-ribbon panels; accreditation; examination by committee; criticism || Examination of innovations or change about which little is known; ethnographies of operating programs ||
 * 5. //Contributions to the conceptualization of an evaluation// || Pre- and post-measurement of performance; clarification of goals; use of objective measurements that are technically sound || Identify & evaluate needs & objectives; consider alternative program designs & evaluate them; watch the implementation of a program; look for bugs & explain outcomes; see if needs have been reduced or eliminated; metaevaluation; guidelines for institutionalizing evaluation || Lists of criteria for evaluating educational products & activities; archival references for completed reviews; formative-summative roles of evaluation; bias control || Legitimation of subjective criticism; self-study with outside verification; standards || Emergent evaluation designs; use of inductive reasoning; recognition of multiple realities; importance of studying context; criteria for judging the rigor of naturalistic inquiry ||
 * 6. //Criteria for judging evaluations// || Measurability of objectives; measurement reliability and validity || Utility; feasibility; propriety; technical soundness || Freedom from bias; technical soundness; defensible criteria used to draw conclusions and make recommendations; evidence of need and effectiveness required || Use of recognized standards; qualifications of experts || Credibility; fit; auditability; confirmability ||
 * 7. //Benefits// || Ease of use; simplicity; focus on outcomes; high acceptability; forces objectives to be set || Comprehensiveness; sensitivity to information needs of those in a leadership position; systematic approach to evaluation; use of evaluation throughout the process of program development; well operationalized, with detailed guidelines for implementation; use of a wide variety of info || Emphasis on consumer info needs; influence on product developers; utility; availability of checklists || Broad coverage; efficiency (ease of implementation, timing); capitalizes on human judgment || Focus on description & judgment; concern with context; openness to evolution of the evaluation plan; pluralistic; use of inductive reasoning; use of a variety of info; emphasis on understanding ||
 * 8. //Limitations// || Oversimplification of evaluation and problems; outcomes-only orientation; reductionistic; linear; overemphasis on outcomes; tunnel vision || Emphasis on organizational efficiency and production model; assumption of orderliness and predictability in decision making; can be expensive to administer and maintain; narrow focus on the concerns of leaders || Cost and lack of sponsorship; may suppress creativity or innovation; not open to debate or cross-examination || Replicability; vulnerability to personal bias; scarcity of supporting documentation to support conclusions; open to conflict of interest; superficial look at context; overuse of intuition; reliance on qualifications of the "experts" || Nondirective; tendency to be attracted by the bizarre or atypical; potentially high labor-intensity and cost; hypothesis generating; potential for failure to reach closure ||

(Fitzpatrick, pp. 160-162)

=Application Exercise (Fitzpatrick, 2004, p. 166)=

2. Below is a list of evaluation purposes. Which approach would you choose to use in each of these examples? Why? What would be the advantages and disadvantages of this approach in each setting?
 * a) Determining whether to continue a welfare-to-work program designed to get full-time, long-term employment for welfare recipients
 * b) Describing the implementation of a distance-learning education program for college students
 * c) Making recommendations for the improvement of a conflict-resolution program for middle-school students
 * d) Determining whether reading levels of first graders, in Ms. Jones' class, at the end of the year are appropriate
 * a) Management-oriented evaluation: the approach is meant to serve decision makers. Its rationale is that evaluative information is an essential part of good decision making and that the evaluator can be most effective by serving administrators, managers, policy makers, boards, practitioners, and others who need good evaluative information. (p. 88) Also, the evaluation is summative and most likely aimed at the costs/benefits of the program.
 * b) Consumer-oriented: providing info to the consumers (the college students) about products to aid decisions about purchases or adoptions (from the table above). Or participant-oriented, as it relies heavily on descriptive information and considers the views and needs of all the stakeholder groups.
 * c) Participant-oriented: evaluators work to portray the multiple needs, values, and perspectives of program stakeholders to be able to make judgments about the value or worth of the program being evaluated. (p. 149)
 * d) Objectives-oriented: we are looking for discrepancies between the established reading-level objectives and the students' performance; any discrepancies will be measured. The established objectives are straightforward, so it should be easy to compare end-of-year assessments with the expected level.

Good quote: "Cronbach (as cited in Fitzpatrick, 2004) notes that one important role of the evaluator is to illuminate, not dictate, the decision. Helping clients to understand the complexity of issues, not to give simple answers to narrow questions, is a role of evaluation."

 * 1) Which evaluation model emulates the systems approach, or systems thinking, most closely?
 * 2) In rejecting objectives-oriented evaluation, did evaluators believe that it was fundamentally flawed? If so, how?
 * 3) Which evaluation model relates to the accreditation movement?
 * 4) Which evaluation model has a built-in sort of meta-evaluation?
 * Hmm... not done reading yet, but objectives-oriented evaluation talks about inputs and outputs; I will keep reading. (p. 76)
 * I was wrong - developers of the management-oriented evaluation approach have relied on a systems approach to evaluation in which decisions are made about inputs, processes, and outputs much like the logic models and program theory. By highlighting different levels of decisions and decision makers, this approach clarifies who will use the evaluation results, how they will use them, and what aspect(s) of the system to make decisions about. (p. 88)
 * Also, Provus' Discrepancy Evaluation Model, described as an objectives-oriented evaluation model, is systems-oriented, focusing on input, process, and output at each of five stages of evaluation: program definition, program installation, program process, program products, and cost-benefit analysis. (p. 93)
 * //Expertise-oriented//
 * Management-oriented approach (see Table 9.1, p. 160). Fitzpatrick recognizes that one of the //contributions to the conceptualization of an evaluation// is metaevaluation. "This link to metaevaluation was made because models like CIPP and UCLA are actually 4 to 5 evaluations in one - so people could think of them as internal evaluations of evaluations, with a stretch" (M. Kennedy, personal communication, May 25, 2009).

 * 5) Is goal-free evaluation really goal-free? (Scriven developed this evaluation method.) (p. 84)
 * 6) Why would an evaluator avoid knowing the program goals and objectives? (p. 84)
 * 7) We still write program and course goals and objectives. Are they of any use to evaluators?
 * 8) What does CIPP represent, in the CIPP Model? (p. 89)
 * Goal-Free Evaluation: (1) goals should not be taken as given; they should be evaluated, (2) goals are generally little more than rhetoric and seldom reveal the real objectives of the project or changes in the intent, (3) many important program outcomes are not included in the list of original program goals or objectives.
 * Scriven believes that the most important function of goal-free evaluation is to reduce bias and increase objectivity (sounds like Scriven!!).
 * Goal-free evaluation is intentionally the opposite of objectives-oriented evaluation.
 * Goal-directed evaluation and goal-free evaluation can work well together, as it is important to know how others judge the program, not only on the basis of how well it does what it is //supposed// to do but also on the basis of what it //does// in all areas, on all its outcomes, intended or not. (p. 85)
 * In objectives-oriented evaluations, an evaluator is told the goals of the program and is therefore immediately limited in perceptions - the goals act like blinders, causing one to miss important outcomes not directly related to those goals. (hmmm...)
 * Damn yes! Getting tired...
 * **C**ontext Evaluation - planning decisions: Determining what needs are to be addressed by a program and what programs already exist helps in defining objectives for the program.
 * **I**nput Evaluation - structuring decisions: Determining what resources are available, what alternative strategies for the program should be considered, and what plan seems to have the best potential for meeting needs facilitates design of program procedures.
 * **P**rocess Evaluation - implementing decisions: How well is the plan being implemented? What barriers threaten its success? What revisions are needed? Once these questions are answered, procedures can be monitored, controlled, and refined.
 * **P**roduct Evaluation - recycling decisions: What results were obtained? How well were needs reduced? What should be done with the program after it has run its course? These questions are important in judging program attainments.

 * 9) How are administrative decisions made in your organization? Do they always follow a logical approach?
 * 10) Do adversary evaluations always have to present two opposing views?
 * 11) List three fundamental characteristics of participant-oriented evaluation. (p. 133/4)
 * 12) With all these evaluation models to choose from, what influences an evaluator's decision?
> Based on philosophical, methodological, and client preferences. Often, evaluators will not adhere to one specific approach, but instead will opt for a combination of several approaches in a more eclectic approach to evaluation. (p. 165/66)
 * I don't know, and I would hope so, but I am sure they are not always.
 * A more holistic approach which admits to the complexity of humans. Instead of simplifying the issues, we should attempt to understand ourselves and human services in the context of their complexity.
 * Value pluralism (a theory assuming more than one principle or basic substance as the ground of reality) is recognized, accommodated, and protected.
 * Depend on inductive reasoning. Understanding comes from grassroots observation and discovery.
 * Multiplicity of data. Understanding comes from the assimilation of data from a number of sources. Subjective/objective, qualitative/quantitative representations of the evaluand.
 * Do not follow a standard plan. The evaluation process evolves as participants gain experience. Often the important outcome of the evaluation is a rich understanding of one specific entity with all of its idiosyncratic contextual influences, process variations, and life histories.
 * Record multiple rather than single realities. No one perspective is accepted as the truth; all perspectives are accepted as correct, and a central task of the evaluator is to capture these realities and portray them without sacrificing the program's complexity.
 * Each evaluation must be judged by its usefulness, not its label. (p. 139)
 * First make certain the proposed strategy and tactics fit the terrain and will attain the desired outcomes of the campaign. (p. 154)
 * Evaluation practitioners should use these approaches as heuristic tools, selecting from a variety of evaluation approaches one appropriate for the situation rather than distorting the interests and needs of the evaluation's audience(s) to make them fit a preferred approach. (p. 154)
 * How will one know which approach is best for a given situation? There is almost no research to guide one's choice. (p. 156)
 * Be eclectic in program evaluation, choosing and combining concepts from the evaluation approaches to fit particular situations, using pieces of various evaluation approaches as they seem appropriate. (p. 164)
 * Much of evaluation's potential lies in the scope of strategies it can employ and in the possibility of selectively combining those approaches. Narrow, rigid adherence to single approaches must give way to more mature, sophisticated evaluations that welcome diversity. (p. 165)
 * The way in which evaluators determine which approach(es) to employ in a given situation is not based on scientific inquiry or empirical testing; rather, it is based on philosophical, methodological, and client preferences. (p. 165/66)

Patton, 2001
 * 1) Why does Patton think that the term 'best practices' is a bad idea? (p. 330/331)
 * 2) What does Patton mean by pragmatic utilitarian generalization, and how is it achieved? (p. 334)
 * 3) Is this statement by Patton fair criticism, do you think? "Seldom do such statements [best practices and lessons learned] identify for whom the practice is best, under what conditions it is best, or what values or assumptions undergird its best-ness." (p. 330)
 * The widespread and indiscriminate use of the terms "lessons learned" and "best practices" has devalued them both conceptually and pragmatically because they lack any common meaning, standard, or definition.
 * The assumption behind "best practices" - that there must be a single best way to do something - is highly suspect, as the world values diversity and recognizes that many paths exist for reaching some destination; some may be more difficult and costly, but those are criteria that take us beyond just getting there and reveal the importance of asking, //"best" from whose perspective, using what criteria?// (p. 331)
 * From a systems point of view, a major problem with "best practices" is the way that they are offered without attention to context - a lot of "best practices" rhetoric presumes context-free adoption. (p. 331)
 * "Best practices" that guide practice can be helpful, but ones that are highly prescriptive and specific represents bad practice of best practices.
 * Calling something "best" is typically more a political assertion than an empirical conclusion.
 * Theory is an abstraction from direct experience, and thus Patton is asserting that high-quality principles should be generalized from existing evaluations such that they can be transferable and applied to new situations. (Not sure if this is clear - need more tea.)
 * I think it is fair, as evaluations are generating a lot of knowledge and there may be a way to harvest common principles and generic patterns of program effectiveness, within certain contexts.

Morris, 2002, pp. 49-58
Morris notes that stakeholder involvement empowers the stakeholders, increases the utilization of results, and increases the validity of the evaluation.

 * 1) How does Morris define stakeholder?
 * 2) Is the inclusion of stakeholders in evaluation activity related to a quantitative research perspective or a qualitative research perspective?
 * 3) What is the basic premise of social constructivism?
 * 4) How is stakeholder inclusion in evaluation activity related to the concept of empowerment?
 * 5) Does the inclusion of stakeholders increase or decrease conflict, overall?
 * 6) List three benefits of including stakeholders. (p. 50)
 * 7) List three drawbacks to including stakeholders. (p. 50)

Stakeholder participation can lead to feelings of lack of power by participating stakeholders, disregard of results by decision makers, and questionable validity. Thus, stakeholder participation in evaluations can influence the same outcomes both positively and negatively.
 * Morris notes that the term stakeholder has had different definitions for different researchers over the years and researchers are now moving toward explicitly defining how the term "stakeholder" is used. Morris defines stakeholder very broadly, ranging from token to active participation by recipients, distributors, or financial supporters of program services. (p. 49/50)
 * The origins of participant involvement in evaluation can be found in the social constructivist perspective employed in qualitative research. (p. 50)
 * A learning theory that emphasizes that learning is an active social process in which individuals make meaning through interactions with each other and with the environment they live in. Knowledge is thus a product of humans and is socially and culturally constructed. (p. 50/51)
 * Thus all stakeholders' perspectives related to an evaluation are valued and should be actively sought from stakeholders to gain a complete picture of program rationale, impact, and alternatives. Using the principles of social constructivism should lead to a more valid evaluation that empowers stakeholders and increases the likelihood of the utilization of results, as stakeholder investment is greater than would otherwise be the case. (p. 51/52)
 * Empowerment is the increased feeling or sense of power stemming from a given action, in this case participation. (p. 52)
 * Feeling listened to, being taken seriously, and making an important contribution are characteristic of empowerment. Studies show that participation leads to empowerment.
 * Conflict should be anticipated in participatory evaluations. If not anticipated, conflict can easily lead to feelings of disempowerment. (p. 54)
 * If stakeholder views are not considered to be of equal value, the participant evaluation process may actually increase the conflict and power differential among the groups, thereby polarizing participants and impeding any productive discussion. (p. 54)
 * Benefits: 1) empowering stakeholders by encouraging active participation, 2) increasing the utilization of the program evaluation results, and 3) increasing the validity of the results.
 * Drawbacks: the 1) additional time, 2) personnel, and 3) expenses required.

The study questions for this course were written by Mary Kennedy.