Program evaluations can involve both [[quantitative method|quantitative]] and [[qualitative method]]s of [[social research]]. People who do program evaluation come from many different backgrounds, such as [[sociology]], [[psychology]], [[economics]], [[social work]], and [[public policy]]. Some graduate schools also have specific training programs for program evaluation.
{{TOC limit}}
==Doing an evaluation==
A needs assessment examines the population that the program intends to target, to see whether the need as conceptualized in the program actually exists in the population; whether it is, in fact, a problem; and if so, how it might best be dealt with. This includes identifying and diagnosing the actual problem the program is trying to address, who or what is affected by the problem, how widespread the problem is, and what measurable effects the problem causes. For example, for a housing program aimed at mitigating homelessness, a program evaluator may want to find out how many people are homeless in a given geographic area and what their demographics are. Rossi, Lipsey and Freeman (2004) caution against undertaking an intervention without properly assessing the need for one, because this might result in a great deal of wasted funds if the need did not exist or was misconceived.
Needs assessment involves the processes or methods used by evaluators to describe and diagnose social needs.<ref name="Rossi"/>
This is essential for evaluators because they need to identify whether programs are effective, and they cannot do this unless they have identified what the problem/need is. Programs that do not carry out a needs assessment can have the illusion that they have eradicated the problem/need when in fact there was no need in the first place. Needs assessment involves research and regular consultation with community stakeholders and with the people who will benefit from the project before the program can be developed and implemented. Hence it should be a bottom-up approach. In this way potential problems can be identified early, because the process involves the community in identifying the need and thereby provides an opportunity to identify potential barriers.
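As a purely illustrative sketch (not part of any cited methodology), the following Python snippet shows how an evaluator might summarize a hypothetical needs-assessment survey to estimate how widespread a problem is and who is affected. The file name, column names and variables are assumptions for the example only.

<syntaxhighlight lang="python">
# Illustrative sketch of a needs-assessment summary; the CSV file,
# column names and groupings are hypothetical assumptions.
import pandas as pd

# Each row is one respondent from a hypothetical community survey.
survey = pd.read_csv("needs_assessment_survey.csv")

# How widespread is the need? Here: share of respondents reporting
# unstable housing in the past 12 months (1 = yes, 0 = no).
prevalence = survey["unstable_housing"].mean()
print(f"Estimated prevalence of need: {prevalence:.1%}")

# Who is affected? Break the need down by basic demographics.
by_group = (
    survey.groupby(["age_band", "gender"])["unstable_housing"]
    .agg(["mean", "count"])
    .rename(columns={"mean": "prevalence", "count": "respondents"})
)
print(by_group)
</syntaxhighlight>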
===Assessing program theory===
The program theory, also called a [[logic model]] or impact pathway,<ref>Centers for Disease Control and Prevention. Framework for Program Evaluation in Public Health. MMWR 1999;48(No. RR-11).</ref> is an assumption, implicit in the way the program is designed, about how the program's actions are supposed to achieve the outcomes it intends. This 'logic model' is often not stated explicitly by people who run programs; it is simply assumed, and so an evaluator will need to draw out from the program staff how exactly the program is supposed to achieve its aims and assess whether this logic is plausible. For example, in an HIV prevention program, it may be assumed that educating people about HIV/AIDS transmission, risk and safe sex practices will result in safer sex being practiced. However, research in South Africa increasingly shows that in spite of increased education and knowledge, people still often do not practice safe sex.<ref>Van der Riet, M. (2009). 'The production of context: using activity theory to understand behaviour change in response to HIV and AIDS.' Unpublished doctoral dissertation. University of KwaZulu-Natal, Pietermaritzburg.</ref> Therefore, the logic of a program which relies on education as a means to get people to use condoms may be faulty. This is why it is important to read research that has been done in the area.
Explicating this logic can also reveal unintended or unforeseen consequences of a program, both positive and negative. The program theory drives the hypotheses to test for impact evaluation. Developing a logic model can also build common understanding amongst program staff and stakeholders about what the program is actually supposed to do and how it is supposed to do it, which is often lacking (see [[Participatory impact pathways analysis]]). Of course, it is also possible that during the process of trying to elicit the logic model behind a program the evaluators may discover that such a model is either incompletely developed, internally contradictory, or (in worst cases) essentially nonexistent. This decidedly limits the effectiveness of the evaluation, although it does not necessarily reduce or eliminate the program.<ref name="Eveland1986">Eveland, JD. (1986). Small Business Innovation Research Programs: Solutions Seeking.</ref>
Creating a logic model is a useful way to help visualize important aspects of programs, especially when preparing for an evaluation. An evaluator should create a logic model with input from many different stakeholders. Logic models have five major components: resources or inputs, activities, outputs, short-term outcomes, and long-term outcomes.<ref name="McLaughlin, J. A. 1999">McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: a tool for telling your program's performance story. Evaluation and Program Planning, 22(1), 65–72.</ref> Creating a logic model helps articulate the problem, the resources and capacity that are currently being used to address the problem, and the measurable outcomes from the program. Looking at the different components of a program in relation to the overall short-term and long-term goals allows for illumination of potential misalignments. Creating an actual logic model is particularly important because it helps clarify for all stakeholders the definition of the problem, the overarching goals, and the capacity and outputs of the program.<ref name="McLaughlin, J. A. 1999"/>
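As a minimal sketch only, the five components of a logic model can be recorded in a simple data structure so that all stakeholders review the same explicit statement of inputs, activities, outputs and outcomes. The example program and its entries below are hypothetical and not a prescribed format.

<syntaxhighlight lang="python">
# Minimal sketch of a logic model as a data structure; the program and
# all entries are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    program: str
    inputs: list = field(default_factory=list)        # resources
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    short_term_outcomes: list = field(default_factory=list)
    long_term_outcomes: list = field(default_factory=list)

hiv_education = LogicModel(
    program="HIV prevention education (hypothetical)",
    inputs=["funding", "trained educators", "clinic partnerships"],
    activities=["community workshops", "condom distribution"],
    outputs=["number of workshops held", "number of condoms distributed"],
    short_term_outcomes=["increased knowledge of transmission risks"],
    long_term_outcomes=["reduction in new HIV infections"],
)
print(hiv_education)
</syntaxhighlight>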
This entails assessing the program theory by relating it to the needs of the target population the program is intended to serve. If the program theory fails to address the needs of the target population it will be rendered ineffective even if it is well implemented.<ref name=Rossi/>
* Assessment of logic and plausibility<ref name=Rossi/>
This form of assessment involves asking a panel of expert reviewers to critically review the logic and plausibility of the assumptions and expectations inherent in the program's design.<ref name=Rossi/> The review process is unstructured and open-ended so as to address certain issues on the program design. Rutman (1980) and Smith (1989), among others, have suggested questions such as:
:Are the program goals and objectives well defined?
:Are the program goals and objectives feasible?
===Assessing implementation===
Process analysis looks beyond the theory of what the program is supposed to do and instead evaluates how the program is being implemented. This evaluation determines whether the components identified as critical to the success of the program are being implemented. The evaluation determines whether target populations are being reached, whether people are receiving the intended services, and whether staff are adequately qualified. Process evaluation is an ongoing process in which repeated measures may be used to evaluate whether the program is being implemented effectively. This problem is particularly critical because many innovations, particularly in areas like education and public policy, consist of fairly complex chains of action. Many of these elements rely on the prior correct implementation of other elements and will fail if that earlier implementation was not done correctly. This was conclusively demonstrated by [[Gene V. Glass]] and many others during the 1980s. Since incorrect or ineffective implementation will produce the same kind of neutral or negative results that would be produced by correct implementation of a poor innovation, it is essential that evaluation research assess the implementation process itself.<ref name="Eveland1986"/>
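The sketch below is a hypothetical illustration of such repeated measures, not a standard instrument: it tracks a few assumed implementation indicators (coverage of the target population, staff qualification, services delivered as planned) across quarters and flags periods where any indicator falls below an assumed 80% threshold.

<syntaxhighlight lang="python">
# Hypothetical sketch of repeated process-evaluation measures; the data,
# indicator names and the 80% threshold are assumptions for illustration.
import pandas as pd

fidelity = pd.DataFrame(
    {
        "quarter": ["Q1", "Q2", "Q3", "Q4"],
        "target_population_reached_pct": [62, 71, 78, 84],
        "staff_fully_qualified_pct": [90, 88, 93, 95],
        "services_delivered_as_planned_pct": [70, 75, 83, 86],
    }
)

THRESHOLD = 80  # assumed minimum acceptable level for each indicator

metrics = fidelity.columns.drop("quarter")
flags = fidelity[metrics].lt(THRESHOLD)          # True where below threshold
fidelity["implementation_concern"] = flags.any(axis=1)
print(fidelity)
</syntaxhighlight>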
===Assessing the impact (effectiveness)===
* '''Later phase:''' Activities achieve stability and are no longer in formation. Experience informs knowledge about which activities may be effective.
''Recommended evaluation approach:'' Summative evaluation
==Planning a program evaluation==
===The shoestring approach===
The "shoestring approach" is designed to assist evaluators operating under budget, time and data constraints to conduct evaluations that are as methodologically rigorous as possible.
===Budget constraints===
The purpose of this section is to draw attention to some of the methodological challenges and dilemmas evaluators are potentially faced with when conducting a program evaluation in a developing country. In many developing countries the major sponsors of evaluation are donor agencies from the developed world, and these agencies require regular evaluation reports in order to maintain accountability and control of resources, as well as generate evidence for the program’s success or failure.<ref>{{cite journal |last=Bamberger |first=M. |year=2000 |title=The Evaluation of International Development Programs: A View from the Front |journal=American Journal of Evaluation |volume=21 |issue=1 |pages=95–102 |doi=10.1177/109821400002100108 }}</ref> However, there are many hurdles and challenges which evaluators face when attempting to implement an evaluation program using techniques and systems which were not developed within the context to which they are applied.<ref name="Smith, 1990">{{cite journal |last=Smith |first=T. |year=1990 |title=Policy evaluation in third world countries: some issues and problems |journal=Asian Journal of Public Administration |volume=12 |issue= |pages=55–68 |doi= }}</ref> Some of the issues include differences in culture, attitudes, language and political process.<ref name="Smith, 1990" /><ref name="Ebbutt, 1998">{{cite journal |last=Ebbutt |first=D. |year=1998 |title=Evaluation of projects in the developing world: some cultural and methodological issues |journal=International Journal of Educational Development |volume=18 |issue=5 |pages=415–424 |doi=10.1016/S0738-0593(98)00038-8 }}</ref>
Culture is defined by Ebbutt (1998, p. 416) as a society's shared expectations, values, norms and behaviours, and differences in culture between evaluators and the communities in which they work can shape how evaluation concepts and questions are understood.
Language also plays an important part in the evaluation process, as language is tied closely to culture.<ref name="Bulmer & Warwick, 1993" /> Language can be a major barrier to communicating concepts which the evaluator is trying to access, and translation is often required.<ref name="Ebbutt, 1998" /> There are a multitude of problems with translation, including the loss of meaning as well as the exaggeration or enhancement of meaning by translators.<ref name="Ebbutt, 1998" /> For example, terms which are contextually specific may not translate into another language with the same weight or meaning. In particular, data collection instruments need to take meaning into account, as subject matter that may not be considered sensitive in a particular context might prove to be sensitive in the context in which the evaluation is taking place.<ref name="Bulmer & Warwick, 1993" /> Thus, evaluators need to take into account two important concepts when administering data collection tools: lexical equivalence and conceptual equivalence.<ref name="Bulmer & Warwick, 1993" /> Lexical equivalence asks the question: how does one phrase a question in two languages using the same words? This is a difficult task to accomplish, and the use of techniques such as back-translation may aid the evaluator but may not result in perfect transference of meaning.<ref name="Bulmer & Warwick, 1993" /> This leads to the next point, conceptual equivalence. It is not a common occurrence for concepts to transfer unambiguously from one culture to another.<ref name="Bulmer & Warwick, 1993" /> Data collection instruments which have not undergone adequate testing and piloting may therefore render results which are not useful, as the concepts measured by the instrument may have taken on a different meaning and thus rendered the instrument unreliable and invalid.<ref name="Bulmer & Warwick, 1993" />
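A crude, purely illustrative sketch of a lexical-equivalence screen is shown below: it compares each original questionnaire item with its back-translation using simple word overlap and flags items that drift furthest for human review. The items, the overlap measure and the cut-off are assumptions for the example; in practice such checks are made by bilingual reviewers rather than by software.

<syntaxhighlight lang="python">
# Crude sketch of flagging back-translated survey items for review;
# the items and the 0.6 overlap cut-off are hypothetical assumptions.
def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lower-cased word sets (a rough proxy only)."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)

items = [
    ("How many people live in your household?",
     "How many persons stay in your home?"),
    ("Have you felt unsafe in your neighbourhood this month?",
     "Did you feel danger near your house recently?"),
]

for original, back_translated in items:
    score = word_overlap(original, back_translated)
    flag = "REVIEW" if score < 0.6 else "ok"
    print(f"{score:.2f} {flag}: {original!r}")
</syntaxhighlight>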
===Positivist===
Potter (2006)<ref>Potter, C. (2006). Program Evaluation. In M. Terre Blanche, K. Durrheim & D. Painter (Eds.), ''Research in practice: Applied methods for the social sciences'' (2nd ed.) (pp. 410-428). Cape Town: UCT Press.</ref> identifies and describes three broad paradigms within program evaluation. The first, and probably most common, is the [[positivist]] approach, in which evaluation can only occur where there are "objective", observable and measurable aspects of a program, requiring predominantly quantitative evidence.
A detailed example of the positivist approach is a study conducted by the Public Policy Institute of California, reported in "Evaluating Academic Programs in California's Community Colleges", in which the evaluators examine measurable activities (such as enrollment data) and conduct quantitative assessments like factor analysis.<ref>{{cite web|last=Gill|first=Andrew|title=Evaluating Academic Programs in California's Community Colleges|url=http://www.ppic.org/main/publication.asp?i=322|publisher=PPIC}}</ref>
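As a hedged illustration of the kind of quantitative assessment mentioned above, the snippet below fits a factor analysis to synthetic program indicators using scikit-learn. It does not reproduce the PPIC study; the data, the indicator labels and the choice of two factors are assumptions made for the example.

<syntaxhighlight lang="python">
# Sketch of a factor analysis on synthetic indicators; this does not
# reproduce the PPIC study, and the data and factor count are assumed.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic matrix: rows are programs, columns are observed indicators
# (e.g., enrollments, completions, transfers, awards).
n_programs, n_indicators = 200, 6
latent = rng.normal(size=(n_programs, 2))          # two latent traits
loadings = rng.normal(size=(2, n_indicators))
noise = rng.normal(scale=0.5, size=(n_programs, n_indicators))
observed = latent @ loadings + noise

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(observed)   # factor scores per program

print("Loadings (factors x indicators):")
print(np.round(fa.components_, 2))
print("First program's factor scores:", np.round(scores[0], 2))
</syntaxhighlight>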
==Transformative Paradigm==
The transformative paradigm is integral in incorporating social justice in evaluation. Donna Mertens, primary researcher in this field, states that the transformative paradigm focuses primarily on the viewpoints of marginalized groups and on interrogating systemic power structures, with the aim of furthering social justice and human rights.
Both the [[American Evaluation Association]] and [[National Association of Social Workers]] call attention to the ethical duty to possess [[cultural competence]] when conducting evaluations. Cultural competence in evaluation can be broadly defined as a systemic, responsive inquiry that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place; that frames and articulates the epistemology of the evaluation endeavor; that employs culturally and contextually appropriate methodology; and that uses stakeholder-generated, interpretive means to arrive at the results and further use of the findings.<ref name=SenGupta>{{cite book|last=SenGupta|first=S., Hopson, R., & Thompson-Robinson, M.|title=Cultural competence in evaluation: an overview. New Directions in Evaluation, 102.|year=2004|pages=5–19}}</ref> Many health and evaluation leaders are careful to point out that cultural competence cannot be determined by a simple checklist, but rather it is an attribute that develops over time. The root of cultural competency in evaluation is a genuine respect for communities being studied and openness to seek depth in understanding different cultural contexts, practices and paradigms of thinking. This includes being creative and flexible in capturing different cultural contexts, and a heightened awareness of power differentials that exist in an evaluation context. Important skills include the ability to build rapport across difference, gain the trust of community members, and self-reflect and recognize one’s own biases.<ref name=Endo>{{cite book|last=Endo|first=T., Joh, T., & Yu, H.|title=Voices from the field: Health evaluation leaders in multicultural evaluation|year=2003|publisher=Policy Research Associates|___location=Oakland, CA|page=5}}</ref>
====Epistemology (Knowledge)====
Knowledge is constructed within the context of power and privilege with consequences attached to which version of knowledge is given privilege.<ref name="Mertens (2012)" />
====Methodology (Systematic Inquiry)====
====Feminist Theory====
The essence of [[feminist theories]] is to expose the individual and institutional practices that have denied access to women and other oppressed groups and have ignored or devalued their experiences.
====Queer/LGBTQ Theory====