Evaluation research blog

The primary goal of evaluation and policy research is to assess some specific aspect of a policy or program: most often the effectiveness of the program, though research questions can also address efficiency or how to improve a program. These projects are most often carried out to evaluate public programs, but social scientists may evaluate the efforts of private entities as well. Traditionally, public policy analysis uses a model of inputs (the resources a program requires), process (how the program uses those inputs to create the desired change), outputs (the products or services the process produces), and outcomes (the long-term impacts and consequences of the program). Several types of designs exist for these projects: an evaluability assessment may be conducted first to determine whether an evaluation is even possible; a needs assessment may be conducted to determine what needs a population has that a particular policy or program could serve; a process evaluation may be conducted to determine how a policy or program actually delivers its outputs; a formative evaluation may be undertaken to determine how best to improve a policy or program's process; an impact evaluation may be undertaken to determine whether the program is actually effective; and, finally, an efficiency analysis may be undertaken to compare costs with benefits.

The value of such evaluation research can be seen in many examples, but I would like to highlight Schutt's history of the D.A.R.E. program, which aims to reduce substance use among high school students. D.A.R.E. was implemented in many high schools as a program in which law enforcement officers interacted with students to try to reduce rates of substance use. Evaluation research determined that D.A.R.E. had no lasting impact on students' substance use rates, and as high schools began dropping the program, administrators tried to redesign it to be more effective. Given how widespread D.A.R.E.'s deployment was, it is striking that such an enormous sum of money was spent supporting an ineffective program. While the program has been redesigned, preliminary research suggests that the redesigned version may be similarly ineffective.

Ethical concerns in evaluation research arise from two primary sources: the tendency of public programs to serve marginalized people, and the complications of working cooperatively with the government. Because this research often focuses on groups such as addicts, children, and the incarcerated, ethical concerns about consent and the distribution of benefits naturally arise. Similarly, cooperating with the government can require ceding control over confidential information, and the danger of confidentiality being breached in legal proceedings is particularly high. Boruch offers several guidelines for performing this research ethically, including: minimizing sample size, minimizing the size of the control group, testing only the relevant portions of the program or policy, comparing the effects of different amounts of the treatment rather than its presence versus absence, and varying the treatment across different settings rather than across different individuals in one setting. Additionally, federal laws have been passed to prohibit breaches of confidentiality in court, although this remains a hazardous area that researchers must consider carefully before undertaking a project.

An example of such a study can be found in Wyker, Jordan, and Quigley's "Evaluation of Supplemental Nutrition Assistance Program Education: Application of Behavioral Theory and Survey Validation" (http://dx.doi.org.proxy.library.vcu.edu/10.1016/j.jneb.2011.11.004). Their study is best described as a formative evaluation, examining the conditions necessary for SNAP-Ed programs to change the eating habits of their beneficiaries. In designing the study, the authors rely heavily on previous needs assessments, a type of design discussed both above and in Schutt's textbook. Some of the concerns Schutt raises are also visible in the article: the sample was not as representative as the authors would have liked, and they suggest random sampling or oversampling to ensure that more males and Latinos are included. This could run into the ethical concerns Schutt raises about the distribution of benefits; however, if the sample includes only people who are already participating in the program, this will likely not be a problem.
