
include the Journal of Experimental Criminology
and www.trialsjournal.com.
When randomized trials are not ethical or feasible, evidence on which interventions work may be generated through quasi-experiments or through statistical model-based approaches. These are common in sociology and other social sciences. Quasi-experiments and observational studies, however, produce more equivocal (i.e., potentially biased) estimates of effect than randomized trials.
Generally, these approaches try to approximate a randomized trial by using statistical methods to equate groups or to construct comparison groups that are similar apart from the intervention. The methods and the independent variables used vary depending on the domain. The statistical techniques employed in these model-based approaches include propensity scores, selection models, structural models, instrumental variables, and other methods that try to approximate the results of randomized trials.
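To make the logic of one such technique concrete, the sketch below illustrates propensity-score weighting on simulated data: a model of treatment assignment is fit from observed covariates, and its predicted probabilities are used to reweight treated and control groups so that they become comparable on those covariates. This is a minimal illustration, not a method taken from the studies cited here; it assumes Python with numpy, pandas, and scikit-learn, and all variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data: treatment assignment depends on the
# covariates, so a naive comparison of group means is biased.
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.8 * x1 - 0.5 * x2)))
treated = rng.binomial(1, p_treat)
y = 2.0 * treated + 1.5 * x1 + rng.normal(size=n)  # true effect = 2.0
df = pd.DataFrame({"x1": x1, "x2": x2, "treated": treated, "y": y})

# Step 1: model the probability of treatment given covariates
# (the propensity score).
ps = (LogisticRegression()
      .fit(df[["x1", "x2"]], df["treated"])
      .predict_proba(df[["x1", "x2"]])[:, 1])

# Step 2: inverse-probability weights equate the two groups on the
# observed covariates.
w = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# Step 3: the weighted difference in mean outcomes estimates the
# treatment effect; compare it with the biased naive difference.
t = (df["treated"] == 1).to_numpy()
effect = (np.average(df.loc[t, "y"], weights=w[t])
          - np.average(df.loc[~t, "y"], weights=w[~t]))
naive = df.loc[t, "y"].mean() - df.loc[~t, "y"].mean()
print(f"naive difference: {naive:.2f}  IPW estimate: {effect:.2f}")
```

Unlike randomization, this correction can balance only the covariates that were measured; any unmeasured determinant of treatment leaves exactly the kind of bias noted above.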
Comparing the results of randomized controlled trials against quasi-experiments or observational studies of the same intervention is important for several reasons. Randomized trials ensure unbiased estimates of effect, but they are often more difficult to carry out than a quasi-experiment. The quasi-experiment may be easier to carry out, but it does not provide the same level of assurance of unbiased estimates and instead relies on more assumptions. If the results of the two approaches are similar, or lead to the same policy decisions, one might then opt for quasi-experiments. Empirical comparisons of their results suggest, however, that results often do differ and that neither the magnitude nor the direction of the differences is predictable. The discrepancies have been explored through reviews of intervention studies in health (Deeks et al. 2003), employment and training (Glazerman et al. 2003), education and economic development (Rawlings 2005), and other areas. Identifying the specific domains in which nonrandomized intervention studies are dependable is crucial for science and for building better evidence-based policy.
Some professional societies and government organizations have developed standards for reporting on studies of the effects of interventions. The standards have been constructed to ensure that the relevant evidence is presented completely and uniformly. In health care, for instance, the international CONSORT statement has been a model for reports on randomized controlled trials (Moher et al. 2001; Campbell et al. 2004). Analogous efforts have been made to ensure uniform reporting on quasi-experimental (nonrandomized) trials, notably the TREND statement (Transparent Reporting of Evaluations with Nonrandomized Designs).
Standards for judging the trustworthiness of evidence from studies of the effects of interventions have also been developed. The international Society for Prevention Research, for instance, issued guidelines that distinguish between evidence from efficacy trials and from effectiveness trials, and that also address evidence on whether effective programs can be disseminated (Flay et al. 2005).
In education, substance abuse, and mental health, government agencies have developed systems for screening evidence from studies of the effects of interventions. In the US, the Institute of Education Sciences put high priority on randomized trials, put only certain quasi-experimental designs in second place, and eliminated many other study designs as a basis for dependable evidence. The Substance Abuse and Mental Health Services Administration (SAMHSA) sponsors the National Registry of Evidence-based Programs and Practices (NREPP) to help people identify model programs selected on the basis of the quality of the evidence, including randomized trials (www.modelprograms.samhsa.gov). In crime and delinquency, Blueprints for Violence Prevention screens evidence and identifies model programs.
Addressing the fourth question, involving the cost-effectiveness of different interventions, usually presumes evidence for answers to the first three questions. Few peer-reviewed journals that report on trustworthy studies of the effects of interventions also report on the interventions' costs, however. Accountants, financial analysts, and economists can therefore add value beyond the first three questions addressed in intervention studies. Guidelines on the conduct of cost-effectiveness analyses of interventions have been developed for various substantive areas of study (on prevention and treatment, National Institute on Drug Abuse 1999; in education, Levin & McEwan 2001; Rossi et al. 2004).
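Such analyses commonly reduce to an incremental cost-effectiveness ratio: the additional cost of one intervention over an alternative, divided by the additional effect obtained. A minimal worked illustration, with purely hypothetical figures not drawn from the sources cited above:

$$\mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0} = \frac{\$900 - \$400}{0.30 - 0.20} = \$5{,}000 \text{ per additional successful outcome,}$$

where $C_1, E_1$ are the per-participant cost and success rate of the new intervention and $C_0, E_0$ those of the comparison condition.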