
Organizations have developed standards for judging the quality of the studies and their results. See, for instance, the standards enunciated by the US Census Bureau, as well as those for the UK, China, Sweden, Canada, and others.
The second question, on deployment of the intervention, falls under the rubrics of implementation studies, process research, and program monitoring, depending on the academic discipline and agency responsible for generating an evidential answer. Sociological theory might inform one's choices about what to measure and how (e.g., measuring social capital in the context of education interventions such as private versus public schools). Typically, the evidence to answer the question stems from performance indicators that permit one to judge the progress of relevant agencies or the adequacy of the intervention services. Less often, the evidence may be generated through periodic surveys of clients' or customers' satisfaction with the intervention service, for instance. Anthropological studies may also be used to generate hypotheses and ideas about the character of service and delivery from the points of view of service recipients or other stakeholders in the process.
In health care, for example, studies that address the second question often aim to learn whether government or other professional guidelines for health care of the elderly (say) are operationalized in hospital or other care settings. Finding that fewer than half the guidelines are implemented well is important.
Understanding whether, how, and how well particular interventions can be deployed in different settings is no easy matter. The need to understand has led to the production of systematic reviews of evidence on the topic, such as Fixsen et al. (2005), and to new peer-reviewed journals, such as Implementation Science (www.implementationscience.com), that cover new empirical work on how (and how not) to deploy interventions. For a new intervention, deeper questions hinge on whether it has been implemented with fidelity in a trial and on how the need for fidelity and the need for flexibility in adaptation can be balanced in larger-scale trials and in eventual deployment of the intervention beyond the trials.
The third question, on relative effects of interventions, invites attention to randomized controlled trials, which produce the least equivocal evidence possible about whether one intervention is better than another or better than the ambient service or system (Boruch 1997). In these trials, individuals, organizations, or geopolitical jurisdictions are randomly assigned to each different intervention, including a control (ambient conditions). Well-run randomized trials generate a statistically unbiased estimate of the interventions' relative effects and a legitimate statistical statement of one's confidence in the results. Put another way, a trial's product is a fair comparison that takes into account chance variation in individual and institutional behavior.
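The sense in which such a comparison is statistically unbiased can be made concrete with a standard potential-outcomes sketch (the notation below is a textbook device, not part of this entry): each unit i has an outcome Y_i(1) if assigned to the intervention and Y_i(0) if assigned to control, and random assignment makes the indicator Z_i independent of both.

% Unbiasedness of the simple difference-in-means estimator under
% random assignment (potential-outcomes notation; illustrative only).
\begin{align*}
  \hat{\tau} &= \bar{Y}_{Z=1} - \bar{Y}_{Z=0} \\
  \mathbb{E}[\hat{\tau}] &= \mathbb{E}[Y_i(1) \mid Z_i = 1] - \mathbb{E}[Y_i(0) \mid Z_i = 0] \\
             &= \mathbb{E}[Y_i(1)] - \mathbb{E}[Y_i(0)] = \tau
\end{align*}

The middle equality holds precisely because assignment is random, so the treated and control groups differ only by chance; that chance variation is quantifiable, which is what licenses a legitimate statistical statement of confidence in the results.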
In the US, for instance, randomized trials have been conducted to test the effects of programs that move poor people from high- to low-poverty areas. These Moving to Opportunity trials (Gibson-Davis & Duncan 2005) include anthropological work on processes and people. Mexico's Progresa randomized trial was preceded by statistical work on the severity of the school dropout problem in rural areas and informed by anthropological research on its nature and on the intervention process. Villages were randomly assigned to an income support program or to control conditions to learn whether the program was effective in reducing a chronically high rate of school dropout (Parker & Teruel 2005).
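To illustrate the mechanics of assigning whole places rather than individuals, here is a minimal Python sketch; the village names, the simple 50/50 split, and the seed are illustrative assumptions, not the actual Progresa assignment procedure.

import random

# Minimal sketch of place (cluster) randomization: entire villages, not
# individuals, are assigned to an income support arm or to control.
# Village names and the 50/50 design are illustrative only.
villages = [f"village_{i:03d}" for i in range(1, 21)]

rng = random.Random(2005)  # fixed seed keeps the assignment reproducible and auditable
shuffled = list(villages)
rng.shuffle(shuffled)

half = len(shuffled) // 2
assignment = {v: "income_support" for v in shuffled[:half]}
assignment.update({v: "control" for v in shuffled[half:]})

for village, arm in sorted(assignment.items()):
    print(f"{village}: {arm}")

Outcomes such as dropout rates would then be compared between the two sets of villages, with the analysis respecting the fact that the village, not the student, is the unit of assignment.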
Large-scale studies in which entire organizations or entities are randomly allocated to different interventions are often called cluster randomized trials, group randomized trials, or place randomized trials, depending on the disciplinary context. Prevention researchers further distinguish between efficacy trials and effectiveness trials (Flay et al. 2005). Efficacy trials are well controlled and depend on experts and their collaborators to deploy an intervention in contexts that are well understood, with measures of outcome whose reliability is controlled, and so on. Effectiveness trials are mounted later, in real-world environments, in the sense that the interventions may not be delivered as they ought to be, the measures of outcome are not as reliable, and so on. The interest in generating better evidence on effectiveness through such trials has led to the creation of specialized peer-reviewed journals in which trial results and issues can be reported. These