Evaluation in Health Services Settings: Reader’s Digest Version

Many years ago when the world was young, Drs. Victoria Scott and Kassy Alia-Ray and I wrote a book chapter on how to approach evaluation in community health improvement. That book chapter had a long and circuitous route to publication. Imagine my surprise when just yesterday, a box showed up with two printed copies.

The book in question. Who had editorial control over that cover image?

This chapter was a result of our experiences designing and implementing the SCALE evaluation (which won AEA’s 2017 Evaluation of the Year). So, for those who can’t access the chapter, I wanted to give a high-level, CliffsNotes version of what we talked about.

Types of Evaluation Approaches

There are some very broad categories of evaluation. Summative Evaluation looks for results and impact. Basically, did Program X do what it was supposed to do, or did something else happen? Process Evaluation documents what happens in terms of implementation. We find that many times, policymakers have to rely on process evaluation and metrics like outputs because the desired outcomes are just too far in the future. Like, if you are putting an opiate prevention program into place, it’s going to be a long time before you can determine whether or not people stop using opiates. There’s just a time lag between intervention and impact that is really tough to track without a lot of resources.

This leads us to our preferred method, Formative Evaluation. This model uses process outputs as a means of informing program improvement. There is a time and place for summative evaluation. We ultimately want to know whether or not something works; that’s the bottom line in many/most contexts. However, from our (Victoria, Kassy, and my) perspective, we feel that it can be extremely useful to cultivate improvement while an initiative is ongoing. Rather than wait for results, we try to make sure the data are available to make changes and mid-course corrections. This way, we help to make a program a success, rather than just wait and see whether it is successful.

Evaluation Principles

At least in this chapter, we articulated four principles that guide our work. This isn’t the only set of evaluation principles out there; David Fetterman has written extensively about principles of Empowerment Evaluation. For this chapter, though, we focused on four things. First is improvement. We want things to get better; we want to see programs succeed. To that end, our evaluation activities are tailored to generate data and knowledge that can be used to make programmatic improvements.

Looking at this figure, we can see all stakeholders getting closer to the answer.

Second is joint accountability. No one person or group is solely responsible for a program’s success. Within the context of formative evaluation, there is an ongoing dialogue between the funder (the people bankrolling the project), the implementation team (the people designing and delivering the project), the end-users (the recipients of the project), and the evaluation team. Ideally, there will be a sweet spot where their questions and goals are aligned. As the methodologists, the evaluation team’s responsibility is to move toward that sweet spot.

Third is a systems perspective. Basically, there are almost always different levels that interact within large-scale initiatives. We can be dealing with individuals, with teams, with organizations, communities, governmental structures, and on and on. Therefore, we have to think a bit more broadly about the context to help us understand what is helping or hurting implementation.

Finally, we value social justice. That is, it is personally important for us to work toward the betterment of individuals and their community. There’s a bunch bundled into that, like making sure that services are equitable, that all relevant voices are included in the evaluation, and that unintended consequences are monitored. At least for me (Jonathan), I am not a neutral observer in a lot of this work. While our methods need to be rigorous and valid, they are in service of making sure we are making this world a better place. And that’s something that is pretty important to carry through evaluation work.

Multiple Methods to Get to the Truth

Using a multiple and mixed methods approach that combines quantitative and qualitative data sources isn’t new. We put data types into three big buckets. Inquiry information are data that we directly seek out; think surveys and interviews. You know, the classic stuff. Observation information are data that we can glean from naturalistic settings. Sometimes this is direct observation much like anthropologists have written about (another name check to Fetterman). Other times, this involves gathering extant data about community settings.

It’s also breast cancer awareness month

For example, in the ReSOLV project, I’ve been looking at school board meeting minutes to see the types of topics that come up and whether we can ascertain any trends related to school safety or community readiness. I’ve also gotten really interested in using Twitter to monitor community-level trends, like in the figure at left. Here, I look at #prevention, and we can see that, yes, COVID still dominates the discussion.
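
To make that keyword-tracking idea a bit more concrete, here is a minimal sketch of the kind of counting I’m describing. The records, dates, and keywords below are made up for illustration; in practice the text would come from the scraped meeting minutes or a #prevention search, and the counting would likely be fancier.

```python
from collections import Counter, defaultdict
import re

# Hypothetical records: (date, text) pairs standing in for school board meeting
# minutes or tweets pulled from a #prevention search. All values are made up.
records = [
    ("2021-09-14", "Board discussed school safety drills and COVID protocols."),
    ("2021-10-12", "Public comment on COVID masking; update on prevention programming."),
    ("2021-11-09", "Safety committee report; community readiness survey results reviewed."),
]

# Keywords to track over time (my picks for illustration, not from the chapter).
keywords = ["covid", "safety", "prevention", "readiness"]

# Tally keyword mentions by month so rough trends are easy to eyeball.
monthly_counts = defaultdict(Counter)
for date, text in records:
    month = date[:7]  # "YYYY-MM"
    tokens = re.findall(r"[a-z']+", text.lower())
    for kw in keywords:
        monthly_counts[month][kw] += tokens.count(kw)

for month in sorted(monthly_counts):
    print(month, dict(monthly_counts[month]))
```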

Finally, we have reflection. This is when we engage with stakeholders (the people in the Venn Diagram up top) to help them make meaning and sense of the results. This is a critical component. Data are just words and numbers unless they stimulate thinking and action.


And that’s about it. If you’re interested in hearing more, reach out to me. Also, if you’re feeling especially bold, I have another copy of this book that I can’t do anything with, so I can send it to you. There are some pretty cool chapters in there. I’ve got my eye on the network analysis chapter right now.