Now that you have developed a logic model, you might be wondering how to integrate your data collection plans with it. If your logic model is clear, using it to build an evaluation plan will be pretty straightforward.
Your logic model might look something like the example below: a grid that connects inputs with activities, outputs and outcomes. If you’re not sure what the difference between outputs and outcomes is, you may want to refer to the Kellogg Foundation's excellent and comprehensive guidebook on developing logic models. The logic model below is based on a logic model template developed by the Milwaukee Public Schools Research and Development Department.
A Logic Model Example
I’ve used short-term, medium-term and long-term outcomes here; there are a lot of different ways to think about outcomes and impacts, and this is just one model.
Let’s use a hypothetical tutoring program as an example. Our tutoring program is designed to help students who are at risk of not graduating from high school. Our research tells us that failing to test proficient in a core subject is a significant indicator of failing to graduate, so we will identify students who are not testing proficient in at least one core subject and provide them with one-on-one tutoring. Here is our hypothetical logic model:
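To make the grid concrete, here is a minimal sketch of the tutoring-program logic model written out as plain data (in Python, purely for illustration). Every entry is a hypothetical example drawn from the tutoring scenario above, not part of a real plan:

```python
# A minimal, illustrative sketch of the tutoring-program logic model.
# Column names mirror the grid: inputs -> activities -> outputs -> outcomes.
logic_model = {
    "inputs": ["tutors", "training materials", "funding"],
    "activities": ["recruit and train tutors", "provide one-on-one tutoring"],
    "outputs": ["number of students tutored", "tutoring hours delivered"],
    "outcomes": {
        "short_term": ["improved grades in the tutored subject"],
        "medium_term": ["students test proficient in core subjects"],
        "long_term": ["students graduate from high school within 5 years"],
    },
}

# Walk the model left to right, the same way you would read the grid.
for column, entries in logic_model.items():
    print(column, "->", entries)
```

Writing the model down as data like this is optional, but it makes the left-to-right logic explicit and gives you a single place to attach measures later.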
Adding the Measures
There are two major questions that we can use data to assess. One, are we doing what we said we would do? And two, is it working? Measures that address the first question are process (or performance) measures. Measures that address the second are outcome measures, or indicators. I have included some examples here; the list is not exhaustive.
Why Two Kinds of Measures
It is important to look at both process measures and outcome measures for a few reasons. Process measures help you understand whether you are implementing the program the way it was designed. Outcome measures tell you whether the program leads to the change you want to create in the world. You’ll want to monitor process measures frequently and outcome measures less frequently (say annually or semi-annually for most programs). Monitoring both will help you understand whether the model you’re implementing is the right one and whether you are implementing it in a high-quality way.
If you aren't recording process measures and the program doesn't succeed, you won't be able to hunt down where it broke. You won’t know whether the program was inherently flawed, or whether it was a good idea that simply wasn't implemented well. Monitoring both process and outcome measures will also help you understand which elements of the program are essential. For example, if students are getting better grades but teachers say the tutor-teacher collaboration is weak, we might not want to put a lot of energy into improving that relationship, because the program works without it.
You will probably find that a lot of your program's process measures are spelled out in your grant application. If you articulated how many participants you would serve and what services they would receive, those go into your process measures.
Your outcome measures are proxies for (things you can measure that reflect) the change you want to see in your participants' lives. They need to be clear enough that you can count them easily. For example, don't say "graduate from high school". What if they return as adults and graduate? How long will you track them? Use a clearer measure like "graduate from high school within 5 years." That's a reasonable amount of time to track students, and reflects a positive outcome of our tutoring program. Some of these measures are probably in use elsewhere and may be somewhat standardized. You don't always have to make up your own.
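To show why the time-bounded phrasing makes the measure countable, here is a small Python sketch that counts a student as a graduate only if they finish within five years of enrolling. The `graduated_within` helper and the student records are hypothetical, invented for this example:

```python
from datetime import date
from typing import Optional

def graduated_within(enrolled: date, graduated: Optional[date], years: int = 5) -> bool:
    """True only if the student graduated within `years` of enrolling."""
    if graduated is None:
        return False  # no graduation on record yet
    return graduated <= enrolled.replace(year=enrolled.year + years)

# Hypothetical records: (enrollment date, graduation date or None).
students = [
    (date(2015, 9, 1), date(2019, 6, 10)),  # graduated in four years
    (date(2015, 9, 1), date(2022, 6, 10)),  # graduated, but outside the window
    (date(2015, 9, 1), None),               # never graduated (so far)
]

rate = sum(graduated_within(e, g) for e, g in students) / len(students)
print(f"Graduated within 5 years: {rate:.0%}")
```

Notice that the vague measure "graduate from high school" cannot distinguish the second and third students; the five-year window settles both cases cleanly.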
As you move to the right side of the logic model, the outcomes get harder to measure, for a few reasons. First, there are more external factors to consider: graduating from high school, for example, can be influenced by many things our tutoring program does not address. Second, the outcomes take a long time to happen – probably three or four years before our students start to graduate from high school. Finally, the data will be harder to get, because you’ll have to follow up with participants long after they have left the program, or use some other data source. It’s OK that these things are hard to measure. You can probably find evidence from other programs that if you’re implementing your program well and getting the right short-term results, the long-term results will follow. This is where a good research base is helpful.
Got questions about developing performance measures, clarifying a logic model, or developing a research base? Get in touch!