Decide What to Evaluate
When deciding what to evaluate, there are three things to think about: 1) the purpose of your evaluation, 2) the stage of your program's development, and 3) the CDC's evaluation standards.
1) Purpose of evaluation
Think back to your reasons for conducting the evaluation. Common ones are:
- To answer questions you have about a program's implementation, efficiency, or outcomes
- To hold a program accountable to its intended goals
- To look for ways a program can be improved
For more information, see Purpose of Evaluation.
2) Stage of your program's development
Identifying where your program falls among the stages of program development will help you figure out what to evaluate ("focus your evaluation") and determine what type(s) of evaluation to conduct at this point:
If a program is in the design stage, an evaluative perspective will help you define the problem the program will address, identify the form the program will take, and determine what program activities will likely be most effective. The design stage is also an opportunity to incorporate evaluation into your program design (e.g., establish data collection processes up front).
During early implementation—when you are testing and refining the program—evaluation can help you identify strengths and weaknesses and inform your decisions about any modifications needed. Evaluation during this time could also measure short-term outcomes. These are referred to as process evaluations.
For established programs, evaluation will likely focus on whether intermediate- and long-term goals are being met and the intended outcomes are being achieved. These are referred to as outcome evaluations.
Once a program has been around for many years or long after a program has ended, evaluation can help you better understand its lasting impact. This is called a legacy evaluation.
3) Evaluation standards—utility and feasibility
Keeping in mind the CDC's evaluation standards of utility and feasibility will help you design a practical evaluation.
Utility means an evaluation is useful to its intended users. You want to hear from your stakeholders about why they are interested in the evaluation and what information they want it to provide. Engaging stakeholders in the development of your evaluation ensures that it is as relevant as possible to the greatest number of people.
Feasibility in this case means designing an evaluation that you can realistically carry out given the resources available. Once you have determined what would be most useful to know, the next step is to determine the scope and scale of the evaluation. Consider the complexity of the program as well as which elements could actually be measured.
For example, the programs of many health-related nonprofits are meant to change individuals' behavior. While a nutrition class may be an evidence-based obesity prevention strategy, a small nonprofit with limited resources probably does not have the capacity to visit participants' homes to collect data on changes in household food consumption. Instead, the evaluation would likely rely on self-reported food consumption data collected through surveys or food diaries.
With these standards in mind, review your logic model. What will be most useful and feasible to evaluate? Identify the most important inputs, outputs, and outcomes. Depending on the stage of your program, it may be premature to evaluate long-term outcomes, so focus your evaluation on outcomes the program has a reasonable chance to impact within the timeframe of the evaluation.
Source: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Office of the Associate Director for Program, Program Evaluation. (2011, Aug. 3). A framework for program evaluation.
Deciding what to evaluate is an important step, as you want to be sure your evaluation is relevant and useful. If you are a grantee of Healthcare Georgia Foundation, this could be a good time to contact the ERC for assistance.