
Interpret the Results

Arriving at Conclusions

After you have finished your initial data analysis, you will have compiled lists of patterns, themes, and unanticipated results (e.g., unexpectedly high or low numbers, unique perspectives). The next step is to interpret the data: to ask what the data are telling you about your program (i.e., the significance of the themes or patterns you have identified).

Often you will find that your initial analysis raises more questions than it answers. In the process of analysis, you may find data you want to examine in more detail, and a second level of analysis is needed. Say you are looking at the distribution of responses to a rating-scale question about satisfaction with services. You might want to know whether the people who were "not satisfied" had other things in common. By cross-tabulating their responses with their answers to other questions, you might learn, for example, that these respondents are also more likely to lack a primary care physician or to have difficulty accessing care during regular clinic hours.
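As a concrete illustration, a cross-tabulation like the one described above can be produced in a few lines of Python using the pandas library, assuming your survey responses are already in a table. The column names (satisfaction, has_primary_care_physician) and the toy responses below are hypothetical placeholders for your own data; treat this as a minimal sketch rather than a prescribed tool.

    import pandas as pd

    # Hypothetical survey responses; in practice, load your exported data instead.
    responses = pd.DataFrame({
        "satisfaction": ["satisfied", "not satisfied", "satisfied",
                         "not satisfied", "satisfied", "not satisfied"],
        "has_primary_care_physician": ["yes", "no", "yes", "no", "yes", "yes"],
    })

    # Cross-tabulate satisfaction against another question to see whether the
    # "not satisfied" group shares other characteristics.
    table = pd.crosstab(
        responses["satisfaction"],
        responses["has_primary_care_physician"],
        normalize="index",  # show row percentages rather than raw counts
    )
    print(table)

Row percentages make it easy to see, for example, what share of the "not satisfied" respondents lack a primary care physician compared with the satisfied group.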

One way of interpreting data is to make comparisons. You may:

  • Compare the results against targets set for the program (e.g., outputs in the program's logic model).

  • Describe trends in the program data over time (i.e., compare the program against itself at different points in time).

  • Make comparisons with other similar programs.

  • Compare the results against standards established by others, such as funders or government agencies.
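To make these comparisons concrete, the sketch below (again in Python with pandas, using made-up quarterly counts and a hypothetical target of 140 clients served per quarter) compares program outputs against a target and against themselves over time; it is an illustration of the idea, not part of the toolkit itself.

    import pandas as pd

    # Hypothetical quarterly outputs and a target drawn from the program's logic model.
    results = pd.DataFrame({
        "quarter": ["Q1", "Q2", "Q3", "Q4"],
        "clients_served": [110, 135, 150, 160],
    })
    target_per_quarter = 140

    # Compare each quarter against the target, and against the prior quarter
    # to describe the trend over time.
    results["percent_of_target"] = 100 * results["clients_served"] / target_per_quarter
    results["change_from_prior_quarter"] = results["clients_served"].diff()
    print(results)

Comparisons with other similar programs or with external standards follow the same pattern: place the benchmark value alongside your own results and examine the differences.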

Involve your team and other stakeholders as you interpret the data. Getting different opinions on the meaning and importance of the findings will help you reach more accurate conclusions. As you engage your stakeholders in conversations about the data:

  • Go back to your evaluation questions and think about what you set out to learn about your program and what people wanted to know.

  • Revisit the data's reliability and validity. Did different methods produce the same results? Have you minimized sources of bias in your data?

  • Consider different explanations for the results—are there multiple scenarios that could explain the data?

  • Are your results similar to what you expected? If not, how are they different?

U.S. Department of Health and Human Services. Centers for Disease Control and Prevention. Office of the Director, Office of Strategy and Innovation (2005). Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention.