Saturday, February 1, 2014

ECUR 809 Assignment 1: Evaluating a Program Evaluation

As an introduction to the field of Program Evaluation, the first assignment in my current course, ECUR 809, is to review a completed evaluation.
The Calgary AfterSchool program is a project put in place by the City of Calgary and UpStart (The United Way). This city-wide version of the program was launched in September 2009, and the evaluation period ended in June 2013. The AfterSchool program delivers free afterschool programming for children and youth across the city in order to promote positive child and youth development. The program claims to be the only one of its kind in Canada.
The final program evaluation was published in October of 2013, although there were several interim evaluations along the way. I reviewed the interim evaluation from 2011, mid-way through the project.
The evaluation is based on the program's two goals, identified in the final evaluation as “(i) to increase the participation of Calgary’s children and youth in high-quality after-school programming by increasing the quantity and accessibility of high-quality recreational and developmental programming throughout the city, and (ii) to improve participants’ social and emotional development and school engagement.” Both the interim and final evaluations concluded that the program met both of these goals and was a success.
Due to the large size of the program, data collection was challenging throughout the study. Pre- and post-questionnaires were administered to participants. Once the results were collected, the evaluators, for statistical reasons that I don't fully understand, analyzed only the results from locations that were able to collect data from all participants.
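The report does not spell out the mechanics of that selection, but it sounds like a form of complete-case filtering applied at the site level. Here is a minimal sketch of what that might look like; the sites, column names, and scores below are entirely made up for illustration and do not come from the actual evaluation.

```python
import pandas as pd

# Hypothetical pre/post questionnaire responses, one row per participant.
# Sites, IDs, and scores are invented for illustration only.
responses = pd.DataFrame({
    "site": ["A", "A", "B", "B", "B", "C"],
    "participant_id": [1, 2, 3, 4, 5, 6],
    "pre_score": [2.1, 3.4, 2.8, None, 3.0, 2.5],
    "post_score": [3.0, 3.6, 3.1, 2.9, None, 3.3],
})

# Flag a site as "complete" only if every participant there answered
# both the pre- and the post-questionnaire.
site_complete = responses.groupby("site")[["pre_score", "post_score"]].apply(
    lambda g: g.notna().all().all()
)
complete_sites = site_complete[site_complete].index

# Restrict the pre/post analysis to those complete sites.
analysis_set = responses[responses["site"].isin(complete_sites)]
print(analysis_set)  # only sites A and C survive in this toy example
```

Whatever the evaluators' actual rationale, restricting the analysis to complete sites like this trades sample size for fully matched pre/post pairs, which makes the before-and-after comparison cleaner.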
Two things stand out for me in these documents. First, the results, which are available in graphical format, are separated into categories for children and youth, and then further into “overall” and those “with poor pre-test scores”. These divisions allow the reader to see how the program impacts each group differently, and highlight the results that are most important: those for the children and youth deemed to be at risk, with poor pre-test scores. I agree with this separation in results, because many youth likely just transferred into the AfterSchool program from another paid program. To know whether the programs have a real impact, the customer would need to see results for children who are not receiving these services elsewhere. This does raise two follow-up questions, which I would investigate if running this type of program: What percentage of the participants are included in the at-risk results? Are the City's funds going to the children and youth who really need them?
The second thing that jumped out at me was that the surveys were not administered to any students in grades one to three, because the surveys required reading and writing skills. The results in the report therefore do not reflect how well the program serves these children. Although I understand the complications in administering surveys to such young participants, another method, perhaps interviews, anecdotal evidence from program instructors, or parent surveys, could have been used to provide a more complete and accurate picture of the program's effectiveness. Without this information, how can those who manage the program truly know whether it is worthwhile to deliver programs for the youngest participants?
____________________________________________

Note: I wrote my original post in Google Docs. Using the Research Tool, I included proper APA footnotes, although I have no idea yet whether or how that needs to be done in a blog. In any case, they did not copy over to Blogger, nor can I find a way to add them in. Here's a link to my Google Doc that includes the footnotes.

3 comments:

  1. I like the choice you made, Karen. The report is very concise and focused on two main goals. I think it demonstrates how to craft your message to the intended audience when conducting a PE. Is there a model or combination of models we discussed that might explain the theoretical framework they applied?

  2. Yes, I think that this evaluation is based somewhat on Scriven's model. As there are both interim and final evaluations, it addresses both the formative (to see how the program is doing each year) and summative (to assess its success at the end of the program, and to determine whether it works well enough to serve as a model for other areas) forms of evaluation in this model. As it investigates the success of the program against its goals, it is NOT Scriven's Goal-Free model, but the formative/summative one.

    1. It's also worth noting that although I mentioned goals in my posting, this is not a reference to Goals-Based Evaluation. According to the explanations on http://managementhelp.org/, the evaluation is more along the lines of an Outcomes-Based Evaluation, as it reflects on the impacts that the program has on participants.
