
Summary and Analysis of Extension Program Evaluation in R

Salvatore S. Mangiafico

Evaluation Tools and Surveys

A complete discussion of the proper design and administration of program evaluation tools and surveys is far beyond the scope of this book.  This chapter is intended only to give the reader some context for the remainder of the book, which discusses the analysis of data from these survey tools and other sources.

 

Extension programs

 

An extension program is an organized educational endeavor delivered to some segment of the public, such as farmers, students in 4-H, or Master Gardeners.  It is based on an assessment of the needs of that group for the specific topic of the program, and on objectives for what knowledge students should improve, skills they should learn, or behaviors they should adopt.

 

An extension program will have multiple activities associated with these goals.  The extension educator may be organizing educational events, delivering lectures, writing factsheets, or reporting on research. 

 

Each of these components is evaluated in light of the objectives of the program and the objectives of the individual activities.  In each case, objectives should be measurable.

 

In some cases, the evaluator can collect “hard data” about the results of some program:  number of attendees, number of downloads, measured differences in water quality.

 

In other cases, evaluation tools such as surveys are developed to evaluate how well program objectives were met.  These might ask participants about an increase in their knowledge, a change in their behavior, or adoption of specific practices, along with other relevant information.

 

Designing a questionnaire

 

There is much to be learned about good questionnaire design, and good practices for conducting a survey.  This chapter will only scratch the surface of this topic.

 

Some principles of questionnaire design for program evaluation

 

Length and audience-appropriateness

 

Your questionnaire should be short enough and simple enough so that your respondents will be likely to complete it, and complete it thoughtfully.

 

On the other hand, it should be long enough to gain valuable information.  For example, you might not want to ask simply, “Did you learn something?”, but instead ask how much was learned in each topic area.  Identifying areas where teaching was or was not successful helps to improve the program in the future.

 

It is also important to take your audience into account.  Young children may be able to answer only a few questions, simple in scope and without too many options.  On the other end of the spectrum, expert audiences can assess programs and program components much more critically.  Non-expert adult audiences fall in the middle of this spectrum:  They are capable of assessing their own changes in knowledge, skills, attitudes, behaviors, and implemented practices, but questionnaires for them should still be short and clear, with the possibility of some open-ended questions.

 

Types of impacts

A questionnaire may assess a single type of impact or several types, depending on what is appropriate.  It may seek to assess knowledge gain, changes in attitude, gained skills, or changes in behavior. 

 

These can be assessed for an extension activity as short as a single lecture or a day-long workshop, or for a program lasting several years.

 

Assessment techniques of a questionnaire

 

Knowledge gain: before-and-after tests

For knowledge gain questions, respondents could be tested in a before-and-after manner, where some specific question is asked on a pre-test before the course, and then on a post-test after the course. 

 

It is helpful if the same respondent can be identified on both tests, so that the before responses and after responses can be paired for each respondent.

 

Personally, I find before-and-after assessment rather patronizing and annoying for adult audiences.  Instead, I prefer a self-assessment knowledge gain format, discussed below.

 

As an example, Bakacs et al. (2013), in their Table 1, reported an increase in knowledge by respondents for individual questions about using rain barrels, conserving water, and reducing stormwater runoff.  For example, they asked workshop attendees the following question, both before and after the workshop.  They asked respondents for their initials and favorite number on each test so that responses could be paired for each respondent. 


10. A 55 gallon rain barrel, when full will weigh approximately
    A.  200 lbs.
    B.  300 lbs.
    C.  400 lbs.
    D.  Not sure
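When the same identifier (such as initials and a favorite number) is collected on both tests, the two sets of responses can be paired in R.  The following is a minimal sketch; the scores, identifiers, and column names are invented for illustration.

```r
### Hypothetical pre-test and post-test scores, with a
### respondent identifier collected on each test

pre  = data.frame(ID    = c("SM7", "AB3", "CD12"),
                  Score = c(40, 60, 50))

post = data.frame(ID    = c("AB3", "CD12", "SM7"),
                  Score = c(80, 70, 90))

### merge() matches rows by the identifier, so the order of
### respondents on the two tests does not matter

Paired = merge(pre, post, by = "ID",
               suffixes = c(".pre", ".post"))

### Change in score for each respondent

Paired$Change = Paired$Score.post - Paired$Score.pre

Paired
```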


Knowledge gain: self-assessment questions

Another approach for assessing knowledge gain after a program or event is to allow respondents to assess their own knowledge gain.  This allows self-reflection, and may therefore be more meaningful than looking at correct and incorrect answers on before-and-after tests.

 

One form of self-assessment question asks the respondent to assess their knowledge on a topic both before and after the course, but is answered only once, after the course.

 

As an example, Mangiafico et al. (2011), in their Table 1, reported self-assessed changes in knowledge by respondents for individual questions about environmentally-friendly lawn care.  The survey was done following a lecture, so that respondents reflected on their knowledge before and after the lecture from the perspective of having just listened to it.


Before                                                        After

1  2  3  4  5      I understand how to determine how much     1  2  3  4  5
                   phosphorus should be applied to my lawn.
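Because each respondent answers both the before and after scales on the same form, the responses are naturally paired.  A minimal sketch in R, with invented ratings, might look like the following; the analysis of Likert data like this is discussed later in this book.

```r
### Hypothetical 5-point Likert responses to the before and
### after scales of a single self-assessment question

Before = c(2, 1, 3, 2, 2, 3)
After  = c(4, 3, 5, 4, 3, 3)

### Self-assessed gain for each respondent

Gain = After - Before

summary(Gain)

### A paired test of the shift in ratings

wilcox.test(After, Before, paired = TRUE)
```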


Behavior change and adopted practices

One way to assess changes in behavior is to use follow-up surveys (discussed below) at some time after a program or workshop to see if participants adopted practices or behaviors.

 

A second method is to question participants at the conclusion of an event about their anticipated changes in behavior based on what they learned there.  The obvious drawback to this approach is that respondents are likely to anticipate that they will improve their behaviors, and this anticipated change may never materialize.  However, this method is often used because it is easier and allows collecting data from a captive audience.

 

As an example, Mangiafico et al. (2011), in their Table 2, reported respondents’ anticipated behaviors concerning environmentally-friendly lawn care.


17. I will test my soil or have my soil tested to determine the need to adjust pH and apply phosphorus fertilizer.

   disagree        neutral              agree

      1   2   3   4   5   6   7   8   9   10   Don’t know   Not applicable
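If the opt-out choices on an item like this are recorded as text alongside the numeric ratings, they can be converted to NA in R before analysis.  A minimal sketch, with invented responses:

```r
### Hypothetical responses to a 10-point item on which the
### opt-out choices were recorded as text

Q17 = c("8", "10", "Don't know", "6", "Not applicable", "9")

### as.numeric() converts the opt-out responses to NA
### (suppressWarnings hides the coercion warning)

Q17.num = suppressWarnings(as.numeric(Q17))

Q17.num

### Summary statistics can then exclude the opt-out responses

mean(Q17.num, na.rm = TRUE)
```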


Follow-up surveys

Follow-up surveys are conducted at some time after a course or event.  They are useful for assessing whether specific practices were implemented or certain behaviors were changed, and for giving respondents some time to reflect on changes in attitudes or knowledge.

 

As an example, Bakacs et al. (2013), in their Table 3, used a follow-up survey to determine whether workshop participants installed rain barrels at their homes or businesses after building one at the workshop.

 

The follow-up survey in New Jersey was e-mailed to participants three to six months after the workshops and was completed online.

 

One drawback to this approach is that it is necessary to collect contact information from participants and to take the time to re-survey them.  Another is that response rates may be lower than those from in-classroom assessments.

 

Rating of course quality and materials

It is often useful to have program participants rate the quality of instruction, presentation, visual aids, handouts, etc.

 

Open-ended questions

Open-ended questions can be very valuable to solicit critical comments about a course or program, or assess knowledge gain or other impacts that aren’t covered by other assessment questions.

 

Open-ended questions can attempt to solicit critical comments (“What could be done to improve this program?”), assess knowledge gain (“What did you learn from this program?”), or gain other information (“Where did you learn about this course?”).

 

These questions can be very valuable, but you should also be cautious, since responses may be sparse and not representative of program participants.  In particular, critical responses may be unrepresentatively positive or negative.  In general, participants may or may not put a lot of effort into answering open-ended questions, depending on a variety of factors not necessarily related to program content or quality.

 

Demographic data

Finally, collecting demographic data of questionnaire respondents can be useful for interpreting and understanding responses for other questions.  Demographic data may include profession or stakeholder group; current practices; or age, grade, sex, etc.

 

Forms of responses

Answers to questions may be in the form of yes/no, multiple choice, ordered response (Likert), or open-ended.

 

The statistical analysis of these different forms of answers will vary.  This will be unpacked over the course of this book.
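As a small preview of why the form of an answer matters, ordered (Likert) responses can be stored in R as an ordered factor, so that analyses treat them as ordinal rather than as plain text.  The response labels here are invented for illustration.

```r
### Hypothetical ordered responses to a single question

Responses = c("agree", "neutral", "agree", "disagree", "agree")

Likert = factor(Responses,
                levels  = c("disagree", "neutral", "agree"),
                ordered = TRUE)

### Counts for each level, in order

summary(Likert)

### Ordered factors permit order-aware comparisons

Likert > "neutral"
```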

 

Examples of evaluation tools

 

Rutgers Cooperative Extension Program Evaluation, Youth Audience, GRK-3

     njaes.rutgers.edu/evaluation/youth/docs/RCEYOUTHEVALUATION_GRK-3_FORM.pdf
     Notes:

•  As a questionnaire written for younger children, most questions are answered with 3- to 5-point Likert items supported by face emoticons, or with yes/no responses.

•  Questions 1 and 2 are self-assessment knowledge gain.  3-point Likert items.

•  Question 3 is anticipated behavior change.  Yes/no.

•  Questions 5 and 6 are rating of the instructor and program.  5-point Likert items.

•  Question 7 is a question about demographics.

•  Question 8 is an open-ended question.

 

Rutgers Cooperative Extension Turf Management for a Healthier Lawn, Program Evaluation Form

     rcompanion.org/documents/TurfProgramEvaluation.pdf

     Notes:

•  Questions 1–11 are self-assessment questions of knowledge gain and attitudes, using before-and-after in a single sitting.  5-point Likert items.

•  Questions 12–13 are ratings of information and educational materials.  10-point Likert items, with two opt-out options (Don’t know and Not applicable).

•  Questions 17–20 are anticipated behaviors.  10-point Likert items, with two opt-out options (Don’t know and Not applicable).

 

Rutgers Cooperative Extension Program Evaluation, Older Youth Audience, GR4-13A

     njaes.rutgers.edu/evaluation/youth/docs/RCEYOUTHEVALUATION_GR4-13A_FORM.pdf

     Notes:

•  Questions 1–2 are open-ended questions assessing knowledge gain and behavior change.

•  Questions 4–5 are ratings of presenter and program.  5-point Likert items.

•  Question 6 is a question about demographics.

•  Question 7 is an open-ended question.

 

Rutgers Cooperative Extension Program Evaluation - Older Youth Audience, GR4-13B

     njaes.rutgers.edu/evaluation/youth/docs/RCEYOUTHEVALUATION_GR4-13B_FORM.pdf

     Notes:

•  Questions 1–3 are self-assessment questions of knowledge gain, using before-and-after in a single sitting.  4-point Likert items.

•  Question 4 is an open-ended question about behaviors.

•  Questions 6–7 are ratings of presenter and program.  5-point Likert items.

•  Question 8 is a question about demographics.

•  Question 9 is an open-ended question.

 

Optional resource

Poling, R.L.  1999. Example Extension Program Evaluation Tools. Agricultural Extension Service, Institute of Agriculture, University of Tennessee. web.utk.edu/~aee/evaltools.pdf.

 

Optional Readings

 

[Video]  Statistics Learning Center (Dr. Nic). 2011. “Designing a Questionnaire”.

www.youtube.com/watch?v=FkX-t0Pgzzs.

 

Optional additional resources

 

Barkman, S. J. 2001. A Field Guide to Designing Quantitative Instruments to Measure Program Impact. Purdue University. www.northskynonprofitnetwork.org/sites/default/files/documents/Field%20Guide%20to%20Developing%20Quantiative%20Instruments.pdf.

 

Chavez, C. No date. Survey Design. Loyola Marymount University. www.lmu.edu/Assets/Academic+Affairs+Division/Assessment+and+Data+Analysis/Christine$!27s+Folder/Surveys+Website/Survey+Design+Resource.pdf.

 

Rutgers Cooperative Extension.  2015. Program Evaluation.  njaes.rutgers.edu/evaluation/.

 

Walonick, D.S. 2010. Designing and Using Questionnaires. StatPac.  statpac.com/surveys/surveys.pdf.

 

References for this chapter

 

Bakacs, M., M. Haberland, S.S. Mangiafico, A. Winquist, C.C. Obropta, A. Boyajian, and S. Mellor.  2013. Rain Barrels: A Catalyst for Change?  Journal of Extension 51(3), article 3RIB6. www.joe.org/joe/2013june/rb6.php.

 

Mangiafico, S.S., C.C. Obropta, and E. Rossi-Griffin.  2011. A Lawn Care Education Program to Address Water Conservation and Water Pollution Prevention in New Jersey. Journal of the National Association of County Agricultural Agents 4(2). www.nacaa.com/journal/index.php?jid=108.

 

Acknowledgements

I would like to thank Dan Kluchinski of Rutgers Cooperative Extension and the Rutgers Department of Agricultural and Resource Management Agents for supplying some resources for this chapter.