The Journal of Extension - www.joe.org

June 2011 // Volume 49 // Number 3 // Tools of the Trade // v49-3tt6

Obtaining Valid and Useful Evaluations from Immigrant Conference Participants

Abstract
Obtaining valid and useful evaluations from immigrant farmers at conferences can be challenging: the evaluations need to be culturally sensitive, inexpensive, and easily translatable, and they must satisfy the goals of funders and conference organizers. Evaluations that required conference attendees to respond in terms of degree of agreement proved too complicated to use. Modified DOTS surveys worked well when enough time was allowed for translation, but still presented some problems.

Keywords: farming, Hmong, evaluation

Cindy Tong
Postharvest Specialist
University of Minnesota
Saint Paul, Minnesota
c-tong@umn.edu

Joci Tilsen
Assistant Director
Minnesota Food Association
Marine on Saint Croix, Minnesota
jtilsen@mnfoodassociation.org

Tom Bartholomay
Evaluation Specialist
University of Minnesota
Saint Paul, Minnesota
barth020@umn.edu

Introduction

Hmong people started immigrating to the United States in the late 1970s, with many settling in Minnesota (Minneapolis Foundation, 2004). Since 2005, annual conferences aimed at improving the farming techniques of immigrant and minority farmers in Minnesota have been organized and supported by several federal, state, and nonprofit organizations. In 2010, the conference was attended by over 160 farmers and included simultaneous interpretation in four languages: Hmong, Bhutanese, Karen, and Somali. Obtaining valid and useful evaluations of conference sessions from attendees has been an ongoing challenge, and several different evaluation methods have been tried, with varying levels of success.

Evaluation Goals & Challenges

Evaluation methods must be relatively inexpensive and culturally sensitive, and a successful evaluation has to satisfy stakeholders with different goals:

  • Funders wanted to know how many immigrants were served (demographics) and what impacts the conference had on attendees.

  • Conference organizers wanted feedback on the conference as a whole, which workshops were valuable, what other knowledge attendees wanted, whether there were language barriers, and ways to improve future conferences.

Obtaining valid evaluations of conference sessions can be challenging regardless of the audience (Archer, 2008; Kiernan, 1999). When attendees have no written language, are unfamiliar with scalar evaluations, want to please organizers, or, for cultural reasons, do not want to stand out individually, it is harder still to obtain responses that can be meaningfully analyzed. Surveying participants after the conference is difficult when attendees move frequently and have no fixed addresses, or when five attendees share the same name.

Methods Tried

In 2008, each attendee was given an evaluation sheet (Figure 1) for each conference session, asking the degree to which they had learned new things, whether they would change practices based on what they had learned, and whether the translations were clear (the translation question is not shown). Interpreters provided oral instructions on how to fill out the evaluation sheets.

Figure 1.
Example of Part of Evaluation Sheet Used in 2008

1) Please mark the session you attended.
2) For each session you attended, fill in the bubble that best represents your opinion.

                                    I learned new things       I will be changing my practices
                                    from this session          based on what I learned
Sessions attended (X)               SD    D    A    SA         SD    D    A    SA
Friday Sessions
  Looking for money?                ( )  ( )  ( )  ( )         ( )  ( )  ( )  ( )
  What is your soil worth?          ( )  ( )  ( )  ( )         ( )  ( )  ( )  ( )
  Food Safety                       ( )  ( )  ( )  ( )         ( )  ( )  ( )  ( )

SD = Strongly Disagree, D = Disagree, A = Agree, SA = Strongly Agree

In general, there seemed to be confusion on the first day of the conference about filling out the forms. Some attendees filled out evaluations for concurrent sessions that they could not have attended simultaneously, and some evaluations appeared to be carbon copies of each other. However, these surveys were amenable to statistical analysis, and the proportion of valid evaluations received increased from Friday to Saturday, suggesting that attendees learned how to fill out the forms between the two days of the conference.
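
As a minimal sketch of the tabulation these forms allow (assuming, hypothetically, that responses are keyed into a CSV file with respondent, session, and learned columns; the file name, column names, and session groupings below are not from the conference records), the following Python snippet tallies responses per session and flags sheets that evaluate concurrent sessions, the most common validity problem we saw:

    # Minimal sketch: tallying 2008-style evaluation sheets.
    # File name, column names, and session groupings are hypothetical.
    import csv
    from collections import Counter, defaultdict

    # Sessions that ran at the same time; one attendee cannot validly
    # evaluate two sessions from the same set.
    CONCURRENT = [{"Looking for money?", "What is your soil worth?", "Food Safety"}]

    learned = defaultdict(Counter)      # session -> tally of responses
    marked = defaultdict(set)           # respondent -> sessions evaluated

    with open("evaluations_2008.csv", newline="") as f:
        for row in csv.DictReader(f):
            # "learned" holds "Strongly Disagree" ... "Strongly Agree";
            # a "will change practices" column would be tallied the same way.
            learned[row["session"]][row["learned"]] += 1
            marked[row["respondent"]].add(row["session"])

    # Flag respondents who evaluated sessions they could not all have attended.
    invalid = {r for r, s in marked.items()
               if any(len(s & block) > 1 for block in CONCURRENT)}

    for session, tally in learned.items():
        print(session, dict(tally))
    print(len(invalid), "sheets evaluated concurrent sessions")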

The next year, the evaluation process was simplified by using modified DOTS surveys (Lev, Smith, & William, 1995). Posters asking for demographic data were hung near the registration area, and after attendees received their registration materials, they were directed to place dots on the posters in the categories that applied to them. Interpreters were on hand to provide assistance. This step also helped accustom attendees to the DOTS survey method.

For each conference session, survey sheets and colored dots were placed on each table of attendees. Before each session, conference organizers seeded every survey with dots of a color different from those given to attendees, so that no attendee would hesitate to be the first to answer. At the end of each conference session, interpreters asked attendees to fill out the table surveys. For example, for a workshop on Good Agricultural Practices, attendees were asked to "place dots under subjects that you learned in this workshop":

  • How to keep food safety records

  • How to protect my customers from illness and death

  • How to make a food safety plan

  • Hmong, Latino, and African farmers can be food safety certified

  • I will start one little part of a food safety plan (yes or no).

This seemed to work well when time was made available at the end of each session for evaluation. The DOTS method was used again the following year. However, attendance was greater than anticipated, and we ran out of dots. We also found that many attendees had placed dots under topics that had not been covered in the workshop they were evaluating.
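
A DOTS sheet reduces to a count of dots per option. As a minimal sketch of how the tallies might be summarized afterward (all workshop names, options, and counts below are hypothetical, not the conference's data), the following Python snippet discounts the organizers' seed dots and flags options that were not actually covered in the session, the problem noted above:

    # Minimal sketch: summarizing DOTS tallies after the conference.
    SEED_DOTS = 1  # organizers pre-placed one dot per option to reduce hesitancy

    dots = {  # hypothetical transcription of one table sheet
        "Good Agricultural Practices": {
            "How to keep food safety records": 12,
            "How to protect my customers from illness and death": 9,
            "How to make a food safety plan": 15,
            "Marketing at farmers markets": 6,  # hypothetical: not on this agenda
        },
    }

    covered = {  # topics actually presented, taken from the workshop agenda
        "Good Agricultural Practices": {
            "How to keep food safety records",
            "How to protect my customers from illness and death",
            "How to make a food safety plan",
        },
    }

    for workshop, options in dots.items():
        print(workshop)
        for option, count in options.items():
            n = max(count - SEED_DOTS, 0)   # discount the seed dots
            note = "" if option in covered[workshop] else "  <- not covered in session"
            print(f"  {option}: {n}{note}")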

Ideas for Future Evaluations

Other evaluation methods we are considering are:

  • Convenience sampling, in which interpreters subsample attendees and interview them about the workshops they attended (one way the interview load might be planned is sketched after this list). All interpreters would need to be trained prior to the conference, and more interpreters might need to be hired. Although this method saves time, money, and effort, it can lack credibility (Taylor-Powell, 1998).

  • Group feedback (Heller, 1969), in which a facilitator would elicit oral workshop evaluations as part of a group session on sharing current farming practices.

  • Expert review, in which agency personnel who work with attendees would be asked about changes in attendee behaviors or attitudes observed soon after the conference. Limitations to this method are that changes could be attributed to other activities and that experts would need to communicate with attendees within the evaluation timeframe.
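
As a rough sketch of the planning the convenience-sampling idea would require (the language groups, attendee counts, and overall quota below are all hypothetical), the following Python snippet splits an interview quota across language groups in proportion to attendance, so each interpreter knows roughly how many attendees to approach:

    # Minimal sketch: allocating interview quotas by language group.
    import math

    attendees = {"Hmong": 80, "Karen": 35, "Somali": 30, "Bhutanese": 15}
    TOTAL_INTERVIEWS = 24   # hypothetical overall quota

    total = sum(attendees.values())
    # Round up and guarantee at least one interview per group, so small
    # language groups are not skipped; the sum may slightly exceed the quota.
    quotas = {lang: max(1, math.ceil(TOTAL_INTERVIEWS * n / total))
              for lang, n in attendees.items()}

    for lang, q in quotas.items():
        print(f"{lang}: interview about {q} attendees")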

Acknowledgements

We thank Nigatu Tadesse (USDA-FSA) and Ly Vang (Association for the Advancement of Hmong Women in Minnesota) for starting and continuing to help organize this conference and other members of the organizing committee for input on evaluating conference workshops.

References

Archer, T. A. (2008). Response rates to expect from Web-based surveys and what to do about it. Journal of Extension [On-line], 46(3), Article 3RIB3. Available at: http://www.joe.org/joe/2008june/rb3.php

Heller, F. (1969). Group feedback analysis: A method of action research. Psychological Bulletin, 72, 108-117.

Kiernan, N. E. (1999). How to evaluate a conference informally with "listening posts." Journal of Extension [On-line], 37(6), Article 6IAW1. Available at: http://www.joe.org/joe/1999december/iw1.php

Lev, L. S., Smith, F., & William, R. (1995). DOTS: A visual assessment technique for groups. Journal of Extension [On-line], 33(5), Article 5TOT1. Available at: http://www.joe.org/joe/1995october/tt1.php

Minneapolis Foundation. (2004). Immigration in Minnesota: Discovering common ground. Retrieved from: http://www.minneapolisfoundation.org/uploads/.../ImmigrationBrochure.pdf

Taylor-Powell, E. (1998). Program development and evaluation: Sampling. Bulletin G-3658-3, University of Wisconsin-Extension, Madison, Wisconsin.