Weather and Society Watch
Extreme Weather and Society: An Integrated Perspective
by Jeffrey K. Lazo*
Survey researchers put considerable effort and research into making sure that the data they get can say something about the population they are sampling. All else equal, the higher the response rate to a survey, the more likely the data will be representative of the population that was sampled. In fact, for some government-sponsored surveys, the Office of Management and Budget (OMB) requires (or maybe it is "suggests") a response rate of 70% or more (OMB 2006).
As an add-on to an NSF-funded project on mental models of flash floods and hurricanes, we implemented a survey with the general public about flash floods in Boulder, Colorado. This work was conducted in part with University of Oklahoma senior meteorology majors Kelsey Mulder and Curtis McDonald as their 2010 Capstone Project. Numerous others were also involved in development and implementation of this survey (including those who spent hours and hours stuffing envelopes) but are too many to mention now.
For the purposes of this Methods section, I am going to talk solely about one part of the implementation of the mail survey—the inclusion (or non-inclusion) of cash in the survey mailing and how that affected survey response rates.
All told, we distributed 1,400 survey packages. Of these, 400 were part of a convenience sample hand-delivered to people in downtown Boulder and on the University of Colorado campus. Another 250 survey packages were mailed, but not using the “Dillman” method for mail surveys. I’ll focus the current discussion on the 750 that were mailed using the Dillman method (Dillman, 2008).
The 750 mailed surveys were split evenly across five different incentive levels, 150 surveys each:

- no cash incentive
- $1 (a single $1 bill)
- $2 (two $1 bills)
- $2 (a single $2 bill)
- $5 (a single $5 bill)
We threw in the single $2 bill versus the two $1 bills to see if the "oddity" of this would affect response rates at that level.
We had an overall response rate of 52%, which is actually very good for a mail survey! As the figure shows, increasing the incentive (more dollars) generally increases the response rate to the survey. But the fit is not perfect: note that we had a lower response rate with the single $2 bill than we did with the two $1 bills, which is not what I expected!
To see if there is a statistically significant impact of the level of incentive on survey response, we performed a regression analysis using a probit model. The dependent variable was whether or not the survey was completed and returned, coded "1" if a survey was returned and "0" if it wasn't. Table 2 shows the results of this analysis.
Essentially this analysis is testing whether or not the amount of the incentive (from none to $5) changed the likelihood that an individual would complete and return the survey. Of interest are the results on the four different incentive levels (1 x $1, 2 x $1, 1 x $2, and 1 x $5), which are shown in the column labeled "Estimate." All of these are statistically significant (the significance levels in the "Pr > ChiSq" column are all much smaller than 0.05), indicating that including an incentive increases the likelihood of an individual returning the survey. And for the most part, the more money included, the higher the response rate, as indicated by the larger estimates for the larger incentives.
So put more cash in the survey and get a better response rate!
But … the question for a researcher is whether or not it is worth putting the extra money into including incentives (if this is even an option—in New Zealand it apparently is illegal to mail cash). Depending on how important it is to get a higher response rate and how much the budget for the survey is, the researcher may consider trading off a higher response rate against a smaller sample size. This could be a whole new topic for future Methods articles!
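The trade-off above can be made concrete with a back-of-the-envelope calculation: for a fixed budget, a larger incentive means fewer surveys mailed, so the expected number of completed surveys depends on both. All of the numbers in this sketch (budget, per-package cost, and the incentive-to-response-rate mapping) are invented for illustration.

```python
# Back-of-the-envelope sketch of the budget trade-off: higher incentives raise
# the response rate but shrink the number of surveys a fixed budget can mail.
# All figures here are hypothetical, not from the study.
budget = 3000.0              # total survey budget, dollars (assumed)
postage_and_printing = 3.0   # per-package cost excluding incentive (assumed)

# Assumed incentive -> response-rate pairs (illustrative only)
options = {0.0: 0.35, 1.0: 0.50, 2.0: 0.55, 5.0: 0.65}

for incentive, rate in options.items():
    n_mailed = int(budget // (postage_and_printing + incentive))
    expected_completes = n_mailed * rate
    print(f"${incentive:.0f} incentive: mail {n_mailed}, "
          f"expect ~{expected_completes:.0f} returns")
```

Under these made-up numbers the $1 incentive actually maximizes expected completes, illustrating why "more cash" is not automatically the right answer once the budget constraint enters.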
There are a lot of other "methods" implicit in this discussion that could also be the focus of future Methods articles, including the use of a probit model instead of linear regression, the treatment of non-respondents, and the use of mail versus internet or telephone surveys.
Our plans for now include future Methods discussions on other aspects of this survey in particular, including the use of the "Dillman" method for mail survey implementation and the use of GIS for tracking respondents' locations and comparing them with other non-survey information (such as flood zones). And we'd love it if you contributed your "Methods"!
*Jeffrey K. Lazo (firstname.lastname@example.org) is the director of the Societal Impacts Program (SIP) at the National Center for Atmospheric Research.
Resources and References
Some places to look for more information on the use of incentives in mail surveys include:
Church, A.H., 1993. Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. The Public Opinion Quarterly, 57(1):62–79.
Dillman, D.A., J.D. Smyth, and L.M. Christian. 2008. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd Ed. Wiley & Sons. 512 pp.
OMB. 2006. Office of Management and Budget: Standards and Guidelines for Statistical Surveys.
Singer, E., 2006. The Use of Incentives to Reduce Nonresponse in Household Surveys. Public Opinion Quarterly. 70(5):637–645. Available at http://poq.oxfordjournals.org/content/70/5/637.full.pdf