Weather and Society Watch
Methods


Extreme Weather and Society: An Integrated Perspective

by Jeffrey K. Lazo*

Survey researchers put considerable effort into making sure that the data they collect can say something about the population they are sampling. All else equal, the higher the response rate to a survey, the more likely the data are to be representative of the population sampled. In fact, for some government-sponsored surveys, the Office of Management and Budget (OMB) requires (or perhaps just suggests) a response rate of 70% or more (OMB 2006).

As an add-on to an NSF-funded project on mental models of flash floods and hurricanes, we implemented a survey of the general public about flash floods in Boulder, Colorado. This work was conducted in part with University of Oklahoma senior meteorology majors Kelsey Mulder and Curtis McDonald as their 2010 Capstone Project. Numerous others were also involved in developing and implementing this survey (including those who spent hours and hours stuffing envelopes), too many to mention here.

For the purposes of this Methods section, I am going to talk solely about one part of the implementation of the mail survey: the inclusion (or non-inclusion) of cash in the survey mailing and how that affected survey response rates.

All told, we distributed 1,400 survey packages. Of these, 400 were part of a convenience sample hand-delivered to people in downtown Boulder and on the University of Colorado campus. Another 250 were mailed, but not using the "Dillman" method for mail surveys. I'll focus the current discussion on the 750 that were mailed using the Dillman method (Dillman et al. 2008).

The 750 mailed surveys each contained one of five incentive levels, 150 surveys per level:

  • No incentive
  • A single $1 bill
  • Two $1 bills
  • A single $2 bill
  • A single $5 bill

We threw in the single $2 bill versus the two $1 bills to see if the "oddity" of a $2 bill would affect response rates at that level.

Survey packages that were returned by the U.S. Postal Service indicating bad addresses (107 of them) were removed from the response rate analysis, giving us a different sample size for each incentive level, as shown in Table 1. The number of surveys returned completed is shown, as well as the response rate for each incentive level (the number completed divided by the sample size). The figure plots these response rates by incentive level.

Table 1: Response Rate Information by Incentive Level (Dillman method mailings only)

Line  Incentives  Distributed  Bad Addresses  Sample Size  Completed  Response Rate
A     $0              150           30            120          41        34.17%
B     1 x $1          150           16            134          72        53.73%
C     2 x $1          150           25            125          71        56.80%
D     1 x $2          150           15            135          70        51.85%
E     1 x $5          150           21            129          83        64.34%
F     Total           750          107            643         337        52.41%
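
As a quick illustration of the arithmetic behind Table 1, here is a short sketch in Python (the numbers are copied from the table; the variable names are ours):

    # Response rate = completed / (distributed - bad addresses), per Table 1
    rows = {
        "$0":     (150, 30, 41),
        "1 x $1": (150, 16, 72),
        "2 x $1": (150, 25, 71),
        "1 x $2": (150, 15, 70),
        "1 x $5": (150, 21, 83),
    }
    for level, (distributed, bad, completed) in rows.items():
        sample = distributed - bad
        print(f"{level}: {completed}/{sample} = {completed / sample:.2%}")
    # $0: 41/120 = 34.17% ... 1 x $5: 83/129 = 64.34%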

We had an overall response rate of 52%, which is actually very good for a mail survey! As the figure shows, increasing the incentive (more dollars) generally increases the response rate. But the fit is not perfect: note that we had a lower response rate with the single $2 bill than with the two $1 bills, which is not what I expected!

[Figure: Response rates by incentive level]

To see whether the level of incentive had a statistically significant effect, we performed a regression analysis using a probit model. The dependent variable was whether or not the survey was completed and returned, coded "1" if a survey was returned and "0" if it was not. Table 2 shows the results of this analysis.
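
For readers who want to try this kind of analysis themselves, here is a minimal sketch in Python using statsmodels (the output columns in Table 2 look like SAS output, so this is a reimplementation rather than the original code; the file and variable names are purely illustrative):

    # Probit model of survey return; illustrative names throughout.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("survey_responses.csv")   # hypothetical: one row per mailed survey
    # Dummy variables for each incentive level; $0 is the omitted baseline.
    X = sm.add_constant(df[["one_x_1", "two_x_1", "one_x_2", "one_x_5"]])
    y = df["returned"]                         # 1 = returned, 0 = not returned
    result = sm.Probit(y, X).fit()
    print(result.summary())                    # estimates, standard errors, 95% CIs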


Table 2: Probit Model of Response Rate
(n=643)

Parameter   Estimate   Standard Error   Wald Chi-Square   Pr > ChiSq   Wald 95% Confidence Interval
Intercept     0.4079       0.1179           11.9619          0.0005       (0.1767, 0.6391)
1 x $1        0.5016       0.1602            9.8005          0.0017       (0.1876, 0.8156)
2 x $1        0.5792       0.1631           12.6061          0.0004       (0.2595, 0.8989)
1 x $2        0.4543       0.1599            8.0778          0.0045       (0.1410, 0.7677)
1 x $5        0.7755       0.1634           22.5227          <.0001       (0.4552, 1.0958)

Essentially, this analysis tests whether the amount of the incentive (from none to $5) changed the likelihood that an individual would complete and return the survey. Of interest are the results for the four incentive levels (1 x $1, 2 x $1, 1 x $2, and 1 x $5), shown in the column labeled "Estimate." All of these are statistically significant (the significance levels in the "Pr > ChiSq" column are all much smaller than 0.05), indicating that including an incentive increases the likelihood of an individual returning the survey. And, for the most part, the larger the incentive, the higher the response rate, as indicated by the generally larger estimates for the larger incentives.
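
One way to sanity-check these estimates is to convert them back to probabilities with the standard normal CDF. Because the model has a separate dummy for each incentive level, the fitted probabilities reproduce the observed response rates in Table 1 exactly. The sign pattern suggests the software modeled the probability of non-return (an assumption on our part), so the return probability is the CDF evaluated at the effect minus the intercept:

    # Convert Table 2 probit estimates back to response probabilities.
    # Assumes Pr(returned) = Phi(effect - intercept); see note above.
    from scipy.stats import norm

    intercept = 0.4079
    effects = {"$0": 0.0, "1 x $1": 0.5016, "2 x $1": 0.5792,
               "1 x $2": 0.4543, "1 x $5": 0.7755}
    for level, beta in effects.items():
        print(f"{level}: {norm.cdf(beta - intercept):.2%}")
    # $0: 34.17%, 1 x $1: 53.73%, 2 x $1: 56.80%,
    # 1 x $2: 51.85%, 1 x $5: 64.34% -- matching Table 1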

So put more cash in the survey and get a better response rate!

But … the question for a researcher is whether or not it is worth putting the extra money into incentives (if this is even an option; in New Zealand, apparently, it is illegal to mail cash). Depending on how important a higher response rate is and how large the survey budget is, the researcher may consider trading off a higher response rate against a smaller sample size … which could be a whole new topic for future Methods articles!

There are a lot of other "methods" implicit in this discussion that could also be the focus of future Methods articles, including the use of a probit model instead of linear regression, the treatment of non-respondents, and the use of mail versus internet or telephone surveys.

Our plans for now include future Methods discussions on other aspects of this survey, including the use of the "Dillman" method for mail survey implementation and the use of GIS for tracking respondents' locations and comparing them with other, non-survey information (such as flood zones). And we'd love it if you contributed your "Methods"!

*Jeffrey K. Lazo (lazo@ucar.edu) is the director of the Societal Impacts Program (SIP) at the National Center for Atmospheric Research.


Resources and References

Some places to look for more information on the use of incentives in mail surveys include:

Church, A.H., 1993. Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis. Public Opinion Quarterly 57(1): 62–79.

Dillman, D.A., J.D. Smyth, and L.M. Christian, 2008. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 3rd ed. Wiley & Sons. 512 pp.

OMB, 2006. Standards and Guidelines for Statistical Surveys. Office of Management and Budget. Available at http://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

Singer, E., 2006. The Use of Incentives to Reduce Nonresponse in Household Surveys. Public Opinion Quarterly 70(5): 637–645. Available at http://poq.oxfordjournals.org/content/70/5/637.full.pdf


