Weather and Society Watch
Methods

Article #1

The Dillman Method and Mail Survey Research

by Jeffrey K. Lazo*

In the last issue of Weather and Society Watch (WSW), we inaugurated this Methods Section with a discussion of cash incentives in mail surveys. This issue's Methods Section is closely related, again addressing mail survey implementation, and my example is based on the same survey. This time, though, I'll discuss what is known in the survey literature as the "Dillman Method."

As noted in the April WSW, all else equal, the higher the response rate to a survey, the more likely the data will be representative of the population that was sampled. This is important as response rates are often evaluated as indicators of the potential representativeness of survey data. Also, as noted in April, the Office of Management and Budget suggests response rates of 70% or more (OMB 2006) for surveys, in part "to ensure that survey results are representative of the target population so that they can be used with confidence to inform decisions."

In 1978 Don Dillman (Washington State University) published Mail and Telephone Surveys: The Total Design Method, which covers everything from how to ask a survey question to how to put a questionnaire in an envelope for mailing. The book has gone through several editions and has been extended to cover new technologies, such as internet-based surveys, and new approaches, such as mixed-mode surveys. Dillman's suggested approach for implementing the mailing of a survey, in particular, has come to be labeled "The Dillman Method."

This method was based on extensive experience and research on survey implementation to maximize response rates in mail surveys. The basic steps to enhance response rates in the Dillman Method include:

  • Send a personalized advance-notice letter
  • Approximately one week later, send the complete survey package with a cover letter, instructions, and the questionnaire and include a return envelope with postage
  • Approximately one week later, send a follow-up postcard
  • Two weeks later, send a new cover letter, a replacement questionnaire, and a return envelope to those who have not responded
  • Send a final contact (possibly by registered post) to request completion of the survey.
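The contact schedule above can be sketched as a small script. The one-week and two-week offsets follow the steps listed; the timing of the final contact is my assumption, since the text does not specify it.

```python
from datetime import date, timedelta

def dillman_schedule(start: date) -> list[tuple[str, date]]:
    """Return (contact, date) pairs following the Dillman Method steps above."""
    offsets = [
        ("advance-notice letter", 0),
        ("full survey package", 7),        # about one week later
        ("follow-up postcard", 14),        # about one week after the package
        ("replacement questionnaire", 28), # two weeks after the postcard
        ("final contact", 42),             # timing assumed; not given in the text
    ]
    return [(step, start + timedelta(days=days)) for step, days in offsets]

for step, when in dillman_schedule(date(2011, 4, 4)):
    print(f"{when:%Y-%m-%d}  {step}")
```

The start date is arbitrary; in practice the researcher would key it to the planned first mailing.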

As a test of the effectiveness of the Dillman Method, we implemented the mailing with and without the method for different portions of the sample and compared the resulting response rates.

Of the 1,400 survey packages we distributed, 850 were sent by the U.S. Postal Service. Some of those packages included a cash incentive; we don't consider them in the current discussion (see the April newsletter edition at http://www.sip.ucar.edu/news/pdf/WSW_April_2011.pdf for the discussion of the impact of cash incentives on response rates and for more information on the topic of the survey). Of the 400 survey packages mailed without a cash incentive (i.e., a $0 incentive), 150 were mailed following the Dillman Method and 250 were mailed without it. The non-Dillman mailing was a one-time mailing of the survey packet with no advance notice or follow-up.

In the current article we compare response rates for these two groups. Table 1 shows the number of survey packages mailed, bad addresses, completed surveys returned, and the adjusted response rates for these two groups.
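The adjusted response rate is simply completed surveys divided by deliverable addresses. As a sketch, the counts below are illustrative placeholders (Table 1's actual figures are not reproduced here), chosen only so the computed rates come out near the 34% and 26% discussed next.

```python
def adjusted_response_rate(mailed: int, bad_addresses: int, completed: int) -> float:
    """Response rate after removing undeliverable (bad-address) packages."""
    deliverable = mailed - bad_addresses
    return completed / deliverable

# Hypothetical counts, not the actual Table 1 data.
dillman = adjusted_response_rate(mailed=150, bad_addresses=3, completed=50)
non_dillman = adjusted_response_rate(mailed=250, bad_addresses=4, completed=64)
print(f"Dillman: {dillman:.0%}, non-Dillman: {non_dillman:.0%}")
```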

After adjusting for bad addresses (at least those returned by the U.S. Postal Service as undeliverable), the response rate is higher with the Dillman Method than without: about an 8-percentage-point increase, from 26% to 34%. Because this difference could simply be due to chance, we test whether it is statistically significant (i.e., how likely is it that we would see this difference just by chance, versus the Dillman Method really having an impact on response rates?).

Because "Completed" is a categorical variable (equal to 1 if the survey was completed and 0 if not), I use a non-parametric test of whether the response rates are statistically different. Specifically, I test the null hypothesis (H0) that the response rate with the Dillman Method is the same as without it against the alternative hypothesis (H1) that the response rate is higher with the Dillman Method than without.

As reported in SAS, the z statistic from a Wilcoxon two-sample test (with a continuity correction of 0.5) was 1.51, with a one-sided probability of 0.084. Basically, this means there is an 8.4% chance, or roughly a 1 in 12 probability, that we would have seen this difference in response rates just by chance.
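For readers without SAS, a continuity-corrected two-proportion z test is a standard stand-in for this kind of comparison on 0/1 data; it is not the exact SAS Wilcoxon computation, and the counts below are the same hypothetical ones used earlier, so the resulting z and p will be near, but not identical to, the reported 1.51 and 0.084.

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Continuity-corrected z test that proportion x1/n1 exceeds x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    cc = 0.5 * (1 / n1 + 1 / n2)                 # continuity correction
    z = (abs(p1 - p2) - cc) / se
    p_one = 1 - NormalDist().cdf(z)              # one-sided p-value
    return z, p_one

# Hypothetical completed/deliverable counts (~34% vs ~26%).
z, p = two_prop_z(50, 147, 64, 246)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```

Doubling the one-sided p-value gives the two-sided significance discussed below the test result.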

Our result suggests that use of the Dillman Method can lead to significantly higher response rates in mail surveys. But … I also note that if I had tested a two-sided alternative hypothesis (H1) that the response rates with and without the Dillman Method are not equal, the same statistic has a significance of only 0.138, more than the 10% level many researchers would suggest. So … statistics is an art as well as a science, and one's choice of approach can sometimes find the results one is looking for.

Would I recommend, then, that all mail surveys be implemented using the Dillman Method? As noted in my April piece on the use of cash incentives, the question for a researcher is whether or not it is worth putting the extra money into implementing the survey with the extra mailings, letters, postcards, etc., that are recommended with the Dillman Method.

Depending on how important a higher response rate is and how large the survey budget is, the researcher may consider trading off a higher response rate against a larger sample size … that could be a whole new topic for a future Methods!
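That trade-off can be made concrete with a back-of-the-envelope calculation: under a fixed budget, the Dillman Method's extra mailings raise the cost per sampled address but also the response rate. The per-address costs below are made-up placeholders; the response rates are the approximate ones observed above.

```python
def expected_completes(budget: float, cost_per_address: float,
                       response_rate: float) -> tuple[int, int]:
    """Return (sample size, expected completed surveys) under a fixed budget."""
    sample_size = int(budget // cost_per_address)
    return sample_size, round(sample_size * response_rate)

# Hypothetical costs: $12/address with Dillman mailings, $4 for a one-shot mailing.
for label, cost, rate in [("Dillman", 12.0, 0.34), ("one-shot", 4.0, 0.26)]:
    n, completes = expected_completes(5000.0, cost, rate)
    print(f"{label}: sample {n}, expected completes {completes}")
```

Under these made-up numbers the cheaper one-shot mailing yields more completed surveys, but at a lower response rate, which raises the representativeness concerns discussed above.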

While this and the April Methods Section focused mainly on issues related to implementation of surveys by mail, I hope the reader will realize that many similar issues arise with respect to telephone and internet-based surveys, and in-person interviews. Issues of respondent motivation, response rates, and representativeness of the sample are all important considerations in evaluating the quality of data in a survey-based study. And I'll end by noting that questions of using cash incentives or the Dillman Method for mailing a survey are only a few of the dozens of issues and decisions a researcher deals with in collecting reliable and valid data using surveys.

*Jeffrey K. Lazo (lazo@ucar.edu) is the director of the Societal Impacts Program (SIP) at the National Center for Atmospheric Research.


Resources and references (including some places to look for more information on the use of the Dillman Method in mail surveys):

Dillman, D.A. 1978. Mail and Telephone Surveys: The Total Design Method. New York: John Wiley and Sons.

OMB. 2006. Office of Management and Budget: Standards and Guidelines for Statistical Surveys. (Available at http://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf).

Johnson, T., and L. Owens. 2003. "Survey Response Rate Reporting in the Professional Literature." Paper presented at the 58th Annual Meeting of the American Association for Public Opinion Research, Nashville, TN, May. (Available at http://www.amstat.org/sections/srms/proceedings/y2003/Files/JSM2003-000638.pdf)


