SUPPORTING STATEMENT
U.S. Department of Commerce
U.S. Census Bureau
Generic Clearance for Internet Nonprobability Panel Pretesting and Qualitative Survey Methods Testing
A. Justification
1. Necessity of Information Collection
The U.S. Census Bureau is requesting a new OMB generic clearance to conduct a variety of medium-scale iterative Internet research pretesting activities. We will dedicate a block of hours to these activities for each of the next three years. OMB will be informed in writing of the purpose and scope of each of these activities, as well as the time frame and number of burden hours used. The number of hours used will not exceed the number set aside for this purpose.
The Census Bureau is committed to conducting research in a cost-efficient manner. Currently, several stages of testing occur in research projects at the Census Bureau. As a first stage of research, the Census Bureau pretests questions on surveys or censuses and evaluates the usability and ease of use of websites with a small number of participants during focus groups, usability tests, and cognitive interviews. These projects are in-person and labor-intensive, but typically target samples of only 20 to 30 respondents. This small-scale work is done through an existing OMB generic clearance. The second stage is often a larger-scale field test with a split-panel design of a survey, or a release of a Census Bureau data dissemination product with a feedback mechanism. These field tests require substantial preparatory work and are often limited in the number of panels tested because of cost considerations. They typically target very large sample sizes, with over 10,000 respondents per panel, and are usually conducted under stand-alone OMB clearances.
Cost efficiencies can be achieved by testing some research questions in a medium-scale test, using fewer participants than we typically use in a field test but a larger and more diverse set of participants than we recruit for cognitive and usability tests. Using Internet panel pretesting, we can answer some research questions more thoroughly than in small-scale testing, yet less expensively than in a large-scale field test. This clearance seeks to establish a medium-scale (defined as sample sizes of 100 to 2,000 per study), cost-efficient method of testing questions and contact strategies over the Internet through different types of nonprobability samples.
For example, email has been identified as a possible cost-effective notification strategy for online data collection. Email has not been used extensively as a notification mode in past censuses or other government surveys (see the "Supporting Literature" section at the end of this section). Prior to implementing an email strategy, the Census Bureau needs to determine the best email invitation to maximize the likelihood that someone will open the email and initiate the survey. Assessing numerous email variations in a large-scale test would be cost-prohibitive; medium-scale testing of email variations is more efficient. This research will be used to answer some fundamental questions about how to optimize email (and possibly text message) contacts.
This research program will be used by the Census Bureau and survey sponsors to test alternative contact methods, including emails and text messages (via an opt-in strategy), improve online questionnaires and procedures, reduce respondent burden, and ultimately increase the quality of data collected in Census Bureau censuses and surveys. We will use the clearance to pretest decennial and demographic census and survey questionnaires prior to fielding them, as well as communications and/or marketing strategies and data dissemination tools for the Census Bureau. The primary method for identifying measurement problems with a questionnaire or survey procedure is the split-panel test. This work will encompass both methodological and subject matter research questions that can be tested with a medium-scale nonprobability panel.
This research program will also be used by the Census Bureau for remote usability testing of electronic interfaces and to perform other qualitative analyses such as respondent debriefings. An advantage of using remote, medium-scale testing is that participants can test products at their convenience using their own equipment, as opposed to using Census Bureau-supplied computers. A diverse participant pool (geographically, demographically, or economically) is another advantage. Remote usability testing would use click-through rates and other paradata, accuracy and satisfaction scores, and written qualitative comments to determine optimal interface designs and to obtain feedback from respondents.
The public will be offered an opportunity to participate in this research remotely by signing up for an online research panel. If a person opts in, the Census Bureau will occasionally email (or text, if applicable) the person an invitation to complete a survey for one of our research projects. Invited respondents will be told the topic of the survey and how long it will take to complete. Under this clearance, we will also conduct similar-scale and similarly designed research using other email lists to validate preliminary findings and extend the research.
Methods
Split Sample Experiments: This method involves testing alternative versions of questionnaires, invitations to questionnaires (e.g., emails or text messages), or websites, at least some of which have been designed to address problems identified in draft versions or versions from previous waves. The use of multiple questionnaires, invitations, or websites, randomly assigned to permit statistical comparisons, is the critical component; data collection will be via the Internet. Comparison of revised questionnaires (or invitations) against a control version, preferably, or against each other facilitates statistical evaluation of the performance of alternative versions of the questionnaire (or invitation or website).
The number of versions tested and the number of cases per version will depend on the objectives of the test. We cannot specify a minimum panel size with certainty, although we would expect that no questionnaire version would be administered to fewer than fifty respondents.
Split sample tests that incorporate methodological questionnaire design experiments will have a larger maximum sample size (up to several hundred cases per panel) than other pretest methods. This will enable the detection of statistically significant differences, and facilitate methodological experiments that can extend questionnaire design knowledge more generally for use in a variety of Census Bureau data collection instruments.
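As an illustration of the statistical comparison a split-sample experiment supports, the sketch below compares self-response rates between a control invitation panel and an alternative invitation panel using a chi-square test. It is illustrative only; the panel names, sizes, and response counts are hypothetical and do not represent Census Bureau data or production code.

```python
# Illustrative sketch: comparing self-response rates between two randomly
# assigned invitation panels in a split-sample test. All counts are hypothetical.
from scipy.stats import chi2_contingency

panels = {
    "control_invitation":   {"invited": 1000, "responded": 310},
    "alternate_invitation": {"invited": 1000, "responded": 355},
}

# 2x2 contingency table: [responded, did not respond] for each panel.
table = [[p["responded"], p["invited"] - p["responded"]] for p in panels.values()]

chi2, p_value, dof, expected = chi2_contingency(table)

for name, p in panels.items():
    print(f"{name}: response rate = {p['responded'] / p['invited']:.1%}")
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}")
```

In practice, the number of panels, the expected difference between versions, and the minimum detectable effect would drive the choice of sample size per panel before fielding.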
Usability Interviews: This method involves getting respondent input to aid in the development of automated questionnaires and websites and associated materials. The objective is to identify problems that keep respondents from completing automated questionnaires accurately and efficiently with minimal burden, or that prevent respondents from successfully navigating websites and finding the information they seek. Remote usability testing may be conducted under this clearance, whereby a user would receive an invitation to use a website or survey, then answer targeted questions about that experience.
Qualitative Interviews: This method involves one-on-one (or sometimes group) interviews in which the respondent is typically asked questions about survey content areas, survey questions, or the survey process. A number of different techniques may be involved, including cognitive interviews and focus groups. The objective is to identify problems of ambiguity or misunderstanding, or other difficulties respondents may have answering survey questions, in order to improve the information ultimately collected in large-scale surveys and censuses.
Procedures for Clearance
This clearance will cover only pretests conducted primarily remotely, via the Internet. Because the types of surveys included under the umbrella of this clearance are so varied, it is difficult to specify at this point what kinds of activities will be involved in any particular test, but a key component will be the comparison of one invitation, questionnaire, or website to another.
We will provide OMB with a copy of questionnaires and invitations in advance of any testing activity. Depending on the stage of development, this may be the printed material from the last round of a survey or a revised draft based on analysis of other evaluation data. For a test of alternative procedures, the description and rationale for the procedures would be submitted. We will also provide a description of the sample design and the planned administration. Attachment B shows an example of the type of materials that will be submitted under this clearance. OMB will endeavor to provide comments on substantive issues within 10 working days of receipt.
The Census Bureau will consult with the Economics and Statistics Administration (ESA) and OMB prior to submission on the appropriateness of submissions under this clearance that may raise policy or substantive issues. With respect to ESA, this will include all research and testing related to the American Community Survey (ACS) and the 2020 decennial census. In addition, the Census Bureau will consult with ESA on any research and testing proposals that are presented to the Data Stewardship Executive Policy (DSEP) Committee. Consultation with ESA includes the Census Bureau providing copies of the materials to be tested in advance of any testing.
The Census Bureau will send ESA and OMB an annual report at the end of each year summarizing the number of hours used, as well as the nature and results of the activities completed under this clearance.
Data collection for this project is authorized under the authorizing legislation for the questionnaire being tested. This may be Title 13, Sections 131, 141, 161, 181, 182, 193, and 301 for Census Bureau-sponsored surveys, and Titles 13 and 15 for surveys sponsored by other Federal agencies. We do not yet know what other titles will be referenced, since we do not know which survey questionnaires will be pretested during the course of the clearance.
Supporting Literature
The use of prenotice and reminder contacts as a strategy to increase response in household mail surveys is well documented (Couper, 2008; Dillman, 2000; Dillman, Clark, and Sinclair, 1995; Dillman et al., 2009; Fox, Crask, and Kim, 1988). Past research with postal letters has shown that mentioning the mandatory nature of the survey request, how the data are used, a survey due date, the cost savings associated with self-response, and a possible in-person visit by an interviewer all show some promise for increasing self-response to the census questionnaire (Stokes, Reiser, Bentley, Hill, and Meier, 2011; Martin, 2009; Dillman, Singer, Clark, and Treat, 1996). There is also some evidence that a postcard prenotice may be just as effective as a letter prenotice, while decreasing some of the cost (Beebe et al., 2010).
However, research exploring prenotification for Internet surveys is not as definitive (Bosnjak et al., 2008; Kaplowitz et al., 2004; Porter and Whitcomb, 2007). Some research has shown that sending a prenotification in a different mode (e.g., mail or text message) may increase response to Internet surveys over a same-mode contact (i.e., an email invitation to an Internet survey; Bandilla, Couper, and Kaczmirek, 2012; Bosnjak et al., 2008; Kaplowitz et al., 2004). However, other studies have shown little or no effect of using a different-mode prenotice (see one condition in Bandilla, Couper, and Kaczmirek, 2012; Millar and Dillman, 2011; Porter and Whitcomb, 2007), or a contrary effect in which a prenotice in the same mode (email) increased response (Kaplowitz, Lupi, Couper, and Thorp, 2012; Schaefer and Dillman, 1998). It has been hypothesized that pre-notification in another mode may attract greater attention and lend more legitimacy to the subsequent e-mail invitation (Bosnjak et al., 2008); however, this is not a definitive finding. Kaplowitz and colleagues (2004) found no additional effect of a paper reminder to an Internet survey beyond that provided by a paper prenotice (i.e., the paper reminder increased response rates when there was no paper prenotice, but not in the condition where there was a paper prenotice).
Other studies have altered components of the survey invitations or the survey itself to try to improve response rates. An early meta-analysis of Internet studies found the number of contacts, personalized contacts, and precontacts to be the dominant factors affecting response rates (Cook, Heath, and Thompson, 2000). Subsequent to the meta-analysis, personalization of the email (e.g., "Dear [NAME]" vs. "Dear Student") has consistently been found to significantly increase response rates (Heerwegh, 2005; Heerwegh, Vanhove, Matthijs, and Loosveldt, 2005; Pearson and Levine, 2003).
Callegaro and colleagues (2009) found no effect on response rates of modifying the subject line within a panel of pre-engaged respondents. Other studies on subject lines have found significant effects, but these appear to be very specific to the studies or populations at hand, and less relevant for our need to collect basic person and housing information while offering no incentive (see Henderson, 2011; Porter and Whitcomb, 2005; Titiz and Ziniel, 2010; Trouteaud, 2004). One hypothesis supported by a test with a faculty and student population that could apply to Federal surveys is that an email invitation using an authority subject line (e.g., "Vice President for Finance and Operations asks you to take a survey") increases response over a topic-matter subject line (e.g., "Take a survey on campus environmental stewardship") (Kaplowitz, Lupi, Couper, and Thorp, 2012). Porter and Whitcomb (2005) tested different subject lines with samples of students and found that a blank subject line yielded the highest response for a group of students with low involvement with the survey sponsor, compared with other combinations of subject lines including the sponsor, the reason for the contact, and a plea for assistance. Couper (2008) suggests that to avoid response bias the subject line should not include the topic of the survey. Lee (2010) cites an FAQ from Websurveyor in 2000 that instructed users to send survey invitations to households on a Thursday or Friday so respondents can complete the survey over the weekend.
Kaplowitz, Lupi, Couper and Thorp (2012) also manipulated various other components of the survey invitations—invitation mode, location of URL link, length of the invitation text, and survey time/effort estimate. Their results indicate that some of these effects may differ depending on the demographics of the population. They found differing effects of mode of contact, length of the invitation text, and survey time/effort estimate between faculty and students. Klofstad, Boulianne and Basson (2008) found that telling respondents they would receive an email reminder if they failed to complete an Internet questionnaire boosted response rates.
In conclusion, there have been many published studies on this topic, but there is no definitive set of best practices that one can use, because each of these studies has its own idiosyncrasies. These studies are a good source of hypotheses and experimental designs, but not of practical guidance for a government survey.
Literature on and considerations about the use of nonprobability samples for this type of work have recently been thoroughly covered by a Task Force commissioned by the American Association for Public Opinion Research and are well documented there (Baker, et al., 2013).
2. Needs and Uses
The information collected in this program of developing and testing questionnaires will be used by staff from the Census Bureau and sponsoring agencies to evaluate and improve the quality of the data in the surveys and censuses that are ultimately conducted. Because the questionnaires being tested under this clearance are still in the process of development, the data that result from these collections are not considered official statistics of the Census Bureau or other Federal agencies. Data will be included in research reports prepared for sponsors inside and outside of the Census Bureau. The results may also be prepared for presentations related to survey methodology at professional meetings or publications in professional journals.
Information quality is an integral part of the pre-dissemination review of the information disseminated by the Census Bureau (fully described in the Census Bureau's Information Quality Guidelines). Information quality is also integral to the information collections conducted by the Census Bureau and is incorporated into the clearance process required by the Paperwork Reduction Act.
3. Use of Information Technology
This generic clearance will use 100 percent electronic data collection. It will primarily test Internet data collection instruments (using a secure Internet system) with email and/or text message invitations. We will also test Internet data dissemination websites, either with email invitations or by posting a link on a Census Bureau webpage.
4. Efforts to Identify Duplication
This research does not duplicate any other questionnaire design work being done by the Census Bureau or other Federal agencies. The purpose of this clearance is to stimulate additional research, which would not be done under other circumstances due to time constraints. This research will involve collaboration with staff from other agencies that are sponsoring the surveys conducted by the Census Bureau. The research may also involve joint efforts with staff from other Federal laboratory facilities. All efforts would be collaborative in nature, and no duplication in this area is anticipated.
To the maximum extent possible, we will make use of previous information, reviewing results of previous evaluations of survey data before we attempt to revise invitations or questionnaires. However, this information is not sufficient to refine our census and survey invitations or questionnaires without conducting additional research.
This generic clearance request is an explicit addition to the current generic clearance for pretesting (OMB Number 0607-0725), which allows smaller-scale cognitive, questionnaire design, and usability testing research as part of testing for Census Bureau censuses and surveys.
5. Minimizing Burden
This research will consist of relatively small-scale data collection efforts. This minimizes the burden required to improve questionnaires and procedures, test new ideas, and refine or improve upon positive or unclear results from other tests.
6. Consequences of Less Frequent Collection
This clearance involves one-time questionnaire development activities for each survey that is connected with the clearance. If this project were not carried out, the quality of the data collected in those surveys would suffer. In addition, testing websites with a larger sample base will give us more confidence in our findings and the resulting revisions to the websites.
7. Special Circumstances
This collection is consistent with all OMB guidelines. There are no special circumstances.
8. Consultations Outside the Agency
Reg Baker, a consultant at Market Strategies International and an expert in nonprobability Internet panel testing, has been and will continue to be a consultant on this project.
Consultation with staff from other Federal agencies that sponsor surveys conducted by the Census Bureau will occur in conjunction with the testing program for the individual survey. Consultation with staff from other Federal laboratory facilities may also occur as part of joint research efforts. These consultations will include discussions concerning potential response problems, clarity of questions and instructions, and other aspects of respondent burden. Additional efforts to consult with potential respondents to obtain their views on the availability of data, clarity of instructions, etc., may be undertaken as part of the testing that is conducted under this clearance.
A notice was published in the Federal Register on June 24, 2013 (78 FR 37783), inviting public comment on our plans to submit this request. We did not receive any responses to this notice.
9. Paying Respondents
Respondents will not be paid or provided any incentive for their participation in remote testing activities. For activities conducted in the cognitive laboratory or in focus groups, respondents may be offered an incentive to offset the cost of participation. Details of any such incentive will be outlined in each individual request submitted by the Census Bureau to OMB.
10. Assurance of Confidentiality
All respondents who participate in research under this clearance will be informed that the information they provide is confidential and that their participation is voluntary. This disclosure will be made prior to any data collection.
The confidentiality of information is assured by Title 13, United States Code, or other applicable titles, which authorize the collection of information.
11. Justification for Sensitive Questions
There will be no sensitive questions asked.
12. Estimate of Hour Burden
We estimate 10 minutes per response for 16,666 respondents per year. Data collections will occur approximately once a month, with approximately 1,389 respondents per study. The total estimated respondent burden is 8,333 hours for the period from January 2014 through December 2016. These hours will be distributed as follows:
Respondent burden (hours)
January - December 2014 2,778
January - December 2015 2,778
January - December 2016 2,778
Total 8,334
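For reference, the annual figure follows directly from the per-response estimate: 16,666 responses per year × 10 minutes per response ≈ 2,778 hours per year. Summing the three rounded annual figures yields the 8,334-hour total shown above, while computing from the unrounded three-year total (approximately 50,000 responses × 10 minutes) yields approximately 8,333 hours; rounding accounts for the one-hour difference.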
This estimate is based on our anticipated use of the clearance over the next three years, including burden hours expended for multi-panel questionnaire experiments.
We will use a variety of forms to conduct the research under this clearance, but the exact number of different forms, length of each form, and number of subjects/respondents per form are unknown at this time.
13. Estimate of Cost Burden
There is no cost to respondents for participating in the research we are conducting under this clearance, except for their time to complete the questionnaire and (if applicable) travel to the location to participate in the research.
14. Cost to Federal Government
There is no way to anticipate the actual number of participants, length of interview, and/or complexity of the data collection instruments for the surveys to be conducted under this clearance. Thus, it is impossible to estimate in advance the cost to the Federal Government. The Census Bureau will cover the costs.
15. Reason for Change in Burden
This is a request for a nonsubstantive change. Specifically, we would like to add two additional, qualitative methods for use under this clearance: focus groups and cognitive interviews. These methods would also be used for pretesting purposes and therefore fit well with the overall purpose and scope of the generic clearance vehicle as initially formulated. In addition, we specifically note that we may, depending on the method used, propose to offer an incentive to respondents to offset the costs of their participation. Such requests would be described and a rationale provided in each "mini-generic" request submitted under the generic vehicle, if approved. All changes to the original request are shown in purple font.
16. Project Schedule
This research program is for questionnaire and procedure development purposes. We will use data tabulations to evaluate the results of questionnaire testing. The information collected in this effort will not be the subject of any printed Census Bureau reports; however, it might be included as a methodological appendix or footnote in a report containing data from a larger data collection effort. The results of this research may, however, be prepared for presentation at professional meetings or publication in professional journals.
Due to the nature of this clearance, there is no definite or tentative time schedule at this point. We expect work to continue more or less continuously throughout the duration of the clearance.
17. Request to Not Display Expiration Date
No exemption is requested.
18. Exceptions to the Certification
There are no exceptions to the certification.
References:
Baker, R., Blumberg, S., Brick, J.M., Couper, M., Courtright, M., Dennis, J.M., Dillman, D., Frankel, M., Garland, P., Groves, R., Kennedy, C., Krosnick, J., Lavrakas, P., Lee, S., Link, M., Piekarski, L., Rao, K., Thomas, R., and Zahs, D., (2010). Research Synthesis: AAPOR Report on Online Panels. Public Opinion Quarterly, 74(4): 711-781.
Baker, R., Brick, J.M., Bates, N.A., Battaglia, M., Couper, M.P., Dever, J.A., Gile, K.J., and Tourangeau, R. (2013). Report of the AAPOR Task Force on Non-Probability Sampling. Available at http://bit.ly/16EvssL.
Bandilla, W., Couper, M.P., & Kaczmirek, L. (2012). The Mode of Invitation for Web Surveys. Survey Practice, 5(3).
Beebe, T.J., et al. (2010). Shortening a survey and using alternative forms of prenotification: impact on response rate and quality. BMC Medical Research Methodology; 10:50.
Bentley, M., Hill, J.M., Reiser, C., Stokes, S., & Meier, A. (2011). "2010 Census Quality Survey," 2010 Census Planning Memoranda Series No. 165, U.S. Census Bureau, http://2010.census.gov/2010census/pdf/2010%20Census%20Quality%20Survey.pdf.
Bosnjak, M., W. Neubarth, M.P. Couper, W. Bandilla & L. Kaczmirek. (2008). Prenotification in web-based access panel surveys: the influence of mobile text messaging versus e-mail on response rates and sample composition. Social Science Computer Review 26(2): 213–223.
Callegaro, M., Kruse, Y., Thomas, M., Nukulkij, P. (2009). The Effect of Email Invitation Customization on Survey Completion Rates in an Internet Panel: A Meta-analysis of 10 Public Affairs Surveys. The American Association for Public Opinion Research (AAPOR) 64th Annual Conference, 2009.
Cook, C., Heath, F., & Thompson, R.L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60, 821-836.
Council of American Survey Research Organizations (CASRO). (2011). “CASRO code of standards and ethics of survey research. Section 3.a. - Internet Research and Email Solicitation”. Available at: http://www.casro.org/codeofstandards.cfm.
Couper, M.P. (2008). Designing effective web surveys. Cambridge University Press, New York.
Dillman, D. (2000). Mail and Internet surveys: The total design method (2nd ed.). New York: Wiley.
Dillman, D.A., Clark, J.R., & Sinclair, M.A. (1995). How Prenotice Letters, Stamped Return Envelopes, and Reminder Postcards Affect Mailback Response Rates for Census Questionnaires. Survey Methodology, 21(2), 1-7.
Dillman, D., Singer, E., Clark, J., and Treat, J. (1996). Effects of benefits appeals, mandatory appeals, and variations in statements of confidentiality on completion rates for census questionnaires. Public Opinion Quarterly, 60(3): 376-389.
Dillman, D.A., Smyth, J.D., & Christian, L.M. (2009). Internet, mail and mixed-mode surveys: the tailored design method (3rd ed.). John Wiley & Sons, Hoboken, NJ.
Fox, R., Crask, M., & Kim, J. (1988). A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52, 467-491.
Groves, R., Cialdini, R., and Couper, M. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56(4): 475-495.
Heerwegh, D. (2005). Effects of Personal Salutations in E-mail Invitations to Participate in a Web Survey. Public Opinion Quarterly 69(4): 588-598.
Heerwegh, D., Vanhove, T., Matthijs, K., & Loosveldt, G. (2005). The effect of personalization on response rates and data quality in web surveys. International Journal of Social Research Methodology, 8, 85-99.
Henderson, V. (2011). Increasing (or Decreasing) Response Rate by Changing the Subject of Email Invitations. Presented at The American Association for Public Opinion Research (AAPOR) 66th Annual Conference, 2011.
Kaplowitz, M.D., Hadlock, T.D., & Levine, R. (2004). A comparison of web and mail survey response rates. Public Opinion Quarterly 68(1): 94–101.
Kaplowitz, M.D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The Effect of Invitation Design on Web Survey Response Rates. Social Science Computer Review, 30(3), pp. 339-349.
Klofstad, C., Boulianne, S., & Basson, D. (2008). Matching the message to the medium: results from an experiment on Internet survey email contacts. Social Science Computer Review, 26(4), 498–509.
Lee, B. (2010). Exploring a new research method in diversity research. Procedia Social and Behavioral Sciences, 7(C), 494-503.
Martin, E. (2009). Can a deadline and compressed mailing schedule improve mail response in the decennial census? Public Opinion Quarterly, 73(2): 361-367.
Millar, M.M., & Dillman, D.A. (2011). Improving Response to Web and Mixed-Mode Surveys. Public Opinion Quarterly, 75(2), 249-269.
Pearson, J., & Levine, R.A. (2003). Salutations and response rates to online surveys. In R. Banks, J. Currall, J. Francis, L. Gerrad, R. Khan, T. Macer, M. Rigg, E. Ross, S. Taylor & A. Westlake (Eds.), The impact of technology on the survey process: Proceedings of the fourth international conference on survey and statistical computing (pp. 351-362). Chesham, Bucks: Association for Survey Computing.
Porter, S.R., & Whitcomb, M.E. (2003). The impact of contact type on web survey response rates. Public Opinion Quarterly, 67, 579-588.
Porter, S.R., & Whitcomb, M.E. (2005). E-mail subject lines and their effect on web survey viewing and response. Social Science Computer Review, 23, 380-387.
Porter, S.R., & Whitcomb, M.E. (2007). Mixed-mode contacts in web surveys: Paper is not necessarily better. Public Opinion Quarterly, 71, 635-648.
Schaefer, D., & Dillman, D. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62, 378-397.
Stokes, S., Reiser, C., Bentley, M., Hill, J. and Meier, A., (2011). 2010 Census Deadline Messaging and Compressed Mailing Schedule Experiment, U.S. Census Bureau: 2010 Census Program for Evaluations and Experiments. Last accessed January 10, 2013 at www.census.gov/2010census/pdf/2010_Census_DM_CS.pdf
Tancreto, J.G., Zelenak, M.F., Davis, M., Ruiter, M., & Matthews, B. (2012). “2011 American Community Survey Internet Tests: Results from the First Test in April 2011,” 2012 American Community Survey Research and Evaluation Report Memorandum Series #ACS12-RER-13-R2, DSSD 2012 American Community Survey Memorandum Series #ACS12-MP-01-R2, June 11, 2012. http://www.census.gov/acs/www/Downloads/library/2012/2012_Tancreto_01.pdf
Titiz, H. and Ziniel, S. (2010). The Subject Lines of Web Survey Invitations and Participation Rates. Presented at: The American Association for Public Opinion Research (AAPOR) 65th Annual Conference, 2010.
Trouteaud, A.R. (2004). How you ask counts: A test of internet-related components of response rates to a Web-based survey. Social Science Computer Review, 22, 385-392.