
Supporting Statement B for

Paperwork Reduction Act Submission


OMB Control Number 1018-XXXX

UCAN Survey - National Initiative to Understand

and Connect Americans and Nature


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, state and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved.


This is a one-time data collection from 8,950 total individuals, consisting of three segments: (1) a nationwide survey of English- and Spanish-speaking Americans, including oversamples of African Americans, Asian Americans, and Hispanics; (2) a representative State-level survey of Florida; and (3) a representative State-level survey of Texas. (See Tables 1 and 2 for details.)

Table 1. Nationwide survey

  Geography                                  Total adults     Anticipated sample
  United States                              237.7 million    4,500
  Additional African American Oversample     28.6 million     350
  Additional Asian American Oversample       11.9 million     350
  Additional Hispanic-Latino/a Oversample    34.5 million     350
  TOTAL                                                       5,550

Source: American Community Survey, 2015, http://factfinder.census.gov/, and refined searches by state, age, and racial/ethnic group.


At the national level, this robust sample size enables us to examine respondents by salient social categories, including race and ethnicity, which are of increasing interest to conservationists and to the U.S. Fish and Wildlife Service in particular. We will also be able to examine respondents by their location in one of the nine Census divisions (similar to the USFWS’s existing National Survey of Fishing, Hunting, and Wildlife-Associated Recreation), thus providing finer-grained detail for decision makers to design policies and programs that may cross State boundaries.


We will also survey respondents in two of the fastest growing States, Texas and Florida. These States represent important demographic trends for future conservation programs and efforts. In helping to fund this survey, the Texas Parks and Wildlife Department and the Florida Fish and Wildlife Conservation Commission have shown their commitment to using these results to better understand how to best serve their constituencies.


Table 2. State surveys

  Geography                                  Total adults     Anticipated sample
  Florida                                    14.8 million     1,400
    Additional African American Oversample   2.2 million      100
    Additional Asian American Oversample     377 thousand     100
    Additional Hispanic-Latino/a Oversample  3.2 million      100
  Subtotal (Florida)                                          1,700
  Texas                                      18.3 million     1,400
    Additional African American Oversample   2.2 million      100
    Additional Asian American Oversample     772 thousand     100
    Additional Hispanic-Latino/a Oversample  6.4 million      100
  Subtotal (Texas)                                            1,700
  TOTAL                                                       3,400

Source: American Community Survey, 2015, http://factfinder.census.gov/, and refined searches by state, age, and racial/ethnic group.


Although the three surveys will be conducted at roughly the same time and with the same questions, each is a distinct source of data. State-level estimates will not be generalized to the US population, nor will analyses be conducted on a pooled dataset of all 8,950 cases. During data analysis, we will explore the possibility of benchmarking state results to US results, stating clearly that the state-level results are not generalizable to the US population and that they were collected in separate sampling efforts.


2. Describe the procedures for the collection of information including:

* Statistical methodology for stratification and sample selection,

* Estimation procedure,

* Degree of accuracy needed for the purpose described in the justification,

* Unusual problems requiring specialized sampling procedures, and

* Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


The research team with whom the Service is collaborating (Dr. Stephen R. Kellert, Professor Emeritus and Senior Research Scholar, Yale University School of Forestry and Environmental Studies; and DJ Case & Associates) evaluated and compared in detail the strengths, weaknesses, costs, and practicalities of telephone, mail, personal-interview, and online methodologies for surveying the general public and sub-populations. This evaluation included a review of the standard and contemporary sampling literature, along with interviews with survey professionals and consultants acknowledged as industry and government experts. Coincident with this evaluation was the development, peer review, and small-scale testing (N = 9) of the data collection instrument designed to explore public sentiment toward nature.


Ultimately, the research team selected a double opt-in, online, self-administered survey. The decision was based on the following:


  • depth of survey content and inquiry, and thought and reflection required of respondents;

  • survey completion time (online response, ~20 minutes);

  • significant and reassuring progress in recent years in survey panel composition and sampling to represent populations of interest (grounded in peer-reviewed literature); and

  • fiscal efficiencies.


Such data collection approaches are becoming increasingly common and now appear in well-regarded, peer-reviewed journals.1 Not all online surveys are created equal, however, and we have worked closely with the research firm Toluna to minimize error and bias in the sample. We are confident the data will be of high quality and adequate for our research goals.


Outreach methods. Toluna currently maintains a pool of approximately 1.7 million potential participants in the United States. It recruits participants on dozens of websites, including sites targeted to particular demographics (such as mothers, Spanish speakers, and the highly educated). Potential participants undergo a series of quality checks to verify their identity: they must validate their postal code, complete a CAPTCHA confirmation, and confirm their membership via email (double opt-in). Toluna also uses cookie-based technology during panelist registration to prevent duplicate respondents, digital fingerprinting technology to keep respondents from taking a survey more than once, and a matching algorithm to flag similarities between new registrants and existing panelists. Participants must have an email address; however, they need not have their own computer or Internet access, as they can respond to surveys at libraries, schools, Internet cafes, and other places. Participants receive an incentive for participating (i.e., points), which they can redeem for tangible rewards such as sweepstakes entries, gift cards, coupons, and cash payments.
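Toluna's actual identity and de-duplication checks are proprietary and more extensive than can be shown here; the following minimal sketch, with hypothetical data, illustrates only the general shape of such screening (a registrant is rejected when a confirmed email or device fingerprint is already enrolled):

```python
# Minimal sketch of duplicate-registration screening of the kind described
# above. Hypothetical data and logic; Toluna's actual checks (cookies,
# digital fingerprinting, similarity matching) are proprietary.
existing_emails = {"a@example.com", "b@example.com"}
existing_fingerprints = {"fp-1001", "fp-1002"}

def is_duplicate(email: str, fingerprint: str) -> bool:
    """Reject a registrant whose confirmed email or device fingerprint
    is already enrolled in the panel."""
    return email in existing_emails or fingerprint in existing_fingerprints

print(is_duplicate("a@example.com", "fp-9999"))  # True: email already enrolled
print(is_duplicate("c@example.com", "fp-9999"))  # False: new panelist
```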


Matching the composition of the total population. Toluna uses propensity score stratification to create its samples, a technique that is widely used in the social sciences. A propensity score is the probability that a person is in the “treated” group, given his or her observed background characteristics; it is represented as a single scalar summary. Widely cited research has shown that dividing that scalar into 5 strata (i.e., quintiles) can remove approximately 90% of the bias due to the measured covariates.2
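Toluna's implementation is proprietary, but propensity-score subclassification itself is a standard technique. The sketch below, using simulated data and hypothetical covariate names, shows the core idea: fit a logistic model for membership in a benchmark probability sample, read off each case's scalar propensity score, and cut the scores into five strata.

```python
# Minimal sketch of propensity-score subclassification into quintiles
# (Rosenbaum & Rubin 1984). Simulated data and hypothetical covariates;
# this is not Toluna's proprietary algorithm.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Combined file: benchmark (probability-survey) cases are the "treated"
# group; online panelists are the comparison group.
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "education": rng.integers(1, 6, n),   # 1 = less than HS ... 5 = graduate
    "has_passport": rng.integers(0, 2, n),
    "smokes": rng.integers(0, 2, n),
    "in_benchmark": rng.integers(0, 2, n),
})

covariates = ["age", "education", "has_passport", "smokes"]
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["in_benchmark"])

# Scalar propensity score in (0, 1) for each case.
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# Five strata (quintiles) of the score; subclassifying on quintiles removes
# roughly 90% of the bias due to the covariates in the model.
df["quintile"] = pd.qcut(df["pscore"], q=5, labels=[1, 2, 3, 4, 5])
print(df.groupby("quintile", observed=True)["pscore"].mean().round(3))
```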


Toluna uses a three-step process to draw each sample and create these 5 quintiles. First, Toluna draws questions from “best-in-class,” nationally representative probability surveys conducted by the Census Bureau, the Bureau of Labor Statistics, the Centers for Disease Control and Prevention, and the National Opinion Research Center (the General Social Survey). These questions ask about attitudes (e.g., political orientation), behaviors (e.g., smoking, having a passport), and demographics (e.g., gender, age, Census division, and education). Second, Toluna applies a scoring algorithm to panelists’ own responses, assigning a propensity score (ranging from 0 to 1) to each respondent. (The composition of this algorithm is the only proprietary aspect of selecting the sample; all other elements of the process, including data sources, are publicly available and used by other research firms.) Third, Toluna assigns potential participants to one of the five quintile groups, which are defined by a blend of these behavioral, attitudinal, and demographic characteristics. Subjects with similar propensity scores resemble one another on these characteristics. (For example, quintile group 5 generally consists of highly educated, highly politically active, full-time employed, and affluent Americans.) The composition of the quintiles in the sample should reflect the composition of the quintiles in the total population; if it does not, we have authorized Toluna to recruit more participants until each quintile in the final sample reflects the total population on attitudes, behaviors, and demographics. This approach eliminates the need to weight the data heavily prior to final analysis and reporting. However, Toluna can also apply small adjustment weights to the final samples if necessary to bring them in line with national- or state-level demographics, behaviors, and attitudes. In this way, Toluna ensures that each quintile is proportionately represented in the final analysis.
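As a rough illustration of this third step and the light weighting it permits, the sketch below compares quintile shares in an achieved sample to the population benchmark and derives per-quintile adjustment weights; all proportions are hypothetical.

```python
# Sketch of the final balancing check: does each propensity quintile hold
# its population share? Proportions are hypothetical placeholders.
import pandas as pd

population_share = pd.Series({1: 0.20, 2: 0.20, 3: 0.20, 4: 0.20, 5: 0.20})
sample_share = pd.Series({1: 0.18, 2: 0.21, 3: 0.22, 4: 0.20, 5: 0.19})

# Post-stratification weight = population share / sample share. Values near
# 1.0 mean the sample already mirrors the population, so weighting is light;
# a quintile running far below target would instead trigger more recruiting.
weights = population_share / sample_share
print(weights.round(3))
```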


Even with such an extensive selection process, we want to ensure the national sample is balanced not only on general demographic traits, behaviors, and attitudes, but also on a question related to our fundamental topic of interest: connection with nature. To that end, we have directed Toluna to add the following stratification variable to the screening process. The question comes from the General Social Survey (GSS), a nationally representative probability survey of non-institutionalized English- and Spanish-speaking Americans, and was most recently asked on the GSS in 2014. The question is reproduced below, followed by a sketch of how it could serve as a screening benchmark.


We are faced with many problems in this country, none of which can be solved easily or inexpensively. On improving and protecting the environment, do you think we are spending too much money, too little money, or about the right amount?

( ) too little money

( ) about the right amount

( ) too much money

( ) don't know
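The supporting statement does not prescribe the mechanics of this screening step; one plausible implementation, sketched below with placeholder percentages (not actual 2014 GSS estimates), is a quota benchmark that closes response categories as they reach their target share of the sample.

```python
# Sketch of a quota benchmark on the environment-spending screening item.
# Benchmark shares are placeholders, not actual 2014 GSS estimates.
benchmark = {
    "too little": 0.55,
    "about right": 0.30,
    "too much": 0.10,
    "don't know": 0.05,
}
target_n = 4500  # national sample from Table 1
quota = {answer: round(share * target_n) for answer, share in benchmark.items()}

# Hypothetical running tally of completed interviews by answer.
completed = {"too little": 2400, "about right": 1350,
             "too much": 450, "don't know": 225}

for answer, cap in quota.items():
    status = "open" if completed[answer] < cap else "closed"
    print(f"{answer:>12}: {completed[answer]}/{cap} ({status})")
```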


Respondent attentiveness. During the surveys themselves, Toluna identifies and eliminates inattentive respondents. Based on the proposed questionnaire design, the following methods will be applied:

  • After collection of the first 300 interviews, the median length of interview will be calculated, and respondents who complete the interview in less than one-half the median value will be discarded from the data (a minimal sketch of this rule appears after the list).

  • Open-ended responses will be reviewed by Toluna’s data processing team and project manager. Responses found to be incoherent or not applicable to the question that was asked will be removed from the data.
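For concreteness, here is a minimal sketch of the stated speeder rule; the durations are hypothetical, and the production check would run inside Toluna's fielding system.

```python
# Sketch of the speeder rule: after the first 300 completes, compute the
# median interview length and drop completes faster than half that value.
import statistics

def flag_speeders(durations_sec, calibration_n=300):
    """Return indices of interviews completed in less than one-half the
    median duration of the first `calibration_n` interviews."""
    median = statistics.median(durations_sec[:calibration_n])
    cutoff = median / 2
    return [i for i, d in enumerate(durations_sec) if d < cutoff]

# Hypothetical durations in seconds for 360 completes.
durations = [1200, 1150, 480, 1300, 540, 1250] * 60
print(f"{len(flag_speeders(durations))} interviews flagged for removal")
```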


To protect respondents’ privacy, Toluna explains the following to prospective respondents:

“Toluna will not transfer to any third party the personal information that we collect from you except for narrow exceptions set forth in our privacy policy, located here https://us.toluna.com/Privacy, including transfer to third-party subcontractors, if required by law, or with your express consent. We will never sell or transfer your personal information to any third-party marketer.”


Estimation procedures. Non-probability sampling techniques do not allow for the estimation of sampling errors. Nonetheless, we are confident the techniques described above mitigate the impact of exclusion bias. Based on its experience with the type of sampling scheme we propose, Toluna finds it generally needs to screen about twice as many potential participants to match the population universe as it would to create a nonrepresentative sample.

All explanatory variables will be ordinal- or nominal-level variables. Because social science practice widely accepts Likert-style scales as interval-level data, variables with Likert-style metrics will be treated as such where appropriate. The team will examine evidence of biophilia using descriptive statistics that are intuitively appealing and interpretable for administrators and program managers (such as frequencies and measures of central tendency and dispersion). Where the data permit, the team will calculate correlation coefficients to explore potential relationships. Text analysis software (NVivo and SPSS Text Analytics for Surveys) will be used to help analyze and interpret open-ended responses. As analysis progresses, appropriate multivariate procedures, such as cluster analysis, will be employed to group respondents. Nested regression models will reveal the importance of certain traits while adjusting for others; regression analysis will also show potential interactions between socio-demographic diversity and expressed values for nature and the role of nature in perceptions of health. Where appropriate, nonparametric statistics will be reported (applicable to the nominal- and ordinal-level data in the survey).
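To make the nested-model strategy concrete, the sketch below fits a base model with socio-demographic controls and a fuller model that adds a nature-connection measure, then compares fit; all variable names and data are hypothetical stand-ins for the survey's actual items.

```python
# Sketch of nested regression models: socio-demographic controls first,
# then a nature-connection measure. Simulated data; hypothetical variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "health_rating": rng.integers(1, 6, n),      # 5-point Likert outcome
    "age": rng.integers(18, 90, n),
    "education": rng.integers(1, 6, n),
    "nature_connection": rng.integers(1, 6, n),  # 5-point Likert predictor
})

base = smf.ols("health_rating ~ age + education", data=df).fit()
full = smf.ols("health_rating ~ age + education + nature_connection",
               data=df).fit()

# The gain in R-squared from base to full indicates the contribution of
# nature connection after adjusting for the socio-demographic traits.
print(f"R2 base: {base.rsquared:.3f}   R2 full: {full.rsquared:.3f}")
```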


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


As described above, the pool of potential participants consists of individuals who have an email address (but who need not have a computer or Internet connection at home). Individuals contacted via email voluntarily click a link to participate in the survey. If the target number of completed surveys is not achieved after the initial invitation, additional invitation messages will be sent to nonrespondents. These follow-up messages will be identical to the original invitation; Toluna has found it gets better participation by re-sending the same message than by creating a separate “reminder” message. Up to two reminders will be sent to each nonrespondent, after which communication regarding the survey is terminated. If still more respondents are needed to fill the sample quotas, Toluna will send out additional invitations. This automated information collection greatly reduces contact and response burden on participants, along with the cost of data collection and entry, while using accepted survey research methodology. To help increase the response rate and the quality of the resulting data, the survey will be available in Spanish (Latin American) upon a respondent’s request.
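The contact cadence above reduces to a simple rule; the sketch below states it in code for illustration only, since the actual scheduling runs inside Toluna's platform.

```python
# Sketch of the invitation/reminder cadence: re-send the identical
# invitation to nonrespondents, at most twice, then stop.
MAX_REMINDERS = 2

def next_action(quota_met: bool, reminders_sent: int) -> str:
    if quota_met:
        return "stop: quota filled"
    if reminders_sent < MAX_REMINDERS:
        return "re-send original invitation"
    return "stop: reminder limit reached"

print(next_action(quota_met=False, reminders_sent=1))  # re-send
print(next_action(quota_met=False, reminders_sent=2))  # stop
```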


Toluna’s method does not measure participation rates and non-response in the traditional sense. Ultimately, non-response and participation are concerns because they can introduce error and bias to the sample. Given the steps we have described above, including the addition of the biophilia-related measure, we believe we have minimized error and bias to a reasonable level given the degree of accuracy needed for our purposes.


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


To ensure strong evidence for the bio-cultural model of biophilia—and to help ensure clarity of survey questions and minimize response burden—five tests of survey concept and methods were completed:


  • In late August 2011, Dr. Kellert and David J. Case, President of D.J. Case & Associates, fielded (at private expense) two carefully worded questions (two variables deemed likely candidates for dependent variables in the proposed survey) in Opinion Research Corporation’s CARAVAN telephone survey, a nationally projectable study of 1,000 adults 18 years of age and older (half male, half female) living in private households in the continental United States. Responses to these two variables were cross-tabulated by 10 socioeconomic variables measured in the same survey. The findings offered proof of concept for biophilia, affirmation of question wording for key dependent variables, and support for measuring selected independent (explanatory) socioeconomic variables in the proposed survey.


  • In June and early July 2015, 15 focus groups were conducted (OMB Control No. 1090-0011) to examine public sentiments toward nature and, in so doing, to examine proof of concept for eight biophilic values and allied variables, such as outdoor interests, activities, and knowledge. We used the focus group results to refine the national survey of adult Americans. In addition, the focus groups provided qualitative meaning and context for interpreting the quantitative survey. The focus groups were conducted in the following cities, with attention to representation of different racial/ethnic groupings (note: “general public” simply denotes that respondents were asked to participate regardless of racial/ethnic identification):


  1. Chicago IL (“general public” participants)

  2. Dallas TX 1 (Hispanic-Latino/a participants)

  3. Dallas TX 2 (Hispanic-Latino/a participants)

  4. Houston TX 1 (“general public” participants)

  5. Houston TX 2 (African American participants)

  6. Jacksonville FL 1 (“general public” participants)

  7. Jacksonville FL 2 (African American participants)

  8. Los Angeles CA (Asian American participants)

  9. Miami FL 1 (Hispanic-Latino/a participants)

  10. Miami FL 2 (Hispanic-Latino/a participants)

  11. New York City NY (“general public” participants)

  12. San Antonio TX 1 (“general public” participants)

  13. San Antonio TX 2 (Hispanic-Latino/a participants)

  14. Tampa FL 1 (Asian American participants)

  15. Tampa FL 2 (Hispanic-Latino/a participants)


  • As a check for respondent understanding of survey questions, and as confirmation of time for survey completion (estimated at ~20 minutes by the research team), a pilot test of the final-draft survey was conducted with five content experts unfamiliar with the study and four members of the general public (N = 9); respondent experiences with the pilot test were solicited. The content experts indicated that the survey items were conceptually solid and clear. Members of the general public affirmed that they understood the survey, and found the survey items/questions interesting and thought-provoking.


  • The research team asked online-programming staff from Toluna (the online survey subcontractor) to estimate the completion time of the online survey. Toluna staff estimated completion at ~20 minutes, a meaningful confirmation given that the firm’s pricing varies with estimated completion time.


  • The research team submitted the research design (including consent to participate and human subject protocols) to Heartland Institutional Review Board (IRB), which on May 26, 2015, approved and classified the survey as follows: “There is no more than minimal risk to the subjects.”


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The individual directly responsible for information collection, analysis, and report writing is:


Dr. Stephen R. Kellert

Tweedy Ordway Professor Emeritus

Yale University

School of Forestry & Environmental Studies

195 Prospect Street

New Haven, Connecticut

203-605-9126

[email protected]


Co-managers of the survey are:


David Case

President

D.J. Case and Associates

317 E. Jefferson Blvd.

Mishawaka, Indiana 46545

574-258-0100

[email protected]


Phil Seng

Vice-president

D.J. Case & Associates

317 E. Jefferson Blvd.

Mishawaka, Indiana 46545

574-258-0100

[email protected]


The following social scientists reviewed question wording, statistical design, and will collaborate in analysis of the survey:


Dr. Daniel Escher

Social Scientist

D.J. Case and Associates

317 E. Jefferson Blvd.

Mishawaka, Indiana 46545

574-258-0100

[email protected]


Dr. Jessica Mikels-Carrasco

Social Scientist

D.J. Case and Associates

317 E. Jefferson Blvd.

Mishawaka, Indiana 46545

574-258-0100

[email protected]


Dr. Daniel J. Witter

Research Director and Social Scientist

D.J. Case and Associates

1160 Branch Street

Holts Summit, Missouri 65043

573-896-5375

[email protected]


The following program managers, administrators, statisticians, and human dimensions research specialists were involved in reviewing the survey concept and rationale, question wording, statistical design, and analysis planning for the survey.


From the Disney Corporation:

  • Beth Stevens, SVP, Environmental Conservation - 407-560-4551

  • Jackie Ogden, Ph.D., VP, Animals, Science and Environment

  • Kim Sams, Director, Corporate Citizenship - 407-560-4541


From the Florida Fish and Wildlife Conservation Commission (FWC):

  • Ann Forstchen – 727-896-8626

  • Jack Daugherty – 850-488-6251

  • Diane Eggeman – 850-488-3831

  • Jim Estes – 850-487-0554

  • Judy Gillan – 850-921-4484

  • Doc Kokol – 850-488-9327

  • Jerri Lindsey – 850-410-4951

  • Scott Sanders – 850-617-9548

  • Rae Waddell – 850-488-5878

  • Bob Wattendorf – 850-488-0520


From the Texas Parks and Wildlife Department (TPWD):

  • Carter Smith, Director – 512-389-4802

  • Brent Leisure – 512-389-4866

  • Craig Bonds – 512-389-4643

  • Chris Holmes – 512-389-4880

  • Darcy Bontempo – 512-389-4574

  • Darlene Lewis – 512-389-8745

  • David Buggs – 512-389-8595

  • David Terre – 512-389-4855

  • Eddie McKenna – 512-389-8696

  • Jennifer Bristol – 512-389-8143

  • Jeremy Leitz – 512-389-4333

  • John Davis – 512-389-8587

  • Johnnie Smith – 512-389-8060

  • John Taylor – 512-389-4338

  • Josh Havens – 512-389-4557

  • Kelly Dziekan – 512-389-8525

  • Ky Harkey – 512-705-9388

  • Nancy Herron – 512-389-4362


From the USFWS:

Dan Ashe

Director

U.S. Fish and Wildlife Service

202-208-4717

[email protected]


Steve Chase

Chief, Division of Education Outreach

National Conservation Training Center

U.S. Fish and Wildlife Service

304-876-7266

[email protected]


Jay Slack

Director

National Conservation Training Center

U.S. Fish and Wildlife Service

304-876-7265

[email protected]


The English survey was translated into Spanish (Latin American Spanish) by:


Ms. Diana Carrasco, MS

Adjunct Professor

School of Business

Miami Dade College

[email protected]


The Spanish version was independently back-translated into English, to verify consistency and agreement with the original, by:


Mr. Waldo A. Mikels-Carrasco, MA

Director of Community & Population Health Development

220 W. Colfax Avenue, Suite 300

South Bend, Indiana 46601

574-968-4340

[email protected]


1 Brand, Jennie E., and Yu Xie. “Who Benefits Most from College?: Evidence for Negative Selection in Heterogeneous Economic Returns to Higher Education.” American Sociological Review 75, no. 2 (April 1, 2010): 273–302. doi:10.1177/0003122410363567.

Ghitza, Yair, and Andrew Gelman. “Deep Interactions with MRP: Election Turnout and Voting Patterns Among Small Electoral Subgroups.” American Journal of Political Science 57, no. 3 (July 2013): 762–76. doi:10.1111/ajps.12004.

Terhanian, George, and John Bremer. “A Smarter Way to Select Respondents for Surveys?” International Journal of Market Research 54, no. 6 (2012): 751–80.

Wang, Wei, David Rothschild, Sharad Goel, and Andrew Gelman. “Forecasting Elections with Non-Representative Polls.” International Journal of Forecasting 31, no. 3 (July 2015): 980–91. doi:10.1016/j.ijforecast.2014.06.001.

2 Austin, Peter C. “An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies.” Multivariate Behavioral Research 46, no. 3 (May 31, 2011): 399–424. doi:10.1080/00273171.2011.568786.

Rosenbaum, Paul R., and Donald B. Rubin. “Reducing Bias in Observational Studies Using Subclassification on the Propensity Score.” Journal of the American Statistical Association 79, no. 387 (September 1984): 516–24. doi:10.1080/01621459.1984.10478078.


