BUREAU OF CONSUMER FINANCIAL PROTECTION
PAPERWORK REDUCTION ACT SUBMISSION
INFORMATION COLLECTION REQUEST
SUPPORTING STATEMENT PART B
FINANCIAL WELL-BEING NATIONAL SURVEY
(OMB CONTROL NUMBER: 3170-XXXX)

PART B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Respondent Universe and Selection Methods
Potential Respondent Universe
The potential respondent universe for the Financial Well-Being Survey consists of all noninstitutionalized
adults in the United States (defined as the 50 states and the District of Columbia). To support the survey's
sampling and analysis goals, the population is stratified by age, household income relative to the Federal
poverty level, and race/ethnicity. Exhibit 1, below, shows the U.S. adult population by age, race/ethnicity,
and household income relative to the Federal poverty level.
Exhibit 1. U.S. Adult Population by Age, Race/Ethnicity, and Household Income
Less than 200% of Federal poverty level

| Race/ethnicity     | 18-34      | 35-54      | 55-61      | 62-74      | 75+        | Total       |
|--------------------|------------|------------|------------|------------|------------|-------------|
| Hispanic           | 7,621,241  | 5,993,813  | 1,031,205  | 1,154,647  | 620,041    | 16,420,947  |
| Black non-Hispanic | 4,818,416  | 4,104,646  | 1,171,783  | 1,272,421  | 702,422    | 12,069,688  |
| White non-Hispanic | 12,948,210 | 11,224,361 | 3,792,796  | 5,493,038  | 5,077,536  | 38,535,941  |
| Other non-Hispanic | 2,243,413  | 1,750,736  | 452,178    | 549,842    | 310,375    | 5,306,544   |
| Total              | 27,631,280 | 23,073,556 | 6,447,962  | 8,469,948  | 6,710,374  | 72,333,120  |

More than 200% of Federal poverty level

| Race/ethnicity     | 18-34      | 35-54      | 55-61      | 62-74      | 75+        | Total       |
|--------------------|------------|------------|------------|------------|------------|-------------|
| Hispanic           | 6,678,456  | 7,040,974  | 1,559,624  | 1,422,724  | 540,292    | 17,242,070  |
| Black non-Hispanic | 4,182,802  | 6,024,483  | 1,790,310  | 1,745,523  | 637,761    | 14,380,879  |
| White non-Hispanic | 26,340,876 | 43,448,441 | 16,283,681 | 19,810,279 | 9,465,114  | 115,348,391 |
| Other non-Hispanic | 3,834,520  | 4,836,332  | 1,258,133  | 1,283,618  | 474,641    | 11,687,244  |
| Total              | 41,036,654 | 61,350,230 | 20,891,748 | 24,262,144 | 11,117,808 | 158,658,584 |

Total

| Race/ethnicity     | 18-34      | 35-54      | 55-61      | 62-74      | 75+        | Total       |
|--------------------|------------|------------|------------|------------|------------|-------------|
| Hispanic           | 14,299,697 | 13,034,787 | 2,590,829  | 2,577,371  | 1,160,333  | 33,663,017  |
| Black non-Hispanic | 9,001,218  | 10,129,129 | 2,962,093  | 3,017,944  | 1,340,183  | 26,450,567  |
| White non-Hispanic | 39,289,086 | 54,672,802 | 20,076,477 | 25,303,317 | 14,542,650 | 153,884,332 |
| Other non-Hispanic | 6,077,933  | 6,587,068  | 1,710,311  | 1,833,460  | 785,016    | 16,993,788  |
| Total              | 68,667,934 | 84,423,786 | 27,339,710 | 32,732,092 | 17,828,182 | 230,991,704 |

Note: Estimates from the 2009-2013 American Community Survey Public Use Microdata Sample.

Sample
The sampling frame will consist of panelists on the GfK KnowledgePanel (hereafter "GfK panel"), a
probability-based, non-volunteer Internet panel. GfK's recruitment includes non-Internet households, which
are provided with the means to complete surveys (a laptop and free Internet access). Because this sampling
frame was itself the result of sampling, the sampling procedures for recruitment into the GfK panel are
described next. Panelists were recruited through random digit dialing (RDD) using a dual-frame landline
and cell phone design through 2009; address-based sampling (ABS) supplemented the RDD frame in 2009
before replacing it entirely in 2010. The RDD sampling scheme used a stratified design: one stratum had a
higher concentration of black and Hispanic households relative to national estimates from the 2000 Census
and the other had a lower concentration relative to Census estimates. Telephone numbers from the first
stratum were selected at approximately twice the rate of those from the second stratum. The ABS sample is
supplemented by RDD recruitment targeting high-incidence Hispanic areas.
The desired sample of 6,115 completed surveys consists of a nationally representative sample of 5,000 with
respect to age, sex, and household income (less than 200% of the Federal poverty line and 200% or more of
the Federal poverty line), an additional oversample of 1,000 adults aged 62 and above, and a field test (n=115).
The oversample is designed to yield a greater number of older adults, providing greater statistical power
for analyses of this population. The desired number of respondents per stratum is shown in Exhibit 2,
below, based on the distribution of the U.S. population with respect to age, race/ethnicity, and household
income shown in Exhibit 1. Sample sizes other than the nationally representative sample of 5,000 and the
age 62 and above oversample are not fixed quotas, and the final number of completed surveys will depend
on response rates. The approach taken to achieve these targets is discussed next.
Exhibit 2. Desired Sample Sizes by Age, Race/Ethnicity, and Household Income

Less than 200% of Federal poverty level

| Race/ethnicity     | 18-34 | 35-54 | 55-61 | 62-74 | 75+ | Total |
|--------------------|-------|-------|-------|-------|-----|-------|
| Hispanic           | 165   | 130   | 22    | 48    | 26  | 391   |
| Black non-Hispanic | 104   | 89    | 25    | 53    | 29  | 300   |
| White non-Hispanic | 280   | 243   | 82    | 228   | 210 | 1,043 |
| Other non-Hispanic | 49    | 38    | 10    | 23    | 13  | 133   |
| Total              | 598   | 500   | 139   | 352   | 278 | 1,867 |

More than 200% of Federal poverty level

| Race/ethnicity     | 18-34 | 35-54 | 55-61 | 62-74 | 75+ | Total |
|--------------------|-------|-------|-------|-------|-----|-------|
| Hispanic           | 145   | 152   | 34    | 59    | 22  | 412   |
| Black non-Hispanic | 91    | 130   | 39    | 72    | 26  | 358   |
| White non-Hispanic | 570   | 940   | 352   | 821   | 392 | 3,075 |
| Other non-Hispanic | 83    | 105   | 27    | 53    | 20  | 288   |
| Total              | 889   | 1,327 | 452   | 1,005 | 460 | 4,133 |

Total

| Race/ethnicity     | 18-34 | 35-54 | 55-61 | 62-74 | 75+ | Total |
|--------------------|-------|-------|-------|-------|-----|-------|
| Hispanic           | 310   | 282   | 56    | 107   | 48  | 803   |
| Black non-Hispanic | 195   | 219   | 64    | 125   | 55  | 658   |
| White non-Hispanic | 850   | 1,183 | 434   | 1,049 | 602 | 4,118 |
| Other non-Hispanic | 132   | 143   | 37    | 76    | 33  | 421   |
| Total              | 1,487 | 1,827 | 591   | 1,357 | 738 | 6,000 |
Note: Exhibit does not include the field test sample (n=115).


In order to achieve the desired sample size, a general population sample (i.e., with equal probability of
selection) designed to yield 5,000 completed surveys will be drawn from GfK panelists. The sampling
scheme will account for variation in cooperation rates by panelist demographics (e.g., lower cooperation
rates for low socioeconomic status, black, and Latino panelists) by drawing sample in inverse proportion
to the expected response rates of demographic groups, where the target number of completed surveys from
low-income, black, and Hispanic households is set greater than their representation in the general population.
This approach aims to ensure that the final sample of completed surveys contains at least as many surveys
from black and Hispanic panelists, and from panelists below 200% of the Federal poverty level, as would be
expected given their representation in the U.S. population. Given the analytic goals of this study, achieving
a more than representative number of completed surveys from these populations presents fewer problems
than achieving too few. The sampling scheme is therefore designed with greater net probabilities of
selection for low-income, black, and Hispanic panelists, taking into account the general population
sample and oversample. As described previously, there will be an additional oversample of adults aged 62
and above designed to yield an additional 1,000 completed surveys.
Although weights allow the sample to match the U.S. population on observable characteristics, as with all
survey methods it remains possible that non-coverage or non-response results in differences between the
sample population and the U.S. population that are not corrected by weighting. Of particular concern to
this survey effort would be if GfK panel members, as a group of people who have agreed to be part of an
Internet panel, had systematically biased perceptions of their financial well-being. One might worry, for
example, that their willingness to participate in a research panel for only very small monetary incentives
could reflect downwardly biased financial well-being. However, existing evidence provides some
reassurance on this point.
The question “How would you rate your household’s financial situation today?” was posed in 2014 in both
the Pew Survey of American Family Finances via the GfK panel, and in the Gallup Daily tracking telephone
survey via random digit dialing of cellphones and landlines in all 50 states and the District of Columbia.
Detailed information on responses to this question in both surveys is included in Table 1, below. Results
were quite similar across the two sources (both “excellent” and “fair” response percentages from GfK are
within the margin of error for Gallup, and “good” and “poor” responses are only a few percentage points
outside the margin of error). The fact that the GfK responses are actually slightly more positive than the
Gallup responses runs counter to the hypothesis that GfK panel members have a more negative financial
mindset than the general population.
Expected Response Rates
The completion rate of the survey (i.e., the proportion of invited panelists who complete the survey) is
expected to be between 55% and 65%, based on previous GfK surveys; the survey itself has not been
previously administered. Because the GfK panel is a probability-based Internet panel, the cumulative
response rate must include recruitment into the panel. Following the American Association for Public
Opinion Research (2015), the additional stages at which nonresponse occurs are recruitment into the panel
(measured by the recruitment rate: RECR), the profile stage where demographic information is collected on
panelists (measured by the profile rate: PROR), and the response to the invitation to complete a particular
survey (measured by the completion rate: COMR). The cumulative response rate (CUMR) is calculated as
RECR × PROR × COMR. These rates are estimated as recruitment rate ≈ 14%, profile rate ≈ 60% to 65%,
and completion rate ≈ 55% to 65%. As can be seen, the bulk of nonresponse for the GfK panel occurs at the
recruitment and profile stages: the maximum possible cumulative response rate is 9.1% (CUMR = 0.14 ×
0.65 × 1.00). Assuming a 65% completion rate, a cumulative response rate of 5.9% would be expected
(CUMR = 0.14 × 0.65 × 0.65).
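As a worked illustration, the following minimal Python sketch reproduces the cumulative response rate
arithmetic above; the rate values are the estimates quoted in the text, not new survey results.

```python
# Worked check of the AAPOR (2015) cumulative response rate arithmetic:
# CUMR = RECR x PROR x COMR. Rates are the estimates quoted in the text.

def cumulative_response_rate(recr: float, pror: float, comr: float) -> float:
    """Cumulative response rate for a probability-based panel survey."""
    return recr * pror * comr

RECR = 0.14       # recruitment rate into the GfK panel
PROR_HIGH = 0.65  # upper end of the profile rate range (60% to 65%)
COMR_HIGH = 0.65  # upper end of the completion rate range (55% to 65%)

# Maximum possible CUMR: every invited panelist completes (COMR = 1.00).
print(f"{cumulative_response_rate(RECR, PROR_HIGH, 1.00):.1%}")       # 9.1%
# Expected CUMR assuming a 65% completion rate.
print(f"{cumulative_response_rate(RECR, PROR_HIGH, COMR_HIGH):.1%}")  # 5.9%
```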


2. Information Collection Procedures
Statistical Methodology for Stratification and Sample Selection
The following methods will be used for sample selection:
1. Field test sample. Sample will be drawn sufficient to yield 100 completed surveys in English and
15 in Spanish. Sample for the English surveys will be drawn with equal probability from GfK’s
English-language panelists and sample for the Spanish surveys will be drawn with equal probability
from GfK’s Spanish-language panelists.
2. General population sample. This sample is designed to yield 5,000 completed surveys. A sample
will be drawn from the GfK panel with probabilities of selection inverse to the expected completion
rate for sociodemographic groups; i.e., the number of panelists drawn from the h-th group is
n_h / r̂_h, where n_h is the desired number of completed surveys from the h-th group and r̂_h is the
expected completion rate of that group (see the sketch following this list). Were the expected
completion rates to match the actual completion rates, weights for probability of selection would be
balanced by nonresponse weights, yielding equal weights for all groups and minimizing design effects.
That is, groups with lower completion rates will have lower weights for probability of selection,
because more panelists are included in the sample, but higher weights for nonresponse because of
the lower completion rates. Conversely, groups with higher expected completion rates will have
higher weights for probability of selection, because fewer panelists will be selected, but lower
weights for nonresponse due to the higher completion rates. In practice, completion rates are
unlikely to perfectly match expectations, but moderate departures from expectations will still yield
a very efficient sampling scheme. Given GfK's knowledge of their panel, expected completion rates
should be quite accurate.
3. Oversample of panelists aged 62 and above. An oversample will be drawn from members of the
GfK panel aged 62 and above. This sample will yield 1,000 completed surveys.
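To make the draw-size rule in step 2 concrete, here is a minimal Python sketch of the n_h / r̂_h
allocation. The group labels and expected completion rates are illustrative assumptions for exposition,
not GfK's actual panel figures.

```python
# Illustrative sketch of the n_h / r_hat_h draw-size rule from step 2.
# Group labels and expected completion rates are assumed for exposition;
# they are not GfK's actual panel figures.
import math

desired_completes = {          # n_h: desired completed surveys per group
    "low-income Hispanic": 391,
    "low-income black": 300,
    "all other groups": 4309,  # remainder of the 5,000-case target
}
expected_completion = {        # r_hat_h: expected completion rates (assumed)
    "low-income Hispanic": 0.45,
    "low-income black": 0.48,
    "all other groups": 0.62,
}

for group, n_h in desired_completes.items():
    draw = math.ceil(n_h / expected_completion[group])  # panelists to invite
    print(f"{group}: draw {draw} panelists to expect {n_h} completes")
```

If realized completion rates match the expectations, the selection and nonresponse weights cancel as
described in step 2, which is what keeps the design effect small.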
Estimation Procedure
In order to obtain valid survey estimates, estimation will be done using properly weighted survey data. The
weight to be applied to each respondent is a function of the overall probability of selection, and appropriate
nonresponse and post-stratification ratio adjustments.
Weighting takes place in multiple stages. The first two stages adjust for the probability of selection of the
GfK panel from the RDD or ABS frame:
1. Base weights are calculated as the inverse probability of selection.
2. A panel demographic post-stratification weight is calculated as an additional adjustment based on
demographic distributions from the most recent data from the Current Population Survey in order to
adjust the sample for sources of sampling and nonsampling error (i.e., coverage error and
nonresponse error). This weighting adjustment is applied prior to the selection of any client sample
from the GfK panel, and these weights are used in the stratified, weighted selection procedure for
drawing samples from the panel.
The next stage adjusts for probability of selection and nonresponse, and ensures the representativeness of
the specific sample for the survey:
3. Once the sample has been drawn, fielded and the data compiled from all GfK panel respondents, a
sample-specific poststratification process is carried out to adjust for survey nonresponse and for
elements related to the study-specific sample design, including the age 62+ oversample. An
iterative raking procedure starting with the panel base weight is used for this task. Demographic and
geographic distributions for the population ages 18+ from the most recent Current Population
Survey provide the majority of the benchmarks for this adjustment. The demographic variables
used are gender, age, race/ethnicity, education, U.S. Census region, metropolitan area, Internet
access, and language spoken at home (English/Spanish).
Up to this point, all weights are provided by GfK. The final stage is an additional adjustment calculated to
calibrate the survey data to U.S. sociodemographic benchmarks. Given the particular interest in analyses of
the age 62+ population, we will calibrate separately for respondents below age 62 and those aged 62 and
above, ensuring that both portions of the sample are representative when analyzed alone or together.
Calibration factors will include age, sex, race/ethnicity, region, and education within these larger groupings,
using target values from American Community Survey data. We will also adjust for Internet use, with
weighting targets drawn from National Health Interview Survey Internet items, due to the expected
under-representation of non-Internet households in the GfK panel. We plan to provide two sets of
weights: one for analysis of the 5,000 completed surveys of the nationally representative sample alone and
one for analysis of the 6,000 completed surveys of the nationally representative sample and age 62+
oversample combined.
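As an illustration of the iterative raking adjustment described in step 3, the Python sketch below rakes
weights to two margins (sex and an age 62+ indicator). The data and population margins are invented for
exposition; the production adjustment uses the full CPS and ACS benchmark sets listed above.

```python
# Minimal sketch of iterative raking (iterative proportional fitting) to
# two margins. Data and margins are invented for exposition; the actual
# adjustment uses the full CPS/ACS benchmark sets described in the text.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
sex = rng.integers(0, 2, n)    # 0 = male, 1 = female (illustrative)
age62 = rng.integers(0, 2, n)  # 1 = aged 62 and above (illustrative)
weights = np.ones(n)           # start from a flat base weight

margins = [
    (sex,   {0: 0.49 * n, 1: 0.51 * n}),  # assumed sex benchmark
    (age62, {0: 0.78 * n, 1: 0.22 * n}),  # assumed age benchmark
]

for _ in range(25):            # iterate until both margins are matched
    for variable, targets in margins:
        for value, target in targets.items():
            cell = variable == value
            weights[cell] *= target / weights[cell].sum()

print(f"weighted share female:   {weights[sex == 1].sum() / n:.2f}")  # ~0.51
print(f"weighted share age 62+:  {weights[age62 == 1].sum() / n:.2f}")  # ~0.22
```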
Sampling Error
For the general population sample of 5,000 completed surveys, a margin of error of 1.7% is expected at the
95% confidence level. This margin of error is based on the full sample of 5,000 for a statistic at 50%.
Margins of error for subsamples will be greater. We assume a conservative design effect of 1.5—higher
than is typical for the panel—to account for the fifth weighting step and the impact of differences between
expected and actual completion rates for sociodemographic groups. The margin of error for the 6,000
completed surveys of the nationally representative sample and age 62+ oversample is expected to be 1.6%,
accounting for the design effect introduced by the need to weight down age 62+ respondents to ensure they
are represented correctly in the weighted sample. The margin of error for the age 62+ respondents from the
general population survey and oversample is expected to be 2.6%, assuming the same design effect of 1.5.
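These figures can be checked with the standard formula MOE = 1.96 × sqrt(deff × p(1−p)/n) at p = 0.5,
as in the sketch below. The age 62+ count of 2,095 is taken from the Exhibit 2 totals (1,357 aged 62-74
plus 738 aged 75+); the combined 6,000-case figure implies a slightly larger design effect, as noted above.

```python
# Worked check of the quoted margins of error: for a proportion at 50%,
# MOE = 1.96 * sqrt(deff * p * (1 - p) / n) at the 95% confidence level.
import math

def margin_of_error(n: int, deff: float = 1.5, p: float = 0.5) -> float:
    return 1.96 * math.sqrt(deff * p * (1 - p) / n)

print(f"{margin_of_error(5000):.1%}")  # general population sample: 1.7%
print(f"{margin_of_error(2095):.1%}")  # age 62+ respondents (Exhibit 2): 2.6%
# The quoted 1.6% for the combined 6,000 cases implies a design effect
# near 1.6, reflecting the down-weighting of the age 62+ oversample:
print(f"{margin_of_error(6000, deff=1.6):.1%}")  # combined sample: 1.6%
```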

3. Methods to Maximize Response Rates and Address Issues of Nonresponse
Methods to Maximize Response Rates
The Financial Well-Being National Survey employs a number of strategies to maximize response rates,
detailed below.
An advance letter will be mailed on CFPB letterhead to panel members. The letter will describe the reasons
for the data collection request in English and Spanish (one side per language) and will be signed by the
CFPB Director or other senior official. Advance letters and postcards are associated with increased
response rates for web surveys (Bertoni et al. 2015; Crawford et al. 2004; Dykema et al. 2011; Harmon,
Westin, and Levin 2005; Kaplowitz, Hadlock, and Levine 2004), mirroring extensive literature documenting
the positive effects of advance letters on response rates to both self-administered (Edwards et al. 2002) and
interviewer-administered surveys (de Leeuw et al. 2007; Goldstein and Jennings 2002; Hembroff et al.
2005; Link and Mokdad 2005; Shettle and Mooney 1999). The status of CFPB as a U.S. government
agency increases the legitimacy of the survey request, which is in turn associated with higher response rates
(Fox, Crask, and Kim 1988; Goyder 1987; Groves, Cialdini, and Couper 1992).
Following the advance letter, an email invitation and three reminder emails will be sent to selected panelists
using the customized subject line “Financial Well-Being National Survey” across all communications.
Typical GfK panel survey invitations are not customized and use the subject line “Your Latest GfK Panel
Survey.” Linking the subject line to the advance letter may increase response rates. See Appendix B for a
copy of the advance letter, email invitation, and three reminder emails in English, and Appendix E for these
materials in Spanish.
GfK recommends a three-week field period to allow sufficient time for response. For this research, the
field period will be up to four weeks to ensure that panelists have sufficient time to respond to the survey
invitation and complete the survey.
GfK operates an incentive program through a point system to encourage participation and build member
loyalty. Members can redeem their points for cash, merchandise, gift cards, or game entries. Members may
also be entered into special sweepstakes with cash rewards and other prizes. All incentives for this survey
are part of the standard GfK point system; no additional survey-specific incentives are proposed.
The average survey length of the Financial Well-Being National Survey is estimated at 20 minutes. A
sizable body of evidence finds that longer surveys are associated with lower response rates and lower
quality data, due primarily to respondent fatigue (Crawford, Couper, and Lamias 2001; Kaplowitz et al.
2012; Marcus et al. 2007; Peytchev 2009; Vehovar and Čehovin 2014a, 2014b; Yan et al. 2011). To
maximize response and data quality, best practices suggest an online survey of no more than 20 minutes in
length. See Appendix A for a copy of the Financial Well-Being National Survey in English and Appendix D
for a copy in Spanish.
Nonresponse
Issues of nonresponse will be addressed as follows. During the survey’s field period, the number of
completed surveys will be tracked weekly against desired sample sizes. If a likely shortfall in response
for a particular subpopulation of interest emerges, additional sample may be selected in order to ensure that
sufficient sample is available for analysis. The raking and calibration procedures described in Section 2
ensure that the weighted sample is comparable to U.S. population norms. The study’s report will include an
analysis of nonresponse for the GfK panel sample in the form of tabulations of demographic variables by
respondent or nonrespondent status. Because the sample is drawn from a panel, extensive demographic
information is available for both respondents and nonrespondents.
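A small Python sketch of what this weekly tracking could look like follows; the completed-survey counts
are hypothetical, and the targets are taken from Exhibit 2's age-group totals.

```python
# Hypothetical weekly tracking of completed surveys against the desired
# sample sizes. Targets follow Exhibit 2's age-group totals; the counts
# of completes are invented for illustration.
FIELD_WEEKS = 4
WEEKS_ELAPSED = 2

targets = {"18-61": 3905, "62-74": 1357, "75+": 738}
completes = {"18-61": 1980, "62-74": 540, "75+": 230}  # hypothetical tally

for stratum, target in targets.items():
    expected_so_far = target * WEEKS_ELAPSED / FIELD_WEEKS
    pace = completes[stratum] / expected_so_far
    status = "on pace" if pace >= 1.0 else "consider supplemental sample"
    print(f"{stratum}: {completes[stratum]}/{target} completes "
          f"({pace:.0%} of expected pace) - {status}")
```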

4. Testing of Procedures or Methods
The survey research plan and drafts of the data collection instruments have been reviewed by Bureau
personnel, Abt Associates staff, other project team members, and the external research advisors to ensure
that the survey contains the correct measures to meet the research objectives. Additionally, a pretest of the
survey was conducted with nine individuals to ensure that the questions are clear, the survey flows well, and
that it takes no longer than 20 minutes to complete. The two new scales to be used in the survey – financial
well-being and financial ability – have been extensively pre-tested using both cognitive interviewing and
quantitative analysis of response patterns, under a previous OMB clearance, OMB No. 3170-0043. Most
other items were selected or adapted from existing, generally accepted scales and published survey
instruments. See Appendix C for a listing of the survey items and their source.
Prior to the survey launch, a field test will be conducted to ensure that all elements of the survey function as
intended. We will conduct a robust field test of 100 completed English-language and 15 Spanish-language
surveys. The field time for the field test will be less than one week and will include mailing advance letters.
The field test will ensure that the survey instrument functions as designed and that other procedures work as
intended. Key outcomes from the field test will include unit and item nonresponse rates. Any issues
identified with the survey during the field test will be addressed, including programming and testing any
changes to the instrument. This field test is not envisioned to test or change the nature of the survey
questions or to alter the purpose of the collection. Rather, it will be a test of the mechanics of the survey
collection. However, should the field test result in the need to substantively change questions, the Bureau
will resubmit the instrument to OMB for approval prior to launching the full survey.

5. Contact Information for Statistical Aspects of the Design
The Bureau will work with the contractor, Abt Associates, to conduct the proposed data collection.
Genevieve Melford, Senior Research Analyst in the Office of Financial Education, serves as the Government
Technical Representative. She can be reached at 202-435-7696 or at [email protected]. The
study’s Principal Investigator is Dr. Dee Warmath of the University of Wisconsin-Madison. Dr. Warmath
can be reached at 608-262-2312 or [email protected].
Development of the survey research plan, administration of the data collection, and analysis and reporting
will be overseen by Abt Associates (statistical and research contractor) and its subcontractors: the
University of Wisconsin-Madison, Abt SRBI, and GfK. Members of this research team include:
Donna DeMarco – Project Director
Senior Associate
Abt Associates
55 Wheeler Street
Cambridge MA 02138
1-617-349-2322
[email protected]
Dee Warmath, Ph.D. – Principal Investigator
Assistant Professor
Department of Consumer Science
School of Human Ecology – University of Wisconsin-Madison
4222 Nancy Nicholas Hall
1300 Linden Drive
Madison, WI 53706
1-608-262-2312
[email protected]
Judy Geyer, Ph.D. – Director of Analysis
Associate Scientist
Abt Associates
55 Wheeler Street
Cambridge MA 02138
1-617-520-2962
[email protected]
Benjamin Phillips, Ph.D. – Data Collection Task Leader
Vice President
Abt SRBI
55 Wheeler Street
Cambridge, MA 02138
1-617-386-2609
[email protected]

References
American Association for Public Opinion Research. 2015. “Standard Definitions: Final Dispositions of Case
Codes and Outcome Rates for Surveys.” 8th ed. AAPOR, Deerfield, IL.


Bertoni, Nick, Andrew Burkey, Molly Caldaro, Scott Keeter, Charles DiSogra, and Kyley McGeeney. 2015.
“Advance Postcard Mailing Improves Web Panel Survey Participation.” Paper presented at the annual
conference of the American Association for Public Opinion Research, Hollywood, FL.
Crawford, Scott D., Mick P. Couper, and Mark J. Lamias. 2001. “Web Surveys: Perceptions of Burden.”
Social Science Computer Review 19:146-62.
Crawford, Scott D., Sean E. McCabe, Bob Saltz, Carol J. Boyd, Bridget Freisthler, and Mallie J. Paschall.
2004. “Gaining Respondent Cooperation in College Web-Based Alcohol Surveys: Findings from
Experiments at Two Universities.” Paper presented at the 59th Annual Conference of the American
Association for Public Opinion Research, Phoenix, AZ, May.
De Leeuw, Edith, Mario Callegaro, Joop Hox, Elly Korendijk, and Gerty Lensvelt-Mulders. 2007. “The
Influence of Advance Letters on Response in Telephone Surveys: A Meta-Analysis.” Public Opinion
Quarterly 71:413-43.
Dykema, Jennifer, John Stevenson, Brendan Day, Sherrill L. Sellers, and Vence L. Bonham. 2011. “Effects
of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of
Physicians.” Evaluation & the Health Professions 34:434-47.
Edwards, Phil, Ian Roberts, Mike Clarke, Carolyn DiGuiseppi, Sarah Pratap, Reinhard Wentz, and Irene
Kwan. 2002. “Increasing Response Rates to Postal Questionnaires: Systematic Review.” British
Medical Journal 324:1883-85.
Fox, Richard J., Melvin R. Crask, and Jonghoon Kim. 1988. “Mail Survey Response Rate: A Meta-Analysis
of Selected Techniques for Inducing Response.” Public Opinion Quarterly 52:467-91.
Goldstein, Kenneth M. and M. Kent Jennings. 2002. “The Effect of Advance Letters on Cooperation in a
List Sample Telephone Survey.” Public Opinion Quarterly 66:608-17.
Goyder, John. 1987. The Silent Minority: Nonrespondents in Sample Surveys. Boulder, CO: Westview.
Groves, Robert M., Robert B. Cialdini, and Mick P. Couper. 1992. “Understanding the Decision to
Participate in a Survey.” Public Opinion Quarterly 56:475-95.
Harmon, Michele A., Elizabeth C. Westin, and Kerry Y. Levin. 2005. “Does Type of Pre-Notification
Affect Web Survey Response Rates?” Paper presented at the 60th Annual Conference of the American
Association for Public Opinion Research, Miami Beach, May.
Hembroff, Larry A., Debra Rusz, Ann Rafferty, Harry McGee, and Nathaniel Ehrlich. 2005. “The Cost-
Effectiveness of Alternative Advance Mailings in a Telephone Survey.” Public Opinion Quarterly
69:232-45.
Kaplowitz, Michael D., Timothy D. Hadlock, and Ralph Levine. 2004. “A Comparison of Web and Mail
Survey Response Rates.” Public Opinion Quarterly 68:94-101.
Kaplowitz, Michael D., Frank Lupi, Mick P. Couper, and Laurie Thorp. 2012. “The Effect of Invitation
Design on Web Survey Response Rates.” Social Science Computer Review 30:339-49.
Link, Michael W. and Ali Mokdad. 2005. “Advance Letters as a Means of Improving Respondent
Cooperation in Random Digit Dial Studies.” Public Opinion Quarterly 69:572-87.
Marcus, Bernd, Michael Bosnjak, Steffen Lindner, Stanislav Plischenko, and Astrid Schütz. 2007.
“Compensating for Low Topic Interest and Long Surveys.” Social Science Computer Review 25:372-83.
Peytchev, Andy. 2009. “Survey Breakoff.” Public Opinion Quarterly 73:74-97.

Shettle, Carolyn and Geraldine Mooney. 1999. “Monetary Incentives in U.S. Government Surveys.” Journal
of Official Statistics 15:231-50.
Vehovar, Vasja and Gregor Čehovin. 2014a. “Questionnaire Length and Breakoffs in Web Surveys: A Meta
Study.” Presented at the 7th Internet Survey Methodology Workshop, South Tyrol, Italy, December 1-3.
-----. 2014b. “WebSM Draft Report: Questionnaire Length and Breakoffs in Web Surveys.” WebSM,
Ljubljana, Slovenia.
Yan, Ting, Frederick G. Conrad, Rodger Tourangeau, and Mick P. Couper. 2011. “Should I Stay or Should I
Go: The Effects of Progress Feedback, Promised Task Duration, and Length of Questionnaire on
Completing Web Surveys.” International Journal of Public Opinion Research 23:131-47.


