Evaluating Customer Satisfaction of EPA's Research Products (Renewal)

OMB: 2080-0085
PART B OF THE SUPPORTING STATEMENT (FOR STATISTICAL SURVEYS)











INTRODUCTION TO PART B

The Environmental Protection Agency (EPA) will conduct the following statistical survey to estimate end user satisfaction with its research products. The survey will evaluate EPA's research products on their quality, usability, and timeliness by collecting survey-based feedback from the key users of those products, who include the Agency's state, local, and non-governmental partners.








B.1 SURVEY OBJECTIVES, KEY VARIABLES AND OTHER PRELIMINARIES

B.1.a Survey Objectives

The Environmental Protection Agency's (EPA) Office of Research and Development (ORD) evaluates its scientific research products through an annual survey. The information collected in this survey relates to one of ORD's Long-Term Performance Goals (LTPG) in EPA's FY22-26 Strategic Plan, which supports Cross-Agency Strategy 1. The LTPG aims to increase the percentage of research products meeting partner needs to 95% by September 30, 2026. EPA will distribute a 30-question survey via an online survey platform with the intent to:

  • Objective 1: Investigate how EPA's research products inform non-federal end users' decision making and implementation of environmental and public health programs.

  • Objective 2: Inform the annual end of year performance reporting to the Office of Management and Budget through the Annual Performance report.

The primary objective of the survey is to collect information from non-federal product users who frequently use EPA's research products to inform their decision making and implementation of environmental and public health programs. These data are used to produce an enterprise-wide evaluation of EPA's ability to develop a portfolio of research products that is consistent with the needs of its key end users.

The sample design is discussed in Section B.2.b.

B.1.b Key Variables

Likert scale responses are solicited to gather feedback on ORD products. These responses are converted into 1-5 point values, and the survey data are aggregated to calculate the mean Likert response to each question for each product. These within-product averages are then used to calculate end user satisfaction scores for each product.
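The aggregation described above can be sketched as follows. This is an illustrative example only: the scale labels, response data, and data structures are assumptions, since the document does not specify them.

```python
# Sketch of the aggregation step: Likert responses are mapped to 1-5 point
# values and averaged within each (product, question) pair. The scale labels
# and response rows below are hypothetical.
from collections import defaultdict
from statistics import mean

LIKERT_POINTS = {
    "Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
    "Agree": 4, "Strongly agree": 5,
}

# (product_id, question_id, response) rows from a hypothetical survey export
responses = [
    ("P-001", "Q1", "Agree"), ("P-001", "Q1", "Strongly agree"),
    ("P-001", "Q2", "Neutral"), ("P-002", "Q1", "Agree"),
]

def mean_scores_by_product(rows):
    """Average the 1-5 point values for each (product, question) pair."""
    buckets = defaultdict(list)
    for product, question, answer in rows:
        buckets[(product, question)].append(LIKERT_POINTS[answer])
    return {key: mean(vals) for key, vals in buckets.items()}

scores = mean_scores_by_product(responses)
# e.g. scores[("P-001", "Q1")] == 4.5
```

The per-question means computed this way feed the product-level satisfaction scoring described in Section B.5.b.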

B.1.c Statistical Approach

This data collection approach is designed to achieve a desired level of precision for estimates of the percentage of products that meet EPA's partners' needs across the population of products included in EPA's Strategic Research Action Plans (hereafter referred to as an enterprise-level approach). EPA will use a stratified random selection of products to reduce the overall number of surveys distributed, thereby reducing the burden on the Agency and on its survey respondents. Products are selected at random within strata defined by the research center or office in the Office of Research and Development that produced the product. This stratified random sampling method maintains EPA's sampling targets at the enterprise level. Further description can be found in Section B.2.

B.1.d Feasibility

The data collection instrument was designed with the capabilities of the typical respondent in mind. To fully assess feasibility, EPA convened a workgroup to comment on the proposed data collection and its feasibility. The instrument is designed to be completed in a short period of time and does not request information that is expected to be complex or burdensome. In addition, EPA will provide technical assistance for completing the data collection instrument.

The time frame for the data collection is acceptable for the users of the data within the Office of Research and Development and sufficient to support the completion of the Agency’s Annual Performance Report each year.

B.2 SURVEY DESIGN

This section contains a detailed description of the statistical survey design and sampling approach including a description of the sampling frame, sample identification, precision requirements and data collection instrument.

Product Selection

A stratified random sample of products will be selected and evaluated. ORD defines products as follows1:

Product – A product is a deliverable that results from a specific research project or task and is meant for use by an end user. This can be any of several types of scientific deliverables, including journal articles, agency reports, software tools, and databases.

ORD formerly defined products, in the context of the Strategic Measure, to include both research products and outputs. After significant feedback from ORD subject matter experts, workgroup members, and key staff, the definition was narrowed to include only research products. The Strategic Measure evaluation methodology is designed to solicit feedback from partners (please see the definition of ORD's partners under Respondent Selection below) who are using ORD's research products. ORD recognizes that there are certain contexts in which it is not appropriate for a product to be reviewed using the survey-based evaluation designed for the Strategic Measure. ORD's Centers/Offices, with assistance from National Program Directors, may exclude certain products from consideration for the Strategic Measure Evaluation based on one of the following criteria:

  1. The product has not been delivered.

  2. The product is not intended for direct use by an EPA partner.

  3. The product is a portion of a larger deliverable and will not be used in the near term by the intended partner.

  4. The product was produced by a non-ORD entity.

ORD developed these eligibility criteria in an effort to minimize the number of respondents who may flag themselves as unfamiliar with the products under review. The criteria also ensure that ORD's products are delivered to partners in final form (i.e., publicly accessible, peer-reviewed, and produced by EPA). EPA will not collect enough survey data on a per-product basis to meet precision objectives for end user satisfaction estimates at the product level. Because of this, EPA will not report product-specific results; instead, EPA will provide an estimate of the percentage of research products that meet partner needs.

Respondent Selection

Partners are defined as individuals from a Program Office, Region, federal agency, or non-federal organization such as a state, local government, tribe, or another group of individuals that are closely engaged in the development, delivery, and use of the research products. The population of products evaluated through this survey is planned through strategic coordination with EPA Program Offices, Regions, other federal agencies, and non-federal partners, using judgment-based sampling. Each product that is evaluated is produced either at the request of an ORD partner or as part of a larger effort in support of an ORD partner.

The selection of survey respondents is a multi-step process. The initial survey respondent selections are made by the research laboratories and centers that led the development of the research products. In collaboration with ORD’s National Research Programs, survey respondents are identified if they meet one of the following criteria:

  1. The individual was engaged by ORD to inform the development, delivery, or application of the product;

  2. The individual or their program requested the product;

  3. The product was designed to support the individual’s work function; or

  4. The product was not designed specifically for this individual’s work function, but fits some other interest or application for them.


After these selections are made by ORD's Centers/Offices and National Research Programs, ORD engages in a process in which the list of initial selections is reviewed and verified by the organizations in which the identified recipients work. ORD engages with EPA's Program and Regional Offices to verify the list of proposed survey respondents. Through this process, some respondents are removed from the list and others are added. This process allows ORD to increase the quality of the survey recipient list by reducing the overall number of mistargeted surveys that result from factors such as retirements and staffing changes.

EPA engages in a process similar to the above to ensure high quality survey respondent targeting in state and local governments. With the support of intergovernmental associations such as the Environmental Council of States (ECOS), EPA maximizes the accuracy of its survey respondent identification.

The population of individuals surveyed is not a representative sample of all users of a given EPA research product. Most of the products in question are released publicly as journal articles, EPA reports, or online tools or applications. As a result, hundreds of thousands of individuals and groups access, use, cite, and download EPA's research products each year. While these individuals and groups are important beneficiaries of EPA's research products, this survey is not designed to capture feedback from them. EPA employs alternative methods, such as decision document analysis and analysis of unique online page views and downloads, to capture this feedback.

B.2.a Target Population and Coverage

The target population for this survey is ORD's partners in the United States. Partners are defined as individuals in EPA program offices, regions, and other agencies and organizations who are most closely engaged in the development, delivery, and use of the research products. The survey is designed to produce estimates of the satisfaction of users of EPA research products. These estimates provide end user satisfaction scores at the enterprise level for ORD in a given fiscal year. As explained in Section B.2, EPA will not provide results within sampling strata or at any level below ORD's full portfolio.

B.2.b Sample Design

This section describes the sample design. It includes a description of the sampling frame, target sample size, stratification variables and sampling method. The sampling design employed is a stratified random selection of products that are evaluated.

B.2.b.i Sampling Frame

A research product is defined as a deliverable that results from a specific research project or task and is meant for use by an ORD partner; this can be any of several types of scientific deliverables, including journal articles, agency reports, software tools, and databases. The sampling frame is developed from EPA's Research Approval Planning Implementation Dashboard (RAPID). RAPID is EPA's system of record for research deliverables, containing metadata on all deliverables planned by ORD through its strategic research planning framework. ORD formerly used the Research Management System (RMS), which has since been replaced by RAPID. RAPID provides a cohesive user experience that reduces the number of touchpoints needed to access research planning, implementation, and product information, and its implementation improved ORD's research planning and management process by integrating numerous databases in one system. The following information is extracted from RAPID for this data collection:

The following fields related to research planning:

  • National research program

  • Research project (including project title and project lead center)

  • Research lab/center and division leading product development

  • Title and description of product

  • Type of product (i.e., journal article, EPA report, software tool, etc.)

Justification for the Use of RAPID

The following criteria are often used in assessing a proposed sampling frame:

  • It fully covers the target population.

  • It contains no duplication.

  • It contains no foreign elements (i.e., elements that are not members of the population).

  • It contains information for identifying and contacting the units selected in the sample.

  • It contains other information that will improve the efficiency of the sample design.

Four of these five elements of an effective sampling frame are captured in the product pool listed in RAPID. The only element that is not well captured in RAPID is the absence of foreign elements. As stated earlier, all deliverables falling under ORD's strategic research planning framework are included in RAPID. The evaluation survey is designed to gather feedback on a wide variety of research products developed by EPA; however, several products listed in RAPID fall outside the target population based on the following criteria:

  • The product has not been delivered

  • The product is not intended for direct use by a Program, Region, or other Federal user

  • The product is foundational research that will not be used in the near term by the intended user

  • The product was produced by a non-ORD entity


EPA will manually identify research products that meet the above criteria and remove them from potential selection.

B.2.b.ii Sample Size

EPA implements a stratified random sampling method to identify the 60 products that will be surveyed. This method involves the division of a population into smaller subgroups known as strata; here, the strata are formed according to research center. To aid with equal representation of products among Research Centers, at least 10 products are selected from each Research Center. If a Research Center has produced fewer than 10 eligible products, all products from that Research Center will be used in the survey.

Products
EPA's goal is to establish a sample size that allows for the determination of the percentage of products that meet partner needs with precision targets set at the enterprise level. Precision targets are discussed in Section B.2.c. The sample size is determined using the following equation:

n = Z² × p × (1 − p) / c²

Where: Z = Z value (e.g., 1.96 for a 95% confidence level)

p = percentage picking a choice, expressed as a decimal (0.5 used for the sample size needed)

c = confidence interval (margin of error), expressed as a decimal

The sample size calculation is not very sensitive to year-to-year variation in the number of products eligible for evaluation. In a given year, the number of eligible products can range from under 150 to over 300; however, the size of the 95% confidence interval of the estimate of the number of products that meet partner needs ranges only from ±10 to ±13 products.
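The sample-size formula above (Cochran's formula for proportions) can be sketched in code. The finite population correction shown here is an assumption for illustration; the document does not state whether EPA applies one, and the specific inputs are examples rather than EPA's actual parameters.

```python
# Sketch of the sample-size calculation: n0 = Z^2 * p * (1 - p) / c^2,
# optionally adjusted with a finite population correction (an assumption).
import math

def sample_size(z, p, c, population=None):
    """Return the required sample size, rounded up to a whole product."""
    n0 = (z ** 2) * p * (1 - p) / (c ** 2)
    if population is not None:
        # Finite population correction for a small product pool
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# 95% confidence (Z = 1.96), maximum variance (p = 0.5), +/-5% interval
print(sample_size(1.96, 0.5, 0.05))                  # infinite-population case
print(sample_size(1.96, 0.5, 0.05, population=300))  # illustrative pool of 300
```

Using p = 0.5 maximizes p(1 − p), which is why it yields the most conservative (largest) sample size when the true proportion is unknown.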

Respondents
EPA's selection of survey respondents will fully capture the population of key users of EPA's research products. The number of surveys requested is expected to be about four individuals per research product; with 60 products evaluated per year, the number of surveys requested will be about 270. EPA estimated burden costs using 270 respondents, as this is likely the maximum number of individuals that will respond to the survey. EPA targets a 60% response rate; the figure for the average number of respondents therefore does not reflect the total number of individuals identified for the survey. Missing survey responses are problematic insofar as the missing data are distributed non-randomly across the list of research products. In cases where no eligible responses are received for a given research product, that product will be omitted from the evaluation.

B.2.b.iii Stratification Variables

The objective of stratification is to increase the efficiency of the sampling design and reduce the overall number of surveys that need to be distributed. Product selection is stratified according to the research center that developed the product. There are four Research Centers in ORD from which the products will be sampled.

Stratification increases the precision of estimates compared with a simple random sample of products. With stratified samples, the data collection can more effectively capture the variety and complexity of research products developed by ORD, which motivates stratification by center. Each center in ORD specializes in specific environmental and public health topics and tailors its research approaches and products to those fields. Using the centers as the stratification variable allows EPA to capture this variability while still collecting a representative sample of products by product type and research program. ORD also applies sub-stratification on ORD product research areas for additional analysis.

For each fiscal year, at least 10 products are randomly selected from each of ORD's Research Centers. If a Center produced fewer than 10 eligible products, all products from that Center will be used in the survey. The following steps are taken if one or more Centers produced fewer than 10 eligible products:

  1. Select all products from any Center with fewer than 10 eligible products.

  2. Calculate each remaining Center's percentage of the remaining eligible products (e.g., if the total number of eligible products is 88 but Center 1 delivered only 9 eligible products and Center 2 delivered only 8, the proportions for the remaining Centers are calculated out of 71, not 88, and the remaining sample size is 43, not 60).

  3. Apply each percentage to the remaining sample size to determine how many products will be selected from each remaining Center (e.g., 70.4% of 43 ≈ 30 products from Center 3 and 29.6% of 43 ≈ 13 products from Center 4).

  4. Conduct a stratified random sampling with Excel.

  5. Combine all selected products and create a template with the 60 randomly selected products that will be surveyed and distribute the list to Centers and National Program Directors for their reference.

The following steps are taken if all Centers have 10 or more eligible products:

  1. Find the percentage of eligible products that are associated with each Center.

  2. Take that percentage out of 60 to determine how many products will be selected from each Center.

  3. Conduct a stratified random sampling with Excel.

  4. Combine all selected products and create a new template with the 60 randomly selected products that will be surveyed and distribute the list to Centers and National Program Directors for their reference.
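The allocation logic in the two procedures above can be sketched as follows. The Center names and eligible-product counts are illustrative assumptions chosen to mirror the example in step 2; EPA performs this step in Excel rather than code.

```python
# Sketch of the stratified allocation: Centers with fewer than 10 eligible
# products are taken in full, and the remaining sample is split among the
# other Centers in proportion to their share of remaining eligible products.
def allocate(eligible_by_center, total_sample=60, take_all_below=10):
    """Return the number of products to sample from each Center."""
    allocation = {}
    small = {c: n for c, n in eligible_by_center.items() if n < take_all_below}
    allocation.update(small)  # step 1: take all products from small Centers
    remaining_sample = total_sample - sum(small.values())
    large = {c: n for c, n in eligible_by_center.items() if n >= take_all_below}
    remaining_pool = sum(large.values())
    for center, n in large.items():  # steps 2-3: proportional split
        # Rounding may occasionally require a one-product manual adjustment
        allocation[center] = round(n / remaining_pool * remaining_sample)
    return allocation

# Example mirroring the text: 88 eligible products, two Centers under 10
print(allocate({"Center 1": 9, "Center 2": 8, "Center 3": 50, "Center 4": 21}))
```

With these inputs the allocation is 9 + 8 + 30 + 13 = 60 products, matching the target sample size.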

Survey respondents are selected for each individual product, as discussed in more detail in Section B.4.a.

B.2.b.iv Sampling Method

An equal probability random sample will be drawn from each stratum, with a target of at least 10 products drawn from each research center.

B.2.b.v Multi-stage Sampling

Multi-stage sampling is a method for selecting the sample of respondents that employs more than one sampling frame and sampling procedure. The first-stage sampling frame is a list of all eligible products delivered by each Research Center. The second stage is a listing of all eligible research products by product research focus area.



B.2.c Precision Requirements

B.2.c.i Precision Targets

EPA's goal is to estimate the number of research products that meet partner needs within a 13-product margin of error at a 95 percent confidence level. See Section B.2.b.ii for more information. For example, if the estimated number of products that meet partner needs is 250 out of 300, EPA will state with 95% confidence that the actual number of products that meet partner needs is between 237 and 263.

B.2.c.ii Nonsampling Error

EPA developed an assessment approach that will employ several quality assurance techniques to maximize response rates, response accuracy, and processing accuracy to minimize non-sampling error. This includes the following:

  • EPA provides technical support to its product users, including assistance with using Qualtrics (the online survey instrument) and with accessing information about the project, to ensure the data collection instrument and the overall project goals are understood.

  • Survey questions ask only for information that users of the products should already know. EPA does not ask questions that require monitoring, research, or calculations on the part of the respondent.

  • The data collection instrument is designed to be completed in one sitting and without gathering any additional information, such as feedback from colleagues or extensive recollection of previous activities.

  • The electronic format of the survey will make returning the data collection instrument convenient.

B.2.d Data Collection Instrument Design

As described in Supporting Statement B, Section 4, the data collection instrument is an online survey (see Appendix A). Each respondent receives an email summarizing the project and asking them to participate by filling out a survey evaluating an EPA research product. Each email is personalized for the respondent and contains the following information:

  • A brief description of the project and a request to fill out a survey reviewing an EPA research product

  • A link to a product summary sheet and a link to the product itself for reference

  • Contact information for individuals to provide technical support if necessary.

The online survey is administered using the online tool Qualtrics. The survey is designed to be completed in about 10 minutes, after roughly 5 minutes spent reviewing the survey request and product materials. There are three types of questions in this survey:

  • Questions requesting information about the respondent (i.e., name, organization)

  • Likert-scored questions regarding the quality and usability of the research products in question

  • Contextual questions regarding how and when the respondent received and used the product and engaged with ORD regarding the product

B.3 PREVIOUS RESULTS OF THE FY18-FY22 ICR

B.3.a Past Results

Prior to the ICR approval in FY19, EPA conducted a pre-test of the data collection instrument ahead of the May 2018 pilot study. The pre-test was executed by a staff- and management-level workgroup within ORD, which was asked to review the data collection instrument and test the user interface of the online survey tool. The pre-test gathered feedback on the quality of the data collection instrument, including modifications to question language for clarity and conciseness.


Following the approval, EPA conducted its annual Strategic Measure Evaluation in FY19-FY22. From FY18 to FY22, ORD increased the percentage of products meeting partner needs from 77% to 94%. Since FY19, the pool of eligible products has fluctuated due to the Strategic Research Action Plan cycle and changes in product eligibility criteria. Previously, ORD applied a wider definition of research products eligible for the Strategic Measure Evaluation and considered outputs to be products. Based on significant feedback from ORD personnel and workgroup members, ORD redefined products to ensure that products were reasonably comparable.


ORD's partner satisfaction survey continues to be one of the main indicators for Cross-Agency Strategy 1: Ensure Scientific Integrity and Science-Based Decision Making. When given the opportunity to provide additional information, ORD's product users often provided substantive and specific feedback regarding their experiences using the research products in question. This information, and the product users' willingness to engage, provides ORD with avenues to increase the quality, usability, and timeliness of its research products. This feedback is used to improve ORD's research planning systems and has been a driving force in involving program and regional partners in research development. The evaluation provides evidence that hundreds of key product users across EPA's Program Offices and Regions, other federal agencies, and external partners have had positive experiences working with ORD and using its research products.


B.3.b Product Performance Summary

Five iterations of the Strategic Measure Evaluation have been completed since FY18. A total of 240 products have been evaluated and included in the end user satisfaction survey; 10 products were excluded from the evaluation because they received no responses from ORD's partners. Of the evaluations conducted from FY18 through FY22, 34 products failed to meet partner needs and 206 products met partner needs. On average, 85.8% of products met partner needs. The average highest product score over the past five years is 99.76 out of 100 possible points, and the average lowest score is 74.14 out of 100. Results from this evaluation are published in the Annual Performance Report in the Congressional Justification. The table below outlines performance data from FY18-22.





Metric | FY18 | FY19 | FY20 | FY21 | FY22 | Totals/Averages
Products Included | 48 | 47 | 48 | 50 | 47 | 240 (total evaluated since FY18)
Excluded Products (no responses received) | 2 | 3 | 2 | 0 | 3 | 10 (total)
Failed Products | 11 | 10 | 8 | 3 | 2 | 34 (total)
Passed Products | 37 | 37 | 40 | 47 | 45 | 206 (total)
Highest Product Score | 100 | 98.8 | 100 | 100 | 100 | 99.76 (average)
Lowest Product Score | 69.2 | 79 | 60 | 79 | 83.5 | 74.14 (average)
Average Responses Received Per Product | 4.4 | 5.78 | 4.15 | 4.32 | 3.4 | 4.41 (average)



B.4 COLLECTION METHODS AND FOLLOW-UP

B.4.a Collection Method

The collection methodology is an online survey. The data collection instrument (see Appendix A for the survey) will be sent via email. The survey will be open for two weeks, and EPA staff will send up to three follow-up emails before the survey period closes to encourage a response.

B.4.b Survey Response and Follow-up

The target response rate (defined as the ratio of responses to eligible respondents) is 60 percent. About two weeks before the survey period, an initial invitation is sent to recipients informing them of their selection to participate in the Strategic Measure (SM) survey and briefly explaining the importance of the SM survey to ORD's understanding of how its products meet partner needs. This initial outreach began in FY21 and aims to encourage a higher and faster survey response rate with more detailed responses. It also aims to identify any recipients who should not have been selected because they are unfamiliar with the product. This initial outreach is sent by email by the appropriate National Research Directors. Once the survey is active, an email with instructions on how to complete the survey is sent to the recipient list. During the open survey period, two reminder emails are sent to boost response rates. To identify recipients who have not yet completed the survey, the recipient list is cross-checked against the email addresses in the Qualtrics output, where completed responses are tracked and updated.
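The cross-check between the recipient list and the Qualtrics completion export described above amounts to a set difference. This is a sketch with hypothetical email addresses; the actual export fields and formats are not specified in this document.

```python
# Sketch of the reminder cross-check: find recipients whose email does not
# appear among the completed responses exported from Qualtrics.
# All addresses below are hypothetical.
recipients = {"a.smith@state.example.gov", "b.jones@city.example.org",
              "c.lee@tribe.example.org"}
completed = {"A.Smith@state.example.gov", "c.lee@tribe.example.org"}

# Normalize case before comparing, since capitalization can differ
# between the recipient list and the survey platform's export
completed_lower = {c.lower() for c in completed}
outstanding = {r for r in recipients if r.lower() not in completed_lower}

print(sorted(outstanding))
```

Recipients in `outstanding` would receive the reminder emails described above.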

B.5 ANALYZING AND REPORTING SURVEY RESULTS

B.5.a Data Preparation

The data collection instrument is submitted directly to EPA by each individual survey respondent. Surveys are automatically screened by the data collection instrument for completeness and duplication to improve data control. After the survey period has ended, all surveys will be individually screened by EPA staff to check for missing data or any signs of self-exclusion on the part of the survey respondent. To minimize the number of partners who self-identify as unfamiliar with the products they are asked to review, ORD conducts initial outreach before releasing the survey to notify partners that they have been selected and to describe the research product they are being asked to review.

B.5.b Analysis

A scoring model was developed by ORD as part of the development of the evaluation framework. The model calculates a score for each individual product on a scale of up to 100 points. These points come from three categories: the scientific quality of the product (35 points), the usability of the product (50 points), and the timeliness of the development and delivery of the product (15 points). Of these 100 points, 75 are sourced from the survey described in this document and 25 come from data collection methods internal to EPA. Each category is informed by a set of indicators that further describe the qualities of the product and a set of data fields that inform those indicators.

A research product is determined to "meet partner needs" if it receives a score of at least 85 out of 100 on the scale described above. This threshold was established by EPA senior leadership. In cases where a product receives multiple surveys, the survey-based scores are averaged using the arithmetic mean before being incorporated into the 100-point scale.
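The scoring described above can be sketched as follows. The function names and the simple split into a 75-point survey component and a 25-point internal component are illustrative assumptions; the actual model distributes points across quality (35), usability (50), and timeliness (15) indicators.

```python
# Sketch of the product scoring model: per-respondent survey scores
# (out of 75) are averaged with the arithmetic mean, then combined with the
# internally collected component (out of 25) for a 100-point score.
from statistics import mean

PASS_THRESHOLD = 85  # threshold set by EPA senior leadership

def product_score(survey_scores, internal_score):
    """Return the product's 100-point score from averaged survey scores
    (each out of 75) plus the internal component (out of 25)."""
    return mean(survey_scores) + internal_score

def meets_partner_needs(survey_scores, internal_score):
    """A product meets partner needs at a score of 85 or above."""
    return product_score(survey_scores, internal_score) >= PASS_THRESHOLD

# Three hypothetical respondents scored this product 60, 70, and 65 of 75,
# and the internal component contributed 22 of 25 points
print(product_score([60, 70, 65], internal_score=22))       # 87.0
print(meets_partner_needs([60, 70, 65], internal_score=22)) # True
```

Averaging survey scores before applying the threshold means one low outlier response does not automatically fail a product with otherwise strong feedback.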

EPA will prepare a report for EPA senior leadership that tabulates the results of the evaluation survey and explains the precision of the estimate of the percent of products that meet partner needs. Examples of statistics provided will include:

  • Total number of products that score above or equal to 85 (meet partner needs).

  • Standard errors calculated for the estimate.

B.5.c Reporting Results

The results of this survey will be made available each year to the EPA, OMB and the public through EPA’s Annual Performance Report, which is an attachment to EPA’s Annual President’s Budget Request that is available at epa.gov/planandbudget.

Records of all surveys distributed, all metadata, raw data, and analysis documentation are stored in a combination of restricted-access network drives and password-protected Microsoft SharePoint sites.































Appendix A – Response to Comment


Table 8: EPA/ORD Response to Comment

EPA's Office of Research and Development (ORD) received one substantive comment, concerning potential bias in the survey methodology. ORD's response is provided below.

Comment 1 (Anonymous Commenter)

Summary of the comment: The reviewer is concerned about a potential bias in the EPA survey method for identifying respondents who helped develop the product, particularly when the user set is small. They suggest getting feedback from non-users who considered the product to mitigate this bias. To view the entire comment, please visit https://www.regulations.gov/comment/EPA-HQ-ORD-2018-0774-0007.


ORD’s Response: Thank you for providing feedback on EPA/ORD’s Information Collection Request. Engagement in the development of a product is defined as being involved in the strategic research planning process, product conceptualization, design/execution, receiving updates on the status of the research, or providing feedback on findings or drafts of the product.

Engagement is important because it helps to ensure that the end-users find the product usable and that there is sufficient guidance and training for the research products. EPA’s Office of Research and Development only surveys partners external to ORD.





1 See ORD's Strategic Measure Data Quality Record (DQR) for this definition.

ICR for EPA Research Product Evaluation 1 May 2023
