Evaluating Customer Satisfaction of EPA's Research Products (New)

OMB: 2080-0085



PART B OF THE SUPPORTING STATEMENT (FOR STATISTICAL SURVEYS)











INTRODUCTION TO PART B

The Environmental Protection Agency (EPA) will conduct the following statistical survey to estimate end user satisfaction with its research products. The survey will evaluate EPA's research products on their quality, usability, and timeliness by collecting survey-based feedback from the key users of those products, who include the Agency's state, local, and non-governmental partners.








B.1 SURVEY OBJECTIVES, KEY VARIABLES AND OTHER PRELIMINARIES

B.1.a Survey Objectives

The Environmental Protection Agency's (EPA) Office of Research and Development (ORD) is implementing a new framework to evaluate its scientific research products. One of the measures is designed to track the number of research products meeting customer needs, which is aligned with the Agency's third Strategic Plan Goal: Rule of Law and Process. EPA will distribute a 30-question survey via an online survey platform with the intent to:

  • Objective 1: Investigate how EPA's research products inform non-federal end users' decision making and implementation of environmental and public health programs.

  • Objective 2: Inform the annual end-of-year performance reporting to the Office of Management and Budget through the Annual Performance Report.

The primary objective of the survey is to collect information from non-federal product users who frequently use EPA's research products to inform their decision making and implementation of environmental and public health programs. These data are used to produce an enterprise-wide evaluation of EPA's ability to develop a portfolio of research products that is consistent with the needs of its key end users.

The sample design is discussed in Section B.2.b.

B.1.b Key Variables

Likert scale responses are solicited to gather feedback on ORD products. These responses are converted to values on a 1-5 point scale. The mean Likert response to each question, by product, is calculated by aggregating the survey data. After they are averaged within each product, these values will be used to calculate end user satisfaction scores for each product.
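The per-product aggregation described above can be sketched as follows. This is a minimal illustration; the record fields and sample values are hypothetical, since the actual survey schema is not specified in this section:

```python
from collections import defaultdict

# Hypothetical survey records: Likert responses already converted to 1-5 values.
responses = [
    {"product": "Report A", "question": "Q1", "score": 4},
    {"product": "Report A", "question": "Q1", "score": 5},
    {"product": "Report A", "question": "Q2", "score": 3},
    {"product": "Tool B",   "question": "Q1", "score": 2},
    {"product": "Tool B",   "question": "Q1", "score": 4},
]

def mean_scores_by_product(records):
    """Average the 1-5 Likert values for each (product, question) pair,
    then average those question means within each product."""
    by_question = defaultdict(list)
    for r in records:
        by_question[(r["product"], r["question"])].append(r["score"])
    question_means = {k: sum(v) / len(v) for k, v in by_question.items()}
    by_product = defaultdict(list)
    for (product, _question), m in question_means.items():
        by_product[product].append(m)
    return {p: sum(ms) / len(ms) for p, ms in by_product.items()}

print(mean_scores_by_product(responses))
```

Under this sketch, "Report A" averages 4.5 on Q1 and 3.0 on Q2 for a product mean of 3.75; these product-level means would then feed the satisfaction scoring.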

B.1.c Statistical Approach

This data collection approach is designed to achieve a desired level of precision for estimates of the percentage of products that meet EPA's stakeholders' needs across the population of products included in EPA's Strategic Research Action Plans (hereafter referred to as an enterprise-level approach). EPA proposes a stratified random selection of products to reduce the overall number of surveys that are distributed, thereby reducing the burden on the Agency and on its survey respondents. Products are sampled at random, stratified according to the research laboratory or research center in the Office of Research and Development that produced them. This stratified random sampling method will maintain EPA's sampling targets at the enterprise level. Further descriptions can be found in Section B.2.

B.1.d Feasibility

The data collection instrument was designed with the capabilities of the typical respondent in mind. To fully assess feasibility, EPA convened a workgroup to comment on the proposed data collection and its feasibility. The data collection instrument is designed to be completed in a short period of time and does not include requests for information that are expected to be complex or burdensome. In addition, EPA will provide technical assistance for completing the data collection instrument.

The time frame for the data collection is acceptable for the users of the data within the Office of Research and Development and sufficient to support the completion of the Agency’s Annual Performance Report each year.

B.2 SURVEY DESIGN

This section contains a detailed description of the statistical survey design and sampling approach including a description of the sampling frame, sample identification, precision requirements and data collection instrument.

Product Selection

Products will be selected through stratified random sampling and evaluated. ORD defines products as the following1:

Product – A product is a deliverable that results from a specific research project or task and is meant for use by an end user. This can include one of many scientific deliverables. These include journal articles, agency reports, software tools, databases, and more.

ORD uses this definition in the context of the Strategic Measure to include both research products and outputs2. Therefore, the term “product” will refer to both products and outputs for the duration of this document.

The Strategic Measure evaluation methodology is designed to solicit feedback from Program/Regional partners who are using ORD's research products. With this in mind, ORD recognizes that there are certain contexts in which it is not appropriate for a product to be reviewed using the survey-based evaluation designed for the Strategic Measure. To account for this in the end-of-year FY18 data collection, ORD allowed research Labs and Centers, with assistance from National Program Directors, to exclude certain products from consideration for the Strategic Measure based on one of the following criteria:

  1. It is not a final product (e.g., a draft assessment)

  2. This product is not intended for direct use by a Program, Region, or other Federal partner

The "not intended for direct use by a Program, Region, or other Federal partner" criterion resulted in some products being excluded because they were designed exclusively for non-federal partners, such as state governments or local utilities, who cannot be surveyed absent approval under an ICR. A catch-all "other" criterion was also used, which included products that were delayed due to changes in requests from Program/Regional partners, delays in journal publication, and longer-term projects that had not yet produced products for immediate use by ORD's partners.

An in-depth description of how the products were selected and excluded can be found in section B.3.b Pilot Test. A similar process is being conducted to identify products for ORD’s FY19 data collection.

EPA will not be collecting enough survey data on a per-product basis to meet precision objectives for end user satisfaction estimates at the product-level. Because of this, EPA will not report out on product-specific results. Instead, EPA will provide an estimate of the percent of research products that meet customer needs.

Respondent Selection

This recipient identification strategy is narrow in scope. Customers here are defined as the group of individuals who were most closely engaged in the development, delivery, and use of the research products. The products evaluated through this survey are planned as a result of strategic coordination with EPA Program Offices, Regions, other federal agencies, and non-federal partners, through the use of judgment-based sampling. Each product that is evaluated is produced either at the request of an ORD customer or as part of a larger effort in support of an ORD customer.

The selection of survey respondents is a multi-step process. The initial survey respondent selections are made by the research laboratories and centers that led the development of the research products. In collaboration with ORD’s National Research Programs, survey respondents are identified if they meet one of the following criteria:

  1. The individual was engaged by ORD to inform the development, delivery, or application of the product;

  2. The individual or their program requested the product;

  3. The product was designed to support the individual’s work function; or

  4. The product was not designed specifically for this individual’s work function, but fits some other interest or application for them

After these selections are made by the ORD Labs, Centers, and National Research Programs, ORD engages in a process in which the list of initial selections is reviewed and verified by the organizations in which the identified recipients work. To illustrate, ORD engaged with EPA’s Program and Regional Offices to verify the list of proposed survey respondents. Through this process, some respondents were removed from the list and others were added. This process allows ORD to increase the quality of the survey recipient list by reducing the overall number of mistargeted surveys that result from factors such as retirements and staffing changes.

EPA will engage in a process similar to the above to ensure high quality survey respondent targeting in state and local governments as well. With the support of intergovernmental associations such as the Environmental Council of States (ECOS), EPA will be able to maximize the accuracy of its survey respondent identification.

The population of individuals surveyed is not a representative sample of all users of a given EPA research product. Most of the products in question are released publicly as journal articles, EPA reports, or online tools or applications. As a result, hundreds of thousands of individuals and groups access, use, cite, and download EPA's research products each year. While these individuals and groups are important beneficiaries of EPA's research products, this survey is not designed to capture feedback from them. EPA employs alternative methods, such as decision document analysis and online site-hit analysis, to capture this feedback.

B.2.a Target Population and Coverage

The target population for this survey is customers of the Office of Research and Development in the United States. Customers are defined as individuals in EPA program offices, regions, and other partner agencies and organizations who are most closely engaged in the development, delivery, and use of the research products. This survey is designed to produce estimates of the satisfaction of users of EPA research products. EPA will be able to provide estimates of end user satisfaction scores at the enterprise level for ORD in a given fiscal year. As explained in Section B.2, EPA will not provide results within sampling strata or at any level below ORD's full portfolio.

B.2.b Sample Design

This section describes the sample design. It includes a description of the sampling frame, target sample size, stratification variables, and sampling method. The sampling design employed is a stratified random selection of the products to be evaluated.

B.2.b.i Sampling Frame

A research product is defined as a deliverable that results from a specific research project or task and is meant for use by an ORD customer. This can include one of many scientific deliverables. These include journal articles, agency reports, software tools, databases, and more. The sampling frame used is developed from EPA’s Research Management System (RMS). RMS is EPA’s system of record on research deliverables, containing metadata on all deliverables planned by the ORD through its strategic research planning framework. The following information is extracted from RMS:

  • The following fields related to research planning:

    • National research program

    • Research Project (including project title, project lead lab/center and staff member)

    • Research Task (including task title, task lead lab/center and staff member)

  • Research lab/center and division leading product development

  • Title and description of product

  • Type of product (e.g., journal article, EPA report, software tool)

  • Contact information for lead staff member assigned to the product

Justification for the Use of RMS

The following criteria are often used in assessing a proposed sampling frame:

  • It fully covers the target population.

  • It contains no duplication.

  • It contains no foreign elements (i.e., elements that are not members of the population).

  • It contains information for identifying and contacting the units selected in the sample.

  • It contains other information that will improve the efficiency of the sample design.

Four of these five elements of an effective sampling frame are captured in the product pool listed in RMS. The only element that is not well captured in RMS relates to the presence of foreign elements in the system. As stated earlier, all deliverables falling under ORD's strategic research planning framework are included in RMS. The evaluation survey is designed to gather feedback on a wide variety of research products developed by EPA; however, several products listed in RMS are outside of the target population based on a number of criteria:

  • The product has not been delivered

  • The product is not intended for direct use by a Program, Region, or other Federal user

  • The product is foundational research that will not be used in the near term by the intended user

  • The product was produced by a non-ORD entity

EPA will manually identify research products that meet the above criteria and remove them from potential selection.

EPA employs systems other than RMS to track and archive EPA research products. For example, the EPA Science Inventory is a database that contains metadata and other documentation for all publicly available scientific products developed by EPA. This system, however, is far broader in scope than RMS; it contains data that is irrelevant to this evaluation and would require overly burdensome processes to use as a sampling frame.

B.2.b.ii Sample Size

Products
EPA's goal is to establish a sample size that allows for the determination of the percentage of products that meet customer needs, with precision targets set at the enterprise level. Precision targets are discussed in Section B.2.c. The sample size is determined using the following equation:

ss = Z² × p × (1 − p) / c²

Where: Z = Z value (e.g., 1.96 for a 95% confidence level)

p = percentage picking a choice, expressed as a decimal (0.5 used for the sample size needed)

c = confidence interval (margin of error), expressed as a decimal

The sample size calculation is not very sensitive to year-to-year variation in the number of products that are eligible for evaluation. In a given year, the number of eligible products can range from under 300 to over 500; however, the size of the 95% confidence interval of the estimate of the number of products that meet customer needs ranges only from ±10 to ±13 products.
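A minimal sketch of the sample size calculation, using the base formula above. The finite population correction shown for small product pools is an assumption for illustration; the document states only the base formula:

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, c: float = 0.05) -> float:
    """Base sample size: ss = Z^2 * p * (1 - p) / c^2."""
    return (z ** 2) * p * (1 - p) / (c ** 2)

def fpc(ss: float, population: int) -> float:
    """Finite population correction (an assumption, not stated in the
    document): ss_adj = ss / (1 + (ss - 1) / N)."""
    return ss / (1 + (ss - 1) / population)

base = sample_size()  # 384.16 for Z=1.96, p=0.5, c=0.05
print(round(base, 2))
print(math.ceil(fpc(base, 300)))  # adjusted size for a pool of 300 products
```

Varying the pool size in `fpc` illustrates the point made above: the required sample size, and hence the achievable precision, moves only modestly as the number of eligible products changes from year to year.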

Respondents
EPA’s selection of survey respondents will fully capture the population of key users of EPA’s research products. The number of surveys requested is expected to be about five individuals per research product. With 50 products per year being evaluated, the number of surveys requested will be about 250. At EPA’s 90% eligible response rate target, the number of surveys received will be about 225. Missing survey responses are expected to be problematic insofar as the missing data is distributed non-randomly across the list of research products. In cases where surveys are missing from research products entirely, meaning no eligible responses were received for a given research product, then that product will be omitted from the evaluation.

B.2.b.iii Stratification Variables

The objective of stratification is to increase the efficiency of the sampling design and reduce the overall number of surveys that need to be distributed. Product selection is stratified according to the research lab/center that developed the product. There are 6 research Laboratories and Centers in the Office of Research and Development from which the products will be sampled.

Stratification increases the precision of estimates compared with a simple random sample of products to evaluate. With stratified samples, the data collection can more effectively capture the variety and complexity of research products that are developed by the Office of Research and Development. This leads to the stratification according to research lab and research center (labs/centers). Each lab/center in ORD specializes in specific environmental and public health topics and tailors its research approaches and products to those fields. Using the developing labs/centers as the stratification variable allows EPA to capture this variability, while still collecting a representative sample of products according to product type and research program.

For each fiscal year, 5 products are randomly selected from each of the research centers and 10 are selected from each research lab for every 50 products that are selected. The stratified random selection will be tested against other characteristics of each individual product, such as the National Research Program the product falls under or the type of product it is, to verify that the stratified random selection is representative of the full population of products.

Survey respondents are selected according to each individual product, which is discussed in more detail in Section B.4.a.

B.2.b.iv Sampling Method

An equal probability random sample will be drawn from each stratum, with the number of products drawn varying depending on whether the stratum is a research laboratory or research center. With a 50-product sample, 5 products are randomly selected from each of the research centers and 10 are selected from each research lab.
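The per-stratum draw can be sketched as follows. The stratum names and product identifiers are hypothetical; the per-stratum allocation (10 per research lab, 5 per research center) follows the description above:

```python
import random

def stratified_sample(products_by_stratum, n_by_stratum, seed=None):
    """Draw an equal-probability simple random sample within each stratum.

    products_by_stratum: {stratum name: [product ids]}
    n_by_stratum:        {stratum name: number to draw}
    """
    rng = random.Random(seed)
    selection = {}
    for stratum, products in products_by_stratum.items():
        selection[stratum] = rng.sample(products, n_by_stratum[stratum])
    return selection

# Hypothetical pool: per the document, 10 products are drawn per research
# lab and 5 per research center for each 50-product sample.
pool = {
    "Lab A":    [f"LA-{i}" for i in range(40)],
    "Center B": [f"CB-{i}" for i in range(25)],
}
draw = stratified_sample(pool, {"Lab A": 10, "Center B": 5}, seed=1)
print({s: len(v) for s, v in draw.items()})
```

Because `random.sample` draws without replacement and uniformly within each stratum, every product in a stratum has an equal probability of selection, as the sampling method requires.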

B.2.c Precision Requirements

B.2.c.i Precision Targets

EPA’s goal is to estimate the number of research products that meet customer needs within a 13-product margin of error at a 95 percent confidence level. See section B.2.b.ii for more information. For example, if the estimated number of products that meet customer needs is 250 out of 300, EPA will provide this estimate including a 95% confidence interval that ranges from 237 to 263.
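The interval in the example is simply the point estimate plus or minus the stated margin of error, expressed in whole products; a trivial sketch:

```python
def product_count_interval(estimate: int, margin: int):
    """95% confidence interval for the number of products meeting
    customer needs, expressed as (lower, upper) in whole products."""
    return (estimate - margin, estimate + margin)

# The document's example: 250 of 300 products, 13-product margin of error.
print(product_count_interval(250, 13))
```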

B.2.c.ii Nonsampling Error

EPA developed an assessment approach that will employ several quality assurance techniques to maximize response rates, response accuracy, and processing accuracy to minimize non-sampling error. This includes the following:

  • EPA will provide technical support, including assistance with using the survey instrument and accessing information about the project, to its product users to ensure the data collection instrument and the overall project goals are understood.

  • Survey questions are those that users of products should know. EPA does not ask questions that require monitoring, research or calculations on the part of the respondent.

  • The data collection instrument is designed to be completed in one sitting and without gathering any additional information, such as feedback from colleagues or extensive recollection of previous activities.

  • The electronic format of the survey will make returning the data collection instrument convenient.

B.2.d Data Collection Instrument Design

As described in supporting statement A, section X, the data collection instrument is an online survey (see attachment X). Each respondent receives an email that summarizes the project and asks them to participate by filling out a survey evaluating an EPA research product. Each email is personalized and contains the following information:

  • A brief description of the project and a request to fill out a survey reviewing an EPA research product

  • A link to a product summary sheet and a link to the product itself for reference

  • Contact information for individuals to provide technical support if necessary.

The online survey is facilitated using the online tool SurveyGizmo. Respondents are expected to spend about 5 minutes reviewing the survey request and product materials; the survey itself is designed to be completed in about 15 minutes. There are three types of questions included in this survey:

  • Questions requesting information about the respondent (e.g., name, organization)

  • Likert-scored questions regarding the quality and usability of the research products in question

  • Contextual questions regarding how and when the respondent received and used the product and engaged with ORD regarding the product

B.3 PRE-TESTS AND PILOT TEST

B.3.a Pre-tests

EPA conducted a pre-test of the data collection instrument ahead of the May 2018 pilot study. The pre-test was executed by a staff- and management-level workgroup within the Office of Research and Development (ORD), which was asked to review the data collection instrument and test the user interface of the online survey tool. The pre-test gathered feedback on the quality of the data collection instrument, including modifications to question language for clarity and conciseness.


B.3.b Pilot Test

Three rounds of data collection related to this evaluation have been completed. The first was a small-scale pilot test conducted in early 2018 to determine whether data collection was feasible on a larger scale. After the success of the small-scale pilot study, full-scale data collection was conducted with federal survey respondents to support end-of-year FY2018 and FY2019 reporting. The latter two data collection rounds took place in late 2018 and early 2019, respectively.

Small Scale Pilot Test

The small-scale pilot test was conducted on a sample of 17 research products that were delivered in FY2016. A total of 89 surveys were distributed in connection with these 17 products, resulting in 49 returned surveys.

The small-scale pilot test was conducted with the goal of achieving the following:

  1. Developing working definitions of key terms and an algorithm for calculating a performance metric on research products that meet customer needs

  2. Developing a data collection methodology sufficient to meet annual reporting requirements

  3. Identifying technical, strategic, and policy needs required to successfully implement the evaluation framework

The results of the small-scale pilot study were reported to ORD senior leadership for review in May 2018; leadership found that the definitions of key terms in the metric, the evaluation proposal, and the data collection methods were sufficient for ORD's needs and acceptable to implement moving forward. On this basis, the small-scale pilot study was deemed a success.

Full-Scale Data Collection

Full-scale data collection was conducted using survey responses from federal users of research products to support FY2018 and FY2019 performance reporting.

Of the 272 products that were listed in ORD's Research Management System for FY17, 50 were excluded from selection eligibility based on the criteria above (28 for "not a final product", 9 for "not intended for direct use by a Program…", and 13 for "other"), resulting in a pool of 222 eligible products. From these 222, 50 were selected through a stratified random selection according to the Lab or Center that developed the product. These 50 were the basis of the evaluation. This round of data collection distributed surveys nearly exclusively to individuals within EPA and other federal agencies; however, 3 individuals employed by state governments were included in the FY2018 sample (see section A.3.c). These data were used to support FY2018 performance reporting.

Of the 327 products that were listed in ORD's Research Management System for FY18, 131 were excluded from selection eligibility based on the criteria above (27 for "not a final product", 60 for "not intended for direct use by a Program…", and 44 for "other"), resulting in a pool of 196 eligible products. From these 196, 50 were selected through a stratified random selection according to the Lab or Center that developed the product. These 50 were the basis of the evaluation. This round of data collection distributed surveys nearly exclusively to individuals within EPA and other federal agencies; however, 9 individuals employed by state governments were included in the FY2019 sample (see section A.3.c). These data were used to support FY2019 performance reporting.

B.4 COLLECTION METHODS AND FOLLOW-UP

B.4.a Collection Method

The proposed collection methodology is an online survey. The data collection instrument (Attachment X) will be sent via email. The survey period will be open for one month, and EPA staff will send up to three follow-up emails before the survey period closes to encourage responses.

B.4.b Survey Response and Follow-up

The target response rate (defined as the ratio of responses to eligible respondents) is 90 percent. EPA realizes that this is an ambitious target, but EPA believes that there are special circumstances that warrant such a target. EPA conducted the following activities to achieve that high response rate in previous surveys:

  • EPA distributed messages to survey respondents from Agency senior leaders to indicate that the survey was a high priority project

  • EPA distributed follow-up emails to survey respondents ahead of the survey period closing

During its small-scale pilot test, FY2018 full scale data collection, and FY2019 full scale data collection, EPA achieved response rates of 55%, 59%, and 56% respectively. In each of these cases, the response rate was below the 90% response rate target, but EPA continues to work to achieve its response rate targets by engaging in the activities described above.

B.5 ANALYZING AND REPORTING SURVEY RESULTS

B.5.a Data Preparation

Completed data collection instruments will be submitted directly to EPA by each individual survey respondent. Surveys are automatically screened by the data collection instrument for completeness and duplication to improve data control. After the survey period has ended, all surveys will be individually screened by EPA staff to check for missing data or any signs of self-exclusion on the part of the survey respondent.

B.5.b Analysis

A scoring model was developed by ORD as part of the development of the evaluation framework. The model calculates a score for each individual product on a 100-point scale. These points come from three categories for each product: the scientific quality of the product (30 points), the usability of the product (50 points), and the timeliness of the development and delivery of the product (20 points). Of these 100 points, 70 are sourced from the survey described in this document and 30 come from data collection methods internal to EPA. This scoring model and its relative weights were established through a series of interviews and discussions between ORD and EPA Program Offices to ascertain end user priorities and preferences for research products. Each of these categories is informed by a set of indicators that further describe the qualities of the product and a set of data fields that inform those indicators.

A research product is determined to "meet customer needs" if the product receives a score of 85 or higher out of 100 on the scale described above. This threshold was established by EPA senior leadership. In cases where products receive multiple surveys, the survey-based scores are averaged using the arithmetic mean before being converted to the 100-point scale.
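A minimal sketch of the scoring and threshold logic. The document specifies the 30/50/20 category maxima and the 85-point threshold, but not how mean Likert scores map onto category points; the linear 1-to-5 mapping below is an assumption for illustration:

```python
def category_points(mean_likert: float, max_points: float) -> float:
    """Map a 1-5 mean Likert score linearly onto a category's point range.
    This linear mapping is an assumption; the document does not specify it."""
    return (mean_likert - 1) / 4 * max_points

def product_score(quality_mean, usability_mean, timeliness_mean):
    """Combine the three categories: quality (30 points), usability (50),
    and timeliness (20), for a total of up to 100 points."""
    return (category_points(quality_mean, 30)
            + category_points(usability_mean, 50)
            + category_points(timeliness_mean, 20))

def meets_customer_needs(score: float) -> bool:
    """A product meets customer needs at a score of 85 or higher."""
    return score >= 85

s = product_score(4.8, 4.6, 4.5)  # hypothetical per-category means
print(round(s, 1), meets_customer_needs(s))
```

In practice, per the document, 70 of the 100 points would come from survey responses and 30 from EPA-internal data collection; the sketch above treats all three categories uniformly for simplicity.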

EPA will prepare a report for EPA senior leadership that tabulates the results of the evaluation survey and explains the precision of the estimate of the percent of products that meet customer needs. Examples of statistics provided will include:

  • Total number of products that score above or equal to 85 (meet customer needs).

  • Standard errors calculated for the estimate.

B.5.c Reporting Results

The results of this survey will be made available each year to EPA, OMB, and the public through EPA's Annual Performance Report, which is an attachment to EPA's Annual President's Budget Request available at epa.gov/planandbudget.

Records of all surveys distributed, all metadata, raw data, and analysis documentation are stored in a combination of restricted-access network drives and password-protected Microsoft SharePoint sites.




























Appendix A – Data Collection Instrument


EPA/ORD Research Product Evaluation Form




About You



Your contact information

First Name: _________________________________________________

Last Name: _________________________________________________

Title: _________________________________________________

Organization Name: _________________________________________________

Email Address: _________________________________________________



What product are you reviewing with this survey?

You can find the title of the product in the survey request email that was distributed to you.

_________________________________________________




Product Use



How would you rate your familiarity with this product?

( ) Very Unfamiliar ( ) Unfamiliar ( ) Neutral ( ) Familiar ( ) Very Familiar



[If the respondent rated their familiarity at "Very Unfamiliar" or "Unfamiliar"] Why do you feel you are unfamiliar with this product?

____________________________________________________________________________________________________________________________________________________________









How did you or your office use (or expect to use) this product?

Select all that apply and add any additional uses of this product that apply


                                                      Have used in     Expect to use in    Do NOT expect to use
                                                      the past for:    the future for:     for this purpose:

Regulatory decision-making                                 [ ]               [ ]                  [ ]

Site-specific (such as Superfund) decision-making          [ ]               [ ]                  [ ]

Providing guidance or support to other Programs,
Regions, or non-EPA partners                               [ ]               [ ]                  [ ]

Preparedness exercises                                     [ ]               [ ]                  [ ]

Provides relevant scientific information                   [ ]               [ ]                  [ ]

Other (please describe)                                    [ ]               [ ]                  [ ]



Please describe in your own words how you have used or expect to use this product. Please provide a web link to a reference or example use of the product if available.

____________________________________________________________________________________________________________________________________________________________




Product-Specific Questions

In this section, we will ask you to respond to a number of statements about your experience using this product. Please rate each statement from Strongly Disagree to Strongly Agree.

1) Overall, I am satisfied with this product.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



2) The data, methods, and results in this product are documented in a transparent way.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



3) The data, methods, and results of this product are accurate and scientifically sound.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



4) This product is in the proper format (i.e. EPA report, journal article, online toolkit) for me (or my office) to make use of it most effectively.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



5) The product is designed or presented in a way that is user-friendly for me.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



6) I can easily access the product when I need it.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



7) I was able to easily access any underlying data and/or documentation for this product necessary to fully understand and make use of it.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



8) This product was accompanied by necessary summaries, manuals, technical documentation, etc. for me to readily understand and make use of it.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



9) I know who in ORD to reach out to with any questions, clarifications, or concerns regarding this product.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



10) I received the necessary support from ORD to effectively make use of the product.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



11) The product has informed my or my office’s decisions or actions.

( ) Strongly Disagree ( ) Disagree ( ) Neutral ( ) Agree ( ) Strongly Agree ( ) Not applicable/Don't know



12) Which of the following best describes the timeliness of your receipt of this product?

( ) I had no specific expectation or requirement for when I needed this product

( ) I had a deadline for which I needed this product and that deadline was met

( ) I had a deadline for which I needed this product and that deadline was NOT met

( ) Other - Write In (Required): _________________________________________________



13) If you have any additional comments regarding the scientific quality of this product, please add them here.

____________________________________________________________________________________________________________________________________________________________



14) If you have any additional comments regarding the usability of this product for you, please add them here.

____________________________________________________________________________________________________________________________________________________________



15) If you have any additional comments regarding the timeliness of this product's development and delivery, please add them here.

____________________________________________________________________________________________________________________________________________________________




Your Relationship with ORD



16) Did you or your office request this product from ORD?

( ) Yes

( ) No

( ) Not for this product specifically, but we have requested work in this topic area

( ) I don't know

( ) Other - Write In (Required): _________________________________________________



17) How were you or your office engaged in the planning and development of this product?

Select all that apply

[ ] I/We were involved in strategic research planning of this topic area.

[ ] I/We requested this product from ORD.

[ ] I/We have worked with ORD on related research but not this specific product.

[ ] I/We helped conceptualize this product.

[ ] I/We were involved in the design or execution of the research.

[ ] I/We received regular updates on the status of the research.

[ ] I/We provided feedback on preliminary findings or drafts of the product.

[ ] I/We co-authored this product.

[ ] I/We were not involved in the planning or development of this product.

[ ] Don't know/Don't remember



18) How involved were you or your office in the development of this product?

( ) Not at all involved ( ) Somewhat involved ( ) Moderately involved ( ) Very involved ( ) I don't know



19) How satisfied are you with your level of involvement in the development of this product?

( ) I am satisfied with my level of involvement

( ) I would like to have been more involved but didn’t have the time

( ) I would like to have been more involved but was not aware of the opportunity to participate

( ) I would like to have been more involved but was not sure how I could best contribute

( ) Not applicable

( ) Other - Write In: _________________________________________________



20) If you have any additional comments regarding your engagement with ORD regarding this product, please add them here.

____________________________________________________________________________________________________________________________________________________________



21) How did you become aware of or acquire this product?

( ) I received it from a contact in ORD

( ) I received it from a non-ORD colleague

( ) I learned about it through an EPA forum or community of practice

( ) I learned about it through an ORD newsletter

( ) I learned about it through an ORD meeting or webinar

( ) I learned about it at an academic or professional conference

( ) I found it independently through the public internet

( ) I found it independently through the EPA Intranet

( ) I received it for the first time with this survey

( ) I don't remember

( ) Other - Write In: _________________________________________________



22) How satisfied are you with the delivery and communication of this product?

( ) Very Dissatisfied ( ) Dissatisfied ( ) Neutral ( ) Satisfied ( ) Very Satisfied



23) How satisfied are you with ORD's training about how to use this product?

( ) Very Dissatisfied ( ) Dissatisfied ( ) Neutral ( ) Satisfied ( ) Very Satisfied ( ) Not applicable



24) If you have any additional comments regarding ORD's communication and delivery of this product, please add them here.

____________________________________________________________________________________________________________________________________________________________





25) In general, how satisfied are you with the scientific quality of ORD's research products?

( ) Very Dissatisfied ( ) Dissatisfied ( ) Neutral ( ) Satisfied ( ) Very Satisfied



26) In general, how satisfied are you with the usability of ORD's research products?

( ) Very Dissatisfied ( ) Dissatisfied ( ) Neutral ( ) Satisfied ( ) Very Satisfied



27) Are you aware of any other individuals or offices who may be using this product? If so, please list them here.

____________________________________________________________________________________________________________________________________________________________





28) Are there other individuals, offices, or groups who you think could benefit from using this product? If so, please list them here.

____________________________________________________________________________________________________________________________________________________________





29) If you have any additional comments regarding this product, this survey, or how ORD can better support your work in general, please add them here.

____________________________________________________________________________________________________________________________________________________________




Thank You!



1 See ORD’s Strategic Measure Data Quality Record (DQR) for this definition

2 For internal planning purposes, ORD defines its research deliverables through two terms: products and outputs. The distinction between a “product” and an “output” is not a core consideration of ORD’s Program/Regional partners, and so ORD elected to combine all partner-focused deliverables using the term “products.”

ICR for EPA Research Product Evaluation 1 June 2019
