
OMB Control Number: 1405-0158


U.S. Department of State

Bureau of Educational and Cultural Affairs (ECA)

Office of Policy and Evaluation (ECA/P)


Supporting Statement

Generic Clearance Information Collection for ECA Evaluation Program


This request for Office of Management and Budget (OMB) review asks for a Generic Clearance for the U.S. Department of State’s Bureau of Educational and Cultural Affairs’ Evaluation Program. OMB approval of this request will facilitate the Department’s ongoing evaluation of its international exchange programs and ensure compliance with several OMB and Congressional results-based reporting requirements.


A. Justification


1. Circumstances that make the collection of information necessary.


The U.S. Department of State’s (DOS) Bureau of Educational and Cultural Affairs (ECA) conducts regular in-depth evaluations of its international exchange programs through the ECA Evaluation Program, administered by ECA’s Office of Policy and Evaluation (ECA/P).


ECA’s exchange programs and the regular evaluation of these programs are critical to the State Department’s foreign affairs mission. Now more than ever, the promotion of a balanced and accurate view of the United States and its policies, and the building and sustaining of international linkages and partnerships, are at the forefront of the U.S. foreign affairs mission and the State Department’s public diplomacy strategy. ECA’s exchange programs help meet this mission by providing experiential knowledge about the United States and Americans, often dispelling misperceptions used by those hostile to us to undermine our national interests.

While exchange programs have traditionally been the tools of long-term relationship building, they are also critical in our current efforts to inform, engage and influence our allies and enemies.


The collection of information is necessary to best ensure regular and timely evaluations of ECA’s exchange programs under the ECA Evaluation Program. These evaluations provide ECA with key information about which exchange activities are working effectively, which should be altered to be more effective, and which should be dropped or added to make the most of the often small windows of opportunity to engage and impact foreign target audiences.

This collection will also facilitate ECA’s response to the various evaluation and results-reporting requirements established in the Government Performance and Results Act of 1993 (GPRA), the President’s Management Agenda (PMA), and OMB’s Program Assessment Rating Tool (PART). Collection of information through the ECA Evaluation Program is critical to performance and budget integration and OMB’s own requirement for “independent and quality evaluations of sufficient scope [that] are conducted on a regular basis…to support program improvements and evaluate effectiveness.”


ECA further recognizes the critical need to evaluate its exchange programs as a matter of good management under its authorizing legislation, the Mutual Educational and Cultural Exchange Act of 1961, as amended (the Fulbright-Hays Act, or the Act). (A copy of the Act is attached.) The Act provides statutory authority for the establishment of ECA and ECA’s exchange programs, as well as for ECA’s evaluation (or monitoring) of its activities. The Act states: “The [ECA] Bureau shall be responsible for managing, coordinating, and overseeing programs established pursuant to this Act.”


In summary, this information collection will best allow ECA to use evaluation and performance measurement data to more effectively assess and improve ECA programming, learn of the effectiveness of ECA programs, comply with established reporting requirements, and ensure program accountability and best value for public resources spent on ECA programs. Without a doubt, ECA’s program-specific evaluations fill a real need and gap, in that they substitute methodologically rigorous data collection and analyses of program effectiveness and impacts for more subjective, ad hoc, non-standardized anecdotal material.


2. Use and practical utility of collected data.


Summary


The primary purpose of this generic clearance information collection is program evaluation. Through program evaluation, ECA is able to directly assess and measure its customer/respondent base, including: their participation in and satisfaction with ECA programs; their experiences and accomplishments during or since participation; and, their preferences for existing and future ECA programs, products and services. The information collected also serves other purposes related to ECA’s performance measurement and budgeting, program management and design, program planning and results reporting, and information dissemination and outreach.


Data collected through the generic clearance will be derived from customer/respondent observations of their exchange experience through surveys, personal interviews and/or focus groups. The data collection instruments are designed to assess customer/respondent satisfaction and to determine the overall effectiveness of ECA programs in meeting legislative and program goals. The clearance will cover customer/respondent-based and program-based evaluation of ECA exchange programs. The customer/respondent base (or target audience) consists primarily of applicants, participants and alumni of ECA exchange programs and, to a lesser extent, of U.S. and foreign host families and institutions and program administrators in ECA and ECA’s partner organizations.

Each ECA evaluation incorporates general consistency and commonality in evaluation research questions, structure and design, survey question structure and data collection methods. Each of these items is discussed in detail in this section. Because each evaluation relates to a different ECA exchange program, some variance in surveys will occur in order to collect program-specific information beyond the commonly asked questions.


ECA expects to use the generic clearance five to ten times per year.



Generic Clearance Procedures


The Department of State’s Information Collection Coordinator (ICC) and ECA/P will jointly administer the procedures under this generic clearance. ECA/P will contact the ICC to coordinate the design of data collection instruments and delivery mechanisms. ECA/P staff will also discuss and review with other ECA program offices specific needs for data collection from customers/respondents and will prepare evaluation instruments to ensure conformance with the Department of State’s strategic goals, the goals of individual ECA programs, and OMB requirements. ECA/P staff also coordinates with the external contractors (independent evaluators) selected to conduct individual evaluations. ECA/P will forward draft instruments and methodology for each evaluation to the Department of State’s Office of the Legal Adviser and the ICC for internal review and clearance, and the ICC will approve the evaluation instruments for each data collection. The ICC will then submit the draft evaluation instruments and any supporting documents to OMB for review and clearance prior to actual data collection.


The ICC will also submit an annual report to OMB, based on reporting information provided by ECA/P, that describes the data collections conducted under the generic clearance, including:

  • a description of each evaluation project conducted

  • the response rate achieved by each evaluation project

  • the percentage of responses collected by electronic means

  • the number of respondents and burden hours used

  • the dates each survey was administered

  • individual and aggregated survey costs

  • individual summaries of results and of any program/product decisions made to date based upon customer/respondent responses and feedback.

Purposes of collected information


Compliance with OMB and PMA Requirements


Data collected through the ECA Evaluation Program enable ECA to comply with the mandates for regular and rigorous results reporting and performance measurement required not only by GPRA, but also by the President’s Management Agenda (PMA) – in particular, the Program Assessment Rating Tool (PART). The OMB PART requires reporting on annual and long-term performance goals. The need for annual reporting heightens the need of all programs to have sufficient data available to respond to OMB in a timely manner and with sufficient scope. ECA has structured the ECA Evaluation Program to provide OMB with detailed results data. Without a generic clearance information collection for the ECA Evaluation Program, an ongoing and continuous program, the long lead time for individual project clearances would effectively deprive ECA of the flexibility it needs in administering its evaluation instruments. This would seriously jeopardize ECA’s ability to meet OMB’s stated requirements, and ECA would only be able to report annually on customer satisfaction measures (see attached Justification document).

Program Evaluation


Program evaluations conducted under this collection will, at a minimum, enable ECA to address in-depth the following types of common research questions:


  • Are ECA exchange programs meeting their legislative mandates and programmatic objectives? How effective are ECA exchange programs in meeting their legislative and programmatic goals and objectives?

  • Do ECA exchange participants gain a better understanding of the United States and Americans, and foreign host countries and their citizens?

  • Do ECA exchange participants develop ongoing relationships at the individual and institutional levels with people and institutions in the United States or foreign countries?

  • What changes or achievements have ECA participants and alumni made in their professional and personal development as a result of their exchange program experiences and participation?

  • What kinds of experiences comprise the ECA exchange experience? Which of these experiences have an effect on the outcomes for the exchange?

  • With whom, and how, do participants and alumni of ECA exchange programs share information and knowledge gained from their exchange experience?

  • How do ECA alumni remain involved with their exchange program and alumni activities?

  • How do ECA participant and program characteristics and administration influence program outcomes?


Program Management


The data supplied through the ECA Evaluation Program allows the program managers in ECA and ECA’s partner organizations to make informed decisions about program design and improvements, program management and budgets, and resource allocation. These evaluations also help program managers to better develop longer-term program strategies. In addition, assessing the specific impacts of the programs on participants, partner institutions, host countries and communities, with methodological rigor and on a regular basis, may act as a catalyst for the establishment of new oversight mechanisms. Such mechanisms will strengthen management practices. Programs have used the data from evaluations to leverage private sector funding and support for programs, greatly expanding the reach of ECA’s limited federal dollars. Evaluations also provide specific recommendations to ensure results attainment and efficient administration of the bureau’s programs.


Compliance with GPRA Reporting and Planning Requirements


Evaluation data are used in DOS’ strategic planning process. ECA’s annual Bureau Performance Plan relies on evaluation data to inform decision-makers about directions to pursue, gaps to fill, audiences to reach, and the extent to which previous initiatives have succeeded or need adjustment. Completed evaluations under the ECA Evaluation Program, along with major findings, recommendations, actions taken or to be taken, and expected results, are incorporated into the annual Performance and Accountability Report (PAR) submitted to OMB and Congress. Data and results information collected from the evaluations enable ECA and DOS to comply with these GPRA requirements. The results of the evaluations are shared with OMB, the Government Accountability Office (GAO) and Congress.


Bureau-level Budget Allocations


Collected data are provided to ECA’s senior managers to advise them of program effectiveness and results in order to better inform Bureau-wide resource decisions and allocations. In times of conservative budgets and resources, and pursuant to the goals of the PMA, program allocations and budgets become more closely tied to program performance. Therefore, the ability to collect program effectiveness data through the ECA Evaluation Program becomes ever more critical to the budget decision-making process. Use of data from evaluations has been central to determining which program models would be used to launch exchange initiatives with the Muslim and Arab world.


Public Dissemination and Outreach


ECA also uses the completed evaluations as a means of disseminating information and publicizing results to U.S. taxpayers and key stakeholders on the effectiveness, efficiency, and accomplishments of ECA exchange programs and services. Ultimately, these evaluations are used to build trust between the Department of State and American citizens by providing evidence that federal funds are being spent wisely and achieving the purposes they are mandated to achieve.


Implementation of the ECA Evaluation Program


As noted above, ECA’s Office of Policy and Evaluation (ECA/P) is directly responsible for the administration of the ECA Evaluation Program. In administering program evaluations or outcome assessments for the Public Diplomacy community, ECA/P follows the Program Evaluation Standards of the American Evaluation Association, and has been recognized as a leader in government evaluation by OMB and GAO. The evaluations conducted by or sponsored by ECA/P comprise both foreign policy and operational assessments and typically look at program outcomes with regard to their link to the respective exchange program’s legislative mandate, established program goals, and to U.S. strategic foreign policy goals.


Integral to the ongoing ECA Evaluation Program, ECA develops a multi-year Evaluation Schedule that specifies the ECA exchange programs to be evaluated in the next two- to three-year cycle. (The ECA Evaluation Schedule – FY 2005-2006 is attached.) ECA/P generally contracts with independent evaluators to design evaluation projects; collect, measure and analyze quantitative and qualitative data; and report findings. The contractors, selected through competitive processes, are independent social science research firms and consultancies recognized for the quality of their products and for technical expertise in their respective fields.

For each evaluation, the ECA/P evaluation officer develops the scope of work, manages the contract, reviews all evaluation work conducted by the contractor, and provides technical oversight. ECA/P evaluation officers collaborate to guarantee methodological consistency among all ECA/P-sponsored evaluations. ECA/P staff also provides the contractors with the relevant ECA alumni and/or participant lists for each evaluation, from which the contractors draw scientifically valid samples; in a small number of evaluations, contractors utilize the program’s entire population (a census).


The external contractors are responsible for providing ECA/P with a proposed and actual task-by-task work plan, with corresponding timelines, in accordance with ECA’s Evaluation Methodology and Plan for Use by External Contractors (see attachment).


ECA’s exchange programs are specifically designed to respond to the need for citizen connections and communication, professional and cultural linkages, educational and institutional linkages, and community linkages between the United States and countries worldwide.

The evaluations will fall under one of three ECA program elements:

The Office of Citizen Exchanges manages professional, youth, and cultural programs through grants with non-profit American institutions, including community organizations, professional associations, and universities to support two-way exchange partnerships with like institutions abroad. All of the programs are thematically based, with a wide range of projects, and are related to specific, global regions.

The Office of International Visitors brings nearly 5,000 International Visitors to the United States annually. These visitors are identified as “future leaders” in the fields of labor, government, education, business, media, the sciences and the arts. Each visitor, whether traveling individually or in a group, confers with professional colleagues and U.S. counterparts, and interacts with U.S. citizens from geographically diverse areas under the auspices of approximately 100 local international visitors’ councils.

The Office of Academic Programs supports several initiatives. It manages the Fulbright Program, which facilitates exchanges for scholars, advanced researchers, notable individuals, and university students, and which continues to be the flagship exchange program of ECA and its predecessor, the U.S. Information Agency. The Office of Academic Programs also supports English language programming, including English instruction and pedagogic materials; undergraduate programs for U.S. and foreign students; and an extensive network of overseas educational advising centers for foreign students.

ECA Evaluation Structure


ECA evaluations look for the “results” of programs and use a four-level approach to categorize those results. ECA also uses, where possible, four levels of analysis. Since all programs administered by ECA fall under the Fulbright-Hays Act, mutual understanding between cultures is the common focus of all ECA evaluations. ECA also asks questions related to thematic and program-specific (or legislative mandate) goals. ECA’s conceptual framework for evaluation is adapted from the work of Donald Kirkpatrick, Professor Emeritus at the University of Wisconsin. Kirkpatrick’s four-level framework is recognized internationally and used extensively in the private sector by organizations including PeopleSoft, Cisco, and Caterpillar.


Please refer to the attached document ECA Evaluation Structure for more details on ECA’s conceptual framework for evaluation, including the levels of results, levels of analysis, and question structure and types.


Question Structure


In order to help ECA comply with performance and reporting requirements – and to track ECA’s progress in meeting its performance indicators – ECA evaluations ask questions associated with (a) activity goals, (b) mutual understanding, and (c) thematic area. In addition, data collection instruments developed by external contractors in collaboration with ECA/P contain program-specific questions.


Some program-specific questions for ECA’s flagship Fulbright Program include:


  • What changes did you help initiate in your host institution during your program?

  • Have you continued to collaborate with colleagues from your host institution?

  • How has your Fulbright experience impacted your subsequent educational and professional experiences and achievements?


ECA/P also incorporates elements of the American Customer Satisfaction Index™, developed by Claes Fornell at the University of Michigan, and already approved for government use by OMB.


3. Efforts to Minimize Burden and Use of Technology


To limit respondent burden, sampled universes are carefully developed by ECA/P’s external contractors and are program-specific. Census populations may also be used for ECA programs with a small number of participants and/or alumni. Survey questions, as previously mentioned under “Question Structure” in Item 2 above, are limited to four primary content areas that are applied to ECA evaluations, with a limited number of additional questions that are specific to individual ECA programs and their goals and objectives. In general, the development of survey questions is coordinated among ECA/P staff, ECA/P’s external contractors, and ECA program-level managers and staff. The parties thoroughly review the questions posed in order to fine-tune them, limit the number of questions asked, and be precise about what to measure and what results may be anticipated.


Electronic surveys are increasingly commonplace in research and evaluation and are proving to be both time- and cost-effective to develop, administer and tabulate. Therefore, and in accordance with the “electronic option” requirements of the Government Paperwork Elimination Act, this information collection utilizes information technologies to offer an electronic option for data collections that were previously paper-based. In fact, the use of electronic surveys is vital to evaluations conducted under the ECA Evaluation Program, as sample sizes range from the hundreds to the thousands.


Information technology significantly reduces the burden of labor on the ECA/P external contractor in administering surveys (via E-mail and/or Internet), as well as during data cleaning and analysis. Electronic responses are either E-mailed and then transferred into a pre-existing database, or automatically downloaded into pre-existing databases developed for each evaluation. Information technology, especially web-based surveys, also significantly reduces the data entry burden on ECA alumni and participant respondents, ECA partner organization respondents, ECA/P, and ECA program managers at different points in the evaluation process.


Web- or electronic-based systems facilitate respondents’ data entry across computer platforms. One innovative feature of many of the web systems is the thorough editing of all submitted data for completeness, validity and consistency. Editing is performed as data are entered. Most invalid data cannot enter the system, and questionable or incomplete entries are called to the respondents’ attention before they are submitted to the evaluators. Web-based surveys employ user-friendly features such as automated tabulation; data entry with custom controls such as checkboxes and radio buttons; data verification and error messages for easy online correction; standard menus; and predefined charts and graphics. All these features facilitate the analysis and reporting processes, provide useful and rapid feedback to the data providers, and reduce burden.
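For illustration only, the sketch below shows the kind of entry-time editing described above. The field names and validation rules are hypothetical and are not drawn from any actual ECA/P or contractor system:

```python
# Illustrative sketch only: the field names and rules below are hypothetical
# and do not represent an actual ECA/P contractor system. The idea matches
# the text above: edit each entry as it is submitted, so invalid data cannot
# enter the system and incomplete entries are flagged for the respondent.

REQUIRED_FIELDS = {"program_year", "host_country", "satisfaction"}
VALID_SATISFACTION = {"very satisfied", "satisfied", "dissatisfied", "very dissatisfied"}

def validate_response(response: dict) -> list:
    """Return a list of problems; an empty list means the entry may be submitted."""
    problems = []
    for field in sorted(REQUIRED_FIELDS - response.keys()):
        problems.append("Missing required field: " + field)
    year = response.get("program_year")
    if year is not None and not 1946 <= int(year) <= 2004:
        problems.append("program_year is outside the valid range")  # consistency check
    rating = response.get("satisfaction")
    if rating is not None and rating not in VALID_SATISFACTION:
        problems.append("satisfaction must be one of the predefined choices")
    return problems

# The respondent would see these messages before the form could be submitted:
print(validate_response({"program_year": 1998, "satisfaction": "satisfied"}))
# -> ['Missing required field: host_country']
```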


Technology will also be used in the following ways:


  • ECA/P’s external contractors will use existing ECA program office and partner organization databases to contact alumni and participants. ECA’s partner organizations are not-for-profit organizations awarded ECA grants for the purpose of administering exchange programs on ECA’s behalf.

  • The contractors will use electronic data collection methods when developing, testing and administering surveys. The contractors may use both electronic and web-based data collection mechanisms.

  • ECA/P will also rely on information technology to meet records maintenance requirements, as appropriate for each data collection.


Although electronic means are the primary and preferred means of conducting all data collections under the ECA Evaluation Program, ECA/P acknowledges that some respondents still require printed and mailed documents, especially when no electronic means are known to be available to them. Therefore, some of ECA/P’s evaluations will use parallel electronic and paper instruments to ensure that its customer base is truly reflected in evaluation results and feedback. In such cases, respondents will have the option to supply hand-written responses on paper instruments. Respondents could be anywhere in the world, with extremely varied levels of electronic access available to them; ECA/P and its contractors therefore need to plan accordingly.


The back-office components that support this business process analyze digital data. Paper-based collections are currently manually entered into back-office systems for inclusion in final data analysis. The electronic input option completes an end-to-end electronic process from collection to analysis while leaving the paper response option in place. Based on ECA/P’s recent experiences in administering data collection instruments by electronic means (via email and web-based), ECA/P estimates that 60% of the responses under this collection will be collected electronically – as indicated on the accompanying OMB Form 83-I.

4. Duplication


The ECA Evaluation Program as a whole has been structured to best ensure the regular assessment of ECA’s exchange programs and to guarantee consistency in data collection methods and instruments, while avoiding duplication. Research questions relate specifically to ECA programs, products, and services. The Department’s Forms Clearance Officer and ECA/P staff ensure that evaluation and research questions do not involve duplicate efforts. Since the ECA Evaluation Program collects information at the bureau level only, ECA ensures oversight of its evaluation activities to avoid duplication and over-surveying of respondents or of programs.


To avoid duplicate information collections, ECA/P publishes evaluation results and shares final reports with the ECA program offices. ECA/P also widely disseminates evaluation findings to other bureaus within the Department (including U.S. Missions), as well as to outside stakeholders, including ECA partner organizations, not-for-profit organizations, professional evaluator associations, OMB, and Congress.


5. Minimizing Burden on Small Businesses


This information collection will have no or minimal impact on small businesses. Potential respondents are typically applicants, participants, alumni or administrators of ECA’s exchange programs. However, a smaller number of potential respondents may also include U.S. hosts of exchange participants, such as host families and small host businesses. Burden in these cases will be limited to the short period of time (30 to 60 minutes) it takes for these hosts to voluntarily complete a survey instrument or be interviewed in conjunction with an evaluation.


Small businesses that specialize in research and evaluation services may be contracted by ECA to conduct an evaluation or assessment under this information collection.


6. Consequences of Less Frequent Collection


If the information is not collected, ECA and the Department of State will be unable to document the effectiveness, impacts or outcomes of its exchange programs. In addition, ECA and the Department will not be able to meet accountability requirements or assess the degree to which programs are meeting their goals. Moreover, the Department will be unable to comply fully with the congressional and executive mandates, including OMB’s mandates, to evaluate and report on the results of exchange programs.


Currently, the only other mechanisms for assessing near-, intermediate-, and long-term results and outcomes deriving from exchange programs are ad hoc anecdotal reporting by U.S. Embassies overseas and end-of-project reports by the partner organizations implementing the exchange programs. Without this information collection, ECA does not have a way to gather, analyze and report statistically significant independent results data.


Each evaluation conducted under the ECA Evaluation Program is expected to be a one-time information collection, although some ECA programs may undergo follow-up evaluation(s) as needed to measure changes and results over a longer period of time. Most individual respondents will be queried once, although current ECA exchange participants could be queried several times – before, during and after their exchange program.


Evaluations are scheduled to provide timely and critical data and analysis for inclusion in results reporting mandated by GPRA, OMB’s PART and the President’s Management Agenda. They will also provide timely information to the ECA program offices on a regular basis, as they work to improve program design, management, and efficiencies. Collecting information less frequently would prevent the Department of State from making optimal use of this data for reporting, and program planning and management purposes.


7. Special circumstances


There are no special circumstances.


8. Consultation


Public Notice #4827 was published in the Federal Register on September 10, 2004 (Vol. 69, No. 175, pages 54824-54825) and requested comments from the public. No comments were received during the 60-day public comment period.


In addition, ECA/P undertakes extensive consultation for each evaluation with the appropriate ECA program offices and other Department Bureaus and partner organizations. ECA/P also confers with its external contractors on evaluation design and statistical information to confirm that project designs comply with standard evaluation practices, are statistically sound and user-friendly. Potential questionnaire topics and questions are discussed with ECA program offices and revisions are made as appropriate.


At the outset of each evaluation, ECA/P’s external contractors conduct extensive consultations with key contacts in ECA program offices and ECA partner organizations to determine the availability and quality of existing program and participant data. The contractors also rely on consultation with these stakeholders during the development of surveys, in-depth interview questionnaires and focus group protocols. During the proposal review phase for evaluations, trained and experienced social science analysts in the Department’s Bureau of Intelligence and Research review questionnaires and proposed methodology to ensure they are appropriate for the particular project and are consistent with recognized standards in quantitative and qualitative research and evaluation work. Following an in-depth vetting process, surveys and other data collection documents are tested for user-friendliness, clarity and understandability by a small group of ECA alumni or participants (fewer than 10) similar to the proposed recipient universe.


Final evaluations are shared with professional evaluators throughout the world via professional associations and higher education institutions for review and comment, thereby allowing independent and public scrutiny of evaluation projects from beginning to end.





9. Paying Respondents


ECA does not provide payments, gifts or other forms of remuneration to respondents of its various evaluations under the ECA Evaluation Program.


10. Assurance of Confidentiality

ECA/P and its external contractors follow all procedures and policies as stipulated under the Privacy Act of 1974. As a general rule, all instruments used in an evaluation include introductory information that explains the purpose of the evaluation and states that participation is voluntary and confidential.

All ECA evaluation respondents are informed of the following:

    • All personal information collected through surveys, interviews and focus groups is considered confidential.

    • All responses will be coded to ensure the confidentiality of individual responses.

    • Data collected will not be shared, sold or used for fundraising or any other purpose unrelated to the evaluation.

    • Survey data and findings will be used only in an aggregate form for the express purposes of fulfilling the data needs of ECA’s Evaluation Program and ECA’s associated planning and reporting requirements.


11. Sensitive Questions


Questions, in general, are carefully composed and structured to avoid being sensitive in nature to respondents. However, on occasion, questions may be asked of foreign participants – on an optional basis – about their religious or ethnic affiliation. For example, this may be done for U.S. foreign policy reasons related to outreach to and engagement of Muslim and Arab audiences. This type of sensitive information will be used only to provide demographic data.


12. Estimate of Hour Burden


The ECA evaluations are generally one-time collections from specific ECA-related respondent types. However, respondents may be surveyed more than once and ECA programs may be evaluated more than once.


Respondents may include U.S. and foreign applicants, current exchange visitor participants (J-1 visa) and alumni of ECA exchange programs, program administrators, U.S. and foreign partner or grantee organizations, U.S. and foreign hosts of exchange participants, and other similar types of respondents associated with ECA’s exchange programs. Occasionally, participants in ongoing exchange or alumni activities may also be asked for periodic updates (e.g., current contact and work information). In such cases, responses are part of normal, mandatory grant and program monitoring.


The number of respondents and responses to evaluations conducted under the ECA Evaluation Program varies from year to year depending on the specific ECA exchange programs being evaluated and the scope of the evaluations (i.e., the study population). For example, newer programs will typically have smaller participant and alumni populations, whereas older programs, such as the Fulbright Program, will have larger ones. Therefore, response rates are relative to the study participant/alumni population (or universe). In particular, it should be noted that an occasional single, large evaluation might skew one year’s cumulative responses upward.


As indicated on the accompanying OMB Form 83-I, ECA/P anticipates 2,617 annual respondents to this generic information collection. This number is the average of ECA Evaluation Program’s cumulative respondent numbers for each of the past three years (i.e., 3,484 in 2001; 1,585 in 2002; and 2,782 in 2003).


In general, respondent burden estimates may vary between 30 and 90 minutes, depending on the data collection method and the exchange program being evaluated. Pre-testing the data collection instruments also helps to minimize response burden. Average hour burden estimates provided on data collection instruments will be based on past instruments.


The estimated annual hour burden, as indicated on the OMB Form 83-I, is 1,962 hours. This number is based on an estimated 2,617 respondents multiplied by an average of 45 minutes (0.75 hours) per response.
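As a check on the arithmetic behind the two estimates above (the product is evidently truncated to whole hours on the Form 83-I):

```latex
\[
\frac{3{,}484 + 1{,}585 + 2{,}782}{3} = \frac{7{,}851}{3} = 2{,}617 \ \text{respondents per year}
\]
\[
2{,}617 \times 0.75 \ \text{hours} = 1{,}962.75 \approx 1{,}962 \ \text{burden hours}
\]
```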


By way of example, the figures in the table below reflect the burden results of the U.S. Fulbright Scholar Program Survey (completed in 2001) and the U.S. Fulbright Student Program Survey (completed in September 2004). These surveys, which were administered under ECA/P’s current information collection (OMB No. 4115-0118), are similar to surveys anticipated for future ECA evaluations.



Evaluation Survey                              Number of     Responses per   Average Hours   Response Burden   % of Responses
                                               Respondents   Respondent      per Response    (Hours)           Collected Electronically
U.S. Fulbright Scholar Program Survey (2001)   801           1               0.75            600.75            80%
U.S. Fulbright Student Program Survey (2004)   1,083         1               0.50            541.50            100%


13. Estimate of Cost Burden


ECA does not expect respondents of the ECA Evaluation Program to incur any costs other than that of their time voluntarily expended to respond to the data collection.


14. Estimated Cost to the Federal Government


The estimated annual cost to the Federal Government of this information collection will be approximately $2.6 million. This comprises ECA/P’s estimated $2 million annual budget and associated staff and administrative costs of approximately $600,000. To date, the average cost of an ECA/P-sponsored survey conducted by external contractors is $300,000, paid out of the $2 million annual budget for the ECA Evaluation Program. This average per-survey/evaluation cost includes all external contractor work related to the evaluation, as outlined in ECA’s Evaluation Methodology and Plan, as well as contractor travel.


ECA/P staff oversee the evaluation projects and contracts. The calculation for staff costs is $75,220 (annual Washington locality pay for a GS-13/3 evaluation officer, the office average) x 1.264 (a 26.4% fringe benefit rate) = $95,078 annual cost per FTE; $95,078 x 6 FTE = $570,468. Other operating expenses, such as equipment, printing, publications, and related mailing, are nominal and are absorbed in ECA’s operating expense budget.
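Restated as a single computation using the document’s own figures (the per-FTE product is rounded to the nearest dollar):

```latex
\[
\$75{,}220 \times 1.264 \approx \$95{,}078 \ \text{per FTE}
\qquad
\$95{,}078 \times 6 \ \text{FTE} = \$570{,}468
\]
```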


There are no other costs associated with this information collection.


15. Reason for Change in Burden


This is a new collection of information.


16. Publication of Results


The external contractor is responsible for submitting to ECA/P all evaluation data collection information, tabulation, and analysis, and a Final Report and Executive Summary for publication. The contractor will also submit preliminary and draft reports to ECA/P for iterative review.

ECA/P disseminates Final Reports to ECA program managers, partner organizations and other internal and external stakeholders. In addition, Final Reports are made available to the general public – notified through public announcements to professional evaluation associations and institutions of higher education – in hard copy and electronic format. Executive Summaries and one-page Evaluation Summaries are also produced for each completed evaluation and are available on ECA/P’s web page at http://exchanges.state.gov/education/evaluations.


These reports are not published by any third-party entity. Evaluation contractors are forbidden contractually from publishing results unless ECA has made a specific exception – all products of the collections are the property of ECA.


Included with this submission are the Executive Summary and Evaluation Summary for two evaluations completed under the ECA Evaluation Program: Outcome Assessment of the U.S. Fulbright Scholar Program and Evaluation of the English Language Fellows Program. These documents help demonstrate a sampling of methodologies, surveys, data analyses and results associated with the ECA Evaluation Program. The full Final Reports, including Appendices, are available upon request.


17. Request to Not Display Expiration Date


ECA will display the assigned expiration date of the generic information collection on all information collection instruments.


18. Exceptions to the Certification


There are no exceptions.

B. Collections of Information Employing Statistical Methods


1. Respondent Universe


The ECA Evaluation Program evaluates each ECA exchange program separately. Because each program typically involves participants from many countries (or going to many countries) over several years, the first step in defining the respondent universe is for the ECA/P evaluation officers to consult with the particular program office to determine the scope of the evaluation; i.e., the countries from which the respondents come and the time frame.


After the countries and time frame have been selected, the next step is to ensure that the final sample is representative across these criteria, with anticipated responses within each category large enough to provide adequate power for detecting statistically significant differences. Because the number of participants in any given ECA exchange program differs significantly by country, a census of participants may be necessary in one country, whereas a sample may suffice in others. Because it is difficult to obtain updated address information for alumni scattered across the globe, often where the communications infrastructure is not optimal, additional names are drawn at the time of the initial sample to ensure enough names for replacement. Replacement is conducted so that the sample distribution remains proportional to the original criteria.
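For illustration only, the sketch below shows one way such a proportional replacement reserve could be drawn. The country strata, sizes, and rates are hypothetical and are not taken from any ECA/P evaluation design:

```python
# Illustrative sketch of the replacement procedure described above; the
# country strata, sizes, and rates are hypothetical, not an actual ECA/P
# evaluation design.
import random

def draw_with_reserve(population_by_country, sampling_rate, reserve_rate):
    """Draw a primary sample per country stratum plus a proportional reserve
    of extra names, so replacements keep the sample proportional to strata."""
    primary, reserve = {}, {}
    for country, alumni in population_by_country.items():
        n_primary = round(len(alumni) * sampling_rate)
        n_reserve = round(n_primary * reserve_rate)
        drawn = random.sample(alumni, min(len(alumni), n_primary + n_reserve))
        primary[country] = drawn[:n_primary]
        reserve[country] = drawn[n_primary:]  # used when a sampled alumnus is unreachable
    return primary, reserve

population = {
    "Country A": ["A-%d" % i for i in range(400)],  # large stratum: sample
    "Country B": ["B-%d" % i for i in range(25)],   # small stratum: near-census
}
primary, reserve = draw_with_reserve(population, sampling_rate=0.5, reserve_rate=0.2)
print({c: len(names) for c, names in primary.items()})
# -> {'Country A': 200, 'Country B': 12}
```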


By way of example, Table 1 below illustrates the respondent sample for the U.S. Fulbright Student Program Survey, which was administered 100% electronically (via E-mail with a link to the web-based survey) during the summer of 2004.


Table 1. Respondent sample, U.S. Fulbright Student Program Survey (2004)

                          1980-1990              1991-1995              1996-2000              Sample Total
Host Region               Grantees  Email  %     Grantees  Email  %     Grantees  Email  %     Grantees  Email  %
Africa (sub-Saharan)          100     70   70%       117     80   68%       116     80   69%       333    230   69%
East Asia and Pacific         162     88   54%       150     95   63%       170     96   56%       482    279   58%
Eastern Europe and NIS        103     57   55%       105     61   58%       137     96   70%       345    214   62%
Near East                     104     60   58%       110     75   68%       117     77   66%       331    212   64%
South Asia                     98     54   55%       125     73   58%       129     82   64%       352    209   59%
Western Europe                373    218   58%       321    198   62%       290    167   58%       984    583   59%
Western Hemisphere            152     89   59%       155    103   66%       165     95   58%       472    287   61%
Total All Regions           1,092    636   58%     1,083    685   63%     1,124    693   62%     3,299  2,014   61%

(% Email = # Email / # Grantees.)
Survey Goals:

1. Initial sample size (potential E-mails): 3,299
2. Number able to locate via web (50%): 1,668
3. Number of valid responses (60%): 1,000

Survey Results:

1. Final sample size (contacted via E-mail): 1,723
2. Number able to locate via web (100%): 1,723
3. Number of valid responses (63%): 1,083
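The reported 63% valid-response rate is consistent with the final figures above:

```latex
\[
\frac{1{,}083}{1{,}723} \approx 0.629 \approx 63\%
\]
```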



In addition, Table 2 below illustrates the ECA Evaluation Program’s cumulative respondent numbers and response rates for each of the past three years.


Table 2.

                                      2001         2002         2003
Universe of alumni surveyed (N)       5,272        2,757        4,471
Number of respondents (R)             3,484        1,585        2,782
Response rate                         66%          57%          62%
Program years represented             1976-2000    1976-2001    1993-2002


2. Procedures for Collecting Information


Data collection methods and procedures used under the ECA Evaluation Program may vary according to the evaluation project and the countries in which data are being collected. In general, however, ECA evaluations utilize a combination of the following methods: paper surveys, web-based surveys, face-to-face structured interviews, telephone interviews, in-depth open-ended interviews, and focus groups. Factors used to determine the data collection methods in any given country include the availability of Internet access and telephone service, the reliability of the postal service, and cultural and political attitudes (and apprehensions) towards surveying. For each evaluation, the data collection methods are discussed in detail, and specific country plans are developed with a contingency plan in place.


Alumni names and most recent contact information are provided to the external contractor by ECA’s partner organizations administering the exchange program and/or by ECA program offices. Contact information is updated by the contractor, in conjunction with alumni associations, in-country partners, the Public Affairs Sections of the Embassies, and the program office. All alumni in the sample are sent an initial e-mail or a written notice, or are contacted by telephone, to inform them of the evaluation and ask them to participate. Notices of the evaluation are also posted on the State Department’s ECA alumni website, https://alumni.state.gov, and in ECA alumni newsletters and mailings, where appropriate.


Statistical Methodology


Survey responses are not weighted. The research design is such that the sample should be representative of the country populations and thus parallel the defined universe. There are no unusual problems requiring specialized sampling procedures.


Data are usually collected only once from any given individual during a specific evaluation. However, some evaluations may require that data be collected from participants before, during and after their exchange programs.


3. Methods to Maximize Response


Data collection instruments are always pre-tested with a small group of people similar to the evaluation target audience (see details in Item 4 below). Data collection methods are tailored to fit the prevailing conditions in each country involved in an evaluation. In addition, initial contact E-mails or letters are sent, and/or telephone calls are made, to prospective respondents, and follow-up reminders are sent periodically to non-respondents encouraging them to respond. (Refer to Table 2 above for illustrations of sample sizes and corresponding response rates for the ECA Evaluation Program.)


4. Testing of Procedures


The survey instruments are always pre-tested with fewer than 10 individuals to ensure clarity, brevity, relevance, user-friendliness, understandability, and sensitivity, and to confirm that most alumni will be willing to provide answers. Pre-tests may be conducted by distributing the survey by E-mail or regular mail, followed by individual telephone conversations with the contractor/researcher to go over results. Pre-tests may also be conducted in focus groups, with individuals meeting together to go over the instruments. In all cases, pre-tests have been extremely useful for clarifying instructions and questions, refining response categories, and even adding new questions.


5. Consultations on Statistics


ECA/P’s external contractors selected to conduct evaluations under the ECA Evaluation Program provide team members who specialize in statistics to assist with the research design, data collection, and analysis. ECA/P has worked with such recognized research firms as SRI, Aguirre International, American Institutes for Research (AIR), T.E. Systems Inc., and ORC Macro International.





