OMB Control Number: 1405-0158

U.S. Department of State

Bureau of Educational and Cultural Affairs (ECA)

Office of Policy and Evaluation (ECA/P)


Supporting Statement – Part B

Generic Clearance Information Collection for ECA Evaluation Program

B. Collections of Information Employing Statistical Methods


1. Respondent Universe


The ECA Evaluation Program evaluates each ECA exchange program separately. Because each program typically involves participants from many countries (or sends participants to many countries) over several years, the first step in defining the respondent universe is for ECA/P evaluation officers to consult with the relevant program office to determine the scope of the evaluation, i.e., the countries from which respondents are drawn and the time frame covered.


After the countries and time frame have been selected, the next step is to ensure that the final sample is representative across these criteria, with enough anticipated responses within each category to provide adequate statistical power for detecting significant differences. Because the number of participants in any given ECA exchange program differs substantially by country, a census of participants may be necessary in one country, whereas a sample may suffice in another. Because updated address information is difficult to obtain for alumni scattered across the globe, often in places where the communications infrastructure is not optimal, additional names are drawn at the time of the initial sample to ensure enough names for replacement. Replacement is conducted so that the sample distribution remains proportional to the original criteria.
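
The allocation and replacement approach described above can be sketched in a few lines of code. The following Python illustration uses entirely hypothetical country names, alumni counts, and thresholds (none are actual program data): small strata receive a census, larger strata are sampled proportionally, and an oversample is held back for replacement:

    # Illustrative sketch only: hypothetical strata and parameters, not program data.
    import random

    random.seed(0)  # fixed seed so the illustration is reproducible

    # Hypothetical universe of alumni records, keyed by country stratum.
    universe = {
        "Country A": [f"A-{i}" for i in range(40)],   # small stratum -> census
        "Country B": [f"B-{i}" for i in range(500)],
        "Country C": [f"C-{i}" for i in range(300)],
    }

    TARGET_TOTAL = 200      # assumed overall sample-size target
    CENSUS_THRESHOLD = 50   # strata this small are taken in full
    REPLACEMENT_RATE = 0.5  # oversample drawn to replace unreachable alumni

    total_alumni = sum(len(a) for a in universe.values())
    sample, replacement_pool = {}, {}
    for country, alumni in universe.items():
        if len(alumni) <= CENSUS_THRESHOLD:
            sample[country] = list(alumni)   # census: include everyone
            replacement_pool[country] = []
            continue
        # Proportional allocation keeps the sample distribution matching the universe.
        n = round(TARGET_TOTAL * len(alumni) / total_alumni)
        extra = round(n * REPLACEMENT_RATE)
        drawn = random.sample(alumni, min(n + extra, len(alumni)))
        sample[country] = drawn[:n]
        replacement_pool[country] = drawn[n:]  # substitutes come from the same stratum

    for country in universe:
        print(country, len(sample[country]), "sampled;",
              len(replacement_pool[country]), "held for replacement")

Because the replacements are drawn from the same stratum at the same time as the initial sample, substituting an unreachable alumnus does not disturb the proportional distribution.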


By way of example, Table 1 below illustrates the respondent sample for the U.S. Fulbright Student Program Survey, which was administered entirely electronically (via E-mail with a link to the web-based survey) during the summer of 2004.


Table 1. U.S. Fulbright Student Program Survey: respondent sample by host region and program years

Host Region              | 1980-1990               | 1991-1995               | 1996-2000               | Sample Total
                         | Grantees  Email  %Email | Grantees  Email  %Email | Grantees  Email  %Email | Grantees  Email  %Email
-------------------------+-------------------------+-------------------------+-------------------------+------------------------
Africa (sub-Saharan)     |    100      70     70%  |    117      80     68%  |    116      80     69%  |    333     230     69%
East Asia and Pacific    |    162      88     54%  |    150      95     63%  |    170      96     56%  |    482     279     58%
Eastern Europe and NIS   |    103      57     55%  |    105      61     58%  |    137      96     70%  |    345     214     62%
Near East                |    104      60     58%  |    110      75     68%  |    117      77     66%  |    331     212     64%
South Asia               |     98      54     55%  |    125      73     58%  |    129      82     64%  |    352     209     59%
Western Europe           |    373     218     58%  |    321     198     62%  |    290     167     58%  |    984     583     59%
Western Hemisphere       |    152      89     59%  |    155     103     66%  |    165      95     58%  |    472     287     61%
Total All Regions        |  1,092     636     58%  |  1,083     685     63%  |  1,124     693     62%  |  3,299   2,014     61%

Note: Within each period, "Grantees" is the number of grantees, "Email" is the number with E-mail addresses, and "%Email" is the number with E-mail addresses as a share of grantees.
Survey Goals:

1. Initial sample size (potential E-mails): 3,299
2. Number able to locate via web (50%): 1,668
3. Number of valid responses (60%): 1,000


Survey Results:

1. Final sample size (contacted via E-mail): 1,723
2. Number able to locate via web (100%): 1,723
3. Number of valid responses (63%): 1,083
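
As a quick worked check of the figures above (illustrative arithmetic only, using the numbers reported in the goals and results):

    # Worked check of the goal and result figures reported above.
    initial_sample = 3299       # potential E-mail contacts
    print(round(initial_sample * 0.50))   # ~1,650; the goal table reports 1,668
    print(round(1668 * 0.60))             # ~1,001; the goal table reports 1,000

    contacted = 1723            # final sample actually reached via E-mail
    responses = 1083            # valid responses received
    print(f"{responses / contacted:.0%}") # 63%, matching the reported response rate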



In addition, Table 2 below illustrates the ECA Evaluation Program’s cumulative respondent numbers and response rates for each of the past three years.


Table 2.


                                       2001          2002          2003
N = Universe of alumni for surveying   5,272         2,757         4,471
R = Number of respondents              3,484         1,585         2,782
% = Response rate                      66%           57%           62%
Y = Program years represented          1976-2000     1976-2001     1993-2002


2. Procedures for Collecting Information


Data collection methods and procedures used under the ECA Evaluation Program may vary according to the evaluation project and the countries in which data are being collected. In general, however, ECA evaluations utilize a combination of the following methods: paper surveys, web-based surveys, face-to-face structured interviews, telephone interviews, in-depth open-ended interviews, and focus groups. Factors used to determine the data collection methods in any given country include the availability of Internet access and telephone service, the reliability of the postal service, and the cultural and political attitudes (and apprehensions) toward surveying. For each evaluation, the data collection methods are discussed in detail, and specific country plans are developed with a contingency plan in place.
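
To make this method-selection logic concrete, the sketch below encodes it as a simple rule of thumb. It is purely illustrative: the profile fields, rules, and method choices are assumptions for demonstration, not an actual ECA decision rule:

    # Purely illustrative: profile fields and rules are assumptions, not ECA policy.
    from dataclasses import dataclass

    @dataclass
    class CountryProfile:
        internet_access: bool      # is Internet access widely available?
        reliable_phone: bool       # is telephone service dependable?
        reliable_post: bool        # is the postal service dependable?
        survey_apprehension: bool  # cultural/political sensitivity toward surveying

    def choose_methods(p: CountryProfile) -> list[str]:
        methods = []
        if p.survey_apprehension:
            # Sensitive contexts favor personal contact over impersonal instruments.
            methods.append("face-to-face structured interviews")
        if p.internet_access:
            methods.append("web-based survey")
        elif p.reliable_post:
            methods.append("paper survey")
        if p.reliable_phone:
            methods.append("telephone interviews")  # doubles as the contingency channel
        return methods

    print(choose_methods(CountryProfile(True, True, False, False)))
    # -> ['web-based survey', 'telephone interviews']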


Alumni names and the most recent contact information are provided to the external contractor by ECA's partner organizations administering the exchange program and/or by ECA program offices. Contact information is updated by the contractor in conjunction with alumni associations, in-country partners, the Public Affairs Sections of the Embassies, and the program office. All alumni in the sample are sent an initial e-mail or a written notice, or are contacted by telephone, informing them of the evaluation and asking them to participate. Notices of the evaluation are also posted on the State Department's ECA alumni website, https://alumni.state.gov, and in ECA alumni newsletters and mailings, where appropriate.


Statistical Methodology


Survey responses are not weighted. The research design is such that the sample is representative of the country populations and thus parallels the defined universe.

There are no unusual problems requiring specialized sampling procedures.


Data are usually collected only once from any given individual during a specific evaluation. However, some evaluations may require that data be collected from participants before, during, and after their exchange programs.


3. Methods to Maximize Response


Data collection instruments are always pre-tested with a small group of people similar to the evaluation target audience (see details in item 4 below). Data collection methods are tailored to fit the prevailing conditions in each country involved in an evaluation. In addition, initial contact E-mails or letters are sent, and/or telephone calls are made, to prospective respondents, and follow-up reminders are sent periodically to non-respondents encouraging them to respond. (Refer to Table 2 above for illustrations of sample sizes and corresponding response rates for the ECA Evaluation Program.)
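
A minimal sketch of such a reminder cycle appears below. The send_reminder() function, the 14-day cadence, and the three-reminder cap are hypothetical stand-ins for illustration, not ECA policy:

    # Minimal sketch: the cadence, cap, and send_reminder() are hypothetical.
    from datetime import date, timedelta

    REMINDER_INTERVAL = timedelta(days=14)  # assumed cadence
    MAX_REMINDERS = 3                       # assumed cap on follow-ups

    def send_reminder(alum: str) -> None:
        print(f"reminder sent to {alum}")   # stand-in for the actual E-mail/letter/call

    def follow_up(invited: set, responded: set, last_contact: dict,
                  reminders_sent: dict, today: date) -> None:
        # Only non-respondents are contacted again, and only on schedule.
        for alum in invited - responded:
            due = last_contact[alum] + REMINDER_INTERVAL <= today
            if due and reminders_sent.get(alum, 0) < MAX_REMINDERS:
                send_reminder(alum)
                last_contact[alum] = today
                reminders_sent[alum] = reminders_sent.get(alum, 0) + 1

    # Example: alum-1 has not responded and was last contacted a month earlier.
    follow_up({"alum-1", "alum-2"}, {"alum-2"},
              {"alum-1": date(2004, 6, 1), "alum-2": date(2004, 6, 1)},
              {}, date(2004, 7, 1))         # -> reminder sent to alum-1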


4. Testing of Procedures


The survey instruments are always pre-tested with fewer than 10 individuals to ensure that they are clear, brief, relevant, user-friendly, understandable, and appropriately sensitive, and that most alumni will be willing to provide answers. Pre-tests may be conducted by distributing the survey by E-mail or regular mail, followed by individual telephone conversations with the contractor/researcher to go over the results. Pre-tests may also be conducted in focus groups, with individuals meeting together to review the instruments. In all cases, pre-tests have proven extremely useful for clarifying instructions and questions, refining response categories, and even adding new questions.


5. Consultations on Statistics


ECA/P’s external contractors selected to conduct evaluations under the ECA Evaluation Program provide team members who specialize in statistics to assist with the research design, data collection, and analysis. ECA/P has worked with such recognized research firms as SRI, Aguirre International, American Institutes for Research (AIR), T.E. Systems Inc., and ORC Macro International.




