How Differences in Pedagogical Methods Impact ChalleNGe Program Outcomes

OMB: 0704-0506


SUPPORTING STATEMENT – PART B

B.  COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1.  Description of the Activity

The ChalleNGe Program currently comprises 34 programs in 29 states (plus the District of Columbia and Puerto Rico). The study involves administering an online survey to all of the classroom teachers at all 34 programs. Each program has approximately 5 to 6 classroom teachers. Accordingly, the potential respondent universe includes approximately 190 teachers (the exact number fluctuates with position vacancies). No sampling methods will be used to select respondents; all teachers will be invited to participate.

Table 1. Potential respondents

Program(s) | Total programs | Total teachers | Likely responders at 80% response rate | Likely responders at 90% response rate
All        | 34             | 190            | 152                                    | 171
GED only   | 18             | ~101           | ~80                                    | ~91
Non-GED    | 16             | ~89            | ~72                                    | ~80



Based on an expected response rate of 80 percent, we estimate that our sample will include about 152 teachers. This will provide us with sufficient power to discern differences in responses to our survey measures. If we achieve a higher response rate of around 90 percent, we expect about 171 responses in total. In either case, we plan to break down some of the answers by program type (GED vs. Credit Recovery/Diploma), in which case we expect 80 to 91 responses in the GED group and 72 to 80 responses in the Credit Recovery/Diploma group, depending on the response rate achieved.
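
For reference, a minimal sketch of this arithmetic (the teacher counts are those in Table 1; simple rounding is used, so results may differ by a respondent or two from the figures above):

    # Expected respondent counts under the assumed response rates.
    # Teacher counts are taken from Table 1; simple rounding is used here,
    # so results may differ slightly from the figures reported in the text.
    teacher_counts = {"All": 190, "GED only": 101, "Non-GED": 89}
    for group, n_teachers in teacher_counts.items():
        for rate in (0.80, 0.90):
            expected = round(n_teachers * rate)
            print(f"{group}: about {expected} expected responses at {rate:.0%} response")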


Based on conversations with Program Directors and the steps we plan to take to maximize our response rate (see below), we believe our estimated response rate of 80 percent is conservative and that we are likely to achieve a higher rate. However, because there has been no previous data collection, we have no prior response rate for comparison.


Table 2 presents a power analysis for detecting differences on categorical variables between the GED and non-GED program groups. Naturally, our ability to discern differences increases with the response rate and with the effect size. If the response rate is around 80 percent for both groups, we expect reasonable power to detect only large differences between groups (effect sizes of 0.5 and higher), whereas if all teachers respond, we should be able to detect smaller differences, with effect sizes between 0.3 and 0.4.
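
The power values in Table 2 depend on the particular test and assumptions behind the original calculation, which are not fully specified here. As an illustrative sketch only, the following computes power for a two-group comparison of proportions (Cohen's h effect size, two-sided test at alpha = 0.05) at the expected group sizes from Table 1, using the statsmodels package; it is not expected to reproduce the table values exactly.

    # Illustrative power calculation for comparing GED vs. non-GED teachers on a
    # binary survey measure. The test (two-sample z-test of proportions) and the
    # effect-size convention (Cohen's h) are assumptions for this sketch and may
    # differ from those underlying Table 2.
    from statsmodels.stats.power import NormalIndPower

    analysis = NormalIndPower()
    n_ged, n_non_ged = 80, 72  # expected responses at an 80% response rate (Table 1)
    for effect_size in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6):
        power = analysis.power(effect_size=effect_size,
                               nobs1=n_ged,
                               alpha=0.05,
                               ratio=n_non_ged / n_ged,
                               alternative="two-sided")
        print(f"effect size {effect_size:.1f}: power = {power:.2f}")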


2.  Procedures for the Collection of Information

The survey will be fielded online. ChalleNGe staff will provide the study team with the names and email addresses of all of the teachers. A unique user ID and password will be generated for each potential respondent. Each teacher will be sent an email, using prepared text, that describes the study, invites them to assist with the data collection by completing the survey, and directs them (via a URL or link to click) to the survey should they choose to participate. The email will also provide the user ID and password needed to complete the survey.
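
As an illustrative sketch only, unique credentials of this kind could be generated as follows; the ID format, password length, and function name are assumptions, and the actual survey platform may handle credential generation differently.

    # Illustrative generation of unique user IDs and passwords for survey invitees.
    # The ID format and password length are assumptions for this sketch.
    import secrets
    import string

    def make_credentials(teacher_emails):
        """Map each invitee's email address to a (user_id, password) pair."""
        alphabet = string.ascii_letters + string.digits
        credentials = {}
        for i, email in enumerate(sorted(set(teacher_emails)), start=1):
            user_id = f"CHALLENGE-{i:04d}"
            password = "".join(secrets.choice(alphabet) for _ in range(10))
            credentials[email] = (user_id, password)
        return credentials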

Table 2. Power Analysis

Effect size | Power at 80% response | Power at 90% response | Power at 100% response
0.1         | 3%                    | 3%                    | 5%
0.2         | 9%                    | 10%                   | 22%
0.3         | 23%                   | 27%                   | 55%
0.4         | 45%                   | 51%                   | 84%
0.5         | 69%                   | 75%                   | 97%
0.6         | 87%                   | 91%                   | 100%


Because our survey is designed as a census, we expect to capture variation across programs and geographic locations and to obtain sample sizes large enough to discern differences on our surveyed measures between categories of interest. We are using no sampling procedures; instead, we are surveying all teachers at each site. Below, we discuss measures of statistical accuracy based on the respondent universe outlined in Table 1, which determines our potential sample sizes.


Table 3 provides the statistical measures of precision we plan to achieve, broken down by potential response rate and confidence level. Note that these estimates are conservative, being based on an assumed 0.5 proportion of 'yes' answers to every question; as the observed proportions deviate from 0.5, the margins of error shrink. The calculations in the table are based on the formula for the margin of error of a z-based confidence interval for a population proportion.


Table 3. Conservative expected margins of error

Program | 90% confidence (80% response) | 95% confidence (80% response) | 90% confidence (90% response) | 95% confidence (90% response)
GED     | 0.09                          | 0.11                          | 0.09                          | 0.10
Non-GED | 0.10                          | 0.12                          | 0.09                          | 0.11
Total   | 0.07                          | 0.08                          | 0.06                          | 0.07


Assuming an 80-percent response rate, our margin of error for each question across all teachers is at most 7 percentage points at 90% confidence, meaning that the confidence interval for each question will be at most 14 percentage points wide. Our 95% confidence intervals will be at most 16 percentage points wide under the same response rate. Once the population of teachers is broken down into those who teach in GED versus non-GED programs, the sample sizes decrease and the confidence intervals widen, as shown in Table 3. Assuming a 90-percent response rate, we expect our confidence intervals to be slightly narrower: at most about 12 percentage points wide at 90% confidence and 14 percentage points wide at 95% confidence. Again, we expect actual intervals to be narrower than these bounds; these calculations represent a worst-case scenario.
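
These bounds follow from the standard formula MOE = z * sqrt(p(1 - p) / n) evaluated at p = 0.5. A minimal sketch of the calculation for the 80-percent response rate scenario (expected sample sizes from Table 1) is below; the results match the corresponding column of Table 3 up to rounding.

    # Conservative margin of error for a population proportion, computed at
    # p = 0.5 (the value that maximizes p*(1-p)) for the expected sample sizes
    # under an 80% response rate. Reproduces the 80%-response columns of Table 3
    # up to rounding.
    from math import sqrt

    z_values = {0.90: 1.645, 0.95: 1.960}                    # two-sided critical values
    sample_sizes = {"GED": 80, "Non-GED": 72, "Total": 152}  # from Table 1
    for group, n in sample_sizes.items():
        for confidence, z in z_values.items():
            moe = z * sqrt(0.5 * 0.5 / n)
            print(f"{group}: {confidence:.0%} confidence, margin of error = {moe:.2f}")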


We do not foresee any unusual problems and do not plan to implement special sampling procedures. We have chosen to reduce respondent burden by using a short survey that we expect to take at most 20 minutes to complete.

3.  Maximization of Response Rates, Non-response, and Reliability

We will take several steps to maximize our response rate. First, each program director will be sent a letter, signed by the DASD (RA-Resources), that describes the study and its importance to the ChalleNGe program and urges staff to consider participating in the survey. Directors will be asked to distribute the letter to all teaching staff. Second, the researchers will notify the director of each program when the invitation emails have been sent to the teachers. The directors will be asked to alert their staff that the survey invitations have been sent and to provide contact information should teachers have questions or not receive an email invitation. Staff size at each program is quite small, so we expect directors will likely do this in person, at a staff meeting or otherwise. Third, reminder emails will be sent to non-respondents one week and two weeks after the initial email; these emails remind the potential respondent about the survey and how to complete it should they choose to do so. Finally, one week after the second email reminder, a letter will be sent to each teacher who has not responded, explaining how to obtain their log-in information (if they did not receive it or have misplaced it) and when the survey will close. No other actions will be taken to address non-response.

Because there are no benefits to those who complete the survey and no costs to those who do not, and because the survey does not ask polarizing questions, we expect no selection bias among those who choose to participate, and we expect the study conclusions to generalize to the teachers who choose not to take the survey. Additionally, there is no disclosure risk: respondents' email addresses are not recorded; instead, each respondent is assigned a code identifier. Each survey taker is required to check a box informing them of this lack of disclosure risk, so they are likely to answer the questions honestly.

We ensure accuracy and reliability of responses in the following ways:

  • Most of the survey questions are multiple-choice, so there is little chance of the respondent misunderstanding a question.

  • Respondents are allowed to skip any question(s), so they have no motivation to give inaccurate answers. At the beginning of the survey, respondents are required to check a box informing them that they are permitted to skip questions.

  • Respondents are provided with contact information for survey administrators so that they can clarify any questions they find ambiguous.

  • Where possible, we have provided boxes for respondents to write in free-form answers, in case they do not find the multiple-choice options sufficient.

4.  Tests of Procedures

We tested the survey on teachers to increase the chances that our population of interest would find it easy to understand and complete. It was tested twice. It was first tested on three teachers at the Fort Gordon ChalleNGe site; feedback was solicited from the respondents on the ease of completing the survey, difficulties in understanding the questions, and suggestions for alternate wording or additional questions. Based on this feedback, revisions were made to the survey. The revised survey was then tested on three teachers at the Washington State ChalleNGe site; feedback was once again solicited, and no additional substantive comments or concerns were raised.


5.  Statistical Consultation and Information Analysis

a.  Dr. Yevgeniya (Jane) Pinelis at CNA was consulted on statistical aspects of the study design. Dr. Pinelis can be reached at (703) 824-2052.

b.  Dr. Lauren Malone at CNA will be responsible for the collection and analysis of the data. She can be reached at (703) 824-2741.


