
Measurement Development: Quality of Caregiver-Child Interactions for Infants and Toddlers


Supporting Statement, Part B

For OMB Approval (OMB No. 0970-0392)



November 10, 2011









B. STATISTICAL METHODS (USED FOR COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS)

This section addresses each of the five points outlined in Part B of the Office of Management and Budget (OMB) guidelines for the collection of information for the Measurement Development: Quality of Caregiver-Child Interactions for Infants and Toddlers (Q-CCIIT) project. The submission requests clearance for recruiting and sampling procedures, data collection instruments and procedures, plans for data analysis, and reporting of findings from the Q-CCIIT pilot and psychometric field tests. As noted in Part A, the Q-CCIIT project is a three-year project that started in 2010–2011, with a two-year data collection and reporting phase (FY 2011–2013).

B.1. Respondent Universe and Sampling Methods

Sampling and estimation procedures


The Q-CCIIT project will consist of a pilot test and a psychometric field test. Each phase has multiple sampling stages. The pilot test will consist of 60 classrooms from 2 geographic locations; the psychometric field test will include 400 classrooms from 10 geographic locations. Each phase will consist of three purposive sampling stages: geographic location, child care setting, and classroom.


A major goal of this data collection is to achieve the greatest cultural and economic diversity possible as well as a representative range of quality. To obtain the information needed to select locations and centers purposively for diversity, we will consider a number of different sources. First, we will receive information about Early Head Start program characteristics from the Early Head Start Family and Child Experiences Survey (Baby FACES) study. Second, in selecting locations, we will investigate state guidelines for ratio and group size (as a proxy for quality) as well as information from national studies and Quality Rating and Improvement Systems, if available. Third, we will consider the locations of universities in relation to the centers in an attempt to sample classrooms with high levels of maternal education and exemplary child care programs, which would add to the diversity we seek. Fourth, we will consult the Common Core of Data, which maintains demographic information on local elementary schools. Fifth, we can use U.S. Census data to find additional demographic information at the local level. All these sources will inform our purposive selection of geographic locations for sampling programs/settings and classrooms. See Section A.2 for more on the recruitment of child care settings into the Q-CCIIT project.


Pilot test. For the pilot phase, we will purposively select 2 locations. In each, we will select 5 to 10 center-based programs depending on how many are needed to yield 7-8 infant and 7-8 toddler classrooms (for a total of 15 each across 2 locations).3 Some of these may be mixed-age classrooms. We will select 15 family child care (FCC) settings within each of the locations. To select settings, we will consult lists from Child Care Resource and Referral agencies (CCR&Rs) in the same geographic areas as the centers and select them purposively to obtain diversity as described above. This will give us a total of 30 FCC settings in the pilot. From the center-based programs, we will have a total of 30 classrooms selected for cultural, linguistic, and economic diversity to the extent possible, with a minimum of 15 infant and 15 toddler classrooms total across all sites (the remaining classrooms may be mixed-age). The pilot data collection phase includes the Q-CCIIT observation and the caregiver background questionnaire.


Psychometric field test. For the psychometric field test, we will begin by selecting 10 Early Head Start programs from the 89 currently participating in the Baby FACES study. We will target programs in large urban or suburban geographic areas that include a university, so that we can access a population that is diverse in maternal education, income, culture, and language. To ensure representation of the child care population, we will also include Early Head Start programs in rural areas, where community child care providers have differing access to supports. We will also choose these programs based on proximity to other Early Head Start programs.


For each of the 10 Early Head Start programs selected, we will attempt to obtain information about the geographic boundaries of its service area. Through the Head Start Program Information Report, we will obtain a list of all other Early Head Start programs adjacent to this program in order to define the aggregate Early Head Start service area. Using CCR&Rs for the appropriate zip codes, we will also obtain a list of all FCC settings and community-based centers in each Early Head Start service area that serve both infants and toddlers up to 36 months old.


In each aggregate Early Head Start service area, we will purposively select about 5 Early Head Start centers, 2 community-based centers, and 10 FCC settings. We will assign random numbers to facilitate the selection of the appropriate number of settings within each service area.


We will create a second area, not serviced by Early Head Start, adjacent to each of the 10 Early Head Start service areas. The second areas will be about the same size as the aggregate Early Head Start service areas and will not include child care settings that serve Early Head Start children through a partnership with participating Early Head Start programs. To create a second area, we will start with a radius of 10 miles beyond the aggregate Early Head Start service area. We will then expand this by increments of 5 miles until we have recruited the number of classrooms needed. The second area is for data collection purposes and will be used to ensure the representativeness of our findings across different centers and FCC settings.


In the area not served by Early Head Start, we will select about 3 community-based centers and 10 FCC settings.4 After ensuring that a wide variety of community-based programs and FCC settings are available, we will use random numbers to select the number we need.


In summary, we will have selected 10 geographic locations based on the locations of our original 10 Early Head Start programs. In each location, we will have two areas of about the same size, one that is served by Early Head Start grantees and one that is not (Figure B.1). There will be about 5 Early Head Start centers, 5 community-based centers, and 20 FCC settings for each of the 10 geographic locations. This gives approximate totals of 50 Early Head Start centers, 50 community-based centers, and 200 FCC settings.


After selecting the centers, we will select classrooms. We will treat each FCC setting as a single classroom for sampling, and at each center-based program we will select a minimum of two classrooms. This will yield 100 classrooms from Early Head Start centers, 100 from community-based centers, and 200 from FCC settings, for a total of 400 observations. We will select infant and toddler classrooms before mixed-age classrooms, if possible. If more center-based classrooms meet our specifications than we need, we will assign them random numbers to aid in selecting the appropriate number. It is unlikely that many of the 200 FCC settings will be devoted solely to infants or toddlers; instead, they will probably include children of mixed ages.
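
For reference, the classroom counts implied by this design can be reproduced with a short illustrative sketch in Python (the counts are the approximate targets described above; the variable names are ours and carry no special meaning):

# Approximate design targets for the psychometric field test described above.
N_LOCATIONS = 10
EHS_CENTERS_PER_LOCATION = 5        # Early Head Start centers in the served area
COMMUNITY_CENTERS_PER_LOCATION = 5  # community-based centers (about 2 in the served area, 3 outside)
FCC_PER_LOCATION = 20               # family child care settings (about 10 in each area)
CLASSROOMS_PER_CENTER = 2           # minimum of two classrooms per center-based program
CLASSROOMS_PER_FCC = 1              # each FCC setting is treated as a single classroom

ehs_classrooms = N_LOCATIONS * EHS_CENTERS_PER_LOCATION * CLASSROOMS_PER_CENTER              # 100
community_classrooms = N_LOCATIONS * COMMUNITY_CENTERS_PER_LOCATION * CLASSROOMS_PER_CENTER  # 100
fcc_classrooms = N_LOCATIONS * FCC_PER_LOCATION * CLASSROOMS_PER_FCC                         # 200
total_classrooms = ehs_classrooms + community_classrooms + fcc_classrooms                    # 400
print(ehs_classrooms, community_classrooms, fcc_classrooms, total_classrooms)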



Figure B.1

Sampling for the Psychometric Field Test

[Figure: flow diagram of the sampling design. Each of the 10 geographic locations contains 1 area served by Early Head Start and 1 area not served by Early Head Start. The area served by Early Head Start includes approximately 5 Early Head Start centers and approximately 2 community-based center programs, each contributing at least 2 classrooms (approximately 1 infant and 1 toddler), plus approximately 10 FCCs, each contributing 1 mixed-age classroom. The area not served by Early Head Start includes approximately 3 community-based center programs, each contributing at least 2 classrooms (approximately 1 infant and 1 toddler), plus approximately 10 FCCs, each contributing 1 mixed-age classroom.]


Note: Results in 400 classrooms across 10 geographic locations. We anticipate approximately 200 FCC classrooms, 100 toddler classrooms (50 are Early Head Start, 50 are community based), and 100 infant classrooms (50 are Early Head Start, 50 are community based).



The psychometric field test will include test-retest and validation measure observation components that require further selection of classrooms within the sample. We will assign random numbers to the classrooms to assist with selecting the 60 test-retest classrooms. Validation classrooms will be selected based on the schedule of Q-CCIIT observations to maximize cost-efficiency. Classrooms could potentially be selected for both test-retest and validation observations.
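
As a concrete illustration of the random-number approach to subsampling (a sketch only; the classroom identifiers below are hypothetical, and 60 is the test-retest target noted above):

import random

def select_by_random_numbers(classroom_ids, n_needed, seed=None):
    """Assign each classroom a random number and keep the n_needed smallest.

    This is equivalent to drawing a simple random subsample of the classrooms.
    """
    rng = random.Random(seed)
    numbered = [(rng.random(), classroom_id) for classroom_id in classroom_ids]
    numbered.sort()  # order classrooms by their assigned random numbers
    return [classroom_id for _, classroom_id in numbered[:n_needed]]

# Hypothetical example: select 60 test-retest classrooms from the 400 field test classrooms.
field_test_classrooms = [f"classroom_{i:03d}" for i in range(1, 401)]
test_retest_sample = select_by_random_numbers(field_test_classrooms, n_needed=60, seed=2011)
print(len(test_retest_sample))  # 60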


B.2. Procedures for the Collection of Information

Sampling and estimation procedures

Estimation procedure. Because the sampling for this project will be done almost entirely in a purposive, as opposed to probabilistic, manner, no weights will be constructed to account for the probability of selection. However, we do plan to create weighting adjustments to account for nonresponse at the various stages of data collection.
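
As an illustration only, one common form of such an adjustment is a weighting-class factor computed as the ratio of eligible cases to responding cases within each adjustment cell; the sketch below assumes cells defined by setting type, which is our assumption rather than a specification of the project:

from collections import defaultdict

def nonresponse_adjustment_factors(cases):
    """Compute an adjustment factor per cell as eligible cases / responding cases.

    `cases` is a list of dicts with keys 'cell' (e.g., setting type), 'eligible' (bool),
    and 'responded' (bool). Respondents in a cell are weighted up to represent the
    eligible nonrespondents in the same cell.
    """
    eligible = defaultdict(int)
    responded = defaultdict(int)
    for case in cases:
        if case["eligible"]:
            eligible[case["cell"]] += 1
            if case["responded"]:
                responded[case["cell"]] += 1
    return {cell: eligible[cell] / responded[cell]
            for cell in eligible if responded[cell] > 0}

# Hypothetical example: 10 eligible FCC caregivers with 8 respondents yields a factor of 1.25.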


Degree of accuracy needed for the purpose described in the justification. As described in A.16, analyses with the psychometric field test data will involve calculation of reliability and validity estimates for all classrooms as well as subgroups of classrooms based on ages served.
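
As one hypothetical illustration of such estimates (the specific statistics to be used are not stated here, and the scores below are invented), test-retest reliability and concurrent validity can each be summarized with a correlation coefficient:

import numpy as np

# Invented Q-CCIIT scores for the same classrooms observed on two occasions (test-retest)
# and scores from an established validation measure observed concurrently.
time1 = np.array([3.2, 4.1, 2.8, 3.9, 4.5, 3.0])
time2 = np.array([3.4, 4.0, 2.9, 3.7, 4.6, 3.1])
validation = np.array([3.0, 4.2, 2.5, 3.8, 4.4, 2.9])

test_retest_r = np.corrcoef(time1, time2)[0, 1]      # stability across observation occasions
concurrent_r = np.corrcoef(time1, validation)[0, 1]  # association with the validation measure
print(round(test_retest_r, 2), round(concurrent_r, 2))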


Data collection procedures

As noted previously, we propose to collect information using multiple methods, including classroom observations and self-administered questionnaires (SAQs) with caregivers. Below is a brief description of each Q-CCIIT project instrument.


Child care setting recruitment form. To start recruitment, we will send an advance letter to selected settings, and a Mathematica site coordinator will then call the setting to review the basic topics discussed in the letter, including data collection activities and tokens of appreciation, and to respond to questions. As part of this initial call, the Mathematica site coordinator will collect information on the number of eligible classrooms and identify a setting point person (SPP) who will later provide more specific information on eligible classrooms affiliated with the setting. For settings that are eligible and agree to participate, Mathematica site coordinators will call SPPs to obtain classroom child rosters5 and to update information to assist in completing a child care setting recruitment form. Site coordinators will collect the age ranges of each classroom and, in some cases, the birthdates of the children from birth to 36 months of age in each of the eligible classrooms. We will also collect the languages spoken in the classroom, as well as staffing and classroom schedule information. We will use this information to determine when data collectors will visit the classroom and whether data collectors need to be bilingual. We estimate that gathering this information prior to the visit will take 30 minutes per setting.


Q-CCIIT observation measure. In both the pilot and the field test, we will conduct classroom observations for 3 hours in the morning. In general, the observations will not require anything from participants in the project and thus will not impose a time burden. The Q-CCIIT measure will include a request to observe a short group activity (such as shared book time, for caregivers who already do such activities), which will take less than 10 minutes of the 3 hours. It will conclude with one follow-up question on how typical the day was. We expect that 100 percent of eligible classrooms within participating settings will participate.


Caregiver background questionnaire. After the observation, caregivers who spend more than four hours a day in the classroom will be asked to complete a questionnaire on characteristics that could account for variation in observed interactions. The Q-CCIIT observer will provide a paper-and-pencil instrument to the caregiver, as well as an envelope for returning it to the observer when completed. The questionnaire will take 20 minutes, and we expect that 100 percent of caregivers will complete it.


B.3. Methods for Maximizing Response Rates and Dealing with Nonresponse

The Q-CCIIT project expects to obtain a very high response rate for the caregiver questionnaires. Strategies for maximizing response follow.


Achieving a high response rate starts with obtaining a high level of cooperation. Our attractive and easy-to-read materials, as well as our relationships with the setting’s staff, will enable us to explain the project to caregivers, supporting their participation. We will distribute advance letters and fact sheets to settings and caregivers; gift cards as tokens of appreciation are another strategy for boosting participation. For his/her assistance in organizing data collection, we will give each SPP a $25 gift card to purchase materials to use with the children. Each participating caregiver will receive a $25 gift card for allowing us to conduct the observation and for completing the caregiver background questionnaire.


Despite encouraging participation through clear and attractive materials and gift cards as tokens of appreciation, we do anticipate some nonresponse. Our electronic data receipting system will enable us to track real-time response rates by instrument and respondent. Thus, we will be able to monitor whether caregivers complete instruments in a timely manner, and we intend to follow up with nonresponders while our observers are on site. Gathering caregiver background questionnaires on site will help ensure higher response rates, but for a caregiver who could not complete the form at the time of the observation and was left with materials for shipping the questionnaire, we will take follow-up measures. For example, we will instruct Q-CCIIT observers, before they leave the geographic location, to check in with caregivers from whom they have not collected forms; we will also enlist the assistance of the SPPs in pursuing SAQs, conduct follow-up telephone calls to caregivers who have not returned SAQs, and ship additional SAQs when necessary.


In summary, although we will make our best efforts to avoid nonresponse, we will also have procedures in place to convert nonresponse and maximize completion rates.


We will calculate marginal and cumulative response rates at each stage of sampling and data collection. As reflected in the American Association for Public Opinion Research industry standard for calculating response rates, the numerator of each response rate will be the number of eligible completed cases, and the denominator will be the number of eligible cases.
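
As a minimal sketch of this calculation (the stage names and counts below are hypothetical), the marginal rate at each stage is the number of eligible completed cases divided by the number of eligible cases, and a cumulative rate can be formed as the product of the marginal rates across stages:

def marginal_response_rate(eligible_completed, eligible):
    """Response rate at one stage: eligible completed cases / eligible cases."""
    return eligible_completed / eligible

def cumulative_response_rate(stage_rates):
    """Cumulative rate across stages, formed as the product of the marginal rates."""
    result = 1.0
    for rate in stage_rates:
        result *= rate
    return result

# Hypothetical example with a setting-level stage and a caregiver-level stage.
setting_rate = marginal_response_rate(eligible_completed=45, eligible=50)    # 0.90
caregiver_rate = marginal_response_rate(eligible_completed=76, eligible=80)  # 0.95
print(cumulative_response_rate([setting_rate, caregiver_rate]))              # ~0.855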

B.4. Test of Procedures or Methods to Be Undertaken

We developed the Q-CCIIT observational measure iteratively during a pretest phase in the late spring of 2011. Throughout this phase, we refined observation items and procedures and developed new items as needed. The pretest involved conducting classroom observations (no questionnaires were administered), with the Q-CCIIT measure evolving as the observations proceeded.

B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

This team is led by Dr. Amy Madigan of OPRE/ACF/DHHS, federal project officer, and, from Mathematica, Dr. Louisa Tarullo, project director; Dr. Sally Atkins-Burnett, principal investigator; and Dr. Shannon Monahan, survey director. The plans for statistical analyses for this project were developed by Mathematica, with support from Dr. Margaret Burchinal, FPG Child Development Institute, University of North Carolina. Additional Mathematica staff consulted on statistical issues include Ms. Barbara Carlson, senior statistician.

3 For sampling purposes, an infant classroom will be defined as one where children are younger than 15 months. A toddler classroom will be defined as one with children between 15 months and 36 months. This cut-point matches professional organizations' use of age ranges to define recommended group size and child-to-staff ratios.

4 The 20 total FCC settings might not be equally spread between the Early Head Start service area and the area not served by Early Head Start.

5 Because some settings could have concerns about providing personal identifying information, Mathematica site coordinators will be prepared to work with SPPs to use alternate means of creating identifiers, such as recording children’s initials rather than their names on rosters.
