NonSub Change Memo - ECE ICHQ TUS Fall Fielding

Assessing the Implementation and Cost of High Quality Early Care and Education

OMB: 0970-0499


To: Jordan Cohen

Office of Information and Regulatory Affairs (OIRA)

Office of Management and Budget (OMB)


From: Meryl Barofsky

Office of Planning, Research and Evaluation (OPRE)

Administration for Children and Families (ACF)


Date: September 14, 2021


Subject: NonSubstantive Change Request – Assessing the Implementation and Cost of High Quality Early Care and Education: Comparative Multi-Case Study (OMB: 0970-0499)



This memo requests approval of nonsubstantive changes to the approved information collection, Assessing the Implementation and Cost of High Quality Early Care and Education: Comparative Multi-Case Study (OMB: 0970-0499). Specifically, we are seeking approval for (1) nonsubstantive changes to our time use survey and (2) nonsubstantive changes to our currently approved experiments examining the timing of a token of appreciation.


Background on project

The goals of the Assessing the Implementation and Cost of High Quality Early Care and Education field test are to (1) refine the implementation measures to further improve their psychometric properties, (2) refine the cost measures to identify costs associated with each center function and across ages of children, and (3) test the associations between implementation, cost, and quality measures. In light of the COVID-19 pandemic, the field test data collection originally scheduled for 2020 could not proceed as planned.


We received approval for a nonsubstantive change request in March 2021 to conduct data collection in centers that had previously participated in the study. In April 2021, we received approval to engage additional centers in the field test (up to the approved 80 centers under this control number—some that previously participated in earlier phases and some new centers). Combined, these two requests were in line with the plans for the original field test. We recently had a revised package approved that requested the addition of a teaching staff survey (the SEQUAL); that package included a token of appreciation experiment. We also received approval in May 2021 for a token of appreciation experiment for the time use survey.


We are writing to request approval of non-substantive changes to the time use survey and to revise the approach to the token of appreciation experiments, while maintaining the approved level of token of appreciation.


Background for this request

We discovered issues with the quality of data coming from the time use survey (TUS) in the field test. The TUS was revised substantially between Phase 2 and the field test, including the addition of COVID-related questions. The data quality issues stemmed in part from the pace of preparing for the field test (the current contract could not be extended beyond its period of performance), which prevented a pre-test, and in part from the change in mode from a hard copy delivered in person by field staff (as in the prior phase) to a web-based instrument.

We began our checks of the TUS data after about a third of our potential sample had completed the survey. We found extensive double-counting of hours across activities, with hours summing to far more than the time worked in a typical week. This is a major concern because it compromises our ability to allocate staff labor hours appropriately to the different key functions. We cannot use the data as collected.
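The consistency check described above can be sketched as follows. This is an illustrative example with a hypothetical record layout (the actual TUS fields differ): it flags respondents whose reported activity hours sum to more than the hours they report working in a typical week—the double-counting pattern observed in the original TUS.

```python
# Illustrative data-quality check; field names are hypothetical.
def flag_overcounted(responses, tolerance=0.0):
    """Return IDs of respondents whose activity hours exceed total weekly hours."""
    flagged = []
    for r in responses:
        activity_total = sum(r["activity_hours"].values())
        if activity_total > r["typical_week_hours"] + tolerance:
            flagged.append(r["respondent_id"])
    return flagged

# Example: respondent A2 reports 55 activity hours against a 40-hour
# typical week and is flagged for follow-up.
sample = [
    {"respondent_id": "A1", "typical_week_hours": 40,
     "activity_hours": {"instruction": 25, "planning": 8, "admin": 5}},
    {"respondent_id": "A2", "typical_week_hours": 40,
     "activity_hours": {"instruction": 30, "planning": 15, "admin": 10}},
]
print(flag_overcounted(sample))  # ['A2']
```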

Data from the TUS is essential to creating the ICHQ cost measures by key function. Time use allows us to allocate labor hours—the largest resource category—across the five key functions of a center. Understanding the variation in costs across functions is vital to using ICHQ measures to shed light on efficient, and potentially different, pathways to quality.


We propose to administer a revised version of the TUS with the entire field test sample.


  • Proposed Revisions: The structural problem with the TUS affected the majority of the questions in the survey. We have created a corrected version that will be shorter to administer overall and take less time per respondent than individual follow-up to clarify responses on the original TUS. This version is also fully compatible with mobile devices.



  • Why the Full Field Test Sample: Fielding the corrected instrument with only non-respondents would give us an incomplete and likely biased picture of time use. Specifically, the TUS was fielded on a staggered basis as centers were recruited; centers with mixed funding and smaller centers completed the recruitment process faster than centers in other funding categories and larger centers. Losing the bulk of our sample, particularly for one of the key funding subgroups, would prevent us from doing subgroup analyses to explore patterns in staff time use across centers with different funding sources, a key lever through which the government can support quality. The sample was also released for recruitment by state; if we do not go back to all respondents, the remaining sample would consist primarily of centers in just 2 of the 4 states.


Not having the time use data for the full sample will also limit our ability to explore whether patterns in time use seen in the prior Phase 2 hold in a larger sample, and how time use varies across the full sample and for different staff roles across centers with different characteristics. It will also prevent us from producing the full range of cost measures for all 80 centers in the field test, limit our statistical power to detect significant differences in the means of cost measures between subgroups of centers, and limit our ability to examine associations between cost, implementation, and quality for the full sample and whether those associations vary by center characteristics.


We also propose to administer the revised TUS and the SEQUAL at the same time to all eligible center staff, combine the tokens of appreciation, and revise the approved experiments examining the timing of the token of appreciation.

  • Joint Administration: Since we are going out into the field to collect responses on the SEQUAL from the same respondents to the TUS, it is least burdensome to collect the TUS and the SEQUAL information at the same time.

  • Revised Experiment Plan: The original two token of appreciation experiments were planned to be separate because we expected to finish the TUS data collection in August/September and administer the SEQUAL beginning in October. Since it will no longer be viable to administer them as distinct surveys and we need to combine the fielding of the two surveys, we also need to revisit the structure of the participant tokens of appreciation and the experiment.


Although we had quality issues with the original TUS, we are able to make some statements about the pre- and post-pay experiment. The findings from the TUS pre- and post-pay experiment based on the first part of the sample are conclusive and helpful in identifying an effective approach to producing high response rates among center staff. We see a significantly higher response rate among respondents who received the pre-paid payment than among those who did not (81 percent compared to 61 percent) (Exhibit 1). We do not see differences in the timing to complete the surveys between the two groups (about 11 days, on average).

Exhibit 1. Preliminary and partial findings from the token of appreciation experiment, among surveys released before July 2021

Group                        | Response rate^a             | Days to complete^b
                             | Number of staff | Percentage | Number of staff | Min | Max | Mean  | Median | SD
$10 pre-pay and $10 post-pay | 68              | 80.9%      | 55              | 2   | 35  | 10.71 | 7      | 7.98
$20 post-pay                 | 117             | 60.7%      | 71              | 1   | 31  | 10.72 | 8      | 8.50

Note: Includes 26 field test centers for which the time use survey had been released more than 1 week prior to the survey pause. One field test center would not offer staff payments. Phase 3 centers (8 centers) are not included in the experiment. Excludes surveys that were released on July 13 and July 16, the week before the time use survey was paused. Respondents in these 8 centers had only a few days to complete the survey, and staff in the pre-payment group were unlikely to have received their pre-payments from their director.

^a Includes all staff in an experiment group for which the time use survey has been released. We compared the proportion of staff who completed the survey in each group and found that response rates differ across the two groups.

^b Includes all staff in an experiment group who have completed the time use survey. We compared the mean days to complete the survey for each group and found the means did not differ across the two groups.

SD = standard deviation.
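The response-rate comparison in Exhibit 1 can be reproduced with a standard two-proportion z-test (a common choice for this comparison; the memo does not state which test the study team used). Completer counts are taken from the days-to-complete columns: 55 of 68 staff in the pre-pay group versus 71 of 117 in the post-pay group.

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return p1, p2, z, p_value

p1, p2, z, p = two_proportion_z(55, 68, 71, 117)
print(f"pre-pay rate {p1:.1%}, post-pay rate {p2:.1%}, z={z:.2f}, p={p:.4f}")
```

The computed rates (80.9% and 60.7%) match the exhibit, and the difference is significant at conventional levels, consistent with the memo's conclusion.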



Studies in ECE increasingly incorporate staff surveys to understand center functioning and quality, but little is known about whether the token of appreciation literature, which points to the effectiveness of pre-payments, holds for staff, and specifically teaching staff, in center-based settings. Continuing to test different payment amounts with center staff will make a substantial contribution to how the field can efficiently and effectively conduct surveys in center-based settings.


Overview of Requested Changes

Revisions to the time use survey. We made the following revisions to the time use survey to improve data quality to support the analysis goals:

  • Dropped the COVID questions that were interrupting the flow of questions and adding to cognitive load

  • Dropped the “other” category that was not adding new information

  • Added running sums to decrease cognitive difficulty in summing time for a typical week across activities

  • Streamlined range checks to improve accuracy and quality of the hours reported

  • Streamlined to three small grids to reduce burden and cognitive difficulty for respondents and increase accessibility on a small screen (such as a mobile device)

  • Created a Yes/No question to ask if any time is spent on each periodic activity, followed by a numeric question (hours/year) to decrease respondent burden

The revised time use survey is included with this request.
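The gating and running-sum revisions in the bullets above can be sketched as follows. This is an illustrative Python sketch under assumed field names; the production instrument is a web survey, not this code. A Yes/No gate skips the hours question for periodic activities with no time spent, and a running sum is displayed so the respondent can reconcile entries against a typical work week.

```python
# Illustrative sketch of the revised TUS logic; names are hypothetical.
def running_sum_prompts(entries):
    """entries: list of (activity, hours). Returns display prompts and the total."""
    total, prompts = 0.0, []
    for activity, hours in entries:
        total += hours
        prompts.append(f"{activity}: {hours} h (running total: {total} h)")
    return prompts, total

def periodic_activity_hours(ask_yes_no, ask_hours, activity):
    """Yes/No gate first; ask hours/year only if any time is spent."""
    if not ask_yes_no(f"Any time spent on {activity} this year?"):
        return 0.0
    return ask_hours(f"About how many hours per year on {activity}?")

prompts, total = running_sum_prompts([("instruction", 25), ("planning", 8)])
print(prompts[-1])  # planning: 8 h (running total: 33.0 h)
```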

Token of appreciation experiment. Based on the preliminary results of the TUS participant pre- and post-pay experiment, we recommend combining the approved $20 total amount for the time use survey and the $30 total amount for the SEQUAL teaching staff survey for a total amount of $50 for the combined instruments. This could help us achieve good response rates while testing variations (and cost effectiveness) in the amount of pre-payments. We will present the time use survey and SEQUAL as a single task in the respondent invitations. The experiment will compare a low pre-payment with a 50% pre-payment. Centers will be randomly assigned to receive (1) a $10 pre- and a $40 post-payment or (2) a $25 pre- and $25 post-payment.

This approach does increase the amount provided to individuals being asked to complete the TUS a second time. The additional amount is justified by the increased individual burden of being asked to complete the survey a second time.
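The center-level random assignment described above—each center assigned to either a $10 pre/$40 post arm or a $25 pre/$25 post arm—could be sketched as below. This is illustrative only; the study's actual randomization procedure (stratification, seed handling) is not specified in this memo.

```python
import random

# Two payment arms: (pre-payment, post-payment), $50 total in each.
ARMS = {"low_prepay": (10, 40), "half_prepay": (25, 25)}

def assign_centers(center_ids, seed=0):
    """Randomly split centers evenly across the two arms (reproducible via seed)."""
    rng = random.Random(seed)
    ids = list(center_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {**{c: "low_prepay" for c in ids[:half]},
            **{c: "half_prepay" for c in ids[half:]}}

assignment = assign_centers([f"center_{i:02d}" for i in range(80)])
counts = {arm: sum(1 for a in assignment.values() if a == arm) for arm in ARMS}
print(counts)  # {'low_prepay': 40, 'half_prepay': 40}
```

Both arms total $50 per respondent, so the comparison isolates the effect of the pre-payment share rather than the overall amount.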

Revisions to other instruments and materials. We revised the center re-engagement script (Instrument 7) and the survey outreach materials (Attachment F) to align with the plan to administer the time use survey and SEQUAL teaching staff surveys at the same time. Updated versions are included with this request.


