
2010 Dress Rehearsal of the Re-engineered Survey of Income and Program Participation

Responses to OMB Questions (Attachment B)

OMB: 0607-0957



Summary of 2010 Research Questions and Analysis Plans



RESEARCH GOALS: LABEL

RESEARCH GOALS: EXPANDED DESCRIPTION

HOW WILL WE ASSESS? POSSIBLE ANALYSIS METHODS

Processing

Evaluate and assist the further development of the data processing and associated systems.

There are no special implications of the 2010 field test for this research goal – the test simply supplies the raw material to push through the new system, which will identify needed refinements.

RO “Training” / Experience

This is not a true research question, but rather a statement of a desired outcome that represents an important goal of the 2010 test. We want all Regional Offices (ROs), including those not directly involved in the field test, to have a chance to learn from the experience, and to begin to understand the nature of the re-engineered SIPP program.

Non-participating RO staff have been involved in a recent FLD-sponsored conference that (a) described the results of initial testing, (b) introduced the major design features of the 2010 test, and (c) briefly outlined some initial thinking about future research directions. We are also contemplating a post-field-test debriefing with RO staff, which would also include observers from non-participating ROs.

Costs

A major motivator for the re-engineering effort was the need to reduce costs. Based on the 2010 test, what are the estimated cost savings of the re-engineered SIPP program compared to the current/traditional SIPP design?

We have established separate project codes for the field test, and will implement new procedures to ensure that those codes are recorded correctly. Field test costs can be extrapolated to a full production design, and those costs compared to the current program.
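As a rough illustration only, the extrapolation could take a form like the sketch below. All figures, variable names, and the per-case scaling assumption are hypothetical and are not taken from the document; they simply show the arithmetic of scaling field test costs to a production caseload and comparing against the current program.

# Hypothetical sketch of the cost extrapolation described above (Python).
# All figures and variable names are illustrative assumptions, not actual SIPP costs.

def extrapolate_cost(field_test_cost, field_test_cases, production_cases):
    """Scale the observed cost per completed field test case to a full production caseload."""
    cost_per_case = field_test_cost / field_test_cases
    return cost_per_case * production_cases

# Illustrative inputs (assumptions).
field_test_cost = 1_500_000.0         # total cost recorded under the field test project codes
field_test_cases = 8_000              # completed field test cases
production_cases = 45_000             # cases in a hypothetical full production design
current_program_cost = 12_000_000.0   # cost of the current/traditional SIPP design

projected = extrapolate_cost(field_test_cost, field_test_cases, production_cases)
print(f"Projected re-engineered cost: ${projected:,.0f}")
print(f"Estimated savings vs. current design: ${current_program_cost - projected:,.0f}")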

FR Training

Was training effective, and was it effective for all levels of FR experience (new hires, non-SIPP-experienced, SIPP-experienced)? We will focus particularly on new skills required of the FRs, namely their ability to

- administer the new instrument skillfully

- administer the EHC’s “landmark events” procedures appropriately

- recognize when Rs are having trouble

- probe/assist appropriately

- see and explore possible connections across domains to assist recall

- encourage Rs to think.


We also want to evaluate a new training procedure: a more formal system of practice interviews for FRs who must be trained earlier than is optimal.

We have developed a multi-faceted approach to this key research goal, including the following:

- daily quizzes and a final “certification” exam

- interview recordings with which to evaluate FRs’ adherence to EHC (and other) practices

- assessment of instrument “markers,” such as time stamps, item nonresponse rates, “straightlining,” and possibly others (a minimal computation sketch for these markers follows this list)

- a post-training assessment by trainers

- intensive interview observation, including both observers’ written comments and a special form for recording observation “data”

- FR debriefings – both a set of specific questions to be answered for each completed interview and post-field-period focus groups
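As one illustration of the instrument “markers” listed above, the sketch below computes an item nonresponse rate and a simple straightlining flag from hypothetical per-interview response records. The data layout, field names, and nonresponse codes are assumptions made for illustration only; they are not the Blaise instrument's actual output.

# Minimal sketch of instrument "marker" calculations from hypothetical interview data (Python).
# The data layout and the "DK"/"REF" nonresponse codes are illustrative assumptions.

from typing import Dict, List

NONRESPONSE_CODES = {"DK", "REF"}  # assumed codes for "don't know" and refusal

def item_nonresponse_rate(responses: Dict[str, str]) -> float:
    """Share of asked items answered with a nonresponse code."""
    if not responses:
        return 0.0
    missing = sum(1 for answer in responses.values() if answer in NONRESPONSE_CODES)
    return missing / len(responses)

def is_straightlined(battery: List[str], threshold: float = 0.9) -> bool:
    """Flag a battery of items when a single answer dominates (a crude straightlining check)."""
    if not battery:
        return False
    top_share = max(battery.count(answer) for answer in set(battery)) / len(battery)
    return top_share >= threshold

# Illustrative usage with made-up answers.
interview = {"Q1": "1", "Q2": "DK", "Q3": "2", "Q4": "REF"}
attitude_battery = ["3", "3", "3", "3", "3", "3"]
print(item_nonresponse_rate(interview))    # 0.5 (2 of 4 items are nonresponse)
print(is_straightlined(attitude_battery))  # True (every answer identical)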

Field Support Materials

Were the “support” materials – the advance letter, the SIPP brochures, the calendar aid – effective?

The instrument includes a very brief set of post-interview respondent debriefing questions focused on these materials.


The calendar aid, in particular, will also be evaluated in the course of reviewing recorded interviews, in the observers’ reports, and in the FR debriefing focus groups.

Instrument Mechanics

Did the instrument work as intended? What “bugs” were encountered? How should those bugs be repaired?

We will use the Blaise instrument’s capacity to record item-level notes in order to capture information on instrument problems. We will focus FRs’ attention on this research goal in training, so that they understand that an important part of their job is taking the necessary time to record this information.


Problems in the EHC component of the instrument will be captured in a post-interview set of FR debriefing questions directed specifically at this issue.

Instrument Usability by FRs

Were FRs able to navigate the instrument smoothly and effectively? Were they able to easily access the special features of the EHC interview (e.g., checking for parallel events in other domains)?

Pretesting throughout the development process, including that carried out by FRs and other RO staff, has already identified many such issues, some of which are already repaired, others of which are in the repair queue.


The methods designed to identify instrument bugs [see above] may also serve to capture usability problems.


Usability can also be addressed through several already-mentioned methods, such as the observations, the analysis of recorded interviews, and the FR debriefings.

Interview “Process” Issues

- How did FRs elicit landmark events, and how did Rs report them? What instrument cues were most effective? What landmarks are most effective? How could the instrument and/or training be improved to elicit more effective landmarks?

- Did FRs make effective use of landmark events and cross-domain information to assist R recall?

- Did the interview “flow” well within a household, from one R to the next?

- Did the interview keep Rs reasonably engaged and interested?

- Did the interview present questions to Rs that were obviously inappropriate? How did FRs handle those situations?

- How long was the interview? How did Rs react to the length? Were there particular sections of the interview that seemed unnecessarily long?

- Is there something about particular question wording or question ordering that causes problems?

- Are flashcards used appropriately? (Which ones? How often?) How important are they in assisting response?

The primary tool for evaluating interview process issues is the careful evaluation of the recorded interviews; observer feedback will also play a major role.

Data Quality

Compared to standard SIPP, are the EHC data of at least equivalent quality?

We will compare estimates from the EHC field test sample with estimates for the CY 2009 time period derived from a comparable set of SIPP 2008 panel respondents.


We are also focusing the 2010 sample in states where we have good prospects for obtaining administrative records for selected need-based programs and other characteristics, and are seeking information about records accessibility in all states included in the 2010 sample. If successful, we can use the records to objectively assess the relative quality of the EHC data and standard SIPP data covering CY 2009 from the same states and the same sample strata.
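As a rough illustration of the kind of comparison described above, the following sketch contrasts a program participation rate estimated from the two samples using a simple two-proportion test. The counts and rates are hypothetical, and a production analysis would use survey weights and design-based variance estimates rather than these simple random sampling formulas.

# Hedged sketch: compare a CY 2009 participation rate estimated from the EHC field
# test sample against a comparable set of SIPP 2008 panel respondents (Python).
# All counts and rates below are made-up assumptions.

import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative inputs (assumptions).
ehc_rate, ehc_n = 0.14, 7_500      # EHC field test estimate and sample size
sipp_rate, sipp_n = 0.12, 9_000    # SIPP 2008 panel estimate for the same period

z = two_proportion_z(ehc_rate, ehc_n, sipp_rate, sipp_n)
print(f"Difference: {ehc_rate - sipp_rate:+.3f} (z = {z:.2f})")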




