Impact Study: High School Instruction with Problem-Based Economics

OMB: 1850-0825


High School Instruction: Problem-Based Economics


CONTRACT NO. ED-06-CO-0014







Request for OMB Approval



Revised Submittal: January 12, 2007







Submitted by:


PART A. JUSTIFICATION


A1. Circumstances Making Collection of Information Necessary

Importance of Study


Today's high school reform agenda calls for two characteristics that could transform the curriculum: rigor and relevance. Rigor means challenging curricula that lead students to a deep understanding of important ideas. Relevance means seeing how this knowledge applies to real life. Together, these factors can engage students in learning that produces both academic achievement and transferable study skills. Problem-based instruction is one approach designed to place learning in the context of the real world. In problem-based methods,

  • students confront a realistic dilemma that, through analysis, investigation, research, and discussion, allows for more than one possible solution;

  • students seek knowledge that is essential to understanding and solving the problem; and

  • students become intrigued by the problem they are addressing, and motivated to learn the standards-based content.

A problem-based approach to curriculum is frequently a defined component of current high school reform models (Expeditionary Learning Outward Bound, 1999; Honey & Henríquez, 1996; Newmann & Wehlage, 1995); however, teachers and schools often have difficulty incorporating problem-based teaching into daily classroom instruction (Hendrie, 2003).

One promising approach to problem-based instruction has been developed by the Buck Institute for Education (BIE). The BIE has partnered with university economists and expert teachers to create a well-defined Problem-Based Economics (PBE) curriculum. Units lasting 4-15 days provide clear instructions for covering core content. The curriculum is introduced with a two-day workshop led by expert teachers who have used the materials in classrooms. While BIE has developed curriculum units, with accompanying teacher training, in several domains of government and social studies, the most fully developed and tested are the economics units.

The following description of the problem-based approach illustrates how it is different from the typical direct instruction approach found in most economics classrooms:

“…These units, which can take from one day to three weeks to complete, scaffold and, to some degree, constrain teacher and student behavior. Each unit contains seven interrelated phases: entry, problem framing, knowledge inventory, problem research and resources, problem twist, problem log, problem exit and problem debriefing. Student groups generally move through the phases in the order indicated, but may return to a previous phase or linger for a while in a phase as they consider a particularly difficult part of the problem. The teacher takes a facilitative role, answering questions, moving groups along, monitoring positive and negative behavior, and watching for opportunities to direct students to specific resources or to provide clarifying explanations. In this version of problem-based learning, students do not learn entirely on their own; teachers still “teach,” but the timing and the extent of their instructional interventions differ from those used in traditional approaches. Problem-based learning teachers wait for teachable moments before intervening or providing needed content explanations, such as when students want to understand specific content or recognize that they must learn something.” (Mergendoller, Maxwell and Bellisimo, in press, p. 1)

Economics has been the focus of attention because of the opportunity to improve instruction in a course that is often required but poorly taught. In general, high school economics courses do not help students understand our economic system and the relationships between supply and demand, consumers and producers, and the workings of world trade (NCEE, 1999). In addition, most teachers are not prepared to teach economics and are discouraged by their teaching experiences because good instructional materials are not available and professional development is scanty at best. Identifying a reliable and valid solution to this problem is of great value nationally. Thirty states require student testing in economics or intend to by 2006; 33 require standards to be implemented (NCEE, 2003). NAEP will test economics in 2006.

The BIE economics curriculum has been developed to respond to standards developed by the National Council for Economics Education (NCEE) and is supported by professional development for teachers teaching the curriculum. BIE has partnered with the Centers for Economic Education, affiliated with NCEE, to disseminate the curriculum.


Several studies have gathered evidence that BIE's problem-based economics curriculum is beneficial for diverse students (Mo & Choi, 2003; Ravitz & Mergendoller, 2005; Moeller, 2005). Previous research indicates the curriculum is effective with both low- and high-achieving students and that its specific practices are correlated with better student retention of core concepts (Ravitz & Mergendoller, 2005; Moeller, 2005).

Specifically, one quasi-experimental study included 15 teachers and 1,162 students who provided data consisting of a) student and teacher background surveys; b) student and teacher checklists of practices used and their helpfulness; and c) pre-, post-, and final (delayed post) content tests (Ravitz & Mergendoller, 2005). The study related the background characteristics of the teachers and students to learning outcomes. Overall, the largest gains in learning were seen among students who reported low prior achievement (reported effect size of .5), while high prior-achieving students also outperformed expectations. This suggests an overall curvilinear relationship between prior achievement and learning in problem-based instruction. Specific problem-based practices were associated with long-term learning gains, while other, more traditional or non-problem-based practices were associated only with short-term learning.

In another quasi-experimental study, which used data from 252 economics students at 5 high schools and controlled for individual characteristics, the problem-based economics approach increased learning of macroeconomics, especially when instructors were well trained (reported effect size of .54) (Maxwell, Mergendoller, & Bellisimo, 2005).

In each of these studies, implementation varied based in part on the experience of the economics teachers. As a result, Moeller (2005) specifically examined the factors that influence the implementation of the PBE curriculum. The study documented teachers' and students' responses to the units and the challenges that teachers faced when implementing the PBE curriculum in their classrooms. The results of this research have been used to inform the development of professional development approaches that better support teachers in their efforts to integrate problem-based learning into their economics curriculum.


The approach proposed in this study builds on this body of work. The research team and program developers have developed an implementation approach that provides not only base instruction through the summer professional development program, but also ongoing support during the instructional program of the next two semesters. The combination affords us the opportunity to understand the impact of the intervention on students in the semester immediately following the summer professional development. As important, however, is our ability to measure student outcomes that have benefited from a semester of practice and professional support. Student outcomes measured at the end of the spring 2008 semester will have been influenced by teachers' experience with the approach and curriculum, an expectation consistent with earlier work on PBE.


This study is designed to test the efficacy of BIE’s economics curriculum on student learning of economic content and problem-solving skills. Student achievement outcomes are of primary importance. These outcomes are mediated by changes in teacher knowledge and pedagogical practice. Research questions are noted below in temporal order:


  • Teacher Outcomes

  1. Does Problem-Based Economics (PBE) change teacher content knowledge of economics?

  2. Does the use of PBE change the confidence of teachers to teach economics?

  3. Does the use of PBE change the enthusiasm or desire of teachers to teach economics?

  4. Does use of PBE change economics teachers’ instructional practices that are used in the classroom?


  • Student Outcomes

  1. Does PBE change students’ motivation to learn economics?

  2. Does PBE change students’ content knowledge in economics?

  3. Does PBE change students’ problem-solving skills in economics?

A2. Purpose and Uses of the Data


In order to answer the above research questions, this study will implement a randomized controlled trial (RCT) of a social studies curriculum that uses problem-based instructional approaches to teach high school economics. The detailed research design, data collection procedures and timeline, and data analysis plans are presented below.

Research Design

The goal of this study is to evaluate the effectiveness of PBE using a cluster-randomized trial design. Teachers serve as the unit of randomization and students, the primary unit of observation, are nested within teachers. Such designs are used because of their feasibility, cost-effectiveness, and usefulness when the risk of contamination between teachers within the same school is thought to be low. Although cluster-randomized designs maintain the inherent internal validity that randomization imparts and are consistent with how programs, curricula, and practices are typically delivered in educational settings, such designs provide less statistical precision of impact estimates than those based on individual randomization. To achieve an acceptable margin of error, sample sizes in studies that use cluster random assignment must be larger than is the case with studies that use individual randomization.
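To make these precision considerations concrete, the minimum detectable effect size (MDES) for a two-level cluster-randomized design is commonly approximated with the formula below. This is an illustrative sketch rather than the study's own power analysis; the intraclass correlation ρ and covariate R² terms are assumptions introduced here for exposition.

```latex
% MDES for a two-level cluster-randomized trial (Bloom-style formula).
% J = number of teachers (clusters), n = students per teacher,
% P = proportion of clusters treated, rho = intraclass correlation,
% R1^2 / R2^2 = variance explained by student- / teacher-level covariates,
% M_{J-2} is about 2.8 for alpha = .05 (two-tailed) and 80% power, large df.
\[
\mathrm{MDES} \;=\; M_{J-2}\,
\sqrt{\frac{\rho\,(1-R_2^2)}{P(1-P)\,J} \;+\; \frac{(1-\rho)\,(1-R_1^2)}{P(1-P)\,J\,n}}
\]
```

For example, with J = 120 teachers, n = 40 students per teacher, P = .5, no covariates, and an assumed ρ of .15, the formula gives MDES ≈ 2.8 × √(.0050 + .0002) ≈ 0.20, consistent with the 0.15-0.22 student-level range reported in Table 2 below (the lower end presumably reflecting the precision gained from pretest covariates).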

Teachers will be randomly assigned to the treatment or control condition and will remain in their assigned treatment/control condition until the conclusion of the study. We expect there to be between two and four economics teachers per school. Given the pedagogical changes that are required to ensure a complete implementation of the treatment, the study will be conducted over one summer and two consecutive academic semesters. During the summer, the treatment teachers will receive the professional development intervention. They will have the opportunity to teach students with the new instructional approach for two semesters while receiving additional support.

Design Overview and Timeline. Table 1, below, depicts the experimental design for teachers and students; Table 2 depicts key features of the study design. Treatment teachers will receive professional development in summer 2007; control teachers will receive a delayed treatment in summer 2008. The implementation is the classroom instructional program in economics for a single-semester course. In fall 2007, 12th grade students enrolled in fall semester high school economics classes will receive either the PBE curriculum or the typical course. The same will be true for a second group of 12th grade students who enroll in economics in spring 2008; this spring group will receive the full complement of measurement as well. Spring 2008 will be the final semester of instruction during the intervention period. As depicted in the table, treatment teachers will have had one semester to practice the instructional approach with the five curricular modules before spring 2008. One of the benefits of the design is that it allows examination of the extent to which PBE's impact on student learning increases as teachers gain a semester of experience with the curriculum.

Counterfactual. The control group represents the treatment-as-usual condition. Control group members will be exposed to the regular economics curriculum in their sites, but will be barred from participating in the treatment. It is also possible that teachers may change practices because they were assigned to the control condition. We will put in place a monitoring system to assess professional development activities, curriculum practices, and/or other "intervention-like" activities in both treatment and control conditions, both to better interpret observed program impacts, or lack thereof, and to document the treatment contrast.


Table 1. High School Instruction with Problem-Based Economics: Experimental Design

                    Spring 2007   Summer 2007   Fall 2007          Spring 2008
Teachers
  Treatment         O             PD            PBE/5 Mod, O       PBE/5 Mod, O
  Control           O             TxU           TxU, O             TxU, O

Students
  Fall Cohort
    Treatment                                   O, PBE/5 Mod, O
    Control                                     O, TxU, O
  Spring Cohort
    Treatment                                                      O, PBE/5 Mod, O
    Control                                                        O, TxU, O

O = Observations or measurement points

PD = Problem-Based Economics Professional Development

TxU = Treatment as usual

PBE/5 Mod = Delivery of the 5 modules of the Problem-Based Economics curriculum

(The fall cohort is student cohort 1; the spring cohort is student cohort 2.)


Table 2. High School Instruction with Problem-Based Economics: Study Features

Study Design: Cluster-randomized trial; multiple student cohorts with comparison

Unit of Assignment: Teachers

Sample Characteristics: 120 teachers / 4,800 students per cohort (estimated)

Statistical Power Estimates: For Type I error = .05, 80% or higher power to detect MDES of 0.15-0.22 at the student level and 0.46 at the teacher level

Implementation Begins: Summer 2007


Study Outcomes and Measurement. Teacher and student outcomes will be measured with a set of instruments that capture content gains in economics learning as well as changes in student and teacher engagement in problem-based instruction. Key outcome variables for the proposed study are summarized below.





Table 3. Key Outcome Variables

Student Outcomes

  • Scores on standardized tests of economic content knowledge (Test of Economic Literacy [TEL])

  • Scores on performance assessments of student conceptual understanding

  • Self-report of interest in economics

Teacher Outcomes

  • Reported benefits from professional development experiences

  • Self-reported changes in practices and beliefs

  • Interest in teaching economics

  • Confidence teaching different topics

  • Scores on standardized tests of economic content knowledge (Test of Economic Literacy [TEL])

Table 4 is a measurement schedule that depicts which instruments will be provided to treatment and control teachers and students. The specific content and constructs for each instrument are shown, as well as the timeline for administration. In addition, each of the instruments is described briefly in the text that follows. Copies of the instruments are included in Appendix C of this document. The central outcome measures for this study are content knowledge gains in economics for students, measured by the Test of Economic Literacy (TEL) and performance assessments developed by UCLA/CRESST. These measures were developed by organizations independent of the program developers.

In addition, a series of attitudinal measures will be given to both students and teachers to assess changes in engagement with the curriculum and satisfaction. These measures have been provided by the BIE and modified by the REL West research team.1 The absence of valid existing measures led BIE staff over many years to develop these instruments, borrowing from existing attitudinal surveys as needed during the development of new instrumentation. During the development process, the instruments were reviewed by university-based economists independent of BIE and used in several studies. REL West staff searched independently and were struck by the volume of measures available to examine individuals' interest in "the economy," but not interest in learning about the economy. In the text that follows, we note the contribution of other instruments that became the foundation of the currently proposed instruments.

The measurement plan for this study includes three broad strategies: teacher outcome and attitudinal measures; student outcome and attitudinal measures; and implementation measures. Each of these requires a different data collection protocol to ensure the data are not compromised in any way. The following data collection protocols will be followed:

    1. Teacher outcome and attitudinal measures

For convenience, data collection for teachers will be made available on-line, and communication will be by email (teachers will be given the choice to have instruments sent to them by mail if they prefer). On-line administration allows the study team to manage the administration "windows" and provide efficient reminders by email and phone about outstanding data. The primary teacher knowledge outcome measure, the TEL, will be made available through an arrangement between the REL West research team and the NCEE, which regularly administers this test on-line.

    2. Student outcome and attitudinal measures

Student measures of economics content knowledge will be administered by test proctors. For both the pre- and post-tests, the REL West research team will, by prior arrangement, work with school-level staff (counselors, principals, local proctors) to ensure the test administration follows the explicit protocols provided in the Examiner's Manual of the TEL. A checklist will establish the rules for opening the tests, verifying student identity, distributing the forms, keeping time, and collecting final documents. Upon completion of the testing period, proctors will seal the tests and mail them back to REL West for scoring.

    3. Implementation measures

A complete implementation of the PBE curriculum includes administering post-tests at the end of each curriculum unit and keeping track of whether and to what extent a series of steps were followed during the instructional program. These are noted below as the “Student End-of-Unit Post Tests” and the “Teacher End of Unit Surveys”. The classroom teacher is responsible for administering the former and completing the latter. These data will not be used as outcome measures but rather are intended to allow for formative feedback to students and reinforce the implementation strategies.



Table 4. Measurement Timeline

For each instrument, the parenthetical gives the respondents, the time needed, and the administration timeline; the bullets list the contents and constructs measured.

Teacher Background Survey (treatment and control teachers; 5-10 minutes; prior to assignment, May 2007)

  • Demographic data
  • Years teaching economics
  • Self-ratings of economic content knowledge
  • Number of college classes taken
  • Satisfaction with teaching materials and methods
  • Pedagogical practices used
  • Confidence teaching key economics concepts
  • Enthusiasm for teaching economics in the future

Teacher Test of Economic Literacy, Pre-test (treatment and control teachers; 45 minutes; prior to assignment, May 2007)

  • Content knowledge in economics

Teacher Summer Institute Evaluation (treatment teachers only; 5-10 minutes; after the institute, summer 2007)

  • Quality of the institute (6 dimensions)
  • Institute benefits

Student Background Survey (treatment and control students; 5-10 minutes; start of each semester, September 2007 and January 2008)

  • Demographic data
  • Language background and proficiency
  • Interest in different subjects
  • Peer support for learning
  • Individual academic orientation
  • Interest in economics
  • Self-rated skills

Student Test of Economic Literacy, Pre-test (treatment and control students; 45 minutes; start of each semester, September 2007 and January 2008)

  • Content knowledge in economics

Teacher End of Unit Surveys (treatment teachers only; 10 minutes; after each unit, fall and spring semesters)

  • Overall unit "dosage" (time on task)
  • Content emphasis
  • Use of benchmark lessons
  • Use of problem logs
  • Overall fidelity
  • Emphasis on economics problem solving
  • Use of debrief
  • Challenges in implementation

Student Unit Post Tests (treatment students only; 30 minutes per unit; after each unit, fall and spring semesters)

  • Unit-related content knowledge

Teacher End of Semester Survey (see footnote 2) (treatment and control teachers; 5-10 minutes; end of spring semester, June 2008)

  • Satisfaction with teaching materials and methods
  • Pedagogical practices used
  • Confidence teaching key economics concepts
  • Enthusiasm for teaching economics in the future
  • Professional development received
  • Enthusiasm and attitude toward PBE (treatment teachers only)

Teacher Test of Economic Literacy, Post-test (treatment and control teachers; 45 minutes; end of spring semester, June 2008)

  • Content knowledge in economics

Student End of Semester Survey (see footnote 3) (treatment and control students; 10 minutes; end of each semester, January 2008 and June 2008)

  • Student assessment of teaching practices / implementation
  • Peer support for learning
  • Individual academic orientation
  • Self-rated content learning and interest in economics
  • Self-rated skills
  • Rating of PBE (participants only)

Student Test of Economic Literacy, Post-test (treatment and control students; 45 minutes; end of each semester, January 2008 and June 2008)

  • Content knowledge in economics

Student Performance Assessment Tasks, CRESST (treatment and control students; 20 minutes per task; end of each semester, January 2008 and June 2008)

  • Conceptual knowledge and economic problem-solving skills

1) Teacher Background Survey


The teacher background survey is a collection of items used in prior work by BIE and includes items on teacher pedagogy developed for Teaching, Learning and Computing (1998). The survey will be given to all teachers prior to assignment to treatment or control groups. Data will be used to examine baseline differences in background characteristics between treatment and comparison groups, and to collect pre-random assignment variables to use as covariates in subsequent data analyses. A section includes standard items on teachers’ gender, age, ethnicity and other background demographic information.


The teacher background survey uses items from Ravitz and Mergendoller (2005) to assess self-ratings of economic content knowledge, the number of college classes taken, and confidence teaching economics. When used in the cited study, these items comprised a reliable six-item scale (alpha = .84), which was correlated with student learning outcomes. The items on teacher pedagogy assess prior use of PBL-related methods in economics, such as teachers' use of group work or having students work on open-ended problems (Ravitz, Becker & Wong, 2000). In the cited study, these contributed to a reliable seven-item index (alpha = .90).


2) Test of Economic Literacy


The TEL, 3rd edition, is a primary test of economic content learning, used widely in peer-reviewed studies and developed by the NCEE. The TEL is a standardized, nationally normed achievement test with parallel forms appropriate for pre-/post-testing (Walstad & Rebeck, 2001). The test has been designed for assessing basic economic concepts that are taught in high school economics courses in 11th and 12th grade. The test contains 40 items, of which 11 items are common to both forms. The TEL is designed as a timed test, requiring about 30-40 minutes for high school students. In addition to using the test for student outcome measurement (pre-post/treatment-control), the test developers recommend using the TEL for “in-service courses and workshops for current teachers” as an assessment tool (Walstad & Rebeck, 2001; p. 13).


The TEL Handbook reports an alpha of 0.89 for both Form A and Form B (Walstad & Rebeck, 2001, p. 17). The mean TEL score, standard deviation, and subgroup sample sizes are provided in Appendix D to illustrate differences in performance between students who have and have not had an economics course, across several socio-demographic characteristics. The TEL designers conclude from these data that performance on the test is responsive to economics instruction, regardless of the subgroup of students (Walstad & Rebeck, 2001; p. 29). Both versions of the test (Form A and B) have been matched for content coverage and difficulty (Walstad & Rebeck, 2001).


In this study, TEL will be used as a pre-post measure for both teachers and students under both treatment and control conditions. Form A will be administered to teachers as a pre-test in spring 2007 prior to random assignment; Form B will be administered in June 2008 at the conclusion of data collection activities. Students will receive Form A at the start of their semester, and Form B at the end. The student pre-post gains will be a content learning outcome. The teacher pre-post gains will address content learning as a result of exposure to the curriculum and professional development. Ancillary analyses focusing on the impact of PBE on student outcomes will also include teacher content knowledge gains as a control variable to see if the curriculum itself accounts for student test score gains above and beyond content knowledge gains by teachers.


3) Teacher Summer Institute Survey


An evaluation survey will be given to teachers after attending the summer institute. It will address overall satisfaction with the institute. It will ask about the overall quality of the institute, using a form developed by BIE and used in its various workshops including previously funded summer institutes. This form uses a 5-point scale to judge the quality of the institute in terms of organization, presenters, content, materials and handouts, facilities, and amenities. Teachers will identify one or two of the most beneficial aspects of the institute, choosing from learning new content, improving existing content, new teaching approaches, or improving existing teaching approaches. The evaluation survey will more generally ask about the quality of the experience and will include open-ended questions about what was most or least valuable. These data will be used for descriptive purposes and for formative feedback to the program developers.


4) Student Background Survey


This instrument includes a number of items from the Student Assessment of Learning Gains instrument that was developed at the Wisconsin Center for Educational Research. This work has proven useful in other studies comparing problem-based with traditional instruction. A version of this brief student background survey was used in a study by Mergendoller, Maxwell and Bellisimo (in press) to address interest in economics and problem-solving skills. On the pre-test survey, students will indicate their ethnicity, primary language, and gender. We expect to pilot this instrument in early spring 2007; some additional items may be added to increase the reliability of this measure.


Interest in learning economics. Most available instruments assume a basic knowledge of economics that high school students do not have (Hodgin, 1984). As a result, we designed our own instrument asking students about their interest in learning about economic issues, using the stem: "How interested are you in reading newspaper and/or magazine articles about…" followed by items describing the economic plight of various groups (e.g., economic issues faced by the poor) and two items describing general economic issues (e.g., unemployment). Students responded on a scale ranging from 1 (Very Interested) to 5 (Not Interested). BIE staff calculated scores by taking the mean response across all six items. Cronbach's alpha for the instrument was 0.80 in the cited study.


In addition, this survey uses measures from Ravitz and Mergendoller (2005) of students’ “prior academic achievement.” These include expected grade point average and post-high school plans (ordered from no plans to attending a 4-year, very academic college). In the previous study, these measures were correlated with each other (r=.39). The combined measure of prior achievement was strongly correlated (p < .001) with final exam scores at the student (r=.47), teacher (r=.67), and class levels (r=.66). Expected grade point average was a stronger predictor of final exam scores than the graduation plans question, but the index combining both these items showed even stronger correlations. Students in the top 50% on prior achievement scored substantially higher on the final exam than those in the lower 50% (Effect size = .78, p < .001).


5) Teacher End-of-Unit Surveys


These surveys will address the overall "dosage" of PBE provided by teachers to students, classifying teachers as high or low implementers. They will also address variations in implementation fidelity, for use both in tailoring professional development and as covariates for study outcomes. Previous work by Ravitz and Mergendoller (2005) indicated that some basic measures of implementation of key aspects of the units did appear to be related to student learning outcomes. These will be brief surveys showing whether teachers did, in fact, implement the unit, and whether the implementation was relatively strong or weak. Teachers will be asked to complete and email/mail survey responses within five days of completing the unit.


6) Student End-of-Unit Post Tests


The treatment teachers will give students unit-level post-tests when the tests are available as part of the complete implementation of the PBE curriculum. These descriptive data will be used to understand short-term learning as a result of the unit. We expect teachers in their second semester of implementation to have higher student end-of-unit scores than in their first semester.4


Most of these tests have been extensively piloted by BIE. Reliability indices for the units of President’s Dilemma and Great Awakening were greater than .80 (Ravitz & Mergendoller, 2005).


7) Teacher End-of-Semester Surveys


This survey will measure changes in teaching practices compared to the pre-survey. Specifically, it will measure the change in use of constructivist-oriented teaching methods (from pre-survey). It will also measure the change in satisfaction with teaching economics and interest in teaching economics again, for comparison to pre-survey. This will include the extent to which they felt they adequately taught key economic concepts.


A set of questions will address barriers to the teaching of economics (e.g., student interest and motivation, student skills). We will also ask the comparison group whether – like the treatment group – they received any professional development that improved their economics content or teaching methods. Common items in this survey and the Teacher Background Survey will allow for comparisons over time on issues of teacher pedagogical practices, attitudes and engagement in their teaching of economics.


Treatment and control teachers will complete slightly different forms. The form for treatment teachers includes additional items related to their experience using PBE.


8) Student End-of-Semester Surveys


The post-test version of the Student Background Survey re-assesses interest in economics and problem-solving skills. It also includes reflections on the experience of the semester (e.g., how Economics compared to other classes, how they liked different kinds of activities). Common items in this survey and the Student Background Survey will allow for descriptive comparisons over time on issues of attitudes and engagement in their learning of economics.


Treatment and control students will complete slightly different forms. The form for treatment students includes two additional items asking them to evaluate their experience using PBE. We expect to pilot this instrument in late spring 2007; some additional items may be added to increase the reliability of this measure.


9) Student Performance Assessment Tasks


Beyond TEL, performance tasks will be used to judge student conceptual knowledge and economic problem-solving skills. UCLA/CRESST has developed cognitive-based economics performance problems and a validated rubric for assessing conceptual knowledge and argumentation. The economics assessments are based on CRESST's extensive experimental research in model-based, cognitively sensitive assessment (e.g., Baker, 1997; Baker, Freeman, & Clayton, 1991; Baker, et al., 1996; Baker & Mayer, 1999; Niemi, 1996; O’Neil, 1999).


The rubric for scoring these tasks addresses: a) quality of conceptual understanding; b) quality of explanation - argumentation; c) misconceptions or errors; and d) use of relevant prior knowledge.

The five specific assessment tasks, aligned with each of the PBE units, were created and then piloted with over 300 students in Spring 2005. These economics performance tasks make no explicit reference to the BIE curriculum and were piloted with teachers who both did and did not use the relevant curriculum units. The assessment tasks and their common rubric were revised based on several rounds of student responses. Based on this pilot work, CRESST has indicated that the tasks will provide good evidence about the quality of student conceptual understanding in economics.

The performance tasks will be used at the end of each semester as a measure of student learning, and will not have a pre-test component. It is estimated that each task would require 15-20 minutes to complete (or 75-100 minutes to complete all five tasks for a single student). To reduce the testing burden, but to obtain a sufficient sample for each task for data analyses, five versions of the test booklet will be produced using a simple balanced incomplete block (BIB) matrix sampling design (see Table 5).


Table 5. BIB Matrix Sampling Design for the Performance Tasks

Booklet version   Position 1 block   Position 2 block
      1                  A                  B
      2                  B                  C
      3                  C                  D
      4                  D                  E
      5                  E                  A


In this design, each booklet contains two performance tasks, and each performance task appears once in either position (to take into account the order effects). The resulting test booklets will be packed in spiral order (i.e., one each of booklets 1 through 5, then 1 through 5 again, and so on). Spiraled distribution of the booklets ensures that the sample size for each booklet will be approximately equal and that these samples will be randomly equivalent. It also reduces the likelihood of students sitting near each other taking the same booklet.
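The spiraling logic is simple enough to sketch in a few lines of code. The sketch below is illustrative only (the function and variable names are ours, not part of the study materials); it shows how spiral packing keeps booklet counts balanced within a class.

```python
from collections import Counter
from itertools import cycle

# Booklet versions from Table 5: each pairs two of the five performance
# tasks (A-E), and each task appears once in each position across booklets.
BOOKLETS = {1: ("A", "B"), 2: ("B", "C"), 3: ("C", "D"),
            4: ("D", "E"), 5: ("E", "A")}

def spiral_assign(n_students):
    """Assign booklet versions in spiral order (1, 2, 3, 4, 5, 1, 2, ...),
    keeping booklet counts balanced and adjacent students on different forms."""
    versions = cycle(sorted(BOOKLETS))
    return [next(versions) for _ in range(n_students)]

# With about 40 students per teacher, each booklet goes to 8 students;
# since each task appears in 2 of the 5 booklets, each task is taken by
# about 16 students per teacher, matching the estimate in the next paragraph.
assignments = spiral_assign(40)
print(Counter(assignments))  # Counter({1: 8, 2: 8, 3: 8, 4: 8, 5: 8})
```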


Students from both the treatment and control groups will be required to take the performance tasks. With an estimated 40 students per teacher, each performance task will be taken by about 16 students per teacher under this sampling design.

Impact Analyses

Adjusted post-intervention outcomes for students and teachers in the treatment group will be compared to the outcomes for their counterparts in the control group. The primary hypothesis-testing analyses will involve fitting conditional multilevel regression models (HLM), with additional terms to account for the nesting of individuals within higher units of aggregation (e.g., see Goldstein, 1987; Raudenbush & Bryk, 2002; Murray, 1998). The design thus involves clustering at the classroom level, as students are nested within teachers. A random effect for teachers will be included in the model to account for the nesting of student observations within teachers. Potential fixed effects include treatment group, state (CA or AZ), baseline (pre-test) measures of outcome variables, and other student and teacher-level covariates. The purpose of including statistical controls is not to remove potential sources of bias from the impact estimates, which is the purpose of the experimental research design, but to minimize random error and to increase the precision of the estimates.

Consider the following two-level hierarchical linear model for a continuous outcome:

Econ_i:j = β0 + β1 Pre_i:j + β2 Tx_j Fall_i:j + β3 Tx_j Spring_i:j + λI I_i:j + λT T_j + τ_j + ε_i:j    [1]

where subscripts i and j denote student and teacher, respectively; the nesting is reflected by the colons (:); Econ represents student economics achievement; Pre represents the baseline measure of the outcome variable; Tx is a dichotomous variable indicating that a student is enrolled in the class of a teacher assigned to the treatment condition; Fall and Spring indicate students who were enrolled in economics in the fall and in the spring, respectively (TxFall and TxSpring represent interactions between treatment status and student cohort); and I and T are two vectors of control variables for students and teachers, respectively (with coefficient vectors λI and λT), measured prior to exposure to the intervention.5 Lastly, τ represents a random variable for teachers (the clustering group), and ε_i:j is an error term for individual sample members. In this model, the intervention effects are represented by β2 and β3, which capture treatment/control differences in changes in the outcome variable between pretest and posttest for the fall and spring cohorts of students, respectively. Specifically, β3 represents the PBE impact on students when they are exposed to teachers with a full semester of prior experience with the PBE curriculum, and the difference between β3 and β2 taps the "teacher practice effect" of the curriculum. Wald tests will be performed to assess whether impacts are statistically different for the fall and spring cohorts of students. τ_j captures the random effects (intercept) of teachers, which account for the positive intraclass correlations in the data. Simple extensions to the model allow us to examine differential effectiveness across subgroups by including interactions between treatment status and one of the variables in I or T. Model [2], which estimates the average PBE impacts over the entire academic year, shows how we can estimate separate program effects for boys and girls:

Econ_i:j = β0 + β1 Pre_i:j + β2B Tx_j Boy_i:j + β2G Tx_j Girl_i:j + β3 Spring_i:j + λI I_i:j + λT T_j + τ_j + ε_i:j    [2]

Program impacts on boys and girls are captured by the coefficients β2B and β2G, respectively. By statistically testing the hypothesis β2B = β2G, we can then establish whether program impacts are statistically different for boys and girls. We plan to investigate gender, race/ethnic, and ELL/non-ELL differences in PBE program impacts, and expect to find more pronounced positive impacts on students who traditionally exhibit lower levels of achievement in academic subjects.
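For concreteness, the following sketch shows how a model like [1] could be fit and the fall/spring contrast tested with a Wald statistic. It is illustrative only: the column names (econ_post, econ_pre, tx, fall, teacher_id) and the input file are hypothetical, and the control vectors I and T and the school fixed effects are omitted for brevity.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical student-level file: one row per student, with post-test score
# (econ_post), baseline score (econ_pre), the teacher's treatment assignment
# (tx), a fall-cohort indicator (fall), and a teacher identifier (teacher_id).
df = pd.read_csv("pbe_students.csv")
df["tx_fall"] = df["tx"] * df["fall"]          # beta2 term in model [1]
df["tx_spring"] = df["tx"] * (1 - df["fall"])  # beta3 term in model [1]

# A random intercept for teachers captures students nested within teachers
# (the tau_j term); the residual plays the role of epsilon_i:j.
model = smf.mixedlm("econ_post ~ econ_pre + tx_fall + tx_spring",
                    data=df, groups=df["teacher_id"])
result = model.fit()
print(result.summary())

# Wald test of the "teacher practice effect": H0: beta3 - beta2 = 0.
cov = result.cov_params()
diff = result.fe_params["tx_spring"] - result.fe_params["tx_fall"]
var = (cov.loc["tx_spring", "tx_spring"] + cov.loc["tx_fall", "tx_fall"]
       - 2.0 * cov.loc["tx_spring", "tx_fall"])
z = diff / np.sqrt(var)
print(f"z = {z:.3f}, two-sided p = {2 * stats.norm.sf(abs(z)):.4f}")
```

The subgroup model [2] follows the same pattern, replacing the cohort interactions with treatment-by-boy and treatment-by-girl terms and testing the difference between their coefficients in the same way.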

Traditional or multilevel regression models will also be used to examine how intervention characteristics (e.g., implementation fidelity) are related to program effectiveness. Because the designs do not involve random assignment to different types of implementation regimes, these analyses will be purely descriptive in nature, and should not be used to make causal inferences. Nonetheless, the results from these analyses may be useful for planning subsequent experimental research on “best practices.” Models analogous to [1] and [2] will be estimated, except only the treatment group will be analyzed, and covariates will be included for measures of implementation.


A3. Use of Information Technology to Reduce Burden


Technology will be used in a variety of ways during the data collection process. Basic contact information about the schools in which teachers work will be gathered in an electronic database created by the WestEd research team. The research team will use this database to keep track of teachers' contact information and other information used to manage the study. Technology will also be used to link respondents' data from surveys and testing directly to analytic datasets without re-keying data. This saves time and reduces the chance of errors during data input.

In addition, communication between the research team and selected school officials and/or teachers will occur through email, fax, and conference calls that take advantage of information technology and reduce burdens associated with paperwork. The communication will cover initial inquiries, the exchange of preliminary information, the scheduling and planning of site visits, and the review of draft reports.

Throughout the study, a toll-free number and email addresses will be available to respondents to allow them to contact the research team with any questions or requests for assistance. This information, along with the names of contact persons on the research team at WestEd, will be printed on all data collection instruments.


A4. Efforts to Identify Duplication


Each instrument will be carefully reviewed (and some will be piloted using a small sample) to make sure that we collect only the information necessary for this study. Secondary information, such as student grade point average (GPA), will be accessed and collected through electronic databases at the school (or district) level.


A5. Methods to Minimize Burden on Small Entities


The research team will collect data from few small entities, as most of the data will come from teachers and students. The few small entities involved are likely to be the external technical assistants and consultants who may assist with data entry and test scoring. Only minimal information will be needed from these small entities, so no significant impact on small entities is expected.


A6. Consequences of Not Collecting the Data


The data collection efforts in this study will allow researchers to study the effectiveness of problem-based instruction in high school economics. As indicated earlier, economics has been a focus of attention but is often poorly taught. Although the problem-based approach developed by BIE appears to be beneficial for diverse students, a more rigorous study (such as the current study) is needed to examine further how this approach affects teachers' teaching and, in turn, students' learning in economics. The data collected in this study, including both teacher and student data, will come from an experimental design, which is generally considered the strongest design when the interest of the study is in establishing a cause-effect relationship (Trochim, 2001, p. 189). Failure to collect such data would greatly impair our ability to examine the effectiveness of problem-based instruction (as compared to traditional instructional methods), which is the main interest of this study.


A7. Special Circumstances Related to Information Collection

This information collection fully complies with 5 CFR 1320.5(d)(2).


A8. Federal Register Comments and Persons Consulted Outside the Agency


A notice about the study will be published in the Federal Register when the final OMB package is submitted.


The research team will seek the expertise of persons outside the agency through the creation of a Technical Working Group (TWG). The TWG will provide consultation on the design, implementation, and analysis of this study, as well as the entire portfolio of Regional Educational Laboratory West (REL West) studies. Members are expected to consult with REL West for five days per year through a combination of in-person and teleconferenced meetings. An honorarium of $1,200 will be paid to each TWG member. The TWG will play an important role in providing insight and guidance in support of a successful evaluation. The TWG members are listed below:


Professor Jamal Abedi, CRESST, University of California, Davis

Dr. Lloyd Bond, Carnegie Foundation for the Advancement of Teaching

Professor Geoffrey Borman, University of Wisconsin

Professor Brian Flay, Oregon State University

Professor Tom Good, University of Arizona

Dr. Corinne Herlihy, Manpower Demonstration Research Corporation (MDRC)

Dr. Joan Herman, CRESST, University of California, Los Angeles

Professor Heather Hill, University of Michigan

Dr. Roger Levine, American Institutes for Research (AIR)

Dr. Jason Snipes, Council of the Great City Schools


A9. Payment or Gift to Respondents


Teachers will be provided a $1,000 honorarium for their participation in the study.


A10. Assurance of Confidentiality Provided to Respondents


WestEd staff will comply with the Privacy Act for all individual and teacher data collected in the study. All data will be carefully handled in a responsible manner so they are not seen by or released to anyone not working on the project. Data will be reported in a summary fashion so no specific individual or school may be identified. Finally, all data will be maintained in secure and protected files that do not include personally identifying data.

No information will be collected that would identify individual participants. Participants will not be referenced by either their name or their position title. An explicit statement regarding confidentiality will be communicated to any and all participants.


A11. Justification of Asking Sensitive Questions

No questions will be asked that are of a sensitive nature.


A12. Estimate of the Hour Burden of Information Collection


The estimated total response burden is about 36,514 person-hours. This total represents the sum of the estimated burden for all portions of the study. Table 6 aggregates the estimated total hours and costs to participants of this study.

Table 6. Aggregate Respondents and Hour Burden (see footnote 6)

Task                           Number of Respondents   Hour Burden   Monetary Burden
Sampling/Gaining Cooperation            180                 180          $6,240
Piloting of Instruments                  12                   4             $79
Teacher Data Collection                 480                 330          $9,900
Student Data Collection              28,800              36,000        $360,000
TOTAL                                29,472              36,514        $376,219


Sampling and Gaining Cooperation. At the outset of the study, the process of data collection will be initiated with phone calls to superintendents and principals. These initial contacts will allow us to reach teachers in schools who may be interested in participating in the study. We estimate that explaining the nature of the study and securing permission to collect data could take up to an hour on average. In subsequent years, renewing contacts may require less time, but the work of collecting documents relevant to the study will increase.


The number of superintendents and principals shown in Table 7 corresponds to the number of teachers who will be recruited. Our recruitment strategy includes contacting school districts that have indicated support for conducting research. We anticipate that these contacts will result in teachers who will agree to participate; this increases efficiency during the study and reduces the overall burden during the study recruitment process.

Table 7. Estimated Burden for Sampling and Gaining Cooperation

Task                  Type of Respondent   Number   Time Estimate (hours)   Total Hours   Hourly Rate   Estimated Cost of Burden
Sampling Tasks        Superintendents        20              1                   20           $60              $1,200
Gaining Cooperation   School Principals      40              1                   40           $36              $1,440
Gaining Cooperation   Teachers              120              1                  120           $30              $3,600
TOTAL                 -                     180              -                  180            -               $6,240

Piloting of Instruments. In advance of the start of the study, we will ask a group of 6 teachers to review the newly developed instruments to ensure the administration protocols are clear and that our communication through email and on-line data collection technology is functioning properly. Similarly, we will ask a group of 6 students to review the instruments to ensure the administration protocols are clear. Feedback received from the teachers/students will be incorporated into revised administration protocols and will help us modify items so that the item descriptions are clear for respondents. Table 8 lists the estimated burden for piloting the instruments.


Table 8. Estimated Burden for Piloting of Instruments

Task                              Type of Respondent   Number   Time Estimate (hours)   Total Hours   Hourly Rate   Estimated Cost of Burden
Piloting of Teacher Instruments   Teachers                6            0.33                 1.98          $30               $59.40
Piloting of Student Instruments   Students                6            0.33                 1.98          $10               $19.80
TOTAL                             -                      12             -                   3.96           -                $79.20

Teacher Data Collection. Teacher data from various sources (the different instruments administered across the implementation period) will be collected as specified in Table 4. Table 9 lists the estimated burden for teacher data collection.


Table 9. Estimated Burden for Teacher Data Collection

Task                                                Type of Respondent   Number (see footnote 7)   Time Estimate (minutes)         Total Hours   Hourly Rate   Estimated Cost of Burden
Teacher Data Collection before Fall 2007            Teachers                      120               65 (treatment), 55 (control)       120           $30              $3,600
Teacher Data Collection during Fall 2007            Teachers                      120               50 (treatment), 0 (control)         50           $30              $1,500
Teacher Data Collection during Spring 2008          Teachers                      120               50 (treatment), 0 (control)         50           $30              $1,500
Teacher Data Collection at the end of Spring 2008   Teachers                      120               55 (treatment), 55 (control)       110           $30              $3,300
TOTAL                                               -                             480               -                                  330            -               $9,900


Student Data Collection. Similarly, student data will be collected as indicated in Table 4. Table 10 provides the estimated burden for student data collection.
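As a worked illustration of how the hour totals in Table 10 follow from the per-instrument minutes in Table 4 (assuming, per the table's "0 (control)" entries, that the 150-minute during-semester burden applies only to the roughly 2,400 treatment students):

```latex
\begin{aligned}
\text{Start of each semester:} \quad & 4{,}800 \times 55/60  = 4{,}400 \text{ hours} \\
\text{During each semester:}   \quad & 2{,}400 \times 150/60 = 6{,}000 \text{ hours} \\
\text{End of each semester:}   \quad & 4{,}800 \times 95/60  = 7{,}600 \text{ hours} \\
\text{Two-semester total:}     \quad & 2 \times (4{,}400 + 6{,}000 + 7{,}600) = 36{,}000 \text{ hours}
\end{aligned}
```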


A13. Estimate of Cost Burden to Respondents

Respondents will mainly be teachers and students. The hourly rate for each type of respondent is outlined in section A12. There are no additional respondent costs beyond those outlined in section A12.


A14. Estimate of Annual Cost to the Federal Government


The total cost for the study is $840,567 over five years. The average yearly cost is $168,113. Most of the costs for the study are incurred in years 1 and 2 as data collection efforts are under way.


A15. Program Changes or Adjustment

This request is for new information collection.








Table 10. Estimated Burden for Student Data Collection

Task                                                      Type of Respondent   Number (see footnote 8)   Time Estimate (minutes)          Total Hours   Hourly Rate   Estimated Cost of Burden
Student Data Collection at the beginning of Fall 2007     Students                    4,800               55                                  4,400         $10             $44,000
Student Data Collection during Fall 2007                  Students                    4,800               150 (treatment), 0 (control)        6,000         $10             $60,000
Student Data Collection at the end of Fall 2007           Students                    4,800               95 (see footnote 9)                 7,600         $10             $76,000
Student Data Collection at the beginning of Spring 2008   Students                    4,800               55                                  4,400         $10             $44,000
Student Data Collection during Spring 2008                Students                    4,800               150 (treatment), 0 (control)        6,000         $10             $60,000
Student Data Collection at the end of Spring 2008         Students                    4,800               95 (see footnote 9)                 7,600         $10             $76,000
TOTAL                                                     -                          28,800               -                                  36,000          -             $360,000


A16. Plans for Tabulation and Publication of Results

We plan to produce two technical reports in which evaluation results will be presented: 1) an interim evaluation report that will be based on data collected during the first implementation semester, and 2) a final evaluation report based on all the collected data. These reports are scheduled to be completed in 2008 and 2009, respectively. These reports will be published through the REL network and made available to the Regional Comprehensive Centers and the National High School Content Center for additional dissemination.

The findings of an experimental study on high school instruction in PBE will have significant audiences in the practitioner and academic communities. Interest will span educators concerned with the quality of economics instruction and learning as well as those interested more broadly in problem-based instructional approaches, which are used frequently in medical, law, and other professional programs. The study designers anticipate contributing to peer-reviewed journals and presenting at research meetings.



A17. Seeking Approval to Not Display the OMB Expiration Date

No request is being made for exemption from displaying the expiration date.


A18. Explanation of Exceptions

This collection of information involves no exceptions to the Certification for Paperwork Reduction Act Submissions.

1 Following the submittal of the revised research plan to IES on September 11, 2006, ED staff with the Regulatory Information Management Unit reviewed these instruments and suggested revisions: word choice, structure, tense. The revisions were helpful and are reflected in the instruments shown in Appendix C.

2 Instrument will have two versions – one for treatment teachers and another for control teachers.

3 Instrument will have two versions – one for students under treatment teachers and another for students under control teachers.

4 Teachers will be instructed to use the end-of-unit tests as part of the standard implementation of PBE.

5 The model will also include fixed effects for schools.

6 Rounded to the nearest integer.

7 Total number of teachers in the study (i.e., 60 teachers per condition).

8 Total number of students based on the estimate of 40 students per teacher (class).

9 Each student will take two CRESST performance tasks at the end of the fall and spring semesters.

