
Generic Clearance for Studies to Assess Learning and Flow in Video Games


OMB: 2700-0128




NASA Generic Clearance Abstracts


The NASA-sponsored Classroom of the Future (COTF) will conduct a series of studies to identify and assess learning, and to track flow, in video games. Though the methodology of each study below may differ somewhat, the purpose of each collection is similar. Without basic research into the assessment of learning in games, NASA Education will have no measure of how much learning occurs in the games it develops. NASA Education will use the results of the COTF game research to inform its investment in the development of video games that support increased science, technology, engineering, and mathematics (STEM) literacy and pipeline achievement.


1. Commercial Off-the-Shelf Games and Learning. For each game, assessment items will be developed to measure learner growth on the targeted learning objectives. The data collection consists of participants’ pre- and posttest responses to these assessment questions. Study data will be collected using database technologies. The project will begin in FY08. Protocols will be scored for learning gains using repeated-measures analyses. Learning gains will be included in annual reports to NASA Education as well as disseminated in publications and at professional conferences.
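The pre/post scoring described above can be sketched in a few lines. The scores below are hypothetical, and the paired t statistic stands in for whichever repeated-measures analysis is ultimately selected:

```python
import math
import statistics

# Hypothetical pre- and posttest scores for 8 participants (0-10 scale).
pre = [4, 5, 3, 6, 5, 4, 7, 5]
post = [6, 7, 5, 7, 6, 6, 8, 7]

# Learning gain for each participant.
gains = [b - a for a, b in zip(pre, post)]

n = len(gains)
mean_gain = statistics.mean(gains)
sd_gain = statistics.stdev(gains)

# Paired (repeated-measures) t statistic: mean gain over its standard error.
t_stat = mean_gain / (sd_gain / math.sqrt(n))
print(f"mean gain = {mean_gain:.2f}, t({n - 1}) = {t_stat:.2f}")
```

With real data, the t statistic would be compared against a t distribution with n − 1 degrees of freedom to decide whether the gain is statistically reliable.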


2. Genre Learning Outcome Matrix. COTF will build on an expert review, conducted by nine COTF staff in FY06, that matched commercial off-the-shelf (COTS) gameplay with learning outcomes. CET/COTF will compile a list of gaming technology and education experts and contact them directly by telephone. These game experts will be trained and asked to study COTF definitions for learning types and game types and then rate each game genre for its ability to promote each type of learning objective. Experts will write a rationale for why they evaluated the games as they did. Supportive documentation, entitled “COTF_outcomes_genre_infoDDR 07Jan02.doc,” contains the matrix chart that experts will complete and a sample completed team narrative. Results will be included in a report submitted annually to the NASA Education Technology and Products Office and disseminated nationally through peer-reviewed publications and conference presentations.
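Aggregating the experts’ ratings into the genre-by-outcome matrix could look like the following sketch. The genre and outcome names, the 1–5 scale, and the use of a cell mean are all illustrative assumptions, not COTF’s actual categories or scoring rule:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical expert ratings (1-5) of how well a game genre
# promotes a learning outcome: (genre, outcome, rating).
ratings = [
    ("simulation", "procedural knowledge", 5),
    ("simulation", "procedural knowledge", 4),
    ("simulation", "declarative knowledge", 3),
    ("puzzle", "procedural knowledge", 2),
    ("puzzle", "declarative knowledge", 4),
    ("puzzle", "declarative knowledge", 5),
]

# Collect every expert's rating for each (genre, outcome) cell.
cells = defaultdict(list)
for genre, outcome, score in ratings:
    cells[(genre, outcome)].append(score)

# The completed matrix: mean rating per genre-by-outcome cell.
matrix = {cell: mean(scores) for cell, scores in cells.items()}
for (genre, outcome), avg in sorted(matrix.items()):
    print(f"{genre:<12} {outcome:<22} {avg:.1f}")
```

Each cell mean summarizes the panel’s judgment; the written rationales would be analyzed separately as qualitative data.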


3. Real-life Expectancies Based Upon Game World Interactions. COTF staff will conduct phone interviews with 45 participants. Researchers will ask subjects to describe how interactions with the synthetic-world interface might have carried over into real life: Do participants ever find themselves trying to accomplish something in real life that they would do in a synthetic world? If the researchers and participants identify transfer expectations, researchers will ask participants to describe those transfer expectations, their correlates (the real-life expectation and the synthetic-world activity), and why they might occur. Participants will also be asked to provide referrals to other synthetic-world community members known to have had similar transfer expectations, including e-mail contact information for each referral. Results will be included in a report submitted annually to the NASA Education Technology and Products Office and disseminated nationally through peer-reviewed publications and conference presentations.


4. NASA Game—Prototype Usability (Playtesting). This data collection will be conducted each year, FY07-FY09, and will use nine COTF employees1 for each new round of usability testing. (See the footnote below for justification for the number of employees.) Usability testing will allow NASA staff to revise the game prototype before it is used in a study of learning and assessment in games. Two types of data will be collected. The first will measure learning, that is, change in the game player’s knowledge of the NASA mission within a targeted area to be determined by project leadership. The second will measure how the game affects the player’s “flow” experience. COTF will examine the results for patterns, but the small sample size will not support most inferential statistical analyses. The results of these studies will be used by the NASA prototype game project team to refine the game and its embedded assessments. Several collection instruments will be used, including, but not limited to, software and interviews. The interview questions are included in the document entitled “COTF_Usability_Testing_Interview_Questions.doc” and will be used for the NASA game as well as the experimental games (e.g., Selene). Results will be included in a report submitted annually to the NASA Education Technology and Products Office and disseminated nationally through peer-reviewed publications and conference presentations.


5. Experimental Game(s)—Selene—Prototype Usability (Playtesting). This data collection will be conducted each year, FY07-FY09, and will use nine COTF employees1 for each new round of usability testing. Usability testing will allow COTF staff to test and revise the game prototype before it is used in a study of learning and assessment in games. COTF will work with game developers to build hooks into the games that allow for assessment. COTF will experiment with two types of assessment tools and techniques. The first will measure learning, that is, change in the game player’s knowledge of the NASA mission within a targeted area to be determined by project leadership. The second will measure how the game affects the player’s flow experience. Several collection instruments will be used, including, but not limited to, software and interviews. The interview questions are included in the document entitled “COTF_Usability_Testing_Interview_Questions.doc” and will be used for the NASA game as well as the experimental games (e.g., Selene). COTF will examine the results for patterns, but the small sample size will not support most inferential statistical analyses. The results of these studies will be used by the NASA prototype game project team to refine the game and its embedded assessments. Results will be included in a report submitted annually to the NASA Education Technology and Products Office and disseminated nationally through peer-reviewed publications and conference presentations.


6. NASA Game—Experimental Testing. This data collection will be conducted each year, FY07-FY09. COTF will use statistical software to identify patterns in the data. Classical statistical methods that may be used include, but are not limited to, ANOVA, multivariate regression, principal component analysis, and k-nearest neighbor clustering. We suspect that nonlinear methods may be required given the inherent complexity of the learning process. The nonlinear methods most likely to yield information fall into the broad category of self-organizing systems. These systems, in general, cluster data based on commonalities within the data itself, without regard to outcome or result. The clusters can then be examined for common factors that were not included in the data driving the clustering. Results will be included in a report submitted annually to the NASA Education Technology and Products Office and disseminated nationally through peer-reviewed publications and conference presentations.
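The kind of outcome-blind clustering described above can be sketched with a minimal k-means implementation, here standing in for whichever self-organizing method is ultimately chosen. The gameplay features and data points are hypothetical:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: cluster points by feature similarity only,
    without regard to any outcome variable."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if the
        # cluster came up empty).
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical gameplay features: (hours played, actions per minute).
points = [(5, 2.0), (6, 2.5), (5.5, 1.8), (20, 8.0), (22, 9.0), (21, 8.5)]
centers, clusters = kmeans(points, k=2)
```

Once players are grouped this way, the clusters can be compared on held-out factors (e.g., posttest gains) that played no role in forming them.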


7. Experimental Game(s)—(Selene)—Assessment. This data collection will be conducted each year, FY07-FY09. COTF will measure learning and flow by programming the Selene game software to track each player’s decisions and actions while playing the game. The player’s actions and decisions are the player’s gameplay. COTF will also collect demographic data about players in order to statistically control for individual players’ characteristics. COTF will use statistical software to identify patterns in the data. Classical statistical methods that may be used include, but are not limited to, ANOVA, multivariate regression, principal component analysis, and k-nearest neighbor clustering. We suspect that nonlinear methods may be required given the inherent complexity of the learning process. The nonlinear methods most likely to yield information fall into the broad category of self-organizing systems. These systems, in general, cluster data based on commonalities within the data itself, without regard to outcome or result. The clusters can then be examined for common factors that were not included in the data driving the clustering. Results will be included in a report submitted annually to the NASA Education Technology and Products Office and disseminated nationally through peer-reviewed publications and conference presentations.
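Tracking each player’s decisions and actions amounts to logging an ordered event stream. The sketch below shows one possible shape for such a log; the class names, fields, and action labels are illustrative assumptions, not the Selene software’s actual schema:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class GameplayEvent:
    """One player decision or action, with a wall-clock timestamp."""
    player_id: str
    action: str
    detail: dict
    timestamp: float = field(default_factory=time.time)

class GameplayLog:
    """Accumulates a player's gameplay as an ordered event stream."""

    def __init__(self):
        self.events = []

    def record(self, player_id, action, **detail):
        self.events.append(GameplayEvent(player_id, action, detail))

    def to_json(self):
        # Serialize the stream for later statistical analysis.
        return json.dumps([asdict(e) for e in self.events])

# Hypothetical usage: two moves by one player.
log = GameplayLog()
log.record("p01", "select_tool", tool="impactor")
log.record("p01", "place_crater", diameter_km=3)
```

A stream like this, joined with the demographic data, is what the classical and self-organizing analyses described above would consume.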

1 COTF has used the game design (Fullerton, Swain, & Hoffman, 2004) and instructional design (Smith & Ragan, 1993) literatures to set the number of usability testers (playtesters) at nine. According to Fullerton, playtesting is the single most important game design activity (p. 196). Technically, playtesting is not the same as usability testing: playtesting gives the designer additional insight into how players experience the game and is a component of iterative design (test, evaluate, and revise). Both group and individual dynamics are necessary to test game effectiveness. Historically, four individuals can provide sufficient one-to-one (player to researcher) information in early stages of instructional design, and field trials are conducted with about 30 individuals representing each targeted audience. COTF has budgeted to employ nine playtesters. This will allow researchers the flexibility to study individuals at play within the game, focus groups of three players, and larger-group (nine-member) open discussion. Given the time constraints (only about two weeks) and the responsible use of limited budget dollars, nine playtesters will give COTF grouping flexibility while maintaining a manageable sample size, one that provides informative data without overwhelming the analysis with too much data to process, evaluate, and report to the game design team at Georgia Tech. A playtester pool of nine also affords breakout into smaller groups of three. This practice accords with federal policy to test instruments with 10 persons or fewer, reducing the burden on the public and enhancing the quality of the information obtained through the data collection (83i justification A.12: “Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates. Consultation with a sample [fewer than 10] of potential respondents is desirable.”).


File Type: application/msword
File Title: NASA Generic Clearance Abstracts
Author: Debbie Denise Reese
Last Modified By: Walter Kit
File Modified: 2007-03-07
File Created: 2007-03-07
