
SUPPORTING STATEMENT: PART B





Test Predictability of Falls Screening Tools

OMB# 0920-XXXX





Date: March 6, 2018












Point of Contact for OMB:

Elizabeth Burns

Centers for Disease Control and Prevention

National Center for Injury Prevention and Control

4770 Buford Highway NE MS F-64

Atlanta, GA 30341-3724

Phone: 770.488.3661

Email: [email protected]




Table of Contents

Collection of Information Employing Statistical Methods 3

B1. Respondent Universe and Sampling Methods 3

B2. Procedures for the Collection of Information 6

B3. Methods to Maximize Response Rates and Deal with Nonresponse 8

B4. Test of Procedures or Methods to be Undertaken 9

B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data 11



Attachments

A-1. Public Health Service Act (PHSA) 42 U.S.C

A-2. AmeriSpeak Technical Overview

A-3. 60-Day Federal Register Notice

A-4. Public Comment

A-5. Subject Matter Experts (SMEs)

A-6. Privacy Impact Assessment (PIA)

A-7. Documentation for AmeriSpeak Panel for IRBs

A-8. AmeriSpeak Standard Profile Variables with Values

A-9. AmeriSpeak Privacy Statement

A-10. IRB Protocol

A-11. IRB Approval Letter

B-1. Sample Prenotification Postcard

B-2. Sample Prenotification Email

B-3. Falls Diary

B-4. Cover Letter for Monthly Survey & Diary – Mail Version

B-5. Cover Letter for Monthly Survey & Diary – Email Version

B-6. Proxy Survey Protocol

B-7. Baseline Survey – Web Mode

B-8. Baseline Survey – Telephone Mode

B-9. Monthly Update Survey – Web Mode

B-10. Monthly Update Survey – Telephone Mode

B-11. Final Survey – Web Mode

B-12. Final Survey – Telephone Mode

B-13. Proxy Survey – Web Mode

B-14. Proxy Survey – Telephone Mode











Collection of Information Employing Statistical Methods

B1. Respondent Universe and Sampling Methods



The longitudinal Test Predictability of Falls Screening Tools project will use NORC’s AmeriSpeak® Panel to identify a community-dwelling sample of adults 65 and older. The AmeriSpeak panel is designed to approximate a representative sample of the US population (see Attachment A-2). Below we discuss the likelihood that the data from this study will yield a representative sample; however, it is not necessary that the sample used in this study be strictly representative. For the purposes of this test, it is only necessary that the sample be sufficiently varied with respect to the characteristics that might predict the screening tools’ wide-spread applicability. CDC/NORC will not use these data to draw conclusions about the likelihood of falls among community-dwelling adults over 65.



Exhibit 1. Sampling Universe, Sampling Frame Creation, and Baseline Survey Sample Size

| Stage                                         | Size                    |
| Universe of older adults 65 and older         | 49.2 million households |
| NORC’s National Sampling Frame                | 3 million households    |
| Older adult households                        | 20,000 households       |
| Older adults recruited into AmeriSpeak Panel  | 6,245                   |
| Survey invitations issued                     | 2,925                   |
| Baseline survey respondents                   | 1,900                   |



NORC’s National Frame begins with an area probability sample constructed using a two-stage probability sample design.1,2 NORC’s National Frame contains almost 3 million households, including over 80,000 rural households not available from the US Postal Service Delivery Sequence File (USPS DSF) but identified by direct listing by field staff. The sample includes households in all 50 states and the District of Columbia. The AmeriSpeak sample panel is recruited from NORC’s National Frame of addresses using probability-based sampling in conjunction with mail, telephone, and in-person (face-to-face) recruitment contacts. The American Association for Public Opinion Research (AAPOR) weighted response rate RR3 for the 2014-2016 panel recruitment is 34%. Once panel households are recruited, rigorous methods are employed2 to maximize survey response rates and maintain cooperation of participants. NORC has formally documented the response rate calculation.3 In addition, an address-based sample from the USPS DSF is used in four states where the NORC National Frame has an inadequate sample size. Importantly, basic demographic information such as age, gender, and race/ethnicity is collected for all AmeriSpeak Panel members, which allows oversampling of targeted populations in specific studies.

Samples selected for any subgroup among adults 18+ years of age are selected using unbiased sampling methods, and sample design weights are calculated using the specific sample design for a study in conjunction with the probability-based AmeriSpeak panel weights. Thus, for example, after OMB approval an unbiased stratified sampling method will be used to select a representative sample of community-dwelling adults 65+ years of age from the 20,000-household AmeriSpeak panel. The stratified sample of adults 65+ years of age will be selected using 24 demographic sampling strata to account for expected differential completion rates by demographic subgroups, so that the set of panel members with a completed interview for a study is a proportionate sample of the adult population 65+ years of age. In addition, if a household has more than one active panel member 65+ years of age, only one adult in the household will be randomly selected for the study. To ensure we reach 1,900 participants in the baseline survey, we will contact 2,925 members of the AmeriSpeak Panel. Historically, about 65% of AmeriSpeak panelists 65 and older agree to participate in a study, which yields approximately 1,901 respondents (2,925 × 0.65).
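
As a simple check on these yield assumptions, the following minimal sketch (in Python) reproduces the invitation and completion arithmetic cited above; the function names are illustrative only.

```python
# Minimal sketch of the expected-yield calculation described above.
# The 65% completion rate and the 1,900-respondent target come from
# the figures cited in this section; the function names are illustrative.

import math

def invitations_needed(target_completes: int, completion_rate: float) -> int:
    """Number of panel invitations required to expect `target_completes` completes."""
    return math.ceil(target_completes / completion_rate)

def expected_completes(invitations: int, completion_rate: float) -> float:
    """Expected number of completed baseline surveys."""
    return invitations * completion_rate

print(invitations_needed(1900, 0.65))    # 2,924 invitations needed for 1,900 completes
print(expected_completes(2925, 0.65))    # about 1,901 expected baseline respondents
```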

The table below provides distributions of unweighted and weighted estimates (using the final panel weights) from the full AmeriSpeak panel for panel members who are aged 65 and older. The Falls Survey sample will be selected as a subsample from the full AmeriSpeak panel. To maintain representativeness of the target population, various stages of sample design and weighting adjustments are required to derive the final survey weights of the Falls Survey sample. The Falls Survey sample weights will reflect the following design features from both the AmeriSpeak Panel and the application of weighting adjustments to maintain representativeness of the target population of people 65+: (a) probability of selection of the housing unit in the Panel, (b) adjustments for unknown eligibility of the housing unit in the Panel, (c) nonresponse associated with Panel recruitment, (d) Panel attrition, (e) nonresponse from eligible adults in households where at least one adult was recruited, (f) probability of selection of the Falls Survey sample from the Panel, (g) nonresponse associated with the selected sample for the Falls Survey, and (h) a raking ratio adjustment of Falls Survey respondents to external population control totals (from the Current Population Survey) for the population aged 65+ by age group, sex, Census division, education, race/Hispanic ethnicity, housing tenure, and household phone status.
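
To illustrate the raking ratio adjustment in item (h), the following is a minimal sketch of iterative proportional fitting in Python. The margins, category codes, and convergence settings are illustrative assumptions, not the production AmeriSpeak or Falls Survey weighting specification.

```python
# Minimal raking (iterative proportional fitting) sketch for item (h) above.
# The margins and convergence settings are illustrative assumptions, not the
# production weighting specification.

import numpy as np

def rake(weights, categories, targets, max_iter=50, tol=1e-6):
    """Adjust `weights` so weighted totals match `targets` on each margin.

    categories: dict of margin name -> array of category codes per respondent
    targets:    dict of margin name -> dict of category code -> population total
    """
    w = weights.astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for margin, codes in categories.items():
            for cat, target_total in targets[margin].items():
                mask = codes == cat
                current = w[mask].sum()
                if current > 0:
                    factor = target_total / current
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return w

# Toy example: 6 respondents raked to illustrative age-group and sex control totals.
base_w = np.ones(6)
cats = {"age": np.array([1, 1, 2, 2, 3, 3]), "sex": np.array([1, 2, 1, 2, 1, 2])}
tgts = {"age": {1: 40, 2: 35, 3: 25}, "sex": {1: 45, 2: 55}}
print(rake(base_w, cats, tgts).round(2))
```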






Exhibit 2. Distribution of survey sample selected: AmeriSpeak Panel adults 65+ (n = 4,788)

| Characteristic              | Benchmark (March 2017 CPS*) | Unweighted % | Weighted % (final panel weight) | Error (Weighted - Benchmark) |
| Age                         |      |      |      |      |
|   65-69                     | 34.0 | 38.5 | 37.1 |  3.1 |
|   70-74                     | 25.3 | 28.3 | 29.8 |  4.5 |
|   75-79                     | 17.3 | 16.2 | 15.6 | -1.7 |
|   80-84                     | 11.5 |  9.8 | 10.2 | -1.3 |
|   85+                       | 11.9 |  7.2 |  7.3 | -4.6 |
| Race                        |      |      |      |      |
|   NH-White                  | 77.0 | 64.6 | 78.5 |  1.5 |
|   NH-Black                  |  8.9 | 13.4 |  8.1 | -0.8 |
|   Hispanic                  |  8.2 | 15.8 |  7.3 | -0.9 |
|   NH-Asian/Pacific Islander |  4.6 |  1.4 |  1.4 | -3.2 |
|   NH-All Other              |  1.3 |  4.8 |  4.7 |  3.4 |
| Gender                      |      |      |      |      |
|   Male                      | 45.0 | 44.3 | 49.8 |  4.8 |
|   Female                    | 55.0 | 55.7 | 50.2 | -4.8 |
| Region                      |      |      |      |      |
|   Northeast                 | 18.6 | 16.4 | 17.9 | -0.7 |
|   Midwest                   | 21.5 | 22.2 | 22.0 |  0.5 |
|   South                     | 37.3 | 40.1 | 38.2 |  0.9 |
|   West                      | 22.6 | 21.2 | 21.9 | -0.7 |

*Current Population Survey

The baseline interview collects information on existing screeners and questions, in addition to the Stay Independent Checklist, that are currently used in clinical practice to identify risk factors for falls, such as comorbidities, current medications, and activities of daily living. After the baseline collection, follow-up surveys are conducted once each month for 11 months with the same sample of older adults to assess the incidence of falls. A final survey will be conducted with the sample 12 months post-baseline to again ask the baseline interview questions and collect the final month of falls data. We will conduct interviews with the Test Predictability of Falls Screening Tools sample by internet or by phone, depending on respondents’ preferred mode of participation. We estimate obtaining a minimum of 1,520 completed final interviews from respondents who also completed baseline interviews, out of the sample of older adults selected from the AmeriSpeak Panel. Additional information on the sampling and recruitment of the AmeriSpeak Panel can be found in the Technical Overview of the AmeriSpeak Panel, NORC’s Probability-Based Research Panel (Att. A-2).

B2. Procedures for the Collection of Information

Participants will initially be contacted via an advance postcard sent through the mail (for those who prefer telephone administration) or an advance email (for those who prefer web administration). Since participants have already consented to participate in the AmeriSpeak Panel, the pre-notification will ask about their interest in participating in a new survey designed to understand factors that predict falls. An example of the pre-notification postcard can be found in Attachment B-1, and the pre-notification email in Attachment B-2.

Baseline survey. The baseline survey, conducted via web (Att. B-7) and telephone (Att. B-8) will collect complementary data relevant for identifying risk factors for falls. Respondents will be asked to report on current medications, including prescription and non-prescription medications. Respondents will also report on their medical history, including any acute or chronic medical conditions that are risk factors for falls. Osteoporosis, urinary incontinence, cardiovascular disease, and decreased physical function or endurance are among the conditions which are risk factors for falls. Additionally, the survey will query respondents on fall prevention activities, offered either in a clinical or community setting, in which they have engaged during the past year. Demographic information that AmeriSpeak members provide as part of their profile survey will be utilized as well, including age, gender, race, and geographic area.

Eleven monthly update surveys. NORC will administer a brief monthly update survey via web (Att. B-9) and telephone (Att. B-10) every month to respondents who participated in the baseline survey. Following up monthly will diminish recall bias and reduce sample loss. This brief monthly update survey will have a limited set of questions to capture recent falls (injury producing falls and non-injury producing falls) and changes in health status. AmeriSpeak panelists are regularly contacted to ensure their contact information is kept up to date. A proxy, identified by the respondent at the baseline survey, will be contacted to complete the monthly update survey if the respondent is unavailable. More details about the proxy survey can be found in section B3.

Final Survey. Upon completion of the last monthly update survey, NORC will field the final survey with the sample. The final survey will also be conducted online (Att. B-11) and by telephone (Att. B-12) and will capture much of the same information as the baseline survey, including existing screeners and questions, in addition to the Stay Independent Checklist, that are currently used in clinical practice to identify risk factors for falls, such as comorbidities, current medications, activities of daily living, and fall prevention activities. However, the baseline survey collects this information for the year prior to study participation, whereas the final survey collects this information for the final month of study participation. In conjunction with the repeated monthly monitoring of falls transpiring over the course of the year, the baseline and final survey data will be used to determine cohort changes in responses as well as changes in risk factors for falls, health status, and medications over time.

We use the following assumptions for our sample design:

  • The percentage of older adults reporting a fall over one year is expected to be approximately 29%,4,5 and will be measured by the responses to the monthly update surveys and the final survey of the same sample of older adults.

  • Questions in the baseline Test Predictability of Falls Screening Tools survey with a sensitivity higher than that reported in the literature for falls history alone (0.50 sensitivity and 0.80 specificity) will be identified.6

  • We assume a parsimonious subset of questions administered in the baseline Test Predictability of Falls Screening Tools survey will positively predict a fall occurring within a one-year period with a sensitivity of approximately 0.75 and a specificity of 0.80. We expect a higher sensitivity when this subset is combined with other questions identifying risk factors for falls.

Exhibit 3 summarizes the resulting sample distribution for at least one reported fall under our assumptions, with a sample size of 1,520 completed interviews at the final survey. We expect to detect an effect size of approximately 3.6 percentage points with 80% power between the falls screening questions and the incidence of falls for the population of adults 65 and older.

Exhibit 3. Sample Distribution for Successful Screening of Falls

|                   | Condition – Fall Absent | Condition – Fall Present | Totals |
| Screener Positive |                     216 |                      331 |    547 |
| Screener Negative |                     863 |                      110 |    973 |
| Totals            |                   1,079 |                      441 |  1,520 |
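
The cell counts in Exhibit 3 follow directly from the design assumptions stated above; the following minimal sketch (in Python) reproduces the derivation, rounding to whole respondents.

```python
# Minimal sketch of how the Exhibit 3 cell counts follow from the design
# assumptions above (prevalence 0.29, sensitivity 0.75, specificity 0.80,
# and 1,520 completed final interviews), rounding to whole respondents.

n_total = 1520
prevalence, sensitivity, specificity = 0.29, 0.75, 0.80

fallers     = round(n_total * prevalence)        # 441 with at least one fall
non_fallers = n_total - fallers                  # 1,079 without a fall

true_pos  = round(fallers * sensitivity)         # 331 screener positive, fall present
false_neg = fallers - true_pos                   # 110 screener negative, fall present
true_neg  = round(non_fallers * specificity)     # 863 screener negative, fall absent
false_pos = non_fallers - true_neg               # 216 screener positive, fall absent

print(true_pos + false_pos, false_neg + true_neg)  # 547 screener positive, 973 negative
```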



Exhibit 4 summarizes estimates of population prevalence, sensitivity, specificity, 95% confidence intervals for each estimate, and likelihood ratios for the observed total sample. Taking into account the estimated weighting design effect of 1.2 for the sample, the 95% confidence interval for an estimated sensitivity of 0.75 is [0.70, 0.79], and the 95% confidence interval for an estimated specificity of 0.80 is [0.77, 0.83].


Exhibit 4. Estimates of Population Prevalence, Sensitivity, Specificity, and Likelihood Ratios for Total Sample

|                                | Estimated Value | 95% CI Lower Limit | 95% CI Upper Limit |
| Prevalence                     | 0.29            | 0.27               | 0.32               |
| Sensitivity                    | 0.75            | 0.70               | 0.79               |
| Specificity                    | 0.80            | 0.77               | 0.83               |
| Likelihood Ratio, Positive [C] | 3.75            | 3.25               | 4.32               |
| Likelihood Ratio, Negative [C] | 0.31            | 0.26               | 0.37               |
| Likelihood Ratio, Positive [W] | 1.53            | 1.34               | 1.76               |
| Likelihood Ratio, Negative [W] | 0.13            | 0.11               | 0.16               |

Likelihood Ratios: [C] = conventional, [W] = weighted by prevalence
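
The following minimal sketch (in Python) shows how the Exhibit 4 point estimates and normal-approximation confidence intervals can be approximately reproduced from the Exhibit 3 cell counts, applying the weighting design effect of 1.2 noted above. The Wald-style interval and the prevalence-odds scaling of the [W] likelihood ratios are assumptions made for illustration; the production variance estimation may differ.

```python
# Minimal sketch reproducing (approximately) the Exhibit 4 estimates and
# confidence intervals from the Exhibit 3 cell counts. The design effect of
# 1.2 comes from the text above; the Wald-style interval is an assumption.

import math

tp, fn, fp, tn = 331, 110, 216, 863              # Exhibit 3 cell counts
n_present, n_absent = tp + fn, fp + tn
n_total = n_present + n_absent
deff = 1.2                                       # weighting design effect from the text
z = 1.96                                         # 95% confidence level

def prop_ci(p, n):
    """Normal-approximation CI for a proportion, with the design effect applied."""
    se = math.sqrt(p * (1 - p) / (n / deff))
    return p - z * se, p + z * se

prevalence  = n_present / n_total                # 0.29
sensitivity = tp / n_present                     # 0.75
specificity = tn / n_absent                      # 0.80

for name, p, n in [("Prevalence", prevalence, n_total),
                   ("Sensitivity", sensitivity, n_present),
                   ("Specificity", specificity, n_absent)]:
    lo, hi = prop_ci(p, n)
    print(f"{name}: {p:.2f} [{lo:.2f}, {hi:.2f}]")

# Likelihood ratios: conventional, and rescaled by the prevalence odds, which
# reproduces the tabled "[W]" values under this illustrative interpretation.
lr_pos = sensitivity / (1 - specificity)         # about 3.75
lr_neg = (1 - sensitivity) / specificity         # about 0.31
odds = prevalence / (1 - prevalence)
print(round(lr_pos, 2), round(lr_neg, 2), round(lr_pos * odds, 2), round(lr_neg * odds, 2))
```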


Panelists may participate in AmeriSpeak Panel studies online (via computer, tablet, or smartphone) or by computer-assisted telephone interviewing (CATI). CATI respondents represent a population currently underrepresented in web panels, which exclude non-internet households and "net-averse" persons. For studies of older adults in the AmeriSpeak Panel, approximately 40% of interviews are completed by telephone.

Data collection materials, including all questionnaires, are included as Attachments B-1 through B-14.

B3. Methods to Maximize Response Rates and Deal with Nonresponse

Participants in the Test Predictability of Falls Screening Tools project are registered with AmeriSpeak and will be offered survey "points" that can be redeemed for prizes, as is commonly provided to survey panel respondents who complete online surveys. The points will not be sent to respondents by CDC; instead, they will be provided by the online panel provider to respondents who complete the survey. This is part of the business model of the online panel provider.

All panel-based research organizations – whether probability-based, such as AmeriSpeak, or non-probability – motivate panel members to continue participating in surveys through an internal rewards program. Relatively small numbers of points are used for AmeriSpeak Panel surveys, based on an understanding of what keeps participants engaged and motivated, in order to maximize retention of panelists and survey participation. For the Test Predictability of Falls Screening Tools project, points worth $5, $2, and $10 will be awarded to panelists for completing the baseline survey, each monthly update survey, and the final survey, respectively. Panelists who complete all 11 monthly update surveys will receive bonus points worth $10. Therefore, the greatest total amount of points a panelist will be able to receive for participation will be worth $47 (which averages to $3.62 per completed survey across the 13 surveys).

Panelists who do not respond to the initial survey invitation within three weeks will receive a reminder email emphasizing the importance of their participation in the project. Participants will also receive email reminders before monthly update survey data collection. Telephone panelists will receive multiple follow-up call attempts at strategic times over the course of the survey recruitment period. Panelists who miss a month of data collection will remain in the survey and can return to any prior month to complete that survey at any time. All panelists will be asked to complete the final survey regardless of the number of monthly surveys they or their proxy have completed. We will also provide a falls calendar and log for all participants to record falls activity between data collection periods. A template for the falls diary is included in Attachment B-3. A sample cover letter that will accompany the mailing of the diary is included in Attachment B-4, with the corresponding email in Attachment B-5.

One of the follow-up challenges is that one out of five falls causes a serious injury such as a broken bone or a head injury, and more than 700,000 patients are hospitalized annually because of a fall injury.7 Respondents who fall and are hospitalized will not be able to respond to every monthly update Test Predictability of Falls Screening Tools survey. To minimize the risk of missing these important data, NORC will collect in the baseline survey contact information for a person who can serve as a proxy respondent if the survey respondent cannot be reached. If a respondent fails to respond to all follow-up attempts within two weeks, a short proxy survey will be conducted with the alternative contact to determine the reason for the non-response (e.g., survey attrition, health issues, or mortality) and to collect any information about the panelist’s falls that the proxy can answer (See Attachment B-6 for the protocol for conducting proxy surveys). A short proxy survey is included in Attachments B-13 and B-14.

Because it is known ahead of time that the overall response rate for the survey will be less than 80% (the AmeriSpeak Panel recruitment rate is 34% based on the 2014-2016 AAPOR RR3 weighted rate, the response rate for the baseline survey is expected to be approximately 65%, and approximately 80% of baseline survey responders are expected to also respond to the final survey), NORC will calculate and report unit and item non-response rates and carry out a non-response bias analysis following the guidelines in Standard 3.2 of the OMB Standards and Guidelines for Statistical Surveys.8 NORC will assess and measure non-response bias by evaluating the demographic and geographic representativeness of the baseline and final survey participants compared to the Current Population Survey (CPS) population benchmarks. The sample composition of the baseline and final survey participants will also be compared to evaluate whether any within-study attrition is contributing to non-response bias.
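
The following minimal sketch (in Python) illustrates the cumulative yield implied by these stage rates. Multiplying the stage rates is an illustrative simplification of the formal AAPOR response rate calculation documented for the AmeriSpeak Panel.

```python
# Minimal sketch of the cumulative yield implied by the stage rates above.
# Multiplying stage rates is an illustrative simplification of the formal
# AAPOR RR3 calculation documented for the AmeriSpeak Panel.

panel_recruitment_rate = 0.34   # 2014-2016 AAPOR RR3 weighted rate
baseline_response_rate = 0.65   # expected among invited panelists
final_retention_rate   = 0.80   # expected among baseline responders

cumulative_rate = panel_recruitment_rate * baseline_response_rate * final_retention_rate
print(f"Cumulative response rate: {cumulative_rate:.1%}")   # well below the 80% threshold

baseline_completes = 1900
print(f"Expected final completes: {baseline_completes * final_retention_rate:.0f}")  # 1,520
```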

B4. Test of Procedures or Methods to be Undertaken

Whenever possible, the Test Predictability of Falls Screening Tools project relies on questions and measures that have been previously developed and tested, with their validity and reliability demonstrated in community-dwelling older adults. Cognitive testing of the baseline survey was conducted with eight individuals to test the length of the survey as well as general comprehension. Cognitive test data will be reviewed for inconsistent responses that might signal problems of comprehension, recall, or reporting in the survey questions. These findings will be used to identify potential improvements to the survey prior to the beginning of actual data collection. AmeriSpeak also uses a "soft launch" during the initial fielding of the baseline survey: in the first few days, the baseline survey is released to only 10% of the sample. All baseline survey data are inspected by NORC staff and respondent feedback is solicited to ensure there are no technical issues to address before releasing the survey to the full sample.

Exploratory factor analysis and confirmatory factor analysis (CFA) will be used to identify which survey items have the greatest likelihood of predicting future falls while controlling for key demographic factors and prior history of fall risk factors.9,10,11 To narrow down the larger list of survey items, item response theory (IRT) will be used to identify the individual survey questions and groupings of survey questions that are most predictive of reported falls.12,13 IRT encompasses any model relating the probability of an examinee’s response to a question to an underlying ability.14 To describe the impact of different modifiers, IRT will be used to test the importance of question responses for different modifiers of interest (e.g., environmental risk, physical activity) to understand whether predictive power varies by demographic or other risk category.15,16 We will then evaluate the sensitivity and specificity of the factors identified in CFA to describe how modifying factors are related to actual falls and fall severity.
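
As an illustration of the item response models referenced here, the following minimal sketch (in Python) computes two-parameter logistic (2PL) response probabilities as a function of an underlying trait. The item names and parameter values are illustrative assumptions, not estimates from study data.

```python
# Minimal sketch of a two-parameter logistic (2PL) IRT response function, the
# kind of model referenced above that relates the probability of endorsing a
# survey item to an underlying latent trait (here, fall-risk propensity).
# Item names, discrimination (a), and difficulty (b) values are illustrative.

import numpy as np

def p_endorse(theta, a, b):
    """Probability of endorsing an item given latent trait theta (2PL model)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                  # range of latent fall-risk propensity
items = {"unsteady walking": (1.8, 0.0),       # (discrimination, difficulty)
         "fallen in past year": (1.2, 0.5),
         "worried about falling": (0.8, -0.5)}

for name, (a, b) in items.items():
    print(name, np.round(p_endorse(theta, a, b), 2))
```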

Descriptive statistics will be conducted at each of the 13 points of data collection. These statistics will include frequencies and percentages for categorical data as well as means, medians, and modes for continuous data. The primary purpose of these analyses is to clean the data and to look for trends and outliers.

Next, univariate analyses will be conducted after the 6-month follow-up survey is complete and after the Final Survey is complete to determine which of the independent variables (demographics, fall history, medical history, etc.) are related to the dependent variable (falling) with statistical significance. The dependent variable is whether participants did (fallers) or did not (non-fallers) experience a fall, or experienced a medically treated fall (medically treated fallers), during the Test Predictability of Falls Screening Tools project period. A two-sample t-test will be conducted to determine whether there is a significant difference in age between those participants who experience a fall during the project period (fallers) and those who do not (non-fallers). T-tests of this nature will be conducted for all continuous independent variables in the dataset. Chi-square tests will be used to determine whether between-group differences exist for all dichotomous independent variables, such as gender. Categorical independent variables with more than two categories, such as race and geographic area, will be analyzed using ANOVA. Finally, Spearman’s rho correlations will be used to determine the relationships between scores on screeners identifying risk factors for falls and whether or not participants experienced a fall. These analyses will be repeated to compare non-fallers and medically treated fallers.
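
The following minimal sketch (in Python, using simulated illustrative data) shows the planned univariate comparisons; the variable names and data frame are assumptions, not the study data set.

```python
# Minimal sketch of the univariate comparisons described above, applied to
# simulated illustrative data; column names are assumptions, not study data.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "fell": rng.integers(0, 2, n),                       # 1 = faller, 0 = non-faller
    "age": rng.normal(74, 6, n),                         # continuous covariate
    "gender": rng.choice(["M", "F"], n),                 # dichotomous covariate
    "region": rng.choice(["NE", "MW", "S", "W"], n),     # multi-category covariate
    "screener_score": rng.integers(0, 14, n),            # risk-screener score
})

fallers, non_fallers = df[df.fell == 1], df[df.fell == 0]

# Two-sample t-test for a continuous covariate (e.g., age)
print(stats.ttest_ind(fallers.age, non_fallers.age))

# Chi-square test for a dichotomous covariate (e.g., gender)
print(stats.chi2_contingency(pd.crosstab(df.gender, df.fell)))

# One-way ANOVA of fall status across a multi-category covariate (e.g., region),
# reflecting the analysis plan as stated above
groups = [g.fell for _, g in df.groupby("region")]
print(stats.f_oneway(*groups))

# Spearman's rho between screener score and fall status
print(stats.spearmanr(df.screener_score, df.fell))
```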

Multivariate analyses will also be conducted after the 6-month follow-up survey is complete and after the Final Survey is complete. Logistic regression models will be built in order to determine which of the independent variables, found to be significantly related to the dependent variable in the univariate analyses, significantly contribute to fall status. In building logistic regression models we will be able to control for key subgroups with more risk factors for falls, such as women and persons with prior history of falls, thereby identifying what other individual items have the greatest likelihood of predicting future falls. Two sets of models will be built at each point (6 and 12 months) to compare fallers with non-fallers and to compare medically treated fallers with non-fallers. These models will also incorporate a time variable so that the length of time between the preceding interview and the occurrence of a fall can be examined in relation to the individual items while controlling for covariates such as the characteristics of key subgroups with more risk factors for falls already mentioned.
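
The following minimal sketch (in Python, using simulated illustrative data) shows the general form of such a logistic regression model; the variable names and simulated relationships are assumptions for illustration only.

```python
# Minimal sketch of the kind of logistic regression model described above,
# fit to simulated illustrative data; variable names and the dependence of
# fall status on the covariates are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(74, 6, n),
    "female": rng.integers(0, 2, n),
    "prior_falls": rng.integers(0, 2, n),
    "screener_score": rng.integers(0, 14, n),
})
# Simulate fall status with higher risk for older age, prior falls, higher scores
linpred = -6 + 0.05 * df.age + 0.3 * df.female + 0.9 * df.prior_falls + 0.1 * df.screener_score
df["fell"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

model = smf.logit("fell ~ age + female + prior_falls + screener_score", data=df).fit()
print(model.summary())
print(np.exp(model.params))   # odds ratios for each covariate
```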

Other types of multivariate models may be utilized as well. For example, a discriminant function analysis of the 6- and 12-month data together could indicate which items best predict whether a participant belongs in the non-faller, non-medically treated faller, or medically treated faller group.
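
The following minimal sketch (in Python, using simulated illustrative data) shows a discriminant function analysis of the kind described; the group labels, predictors, and group proportions are illustrative assumptions.

```python
# Minimal sketch of a discriminant function analysis for the three-group
# comparison described above (non-fallers, non-medically treated fallers,
# medically treated fallers), using simulated illustrative data.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 900
X = np.column_stack([rng.normal(74, 6, n),        # age
                     rng.integers(0, 14, n),      # screener score
                     rng.integers(0, 2, n)])      # prior fall history
y = rng.choice(["non-faller", "faller", "medically treated faller"],
               size=n, p=[0.71, 0.19, 0.10])      # illustrative group proportions

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.explained_variance_ratio_)   # contribution of each discriminant function
print(lda.predict(X[:5]))              # predicted group membership for first 5 cases
```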

B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

NORC is the contractor collecting and analyzing the data for the survey. The following individuals have reviewed technical and statistical aspects of procedures that will be used to implement the survey.

Kennon R. Copeland, PhD, (301-634-9432) is Senior Vice President and Director of Statistics and Methodology for NORC at the University of Chicago. In addition, Copeland has a Senior Advisory role on the AmeriSpeak Panel. Copeland has more than 35 years of experience in sample design, weighting methods, and error measurement methods for large-scale household, establishment, and healthcare surveys. At NORC, he is responsible for sample design and estimation methodology for government and public interest surveys. For 17 years prior to joining NORC, Copeland was at IMS Health, where his last position was Senior Director for Statistical Services. While at IMS, he developed and implemented survey methodologies for the physician, hospital, pharmacy, and clinic segments of the health care market, designed and conducted studies of physician treatment patterns, and developed and implemented studies of pharmaceutical pricing. Copeland earned his PhD in Survey Methodology from the University of Maryland and his MS in Statistics from the University of Kentucky. 

Vicki J. Pineau, MS, (404-240-8401) will lead the sample design task and is a Senior Statistician at NORC. Pineau is a sampling statistician and veteran survey research methodologist with over 28 years of experience in survey and sample design covering a myriad of government and public interest surveys and censuses in vaccines and immunizations, health policy, income, labor force, poverty, program participation, and agriculture. Pineau's expertise includes: sample designs for rare populations and general population samples; dual frame designs for landline and cellular telephone number samples; panel sample design and sample selection; address-based and Internet sampling; application of complex multi-stage weighting methods; and nonresponse bias and non-coverage bias assessment and adjustment. Pineau has an MS in Mathematics from the University of South Carolina.

Stephanie Jwo, MA, (312-230-3082) is the Web Survey Manager for AmeriSpeak and will lead the data collection task. She is responsible for developing client services’ standard operating procedures for the AmeriSpeak team, including standards for questionnaire programming and review, and data collection and management. Jwo also trains and manages the AmeriSpeak Client Services team on all standard operating procedures for conducting studies using NORC’s AmeriSpeak Panel. Prior to joining NORC, she managed over 200 projects for the Government & Academic group at GfK (formerly Knowledge Networks). Jwo is an expert in conducting health surveys (e.g., smoking cessation, sexual health) and surveys with a variety of age groups, including parent-teen dyad surveys, longitudinal studies, and Hispanic/Spanish-language studies for web panels. Jwo earned an MA in Experimental Psychology from San Jose State University.



1 NORC at the University of Chicago: 2010 National Sample Frame. Available at: http://www.norc.org/Research/Projects/Pages/2010-national-sample-frame.aspx

2 Panelists are invited to participate in AmeriSpeak surveys through email invitations to web-preference panelists and calls to phone-preference panelists. Respondents are credited points for their participation in surveys that can be redeemed for cash or physical goods. NORC encourages invited panelists who have not completed a survey to respond through email reminders to web-preference panelists and phone calls to phone-preference panelists.


NORC maintains strict rules to limit respondent burden and reduce the risk of panel fatigue. AmeriSpeak panel members typically participate in AmeriSpeak web-based or phone-based studies two to three times a month.


Because the risk of panel attrition increases with the fielding of poorly constructed survey questionnaires, the AmeriSpeak team works with NORC clients to create surveys that provide an appropriate user experience for AmeriSpeak panelists. AmeriSpeak will not field surveys that in our professional opinion will result in a poor user experience for our panelists and in panel attrition.

3 Montgomery, R., Dennis, J.M., & Ganesh, N. (2016). Response Rate Calculation Methodology for Recruitment of a Two-Phase Probability-Based Panel: The Case of AmeriSpeak. NORC at the University of Chicago: AmeriSpeak Research. Available at: http://d3qi0qp55mx5f5.cloudfront.net/amerispeak/i/research/WhitePaper_ResponseRateCalculation_AmeriSpeak_2016.pdf

4 Bergen, G., Stevens, M.R., Burns, E.R. (2016). Fall and Fall Injuries Among Adults Aged ≥ 65 Years – United States, 2014. Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention. Available at: https://www.cdc.gov/mmwr/volumes/65/wr/mm6537a2.htm?s_cid=mm6537a2_e

5 Centers for Disease Control and Prevention. “Important Facts about Falls.” (2016). Available at: http://www.cdc.gov/HomeandRecreationalSafety/Falls/adultfalls.html

6 Thurman, D.J., Stevens, J.A., Rao, J.K. (2008). Practice Parameter: Assessing patients in a neurology practice for risk of falls (an evidence-based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology. 2008 Feb 5;70(6):473-9.

7 Centers for Disease Control and Prevention. “Important Facts about Falls.” (2016). Available at: http://www.cdc.gov/HomeandRecreationalSafety/Falls/adultfalls.html

8 Office of Management and Budget. (2006). Office of Management and Budget Standards and Guidelines for Statistical Surveys. Available at: https://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf

9 Fabrigar L.R., Wegener D.T., MacCallum R.C., & Strahan E.J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods.4(3):272.

10 Harrington, D. (2008). Confirmatory factor analysis. Oxford University Press.

11 Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. American Psychological Association.

12 Embretson S.E. & Reise S.P. (2013). Item response theory. Psychology Press.

13 Klotzbuecher, C.M., Ross, P.D., Landsman, P.B., Abbott, T.A., & Berger, M. (2000). Patients with prior fractures have an increased risk of future fractures: a summary of the literature and statistical synthesis. Journal of Bone and Mineral Research. 1;15(4):721-39.

14 Van der Linden, W. J. & Hambleton, R. K. (eds.) (1997). Handbook of Modern Item Response Theory. New York: Springer Verlag.

15 Gregg, E.W., Pereira, M.A., & Caspersen, C.J.. (2000). Physical activity, falls, and fractures among older adults: a review of the epidemiologic evidence. Journal of the American Geriatrics Society. 48(8):883-93.

16 Barnett, A., Smith, B., Lord, S.R., Williams, M., Baumand, A. (2003). Community‐based group exercise improves balance and reduces falls in at‐risk older people: a randomised controlled trial. Age and Ageing. 32(4):407-14.


