
Evaluating the June Area Survey’s Field Enumerator Training

by

Michael Gerling & Tyler Wilson
Research & Development Division

Justin Van Wart, Ph.D.
Northern Plains Regional Field Office

Joseph Rodhouse
Research & Development Division

Andy Higgins
National Operations Division

United States Department of Agriculture
National Agricultural Statistics Service
Research and Development Division
Washington, DC 20250

RDD Research Report Number RDD-19-01
March 2019

Acknowledgements

Executive Sponsors
Linda J. Young, Ph.D.
Barbara Rater

Response Rate Research Team Manager
Gerald Tillman

NASDA Response Rate Research Sub-Team

Rebecca Dubbs – Sub-Team Lead
Michael Gerling
Andy Higgins
Beckie McCracken
Marcella Simmons
Shareefah Williams

Research and Development Division
Tyler Wilson
Joseph Rodhouse
Heather Ridolfo, Ph.D.
Shane Ball, Ph.D.

Northern Plains Regional Field Office
Justin VanWart, Ph.D.

Census Survey Division
Suzanne Avilla
Dawn Grahm

Methodology Division
Kathy Ott

National Operations Division
Chris Gotschall


Executive Summary
The National Agricultural Statistics Service (NASS) conducts over 400 agricultural surveys
annually to make estimates on crops and livestock, explore production practices, and identify
economic trends.
In 2015, NASS created the Response Rate Research Team (RRRT) to identify ways to improve
response rates. The National Association of State Departments of Agriculture (NASDA) Training
sub-team was formed to focus on telephone and field enumerator training. One of the sub-team’s
tasks was to look at improving current field enumerator training. In early 2017, the sub-team
created pre- and post-field enumerator workshop evaluation forms for NASS’s 12 regional field
offices (RFOs) to use at their June Area Survey (JAS) training workshops.
The JAS is an area-frame-based annual survey that provides information on U.S. crops, livestock, grain storage capacity, and the number, type, and size of farms. The JAS sample is comprised of approximately 10,000 designated land areas (segments). A typical segment is about one square mile, equivalent to 640 acres. NASS field interviewers are provided a paper aerial photo showing the sampled segment area and must account for all land inside the segment boundary. They divide each segment into tracts of land that represent unique operating arrangements. Field enumerators visit the segments, locate and interview the operator(s) of any land found to have agricultural activity, and record all agricultural activity associated with the operator on a paper questionnaire. Prior to the start of the screening and data collection stages, field enumerators attend a training workshop.
The study showed that overall, field enumerators found that the JAS training workshops increased
their knowledge about the survey and increased their perceived confidence in explaining the
purpose of the survey. This increase was noted across all field enumerators regardless of their level
of experience with the JAS (from new hires to those having over 10 years of service).
The study also revealed areas of the JAS training needing improvement. Field interviewers indicated that the trainers should spend more time conducting group exercises and instructing on how to properly complete Section D (Crops and Land Use on Tract) of the questionnaire and how to draw out tracts and fields on the aerial photos.


Recommendations

• Continue conducting in-person field enumerator training workshops.
o Spend more time on how to complete Section D (Crops and Land Use on Tract) of the questionnaire.
o Spend more time on how to draw out tracts and fields on the aerial photos.
o Spend more time conducting group exercises.

• Explore the resources necessary for establishing a Survey Monkey license and the additional staffing resources needed to maintain the data collection instruments.

• Compare the effectiveness of three training options: 1) in-person training, 2) online self-paced training, and 3) instructor-led online training.


Table of Contents

Abstract
1.0 Introduction
2.0 Goals
3.0 June Area Survey
4.0 NASS’s Current Training of Field Enumerators
5.0 NASS’s Past Research on Field Enumerator Training
6.0 External Research on Field Interviewer Training
7.0 The Study
8.0 Methods, Analyses & Findings
8.1 The Screening Process and Screening Form by Years of Service
8.2 The Purpose of Section D and How to Use It
8.3 The Purpose of the Other Parts of the Questionnaire
8.4 How Confident Are You in Your Ability to Explain the Purpose of this Survey
8.5 Text Analysis of Open-Ended Questions
8.5.1 Text Analysis of Pre-Evaluation Form
8.5.2 Text Analysis of Post-Evaluation Form
9.0 Discussion and Lessons Learned
10.0 Recommendations
11.0 Conclusion
12.0 References

Appendices
Appendix A - JAS Definitions
Appendix B - Pre-Training Evaluation (On-line version)
Appendix C - Post-Training Evaluation (On-line version)


Evaluating the June Area Survey’s Field Enumerator Training
Michael Gerling, Tyler Wilson, Justin Van Wart,
Joseph Rodhouse and Andy Higgins1/

Abstract
The National Agricultural Statistics Service (NASS) conducts over 400 agricultural
surveys annually to make estimates on crops and livestock, explore production
practices, and identify economic trends. In 2015, NASS created the Response Rate
Research Team (RRRT) to research how to improve response rates. A sub-team
was formed to focus on telephone and field enumerator (interviewer) training. One
of the sub-team’s tasks was to look at improving the current field enumerator
training. In 2018, the sub-team studied the effectiveness of its June Area Survey
(JAS) field enumerator training workshops using pre- and post-training evaluation
forms.
The study showed that overall, field enumerators feel that the JAS training
workshops are increasing their knowledge about the survey and thus their perceived
confidence in explaining the purpose of the survey. This increase is seen across all
field enumerators from new hires to those having over 10 years of service.
The study also revealed areas of the JAS training needing improvement. Field
interviewers indicated that trainers should spend more time conducting group
exercises and instructing on (1) how to properly complete Section D (Crops and
Land Use on Tract) of the questionnaire and (2) how to draw out tracts and fields
on the aerial photos.
Key Words: training, agriculture, workshops, field enumerators, surveys
__________________
1/ Michael W. Gerling – Mathematical Statistician, and Tyler Wilson – Survey Methodologist, of the National Agricultural Statistics Service’s Research & Development Division, located at 1400 Independence Avenue SW, Washington, DC 20250-2001. Justin VanWart – Agricultural Statistician of NASS’s Northern Plains Regional Field Office, located at 100 Centennial Mall N, Suite 263, Lincoln, Nebraska 68508. Joseph Rodhouse – Research Statistician from NASS’s Research and Development Division. Andy Higgins – Agricultural Statistician from NASS’s National Operations Division, located at 9700 Page Avenue, Suite 400, St. Louis, MO 63132.


1. Introduction
The National Agricultural Statistics Service’s (NASS) primary purpose is to provide timely,
accurate and useful statistics on United States’ and Puerto Rico’s agriculture. NASS conducts more
than 400 agricultural surveys annually for the purpose of producing estimates on crops and
livestock, exploring production practices, and identifying economic trends. The data collection
modes employed are mail, fax, telephone, personal interview, and the internet.
In 2015, NASS formed the Response Rate Research Team (RRRT) to determine how to improve
response rates. Multiple sub-teams were established to examine various aspects of the data
collection process and to obtain feedback from NASS customers (farmers, data users, agricultural
associations, NASS staff, and National Association of State Departments of Agriculture (NASDA)
enumerators). NASDA employs the telephone and field enumerators for all of NASS’s data
collection activities.
The NASDA Training sub-team was formed to focus on telephone and field enumerator training.
One of the sub-team’s tasks was to look at how to improve field enumerator training.
In early 2017, the sub-team developed and disseminated pre- and post-field enumerator training
evaluations to NASS’s 12 regional field offices (RFOs) to measure the effectiveness of their June
Area Survey (JAS) training workshops. A formal study was never intended; these evaluations
were provided to the RFOs to use as desired, and some, but not all, regions used the forms. After
the training workshops, the sub-team received copies of the evaluation forms. However, after
reviewing them, the sub-team determined that the evaluations could not be used to draw
statistically valid, generalizable conclusions. This was not surprising since the forms were not
originally intended for use in a study. NASS’s Northern Plains RFO (NPRO) converted the paper
evaluation forms to web questionnaires using Survey Monkey. As a result, the information
collected was in an electronic format, making the output readily available for statistical analysis.
In late 2017, the sub-team decided to conduct a formal study, applying the lessons learned from
its earlier efforts. The sub-team originally focused on the Agricultural Resource Management
Survey Phase III (ARMS III). However, resources to design, test, and deploy survey evaluations
were unavailable, and it was too late to make changes to the ARMS program. The next scheduled
field enumerator workshop was for the June Area Survey (JAS). Thus, working jointly with
NASS’s Training Group, the JAS was selected.

2. Goals
The goals of this research were to: 1) determine whether field enumerator training is effective;
2) describe NASS’s current field enumerator training; 3) review NASS’s past research on
evaluating field enumerator training; 4) analyze the pre- and post-training evaluations; and
5) provide recommendations to improve field enumerator training.


3. June Area Survey
The JAS is an annual survey that provides information on U.S. crops, livestock, grain storage
capacity and the number, type, and size of farms. The JAS sample is comprised of approximately
10,000 designated land areas (segments). A typical segment is about one square mile--equivalent
to 640 acres. Each segment is outlined on an aerial photo and provided to NASS field interviewers
(enumerators). Field enumerators visit these segments, locate and interview the operator(s) of any
land found to have agricultural activity, and record all agricultural activity associated with the
operator on a paper questionnaire. A separate paper questionnaire is completed for each
agricultural operation operating any land within the segment. Appendix A provides definitions of
segments, tracts, agricultural tracts, etc.
4. NASS’s Current Training of Field Enumerators
Before the JAS workshop, field enumerators are sent an Interviewer’s Manual, a copy of the JAS
questionnaire, and practice exercises. The training workshop generally involves a review of the
JAS questionnaire including the most difficult sections and a review and handout of JAS
assignments, aerial photos, and questionnaires for each field enumerator. During the workshop,
role playing exercises as well as various practice exercises are conducted. The workshop also
covers how to handle refusals and inaccessibles and how to handle other surveys that overlap
with the JAS. Time is also devoted to updating field enumerators’ iPads. Although not currently
used for the JAS, the iPads are used for surveys running concurrently with the JAS. In 2018, the
training budget was reduced, causing some RFOs to limit their workshops to only the
enumerators’ supervisors; the supervisors would afterwards train their enumerator staff as
needed. This was a major change from previous years.
5. NASS’s Past Research on Field Enumerator Training
Typically, at the conclusion of a training workshop, attendees are given a post-workshop
evaluation with typical questions about what went well and what areas need improvement. These
evaluations are reviewed and used to improve the following year’s training. For multiple-day
workshops, some of NASS’s 12 RFOs conduct daily evaluations to understand which topics need
to be reviewed the next day. The evaluation forms are not standardized across RFOs, and none
of these efforts have led to a formal research report.

6. External Research on Field Interviewer Training
A number of studies point to the importance of evaluating training. The following are some of the
highlights.
In “Evaluating Workshops and Institutes” (Practical Assessment, Research & Evaluation), Ayers
(1989) stated that evaluating training workshops 1) shows the real worth of a training program,
2) shows how to improve future workshops, and 3) justifies current funding and becomes the
basis for future funding.


Beth Kanter (author, speaker, and master trainer) recommends including an evaluation survey for
workshops. The survey should have questions to measure participants’ views on whether the
instructor has been an effective teacher and whether the workshop has been effective in advancing
the participants’ learning. Stiegler & Biedinger (2016) stated that the optimization of data quality
begins with the selection of the interviewers, continues in interviewer training, and ends with
interviewer monitoring.
In 1959, Donald Kirkpatrick created the Four-Level Training Evaluation Model to help trainers
measure the effectiveness of their training in an objective way. This model has evolved over time
and has become the gold standard. Today, the four levels are as follows: 1) Reaction, 2) Learning,
3) Behavior, and 4) Results. Kirkpatrick (1996) also mentions the importance of keeping
practicality and expense in mind when evaluating training.
The Kirkpatrick Model has been modified by companies over time to the following:

• Pre-training skills assessment: Measure the learner’s level of knowledge or skill pre-learning
and again post-learning.
• Application in the workplace: Assess whether the newly learned skills or knowledge are
being applied in the workplace.
• Individual behavior assessment: Assess whether the training improved the individual’s
embrace of the corporate culture and goals.
• Team behavioral assessment: Measure whether teamwork is becoming more coherent and
effective.
• Meeting goals or targets: Measure the individual’s or team’s performance against goals or
targets every few months to assess the impact of the learning.
7. The Study
After a general review of the 2017 pre- and post-field enumerator training evaluation forms and
receiving technical guidance from NASS’s survey methodologists, Kathy Ott and Heather Ridolfo,
the 2018 field enumerator training evaluations were re-designed. The questionnaire was
comprised of a series of 10-point scale questions, based on research by Wittink and
Bayer (1994). Additionally, the sub-team decided to add several open-ended questions.

Justin VanWart of the NPRO updated the Survey Monkey web survey versions accordingly.
Screen shots of the pre- and post-workshop evaluations are provided in Appendices B and C.

The 12 regional offices were given the option to participate in the research study. The Delta,
Eastern Mountain, Great Lakes, Heartland, Mountain, Northern Plains, Northwest, Pacific,
Southern Plains, and Upper Midwest regions agreed to participate.


8. Methods, Analyses & Findings
The regions participating in the study provided their feedback on using and receiving the pre- and
post-workshop evaluations. The training evaluation data were downloaded from Survey Monkey,
processed, and analyzed. Basic statistics from Survey Monkey were disseminated to the respective
RFOs.
There were 1,108 partially completed or completed pre-evaluation forms and 819 completed
post-evaluation forms. Duplicates caused by multiple submissions by field enumerators were
removed from both data sets. Records with missing or erroneous enumerator IDs, and those
having only one of the two evaluation forms completed, were also removed. The total number of
good matches was 534 (48 percent). In the future, the number of missing or erroneous IDs will
be reduced by adding a “hard” edit check to the enumerator ID field, and perhaps a more
sophisticated edit can be programmed to determine whether a field enumerator has completed a
pre-evaluation before being offered the post-evaluation.
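As an illustration of the matching and deduplication steps described above, the following Python sketch uses pandas on hypothetical Survey Monkey CSV exports. The file names, column names (enumerator_id, submitted_at), and digits-only ID rule are assumptions for the sketch, not the sub-team’s actual processing code.

```python
import pandas as pd

# Load hypothetical Survey Monkey exports (file and column names are assumptions).
pre = pd.read_csv("pre_evaluations.csv")
post = pd.read_csv("post_evaluations.csv")

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop records with missing/erroneous IDs, then keep one submission per enumerator."""
    df = df.dropna(subset=["enumerator_id"])
    # Example "erroneous ID" rule: IDs must be all digits (an assumed format).
    df = df[df["enumerator_id"].astype(str).str.fullmatch(r"\d+")]
    # Keep the latest submission when an enumerator submitted more than once.
    return (df.sort_values("submitted_at")
              .drop_duplicates("enumerator_id", keep="last"))

pre, post = clean(pre), clean(post)

# Keep only enumerators who completed BOTH forms (inner join = "good matches").
matched = pre.merge(post, on="enumerator_id", suffixes=("_pre", "_post"))
print(f"Good matches: {len(matched)}")
```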
The Wilcoxon signed-rank test was used to assess whether significant differences in perceived
knowledge were found after completing the training. The Wilcoxon signed-rank test is a
non-parametric statistical hypothesis test used to compare two related samples, matched samples,
or repeated measurements on a single sample to assess whether their population mean ranks differ
(i.e., it is a paired difference test) (Wilcoxon, Katti & Wilcox, 1963). Each question was evaluated
pre- and post-training.
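A minimal sketch of this paired pre/post comparison follows, continuing from the matched data frame in the previous sketch; the years_of_service and q1_pre/q1_post column names are hypothetical 10-point-scale fields. It reports the paired t-test alongside the Wilcoxon test, mirroring Tables 1-4.

```python
from scipy.stats import ttest_rel, wilcoxon

# `matched` is the merged pre/post data frame from the previous sketch;
# the grouping and rating columns below are assumed names.
for years, group in matched.groupby("years_of_service"):
    pre_scores = group["q1_pre"]
    post_scores = group["q1_post"]
    _, w_p = wilcoxon(post_scores, pre_scores)      # non-parametric paired test
    _, t_p = ttest_rel(post_scores, pre_scores)     # parametric companion check
    mean_gain = (post_scores - pre_scores).mean()   # average point increase
    print(f"{years}: Wilcoxon p={w_p:.3g}, t-test p={t_p:.3g}, "
          f"gain={mean_gain:.2f}, n={len(group)}")
```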

8.1 The Screening Process and Screening Form by Years of Service

Overall, trainees experienced a significant increase in perceived knowledge about the screening
form and process. According to the values in the Actual Estimate (Absolute Value) column of
Table 1, new employees (0-3 years) experienced the greatest increase, with an average gain of
2.28 points after training. The training had the smallest (yet still significant, p < 0.01) effect on
more experienced employees. The evidence shows that the training succeeded in increasing the
enumerators’ perceived knowledge of the survey.

Table 1: Screening Process and Screening Form by Years of Service

Years of Service | Wilcoxon Signed Rank P-Value | Paired T-Test P-Value | Actual Estimate (Absolute Value) | N
0-3 Years        | <0.001                       | <0.001                | 2.28                             | 237
4-9 Years        | <0.001                       | <0.001                | 0.76                             | 135
10+ Years        | <0.001                       | <0.001                | 0.30                             | 158
All              | <0.001                       | <0.001                | 1.3                              | 534

8.2 The Purpose of Section D and How to Use It

Field enumerators consistently stated that Section D was a difficult section of the questionnaire
due to its complexity. Overall, trainees experienced a significant increase in knowledge of how to
complete Section D. According to the values in the Actual Estimate (Absolute Value) column of
Table 2, new employees (0-3 years) experienced the greatest increase in perceived knowledge (an
average 2.31-point increase after training). The most experienced employees (10+ years)
experienced the smallest (yet still significant) gain. These findings suggest that the majority of
enumerators increased their knowledge of Section D, especially those with little experience.
Table 2: Purpose of Section D and How to Use It

Years of Service | Wilcoxon Signed Rank P-Value | Paired T-Test P-Value | Actual Estimate (Absolute Value) | N
0-3 Years        | <0.001                       | <0.001                | 2.31                             | 235
4-9 Years        | <0.001                       | <0.001                | 0.88                             | 133
10+ Years        | 0.003                        | 0.001                 | 0.29                             | 154
All              | <0.001                       | <0.001                | 1.35                             | 526

8.3 The Purpose of the Other Parts of the Questionnaire

Field enumerators were asked for their thoughts on the rest of the questionnaire. Overall, trainees
perceived a significant increase in knowledge about the purpose of other parts of the
questionnaire. According to the values in the Actual Estimate (Absolute Value) column of Table 3,
new employees (0-3 years) experienced the greatest increase (an average 2.39-point increase after
training). The most experienced employees (10+ years) experienced the smallest (yet still
significant) gain.
Table 3: The Purpose of the Other Parts of the Questionnaire

Years of Service | Wilcoxon Signed Rank P-Value | Paired T-Test P-Value | Actual Estimate (Absolute Value) | N
0-3 Years        | <0.001                       | <0.001                | 2.39                             | 237
4-9 Years        | <0.001                       | <0.001                | 0.84                             | 133
10+ Years        | <0.001                       | <0.001                | 0.35                             | 150
All              | <0.001                       | <0.001                | 1.39                             | 521


8.4 How Confident Are You in Your Ability to Explain the Purpose of this Survey?

Overall, trainees experienced a significant increase in their perceived confidence in explaining
the purpose of the survey. According to the values in the Actual Estimate (Absolute Value) column
of Table 4, new employees (0-3 years) experienced the greatest increase (an average 2.01-point
increase after training). The most experienced employees (10+ years) experienced the smallest
(yet still significant, p < 0.01) gain. Knowing the purpose of the survey and having self-confidence
are key to a successful data collection effort and improved data quality.

Table 4: How Confident Are You in Your Ability to Explain the Purpose of this Survey?

Years of Service | Wilcoxon Signed Rank P-Value | Paired T-Test P-Value | Actual Estimate (Absolute Value) | N
0-3 Years        | <0.001                       | <0.001                | 2.01                             | 235
4-9 Years        | <0.001                       | <0.001                | 0.96                             | 133
10+ Years        | <0.001                       | 0.002                 | 0.31                             | 153
All              | <0.001                       | <0.001                | 1.24                             | 525

8.5 Text Analysis of Open-Ended Questions

A text analysis using SAS 9.2 and Text Explorer in SAS JMP 13 Pro was conducted on the
open-ended questions 6-9 of the pre-training evaluation and questions 6-11 of the post-training
evaluation. See Appendices B and C for copies of the evaluations.

8.5.1 Text Analysis of Pre-Evaluation Form
Questions 6 and 7 pertain to the field enumerators’ perception of the most difficult sections of the
questionnaire and what they would like to have covered more in the training workshop.
According to the term and phrase frequency lists, Section M (Land Values) and Section D were
each mentioned over 100 times as being the most difficult for JAS respondents to complete. In
addition, Section D was mentioned approximately 100 times when enumerators were asked what
additional information they wanted covered at the training workshop.
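The study’s term and phrase frequency lists were produced with SAS and JMP Text Explorer; as an illustrative alternative only, a basic term-frequency list can be built in a few lines of Python. The sample responses and stop-word set below are invented for the sketch.

```python
import re
from collections import Counter

# Hypothetical open-ended responses (e.g., Question 6 of the pre-evaluation).
responses = [
    "Section D is the hardest part of the questionnaire",
    "Section M land values are difficult for respondents",
]
stop_words = {"the", "is", "of", "for", "are", "a", "to", "and"}

terms = Counter()
for text in responses:
    tokens = re.findall(r"[a-z]+", text.lower())          # simple word tokenizer
    terms.update(t for t in tokens if t not in stop_words)

print(terms.most_common(10))  # most frequently mentioned terms
```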
Questions 8 and 9 pertained to concerns about enumeration and reasons for non-response. The
number one concern field enumerators mentioned was the time (or lack thereof) of the
farmers/ranchers sampled. The overarching theme in these responses, according to the term and
phrase lists, is that there are too many surveys and not enough time for the farmers/ranchers to
complete them. In addition to lack of time, issues with trust in government and busy respondents
were the most mentioned reasons for nonresponse as perceived by field enumerators in
Question 9.
8.5.2 Text Analysis of Post-Evaluation Form
Question 6 pertains to the field enumerators’ perception of whether the survey was explained
adequately. According to the post-survey responses, 97 percent (517) said “yes” when asked if all
aspects of the survey were covered during the training.

Questions 7 and 8 asked for the field enumerators’ perceptions of what they found most and least
useful in the training. According to the Question 7 responses and Figure 1, the field enumerators
thought that the Section D training was the most useful. Based on the responses to Question 8, a
majority indicated that all the information was useful in varying degrees.

Figure 1: Word Cloud of Most Useful Training Items
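For illustration only, a comparable word cloud could be generated as sketched below; the third-party wordcloud package is an assumption, since the report’s figure was produced with SAS JMP. The responses list is reused from the term-frequency sketch above.

```python
# Minimal word-cloud sketch (the `wordcloud` package is an assumed stand-in
# for the SAS JMP Text Explorer output shown in Figure 1).
from wordcloud import WordCloud

text = " ".join(responses)  # responses from the term-frequency sketch above
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("most_useful_training_items.png")  # writes a PNG of the cloud
```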

Question 9 pertains to areas of the survey on which field enumerators would prefer more training
time. According to term and phrase frequency, three themes emerged: (1) Section D, (2) maps,
and (3) group practice.

Enumerators were also asked to indicate where less training time could be devoted going forward
(Question 10). Responses appeared fairly random, except that many (37) mentioned “nothing” in
their answers, indicating that they perceived their time was well spent overall.

Questions 11 and 12 focused on what field enumerators would like to have done differently in the
workshop. Additional comments were quite specific to each individual. However, there were high
frequencies of positive terms and phrases such as “nothing,” “good,” and “great,” indicating that
the information conveyed during the workshop was well received.

9. Discussion & Lessons Learned
NASS continues to search for ways to improve the efficiency and overall quality of the field
enumerator training. Evaluating field enumerator training workshops provides the agency with a

8

baseline to determine whether resources allocated to training are effective. Future research could
be done to compare the effectiveness of in-person instructor led training, online self-paced training,
and instructor-led online training.
Having the training evaluations available online has several benefits: 1) edit checks improve data
quality, 2) paper forms are eliminated, and 3) results are received faster. Also, the data can be
reviewed as soon as an evaluation has been submitted to make sure the trainees are completing
the form as intended. In the future, NASS may want to consider the following excerpt from John
Eades’s 2014 blog post, “3 Ways to Measure Training Effectiveness”. John Eades is an author, a
speaker on leadership development and organizational alignment, and the CEO of an e-learning
company. He states:
“Creating a visual assessment of an employee’s skill set and performance
before and after a training moment. These snapshots, or skylines, of a learner’s
abilities can give a clear picture of performance and skill improvements you
can directly tie to training. A simple example would be, testing a sales person’s
current sales skills prior to training, then retesting the individual after the
event to see the delta.”
At NASS, this could be a dashboard displaying each enumerator’s performance, including skills
attained, goal-oriented behaviors, teamwork skills, and targets met. NASS could better monitor
interviewer performance with this type of visual assessment of employee skills pre- and
post-training, and apply an adaptive or tailored training curriculum where the data show an area
needing improvement.
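A minimal sketch of the per-enumerator pre/post delta such a dashboard could display, assuming the matched data frame from the Section 8 sketch and hypothetical confidence-score columns:

```python
# Per-enumerator skill delta for a hypothetical training dashboard.
# `matched` and its confidence_pre/confidence_post columns are assumed names.
matched["confidence_delta"] = matched["confidence_post"] - matched["confidence_pre"]

# Enumerators whose confidence did not improve are candidates for the
# tailored follow-up training described above.
needs_followup = matched.loc[matched["confidence_delta"] <= 0, "enumerator_id"]
print(needs_followup.tolist())
```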
Training evaluations and online displays could also be expanded to the ARMS Phase III and
Objective Yield surveys to measure the effectiveness of those field enumerator training
workshops. The regional field offices’ staff were also greatly appreciative of the insight that the
training evaluations provided.
If formal evaluation of training workshops continues, we suggest the following strategies to
improve future studies:

• Add a “hard” edit check to the enumerator ID question on the pre- and post-evaluation forms
so that the field is not only required but also validated for the correct input length (a minimal
sketch of such a check follows this list). This will reduce the number of non-matching
completed pre- and post-training evaluations.

• Program Survey Monkey to reduce the number of repeated submissions.

• Determine the reasons why field enumerators are not able to complete both evaluations.

• Review what the two non-participating regions are doing, and standardize or add questions
so these regions can be included in future studies.
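A minimal sketch of such a “hard” edit check, applied downstream of Survey Monkey, is shown below; the six-digit, digits-only ID format is a hypothetical assumption.

```python
import re
from typing import Optional

# Hypothetical "hard" edit check for the enumerator ID field: the ID must be
# present, all digits, and a fixed length (the 6-digit format is an assumption).
ID_PATTERN = re.compile(r"\d{6}")

def valid_enumerator_id(raw: Optional[str]) -> bool:
    """Return True only if the ID is present and matches the expected format."""
    return raw is not None and ID_PATTERN.fullmatch(raw.strip()) is not None

assert valid_enumerator_id("123456")     # well-formed ID accepted
assert not valid_enumerator_id("12345")  # wrong length rejected
assert not valid_enumerator_id(None)     # missing ID rejected
```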


10. Recommendations

• Continue conducting in-person field enumerator training workshops.
o Spend more time on how to complete Section D (Crops and Land Use on Tract) of
the questionnaire and how to draw out tracts and fields on the aerial photos.
o Spend more time conducting group exercises.

• Explore the resources necessary for establishing a Survey Monkey license and the
additional staffing resources required to maintain the data collection instruments.

• Compare the effectiveness of three training options: 1) in-person training, 2) online
self-paced training, and 3) instructor-led online training.

11. Conclusion

Overall, field enumerators indicated that the JAS training workshops are increasing their
knowledge about the survey and their perceived confidence in explaining its purpose. This
increase was seen across all enumerators regardless of experience level, with the greatest gains
among less experienced field enumerators.

Trainers need to spend more instruction time on Section D of the questionnaire and on drawing
tracts and fields on the aerial photos. In addition, more time should be dedicated to conducting
group exercises.
As training budgets tighten, other training options, such as online self-paced training and
instructor-led online training, should be explored. This research is a first step toward defining a
baseline against which to compare these options with the current in-person instructor-led training.

In conclusion, the research shows that NASS should continue to hold in-person instructor-led field
enumerator workshops for the JAS.


12. References
Ayers, J. (1989). Evaluating Workshops and Institutes. Practical Assessment, Research & Evaluation, Volume 1, Number 8, ISSN 1531-7714. Retrieved from http://pareonline.net/getvn.asp?v=1&n=8, accessed October 11, 2018.

Berkun, S. (January 28, 2013). How to Run a Good Workshop. Scott Berkun’s Blog. Retrieved from http://scottberkun.com/2013/run-a-good-workshop/, accessed October 11, 2018.

DePalma, A. (March 2011). E-Training vs. In-Person Training. American Society of Mechanical Engineers (ASME). Retrieved from https://www.asme.org/career-education/articles/certification/e-training-vs-in-person-training, accessed October 11, 2018.

Dobransky, M., Vanry, N. (estimated 2017). Instructor-led Training vs. eLearning. Edgepoint Learning’s Blog. Retrieved from https://www.edgepointlearning.com/blog/instructor-led-training-vs-elearning/, accessed October 10, 2018.

Eades, J. (September 30, 2014). 3 Ways to Measure Training Effectiveness. eLearning Industry website. Retrieved from https://elearningindustry.com/3-ways-measure-training-effectiveness, accessed October 11, 2018.

Flesher, S. (May 23, 2018). How to Measure the Performance of Training Programs. Skill Builder LMS – Base Corp Learning System’s website. Retrieved from https://www.skillbuilderlms.com/how-to-measure-the-performance-of-training-programs/, accessed October 11, 2018.

French Institute for Demographic Studies (INED) (estimated 2017). Interviewer Training. INED’s website. Retrieved from https://www.ined.fr/en/resources-methods/survey-methodology/methodological-choices/interviewer-training-and-data-collection-management/, accessed October 11, 2018.

Kanter, B. (February 18, 2014). Six Tips for Evaluating Your Nonprofit Training Session. Beth’s Blog. Retrieved from http://www.bethkanter.org/training-after/, accessed October 11, 2018.

Kirkpatrick, D. and Kirkpatrick, J. (1996). Kirkpatrick’s Four-Level Training Evaluation Model - Analyzing Training Effectiveness. MindTools’ website. Retrieved from https://www.mindtools.com/pages/article/kirkpatrick.htm, accessed May 11, 2018.

Lavrakas, P. (2008). Interviewer Training. Encyclopedia of Survey Research Methods. Retrieved from http://methods.sagepub.com/reference/encyclopedia-of-survey-research-methods/n248.xml, accessed October 11, 2018.

NASDA (2018). About the NASDA-NASS Partnership. National Association of State Departments of Agriculture’s website. Retrieved from http://www.nasda.org/nass/about, accessed May 11, 2018.

Neelam-Bhuyan, S. (April 23, 2016). How to Measure the Impact of Your Training Program. MindTickle’s website. Retrieved from https://www.mindtickle.com/blog/measure-impact-training-program/, accessed October 10, 2018.

Optimus Learning Services (January 15, 2017). Metrics to Measure for Effective Learning & Development Management. Optimus Learning Services’ website. Retrieved from http://www.optimuslearningservices.com/blog/ld-strategy/5-metrics-to-measure-for-effective-learning-and-development-management/, accessed October 11, 2018.

Rogers, M., Surrency, A. (March 2002). Child Needs Assessment Tool Kit Training Manual: A Guide for Training Field Interviewers and Supervisors. The World Bank. Retrieved from http://siteresources.worldbank.org/INTECD/Resources/CNAToolkitTrainingManual.pdf, accessed October 11, 2018.

Shepherd, R., Turbett, P. (December 2006). Post-School Outcomes Data Collection Guide: Training Interviewers. State University of New York at Potsdam and National Post-School Outcomes Center, University of Oregon. Retrieved from https://transitionta.org/sites/default/files/dataanalysis/NPSO_TrainingInterviewers.pdf, accessed October 11, 2018.

Stiegler, A., Biedinger, N. (2016). Interviewer Skills and Training. GESIS Survey Guidelines. Mannheim, Germany: GESIS-Leibniz Institute for the Social Sciences. doi: 10.15465/gesis-sg_en_013. Retrieved from https://www.gesis.org/en/gesis-survey-guidelines/statistics/interviewer-training/, accessed October 11, 2018.

Survey Research Center (2016). Guidelines for Best Practice in Cross-Cultural Surveys. Ann Arbor, MI: Survey Research Center, Institute for Social Research, University of Michigan. Retrieved from http://ccsg.isr.umich.edu/index.php/chapters/interviewer-recruitment-selection-and-training-chapter, accessed August 28, 2018.

United States Office of Personnel Management, Employee Services, Executive Resources & Employee Development (January 2011). Training Evaluation Field Guide: Demonstrating the Value of Training at Every Level. US-OPM, 1900 E Street NW, Washington, DC 20415. Retrieved from https://www.opm.gov/policy-data-oversight/training-and-development/reference-materials/training_evaluation.pdf.

Wikipedia (2018). Wilcoxon Signed-Rank Test. Retrieved from https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test, accessed October 11, 2018.

Wilcoxon, F., Katti, S. K., & Wilcox, R. A. (1963). Critical Values and Probability Levels for the Wilcoxon Rank Sum Test and the Wilcoxon Signed Rank Test. American Cyanamid Company.

Wittink, D. R., Bayer, L. R. (1994). The Measurement Imperative. Marketing Research, Vol. 6, No. 4, pp. 14-22.

Yupangco, J. (June 18, 2017). Learning Metrics That Matter: Data Points You Should Be Measuring. eLearning Industry’s website. Retrieved from https://elearningindustry.com/learning-metrics-that-matter-data-points-measuring, accessed October 11, 2018.


Appendix A
JAS Definitions

Segments:

Land areas with identifiable boundaries such as ditches, roads, railroads, streams,
etc. that serve as sampling units in the June Area Survey. Segments are assigned a
permanent number and outlined in red on aerial photos. Segments generally range
in size from one-half square mile to three square miles.

Tract:

An area of land inside a segment under one type of land operating arrangement.
There are two types of tracts:
1.) Ag Tract: Consists of agricultural land.
2.) Non-Ag Tract: Consists of residential areas, shopping centers, lakes, woods,
and any other land not considered agricultural.

Usable:

Completed reports for agricultural tracts - questionnaires containing usable data.


Appendix B
Pre-Workshop Evaluation Form

[Screen shots of the on-line pre-workshop evaluation form]

Appendix C
Post-Workshop Evaluation Form

[Screen shots of the on-line post-workshop evaluation form]


File Typeapplication/pdf
AuthorYoung, Linda - NASS
File Modified2019-06-24
File Created2019-06-24

© 2024 OMB.report | Privacy Policy