





Quality Assurance Project Plan

Using Citizen Science to Analyze Underwater Video in the Great Lakes



September 2017









Signature Date

Mari Nord ___________________________________ _________

Project Lead



Molly Wick ______________________________________ _________

Technical Contact



John Dorkin __________________________________ __________

Quality Assurance Coordinator



Andy Tschampa __________________________________ ________

Quality Assurance Manager



Chris Korleski __________________________________ ________

Water Division Director





SECTION 1.0, PROJECT OBJECTIVES, ORGANIZATION, AND RESPONSIBILITIES


1.1 Purpose of the study


The goal of this project is to assess the feasibility of a web-based citizen science approach for providing habitat and invasive species information from underwater videos collected as part of the Office of Water’s (OW) National Coastal Condition Assessment (NCCA). Through stakeholder consensus, we will design, test, and deploy a web-based citizen science application to analyze underwater video and compare the results to expert analysis.


1.2 Project objectives


We will address the following question: Can citizen science analysis of underwater video contribute relevant and cost-effective information compared to previously completed expert analysis?


1.3 Secondary data needed


The data to be used in this project include videos collected as part of the two most recent National Coastal Condition Assessment surveys, conducted in 2010 and 2015. Videos were collected at sites throughout the nearshore, defined as <30 m depth and within 5 km of shore, based on the NCCA’s probabilistic survey design, as well as in the connecting channels of the St. Marys River and the Huron-Erie Corridor (USEPA, 2016).

The video setup was a GoPro or SeaViewer camera with a light attached to a cable, which was lowered to the bottom to collect at least 2 minutes of video at each site. Field technicians were instructed to hold the camera just above the bottom, but in some cases the camera hit the bottom. Because the camera was attached to a cable rather than a frame, it could spin and typically captured a 360-degree view of the entire site. The field of view can vary dramatically depending on the exact height above the bottom, water clarity, camera angle, substrate type, and how much the boat drifted during collection. Figure 1 shows an example screenshot from a typical cobble/boulder site.


Figure 1: Example screenshot from an underwater video collected from a hard-bottomed (cobble/boulder) site through the National Coastal Condition Assessment.



1.4 The planned approach for evaluating project objectives (i.e., data analysis)


The objective of this project is to design, test, and deploy a web-based citizen science approach to analyzing underwater video that helps inform stakeholders’ management decisions regarding benthic habitat and invasive species in the Great Lakes. Because stakeholders will be engaged from the start, the results will provide estimates of habitat attributes and conditions in the Great Lakes nearshore that meet managers’ data needs. The project will test the ability of an end-user-designed citizen science project to provide high-quality video interpretation data for management of habitat and invasive species in the Great Lakes at a cost significantly lower than expert analysis (e.g., Gardiner et al., 2012).


Application Development: State and federal resource managers will form a design team. This team will define the habitat and invasive species management problem, data needs, how underwater video might address those needs, and how to set up a citizen science video analysis so that results are useful and applicable. Based on the team’s criteria, existing citizen science web platforms will be evaluated, including Zooniverse.org, a popular platform on which citizen scientists help with imagery interpretation (e.g., Simpson et al., 2014). A beta version of an imagery analysis web application will be developed for the selected platform based on the needs defined by the design team.

The analysis developed will generally follow the previous expert analysis developed by MED scientists (see Appendix 1), although it may be more limited in scope and designed to answer specific questions depending on the data needs of stakeholders. The expert analysis identified substrate type (hard or soft); relative vegetation cover and height; mussel presence and cover; and round goby/fish presence. The design team will utilize the previous expert analysis to build a simplified analysis to be conducted by citizen scientists in response to the specific data needs of the stakeholders.
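
For illustration only, the sketch below (in Python) shows one way a single analyst’s interpretation of one video could be structured around those attributes. The field names and value labels are placeholders, not the project’s database schema; the expert analysis fields are documented in Appendix 1.

    # Illustrative sketch: one analyst's interpretation of one video.
    # Field names and value labels are placeholders, not the production schema.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VideoInterpretation:
        uid: str                       # unique video identifier
        substrate: str                 # e.g., "Hard", "Soft", or "Unknown"
        veg_cover: str                 # e.g., "Absent", "Sparse", "Medium", "Dense"
        veg_height: str                # e.g., "Low", "High", or "Unknown"
        mussels_present: str           # "Yes", "No", or "Unknown"
        fish_present: str              # "Yes", "No", or "Unknown"
        comment: Optional[str] = None  # free-text notes from the analyst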

A consortium of technical experts, including the design team and other state resource managers, along with members of the general public, will beta-test the design. A formal input process will allow testers to provide feedback to optimize the analysis for the data needs defined, and improve instructions and “help” resources. The application will be modified and improved based on testers’ input. Following testing and application optimization, the project will be launched publicly and publicized.


Life of Project: Public data collection will continue for approximately 9 months. During that time the project team will monitor and facilitate participants’ experiences. This includes answering questions and posting project updates and/or supplementary information on an associated blog and social media outlets. An ongoing, interactive presence on the web application and associated social media has been shown to make online citizen science projects more successful. Project participation metrics will be evaluated and used to direct additional engagement efforts throughout the course of the project.


Compile, Synthesize, and Validate Results: Results will be compiled and synthesized. Statistical analyses will be conducted, and results will be validated by comparison to the results of expert analysis. More detail on statistics and data validation is found below in Section 4.0. Results for the science questions defined above will be summarized and shared with state resource managers, volunteers, and the public online. Results will help guide state habitat and invasive species management actions, as well as GLWQA and NCCA monitoring and assessment work in the Great Lakes.


1.5 Responsibilities of project participants


Two main teams will run this project: The stakeholder design team, and the application development team.


Stakeholder Team:

The stakeholder design team will be a multi-agency team coordinated by the Office of Research and Development (ORD) to identify and provide input on the data needs and priorities for the project. The team will also provide feedback on draft/beta versions of the application and help publicize the application to their respective networks prior to and during the application launch.


Project Participant | Organization/Division | Role/Duties
Mari Nord | EPA Region 5 | Project lead – Assist in engagement and coordination of stakeholders; assist in marketing web application to Great Lakes networks.
Molly Wick | EPA Office of Research and Development, Oak Ridge Institute for Science and Education | Team Lead – Lead stakeholder team to identify data needs and goals for web application; communicate data needs and web application goals to development team.
Todd Nettesheim | EPA Great Lakes National Program Office | Point of contact – Serve as liaison to GLWQA Science Annex; assist with engaging stakeholders and marketing web application to Great Lakes networks.
Beth Hinchey Malloy | EPA Great Lakes National Program Office | Stakeholder Design Team Member – Serve as liaison to GLWQA Lakewide Management Annex; assist with engaging stakeholders and marketing web application to Great Lakes networks.
Todd Wills, Jay Wesley, Rachel Coale, Matt Preisser | MI DNR, MI DEQ | MI Stakeholder Design Team State Contacts – Provide input on data needs and goals for web application design; help market web application to state networks.
Michele Wheeler, John Masterson, Bob Wakeman, Eva Lewandowski, Paul Skawinski | WDNR | WI Stakeholder Design Team State Contacts – Provide input on data needs and goals for web application design; help market web application to state networks.
Vic Santucci | IL DNR | IL Stakeholder Design Team State Contact
Jim Lehnen | NY DEC | NY Stakeholder Design Team State Contact


Application Development Team:

The application development team will take input from the design team to work through the logistics of design and implementation of the application. This includes hosting the application, launching it, and moderating it. ORD will also coordinate the application development team.


Project Participant | Organization/Division | Role/Duties
Mari Nord | EPA Region 5 | Project lead – Assist in engagement and coordination of development team; assist in marketing web application to Great Lakes networks.
Molly Wick | EPA Office of Research and Development, Oak Ridge Institute for Science and Education | Project Coordinator – Lead development team discussion; communicate the data needs and application goals determined by the stakeholder team to the development team and the web programmer.
Carole Braverman | EPA Region 5 | Application Development Team Member – Provide technical input on web application development; ensure the goals of the web application are feasible and achievable.
David Bolgrien | EPA Office of Research and Development | Application Development Team Member – Provide technical input on web application development; ensure the goals of the web application are feasible and achievable.
Todd Nettesheim | EPA Great Lakes National Program Office | Point of contact – Assist with engaging stakeholders and marketing web application to Great Lakes networks.
Beth Hinchey Malloy | EPA Great Lakes National Program Office | Application Development Team Member – Assist with engaging stakeholders and marketing web application to Great Lakes networks.
Sarah Lehmann | EPA Office of Water | Website Manager – Videos will be hosted on EPA’s YouTube channel.
Hugh Sullivan | EPA Office of Water | Application Development Team Member – Provide input on NCCA priorities and needs for videos; ensure web application is aligned with NCCA program.
Jonathan Launspach | CSRA | Application Development Team Web Programmer – Develop web application based on input from stakeholder team.


The videos will be analyzed by the public during the web application’s life. The video analysis procedures will be defined by the data needs identified by the stakeholder design team (e.g. mussel presence), and the methods to address those priorities (e.g. exact questions public will answer about mussels in videos) will be defined by the application development team.

Following the application’s life, the results of multiple citizen analysts reviewing the videos will be compiled and reviewed for quality assurance by ORD. ORD will complete a report summarizing the results, which will be distributed to all members of the stakeholder design team and the application development team, as well as to individuals who tested the application or participated in analysis through the web application, and will also be available online.


SECTION 2.0, SOURCES OF SECONDARY DATA


The data used for this project are videos collected through the National Coastal Condition Assessment. This is a large dataset (over 1,000 videos) that is readily available. This dataset was selected because ORD had previously developed methods for analysis of these videos (Lietz et al., 2015). Furthermore, these videos have been analyzed by a team of experts, so we have expert analysis to which we can compare the citizen science analysis results (Wick et al., 2017).

The videos were collected using the standard field protocol described in USEPA (2009) and USEPA (2015). The development of reliable, cost-efficient methods to analyze underwater video will benefit the NCCA program and other scientists and programs utilizing underwater video, as the technology has become more accessible and affordable. The methods for underwater video analysis tested here will also apply to other data sources, such as underwater video collected by ORD as part of the 2017 Lake Huron Coordinated Science and Monitoring Initiative. All project deliverables will cite the original source of data. The ability to also analyze new video collected by ORD through other programs will reduce the workload of current staff and increase the ability to share results with the public quickly.


SECTION 3.0, QUALITY OF SECONDARY DATA


This pilot project aims to determine appropriate methods for video analysis to address challenges in accuracy, precision, representativeness, completeness, and comparability.


Accuracy of video analysis is challenging to address because in-situ verification of underwater video interpretations is not possible for previously collected video and is very expensive for future video collections. Instead of addressing accuracy directly, our citizen science approach aims to increase precision by having multiple analysts review each video. Agreement between two trained experts on video interpretation is often low (68–89%; Wick et al., 2017), mostly due to discrepancies in interpretation of video quality and confidence in observations. By having many (e.g., 10) analysts view each video, we will identify high-quality videos when multiple analysts agree on interpretations. Conversely, we will identify poor-quality videos when multiple analysts disagree on interpretation. The minimum number of analysts necessary for confirming precision for a given video will be determined based on previous studies in the literature and the total number of citizen scientists we are able to engage (e.g., Wiggins & Crowston, 2011).


While public analysts have a significantly lower level of training than experts, the analysis will be developed to provide the necessary background information and examples, and will be designed to be simple enough that the general public can successfully perform the analysis. The ability of the public to complete the analysis will be confirmed through beta-testing of the application by the general public. During beta-testing, the application will be modified as necessary based on feedback from testers to ensure that it is understandable and achievable for the general public, and also enjoyable to ensure adequate participation for project success.


Representativeness, completeness, and comparability are addressed by the sampling methods and probabilistic survey design, which are described in USEPA (2009) and USEPA (2015).


SECTION 4.0, DATA REPORTING, DATA REDUCTION, AND DATA VALIDATION


The video analysis to be completed by citizen scientists on the web application will follow methods developed by Lietz et al. (2015) and Wick et al. (2017), with input from the stakeholder design team on data need priorities. The analysis, user interface, training materials, and supporting materials will be designed based on lessons learned by other crowd-sourced imagery and media analysis projects (e.g., Nov et al., 2014; Dickinson et al., 2012; Wiggins & Crowston, 2011; Rotman et al., 2012; Mugar et al., 2014; Raddick et al., 2009; Newman et al., 2012; Reed et al., 2013; Cox et al., 2015; Price et al., 2013; Eveleigh et al., 2014; Luczak-Roesch et al., 2014; Tinati et al., 2015; Edward et al., 2013; Curtis, 2015; Haythornthwaite, 2009; Crowston & Fagnot, 2008; Kosmala et al., 2016).

Projects typically set targets for the number of independent reviews based on the type of media being interpreted and the type of analysis (e.g., Batalha et al., 2013 required five independent views of each light curve, while Swanson et al., 2016 averaged 26 views of each wildlife image). We will evaluate and set a target for the minimum number of views of each video. Triggers will be established for additional expert review of individual videos (e.g., Batalha et al., 2013; Wiggins & Crowston, 2011).
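
As a minimal sketch of how such triggers could work (in Python), a video might be routed to an expert whenever it has too few independent views or when citizen analysts disagree strongly; the target of 10 views and the 60% agreement cutoff below are hypothetical placeholders, not project decisions.

    # Hypothetical review trigger: flag a video for expert review when it has
    # fewer than MIN_VIEWS classifications or when agreement on a key question
    # falls below AGREEMENT_TRIGGER. Both cutoffs are placeholders.
    from collections import Counter

    MIN_VIEWS = 10           # placeholder target for independent views per video
    AGREEMENT_TRIGGER = 0.6  # placeholder minimum agreement fraction

    def needs_expert_review(answers: list[str]) -> bool:
        """answers: one response per citizen analyst for a single video/question."""
        if len(answers) < MIN_VIEWS:
            return True
        top_votes = Counter(answers).most_common(1)[0][1]
        return top_votes / len(answers) < AGREEMENT_TRIGGER

    # Example: 8 of 10 analysts report a hard bottom, so no extra review is triggered.
    print(needs_expert_review(["Hard"] * 8 + ["Soft"] * 2))  # False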

The citizen science analysis will ultimately produce a complicated dataset with multiple interpretations for each video. These data will need to be reduced to a single interpretation for each video in order to be summarized. A rich body of literature has been compiled on data reduction methods and results used by citizen science approaches to media analysis, available at https://www.zooniverse.org/about/publications. Methods for data analysis used by other citizen science video analysis projects will be reviewed and applied to this project.

Thresholds can be set for determining video quality based on the minimum level of agreement or disagreement of multiple viewers (Newman et al., 2012). The sensitivity of the analysis results to those thresholds will be evaluated. A consensus algorithm can be applied to determine the result for each video (e.g., Swanson et al., 2015; Matsunaga et al., 2016). Consensus algorithms used in other studies are often available as open-source resources (Swanson et al., 2015).
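
A minimal plurality-vote sketch (in Python) is shown below; the 70% agreement threshold is illustrative only, and the project may instead adopt one of the published open-source consensus algorithms.

    # Illustrative plurality-vote consensus for one video and question; the
    # 0.7 threshold is a placeholder, not a project decision.
    from collections import Counter

    AGREEMENT_THRESHOLD = 0.7  # placeholder cutoff for an accepted consensus

    def reduce_to_consensus(answers: list[str]) -> tuple[str, float]:
        """Reduce multiple citizen answers to one result and its support fraction.

        Videos whose top answer falls short of AGREEMENT_THRESHOLD are left
        unresolved, which can also serve as a video-quality indicator.
        """
        counts = Counter(answers)
        answer, votes = counts.most_common(1)[0]
        support = votes / len(answers)
        return (answer, support) if support >= AGREEMENT_THRESHOLD else ("Unresolved", support)

    print(reduce_to_consensus(["Dense", "Dense", "Medium", "Dense", "Dense"]))  # ('Dense', 0.8)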

Statistical analysis methods will be selected by the application development team, following the methods used for other citizen science video analysis projects. Swanson et al. (2016) used the following metrics: level of agreement among classifications, fraction of classifications supporting the aggregated answer, and fraction of classifiers who reported “nothing here” for an image that was ultimately classified as containing an animal. These statistics will provide information on the uncertainty and difficulty of interpreting videos.
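
The sketch below (in Python) computes simplified versions of those three metrics for a single video and question; the exact definitions used by Swanson et al. (2016) differ in detail, so this is illustrative only.

    # Simplified, illustrative metrics for one video: pairwise agreement among
    # classifications, support for the consensus answer, and the fraction of
    # analysts who reported "Nothing here".
    from collections import Counter
    from math import comb

    def uncertainty_metrics(answers: list[str]) -> dict[str, float]:
        counts = Counter(answers)
        n = len(answers)
        consensus_votes = counts.most_common(1)[0][1]
        # Fraction of analyst pairs that gave the same answer (1.0 = unanimous).
        pairwise_agreement = sum(comb(c, 2) for c in counts.values()) / comb(n, 2)
        return {
            "level_of_agreement": pairwise_agreement,
            "fraction_supporting_consensus": consensus_votes / n,
            "fraction_nothing_here": counts.get("Nothing here", 0) / n,
        }

    print(uncertainty_metrics(["Mussels", "Mussels", "Mussels", "Nothing here"]))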

The results of the citizen scientists’ analysis will be compared to the results of previously collected expert interpretations of the same videos to evaluate the citizen science analysis. While the entire video dataset has been analyzed by at least a single expert, a subset (56 videos) has been analyzed by multiple experts, allowing a more in-depth validation effort for that portion of the dataset. Citizen scientists’ results that disagree with expert analysis will be flagged for further review and, if necessary, reviewed by multiple experts. Because no ground-truth data are available, determining the actual accuracy of experts or volunteers is not possible. Results that disagree with single/multiple expert analysis indicate marginal video quality and will be eliminated from final results.

The deliverable products that will be prepared for this project include a final report and a peer-reviewed journal article. Results and data will be shared with state and local stakeholders.




References

Batalha, N. M., Rowe, J. F., Bryson, S. T., Barclay, T., Burke, C. J., Caldwell, D. A., ... & Dupree, A. K. (2013). Planetary candidates observed by Kepler. III. Analysis of the first 16 months of data. The Astrophysical Journal Supplement Series, 204(2), 24.

Cox, J., Oh, E. Y., Simmons, B., Lintott, C., Masters, K., Greenhill, A., ... & Holmes, K. (2015). Defining and measuring success in online citizen science: A case study of Zooniverse projects. Computing in Science & Engineering, 17(4), 28-41.

Crowston, K., & Fagnot, I. (2008, July). The motivational arc of massive virtual collaboration. In Proceedings of the IFIP WG 9.5 Working Conference on Virtuality and Society: Massive Virtual Communities (pp. 1-2).

Curtis, V. (2015). Online citizen science projects: an exploration of motivation, contribution and participation (Doctoral dissertation, The Open University).

Dickinson, J. L., Shirk, J., Bonter, D., Bonney, R., Crain, R. L., Martin, J., ... & Purcell, K. (2012). The current state of citizen science as a tool for ecological research and public engagement. Frontiers in Ecology and the Environment, 10(6), 291-297.

Edward, P., Sebastien, C., & Colin, W. (2013). Measuring the conceptual understandings of citizen scientists participating in zooniverse projects: A first approach. Astronomy Education Review.

Eveleigh, A., Jennett, C., Blandford, A., Brohan, P., & Cox, A. L. (2014, April). Designing for dabblers and deterring drop-outs in citizen science. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems (pp. 2985-2994). ACM.

Gardiner, M. M., Allee, L. L., Brown, P. M., Losey, J. E., Roy, H. E., & Smyth, R. R. (2012). Lessons from lady beetles: accuracy of monitoring data from US and UK citizen‐science programs. Frontiers in Ecology and the Environment, 10(9), 471-476.

Haythornthwaite, C. (2009, January). Crowds and communities: Light and heavyweight models of peer production. In System Sciences, 2009. HICSS'09. 42nd Hawaii International Conference on (pp. 1-10). IEEE.

Kosmala, M., Wiggins, A., Swanson, A., & Simmons, B. (2016). Assessing data quality in citizen science. Frontiers in Ecology and the Environment, 14(10), 551-560.

Lietz, J. E., Kelly, J. R., Scharold, J. V., & Yurista, P. M. (2015). Can a rapid underwater video approach enhance the benthic assessment capability of the National Coastal Condition Assessment in the Great Lakes? Environmental Management, 55(6), 1446-1456.

Luczak-Roesch, M., Tinati, R., Simperl, E., Van Kleek, M., Shadbolt, N., & Simpson, R. J. (2014, June). Why Won't Aliens Talk to Us? Content and Community Dynamics in Online Citizen Science. In ICWSM.

Matsunaga, A., Mast, A., & Fortes, J. A. (2016). Workforce-efficient consensus in crowdsourced transcription of biocollections information. Future Generation Computer Systems, 56, 526-536.

Mugar, G., Østerlund, C., Hassman, K. D., Crowston, K., & Jackson, C. B. (2014, February). Planet hunters and seafloor explorers: legitimate peripheral participation through practice proxies in online citizen science. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing (pp. 109-119). ACM.

Newman, G., Wiggins, A., Crall, A., Graham, E., Newman, S., & Crowston, K. (2012). The future of citizen science: emerging technologies and shifting paradigms. Frontiers in Ecology and the Environment, 10(6), 298-304.

Nov, O., Arazy, O., & Anderson, D. (2014). Scientists@ Home: what drives the quantity and quality of online citizen science participation?. PloS one, 9(4), e90375.

Price, C. A., & Lee, H. S. (2013). Changes in participants' scientific attitudes and epistemological beliefs during an astronomical citizen science project. Journal of Research in Science Teaching, 50(7), 773-801.

Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Murray, P., Schawinski, K., ... & Vandenberg, J. (2009). Galaxy zoo: Exploring the motivations of citizen science volunteers. arXiv preprint arXiv:0909.2925.

Reed, J., Raddick, M. J., Lardner, A., & Carney, K. (2013, January). An exploratory factor analysis of motivations for participating in Zooniverse, a collection of virtual citizen science projects. In System Sciences (HICSS), 2013 46th Hawaii International Conference on (pp. 610-619). IEEE.

Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., ... & Jacobs, D. (2012, February). Dynamic changes in motivation in collaborative citizen-science projects. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (pp. 217-226). ACM.

Simpson, R., Page, K. R., & De Roure, D. (2014, April). Zooniverse: observing the world's largest citizen science platform. In Proceedings of the 23rd international conference on world wide web (pp. 1049-1054). ACM.

Swanson, A., Kosmala, M., Lintott, C., Simpson, R., Smith, A., & Packer, C. (2015). Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna. Scientific data, 2.

Swanson, A., Kosmala, M., Lintott, C., & Packer, C. (2016). A generalized approach for producing, quantifying, and validating citizen science data from wildlife images. Conservation Biology, 30(3), 520-531.

Tinati, R., Van Kleek, M., Simperl, E., Luczak-Rösch, M., Simpson, R., & Shadbolt, N. (2015, April). Designing for citizen data analysis: a cross-sectional case study of a multi-domain citizen science platform. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 4069-4078). ACM.

USEPA (2009). National Coastal Condition Assessment: Field Operations Manual. EPA-841R-09-003. U.S. Environmental Protection Agency, Washington, DC.

USEPA (2015). National Coastal Condition Assessment: Field Operations Manual. EPA-841-R-14-007. U.S. Environmental Protection Agency, Washington, DC.

USEPA (2016). National Coastal Condition Assessment 2010 Great Lakes Technical Memorandum.

Wick, M., Lietz, J., & Pawlowski, M. (2017). Using National Coastal Condition Assessment underwater video to investigate nearshore substrate type. International Association for Great Lakes Research 60th Annual Meeting, Detroit, Michigan.

Wiggins, A., & Crowston, K. (2011, January). From conservation to crowdsourcing: A typology of citizen science. In System Sciences (HICSS), 2011 44th Hawaii international conference on (pp. 1-10). IEEE.




Appendix 1


Underwater Video Analysis Protocol


Overview:

Underwater videos (UVID) were collected from 2009 to 2016 for the National Coastal Condition Assessment throughout each Great Lake and in the Huron-Erie Corridor (Detroit River, Lake St. Clair, and St. Clair River) and the St. Marys River. These videos can be used to supplement traditional grab sampling techniques to characterize benthic habitat and to identify the presence, and possibly relative density, of invasive species such as dreissenids. The contractor is responsible for analyzing these videos using the protocol developed by EPA and described below.


Conducting Underwater Video Analysis

A database, which is set up with relevant site information, fields, and UIDs needing analysis, will be provided by the EPA along with the underwater videos. The database will include filenames/links to each underwater video needing analysis. A single analyst should be responsible for analyzing all videos to produce consistent results.


Overview of steps in analysis:

  1. Analysts will view each video at least twice in its entirety; portions of videos that are unclear or questionable will need to be viewed additional times, slowly.

  2. After viewing the video, complete the Underwater Video Analysis by filling out the required fields in the database based on observations of the video and the criteria described below and in the attached supporting materials.

  3. Review the video again following completion of the analysis to verify results.


The database guide (see below) provides detailed instructions for filling out each field. All fields in the database that include dropdown options must be completed for each video. Every effort should be made to use the options available; however, if necessary, new options may be added. Comment fields are also included. Be as descriptive as possible in the comment fields. A VidComment should be included for every video. VidFishID and VidFishComment should be included for each video that includes fish, and VidVegComment and VidMusselComment should be included for each video that includes vegetation or mussels, respectively.


Training Tools

The following tools have been developed to train analysts to conduct the underwater video analysis:

  • UVID Database Guide.xlsx: This explains what each field in the database means and instructions for how to fill it out.

  • VideoQualityClarityFlowChart.pdf: Flow chart for assigning videos a quality and clarity rating (From Lietz et al., 2015).

  • UVID Analysis Example Screenshots.pdf: Document with examples of screenshots from videos of the UVID analysis for various fields.

  • UVIDTrainingDatasetFinal.accdb: Database with results of UVID analysis for 44 training videos completed by EPA analyst, along with a blank sheet for contractor to practice UVID analysis.

  • 44 Training Videos from 2015 and 2010 NCCA Sites


Quality Assurance: QA set

A subset of the 2009–2016 videos in the analysis will be analyzed for quality assurance (QA) by EPA once training is complete and the contractor has begun analyzing the dataset. This initial QA set will consist of the first 30 videos analyzed by the contractor. It will “test” the reproducibility of the analysis and help identify any discrepancies between contractor and EPA analysts. The contractor will adapt the implementation of the methodology based on the results of this preliminary QA analysis and discussion with EPA analysts.


The table below describes the measures that will be used to evaluate the contractor’s analysis for quality assurance. Percent Agreement refers to the percent of videos where the contractor’s result for the field of interest exactly matches EPA’s result for that field. While all fields will be reviewed (with a target of 90% agreement for non-comment fields), the Priority QA fields are the fields that EPA will use to evaluate the contractor’s success.
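
For illustration, the sketch below (in Python) shows the Percent Agreement calculation for one field across a QA set; the dictionary-per-video input format is an assumption for the example, not the format of the project database.

    # Illustrative Percent Agreement check. Each analysis record is assumed to
    # be a dict keyed by database field name (e.g., "VidBottom1").
    def percent_agreement(contractor: list[dict], epa: list[dict], field: str) -> float:
        """Percent of QA videos where the contractor's result exactly matches EPA's."""
        matches = sum(1 for c, e in zip(contractor, epa) if c[field] == e[field])
        return 100.0 * matches / len(contractor)

    # Example QA set of three videos for one priority field:
    contractor = [{"VidBottom1": "Hard"}, {"VidBottom1": "Soft"}, {"VidBottom1": "Hard"}]
    epa        = [{"VidBottom1": "Hard"}, {"VidBottom1": "Soft"}, {"VidBottom1": "Soft"}]
    print(percent_agreement(contractor, epa, "VidBottom1"))  # ~66.7, below the 90% target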


Measures for evaluating Contractor/EPA Quality Assurance Videos

Fields In Database | QA Measure | Criteria | Priority QA Fields
UID | Completion | Completed |
VidTime | Completion | Completed |
CameraType | Completion | Completed |
Analyst | Completion | Completed |
VidBottom1 | Percent Agreement | 90% | Yes
VidBottom2 | Percent Agreement | 90% | Yes
VidBottomDom | Percent Agreement | 90% |
VidBottom3 | Percent Agreement | 90% |
VidFish | Percent Agreement | 90% | Yes
VidFishID | Completion | Completed |
VidFishComment | Comments Present if fish observed by EPA | TRUE |
VidGobyRating | Percent Agreement | 90% |
VidVeg | Percent Agreement | 90% | Yes
VidVegCover | Percent Agreement | 90% |
VegHeight | Percent Agreement | 90% |
VegTypeDom | Percent Agreement | 90% |
VegTypeSubDom | Percent Agreement | 90% |
VidVegComment | Comments Present | TRUE |
VidMussels | Percent Agreement | 90% | Yes
VidMusselDensity | Percent Agreement | 90% |
VidMusselRating | Percent Agreement | 90% | Yes
VidMusselComment | Comments Present | TRUE |
VidRating | Percent Agreement | 90% | Yes
VidH2OClarityRating | Percent Agreement | 90% | Yes
Controllable? | Percent Agreement | 90% |
ControllableID | Percent Agreement | 90% |
Uncontrollable | Percent Agreement | 90% |
UncontrollableID | Percent Agreement | 90% |
VidComment | Comments Present | TRUE |
Priority QA Fields | Percent Agreement | 90% | Yes
All Fields | Percent Agreement | 90% |


NCCA Underwater Video Analysis Database Guide

Field

Description

Possible Entries/Options

Instructions

UID

What is the video's Unique ID?

UID

The UID will be given for each video in the database provided.

CameraType

Which type of Camera was used?

SeaViewer, GoPro

Choose what type of camera was used for the video. SeaViewer files are .avi files and have a SeaViewer logo on the bottom of the video. GoPro videos are .mp4 files.

Analyst

Who completed the analysis?

Analyst

Indicate Name of Analyst.

VidBottom1

What is the dominant substrate at site?

Soft (silt/clay/sand); Hard (gravel/cobble/boulder/rock/bedrock); Unknown; Unknown/veg

*Choose NA for VidBottom2 if the site only has one sediment type.
*If two substrate types are present in equal proportions, check yes for VidBottomCodominance.
*In some cases, the substrate is completely obscured by vegetation beds (indicated with a VidVegCover = Dense below). In those cases, the substrate should be marked as unknown/veg. In other cases, the site may be dominated by a veg bed, but the camera does get glimpses of the substrate and it can be identified. In those cases the dominant substrate should be chosen. If the substrate is not able to be identified for any other reason (e.g. camera movement, distance, turbidity, light issues, etc), then "unknown" should be chosen.

VidBottom2

What is the secondary substrate at the site?

Soft (silt/clay/sand); Hard (gravel/cobble/boulder/rock/bedrock); Unknown; Unknown/veg; NA

VidBottomCodominance

Are VidBottom1 and VidBottom2 codominant?

Check box

VidBottomComment

Include any comments about substrate type.

Text as needed

Note any observations/comments about substrate type. If you are confident in a more refined substrate determination (e.g. sand or boulder or gravel), you can add that here, or if there are any additional observations of the bottom including wood or foreign objects. Include confidence in substrate claims.

VidFish

Were fish observed in the video?

Yes, No, Unknown

Indicate if any fish were observed in the video. Choose yes if fish were observed. Choose no if no fish were observed. Choose unknown if fish may be present but you are not confident, or if video quality, video clarity, etc prevents observation of fish.

VidFishID

If fish were observed, what species were identified?

Species names and quantity, or any notes about type of fish, or unknown.

If fish are present in the video, the VidFishID field should be filled out with what species can be identified. If species cannot be identified, enter unknown and any additional information available (juvenile, approximate quantity, etc.). Indicate uncertainty/confidence as necessary. Round gobies are a species of interest, so try to identify any fish that may be round gobies if possible, and note confidence.

VidFishComment

Include any comments about fish in the video.

Text as needed

Include any comments about fish observed, along with times in the video when fish are observed (if not present throughout video), and confidence in identification.

VidFilamentousVeg

Are filamentous forms of vegetation observed in the video?

Yes, No, Unknown

This field asks whether filamentous forms of vegetation are observed or not. This aims to identify sites that have algae present. Use unknown if you are not confident in your claim that there is or isn't filamentous vegetation, or if video quality, video clarity, etc prevents observation of the bed/vegetation at all. If you are confident there is filamentous vegetation present, even if it is at very small quantities, choose yes.

VidVegOther

Are other types of vegetation besides filamentous vegetation observed in the video?

Yes, No, Unknown

Indicate if the video shows the site has any vegetation other than filamentous vegetation types. Use unknown if you are not confident in your claim that there is or isn't vegetation other than filamentous vegetation, or if video quality, video clarity, etc., prevents observation of the bed/vegetation at all. Make sure to say yes even if there are only a few stems.

VidVegCover

What is approximate cover of all types (filamentous and other) of vegetation?

Absent (No vegetation)
Sparse - individual/isolated plants (<10% cover).
Medium - Patchy vegetation cover.
Dense - site is completely vegetated (>90% cover)
Unknown

Mentally integrate the entire video to estimate the cover of all vegetation types over the entire site represented by the video, using the bins described here.

VegHeight

What is the general/dominant height of all types (filamentous and other) of vegetation?

Absent (No vegetation)
Low (< 0.3 m)
High (>0.3 m)
Unknown

Indicate the general stature of dominant vegetation at the site: low or high. Canopy forming vegetation is always high. Choose Unknown if video clarity, video quality, etc. prevents an estimate from being made.

VidVegComment

Include any comments about the vegetation in the video.

Text as needed

Include any additional observations of vegetation. If it can be identified, include types of vegetation (finely branched, leafy, ribbon leaved, emergent/floating, filamentous, etc) and/or species you can easily identify with little effort, and any notes on structure, type, variability, etc. Mention confidence in veg claims.

VidMussels

Are live dreissenids present at the site?

Yes, No, Unknown

In order to indicate that mussels are present, live mussels must be observed. If only shells are observed or no mussels or shells are observed, choose No. If you think that mussels may be present but are not confident, or if the video quality or clarity prevent a determination, choose unknown.

VidMusselDensity

What is the density of live mussels at the site?

Absent (no live mussels)
Sparse (< 10% cover - a few isolated spots)
Medium (10 - 90% cover - consistently present in a patchy manner)
Dense (>90% cover - most surfaces covered)
Unknown

Mentally integrate the entire video to estimate the density of live mussels over the entire site represented by the video, using the bins described here. If it is difficult to estimate density, err on the conservative side. For example, if the video is blurry or fast-moving and you only see one shot of a small area where you can confidently identify mussels, but there may be many other areas that are just too blurry to confidently identify them, estimate density only based on the ones that you can confidently identify (e.g. Sparse). Indicate Unknown if vegetation, video clarity, video quality, etc. prevents an estimate from being made.

VidMusselComment

Include any comments about the mussels observed in the video.

Text as needed

Include comments on mussels. Include notes on variability, density, attachment to surfaces, confidence in claim, etc. Include a timestamp for any "fleeting glimpse" mussel sightings if they are not present throughout the entire video.

VidQualityComment

Comment on video quality.

Text as needed

Comment on video quality, including if there were technical issues, or other issues with the video. E.g. video never gets close enough to bottom/rocks/mussels, camera stirs up bottom, improper lighting, improper camera angle/view, turbidity/dark color, blurry images, fish don't present themselves, vegetation is too thick, bouncy image, spinning camera, only small area is viewable, or just can't make things out. Discuss confidence of individual claims in VidMusselComment, VidVegComment, VidFishComment, and VidBottomComment.
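
As an illustration of how completed records could be checked against the guide above (in Python), the sketch below validates a few of the dropdown fields and the required VidComment. The simplified option labels, the field subset, and the plain-dictionary record format are assumptions for the example, not the project’s database implementation.

    # Illustrative completeness/option check for one analysis record, using a
    # simplified subset of the dropdown options listed in the guide above.
    ALLOWED_OPTIONS = {
        "VidBottom1": {"Soft", "Hard", "Unknown", "Unknown/veg"},
        "VidFish": {"Yes", "No", "Unknown"},
        "VidMussels": {"Yes", "No", "Unknown"},
        "VidVegCover": {"Absent", "Sparse", "Medium", "Dense", "Unknown"},
    }

    def check_record(record: dict) -> list[str]:
        """Return a list of problems found in one video's analysis record."""
        problems = []
        for field, options in ALLOWED_OPTIONS.items():
            value = record.get(field)
            if value is None:
                problems.append(f"{field}: missing entry")
            elif value not in options:
                problems.append(f"{field}: '{value}' is not a listed option")
        if not record.get("VidComment"):
            problems.append("VidComment: a comment is required for every video")
        return problems

    # Flags the VidFish value, the missing VidVegCover entry, and the missing VidComment.
    print(check_record({"VidBottom1": "Hard", "VidFish": "Maybe", "VidMussels": "No"}))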

