Making inference with messy (citizen science) data: when are data accurate enough and how can they be improved?
- Clare, John D. J., Townsend, Philip A., Anhalt‐Depies, Christine, Locke, Christina, Stenglein, Jennifer L., Frett, Susan, Martin, Karl J., Singh, Aditya, Van Deelen, Timothy R., Zuckerberg, Benjamin
- Ecological Applications 2019 v.29 no.2 pp. e01849
- algorithms, automation, citizen scientists, data quality, models, remediation, screening
- Measurement or observation error is common in ecological data: as citizen scientists and automated algorithms play larger roles in processing growing volumes of data to address problems at large scales, concerns about data quality and strategies for improving it have received greater focus. However, practical guidance on fundamental data quality questions for data users or managers—how accurate do data need to be, and what is the best or most efficient way to improve them?—remains limited. We present a generalizable framework for evaluating data quality and identifying remediation practices, and demonstrate the framework using trail camera images classified via crowdsourcing to determine acceptable rates of misclassification and identify optimal remediation strategies for analysis with occupancy models. We used expert validation to estimate baseline classification accuracy and simulation to determine the sensitivity of two occupancy estimators (standard and false‐positive extensions) to different empirical misclassification rates. We used regression techniques to identify important predictors of misclassification and to prioritize remediation strategies. More than 93% of images were accurately classified, but simulation results suggested that most species were not identified accurately enough to permit distribution estimation at our predefined accuracy threshold (<5% absolute bias). A model developed to screen incorrect classifications predicted misclassified images with >97% accuracy: enough to meet our accuracy threshold. Occupancy models that accounted for false‐positive error provided still more accurate inference, even at high rates of misclassification (30%). Because simulation suggested occupancy models were less sensitive to additional false‐negative error, screening models or fitting occupancy models that account for false‐positive error emerged as efficient data remediation solutions.
Combining simulation‐based sensitivity analysis with empirical estimation of baseline error and its variability allows users and managers of potentially error‐prone data to identify and fix problematic data more efficiently. It may be particularly helpful for “big data” efforts dependent upon citizen scientists or automated classification algorithms with many downstream users, but given the ubiquity of observation or measurement error, even conventional studies may benefit from focusing more attention upon data quality.
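The simulation-based sensitivity analysis described in the abstract can be illustrated with a minimal sketch: simulate per-site detection histories under a known occupancy probability, contaminate the data with false-positive detections at unoccupied sites, and refit a standard occupancy estimator that ignores false positives to see how its occupancy estimate is biased. All function names and parameter values below are illustrative assumptions, not the authors' actual code or settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(42)

def simulate_detections(n_sites, n_surveys, psi, p, fp):
    """Simulate detection counts per site: occupied sites are detected
    with probability p per survey; unoccupied sites yield false-positive
    detections with probability fp per survey."""
    occupied = rng.random(n_sites) < psi
    detect_prob = np.where(occupied, p, fp)
    return rng.binomial(n_surveys, detect_prob)

def neg_loglik(params, y, n_surveys):
    """Negative log-likelihood of the standard occupancy model
    (no false-positive component); params are on the logit scale."""
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(params)))
    lik = psi * binom.pmf(y, n_surveys, p) + (1.0 - psi) * (y == 0)
    return -np.sum(np.log(lik + 1e-300))

def fit_standard_occupancy(y, n_surveys):
    """Maximum-likelihood fit of the standard occupancy model."""
    res = minimize(neg_loglik, x0=[0.0, 0.0], args=(y, n_surveys),
                   method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x))  # back-transform (psi_hat, p_hat)

J, psi_true = 5, 0.5
y_clean = simulate_detections(500, J, psi_true, p=0.4, fp=0.0)
y_noisy = simulate_detections(500, J, psi_true, p=0.4, fp=0.10)
psi_clean, _ = fit_standard_occupancy(y_clean, J)
psi_noisy, _ = fit_standard_occupancy(y_noisy, J)
# psi_clean should sit near the true 0.5, while psi_noisy is biased
# upward: the standard estimator attributes false positives to occupancy.
```

Repeating the fit over a grid of false-positive rates yields a bias curve against misclassification rate, which is the kind of sensitivity surface one would compare to a predefined accuracy threshold such as 5% absolute bias.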