Systematic Error Removal Using Random Forest for Normalizing Large-Scale Untargeted Lipidomics Data
- Fan, Sili, Kind, Tobias, Cajka, Tomas, Hazen, Stanley L., Tang, W. H. Wilson, Kaddurah-Daouk, Rima, Irvin, Marguerite R., Arnett, Donna K., Barupal, Dinesh K., Fiehn, Oliver
- Analytical Chemistry 2019 v.91 no.5 pp. 3590-3596
- cohort studies, data collection, quality control, standard deviation, variance
- Large-scale untargeted lipidomics experiments involve the measurement of hundreds to thousands of samples, with data sets usually acquired on a single instrument over days or weeks of analysis time. Such extended acquisition introduces a variety of systematic errors, including batch differences, longitudinal drifts, and even instrument-to-instrument variation. This technical variance can obscure the true biological signal and hinder biological discoveries. To address this issue, we present a novel normalization approach based on pooled quality control (QC) samples. The method, systematic error removal using random forest (SERRF), eliminates unwanted systematic variation in large sample sets. We compared SERRF with 15 other commonly used normalization methods on six lipidomics data sets from three large cohort studies (832, 1162, and 2696 samples). SERRF reduced the average technical error for these data sets to 5% relative standard deviation. We conclude that SERRF outperforms the other existing methods and can significantly reduce unwanted systematic variation, revealing the biological variance of interest.
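The core idea described in the abstract can be sketched as follows: a random forest is fit on the QC injections to learn the systematic trend (drift, batch effects) for a compound, and each sample's intensity is then divided by the predicted trend. This is a minimal illustrative sketch, not the authors' implementation: it simulates a single compound with a linear drift, uses injection order as the only predictor (SERRF additionally uses the intensities of correlated compounds), and relies on scikit-learn's `RandomForestRegressor`.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated run for one compound: 100 injections with longitudinal drift.
n = 100
order = np.arange(n)
drift = 1.0 + 0.005 * order                       # systematic signal drift
true_signal = 1000.0                              # constant "biological" level
intensity = true_signal * drift * rng.normal(1.0, 0.02, n)

# Every 10th injection is a pooled QC sample.
qc = order % 10 == 0

# Fit the systematic trend on QC injections only.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(order[qc].reshape(-1, 1), intensity[qc])

# Normalize: divide by the predicted trend, rescale to the median QC level.
predicted = model.predict(order.reshape(-1, 1))
normalized = intensity / predicted * np.median(intensity[qc])

# Technical error, estimated as the relative standard deviation of the QCs.
rsd_before = intensity[qc].std() / intensity[qc].mean()
rsd_after = normalized[qc].std() / normalized[qc].mean()
```

In this toy setup the QC relative standard deviation drops substantially after normalization, mirroring the paper's evaluation criterion (average QC RSD across compounds), though the 5% figure reported in the abstract applies to the real cohort data sets, not this simulation.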