Multimodel ensembles improve predictions of crop–environment–management interactions
- Wallach, Daniel, Martre, Pierre, Liu, Bing, Asseng, Senthold, Ewert, Frank, Thorburn, Peter J., van Ittersum, Martin, Aggarwal, Pramod K., Ahmed, Mukhtar, Basso, Bruno, Biernath, Christian, Cammarano, Davide, Challinor, Andrew J., De Sanctis, Giacomo, Dumont, Benjamin, Eyshi Rezaei, Ehsan, Fereres, Elias, Fitzgerald, Glenn J., Gao, Y., Garcia‐Vila, Margarita, Gayler, Sebastian, Girousse, Christine, Hoogenboom, Gerrit, Horan, Heidi, Izaurralde, Roberto C., Jones, Curtis D., Kassie, Belay T., Kersebaum, Kurt C., Klein, Christian, Koehler, Ann‐Kristin, Maiorano, Andrea, Minoli, Sara, Müller, Christoph, Naresh Kumar, Soora, Nendel, Claas, O'Leary, Garry J., Palosuo, Taru, Priesack, Eckart, Ripoche, Dominique, Rötter, Reimund P., Semenov, Mikhail A., Stöckle, Claudio, Stratonovitch, Pierre, Streck, Thilo, Supit, Iwan, Tao, Fulu, Wolf, Joost, Zhang, Zhao
- Global change biology 2018 v.24 no.11 pp. 5072-5083
- climate change, crop models, data collection, prediction, variance, wheat
- A recent innovation in assessing the impact of climate change on agricultural production has been the use of crop multimodel ensembles (MMEs). These studies usually find large variability between individual models, but the ensemble mean (e‐mean) and median (e‐median) often seem to predict quite well. However, few studies have specifically examined the predictive quality of those ensemble predictors. We ask what the predictive quality of e‐mean and e‐median is, and how it depends on ensemble characteristics. Our empirical results are based on five MME studies applied to wheat, using different data sets but the same 25 crop models. We show that the ensemble predictors have quite high skill and are better than most, and sometimes all, individual models for most groups of environments and most response variables. Mean squared error of e‐mean decreases monotonically with the size of the ensemble if models are added at random, but has a minimum at usually 2–6 models if best‐fit models are added first. Our theoretical results describe the ensemble using four parameters: average bias, model effect variance, environment effect variance, and interaction variance. We show analytically that the mean squared error of prediction (MSEP) of e‐mean will always be smaller than MSEP averaged over models, and will be less than MSEP of the best model if squared bias is less than the interaction variance. If models are added to the ensemble at random, MSEP of e‐mean will decrease as the inverse of ensemble size, with a minimum equal to squared bias plus interaction variance. This minimum value is not necessarily small, so it is important to evaluate the predictive quality of e‐mean for each target population of environments. These results provide new information on the advantages of ensemble predictors, but also show their limitations.
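The abstract's two analytic claims — that MSEP of e‐mean is always below the model-average MSEP, and that it falls roughly as the inverse of ensemble size toward a floor set by error components shared across models — can be illustrated with a small simulation. This is a minimal sketch, not the paper's exact parameterization: all parameter values (`B`, `sigma_m`, `sigma_c`, `sigma_e`) are invented for illustration, and the shared environment-specific component `c_j` plays the role of the error that does not average out as models are added (which, together with squared bias, sets the floor the abstract describes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed values, not taken from the paper)
B = 0.3          # average bias of the model population
sigma_m = 0.5    # SD of model main effects (each model's systematic offset)
sigma_c = 0.4    # SD of the error component shared by all models in an
                 # environment; it does NOT shrink as the ensemble grows
sigma_e = 0.6    # SD of model-by-environment residuals, independent across
                 # models, so they average out as 1/n

n_models, n_envs = 25, 2000

# Prediction error of model i in environment j:  e_ij = B + m_i + c_j + eps_ij
m = rng.normal(0.0, sigma_m, size=(n_models, 1))
c = rng.normal(0.0, sigma_c, size=(1, n_envs))
eps = rng.normal(0.0, sigma_e, size=(n_models, n_envs))
errors = B + m + c + eps

msep_individual = (errors ** 2).mean(axis=1)   # MSEP of each individual model

def msep_emean(k):
    """MSEP of the ensemble mean of the first k (randomly ordered) models."""
    return float((errors[:k].mean(axis=0) ** 2).mean())

# Claim 1: MSEP of e-mean is below the MSEP averaged over individual models.
print("e-mean:", round(msep_emean(n_models), 3),
      " avg model:", round(msep_individual.mean(), 3))

# Claim 2: MSEP of e-mean decreases roughly as 1/n toward the floor
# B^2 + sigma_c^2 (squared bias plus the shared, non-averaging variance).
for k in (1, 2, 5, 10, 25):
    print(f"n={k:2d}  MSEP(e-mean)={msep_emean(k):.3f}")
print("floor:", B**2 + sigma_c**2)
```

Because `B**2` (0.09) is smaller than the shared variance `sigma_c**2` (0.16) in this sketch, the full-ensemble e‐mean also ends up below the best individual model's MSEP, matching the condition stated in the abstract.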