Scanning Forecasts With a Radar Reveals True Robustness

The forecasting world loves a clean score. A single number, a neat SMAPE or MAPE, and we feel we understand a model's performance. But in the messy real world, data behave like weather: they shift, surprise, and reveal different strengths and weaknesses at different moments. The new study behind ModelRadar argues that this hunger for one number blinds us to how forecasts actually behave as conditions change.

The authors propose a different lens, one that treats forecast evaluation as a multi-dimensional map rather than a flat average. It is a shift from a spotlight on a single performance metric to a radar-like scan of model behavior across data conditions, forecasting horizons, and time-series quirks. And it comes with a practical payoff: better, more robust decisions about which forecasting method to trust in which situation. The work is led by Vitor Cerqueira, Luis Roque, and Carlos Soares from the University of Porto, with collaborators at LIACC and Fraunhofer Portugal AICOS. It invites practitioners to rethink model choice as a disciplined, condition-aware act rather than a race for the lowest average error.
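To make the contrast concrete, here is a minimal sketch of the idea. It is not code from the paper: the two "models", the synthetic data, and the per-horizon slicing are all hypothetical, built only to show how one averaged SMAPE can hide behavior that shifts with conditions.

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric MAPE in percent: mean of 200*|y - yhat| / (|y| + |yhat|)."""
    return np.mean(200.0 * np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred)))

rng = np.random.default_rng(0)
horizons = np.arange(1, 13)                    # 12-step-ahead forecasts
y = 100 + 10 * rng.standard_normal((500, 12))  # toy actuals: 500 series x 12 horizons

# Two hypothetical models: A has steady error at every horizon,
# B is sharp at short horizons but degrades as the horizon grows.
pred_a = y + 8.0 * rng.standard_normal(y.shape)
pred_b = y + (2.0 + horizons) * rng.standard_normal(y.shape)

for name, pred in [("A", pred_a), ("B", pred_b)]:
    overall = smape(y, pred)                                   # the single "clean score"
    by_h = [smape(y[:, h], pred[:, h]) for h in range(12)]     # the radar-style slices
    print(f"model {name}: overall SMAPE {overall:5.2f} | "
          f"h=1 {by_h[0]:5.2f} | h=12 {by_h[-1]:5.2f}")
```

By construction, the two overall scores land close together, while the per-horizon slices diverge sharply: model B beats A at one step ahead and loses badly at twelve. A flat average flattens exactly that structure, which is the behavior a condition-aware evaluation is meant to surface.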