The heat map shows the influence of weather on corn yields in 844 US Corn Belt counties. It is based on a decision-tree regression algorithm applied to monthly weather and yield data for the period 1910-2012.
As is well known, July weather has by far the biggest influence on corn yields.
However, several other interesting effects show up. For example, June precipitation also influences yields in Indiana and Ohio, while cold September nights are damaging in Minnesota. July rainfall is less important in Nebraska due to the widespread use of irrigation there.
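As a rough sketch of the kind of decision-tree regression described above, the snippet below fits a tree to invented monthly weather features and a toy yield signal. The feature names, coefficients, and data are illustrative assumptions, not the original county dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500  # pretend county-years

# Invented monthly weather features (standardized)
X = rng.normal(size=(n, 4))
feature_names = ["june_precip", "july_tmax", "july_precip", "sept_tmin"]

# Toy yield signal dominated by July weather, echoing the heat map
y = -1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=n)

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
for name, imp in zip(feature_names, tree.feature_importances_):
    print(f"{name:12s} {imp:.2f}")
```

The tree's `feature_importances_` attribute is the same mechanism that produces the heat map's variable-importance values: each feature is credited with the impurity reduction of the splits it participates in.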
Random forest and gradient boosting are leading data mining techniques, both designed to improve upon the poor predictive accuracy of individual decision trees. Random forest is by far the more popular, if the Google Trends chart below is anything to go by.
Correlation between predictors is the data miner's bugbear, and an inevitable fact of life in many situations. Multicollinearity can lead to misleading conclusions and degrade predictive power. A natural question is: which approach handles multicollinearity better, random forest or gradient boosting?
Suppose there are n observations and p potential predictors x_1, ..., x_p. Assume that

y = f(x_1, x_2) + σε     (A)

where σ is the amplitude of Gaussian noise ε (mean zero and unit variance). Only 2 of the p potential predictors actually play a role in generating the observations. The x_j are independently distributed N(0, 1), with the exception of one spurious predictor, say x_3, which is correlated with the true predictor x_1 (correlation ρ).
As the correlation ρ increases, it becomes harder for a data mining algorithm to ignore the spurious predictor, even though it does not appear in (A) and is not a "true" explanatory variable.
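This setup can be simulated in a few lines. The functional form of f and the parameter values (n, p, ρ, σ) below are illustrative assumptions; the standard trick of mixing in an independent column reproduces the desired correlation.

```python
import numpy as np

def make_data(n=2000, p=10, rho=0.9, sigma=0.5, seed=0):
    """Generate the toy problem: only two predictors drive y,
    plus one spurious predictor correlated with a true one."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))  # independent N(0, 1) columns
    # Make the third column correlated with the first (correlation rho)
    X[:, 2] = rho * X[:, 0] + np.sqrt(1 - rho**2) * X[:, 2]
    # Assumed f: a simple additive signal in the two true predictors
    y = X[:, 0] + X[:, 1] + sigma * rng.normal(size=n)
    return X, y

X, y = make_data()
print(np.corrcoef(X[:, 0], X[:, 2])[0, 1])  # close to rho
```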
Variable importance charts for this class of problem show that gradient boosting does a better job of handling multicollinearity than random forest. The complex trees used by random forest tend to spread variable importance more widely, particularly onto variables that are correlated with the "true" predictors. The simpler base learner trees of gradient boosting (4 terminal nodes in the above example) seem to have greater immunity to the evils of multicollinearity.
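The effect can be checked directly by comparing how much importance each method leaks onto the spurious predictor. In the sketch below, the random forest uses the classic p/3 feature subsampling for regression, and the boosted base learners are depth-2 trees (up to 4 terminal nodes, as above); these hyperparameter choices are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
n, p, rho = 2000, 10, 0.9
X = rng.normal(size=(n, p))
X[:, 2] = rho * X[:, 0] + np.sqrt(1 - rho**2) * X[:, 2]  # spurious x3 ~ x1
y = X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=200, max_features=1 / 3,
                           random_state=0).fit(X, y)
gb = GradientBoostingRegressor(n_estimators=200, max_depth=2,
                               random_state=0).fit(X, y)

# Importance assigned to the spurious predictor (column index 2)
print("random forest   x3 importance:", rf.feature_importances_[2])
print("gradient boost  x3 importance:", gb.feature_importances_[2])
```

The random forest's deep trees, forced to consider restricted feature subsets at each split, route many splits through the correlated stand-in; the greedy shallow boosted trees almost always prefer the true predictor.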
Random forest is an excellent data mining technique, but its greater popularity compared to boosting seems unjustified.