Weather influence on US corn yields

The heat map shows the influence of weather on corn yields in 844 US Corn Belt counties. It is based on a decision-tree regression algorithm applied to monthly weather and yield data for the period 1910–2012.
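
A minimal sketch of this kind of per-county analysis in R, assuming the rpart package; the data frame and its column names are hypothetical, for illustration only:

    # Sketch only: fit a regression tree relating one county's corn yield
    # to monthly weather variables, then read off variable importance.
    # `county_data`, `yield`, and the weather column names are hypothetical.
    library(rpart)

    fit <- rpart(yield ~ jun_precip + jul_precip + jul_tmax + sep_tmin,
                 data = county_data, method = "anova")

    # Importance scores like these, aggregated across 844 counties,
    # are what a heat map of weather influence would display.
    print(fit$variable.importance)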

As is well known, July weather has by far the biggest influence on corn yields.

However, several other interesting effects show up. For example, June precipitation also influences yields in Indiana and Ohio, while cold September nights are damaging in Minnesota. July rainfall is less important in Nebraska, owing to the widespread use of irrigation there.

March 23, 2014 · joe

Random forest or gradient boosting?

Random forest and gradient boosting are leading data mining techniques. Both are designed to improve on the poor predictive accuracy of individual decision trees. Random forest is by far the more popular of the two, if Google Trends is anything to go by.

Correlation between predictors is the data miners’ bugbear. It is an inevitable fact of life in many situations. Multicollinearity can lead to misleading conclusions and degrade predictive power. A natural question is: Which approach handles multicollinearity better? Random forest or gradient boosting?

Suppose there are \(n\) observations \(\{y\}\) and \(p\) potential predictors \(\{x_1, \dots, x_p\}\). Assume that

(A)   \[ y = x_1 + x_2 + \sigma\mathcal{N} \]

where \(\sigma\) is the amplitude of the Gaussian noise \(\mathcal{N}\) (mean zero and unit variance). Only two of the \(p\) potential predictors (\(x_1\) and \(x_2\)) actually play a role in generating the observations. The \(\{x_1, \dots, x_p\}\) are independent standard normal variables, with the exception of \(x_3\), which is correlated with \(x_1\) (correlation \(\rho\)):

(B)   \[ x_3 = \rho x_1 + \sqrt{1-\rho^2}\,\mathcal{N} \]

As the correlation \(\rho\) increases, it becomes harder for a data mining algorithm to ignore \(x_3\), even though \(x_3\) does not appear in (A) and is not a “true” explanatory variable.
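
This data-generating process is straightforward to simulate. The R sketch below is mine, not from the original post; the sample size and the values of \(p\), \(\rho\), and \(\sigma\) are illustrative assumptions:

    # Simulate equations (A) and (B): only x1 and x2 drive y,
    # while x3 is correlated with x1 but plays no causal role.
    set.seed(1)
    n     <- 1000   # observations
    p     <- 10     # potential predictors
    rho   <- 0.9    # correlation between x1 and x3
    sigma <- 1      # noise amplitude

    X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
    X[, 3] <- rho * X[, 1] + sqrt(1 - rho^2) * rnorm(n)   # equation (B)
    y <- X[, 1] + X[, 2] + sigma * rnorm(n)               # equation (A)
    sim <- data.frame(y, X)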

[Variable importance charts: random forest vs. gradient boosting]

Variable importance charts for this class of problem show that gradient boosting does a better job of handling multicollinearity than random forest. The complex trees used by random forest tend to spread variable importance more widely, particularly onto variables that are correlated with the “true” predictors. The simpler base-learner trees of gradient boosting (4 terminal nodes in the example above) seem to have greater immunity from the evils of multicollinearity.

Random forest is an excellent data mining technique, but its greater popularity compared with gradient boosting seems unjustified.

R code
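
A minimal sketch of the comparison, assuming the simulated data frame sim from above and the randomForest and gbm packages; the tuning values are illustrative assumptions, except that interaction.depth = 3 yields the 4-terminal-node base learners mentioned above:

    library(randomForest)
    library(gbm)

    # Random forest: an ensemble of large, fully grown trees
    rf <- randomForest(y ~ ., data = sim, ntree = 500, importance = TRUE)
    print(importance(rf, type = 1))     # permutation importance (%IncMSE)

    # Gradient boosting: an ensemble of small base-learner trees;
    # interaction.depth = 3 splits gives 4 terminal nodes per tree
    gb <- gbm(y ~ ., data = sim, distribution = "gaussian",
              n.trees = 2000, interaction.depth = 3, shrinkage = 0.01)
    print(summary(gb, plotit = FALSE))  # relative influence of each predictor

    # With rho large, compare how much importance each method assigns to x3.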

February 11, 2014 · joe