One Statistician’s View of Big Data

Recently I've had several questions about using machine learning models with large data sets. Here is a talk I gave at Yale's Big Data Symposium on the subject.

I believe that, with a few exceptions, less data is more. Once you get beyond some "large enough" number of samples, most models don't change very much, and the additional computational burden is likely to cause practical problems with model fitting.
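To illustrate the point, here is a small simulation (not from the talk, just a sketch): the same linear model is fit on increasingly large samples, and the slope estimate settles down well before the computational cost does.

```r
# Illustrative only: fit the same linear regression on growing samples
# and watch the slope estimate stabilize around the true value of 2.
set.seed(1)
true_beta <- 2

for (n in c(100, 1000, 10000, 100000)) {
  x <- rnorm(n)
  y <- true_beta * x + rnorm(n)
  fit <- lm(y ~ x)
  cat(sprintf("n = %6d: estimated slope = %.4f\n", n, coef(fit)[2]))
}
```

Past a certain sample size, the extra rows mostly buy you smaller standard errors on a model you already had.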

Off the top of my head, the exceptions that I can think of are:

  • class imbalances
  • poor variability in measured predictors
  • exploring new "spaces" or customer segments

Big Data may be great as long as you are adding something of value (instead of more of what you already have). The last bullet above is a good example. I work a lot with computational chemistry, and we are constantly moving into new areas of "chemical space," making new compounds that have qualities that had not been previously investigated. Models that ignore this space are not as good as ones that do include it.

Also, new measurements or characteristics of your samples can make all the difference. Anthony Goldbloom of Kaggle has a great example from a competition for predicting the value of used cars:

The results included, for instance, that orange cars were generally more reliable - and that colour was a very significant predictor of the reliability of a used car.
"The intuition here is that if you are the first buyer of an orange car, orange is an unusual colour you're probably going to be someone who really cares about the car and so you looked after it better than somebody who bought a silver car," said Goldbloom.
"The data doesn't lie - the data unearthed that correlation. It was something that they had not taken into account before when purchasing vehicles."

My presentation has other examples of adding new information to increase the dimensionality of the data. The final quote sums it up:

The availability of Big Data should be a trigger to really re-evaluate what we are trying to solve and why this will help.

Recent Changes to caret

Here is a summary of some recent changes to caret.

Feature Updates:

  • train was updated to use recent changes in the gbm package that allow boosting with three or more classes (via the multinomial distribution).

  • The Yeo-Johnson power transformation was added. This is very similar to the Box-Cox transformation, but it does not require the data to be greater than zero.
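As a quick sketch of the second item (assuming a version of caret with this feature), the Yeo-Johnson transformation is requested through preProcess just like Box-Cox, but it tolerates zero and negative values:

```r
library(caret)

# Simulated predictor with negative and zero values, where Box-Cox
# would not apply but Yeo-Johnson does.
set.seed(1)
dat <- data.frame(x = rnorm(100, mean = 0, sd = 2))

# Estimate the transformation, then apply it to the data
pp <- preProcess(dat, method = "YeoJohnson")
transformed <- predict(pp, dat)
```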

New models referenced by train:

  • Maximum uncertainty linear discriminant analysis (Mlda) and factor-based linear discriminant analysis (RFlda) from the HiDimDA package were added.

  • The kknn.train model in the kknn package was added. This is basically a more intelligent K-nearest neighbors model that can use distance weighting, non-Euclidean distances (via the Minkowski distance), and a few other features.

  • The extraTrees function in the package of the same name was added. This generalizes the random forest model by adding randomness to the predictors and the split values that are evaluated at each split point.
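The new models are accessed through train via their method strings; for example (the method values "kknn" and "extraTrees" here are my understanding of the current naming, and iris is just a convenient built-in data set):

```r
library(caret)

set.seed(1)
ctrl <- trainControl(method = "cv", number = 5)

# Weighted K-nearest neighbors via the kknn package
fit_knn <- train(Species ~ ., data = iris,
                 method = "kknn",
                 trControl = ctrl)

# Extremely randomized trees via the extraTrees package
fit_et <- train(Species ~ ., data = iris,
                method = "extraTrees",
                trControl = ctrl)
```

As usual, train tunes each model's parameters over a default grid unless you supply your own via tuneGrid.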

Numerous bugs were also fixed in the last few releases.

The new version is 5.16-04. Feel free to email me at mxkuhn@gmail.com if you have any feature requests or questions.