Tuesday, November 24, 2009

When sharing isn't a good idea

Ensemble models seem to be all the buzz at the moment. The Netflix Prize was won by a conglomerate of various models and approaches that each excelled in subsets of the data.

A number of data miners have presented findings based upon simple ensembles that use the mean prediction of a number of models. I was surprised that some form of weighting isn't commonly used, and that a simple mean average of multiple models could yield such an improvement in global predictive power. It kinda reminds me of the Gestalt theory phrase "the whole is greater than the sum of the parts". It's got me thinking: when is it best not to share predictive power? What if one model is the best? There are also a ton of considerations regarding scalability and the trade-off between additional processing, added business value, and practicality (don't mention random forests to me..), but let's pretend those don't exist for the purpose of this discussion :)
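
To be concrete about what a "mean prediction of a number of models" looks like, here is a rough toy sketch of my own in Python/scikit-learn (made-up data, placeholder models and weights, nothing to do with the actual Netflix entries): the simple ensemble is literally the mean of each model's score, and a weighted version is one small step further.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Toy data standing in for customer records (hypothetical, not the Netflix set).
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = [
        LogisticRegression(max_iter=1000),
        GaussianNB(),
        DecisionTreeClassifier(max_depth=5),
    ]
    # One column of scores per model, all for the same test records.
    preds = np.column_stack(
        [m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in models]
    )

    # Simple ensemble: unweighted mean of the individual model scores.
    mean_pred = preds.mean(axis=1)

    # Weighted alternative: placeholder weights, which in practice would be
    # derived from each model's performance on a validation set.
    weights = np.array([0.5, 0.2, 0.3])
    weighted_pred = preds @ weights

    for name, p in [("mean", mean_pred), ("weighted", weighted_pred)]:
        print(name, roc_auc_score(y_test, p))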

So this has got me thinking: do ensembles work best in situations where there are clearly different sub-populations of customers? For example, Netflix is in the retail space, with many customers that rent the same popular blockbuster movies and a moderate number of customers that rent rarer (or far more diverse, i.e. long tail) movies. I haven't looked at the Netflix data, so I'm guessing that most customers don't have hundreds of transactions; the data on any specific customer could be quite scant (in terms of rentals/transactions), which makes correctly generalising the behaviour of the masses to specific customers important. In other industries such as telecom there are parallels: customers can also be differentiated by the nature of their communication (voice calls, SMS, data consumption etc), just like types of movies. Telecom is mostly about quantity though (customer X used to make a lot of calls, etc). More importantly, there is a huge amount of data about each customer, often many hundreds of transactions per customer, so there is relatively less reliance upon the supporting behaviour of the masses (although it helps a lot) to understand any specific customer.

Following this logic, I'm thinking that ensembles are great at reducing the error of incorrectly applying insights derived from the generalised masses to those weirdos that rent obscure sci-fi movies! Combining models that each explain a sub-population very well makes sense, but what if you don't have many sub-populations (or can identify and model their behaviour with a single model)?

But you may shout "hey, what about the KDD Cup". Yes, the recent KDD Cup challenge (anonymous, featureless telecom data from Orange) was also won by an ensemble of over a thousand models created by IBM Research. I'd like to have had some information about what the hundreds of columns represented; that might have helped in understanding the Orange data and building more insightful, better-performing models. Aren't ensemble models used in this way simply a brute-force approach to over-learn the data? I'd also really like to know how the performance of the winning entry tracks over the subsequent months for Orange.

Well, I haven't had a lot of success using ensemble models on the telecom data I work with, and I'm hoping that is more a reflection of the data than any ineptitude on my part. I've tried simply building multiple models on the entire dataset and averaging the scores, but this doesn't generate much additional improvement (granted, on already good models, and I already combine K-means and neural nets on the whole base). In my free time I'm just starting to try splitting the entire customer base into dozens of small sub-populations, building a neural net model on each, then combining the results to see if that yields an improvement. It'll take a while.
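
For what it's worth, the rough shape of that experiment looks something like the sketch below (a hypothetical mock-up with made-up data and arbitrary parameter choices, not the real customer base or the actual models): cluster the base with K-means, fit a small neural net per cluster, and score each customer with their own cluster's model.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Stand-in for the customer base (hypothetical features, churn-style target).
    X, y = make_classification(n_samples=10000, n_features=30, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    scaler = StandardScaler().fit(X_train)
    X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

    # Split the base into sub-populations with K-means.
    n_clusters = 12  # "dozens" in practice; 12 keeps the toy example quick
    km = KMeans(n_clusters=n_clusters, random_state=1).fit(X_train_s)
    train_labels, test_labels = km.labels_, km.predict(X_test_s)

    # Fit one small neural net per sub-population
    # (assumes each cluster contains examples of both classes).
    models = {}
    for c in range(n_clusters):
        idx = train_labels == c
        models[c] = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                                  random_state=1).fit(X_train_s[idx], y_train[idx])

    # Score each test customer with the model for their own cluster.
    scores = np.zeros(len(X_test_s))
    for c in range(n_clusters):
        idx = test_labels == c
        if idx.any():
            scores[idx] = models[c].predict_proba(X_test_s[idx])[:, 1]

    print("AUC of per-cluster models:", roc_auc_score(y_test, scores))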

Thoughts?

8 comments:

Dean Abbott said...

One thing to check with your models is how much they differ. If the correlation in predictions is, say, 0.95, it doesn't matter how many models you average together. Ensembles work best when there is sufficient disagreement in the models.

I think you are right that sub-populations, or more precisely, different behaviour among sub-populations, can be a big driver in how much improvement ensembles provide. In these cases, there is a great deal of complexity in the data that a single model may have difficulty finding.
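
A rough way to run the correlation check Dean describes (a Python sketch with placeholder scores; it assumes you already have each model's predictions for the same set of records):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical placeholder scores: three models that mostly agree.
    base = rng.random(1000)
    preds = {
        "neural_net": base + 0.05 * rng.random(1000),
        "decision_tree": base + 0.05 * rng.random(1000),
        "regression": base + 0.05 * rng.random(1000),
    }

    names = list(preds)
    corr = np.corrcoef([preds[n] for n in names])

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            print(f"{names[i]} vs {names[j]}: r = {corr[i, j]:.2f}")

    # If every pairwise r is up around 0.95, averaging will barely move the
    # needle; the ensemble only buys something when the models disagree.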

Matthew said...

I always thought that, say, Modeler's ensemble stuff was a sop to a naive notion that if one model was good then three must be better, so plug them in and let magic happen. That's nothing like the kind of ensembling that we saw in KDD, or that Salford won all their prizes with. Ensembling automates the easier part, which is combining the predictions. The hard part is generating a large number of models that are meaningfully different from each other. Read how TreeNet does it if you haven't. If you're trying to combine good models into a better model you may already be barking up the wrong tree. Think combining bad models into a great model.
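
A minimal illustration of that last point (a generic scikit-learn sketch on toy data, not how TreeNet itself works internally): a single decision stump is a deliberately bad model, but a few hundred of them boosted together can be strong.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=2)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

    # A single "bad" model: a one-split decision stump.
    stump = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)
    print("one stump:", stump.score(X_test, y_test))

    # A few hundred stumps boosted together (AdaBoost's default base learner
    # is a depth-1 stump), each one focused on the previous ones' mistakes.
    boosted = AdaBoostClassifier(n_estimators=300).fit(X_train, y_train)
    print("300 boosted stumps:", boosted.score(X_test, y_test))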

James Pearce said...

My favourite method is to use gradient boosted models or random forests, which ensemble as they go.
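
For anyone who wants to try that, both are a couple of lines in scikit-learn (a minimal sketch on toy data; commercial implementations such as TreeNet differ in the details):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=5000, n_features=20, random_state=3)

    # Gradient boosting: trees added sequentially, each fitting what the
    # previous ones got wrong.
    gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)

    # Random forest: trees grown independently on bootstrap samples with
    # random feature subsets, then averaged.
    rf = RandomForestClassifier(n_estimators=200)

    for name, model in [("gradient boosting", gbm), ("random forest", rf)]:
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(name, round(auc, 3))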

--james

Anonymous said...

Hi Tim,

I would agree that the practice of combining many very different models is starting to make data mining look more like alchemy / cooking than true science...

In an older post, I noticed that "The Elements of Statistical Learning" was part of your library. The most recent edition of this book has a new chapter on random forests, which does a great job of explaining how combining models with low intra-correlation helps reduce variance.

It also explains why weighting models is not required (in this case), and how de-correlation is achieved.
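
For readers without the book to hand, the relevant result is the standard variance-of-an-average identity: for B models whose predictions each have variance sigma^2 and pairwise correlation rho, the simple average has variance

    \operatorname{Var}\!\left(\frac{1}{B}\sum_{b=1}^{B}\hat{f}_b(x)\right) = \rho\,\sigma^{2} + \frac{1-\rho}{B}\,\sigma^{2}

so adding more models only shrinks the second term; the first term is a floor set by how correlated the models are, which is why random forests spend their effort de-correlating the trees rather than weighting them.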

While I'm there, I've been reading this blog for a while, and thought you might be interested in checking out some data mining visualizations I've developed: see http://www.data-applied.com. Or perhaps even give me a plug on your blog :)

Dom.

Anonymous said...

The recent AusDM Analytic Challenge used over 1,000 models from the two top Netflix teams to see who could ensemble them best.

Most people who ensemble do use straight averaging. The idea of this contest was to stimulate research into different ways you can ensemble.

All the contestants' papers were put together in a document which can be downloaded:

http://www.tiberius.biz/ausdm09/results.html

Some other research I did looked at the submissions from another competition. What is interesting is that the 'best' ensemble picked base models constructed using 7 different algorithms.

http://www.tiberius.biz/pakdd07.html

Anonymous said...

It's odd how this topic has recently been discussed among data mining practitioners as if it were a relatively new thing. It's been around for decades - originally developed by statisticians, mostly for time series applications.

Steve said...

I liked what Anand Rajaraman said on this subject. He had his Stanford data mining class students working on the Netflix Prize. Here's what he had to say:

"Different student teams in my class adopted different approaches to the problem, using both published algorithms and novel ideas. Of these, the results from two of the teams illustrate a broader point. Team A came up with a very sophisticated algorithm using the Netflix data. Team B used a very simple algorithm, but they added in additional data beyond the Netflix set: information about movie genres from the Internet Movie Database (IMDB). Guess which team did better?

Team B got much better results, close to the best results on the Netflix leaderboard!! I'm really happy for them, and they're going to tune their algorithm and take a crack at the grand prize. But the bigger point is, adding more, independent data usually beats out designing ever-better algorithms to analyze an existing data set. I'm often surprised that many people in the business, and even in academia, don't realize this."

data mining courses said...

Anand Rajaraman from Stanford seems to be teaching his students the right things.