Researchers Pulled Out $1M Netflix Victory in Last Half Hour

Dramatic, down-to-the-wire finishes aren’t the sorts of things one generally associates with technology prizes. However, the team that won the US$1 million prize awarded by Netflix says that’s exactly how this contest went down.

The team received the award Monday at a New York ceremony hosted by the online movie rental company, which launched the contest to see whether anyone could solve a problem that had vexed its own researchers: how to improve by at least 10 percent the recommendation engine that suggests which movies Netflix members should rent next.

Contestants received a database stripped of personally identifiable details to work on.
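The contest scored submissions by prediction error against a held-out set of ratings. The sketch below, written in Python with placeholder numbers rather than the contest’s actual figures, illustrates what a 10 percent improvement target on such an error metric amounts to.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical figures for illustration only, not the contest's actual scores:
# if the incumbent engine's error on a held-out set were 0.95, a 10 percent
# improvement would mean driving the error down to 0.855 or lower.
baseline_error = 0.95
target_error = baseline_error * 0.90
print(f"Error needed for a 10 percent improvement: {target_error:.4f}")

# Scoring a small batch of predictions against actual ratings.
print(rmse([3.8, 2.9, 4.4], [4, 3, 5]))
```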

In late July, as the contest was drawing to a close, the team that would ultimately win, “BellKor’s Pragmatic Chaos,” found itself second on the public leaderboard with 24 hours remaining. The BellKor team had to come up with a 0.01 percent improvement in order to win.

Eleventh-Hour Tweak

“Predictors were blended in,” the team wrote on its blog. “New techniques were tried out. Code was written. Nothing seemed to be helping to tip the scale.”

In the end, the team said, with 30 minutes left to go, they found a small adjustment, tossed it into the code and submitted the final method. They initially believed their top competitors, a team called “Ensemble,” had won, but BellKor later learned they had eked out a victory.

“Our approach was very pragmatic, as our name implies,” said Martin Chabbert, one of the members of the award-winning team. The group was an amalgam of three previous teams that had competed against one another for the prize.

While the solution is complex, two concepts are key, Chabbert said.

Frequency and Probability

One is frequency, the number of ratings a user submits on a given day. That matters, the team found, because a large batch of ratings entered on a single day appears intended more to “feed” the Netflix recommendation engine than to reflect recent movie-watching experiences.

Such feed-based ratings appear to be biased, Chabbert said, because “when rating a batch of movies, users indicate how they remember movies that they saw a while ago, rather than indicate the feeling from the movie that they just saw.”
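A minimal sketch of how such a frequency signal might enter a model. This is not BellKor’s published method; the data layout, global mean and bias weight are placeholders chosen for illustration.

```python
from collections import defaultdict

# Toy ratings data: (user_id, movie_id, date, stars). Values are made up.
ratings = [
    ("u1", "m1", "2005-06-01", 4),
    ("u1", "m2", "2005-06-01", 3),
    ("u1", "m3", "2005-06-01", 5),
    ("u2", "m1", "2005-07-10", 2),
]

# Count how many ratings each user entered on each day (the "frequency").
ratings_per_day = defaultdict(int)
for user, movie, date, stars in ratings:
    ratings_per_day[(user, date)] += 1

GLOBAL_MEAN = 3.6          # hypothetical global average rating
BATCH_BIAS_WEIGHT = -0.05  # hypothetical correction for large same-day batches

def predict(user, movie, date):
    """Baseline prediction plus a frequency-dependent bias term."""
    freq = ratings_per_day[(user, date)]
    # Users who rate many movies in one sitting are rating from memory,
    # so the correction grows with the size of the same-day batch.
    return GLOBAL_MEAN + BATCH_BIAS_WEIGHT * max(0, freq - 1)

print(predict("u1", "m1", "2005-06-01"))
```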

The second point, Chabbert said, is that the team used models to predict a probability distribution, rather than assigning a specific predicted rating to a movie.

“There is a subtle difference, but we believe that these types of models are closer to the task given to a Netflix subscriber,” he said.
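A small illustration of the distinction, again with made-up scores rather than anything from the team’s actual models: instead of outputting one number, the model assigns a probability to each of the five star values, and a point estimate can still be read off as the expectation of that distribution.

```python
import math

def softmax(scores):
    """Turn raw per-star scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for star ratings 1 through 5, produced by some upstream model
# for one (user, movie) pair.
scores = [0.1, 0.4, 1.2, 2.0, 1.5]

distribution = softmax(scores)
expected_rating = sum((star + 1) * p for star, p in enumerate(distribution))

print([round(p, 3) for p in distribution])  # probability of each star rating
print(round(expected_rating, 2))            # point estimate, if one is needed
```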

Next Million-Dollar Challenge

The rules of the Netflix Prize require the winning team to publish its findings and allow it to license the technology to other companies.

The team’s work will be beneficial to other recommendation engines, Chabbert said, but it is not likely to have research implications beyond that narrow scope.

The team has not yet decided whether to enter the second $1 million contest announced yesterday by Netflix, which the company said will focus on “the much harder problem of predicting movie enjoyment by members who don’t rate movies often, or at all.”

That work will use demographic and behavioral data, the company said.
