
Netflix Prize May Have Been Achieved

MadAnalyst writes "The long-running $1,000,000 competition to improve on the Netflix Cinematch recommendation system by 10% (in terms of the RMSE) may have finally been won. Recent results show a 10.05% improvement from the team called BellKor's Pragmatic Chaos, a merger between some of the teams who were getting close to the contest's goal. We've discussed this competition in the past."
  • by Daimanta ( 1140543 ) on Friday June 26, 2009 @08:17PM (#28490045) Journal

    C'mon, the Netflix prize isn't THAT well known. At least you could have given some basic info about it.

  • for simple intellectual satisfaction, like a giant puzzle or a game of chess

    money is not the motivation for everything in this world

  • by quanticle ( 843097 ) on Friday June 26, 2009 @08:36PM (#28490151) Homepage

    Well, just like the Ansari X Prize didn't cover the costs of developing and launching a suborbital rocket, the Netflix Prize isn't really meant to be a large enough prize to fully fund the development of a new recommendation algorithm. The purpose of the prize is to stimulate interest and get people started. The real reward will come when they turn their algorithm into commercialized software - the rewards from making such a thing applicable outside of Netflix could be large indeed.

  • by neokushan ( 932374 ) on Friday June 26, 2009 @08:41PM (#28490177)

    Pretty sure having it on their CV means they can effectively write their own pay cheque in terms of job opportunities.

  • by morgan_greywolf ( 835522 ) on Friday June 26, 2009 @09:01PM (#28490291) Homepage Journal

    Well, it was for AT&T. No, they don't want the prize money; they're donating it to charity. But what they do have now is an algorithm that can be turned into a commercial product or service. The individual researchers may not have had money as their primary motivator, but their employer sure as hell did.

  • Re:Interesting (Score:4, Insightful)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Friday June 26, 2009 @10:08PM (#28490683)

    Trying to recommend unpopular movies is problematic. Is the computer program going to be able to discern under-rated (Glengarry Glen Ross) or just crap (Ishtar)?

    That is indeed an interesting question, and I think it's what the grandparent meant when he pointed out that Netflix's contest didn't really address it. The performance measure Netflix used was root-mean-squared error, so every individual prediction counts equally in determining your error. Since the vast majority of predictions in the data set are for frequently-watched films, the prize was effectively focused on optimizing the common case: correctly predicting whether someone will like or dislike one of the very popular films. Getting the unpopular films right helps too, but all else being equal, it's better to make even tiny improvements to your predictions for films that appear tons of times in the data set than to make considerable improvements for less popular films, because the importance of getting a prediction right is in effect weighted by the film's popularity.

    You could look at error from a movie-centric perspective instead, though, asking something like, "how good are your recommender algorithm's predictions for the average film?" That shifts the focus, since an error of 1 star on Obscure Film's predictions then counts the same as an error of 1 star on Titanic's.
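    A toy sketch of the distinction (not the contest's code; the ratings below are made up): global RMSE weights every prediction once, so a popular film's many rows dominate, while a movie-centric average gives each film equal weight.

```python
import math

# (movie, true_rating, predicted_rating) -- a popular film appears
# many times, an obscure one only once. Purely illustrative data.
preds = [
    ("Titanic", 4, 4), ("Titanic", 5, 4), ("Titanic", 3, 4),
    ("Titanic", 4, 4), ("Titanic", 4, 4), ("Titanic", 5, 4),
    ("Obscure Film", 5, 2),  # one badly-missed prediction
]

def rmse(pairs):
    return math.sqrt(sum((t - p) ** 2 for t, p in pairs) / len(pairs))

# Contest-style global RMSE: every prediction counts once, so the six
# Titanic rows dominate and the single big miss barely moves the number.
global_rmse = rmse([(t, p) for _, t, p in preds])

# Movie-centric view: compute RMSE per film, then average, so each film
# counts once no matter how often it was rated.
by_movie = {}
for m, t, p in preds:
    by_movie.setdefault(m, []).append((t, p))
per_movie_rmse = sum(rmse(v) for v in by_movie.values()) / len(by_movie)

print(round(global_rmse, 3))     # dominated by Titanic
print(round(per_movie_rmse, 3))  # Obscure Film's miss counts equally
```

    The movie-centric number comes out much worse here, because the one badly-missed obscure film carries half the weight instead of one seventh.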

  • by Anonymous Coward on Saturday June 27, 2009 @12:26AM (#28491371)
    You're being elitist and silly. Perhaps you could do some research, but I have seen no evidence that interest in simple vs. complex plots has anything to do with intelligence. Certainly the type of plot one likes in a movie is something reasonable to consider. But assuming a relationship to IQ or EQ from that is silly.
  • Re:Interesting (Score:3, Insightful)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Saturday June 27, 2009 @02:29PM (#28496237)

    That's true, but since there's not a huge range in ratings, the squaring doesn't have nearly as big an effect as the many-orders-of-magnitude difference in popularity. I don't recall the exact numbers offhand, but I think the top 10 movies, out of 17,500, account for fully half the weight.

  • Re:Interesting (Score:2, Insightful)

    by coaxial ( 28297 ) on Monday June 29, 2009 @11:14AM (#28514361) Homepage

    1.) Rare could also be defined as unpopular. Trying to recommend unpopular movies is problematic. Is the computer program going to be able to discern under-rated (Glengarry Glen Ross) or just crap (Ishtar)?

    You know what. I actually like Ishtar. I really do. The blind camel, and the line "We're not singers! We're songwriters!" gets me every time.

    So really, the even harder problem is knowing when to buck your friends and go with the outlier. It's hard, because kNN methods work pretty well, and they're all about going with the consensus of whatever cluster you're in.
