Netflix Announces Second Data Mining Contest
John Snodgrass writes "Neil Hunt, Chief Product Officer at Netflix, has announced on the Netflix Prize Forums that they are planning to hold a new data mining competition. The second competition will have some twists and is expected to be shorter in duration. It will feature two grand prizes, to be awarded in 6 and 18 month time frames. A previous competitor still active on the board has already dubbed them 'The Sparse Matrix: Reordered' and 'The Sparse Matrix: Factorizations.'"
Re:Usefulness? (Score:2, Interesting)
As someone who used to watch 3 movies a day for about 3 years straight, I still found the system to be useful.
I thought I'd seen everything that was worth watching, but if you're really dedicated to finding more quality films, then any help is good help. This is one of the better systems for finding new films (more accurate than trawling IMDb, though maybe not quite as fun).
Human reaction machines. . . (Score:4, Interesting)
There's nothing at all wrong with studying how automatic human processes work, but "Psychology for Prizes" does have a very Neal Stephenson feel to it.
The public eagerly jumping for the chance to teach corporate bodies how to better advertise to them seems a little preposterous. In a world where everybody's objective is openness and self-study for the betterment of humankind, this sort of thing would be laudable, but here it's a bald-faced attempt to fine-tune manipulation techniques.
What would be cool is if Netflix, upon offering you a suggestion, would also explain the reasoning it used to arrive at that suggestion. Open-source advertising. If every billboard had an explanation of the psychology behind it, we could learn much more about ourselves. The share of our daily behavior that is genuinely free choice rather than automatic can only increase when the illusion of free will is broken down and examined.
-FL
Re:Contests (Score:5, Interesting)
A contest like this allows researchers to "cheat" a bit via an argument from authority, which is not always good, but it does at least make the researchers' job easier. A big issue in data mining is that it isn't a purely technical field, but one with both conceptual and technical problems. The over-arching goal is something like "get useful and/or interesting information out of data". But what is "useful", what is "interesting", and how do we measure whether we've gotten it? Usually you have to defend why your problem is the right one, why your metric is the right way to measure success on it, and so on. Working on the Netflix competition lets you sidestep all that, because Netflix has already decreed exactly what the goal is and what performance metric will be used to judge success, leaving only the technical problems.
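For context, the metric Netflix decreed for the original Prize was root-mean-squared error (RMSE) between predicted and actual ratings on a held-out set. A minimal sketch of that metric (the rating values here are made up for illustration):

```python
import math

def rmse(predicted, actual):
    """Root-mean-squared error between predicted and true ratings."""
    assert len(predicted) == len(actual) and len(actual) > 0
    squared_errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical held-out ratings (1-5 stars) and a model's predictions
actual = [4, 3, 5, 2, 4]
predicted = [3.8, 3.4, 4.5, 2.9, 4.1]
print(round(rmse(predicted, actual), 3))  # → 0.504
```

With the metric fixed in advance like this, "success" stops being a matter for debate: every submission is scored the same way, and only the technical work of driving the number down remains.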