The Internet / Technology

Why Good Data Can Be Hard to Find Online

WSJdpatton writes to mention that Carl Bialik has an interesting look at why good data can be hard to find, much less understand, online. He cites a couple of examples, both Google's first-quarter performance numbers and Alexa's revamp of their number-tracking process. "Now Alexa is incorporating other sources of data -- though it says the prior ranking 'wasn't wrong before, but it was different.' Some sites saw big changes in their rankings following Alexa's move: The tech blog TechCrunch said it fell far from its prior position in Drudge Report territory (rarefied air in Web-traffic terms). On Friday afternoon, Drudge Report ranked 545th, compared with TechCrunch's ranking of 1,784th, according to Alexa's new math."
  • Alexa? No. (Score:4, Informative)

    by Slashdot Suxxors ( 1207082 ) on Friday April 18, 2008 @09:17PM (#23124568)
    This [slashdot.org] isn't exactly on topic, but I think you should give it a read before you form a final opinion on what the article is trying to say.
    • Re:Alexa? No. (Score:5, Insightful)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Friday April 18, 2008 @09:24PM (#23124598) Homepage Journal
      The article and the slashdot story seem to say the same thing - the numbers produced are just numbers out of a hat. They don't represent anything meaningful and indeed can't because the participants are self-selecting and therefore not a random sample of the population. This is obvious and always has been. The popularity of a site (or a TV show or anything else) cannot be measured by any simple means, if it can be measured at all.
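      A toy simulation can make the self-selection point concrete. This is a hypothetical sketch (the site names, visit counts, and toolbar-adoption rates are invented and have nothing to do with Alexa's real panel); it only illustrates how a self-selected measurement panel can invert a ranking:

          import random

          # Hypothetical numbers: a general-news site with 5x the real traffic of a
          # tech site, but tech-oriented visitors are 10x as likely to have the
          # measurement toolbar installed.
          random.seed(0)
          TRUE_VISITS = {"news-site": 100_000, "tech-site": 20_000}
          TOOLBAR_RATE = {"news-site": 0.01, "tech-site": 0.10}

          measured = {site: 0 for site in TRUE_VISITS}
          for site, visits in TRUE_VISITS.items():
              for _ in range(visits):
                  if random.random() < TOOLBAR_RATE[site]:
                      measured[site] += 1

          print("true ranking:    ", sorted(TRUE_VISITS, key=TRUE_VISITS.get, reverse=True))
          print("measured ranking:", sorted(measured, key=measured.get, reverse=True))
          # The panel sees roughly 2,000 tech-site hits vs. 1,000 news-site hits and
          # ranks them backwards, even though news-site really gets five times the traffic.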
      • Re:Alexa? No. (Score:5, Interesting)

        by TubeSteak ( 669689 ) on Friday April 18, 2008 @09:33PM (#23124652) Journal

        The article and the slashdot story seem to say the same thing - the numbers produced are just numbers out of a hat. They don't represent anything meaningful and indeed can't because the participants are self-selecting and therefore not a random sample of the population.
        Even with a random, statistically relevant sample size... the saying "lies, damn lies, and statistics" still applies.

        The popularity of a site (or a TV show or anything else) cannot be measured by any simple means, if it can be measured at all.
        Tivo & other DVRs would suggest otherwise.
      • Re:Alexa? No. (Score:5, Informative)

        by Firehed ( 942385 ) on Friday April 18, 2008 @10:33PM (#23124916) Homepage
        Maybe relative tracking can't be done by simple means, since it requires participation on everyone's part, but absolute local tracking is trivially easy on any server that supports server-side scripting and has some sort of database access. A couple of lines of code at the bottom of your page to insert a new row on each page load, and you've got nearly perfect visitor logs that can easily go beyond your standard server logs.

        Again, useless for relative popularity unless you have everyone's data. But it still tells you how popular your site is, which is great for ego boosting and advertiser stats if nothing else.

        (I'd suggest that Google Analytics is going to be a lot more useful in the long run, and it at least has the potential to provide relative data in addition to the absolute, but anything that relies on client-side scripting is going to give less accurate numbers, since clients can disable or screw around with scripting.)
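        To make the "couple of lines of code" idea concrete, here is a minimal sketch in Python with SQLite standing in for the database; the table layout, function name, and example request values are invented for illustration, and a real deployment would also want indexes, connection reuse, and some bot filtering:

            import sqlite3
            import time

            DB_PATH = "hits.db"  # hypothetical path

            def log_pageview(path, remote_addr, user_agent, referrer=""):
                """Append one row per page load -- the server-side equivalent of a hit counter."""
                con = sqlite3.connect(DB_PATH)
                with con:  # commits on success
                    con.execute(
                        """CREATE TABLE IF NOT EXISTS pageviews (
                               ts REAL, path TEXT, remote_addr TEXT,
                               user_agent TEXT, referrer TEXT)"""
                    )
                    con.execute(
                        "INSERT INTO pageviews VALUES (?, ?, ?, ?, ?)",
                        (time.time(), path, remote_addr, user_agent, referrer),
                    )
                con.close()

            # Call once per request from whatever renders the page:
            log_pageview("/index.html", "203.0.113.7", "ExampleBrowser/1.0", "http://example.org/")

        Counting visits is then one query away, e.g. SELECT COUNT(*) FROM pageviews WHERE path = '/index.html' -- absolute numbers only, as the parent says, not relative popularity.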
    • Alexa [alexa.com] confirms it.
  • by JadeAuto ( 935739 ) on Friday April 18, 2008 @09:43PM (#23124714)
    I read online somewhere that 70% of statistics online are made up. This article seems to prove the point. 4 out of 5 slashdotters agree! ;)
  • On Friday afternoon, Drudge Report ranked 545th, compared with TechCrunch's ranking of 1,784th, according to Alexa's new math.


    I don't know what they're complaining about, mine went down over a million positions [alexa.com]!
  • by Anonymous Coward
    Another example of "readjustment" besides Alexa is Hitslink. Last November, they revised their figures for OS share for March through October 2007. Linux went from a reported 0.81% share in October to 0.50%. They made only a brief allusion on their site to filtering out "unrepresentative" hits from their data. Recently, they again revised their Linux share for January 2008, from the original 0.67% to 0.64%. Even though Hitslink seems to have trouble deciding how many Linux users there are, that doesn't keep pe
  • by zappepcs ( 820751 ) on Friday April 18, 2008 @10:45PM (#23124964) Journal
    Just observing the Internet and then reading this ... just wow.

    Good data is HARD to find ANY FUCKING WHERE, never mind limiting your search to just online. Seriously!

    News online? Read the same story from 8 sources and form your own opinion. MSM sucks worse.

    Scientific data? Well, unless it's peer reviewed, you know it's probably suspect and needs to be verified against other data. Damn, even peer-reviewed scientific data should be compared against other data these days.

    How about encyclopedic data? There's Wikipedia, but make sure to corroborate the data, right?

    Read it in a blog? Check the data before you make up your mind.

    Hmmmm this sounds a lot like trying to find good data before the Internets were active. Damn, all that data is proffered up by humans... Humans are not infallible so I'm guessing that data provided by humans is going to be a bit 'not infallible' also.

    Where does the assumption that data online should be good data come from? wtf?
    • Absolutely agree. In fact, data is *usually* bad, regardless of medium. Even parents give their children bad advice at times.

      On second thought, I suspect they had to do something to make this a story, because no one cares about Alexa really, and this wouldn't have gotten published by the WSJ of all people if it had an honest headline.
  • No, really. (Score:5, Funny)

    by v(*_*)vvvv ( 233078 ) on Friday April 18, 2008 @11:21PM (#23125078)

    The company tracks the Internet habits of users of its browser toolbar ... These rankings have long been criticized ... because Alexa users may not behave like the Internet as a whole.
    Ya, who in the world uses the Alexa toolbar!?
  • Only Bad Lore can.
  • It's just your Google skills that suck... that's what my boss keeps telling me!
    • Jesus H Christ, why do you people insist on changing your font? Especially to a font like that! Don't you realize how fucking annoying it is?
      • by netdur ( 816698 )
        I did not change any font; this reply box doesn't let you change the font in the first place. That font is there by default, so ask /., not me... click "reply" and see for yourself.

        btw! What's the H for? Jesus Mohammed Christ?
  • With Comcast's monitoring of user traffic, they could provide reliable stats for their customer base. We ought to get something back from all this Big Brother stuff.

  • A Good Date (Score:3, Funny)

    by GalacticLordXenu ( 1195057 ) on Saturday April 19, 2008 @02:31AM (#23125722)
    I initially read this as being, "Why a Good Date Can be Hard to Find Online". Hell, I could have told you that! But alas...
  • A public health expert from Sweden - Hans Rosling, who teaches at the Karolinska Institutet - announced some time ago that he had persuaded the holders of UN-collected population data to publish it online for anyone who wants to analyze it (e.g., using his innovative visualization tool, GapMinder).

    I would say that the data he managed to get put online for anyone's use might be a counterexample to the poster's claim.

    Of course, you can decide for yourself... ;-)

    See his 2nd talk
  • by Jekler ( 626699 ) on Saturday April 19, 2008 @10:21AM (#23127402)
    The reason good data is hard to find online is chiefly a problem of perspective and of the models we use to differentiate good data from bad data. Those models rely primarily on the idea that it's all about numbers, or simply that more data is the same as better data. Whenever we come up with bad data, the "quantity model" dictates that we just need a larger sample.

    This model is directly related to how companies measure TV show quality. The theory is that the more people who watch a show, the better that show must be. This model is obviously faulty; almost everyone can agree that American Idol isn't in the same qualitative ballpark as The X-Files, Arrested Development, or Star Trek. The model fails because of the hugely limited scope of the examination. A number of variable factors aren't being considered, such as the fact that people own more TVs than when Star Trek was on, and that the model mistakes curious interest for enjoyment. The average person will stop and watch a car wreck for roughly the same amount of time they'll play with a yo-yo, but that doesn't mean the entertainment value of each is directly comparable; a whole different brain process is going on in the observers of each. The model of measuring quantities, however, assumes that two activities which consume the same amount of time are equivalent in all ways.

    Back to internet statistics. All this data mining and gathering is designed to ignore the differences in activities; it only catalogs information in terms of what's the same. As the article states, Alexa is always checking for biases. Well, the biggest bias in this model is the assumption that "in sufficient quantity, all things are interchangeable." It's the assumption that telemarketers and scammers work on, and it's why so many people go broke buying into those schemes: they buy into an assumption which is absolutely wrong.

    Many internet business models, data miners in particular, are built on the assumption that 1 million hits are the same regardless of where they come from. When you consider real factors, having 1 million people see your hand-made chain pouches at a shopping mall is not going to generate the same level of interest as having 1 million people see them at a renaissance fair.

    Of course that introduces a whole different problem with assumptions about targeting (I'm not going to get into that, only state that targeted marketing makes the assumption that timing doesn't matter).

    In conclusion, you can't treat people as a numbers game. People's behaviors (including their online behaviors) are complex, and any model that sorts people's differences the way a child might divide up a bag of Skittles by color is going to have a very high error rate.
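    To illustrate the "just get a larger sample" fallacy with a concrete, entirely made-up example: suppose Linux desktop share is really 2% overall, but the kind of people who end up in a measurement panel run Linux 8% of the time. Growing the panel only makes the wrong answer more precise:

        import random

        # Hypothetical shares, chosen only to illustrate bias vs. sample size.
        random.seed(1)
        TRUE_SHARE = 0.02    # share in the whole population (assumed)
        PANEL_SHARE = 0.08   # share among people who self-select into the panel

        for n in (100, 10_000, 1_000_000):
            hits = sum(random.random() < PANEL_SHARE for _ in range(n))
            print(f"panel size {n:>9}: estimate {hits / n:.3f}  (true share {TRUE_SHARE:.3f})")
        # The estimate converges on 0.080, not 0.020: more of the same biased data,
        # not better data. The sampling noise shrinks, but the bias does not.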
    • by frsmith ( 836165 )
      The problem here is that enough people believe in the 'figures' to make a whole industry out of them.

      Marketing seems to work if you have a big enough sample!
      It's like soaps on TV: they churn over the same X million viewers, so they're considered a success.
      The other 48 million (UK) don't watch them, and I would regard that as a failure.
      Word of mouth is now becoming more powerful, as we have the web and can spread the word so much faster.
      This is what film makers are finding out, that crap film would have don
  • This is a problem I've noted before (for example, here [dailyrevolution.net]). I have the same Google PageRank as sites with hundreds of times more traffic. In short, I've yet to see a metrics or analytics tool that is truly reliable.
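    For what it's worth, PageRank (in its textbook form; Google's production system is far more involved) is computed purely from link structure, so it has no reason to track traffic. A toy power-iteration sketch over an invented four-page link graph shows that the score depends only on who links to whom:

        # Made-up link graph; the page names and links are purely illustrative.
        links = {
            "blog":    ["bigsite"],
            "bigsite": ["blog", "partner"],
            "partner": ["bigsite"],
            "orphan":  ["bigsite"],
        }
        pages = list(links)
        d = 0.85                                    # damping factor
        rank = {p: 1.0 / len(pages) for p in pages}

        for _ in range(50):                         # power iteration
            new = {p: (1 - d) / len(pages) for p in pages}
            for p, outs in links.items():
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
            rank = new

        for p in sorted(rank, key=rank.get, reverse=True):
            print(f"{p:8s} {rank[p]:.3f}")
        # Visitor counts never enter the computation, so two sites with the same
        # rank can easily differ in traffic by orders of magnitude.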
