Company Offers Customizable Web Spidering

TechReviewAl writes "A company called 80legs has come up with an interesting new web business model: customized, on-demand web spidering. The company sells access to its spidering system, charging $2 for every million pages crawled, plus a fee of three cents per hour of processing used. The idea is to offer Web startups a way to build their own web indexes without requiring huge server farms. 'Many startups struggle to find the funding needed to build large data centers, but that's not the approach 80legs took to construct its Web crawling infrastructure. The company instead runs its software on a distributed network of personal computers, much like the ones used for projects such as SETI@home. The distributed computing network is put together by Plura Processing, which rents it to 80legs. Plura gets computer users to supply unused processing power in exchange for access to games, donations to charities, and other rewards.'"
  • by nweaver ( 113078 ) on Monday September 28, 2009 @05:43PM (#29572693) Homepage

    Let's assume that spidering a page costs 10 kB of data.

    So that's $2 for 1M pages, or 10 GB of data downloaded.

    So that's at least $1 of data transfer being shifted onto the suckers, er, "volunteers" whose home networks are running this app.
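
    To make the arithmetic explicit, here's a back-of-the-envelope sketch; the $0.10/GB transfer price is my assumption, not a figure from the article or the parent:

    ```java
    // Back-of-the-envelope cost of the bandwidth shifted onto the volunteers.
    // The $0.10/GB transfer price is an assumed figure, not one from the article.
    public class CrawlCostEstimate {
        public static void main(String[] args) {
            double pageSizeKb = 10.0;          // assumed average page size
            long pages = 1_000_000L;           // 80legs' $2 pricing unit
            double gbDownloaded = pageSizeKb * pages / 1_000_000.0; // kB -> GB
            double transferPricePerGb = 0.10;  // assumed commodity transfer price
            System.out.printf("Data downloaded: %.1f GB%n", gbDownloaded);
            System.out.printf("Transfer cost shifted to volunteers: $%.2f%n",
                    gbDownloaded * transferPricePerGb);
        }
    }
    ```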

    • If you're running a business and you want to safely crawl your own intranet, say to index your documents, there are other options [google.com.au].
    • It's expensive and inefficient to set up your own crawlers. This gives a viable alternative -- with bandwidth and CPU provided by people who [presumably] know what they're giving up, and who are doing it in exchange for some other value received.

      All in all, I'd have to say this is a pretty good idea.

      • It is really easy to make a web crawler in Java (look at java.net.URL and java.net.HttpURLConnection; a rough sketch follows this sub-thread). I made a decent one by myself in about a week. Okay, so my web crawler only does text/HTML. No images, no ActiveX, no video. From experience, an average web page is about 10 kB. Now, anyone's specific application will probably be looking for key words, or else you are just re-creating Google. A key-word data crawl would return a LOT less information, but would still require a lot of bandwidth and processing.
        • Sure, that's mostly what I was referring to. The code for a crawler is simple - the resources to use one effectively are something else entirely; the intarwebs are just too big for a startup to index without laying down tons of cash for hardware/"cloud" hardware.
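
      For the curious, here is a minimal sketch of the kind of text/HTML-only crawler described above, using nothing but java.net; the seed URL and the naive regex link extraction are illustrative assumptions, not anything 80legs actually ships:

      ```java
      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.util.ArrayDeque;
      import java.util.Deque;
      import java.util.HashSet;
      import java.util.Set;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      // Minimal text/HTML-only crawler: breadth-first fetch, naive regex link
      // extraction. A sketch of the technique, not production code.
      public class TinyCrawler {
          private static final Pattern LINK =
                  Pattern.compile("href=[\"'](http[^\"'#]+)[\"']");

          public static void main(String[] args) {
              Deque<String> frontier = new ArrayDeque<>();
              Set<String> seen = new HashSet<>();
              frontier.add("http://example.com/"); // illustrative seed URL

              while (!frontier.isEmpty() && seen.size() < 100) {
                  String url = frontier.poll();
                  if (!seen.add(url)) continue;
                  try {
                      HttpURLConnection conn =
                              (HttpURLConnection) new URL(url).openConnection();
                      conn.setRequestProperty("User-Agent", "TinyCrawler/0.1");
                      String type = conn.getContentType();
                      if (type == null || !type.startsWith("text/html"))
                          continue; // skip images, video, etc.

                      StringBuilder page = new StringBuilder();
                      try (BufferedReader in = new BufferedReader(
                              new InputStreamReader(conn.getInputStream()))) {
                          String line;
                          while ((line = in.readLine()) != null) page.append(line);
                      }
                      System.out.println(url + " (" + page.length() + " chars)");

                      Matcher m = LINK.matcher(page);
                      while (m.find()) frontier.add(m.group(1));
                  } catch (Exception e) {
                      // Bad URL or network error: skip and move on.
                  }
              }
          }
      }
      ```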
    • It's more like 20 kB per page. First the page gets downloaded onto the user's PC, but unless the company goes to physically pick up that user's hard disk, it will have to be uploaded through the network to their servers eventually.
    • by skeeto ( 1138903 )

      Almost everyone running Plura's crap is unaware of it. It's embedded in web pages like advertising. For example, you know the highly popular Desktop Tower Defense?

      http://www.handdrawngames.com/DesktopTD/Game.asp [handdrawngames.com]

      Look at the page source. There's a Plura bug on it, running the whole time you're playing the game. They've been doing this for a long time already.

  • Nifty... (Score:3, Interesting)

    by ZekoMal ( 1404259 ) on Monday September 28, 2009 @05:45PM (#29572729)
    But whenever I see something nifty combined with the internet, I immediately think "now how will this be used to spam and/or infect people..."
  • Hrm... (Score:5, Insightful)

    by vertinox ( 846076 ) on Monday September 28, 2009 @05:47PM (#29572771)

    Sounds like a legitimate front for identity thieves, spammers, or even worse... Marketers.

    I suppose it's easier than running your own botnet.

    • Re: (Score:1, Funny)

      by Anonymous Coward

      or even worse... Marketers

      So, which one should it be - insightful or redundant?

  • Free games for spare cycles/bandwidth? That's more interesting to me than the spidering stuff. How do I sign up?
    • Since it's not really free, I'd rather have some monetary compensation if I were to participate in the program.
  • Seems cheap! (Score:4, Insightful)

    by Pedrito ( 94783 ) on Monday September 28, 2009 @05:50PM (#29572809)
    Seems like an awfully cheap way to spider millions of pages of porn. It would be worthwhile if Google didn't do it already for free.
    • Turn off safesearch in Bing and search videos.

    • by Maxmin ( 921568 )

      It would be worthwhile if Google didn't do it already for free.

      You've missed the point... or you've never tried to use Google programmatically.

      Google's search APIs are all bound to JavaScript now. There is no way to connect to them from your Java, Python or Ruby application -- not, at least, without getting your IP(s) blocked for running too many queries.

      This spidering service provides something similar to what Alexa Web Search once did.

  • Buried in Digsby (Score:4, Informative)

    by Anonymous Coward on Monday September 28, 2009 @05:50PM (#29572821)

    This is apparently the service that caused a lot of controversy when people discovered it was somewhat hidden in Digsby [wikipedia.org].

    • From the wikipedia entry you cite:

      Digsby developer "chris" has stated that CPU usage is limited to 75% for desktops, and 25% for laptops unless operating on battery power.

      Does that sound like an insane amount of CPU usage for a damn IM client to anyone else? Why the hell would they embed Plura into an IM client anyway? This whole thing seems too fishy to me.
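
      For reference, capping a background worker at a fixed CPU fraction, as the Digsby developer describes, is typically done with a work/sleep duty cycle. Here is a generic sketch of that technique -- not Plura's actual code:

      ```java
      // Generic duty-cycle throttle: run a unit of work, then sleep long
      // enough to keep average CPU usage near the target fraction.
      // Illustrates the technique only; not Plura's implementation.
      public class ThrottledWorker {
          public static void run(Runnable unitOfWork, double cpuFraction)
                  throws InterruptedException {
              while (!Thread.currentThread().isInterrupted()) {
                  long start = System.nanoTime();
                  unitOfWork.run();
                  long busyNanos = System.nanoTime() - start;
                  // busy / (busy + idle) == fraction  =>  idle = busy * (1/f - 1)
                  long idleNanos = (long) (busyNanos * (1.0 / cpuFraction - 1.0));
                  Thread.sleep(idleNanos / 1_000_000, (int) (idleNanos % 1_000_000));
              }
          }

          public static void main(String[] args) throws InterruptedException {
              // Cap an artificial workload at ~75% of one core, per the
              // desktop figure quoted from the Digsby developer.
              run(() -> { for (int i = 0; i < 5_000_000; i++) Math.sqrt(i); }, 0.75);
          }
      }
      ```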

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Why the hell would the embed plura into an IM client anyway?

        Unfortunately, it's all about money.

  • They are currently recruiting only Flash game developers, but I can imagine this getting as big as advertising is right now. It could even keep newspapers alive. "Do you want to access my free content? Sure, but gimme 10% of your processing power." As long as there is demand for this computing power, we are quite able to harness it.
  • There is a spider crawling the web that claims to be building a free, downloadable web index for similar purposes.
    Torrent link for the index and information at http://www.dotnetdotcom.org/ [dotnetdotcom.org].

  • Another rationalization to spend more money on computer hardware at my next upgrade.
  • by 93 Escort Wagon ( 326346 ) on Monday September 28, 2009 @06:18PM (#29573131)

    I can see how they might get a fair number of people to donate their spare cycles for this, if the rewards are seen as sufficiently interesting. But are there really a whole bunch of startups (or other companies) champing at the bit to create a new search engine? Other than marketers or malware purveyors, I mean. And do these searches honor robots.txt exclusions?

    BTW I took a quick look at 80legs' website in an attempt to get these answers. I came up empty in that regard - so I will comment on how the CEO's hair makes him look like an in-disguise member of the Conehead family. Seriously, what's with the hair?

  • Occam's razor. (Score:3, Insightful)

    by icepick72 ( 834363 ) on Monday September 28, 2009 @06:41PM (#29573345)
    The levels of indirection needed to support this system -- distributed clients, incentives for being a distributed client, power supply vs. demand, payment for custom spidering -- make the system many things at once and unnecessarily complex, because those things already exist for free and in less complex ways. The simpler mechanisms have always sufficed for most needs.
  • Reality... (Score:2, Insightful)

    by JuSTCHiLLiN ( 605538 )
    Plura gets computer users to supply unused processing power in exchange for access to games, donations to charities, and spyware.
  • I am surprised that a post containing the words "SETI", "80legs", "crawling", "computer", "spider", "farm", and "unused power" does not have the plot of Jodie Foster listening to a radio telescope and discovering that evil giant mutant cyborg space spiders are trying to invade Earth and capture humans as batteries.
  • by PCM2 ( 4486 ) on Monday September 28, 2009 @07:40PM (#29573947) Homepage

    Is there really a big demand out there for outsourced spidering? I had not heard of this market. They seem to be implying that there are all these start-up outfits out there who have invented really amazing, unique UIs that allow people to find exactly what they need on the Web, and all they need to be successful is access to a searchable index. Huh??

    I mean, if you're going to be some kind of start-up search engine or "semantic company" (whatever that means), shouldn't Web spidering be your core competency? If you're going to differentiate yourself in the market, how can you buy spidering as a commodity? How do you expect to attract any investment if you're telling potential investors that you rent your spidering capability from another start-up -- let alone one that uses some kind of half-baked P2P technology to do the work?

    Seriously, in a world where Google seems willing to partner with just about anybody who needs any kind of searching for reasonable rates, what is this company's proposed customer base? (And no, the Technology Review article includes no quotes from customers at all.)

    • by mgkimsal2 ( 200677 ) on Monday September 28, 2009 @08:49PM (#29574745) Homepage
      "I mean, if you're going to be some kind of start-up search engine or "semantic company" (whatever that means), shouldn't Web spidering be your core competency? If you're going to differentiate yourself in the market, how can you buy spidering as a commodity?"

      Raw spidering is pretty much a commodity already. You're issuing GET requests over HTTP (for the most part). The "semantic" stuff comes into play when analyzing the results and doing interesting things with the raw information you get back (a toy illustration follows this comment). If people can spend more time focused on the 'interesting bits' and less time on scaling up to pull in the raw data to analyze, they'll be better off for it and more likely to be able to focus on creating something new/interesting/distinguishing.

      People (generally) don't write their own web servers or TCP/IP stacks, and often don't write their own session-handling logic or security code. All of these things have been commoditized. Perhaps too many people are relying on 'cloud computing' these days, but hosting and storage 'in the cloud' is where all the cool kids are playing right now (I don't necessarily agree with it, and probably wouldn't put all my eggs in that basket myself, but others are doing so). Spidering may be the next frontier to get commoditized.

      Perhaps not everyone is comfortable 'partnering' with Google for everything? If someone were going to work on developing the 'next big thing', would you rather invest in a team that had spent an inordinate amount of time building up network capacity to do drone work, or one that used a service like 80legs, or one that built its prototypes on Google's servers? Depending on the project, any of those make sense, but I'd prefer to use a service like 80legs myself. They're small enough and hungry enough that they should give top-notch customer service at this stage, whereas Google's not going to give you a number to call for direct service (maybe they do if you're spending loads of money, but then you're back to wise use of money).

      The P2P aspect of how they're doing the spidering may be clever, but I'd rather see a more direct use of data-center resources around the globe than reliance on a SETI-like participation model.
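
      To make the point above concrete, in the toy sketch below the "commodity" part is a single GET; everything interesting happens after it. The URL, tokenization, and stop-word list are all illustrative assumptions:

      ```java
      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.net.URL;
      import java.util.HashMap;
      import java.util.Map;
      import java.util.Set;

      // The "commodity" part is one GET; the (toy) semantic part is everything after.
      public class TermCounter {
          private static final Set<String> STOP_WORDS =
                  Set.of("the", "a", "an", "and", "or", "of", "to", "in", "is");

          public static void main(String[] args) throws Exception {
              StringBuilder html = new StringBuilder();
              try (BufferedReader in = new BufferedReader(new InputStreamReader(
                      new URL("http://example.com/").openStream()))) { // illustrative URL
                  String line;
                  while ((line = in.readLine()) != null) html.append(line).append(' ');
              }

              // Strip tags, lowercase, split on non-letters, count terms.
              String text = html.toString().replaceAll("<[^>]*>", " ").toLowerCase();
              Map<String, Integer> counts = new HashMap<>();
              for (String word : text.split("[^a-z]+")) {
                  if (word.length() > 2 && !STOP_WORDS.contains(word))
                      counts.merge(word, 1, Integer::sum);
              }
              counts.entrySet().stream()
                    .sorted((a, b) -> b.getValue() - a.getValue())
                    .limit(10)
                    .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
          }
      }
      ```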
      • Re: (Score:3, Informative)

        by Jack9 ( 11421 )

        Advertising uses a fair amount of spidering for things such as contextual search (where has a user been, and what are their interests). Amazon was completely apathetic toward a company that offered $50 million for sending them crawling business. I was surprised, to say the least. When that company then attempted to do it piecemeal, Amazon got very upset. So there's demand, but it's probably not very large (in the number of capitalized customers).

  • Rent our botnet! (Score:3, Interesting)

    by Animats ( 122034 ) on Monday September 28, 2009 @11:03PM (#29575909) Homepage

    This looks like an attempt to monetize a botnet. What, exactly, do the people running their "client" get out of this? Do they know they're sucking bandwidth, and possibly being billed for it, on behalf of someone else?

    I run a web spider [sitetruth.com] of sorts. And I know the people who run a big search engine. Reading the web sites isn't the bottleneck. Analyzing the results and building the database is. Outsourcing the reading part doesn't buy you much. If this just did a crawl, it would be of very limited value. That's not what it does.

    What they're really doing [pbworks.com] is offering a service that lets their customers run the customer's Java code on other people's machines in the botnet. That's worrisome. There are some security limits, which might even work. Supposedly, all the Java apps can do is look at crawled pages and phone results home. Right.

    This thing uses the Plura botnet. [pluraprocessing.com] "Plura® is a grid computing system. We contract with affiliates, who are owners of web pages, software, and other services, to distribute our grid computing code. We utilize the excess resources of peripheral computers that are browsing the internet when such browsing leads to a web page of one of our affiliates. That web page has imbedded code that allows the visitor to participate in the grid computing process. We also utilize embedded code in software and other services to allow such participation." Not good.

    The main infection vector is apparently the Digsby chat client [lifehacker.com], which comes bundled with various crapware. The Digsby feature list [digsby.com] does not mention that Plura is in their package.

    This thing needs to be treated as hostile code by firewalls and virus scanners.
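
    For what it's worth, the "security limits" in question would most plausibly be built on the JVM's own sandboxing machinery (as a commenter notes below). Here is a generic sketch of confining untrusted work to a single permission; the host name and policy are assumptions about how such a sandbox might look, not Plura's actual setup:

    ```java
    import java.net.SocketPermission;
    import java.security.AccessControlContext;
    import java.security.AccessController;
    import java.security.CodeSource;
    import java.security.Permissions;
    import java.security.PrivilegedAction;
    import java.security.ProtectionDomain;
    import java.security.cert.Certificate;

    // Confine untrusted "analysis" code so it can only connect back to one
    // host. Illustrates the JVM sandbox technique, not Plura's implementation.
    // On JDK 18+ the SecurityManager is deprecated; start the JVM with
    // -Djava.security.manager=allow for this sketch to run.
    public class SandboxSketch {
        public static void main(String[] args) {
            System.setSecurityManager(new SecurityManager());

            // Grant exactly one permission: "phone results home".
            Permissions perms = new Permissions();
            perms.add(new SocketPermission("results.example.com:80", "connect")); // assumed host

            AccessControlContext sandbox = new AccessControlContext(
                    new ProtectionDomain[] {
                        new ProtectionDomain(
                                new CodeSource(null, (Certificate[]) null), perms)
                    });

            AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
                // Untrusted code runs here. File reads, Runtime.exec, or sockets
                // to any other host throw AccessControlException.
                System.out.println("analyzing crawled page...");
                return null;
            }, sandbox);
        }
    }
    ```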

    • Re: (Score:2, Interesting)

      by javajedi ( 81810 )

      Outsourcing the reading part doesn't buy you much. If this just did a crawl, it would be of very limited value. That's not what it does.

      Wrong. If I want to spider a single web site, its rate-limiters may kick in and block me after a while. This would allow me to hit it from multiple machines.

      There are some security limits, which might even work. Supposedly, all the Java apps can do is look at crawled pages and phone results home. Right.

      Why the sarcasm? This seems like a perfect use case for the JVM's security mechanism.

      • by Ant P. ( 974313 )

        many sites have rate-limiters that kick in and will block me after a while. This would allow me to hit it from multiple machines.

        Many sites have rate limiters to prevent denial-of-service attacks. This would allow easy DDoS attacks.

        ftfy

      • by Animats ( 122034 )

        If I want to spider a single web site, many sites have rate-limiters that kick in and will block me after a while. This would allow me to hit it from multiple machines.

        The better web spiders run very slowly as seen from each site. At one time, Google only read about one page every few minutes per site. The Internet was slower then. Cuil's crawler is known to be overly aggressive, but that's a design flaw. (Too much distribution, not enough coordination.)

        At SiteTruth, we never read more than 20 pages
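
        Concretely, the politeness described above reduces to enforcing a minimum delay between fetches of the same host. Here is a generic sketch of that bookkeeping; the two-second interval is an illustrative value, not SiteTruth's or Google's actual figure:

        ```java
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Per-host politeness: block until at least minIntervalMillis has
        // passed since the last fetch of the same host.
        public class PerHostRateLimiter {
            private final long minIntervalMillis;
            private final Map<String, Long> nextAllowed = new ConcurrentHashMap<>();

            public PerHostRateLimiter(long minIntervalMillis) {
                this.minIntervalMillis = minIntervalMillis;
            }

            public void acquire(String host) throws InterruptedException {
                long waitMillis;
                synchronized (this) {
                    long now = System.currentTimeMillis();
                    long earliest = nextAllowed.getOrDefault(host, 0L);
                    long start = Math.max(now, earliest);
                    nextAllowed.put(host, start + minIntervalMillis);
                    waitMillis = start - now;
                }
                if (waitMillis > 0) Thread.sleep(waitMillis);
            }

            public static void main(String[] args) throws InterruptedException {
                PerHostRateLimiter limiter = new PerHostRateLimiter(2000); // 2s, assumed
                for (int i = 0; i < 3; i++) {
                    limiter.acquire("example.com");
                    System.out.println("fetch #" + i + " at " + System.currentTimeMillis());
                }
            }
        }
        ```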

  • Can we generate a list of applications known to use Plura? Or does one already exist?

  • http://www.insuma.de/ [insuma.de] offers a similar service
