The Google Search Server 178

An anonymous reader submitted a reasonably indepth review of the Google search appliance. The guys from anandtech put it through it's paces, and included a variety of pictures and comments on one of those Google products most of us will probably never play with.
This discussion has been archived. No new comments can be posted.

  • Neat insides (Score:5, Insightful)

    by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Tuesday September 06, 2005 @11:06AM (#13489902) Homepage Journal
    Let's see here:
    1. Took lots of pretty pictures [Check]
    2. Tore the box apart wondering if we could finally find a flux capacitor [Check]
    3. Tried to play with all the hardware and software we've been encouraged to leave alone. [Check]
    4. Actually tested how the device performed doing its intended function? [Why would you want to do that?]
    • Re:Neat insides (Score:5, Informative)

      by b0r1s ( 170449 ) on Tuesday September 06, 2005 @11:13AM (#13489953) Homepage
      These are neat little boxes - we've managed 2 (the yellow appliance, and the blue mini appliance), and the performance of both was pretty nice.

    The tools google provides (very easy binary updates and a strong web control panel, for example) turn a relatively common task into a dead-simple, point-and-click configuration.

      They even provide a decent interface for skinning the search pages, and while it's not perfect, it's certainly adequate for even the best looking sites on the internet.
      • Re:Neat insides (Score:3, Interesting)

        by hackstraw ( 262471 ) *
        I wish we would get one of those google appliances instead of whatever horrible search "solution" we have now. I use google with site:mysite.com to search our website.

        When looking at the google appliances, I thought it was really cool how it learns your specific terms and acronyms and it will do the "Did you mean correctspellingword?" like google does.

        Pretty slick from what I gather. I have no direct experience except for google proper.
    • by op12 ( 830015 )
      4. Actually tested how the device performed doing its intended function? [Why would you want to do that?]

      Quit complaining, it's not like this was being called an indepth review.....oh, wait.
    • by Anonymous Coward
      Actually tested how the device performed doing its intended function?

      You can do this yourself; try searching the Anandtech site. It's quick, and the results look like Google results.

    • 2. Tore the box apart wondering if we could finally find a flux capacitor [Check]

      I must say I'm disappointed that this [anandtech.com] is what Google passes off as a flux capacitor.
  • by stecoop ( 759508 ) * on Tuesday September 06, 2005 @11:08AM (#13489916) Journal
  • by DeadSea ( 69598 ) * on Tuesday September 06, 2005 @11:12AM (#13489948) Homepage Journal
    The Mini considers any unique URL string to be a unique document, which makes sense (but is a bit surprising the first time that you run an index). After four hours of indexing, the Mini had managed to reach its document limit and we had to improvise.
    Anybody who doesn't know that search engines consider each URL to contain a unique document doesn't know much about getting their site properly represented in search engines.

    Their solution was to create a list of urls for the appliance to crawl. If they had to do that for the search appliance, there is no way that googlebot, msnbot, or yahoo slurp is going to be able to properly index their site.

    Your publicly accessible URLs need to be managed and canonicalized through judicious use of robots.txt, 302 redirects, site-wide linking, and just plain thinking out the layout of your site.
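    As an illustration of what "canonicalized" means here, a rough Python sketch (my own rules, not anything a particular search engine actually does): lowercase the host, drop the default port and session/tracking parameters, and sort the query string, so that cosmetic URL variants collapse into a single document.

    ```python
    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    # Hypothetical set of query parameters that don't change the document.
    TRACKING_PARAMS = {"utm_source", "utm_medium", "sessionid"}

    def canonicalize(url):
        scheme, netloc, path, query, _fragment = urlsplit(url)
        netloc = netloc.lower()
        if netloc.endswith(":80"):          # drop the default HTTP port
            netloc = netloc[:-3]
        params = [(k, v) for k, v in parse_qsl(query)
                  if k not in TRACKING_PARAMS]
        params.sort()                       # ?a=1&b=2 == ?b=2&a=1
        path = path or "/"
        return urlunsplit((scheme.lower(), netloc, path,
                           urlencode(params), ""))
    ```

    Under these rules, `HTTP://Example.com:80/page?b=2&a=1#frag` and `http://example.com/page?a=1&b=2` index as one document instead of two.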

    • All of your points are valid. But you need to include countless digital photos to make sure that people think you know what it is you are talking about. Just like Anandtech.
    • by Anonymous Coward
      ... which flows right into this statement:
      A word to the wise: don't let the Mini crawl your entire site without keeping a close eye on it.

      The same could be said of any search engine, or any automated process for that matter. We use ht://Dig and the issues are the same, except ht://Dig can be run locally on the server, saving bandwidth (and speeding up the indexing process) by indexing locally and re-writing urls for static files, through apache for dynamic, it's free, and you aren't limited to 100000 docum
    • The real Google has duplicate detection to handle situations like these. Their crawl ends up with something like 30% duplicates from different sources, things like the same online manual repeated on dozens of different sites, mirrors, multiple servers, etc. They use various approximate-matching algorithms to find near-duplicates and merge them, so that the search results don't show the same document with a hundred different urls.

      Unfortunately feature holes like this are why the thing hasn't taken off. If
    • Keep in mind, AnandTech's previous search systems were all on the DB end, so it only counted each article once. Google Mini on the other hand counts the normal view of an article, the print view, etc. It is a very important consideration if you're moving from DB-based searching.
  • Was this a review? (Score:2, Informative)

    by defkkon ( 712076 )
    Was this a hardware review, or was this an instruction manual?

    I gotta say, I was looking for benchmarks, usability scores, maybe some test scenarios. Even better, compare this to other products available out there.

    It looked promising at the start, but when you get to the last page it leaves you wondering if they forgot the hyperlinks for the rest of the article!!

  • by hey ( 83763 )
    So Google subcontracted a company called GigaByte to make this box.
    I was disappointed to see GigaByte didn't use MegaByte to make some subcomponent.
    • I was disappointed to see GigaByte didn't use MegaByte to make some subcomponent.

      Maybe he was too busy trying to take over Mainframe? :o)
      • Maybe he was too busy trying to take over Mainframe? :o)

        Wow! That's a pretty obscure Reboot reference. I had totally forgotten about that show . . .
  • Oh come on (Score:4, Funny)

    by Black Perl ( 12686 ) on Tuesday September 06, 2005 @11:19AM (#13490008)
    First, it wasn't a review. They didn't review anything.

    Second, it was a Google Mini.

    Third, they didn't "put it through its paces" at all.

    Lousy article, misleading /. blurb. But it was about Google! Gooooooooogle!
  • Good, but... (Score:5, Interesting)

    by hazzey ( 679052 ) on Tuesday September 06, 2005 @11:21AM (#13490020)

    While this is an interesting article, it really isn't much of a review of the Google Mini. All they do is take it apart, take pictures, and tell you that they set it up after a little bit of trouble. There is nothing about how well it actually works. No benchmarks. No comparisons. They just say that it worked well and leave it at that. Anandtech has had more in-depth reviews of mice before.

    It is more information than I have seen anywhere else, though.

    • Re:Good, but... (Score:3, Interesting)

      by Donny Smith ( 567043 )
      I was surprised that they've done what they did.
      The terms & conditions probably forbid reverse engineering and/or disassembly of the appliance.

      It would have been veeerrry easy to rip out the HDD and mount it on a Linux box to check out its internals....
      They must have thought of that. As they've already ruined the warranty (by opening the box), it was probably the EULA or something like that that made them stop short of reviewing contents of the hard disks.
  • by nudeatom ( 740966 ) on Tuesday September 06, 2005 @11:21AM (#13490023)
    That's it, I gotta get me one of those just for the tee.
  • It's "its"! (Score:5, Informative)

    by dtmos ( 447842 ) on Tuesday September 06, 2005 @11:21AM (#13490029)
    The guys from anandtech put it through
    it's paces

    It's really easy: It's "his", hers", and "its". Even a flower [angryflower.com] knows!

    --cycling through grammar Nazi mode. Please wait.

    • by dtmos ( 447842 )
      Like a perfect vacuum, I believe nature abhors a grammar Nazi post without a grammatical error.

      Make that "It's "his", her", and "its".

      *sigh*

      --completed grammar Nazi mode. Resuming normal operation.
      • Sorry mate, Grammer Nazi errors are recursive... when you opened the double quote to quotate the phrase containing your own grandparent post error, you didn't close it!

        Should've used single quotes there in the first place, and confused everyone cos on computers they're drawn the same as apostrophes ;-)

        J.
    • Re:It's "its"! (Score:4, Informative)

      by Traa ( 158207 ) on Tuesday September 06, 2005 @11:36AM (#13490145) Homepage Journal
      Use "it's" when you can replace it with "it is"

      Well, that is what someone told me anyway. English is not my primary language, if the above is not correct then please don't shoot me.
      • That's not just correct, it is correct by definition. "It's" isn't a word in itself or anything, it's just short for "it is". The apostrophe is used to signify the removal of the space and the second "i". Nothing more, nothing less.
    • Yeah, it's really easy. Use an apostrophe when you want to show possession: Bob's, Sam's, It's.

      Oh, wait, I followed the wrong rule in my "proof."

      At least the whole "I before E" thing has a little rhyme that usually works.
  • where's the raid? (Score:5, Interesting)

    by Darth_Burrito ( 227272 ) on Tuesday September 06, 2005 @11:22AM (#13490034)
    Did it strike anyone else as insane that this thing only had one hard drive? For $3,000, where's the raid array? Ok, sure it's a search appliance and doesn't really hold any mission critical data, but if the hard drive crashes, how long is your search functionality going to be down? You'll need to get a replacement drive and rebuild your whole database (a slow crawl process). What about your configuration settings?
    • by horati0 ( 249977 ) on Tuesday September 06, 2005 @11:29AM (#13490087) Journal
      Did it strike anyone else as insane that this thing only had one hard drive? For $3,000, where's the raid array?

      Here. [gmail.com]
    • Re:where's the raid? (Score:5, Informative)

      by slim ( 1652 ) <john@hartnupBLUE.net minus berry> on Tuesday September 06, 2005 @11:32AM (#13490109) Homepage
      I guess if you want RAID, you pay more than $3,000.

      What you're really buying here is closed-source software, wrapped in the hardware that turns it into an "appliance". Assume $2,000 of that $3,000 pays for the software.

      By specifying the hardware in this way, and by keeping the BIOS and root passwords to themselves, Google greatly simplify their support role.

      This is common practice: an IBM HMC (Hardware Management Console) is a 1U PC with a custom Linux distribution and the management software preinstalled. You don't get the root password; you just use the software as delivered.
      • > I guess if you want RAID, you pay more than $3,000.

        Now that's just plain silly. A basic x86 1U server [pogolinux.com] runs around $1100 with two hard drives configured in software RAID1, which works wonderfully other than not allowing hotswap and preventing boot if the first drive is the one that fails. For another $150 or so you can add a hardware RAID card to fix both of those things and get slightly better performance.

        There is absolutely NO excuse for not running a raid on any modern server. Drives are the most
    • And what if the power supply fuzzes out?

      And what if a ram chip goes faulty?

      What if a capacitor on the motherboard starts leaking?

    Just get two of the damn things, place them in separate data centers, and round-robin them if search is a critical feature.
      • That's a good point, but consider that with the things you describe above (psu, mem, mobo), the problem can be solved by replacing the part. With a dead hard drive, all of the data needs to be recrawled and all of the settings need to be restored. It could be a more painful process, but like you said, if you're really worried about it, you could buy two.
        • That's a good point, but consider that with the things you describe above (psu, mem, mobo), the problem can be solved by replacing the part.

          I think the point is that Google doesn't want you replacing parts yourself. If you can deal with sending the device back to Google for servicing, then you can deal with reindexing.
    • The mini doesn't have raid, you have to buy one of the higher end models for that.

      Chris

  • by openSoar ( 89599 ) on Tuesday September 06, 2005 @11:24AM (#13490048)
    Maybe it takes a while for the documents to be indexed but you'd think they would have added it manually given the nature of the article.
  • From the Summary: "a reasonably indepth review of the Google search appliance."

    If, by "reasonably indepth review", you mean lots of pretty pictures and a narrative about opening the box and the case, then sure.

    Rather than calling this a review, perhaps it could be re-titled "One man's demonstration of the Google search appliance."

    That said, I'm a little concerned about how many URLs it can handle... 100,000? According to TFA, 40,000 documents overloaded this thing.

    The article did not address how th
    • That said, I'm a little concerned about how many URLs it can handle... 100,000? According to TFA, 40,000 documents overloaded this thing.


      My reading of TFA was that the Mini was encumbered with an arbitrary limit of 40,000 documents.

      That is, if you want to index >40,000, Google wants more money from you. It's purely to do with software licensing.
      • Anandtech said "The mini allows for 100,000 documents/URLs to be stored in a collection, and AnandTech contains approximately 40,000 articles, news and blog entries."

    But if each article is 3 pages long on average, that's 120,000 documents/URLs right there.
      • My reading of TFA was that the Mini was encumbered with an arbitrary limit of 40,000 documents.

    The appliance can index 100,000 at the lowest licensing level. Even if you only have 40,000 documents, you need to keep an eye on the crawler, and make some changes if it starts counting pages twice (printable/alternate versions, or multiple pages of single documents, perhaps).

    http://www.anandtech.com/IT/showdoc.aspx?i=2523&p=4 [anandtech.com]

      The mini allows for 100,000 documents/URLs to be stored in a collection, and AnandTech contains approximately 40,000 articles, news and blog entries.

      When we first set up the Mini, we told it to start in each of the website's sections (for example, http://www.anandtech.com/it/ [anandtech.com]) and in the web news area. The Mini considers any unique URL string to be a unique document, which makes sense (but is a bit surprising the first time that you run an index).

    I am aware of what TFA said. My point is this: 100k URLs is not a lot; I was merely pointing out that 40k docs can be > 100k URLs, and this means that capacity becomes an issue very quickly.

        I guess TFA being from the you-know-for-the-kids-dept explains it pretty well.
    • RTFA (and actually read it). The Google Mini has a built-in limit of 100,000 documents; it's not that it can't index more because of a lack of CPU power or HD space or whatever, it's just that if you want (or need) more than that, Google wants you to buy their regular Search Appliances instead.

      All this info can also be gotten from http://www.google.com/enterprise/ [google.com], which is exactly 1 (one) click away from Google's index page.
    If you're not careful when setting up your crawler, many search engines will index every link they find in a document, including the headers and footers on the page that point to About, Legal, Copyright, Sponsors and Links.

      Depending on how you have configured things, it may also go ahead and read your banner ads and such as well. If you haven't explicitly told your crawler to stay within someurl.com, then it will go ahead and index the links that go to outside sites as well.

      The solution that was presented in the
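    A minimal sketch of that kind of scope rule (my own Python illustration, with someurl.com standing in for the configured site as in the comment above; real crawlers express this through their own configuration):

    ```python
    from urllib.parse import urlsplit

    # Hypothetical scope check: follow a discovered link only if it stays
    # on the configured site or one of its subdomains.
    def in_scope(link, allowed_domain="someurl.com"):
        host = urlsplit(link).netloc.lower().split(":")[0]  # drop any port
        return host == allowed_domain or host.endswith("." + allowed_domain)

    links = [
        "http://someurl.com/about",
        "http://www.someurl.com/legal",
        "http://ads.example.net/banner",   # banner ad: out of scope
        "http://evilsomeurl.com/",         # suffix trick: also out of scope
    ]
    to_crawl = [u for u in links if in_scope(u)]
    ```

    Note the suffix check uses `"." + allowed_domain`, so `evilsomeurl.com` doesn't sneak in as a false match.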
  • Google ate my server (Score:5, Interesting)

    by PIPBoy3000 ( 619296 ) on Tuesday September 06, 2005 @11:28AM (#13490078)
    A few months ago, we asked for a demo of the product. My main involvement was to help compare with our existing search strategy. Just to cut to the chase, we generally had a very positive experience with it. Searches would bring up what we wanted more often than not. Our existing search system, which was based around IIS and custom SQL code, was pretty good, though it couldn't beat Google for pulling up relevant pages. We did have a few quirky things happen, though.

    We had a couple times when the appliance locked up and had to be rebooted. That was probably the most distressing as it had to be on 24x7 to support our organization and I wasn't looking forward to the help desk calls.

    More amusing, though, was the way it crawled content. Google works like any other crawler - it goes around and clicks hyperlinks. Unfortunately it's not too bright, not paying attention to the text of the hyperlink, like if it said "delete" or something like that.

    Unfortunately I had a poorly secured application that Google was able to sneak into via another link I wasn't aware of. It held the custom links for each of our departments to display a personalized set of links on the home page. Unfortunately it went through the admin tool and clicked every delete link it could find. I was paged the next morning and was fairly unhappy. My fault, though.

    The irony is that the budget money evaporated and we aren't getting it after all.
    • by Anonymous Coward
      Unfortunately I had a poorly secured application that Google was able to sneak into via another link I wasn't aware of. It held the custom links for each of our departments to display a personalized set of links on the home page. Unfortunately it went through the admin tool and clicked every delete link it could find.

      Sounds like it wasn't much of an admin tool if it required no authorization...any employee could have done what Google did, just not as quickly.
      • Don't ridicule his misery, AC, unless you're willing to post your name. Someday, once you graduate from high school, you will encounter this situation and you'll wish you weren't so critical.
      • Yep. It was clearly "my fault" in this particular case. It was one of those applications secured by NT groups. Unfortunately we had some issues where security got screwed up by an overzealous administrator and this one didn't get fixed. I ended up changing the security model after the fact, switching to my typical database authorization method.
    • by Augusto ( 12068 ) on Tuesday September 06, 2005 @11:56AM (#13490293) Homepage
    The problem is not google, it's the way your app is designed!

      Universal Resource Identifiers -- Axioms of Web Architecture : Identity, State and GET [w3.org]

      In HTTP, GET must not have side effects.

      In HTTP, anything which does not have side-effects should use GET

      If somebody visited your site with a pre-fetching tool like the google web accelerator, you will also find the "delete" button being checked automatically like this. Change those deletes to use POST instead.
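    The failure mode is easy to reproduce in miniature. The toy model below (my own illustration, not Google's or the parent's code) wires a delete handler to GET in one app and to POST in another; a spider that only ever issues GET wipes the first and can't touch the second:

    ```python
    # Toy web app: a set of items, and a delete handler bound to one method.
    class App:
        def __init__(self, delete_method):
            self.items = {"deptA", "deptB"}
            self.delete_method = delete_method  # method the delete link requires

        def request(self, method, path):
            if path.startswith("/delete/") and method == self.delete_method:
                self.items.discard(path.rsplit("/", 1)[1])

    def crawl(app, paths):
        # Crawlers follow links, and following a link is always a GET.
        for p in paths:
            app.request("GET", p)

    unsafe = App(delete_method="GET")    # "delete" as a plain hyperlink
    safe = App(delete_method="POST")     # "delete" behind a POST form
    for app in (unsafe, safe):
        crawl(app, ["/delete/deptA", "/delete/deptB"])
    ```

    After the crawl, `unsafe.items` is empty while `safe.items` is untouched, which is exactly why the spec reserves side effects for POST.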
    • Sigh, the exact same thing happened to me, except it was a non-google search engine (I forget which) that explicitly disobeyed robots.txt. Ditto as to my fault. Still annoying.

      Thank god for backups..
      • it was a non-google search engine (I forget which) that explicitly disobeyed robots.txt

        Robots.txt has the protective power of a big red Don't Push button on a public street. Heck, I keep an eye on anyone that comes to my datacenter, in case their eyes start to fixate on the EPO button...

    • The HTTP spec says that a GET should not perform anything, i.e. not change data. This is why "delete" hyperlinks should at least have an "are you sure" page with a posting form before actually deleting anything. Just a hint for your next project!

  • This was an interesting review if you had never seen what a google appliance looks like, but it wasn't very in-depth at all.

    I was certainly looking forward to some overclocking and Linux installing. I mean, I'm sure they voided whatever agreement they had with google just by opening the case up, so why not go all out and give us the review we really want to read?

    I didn't even realize the review was over until I realized there was no "next" button on that last page.
  • Just a matter of time before it's reverse engineered :)
  • by Anonymous Coward
    I can search the 63,000 online documents with http://www.google.com/search?q=site:www.anandtech.com
  • From TFA (Score:5, Funny)

    by Anonymous Coward on Tuesday September 06, 2005 @11:41AM (#13490189)
    The screw is threaded - it just can't be undone with a regular screwdriver.

    Right.. Only unthreaded screws can be opened by a regular screwdriver.
  • I thought Google used pigeons ...
  • by Homicide ( 25337 ) on Tuesday September 06, 2005 @11:44AM (#13490207) Homepage
    I admin a full-blown Google Search Appliance, the mini's big brother.

    If you want the specs:
    Dual Xeon 2.6GHz
    12GB RAM
    4 250GB HD's in RAID(something) with a hot-swap spare.

    Never tried taking off the cover though, since we want to keep the warranty.

    All of the money you pay is a license for the software on the box, the system itself is effectively free, so once the 2 year warranty expires, you've effectively got a nice powerful linux box for free. You can keep running the software, but without any support.

    As for performance, this thing works great. We have about 250,000 pages that it can index, both public and private (and it can do searches cleverly, checking username/password to see if you should have access to certain results), and we've had nothing but positive responses from our users. The results come up quickly, they're the results people want, and the results that management think should be at the top are at the top.
  • What happens after the BIOS screen and before you "log in" to the web interface? Surely it runs some sort of operating system?
  • by msblack ( 191749 ) on Tuesday September 06, 2005 @11:53AM (#13490277)
    We evaluated one of those yellow Google search appliances (GSA) and experienced very mixed results. The appliance is very easy to set up and launch an initial scan of our website.

    The GSA will blindly search all web servers in your domain. When setting up the GSA, you give it an initial page from which to start crawling, and baseline domains. For example:

    Initial page: http://www.slashdot.org/ [slashdot.org]
    Domain(s): .slashdot.org,slashdot.org

    The leading dot on the first domain entry says to search all hosts in the domain.

    Problem: GSA does not provide very good status of where or what it is searching. It only has a dashboard light to say it is crawling. No details.

    Problem: We found that the GSA would get caught in an endless loop if it encountered a user website controlled by a database. It would endlessly follow the next and previous links to find every database entry.

    Our university library subscribes to a number of electronic databases, such as EBSCO PsychINFO, etc. The GSA indexed every possible look-up.

    Our eval license was limited to 1.5 million pages. Some of these databases contain hundreds of thousands of pages. Solution: those setting up their own web server must employ proper robots.txt files or risk having their entire server blocked from indexing.
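    That robots.txt fix can be checked offline with Python's standard-library parser. The paths below are made up for illustration — substitute whatever prefixes your database front ends actually live under:

    ```python
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt that fences crawlers out of database-driven
    # sections full of endless next/previous links.
    ROBOTS_TXT = """\
    User-agent: *
    Disallow: /ebsco/
    Disallow: /psychinfo/
    """

    rp = RobotFileParser()
    rp.modified()                      # mark the rules as freshly loaded
    rp.parse(ROBOTS_TXT.splitlines())
    ```

    With those rules in place, `rp.can_fetch("*", url)` returns False for anything under the blocked prefixes and True elsewhere, so you can verify the fence before pointing a crawler at the site.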

  • by BenEnglishAtHome ( 449670 ) on Tuesday September 06, 2005 @11:55AM (#13490287)
    The pictures are pretty and I'll assume the thing works. Some folks, however, won't buy it because they don't want their intranets to work like you or I might expect. Let me explain.

    I work for a large TLA govt agency. I've begged our people to get something like this. I know, from working with our folks and doing my own digging, that we have a wealth of knowledge tucked away, here and there, on local group shares and out-of-the-way internal web sites. And yet our internal search function is ludicrously bad. It works off "key words" that are simply a manually maintained (I think) list of useless, often off-the-mark descriptions of approved sites of general interest. Special-interest pages are not indexed in this way. The crawler, if you want to call it that, is terrible at doing its job. Enter a string of text and get a hit on a known, universally accessible web page containing that exact string? Not a chance. I test it occasionally and find that it remains as ridiculous as ever, with a level of functionality that would have been technologically uninteresting the better part of a decade ago but is, in this day, infuriating to users.

    The reason for all this is that if our intranet were automatically crawled, well indexed, and truly searchable, people would be able to find things. People in Work Area A would be able to see how they might be impacted by something going on in Work Area B. Horrors! That would mean that management would lose much of their ability to keep employees selectively in the dark.

    All this came to a head a number of years ago. At that time, our intranet content was maintained by IT. Anybody that wanted a site (literally anybody) could just get their first-line manager to approve the request and they'd get server space and some help setting up a page or two. The exchange of information that started happening was highly disruptive, so a "Communications and Liaison" office was set up that wrenched control of the intranet from IT and required (what seems to be essentially political) approval of the business case for anything that went online. No web sites unless the Communications gods approved.

    Nowadays, the employees of one division are only vaguely aware that other divisions exist or have web sites. Each individual fiefdom is protected from the ravages of communications that don't strictly follow the org chart lines. I guess the executives in charge are happy in their insulated little worlds.

    If you're going to sell an effective intranet search tool, you're going to have to face the fact that lots of large organization leaders (and you find the same attitudes in both the public and the private sector) would recoil in horror at the thought of having their intranet be effectively searchable. It's too threatening.
    • Based on my experiences working in government, my guess is it was more that they wanted to have control over what was on their internal web site more than they wanted to restrict information sharing. Of course, it might be that where you work is just a lot more dysfunctional than where I work.

      I set up a search for our intranet at my govt agency (one part of a larger cabinet agency) many years ago. For some reason I never understood, the one guy who controls the intranet site decided that the search link s
    • I seriously considered getting a Google Mini for my law office. The desktop search stuff wasn't really doing it for us, and we have boatloads of work that we reuse on a regular basis -- pleadings/contracts/settlement agreements, etc. are sort of like code in that respect -- we always want to reuse our knowledge rather than reinventing the wheel. My concern was that the regular Google appliance was too expensive. The mini seemed reasonable, but I still was resisting the idea of paying that much for search.

      In
  • Curious... (Score:2, Interesting)

    Given the actual content of their review, I'm very surprised they didn't pull the drive and have a stroll around the filesystem. They've pretty much toasted the warranty as it is, anyway.
  • Nice review (Score:2, Interesting)

    by zlogic ( 892404 )
    I like this kind of review. A bit of what the packaging looks like (no one writes about that, although it's quite interesting to me personally: how does packaging for a $10000 unit differ from a $300 machine?), a bit of a view from the inside, a bit about the software. Nothing too complicated, because that would make the article dull to read. What the article provides is the general feel of the product.
    One thing I wonder is that Google can probably use the included modem to download private company data which the ser
  • It's not clear from the article but I know that Google's server farm runs on Linux. Does the same apply for these machines and, if so, do they come with the source code to the GPL-ed parts of the server software?
    I am currently in the midst of setting up a Google mini. I have noticed most articles mention that getting the *initial* crawl setup is quite easy. It is. Even this article mentions "The last thing that we worked on was making the Mini look like it is part of AnandTech.com. There are two ways to go about this in the Mini admin. One is to use their built-in page layout helper, which allows you to wrap the search screens with a custom header and footer. The other way (which we prefer) is to use
    EnterFind appliance (the product I helped develop last year) is cheaper, handles native Windows shares (not just HTTP) as well as databases, and has a web-services API.
    We looked into testing the Google appliance for searching our printer ink [time4ink.com] site. We found that using our Google AdSense account gave our printer ink customers the ability to search our site and suited our small business needs just fine. The search box at the top of our site lets the search-happy people search away. If they go somewhere else, we felt being a directory will allow us to keep them coming back due to our printer help sections. Why buy a big Google [google.com] appliance??? -- Especially with t
    Good idea, and great if it fits your purposes.

      In the case of your application, I would say it was a good call.

      In the case of more content-rich sites that may have varied types of articles, as well as the desire to have a more integrated look and feel, the appliance is more necessary.

      There are also many intranets that have tons of content that is not available to the Net at large; however, the people who manage and use these networks would still like to be able to search the content they have on their internal
    Correct me if I'm wrong, but the google appliance is not for other people to search your site. It's for YOU to search your non-web, private data, right?

    Say you are a lawyer and have 10 years' worth of electronic versions of communications on 50 different computers. You can buy a google appliance, configure it to index every one of those computers (you have to network-share the drives in some way), and the "cache" link also works as a sort of backup. You don't want any jackass on the web searching that stuff.
    Carpeting (Score:3, Funny)

    by ukleafer ( 845880 ) on Tuesday September 06, 2005 @03:45PM (#13492579)
    Anyone else think the Anandtech server room has some lovely, lovely carpets [anandtech.com]?
    I loathe the google appliance. I liked it for the price, and it was supposed to be like an appliance: plug it in, turn it on, click a few buttons, and off you go.

    It locked up for me way too many times, even though google cites this as rare. I wasted way too much time on support for a device which should not need this level of babysitting.

    When my contract ends, I'm switching to Nutch.
     
    Google has many production quality problems with its distributor. I had to return 2 units before I received a functioning unit the 3rd time. I benchmarked the functioning Google Mini the other day. I haven't published detailed results yet, but I can tell you that the performance was very poor considering the performance expectation from a brand like Google. While I think the appliance is very capable, neither the Google Mini nor the larger yellow appliance is suitable for wide enterprise deployment. I be
