
Yahoo's YSlow Plug-in Tells You Why Your Site is Slow

Stoyan writes "Steve Souders, performance architect at Yahoo, announced today the public release of YSlow — a Firefox extension that adds a new panel to Firebug and reports a page's performance score, in addition to other performance-related features. Here is a review plus helpful tips on how to make the scoring system match your needs."
  • /. gets a D (Score:5, Funny)

    by LoadWB ( 592248 ) * on Wednesday July 25, 2007 @09:28AM (#19982425) Journal
    Interesting utility. Slashdot gets a D on the homepage, F on a comments page. Many media sites score Fs, mostly thanks to numerous ad and cookie sites.
    • by JuanCarlosII ( 1086993 ) on Wednesday July 25, 2007 @09:30AM (#19982447)
      Even better than that, http://developer.yahoo.com/yslow/ [yahoo.com] gets a D for performance.
      • Re:/. gets a D (Score:5, Interesting)

        by jrumney ( 197329 ) on Wednesday July 25, 2007 @09:54AM (#19982683)
        My own site also got a 'D', so that seems to be the standard grade. Everything that matters, it got an 'A' for, except for non-inlined CSS: it got a 'B' on the test that says you shouldn't use it (to reduce HTTP requests) and an N/A on the test that says you should (to take advantage of caching). Then there were a whole lot of irrelevant things it got an 'F' for: the fact that none of my site is hosted on a distributed network, the fact that I leave the browser cache to make its own decisions about expiring pages (since I don't know in advance when I'm going to next change them), and something about ETags, where I'm not sure whether it is saying I should have more of them or get rid of the ones I've got.
        • Everything that matters, it got an 'A' for, except for non-inlined CSS: it got a 'B' on the test that says you shouldn't use it (to reduce HTTP requests) and an N/A on the test that says you should (to take advantage of caching).

          That seems silly. Isn't one of the advantages of having a separate CSS file that you reduce redundancy across multiple pages? Sure, it's an additional file to load - the first time.

          • Re:/. gets a D (Score:4, Interesting)

            by daeg ( 828071 ) on Wednesday July 25, 2007 @01:17PM (#19985535)
            It depends on the headers (server), browser, and method, actually. Under some circumstances, for instance under SSL, full copies of all files will be downloaded for every request. As HTTP headers get more complex (some browsers with toolbars, etc., plus a plethora of cookies), the HTTP request/response cycle expands. It may not seem like a lot, but multiply a .5kb request header by dozens of elements and you can quickly use up a lot of bandwidth. Firefox does a much better job than Internet Explorer under SSL, but not by much unless you enable disk-based caching.

            Something I would love to see is some of the headers condensed by the browser and server. For instance, on the first request the browser sends the full headers. In the reply headers, the server would set an X-SLIM-REQUEST header with a unique ID that represents that browser configuration's set of optional headers (Accept, Accept-Language, Accept-Encoding, Accept-Charset, User-Agent, and other static headers). Further requests from that browser would then simply send the X-SLIM-REQUEST header and unique ID and the server would handle unpacking it -- if the headers are even needed. Servers that don't supply the header would continue to receive full requests, preserving full backward and forward compatibility.
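
            A minimal, purely hypothetical sketch of that idea in Python; the X-Slim-Request name, the token format, and the server-side cache are assumptions for illustration, not any existing standard:

            import hashlib

            # Headers that rarely change for a given browser configuration
            STATIC_HEADERS = ("Accept", "Accept-Language", "Accept-Encoding",
                              "Accept-Charset", "User-Agent")

            known_profiles = {}   # server-side: token -> remembered static headers

            def handle_request(headers):
                """Expand a slim request, or register a new header profile."""
                token = headers.get("X-Slim-Request")
                if token in known_profiles:
                    full = dict(headers)
                    full.update(known_profiles[token])   # restore the remembered headers
                    return full, None
                # first contact: remember the static headers and hand back a token
                profile = {h: headers[h] for h in STATIC_HEADERS if h in headers}
                token = hashlib.sha1(repr(sorted(profile.items())).encode()).hexdigest()[:16]
                known_profiles[token] = profile
                return dict(headers), token   # token would go out in the response

            first = {"Accept": "text/html", "User-Agent": "ExampleBrowser/1.0", "Host": "example.org"}
            _, token = handle_request(first)
            slim = {"Host": "example.org", "X-Slim-Request": token}
            expanded, _ = handle_request(slim)
            print(expanded["User-Agent"])   # "ExampleBrowser/1.0"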

            There are a few things you can do to reduce request sizes for web applications. MOD_ASIS is one of the best ones. We use it as one of the last steps of our deployment process. All images are read in via script, compressed if they are over a certain threshold, and minimal headers are added. Apache then delivers them as-is -- reducing load on Apache as well as the network (the only things Apache adds are the Server: and Date: lines). ETags and last-modified dates are calculated in advance. Also, for certain responses, such as simple HTTP Moved (Location:) responses, GZip isn't used -- GZipping the response actually *adds* to the size because the documents are so small.
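
            For what it's worth, here is a rough sketch of that kind of deployment step; paths, the compression threshold, and the header choices are illustrative only, not the actual script described above:

            import gzip, hashlib, os
            from email.utils import formatdate

            GZIP_THRESHOLD = 4096   # assumed cut-off for compressing a file

            def build_asis(src_path, dst_path, content_type="image/png"):
                """Write a mod_asis file: precomputed headers, a blank line, then the body."""
                with open(src_path, "rb") as f:
                    body = f.read()
                headers = [
                    ("Content-Type", content_type),
                    ("Last-Modified", formatdate(os.path.getmtime(src_path), usegmt=True)),
                    ("ETag", '"%s"' % hashlib.md5(body).hexdigest()),
                ]
                if len(body) > GZIP_THRESHOLD:
                    body = gzip.compress(body)
                    headers.append(("Content-Encoding", "gzip"))
                headers.append(("Content-Length", str(len(body))))
                with open(dst_path, "wb") as out:
                    for name, value in headers:
                        out.write(("%s: %s\r\n" % (name, value)).encode("ascii"))
                    out.write(b"\r\n")
                    out.write(body)

            build_asis("logo.png", "logo.png.asis")   # Apache's mod_asis then serves the file verbatim

            Because mod_asis sends the file contents verbatim, the headers baked into the file are exactly what the client sees, plus Apache's own Date: and Server: lines as noted above.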
            • by jrumney ( 197329 )
              Most requests will fit into a single TCP/IP packet anyway, so it's not worth complicating the HTTP protocol with a requirement that servers remember information about browser capabilities for an indeterminate time. The extra round trip for the "308 Forgot Your Headers" responses that would be needed to recover from such situations would undo any savings you'd gain.
        • My own site also got a 'D', so that seems to be the standard grade.

          *Your* site got a 'D' and *therefore* that seems to be the standard grade?

          I think I see a flaw in your logic there, batman.
          • by jrumney ( 197329 )
            I'll give you the benefit of the doubt and assume that you filter out all +5 Funny posts, so didn't see the two ancestors to my post commenting about the D grades of both Slashdot and Yahoo. It seems like mostly A and F grades get handed out for specific tests, since you either do something already or you don't. And it also seems like any mixture of A's and F's results in an overall D grade. Hence my comment that it seems to be the standard grade (at least two of the tests are mutually exclusive, so straig
        • by reed ( 19777 )
          I always use CSS files by reference when the stylesheet is shared by multiple pages. You know, caching and stuff...
    • Re: (Score:2, Informative)

      by MinorFault ( 1132861 )
      We started with websiteoptimize here at Zillow, but Steve's tool is much more useful. His upcoming O'Reilly book is also quite good. We've taken seconds off of our user response time with it. Steve came and spoke, and it was very well attended and liked by a bunch of Seattle Web 2.0 folks.
    • Interesting they rate down the comments pages on bannination.com [bannination.com] because they have stylesheets outside the document head, yet when I look at the code the stylesheets are where they are supposed to be... weird.
    • Yeah, those ads that I turned off due to being a subscriber yet still see...

      Right, right, "for accounting purposes". Shut up, you "anti-advertising" frauds.
    • These are all common-sense tips, but having them all automated and tallied is a great little helper. I'll most definitely be checking all my current sites with YSlow to see how my design practices hold up.

      Especially for "indie" sites with small audiences, responsiveness can be a big selling point because you don't have that brand "power" to draw people in, but a snappy site will be noticed.
  • Sure but (Score:4, Funny)

    by loconet ( 415875 ) on Wednesday July 25, 2007 @09:29AM (#19982433) Homepage
    I bet it doesn't actually tell you your site is being /.ed
  • Another tool (Score:3, Informative)

    by Klaidas ( 981300 ) on Wednesday July 25, 2007 @09:29AM (#19982437)
    Web Developer [mozilla.org] (a must-have) has a speed-analysis tool by default (well, more of a link to a website that does the job); I prefer to use that one. Here's an example [websiteoptimization.com] of Slashdot's report.
    • Re: (Score:3, Insightful)

      by gblues ( 90260 )
      I see your point, but keep in mind that the website server likely has a far better uplink to the Internet than you do. A plug-in like this gives you real-world performance data if you're using it on, say, a residential DSL line.
      • Re: (Score:3, Informative)

        by Klaidas ( 981300 )
        It provides download times for all kinds of connections, from 14.4K to 1.44Mbps. Also, separate download times for objects.
      • If your limit is local, then it's not really reflecting the speed of your site. It's reflecting the speed of your local connection. Only when the limiting factor is the site, or when a reliably stable transfer speed has been established, can the speed of the site relative to another site be reliably tested.
    • by danbert8 ( 1024253 ) on Wednesday July 25, 2007 @09:46AM (#19982611)
      I think you slashdotted a website efficiency report of Slashdot. Shouldn't that cause a black hole or something?
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      I tried the piggiest page on my own site (and thank you for the link BTW) just out of curiosity. Note that all images are almost completely necessary (it is, after all, about visual art). And I wrote it way back in 1998. IIRC there is a reprint somewhere on K5, sans graphics.

      URL: http://mcgrew.info/Art/ [mcgrew.info]
      Title: Steve's School of Fine Art
      Date: Report run on Wed Jul 25 09:10:42 CDT 2007

      Total HTML: 1
      Total HTML Images: 13
      Total CSS Images: 0
      Total Images: 13
      Total Scripts: 1
      Total CSS imports: 0
      Total Frames: 0
      Total Ifr

    • The big problem I have with this is that it doesn't work for HTTPS requests, and I wouldn't want it to. Relying on an external website to test your secure site's performance is not a great idea.
  • by brunascle ( 994197 ) on Wednesday July 25, 2007 @09:30AM (#19982451)
    that's all well and good, but it's slow because of the server-side scripts, not anything client side. and no browser plugin will ever know that.
    • by awb131 ( 159522 )

      that's all well and good, but it's slow because of the server-side scripts, not anything client side. and no browser plugin will ever know that.

      Why not? Couldn't a browser-side plugin simply measure the wall-clock seconds it takes for the HTTP request to complete? It could figure out what's being dynamically generated and what's being served statically by comparing the transfer rates of all the requests to the same host.
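
      As a rough illustration of what such a measurement could look like (the URL is a placeholder, and this only approximates time-to-first-byte):

      import time, urllib.request

      def profile(url):
          start = time.time()
          with urllib.request.urlopen(url) as resp:
              first = resp.read(1)              # roughly, time to first byte
              ttfb = time.time() - start
              body = first + resp.read()        # rest of the body
              total = time.time() - start
          rate = len(body) / max(total - ttfb, 1e-6)
          print("%s: %d bytes, TTFB %.2fs, total %.2fs, ~%.0f B/s"
                % (url, len(body), ttfb, total, rate))

      profile("http://example.org/")

      Comparing the time-to-first-byte of the HTML against that of the static images from the same host is one way to guess whether the delay is in the server-side script or the network.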

      • How does it easily differentiate between a slow server-side script and a slow network?
        • by edwdig ( 47888 )
          How does it easily differentiate between a slow server-side script and a slow network?

          Unless you're dynamically generating ALL of the content, some things will load faster than others. Odds are most of your images are static, so when your 10 KB HTML page takes longer to transfer than your 30 KB images, you blame the server side scripting. If you design a site in ColdFusion, it won't send any page data until the script finishes running. In scenarios like that, the delay before receiving data is an indication that somet
    • by mabinogi ( 74033 )
      You haven't even looked, have you?
      Dynamic server side performance is very rarely the main cause of speed problems - http latency from too many objects and poor placement of scripts and CSS are usually the problem.

      Even if it takes two whole seconds for the server to generate the page, that's still a small fraction of the fifteen seconds it takes to completely download and render some more complicated sites.
  • Saw this demoed at Web 2.0. This is a very useful plugin, especially for developers who may not be familiar with a lot of the reasons sites can load or feel slow.
  • The damned article makes a point to say it is an extension to Firebug, not Firefox. What's the difference?
    • by JuanCarlosII ( 1086993 ) on Wednesday July 25, 2007 @09:37AM (#19982517)
      YSlow requires Firebug to already be installed in order to run. It extends the capabilities of Firebug, and so is an extension of an extension; a meta-extension, if you will.
    • Firebug is a plugin for Firefox; YSlow is an extension to Firebug.
    • The damned article makes a point to say it is an extension to Firebug, not Firefox. What's the difference?

      I cannot install YSlow as a browser extension unless I also have the Firebug extension enabled.

      And since Firebug for some reason causes my browser to climb to 100% CPU and become unresponsive if I leave it enabled too long, I guess I won't be giving YSlow a try.
  • by Anonymous Coward
    Nice one, Yahoo. Now people can optimize their websites without bothering to read up on HTTP or think about what they're doing.

    Since 9/10 web developers can't even be bothered using a validator, I predict great success for this tool.
    • by kat_skan ( 5219 ) on Wednesday July 25, 2007 @12:48PM (#19985077)

      The Anonymous Coward here is spot on. This thing gives awful, awful advice. Some of these in particular I really hated as a dialup user.

      CSS Sprites are the preferred method for reducing the number of image requests. Combine all the images in your page into a single image and use the CSS background-image and background-position properties to display the desired image segment.

      This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny. Frankly I would prefer to have all the site's little icons progressively appear as they become available than have to wait while a single image thirty times the size of any one of them loads. Or, perhaps, fails to load, so that I have to download the whole thing again instead of just the parts I have.

      Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages.

      This is hands down the stupidest idea I have ever heard. Ignoring for the moment that it won't even work for the 70% of your visitors using IE, sending the same image multiple times as base64-encoded text will completely swamp any overhead that would have been introduced by the HTTP headers.
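
      A quick sanity check of that size argument (the 2 KB "image" here is just random bytes standing in for a real one):

      import base64, os

      raw = os.urandom(2048)                     # stand-in for a ~2 KB image
      uri = b"data:image/png;base64," + base64.b64encode(raw)
      print(len(raw), len(uri))                  # 2048 vs ~2754 bytes, roughly a third bigger

      And unlike a separately cached image file, those extra bytes travel with every page that embeds them.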

      Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all stylesheets into a single stylesheet.

      Less egregious than suggesting CSS Sprites, but it still suffers from the same problems. These are not large files, and if they are large, the headers are not a significant part of the transfer.

      As described in Tenni Theurer's blog Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.

      What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?

      Add an Expires Header

      ...

      Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT

      ...

      Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes.

      And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.
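
      One common way around that trap (my own sketch, not something the YSlow docs prescribe) is to derive the published filename from the file's content, so forgetting to rename becomes impossible:

      import hashlib, shutil, time
      from email.utils import formatdate

      def publish(path):
          """Copy path to a content-hashed name and return the Expires header to serve with it."""
          with open(path, "rb") as f:
              digest = hashlib.md5(f.read()).hexdigest()[:8]
          versioned = path.replace(".css", ".%s.css" % digest)
          shutil.copyfile(path, versioned)
          expires = formatdate(time.time() + 365 * 24 * 3600, usegmt=True)   # ~1 year out
          return versioned, "Expires: " + expires

      print(publish("site.css"))   # e.g. ('site.3f2a9c1b.css', 'Expires: ...')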

      Put CSS at the Top

      While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages load faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

      Um. Duh? link elements are not valid in the body. style elements are

      • by Evets ( 629327 ) *
        CSS Sprites - agreed. These aren't that useful, but for a simple page on a long/slow connection they can improve performance a bit. I see them in the wild very rarely. "A List Apart" has an implementation article somewhere that's worth a gander if you can find it.

        Inline Images - agreed. Dead on. Quite stupid.

        Combined Files - I've flip-flopped a great deal about this one myself. While a single file can greatly reduce data transfer overhead by eliminating headers and ensuring packets are their f
      • by imroy ( 755 )

        This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

        And because they are tiny and numerous, the overhead from the HTTP headers is huge. Headers can easily be a few hundred bytes. Looking at the default 'icons' that come with Apache, the majority are little GIF's under 400 bytes. So if you go and download them with individual HTTP requests, you're throwing away 30-50% of your bandwidth just in HTTP overh
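
        Putting (illustrative) numbers on that:

        header_bytes = 300   # rough request + response header total
        gif_bytes = 400      # a small Apache icon
        print("%.0f%% of the bytes are HTTP overhead"
              % (100 * header_bytes / (header_bytes + gif_bytes)))   # ~43%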

        • by kat_skan ( 5219 )

          This is only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

          And because they are tiny and numerous, the overhead from the HTTP headers is huge. Headers can easily be a few hundred bytes. Looking at the default 'icons' that come with Apache, the majority are little GIF's under 400 bytes. So if you go and download them with individual HTTP requests, you're

          • by imroy ( 755 )

            Perhaps the problem of lots of little images vs a single 'sprite' is more psychological. Perhaps it just appears fast seeing lots of individual images load.

            Well, it's not even the case that you can just make the image easy to cache and be home free. Eventually you're going to want to change some part of the image.

            True. You'd really only want to use 'sprites' on site graphics that don't change very often.

            Honestly, the more I think about this strategy, the less sense it makes. If you have to change the nam

            • by kat_skan ( 5219 )

              Perhaps the problem of lots of little images vs a single 'sprite' is more psychological. Perhaps it just appears fast seeing lots of individual images load.

              I would agree with this. As I said, loading a big image tended to make the content itself take longer. If I'm reading while the images load, I'll not notice or honestly even care if the page as a whole is 100% larger. Conversely, if you've done something to cut the load time in half, but I have to wait for the entire thing before I can actually use an

      • As described in Tenni Theurer's blog Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.

        What, seriously? Are you really optimizing for your visitors who load one and only one page before their cache is cleared? Even though you "measured... and found the number of page views with a primed cache is 75-85%"?

        Daily Visitors != Page Views

        Making up random numbers and fudging to a perfect caching system for convenience:

        10 people hit your site on a given day.

        3 have never been there before, have an empty cache, say, "Damn, this shit's slow," and leave.
        2 have never been there before, have an empty cache but endure, surfing 5 pages each.
        The other five are regular users and have files cached. They surf the same 5 pages.

        Total: 3x1 + 2x(1+4) + 5x5 = 38 total pages.

        (5 out of 10) 50% of daily visitors had an empty cache.
        (

        • by kat_skan ( 5219 )

          So, both quotes are correct: 50% of daily unique visitors came in with an empty cache, 87% of total page requests were made with a primed cache.

          Sorry, I didn't mean to suggest that their numbers didn't add up, just that small optimizations that service half your visitors don't make sense when they are something that only has any impact on the first request. The disadvantages of aggregating files together in the manner they are suggesting just outweigh that small benefit.

          It's a cla

        • Obviously those numbers are pulled out of my anatomical /dev/null and make some major assumptions

          You can't read (much) from /dev/null, and your numbers don't look like they come from /dev/zero either — those would be rather repetitive.

          I think you meant /dev/random...

      • > [CSS Sprites are] only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

        The point is to reduce the number of HTTP connections, and thus avoid pointless latency. A TCP connection takes time to set up because there's a back-and-forth, and if the client is far from the server this can introduce a significant delay in loading static resources. Not to mention that the browser may have to reflow the page as the new
        • by kat_skan ( 5219 )

          [CSS Sprites are] only a win if your images are tiny. Why are you optimizing for this? Tiny images do not take long to download, even on dialup, because they are tiny.

          The point is to reduce the number of HTTP connections, and thus avoid pointless latency. A TCP connection takes time to set up because there's a back-and-forth, and if the client is far from the server this can introduce a significant delay in loading static resources.

          It's significant in r

        • A TCP connection takes time to set up because there's a back-and-forth,

          Connection: Keep-Alive mitigates that somewhat.
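
          For example (host and paths are placeholders), several requests can ride one persistent connection rather than paying the TCP setup cost each time:

          import http.client

          conn = http.client.HTTPConnection("example.org")   # HTTP/1.1, persistent by default
          for path in ("/", "/style.css", "/logo.png"):
              conn.request("GET", path)
              resp = conn.getresponse()
              resp.read()                                    # drain the body before reusing
              print(path, resp.status)
          conn.close()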

      • As somebody who has to explain to clients that an odd performance metric from some miracle site is not the Alpha and Omega of judgement, here I go...

        And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.

        For one, having the Exp

        • by kat_skan ( 5219 )

          And if you ever change something but forget to change the file name, your visitors will have to reload everything on the damn page to get the current version of the one thing you changed. Assuming, of course, they even realize there should be a newer version than the one they're seeing. And assuming that they actually know how to do that.

          For one, having the Expires header reduces the load latency - your JS and CSS files are unlikely to change within the scope of a day or an hour. In theory, the browser do

  • Why sites are slow (Score:2, Interesting)

    by Anonymous Coward
    Sites are only as fast as the slowest path through the site.

    If your site has 10 different affiliate links/sponsors, all hosted on different providers, your site will be slow.

    Similarly, if your site has 100 different java/javascript crapplets/widgets, your site will be even slower.

    Here is a simple guide for site creators:

    1. Don't overload on ads, I'm not going to view them anyway
    2. Put some actual content I'm interested in on your site
    3. Don't overload me with java/javascript crap, I don't care what my mouse
    • by Klaidas ( 981300 )
      Most of those should be understood by default; they're simply common sense, but nowadays not many developers follow them. I just hate it when, for example, Slashdot points me to a website with an article, but before I even see the title, I must scroll down "by two screens worth of space". Sometimes that might be a good excuse to not RTFA (I kid, I kid!)
      When building a photo gallery (sig), I thought it'd be pretty much a photo in the center and two buttons to view the previous/next one. Yet, when I f
      • People are moving away from simple mother's maiden name and last four digits of SSN to biometric authentication. And you publish your cornea for the whole world to see. Your ID will be stolen in a moment's notice, buster.
    • 3. Don't overload me with java/javascript crap, I don't care what my mouse pointer looks like, just let me click
      4. Not everything needs a php/mysql front/back end.

      You have to build up your resume somehow in order to keep your job or to get a better one. What better way than to develop shit that the project really doesn't need but will sure look great on a resume!

      And it's not just techies. Back in the mid nineties, it seemed that every CIO was moving his system from mainframe to distributed architecture.

    • by Sparr0 ( 451780 )
      Uhm, how/why would 10 affiliate links/sponsors slow down your site?
      • Uhm, how/why would 10 affiliate links/sponsors slow down your site?


        He means having banners or other content that is actually retrieved from
        the affiliate/sponsor's site, thereby ensuring your page will load at
        the response rate of the *slowest* of those ten sites.

        Chris Mattern
        • by Sparr0 ( 451780 )
          Hate to break it to you, but a properly designed web page will not wait for one image (or ten) to load before showing you the content.
    • Here is a simple guide for site creators:

      1) Throw out the baby with the bathwater and pretend it's still 1996 . . . so that you can increase the number of impossible-to-please-anyways slashdot ACs that visit your site.

      Yeah - that sounds like a real good plan.

    • by pooh666 ( 624584 )
      I would second this on ads. I see a lot of very big sites that are fine, except for waiting for the banners...
  • by jea6 ( 117959 ) on Wednesday July 25, 2007 @09:56AM (#19982709)
    F: You are co-located at 365 Main.
  • hmmm... (Score:5, Insightful)

    by Tom ( 822 ) on Wednesday July 25, 2007 @09:59AM (#19982741) Homepage Journal
    Interesting approach, with lots of flaws.

    For example "use CDN" (aka Akamai, etc.) - yeah, right. For Yahoo.com that's an idea. For my private website, that's bullshit. If they really use this internally to rate sites, their rating sucks by definition.

    So in summary there are a couple of good points there, and a couple that are not really appropriate. Expires: headers are a nice idea for static web pages. But YSlow still gives me an F for not using one on a PHP page that really does change every time you load it.
    • by Ant P. ( 974313 )
      For most websites it's BS anyway; Coral seems to take 5 minutes to load anything.
    • by DavidTC ( 10147 )

      Yeah, many of these are stupid.

      Not only do they recommend CDNs, which is absurd for any page that gets less than a million hits a day, they also complain about ETags, despite all the stuff I want cached actually having ETags. They whine that 'different servers can produce different etags' or something, like my site is randomly distributed over a dozen servers where images and CSS randomly get sent from different ones. Um, nope, just one server, as you apparently figured out when complaining about not using C

    • by nologin ( 256407 )

      Well, from the YSlow web page itself...

      YSlow analyzes web pages and tells you why they're slow based on the rules for high performance web sites.

      These criteria can be subjective (as to what a high-performance web site is). I would certainly expect that Yahoo's tool is aimed at sites that get the same magnitude of hits that Yahoo does. I don't think that slashdot.org would even qualify in that category.

      Their tips definitely do make sense if you have a site in the "millions of

  • Finally, someone says what web developers have known for years [yahoo.com]: optimizing a site is not a matter of splitting your content into as many images as possible over an enterprise app, but of good, clean design and code.

    For years, as a web designer, every time I got ready to deploy I encountered some nitwit who would say, "You're going to break up that giant image, aren't you? We can put it on nine servers!" -- creating organizational havoc, a completely unmanageable asset mess of a project, and driving everyone

  • it does run on Linux. :-)
  • Lets you figure out why your site is slow, eh? Cool! Now if only the web developers at Yahoo could use this wonderful tool to learn how to make their script-laden web pages (yes, Yahoo Mail Beta, I'm looking at you) load on my laptop in under 30 seconds. :)
  • by HitekHobo ( 1132869 ) on Wednesday July 25, 2007 @11:04AM (#19983503) Homepage
    I think I'd prefer it to use a bit more realistic reporting. How about:
    1) Your web developer is a complete incompetent.
    2) Buy more hardware, tightwad.
    3) There is no need to add every script plugin you come across.
    4) Animated GIFs are annoying as well as slow to load.
    5) Yes, it does take time to download and render an entire book in HTML.
    • by fishdan ( 569872 )
      You forgot:
      6) Flash content is often filtered out at the corporate router level.
      7) Flash is great for compression of audio/video but terrible for navigation/text.
  • YSucks - reveals why your site sucks.
    YMe - translates your site into emo-speak.

  • #!/usr/bin/perl -w
    use strict;
    print "You website is slow because: your (average) webmaster/sysadmin/architect cannot " .
    "tell the difference between www.thedailywtf.com and good code\n";
  • In my experience "slow" is a very subjective measure of a web site. It really depends on how quickly the content is displayed, not how quickly the entire page is loaded and rendered.

    Let's say you visit, oh, dilbert.com (just to pick on a geeky site) to get your daily dose of Dilbert. If the first thing that is rendered on your screen is the actual comic, you don't really care that it takes another 10-20 seconds to display the buttons, menus, sidebars, topbars, bottombars, animations, ads and ads for ads. I
    • Brilliant idea for a Firefox extension! Although the interface would be key to its usefulness.
  • Comment removed based on user account deletion
  • Now, if Yahoo would only use it on their own sites to find out why they are always so darn slow.
  • Would you really trust anything that Yahoo puts out? Yahoo has previously ratted on journalists and bloggers to the Chinese authorities. Worse: they were unapologetic about it, and kept doing it. One Yahoo 'satisfied customer' got ten years in jail for criticizing the government.

    So when Yahoo trundles along offering me neat tracking software, umm, no thanks. There's no telling where you might end up reading about it. Now sure, in the U.S. you don't get locked up for criticizing the government, but things do ge
    • Dude, the guys at Exceptional Performance aren't some kind of secret cabal.
  • Hopefully these are not the same people at Yahoo who tell the world to send a 404 instead of a 410 return code for deleted pages [conficio.com]. K<0>
