Are Long URLs Wasting Bandwidth?
Ryan McAdams writes "Popular websites, such as Facebook, are wasting as much as 75Mbit/sec of bandwidth due to excessively long URLs. According to a recent article over at O3 Magazine, they took a typical Facebook home page, looked at the traffic statistics from compete.com, and figured out the bandwidth savings if Facebook switched from URL paths that in some cases run over 150 characters to shorter ones. The article also looks at the impact on service providers from the wasted bandwidth of the subsequent GET requests for these excessively long URLs. Facebook is just one example; many other sites have similar problems, as do CMS products such as WordPress. It's an interesting approach to web optimization for high-traffic sites."
Can they not use... (Score:5, Insightful)
Re: (Score:3)
The overhead of the compression handshake and packet headers would probably outweigh the potential benefits; not worth the effort.
Re: (Score:2)
You mean something like mod_gzip?
That leaves only the URL in the request header; the rest should already be compressed by mod_gzip.
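For a feel of what mod_gzip buys on the body, here's a rough sketch using Python's gzip module (the repeated markup is made up, so the ratio is only illustrative; real HTML often compresses well too, just not this well):

import gzip

# Stand-in for markup-heavy HTML full of long repeated URLs (hypothetical content).
html = '<a href="http://www.facebook.com/home.php?ref=logo">home</a>\n' * 500

raw = html.encode('utf-8')
compressed = gzip.compress(raw)

print(f"raw:     {len(raw)} bytes")
print(f"gzipped: {len(compressed)} bytes")
print(f"saved:   {100 * (1 - len(compressed) / len(raw)):.1f}%")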
Re: (Score:2)
Re:Can they not use... (Score:5, Informative)
Most of the time, yes, but then there's a question of trade-off. Small URLs are generally hashes and are hard to type accurately and hard to remember. On the other hand, if you took ALL of the sources of wastage in bandwidth, what percentage would you save by compressing pages vs. compressing pages + URLs or just compressing URLs?
It might well be the case that these big web services are so inefficient with bandwidth that there are many things they could do to improve matters. In fact, I consider that quite likely. Those times I've done web admin stuff, I've rarely come across servers that have compression enabled.
Re: (Score:2)
Those times I've done web admin stuff, I've rarely come across servers that have compression enabled.
Not sure why you would see that. Even for small sites that don't come close to hitting their bandwidth allocation, using mod_gzip improves the visitor's experience because the HTML and CSS files download a lot faster, and the processing overhead is minimal.
As for this story, I think whoever wrote it had this epiphany while they were stoned. There are so many other ways that Facebook could save bandwidth if they wanted to that would be easier.
75Mb/s is probably nothing to a site like Facebook. Let's
Re:Can they not use... (Score:5, Insightful)
Depending on your network type, you may not get any benefit from shorter URLs at all. Many networking protocols use fixed-size frames, which then get padded with zeroes up to the end of the frame. For example, in ATM networks, anything up to 48 bytes fits in a single cell, so depending on where that URL falls relative to the start of a cell, it could take an extra 48 bytes of URL to cause even one extra cell to be sent.
Either way, this is like complaining about a $2 budget overrun on a $2 billion project. Compared with the benefits of compressing the text content, moving all your scripts into separate files so they can be cached (Facebook sends over 4k of inline JavaScript with every page load for certain pages), generating content dynamically in the browser based on high density XML without all the formatting (except for the front page, Facebook appears to be predominantly server-generated HTML), removing every trace of inline styles (Facebook has plenty), reducing the number of style sheet links to a handful (instead of twenty), etc., the length of URLs is a trivial drop in the bucket.
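To make that fixed-size-frame point concrete: ATM carries 48 bytes of payload per cell, so extra URL bytes only cost an extra cell when they cross a 48-byte boundary. A back-of-the-envelope sketch (request sizes are hypothetical; cell headers and AAL framing are ignored):

import math

def atm_cells(payload_bytes):
    # Number of 48-byte ATM cell payloads needed to carry this many bytes.
    return math.ceil(payload_bytes / 48)

for size in (400, 430, 448):
    print(size, "bytes ->", atm_cells(size), "cells")
# 400 and 430 bytes both fit in 9 cells; only at 448 bytes is a 10th cell needed,
# so a 30-character-longer URL may cost nothing at all on the wire.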
Re:Can they not use... (Score:5, Funny)
They should just move all the GET parameters to POST. Problem solved. ;)
Re: (Score:2)
If pages continually POST to each other, hitting the browser's back button will display the annoying alert asking you to "Resend POST data".
Re: (Score:2)
Then dump CGI-like syntax completely and use applets that send back data via sockets.
Re:Can they not use... (Score:5, Informative)
You're missing the joke... GET requests look like this:
GET /home.php?ref=logo HTTP/1.1
POST requests look like this:
POST /home.php HTTP/1.1 (with ref=logo in the request body)
Same amount of content... the URL looks shorter, but the exact same data that was in the querystring gets sent inside the request body. Thus, switching from GET to POST does not alter the bandwidth usage at all, even if it makes the URL seen in the browser look shorter.
Re:Can they not use... (Score:5, Informative)
And even with the wink, this still got initially moderated "Interesting" instead of "Funny".... *sigh*
To clarify the joke for those who don't "GET" it, in HTTP, POST requests are either encoded the same way as GET requests (with some extra bytes) or using MIME encoding. If you use a GET request, the number of bytes sent should differ by... the extra byte in the word "POST" versus "GET" plus two extra CR/LF pairs and a CR/LF-terminated Content-length header, IIRC.... And if you use MIME encoding for the POST content, the size of the data balloons to orders of magnitude larger unless you are dealing with large binary data objects like a JPEG upload or something similar.
So basically, a POST request just hides the URL-encoded data from the user but sends almost exactly the same data over the wire.
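A quick sketch of the byte counts (the header sets below are minimal and illustrative — real browsers send far more — and the parameter is just the ref=logo example from elsewhere in the thread):

params = "ref=logo"

get_request = (
    f"GET /home.php?{params} HTTP/1.1\r\n"
    "Host: www.facebook.com\r\n"
    "\r\n"
)

post_request = (
    "POST /home.php HTTP/1.1\r\n"
    "Host: www.facebook.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(params)}\r\n"
    "\r\n"
    f"{params}"
)

# The POST is slightly larger: same query data, plus the extra headers.
print("GET: ", len(get_request), "bytes")
print("POST:", len(post_request), "bytes")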
Re:Can they not use... (Score:5, Funny)
Re:Can they not use... (Score:5, Funny)
Re:Can they not use... (Score:5, Funny)
That's nothing. This [tinyurl.com] is the most disgusting shit you'll ever see on the Internet.
Re:Can they not use... (Score:5, Funny)
Re: (Score:3, Funny)
I was disappointed, too. I was expecting a link to idle.slashdot.org.
Re:Can they not use... (Score:5, Informative)
Using a cookie, TinyURL allows you to enable previews [tinyurl.com], i.e., view where a TinyURL points to before following the link.
Re:Can they not use... (Score:5, Funny)
http://tinyurl.com/6rywju [tinyurl.com]
TinyURL is not all bad; this is one example of a positive use.
Re: (Score:2, Informative)
That's not compression, that's hashing.
Re: (Score:2, Interesting)
...because they get more requests than the number of unique things TinyURL (or whatever) can handle.
Better would be to do their AJAX with POST instead of GET, and make sure gzip is on.
I think if they really put their minds to it, they could also implement client-side JSON compression using some of the JavaScript compression libraries that are out there (or use a simple Flash wrapper to do the dirty work).
Just throw a bunch of kiddies (or 21-year-olds) in a room and offer them free pizza/beer.
Re:Can they not use... (Score:4, Insightful)
Sure they can TinyURL
No, because the long URL is still out there. For example: http://tinyurl.com/c9fjov [tinyurl.com] translates into http://www.nerve.com/CS/blogs/scanner/2008/11/16-22/pervert.jpg [nerve.com].
Re: (Score:3, Insightful)
How do you bookmark a specific lower level page if no variables are stored in the URL?
Wordpress has the option (Score:5, Informative)
For SEO purposes it's always handy to switch to the more popular format: http://www.mysite.com/2009/03/my-title-of-my-post.html [mysite.com].
Suggesting that we cut URL's that help Google rank our pages higher is preposterous.
Re: (Score:2)
All the data is still there, meaning mod_rewrite doesn't help with the "bandwidth" issue at all. It just looks pretty.
Re: (Score:2)
Actually, it does help bandwidth a little. (Score:2)
The querystrings get passed to the CGI script, but that's done completely server-side: the bandwidth isn't wasted because the querystring never goes through the pipes. So you end up saving a little bit bandwidth-wise
In any case, this is a case of grossly premature optimization. Very, very few URLs even come close to the bandwidth of a favicon.ico file, which is itself considered a pittance. There are far more effective ways to cut down on bandwidth than these near-trivial aspects.
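To illustrate the point above about the query string staying server-side, here is a minimal Python sketch (names hypothetical) of what a WordPress-style rewrite does: only the short pretty path crosses the wire, and the server expands it into the internal parameters.

import re

PERMALINK = re.compile(r"^/(\d{4})/(\d{1,2})/([a-z0-9-]+)/?$")

def render_post(year, month, slug):
    # Hypothetical internal handler standing in for the CMS front controller.
    return f"<h1>{slug} ({year}-{month:02d})</h1>"

def dispatch(path):
    """Map a pretty URL like /2009/03/my-title-of-my-post to the internal handler."""
    m = PERMALINK.match(path)
    if not m:
        return "404 Not Found"
    # Roughly what a rewrite rule maps to internal query variables, server-side only.
    return render_post(int(m.group(1)), int(m.group(2)), m.group(3))

print(dispatch("/2009/03/my-title-of-my-post"))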
Re: (Score:2)
Maybe one day soon Google will have some way to expand mysite.com/5sfg to mysite.com/my_title_of_my_post.html. Having said that, how much of the importance of pagerank (and similar techs) is based on the url rather than title tags or links to it?
Re: (Score:2)
The problem is I don't feel like going to Google when I could just change
http://www.example.com/2009/3/foo.html [example.com]
to
http://www.example.com/2009/3 [example.com]
which will DTRT in Wordpress AFAIK unless your server is broken.
Re: (Score:2)
Default for Google:
http://www.google.com/search?client=opera&rls=en&q=google&sourceid=opera&ie=utf-8&oe=utf-8 [google.com]
Equivalent:
http://www.google.com/search?q=google [google.com]
Re:Wordpress has the option (Score:4, Insightful)
Do we know what 75Mbit/sec is as a percentage of total site traffic? It seems like if that number is 1% or less, there would be more important areas to optimize. A little slack can be more valuable than bandwidth in a complex system.
Re: (Score:2)
The original intent of my comment was to point out the fact that shortening URLs could have a negative impact on a site pondering SEO.
Who knows? (Score:5, Funny)
Re:Who knows? (Score:5, Insightful)
In reality, I think by watching one YouTube video you've used more bandwidth than you will on Facebook URLs in a year.
Re: (Score:2, Funny)
One man's waste is another man's treasure. Some say, "The world is my oyster." I say, "The world is my dumpster."
Wasted bandwidth, indeed.
Re: (Score:2)
I discussed it with myselves, but there was no agreement. Well, other than the world should use IPv6 or TUBA and enable multicasting by default.
Better way of doing it (Score:5, Informative)
Re: (Score:3, Insightful)
You mean that ?area=51 crap? How is http://mysite.com/?area=51 [mysite.com] usable?
(Unless the page is about government conspiracies, I guess.)
Re: (Score:3, Informative)
This is a very, very simple method. You seem to want to make it out to be the best thing in the world. The problem is, it needs some form of descriptive characteristic.
In my own little personal CMS/framework I do it similarly, except with a 1-16 character string. This way I can set some form of description.
It's really very easy to do. Basically you need a table with (id, parentid, page_title, page_content). parentid is the id of the parent section; leave it NULL if it is the top level. This way you can seek in the D
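A minimal sqlite3 sketch of that table and lookup, assuming the columns named above (the sample rows are hypothetical):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE pages (
    id INTEGER PRIMARY KEY,
    parentid INTEGER,          -- NULL for top-level sections
    page_title TEXT,           -- short slug used in the URL
    page_content TEXT
)""")
db.executemany("INSERT INTO pages VALUES (?,?,?,?)", [
    (1, None, "docs", "Documentation index"),
    (2, 1,    "install", "How to install"),
])

def resolve(path):
    """Resolve a short URL like /docs/install by walking the parent chain."""
    parent, row = None, None
    for slug in path.strip("/").split("/"):
        row = db.execute(
            "SELECT id, page_content FROM pages "
            "WHERE page_title = ? AND parentid IS ?",
            (slug, parent),
        ).fetchone()
        if row is None:
            return None
        parent = row[0]
    return row[1]

print(resolve("/docs/install"))   # -> "How to install"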
Re: (Score:3, Informative)
Of course, it's a totally different paradigm that requires a database instead of XML for the page metadata. But what it enables in being able to relate the sections of the site to on
Depending on your viewpoint (Score:5, Insightful)
The short Facebook URLs waste bandwidth too ;)
Re:Depending on your viewpoint (Score:4, Informative)
I've always found stories along the lines of "$ENTITY wastes $X amount of $RESOURCE per year" dubious. Given enough users who each use a piece of $RESOURCE, the total amount of used resources will always be large no matter how little each individual user uses. There's no way to win.
Re: (Score:2)
For most users, anything they can access on Facebook is already present on 127.0.0.1.
Wordpress? (Score:4, Informative)
By default, WordPress produces short URLs.
Waste of effort (Score:5, Interesting)
Of all the things that could be optimized, URLs shouldn't have a high priority (unless you want people to enter them manually).
I'm pretty sure their HTML, CSS, and JavaScript could be optimized way more than just their URLs.
But rather than simple sites, people often want them to be filled with crap (which nobody but themselves cares about).
ps, that doesn't mean you shouldn't try to create "nice" URLs instead of incomprehensible ones that contain things like article.pl?sid=09/03/27/2017250
Re:Waste of effort (Score:5, Insightful)
Of all the things that could be optimized, URLs shouldn't have a high priority (unless you want people to enter them manually). I'm pretty sure their HTML, CSS, and JavaScript could be optimized way more than just their URLs. But rather than simple sites, people often want them to be filled with crap (which nobody but themselves cares about).
ps, that doesn't mean you shouldn't try to create "nice" URLs instead of incomprehensible ones that contain things like article.pl?sid=09/03/27/2017250
To your ps, most of that is easily comprehensible. It was an article that ran today; only the 2017250 is meaningless in itself. Perhaps article.pl?sid=09/03/27/Muerte/WasteOfEffort would be better, but we're trying to shorten things up.
Re:Waste of effort (Score:5, Interesting)
Exactly. If they wanted to optimize the site, they could start by looking at the number of JavaScript files they include (8 on the homepage alone) and the number of HTTP requests each page requires. My Facebook page alone has *20* files getting included.
From what I can judge, a lot of their JavaScript and CSS files don't seem to be getting cached on the client's machine either. They could also take a look at using CSS sprites to reduce the number of HTTP requests required by their images.
I mean, clicking on the home button is a whopping 726KB in size (with only 145KB coming from cache) and 167 HTTP requests! Sure, a lot seems to be getting pulled from a content delivery network, but come on, that's a bit crazy.
Short URIs are the least of their worries.
Re: (Score:2)
Re:Waste of effort (Score:5, Informative)
This very type of analysis is what YSlow [yahoo.com] is for :)
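For a poor man's version of that kind of check, here's a rough sketch — it only counts HTML bytes and referenced resources, nowhere near what YSlow reports, and the target URL is a placeholder:

import re
import urllib.request

def page_report(url):
    """Crude page-weight report: HTML size and number of referenced resources."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read()
    text = html.decode("utf-8", errors="replace")
    scripts = re.findall(r'<script[^>]+src=', text, re.I)
    styles  = re.findall(r'<link[^>]+stylesheet', text, re.I)
    images  = re.findall(r'<img[^>]+src=', text, re.I)
    print(f"{url}: {len(html)} bytes of HTML, "
          f"{len(scripts)} scripts, {len(styles)} stylesheets, {len(images)} images")

# page_report("http://www.example.com/")   # hypothetical target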
Re: (Score:2)
Depending on the link density of one's pages that are actually served out to users, the bits used by the links themselves might be a large proportion of the page that is served. Yes, there's other stuff (images, javascript), but from the server's perspective those might be served someplace else -- they're just naming them. If the links can be shortened, especially for temporary things not meant to be indexed, it can save some bandwidth.
I'm not saying it's a primary way to save bandwidth, just that it's an
Irrelevant (Score:5, Insightful)
It's an irrelevantly small portion of the traffic. At the scale of Facebook it could save some bandwidth, but it doesn't make any impact on the bottom line worth the effort!
A 150-character URL = 150 bytes vs. 50 KILObytes + images for the rest of the pageview...
I'm ballparking 50 kilobytes for the full page text, but a pageview often runs at over 100KB.
So it's totally irrelevant if they can shave a whopping 150 bytes off the 100KB.
Re: (Score:2)
ya. i hav better idea. ppl shuld just talk in txt format. saves b/w. and whales. l8r
Seriously, though, I don't exactly get how a shorter URL is going to Save Our Bandwidth. Seems like making CNET articles that make you click "Next" 20 times into one page would be even more effective. ;)
The math, for those interested:
So to calculate the bandwidth utilization we took the visits per month (1,273,004,274) and divided it by 31, giving us 41,064,654. We then multiplied that by 20 to get the transfer in kilobytes per day of downstream waste, based on 20K of waste per visit. This gave us 821,293,080, which we then divided by 86,400, the number of seconds in a day. This gives us 9,505 kilobytes per second, but we want it in kilobits, so we multiply by 8, giving us 76,040. Finally we divide that by 1024 to get the value in Mbit/sec: 74Mbit/sec. One caveat with these calculations is that we do not factor in gzip compression; using gzip compression, we could safely cut the bandwidth-wasting figures by about 50%. Browser caching does not factor into the downstream values, as we are calculating the waste just on the HTML file. It could impact the upstream usage, as not all objects may be requested with every HTML request.
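The quoted arithmetic, replayed step by step with TFA's own numbers and its 20KB-per-visit assumption:

visits_per_month = 1_273_004_274                  # compete.com figure quoted in TFA
visits_per_day = visits_per_month // 31           # 41,064,654
waste_kb_per_visit = 20                           # TFA's assumption (~250 refs * 80 bytes)
waste_kb_per_day = visits_per_day * waste_kb_per_visit
waste_kbytes_per_sec = waste_kb_per_day / 86_400  # ~9,500 KB/s
waste_kbits_per_sec = waste_kbytes_per_sec * 8
waste_mbits_per_sec = waste_kbits_per_sec / 1024  # ~74 Mbit/s
print(f"{waste_mbits_per_sec:.0f} Mbit/s")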
Re:Irrelevant (Score:4, Interesting)
So to calculate the bandwidth utilization we took the visits per month (1,273,004,274) and divided it by 31, giving us 41,064,654. We then multiplied that by 20 to get the transfer in kilobytes per day of downstream waste, based on 20K of waste per visit. This gave us 821,293,080, which we then divided by 86,400, the number of seconds in a day. This gives us 9,505 kilobytes per second, but we want it in kilobits, so we multiply by 8, giving us 76,040. Finally we divide that by 1024 to get the value in Mbit/sec: 74Mbit/sec. One caveat with these calculations is that we do not factor in gzip compression; using gzip compression, we could safely cut the bandwidth-wasting figures by about 50%. Browser caching does not factor into the downstream values, as we are calculating the waste just on the HTML file. It could impact the upstream usage, as not all objects may be requested with every HTML request.
roflmao! I should've RTFA!
This is INSULTING! Who could eat this kind of total crap?
Where the F are the Slashdot editors?
Those guys just decided the per-visit waste is 20KB? No reasoning, nothing? Plus, they didn't look at pageviews, just visits... Uh, 1 visit = many pageviews.
So let's do the right math:
41,064,654 visits per day.
A site like Facebook probably has around 30 or more pageviews per visit. Let's settle for 30.
That's 1,231,939,620 pageviews per day.
Average URL length: 150 characters. Could be cut to 50, so 100 bytes saved per pageview.
That's 123,193,962,000 bytes of waste, or 120,306,603KB per day, or 1,392KB per second.
In other words:
1392 * 8 = 11,136Kbit/s = 10.875Mbit/s.
A guaranteed 100Mbit/s costs $1,300 a month... so they are wasting a whopping $130 a month on long URLs...
So, TFA is total bullshit.
Re:Irrelevant (Score:4, Informative)
You missed the previous paragraph of the article, where they explained where they got the 20K value; perhaps you should read the article first. :)
They rounded down the number of references, but on an average Facebook home.php there are 250+ HREF or SRC references in excess of 120 characters. They figured these could be shaved by 80 bytes each. That's 80 bytes x 250 references = 20,000 bytes, or 20K.
Your math is wrong; it's taking into account just one URL, when there are 250 references on home.php alone! And they did not even factor in more than one pageview per visit. If they did it your way, you would be looking at far more bandwidth utilization than 74Mbit/sec.
Facebook? Go after Twitter. (Score:2, Interesting)
Twitter clients (including the default web interface) auto-TinyURL every URL put into them. Clicking on the link involves not one but *2* HTTP GETs and one extra round trip.
How long before tinyurl (and bit.ly, ti.ny, wht.evr...) are cached across the internet, just like DNS?
Most likely insignificant (Score:4, Informative)
This is ridiculous. If I have a billion dollars, I'm not going to worry about saving 50 cents on a cup of coffee. The bandwidth used by these urls is probably completely insignificant.
Re: (Score:2)
That's a funny way to look at it. If I save 50 cents a day on my cup of coffee I will have another billion dollars in just 5479452 years (roughly). And that's excluding compound interest!
Re: (Score:2, Offtopic)
Just how interesting are the compounds in coffee, anyway?
Re:Most likely insignificant (Score:5, Interesting)
I think the O3 article and the parent have missed the real point. It's not the length of the URL's that's wasting bandwidth, it's how they're being used.
A lot of services append useless query parameter information (like "ref=logo" etc. in the Facebook example) to the end of every hyperlink instead of using built-in HTTP functionality like the HTTP-Referer client request headers to do the same job.
This causes proxy servers to retrieve multiple copies of the same pages unnecessarily, such as http://www.facebook.com/home.php [facebook.com] and http://www.facebook.com/home.php?ref=logo [facebook.com], wasting internet bandwidth and disk space at the same time.
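A sketch of what that costs a cache, and how a proxy or crawler could normalize it away (the list of parameters to strip is just an example):

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical list of purely decorative tracking parameters.
TRACKING_PARAMS = {"ref", "src", "campaign"}

def cache_key(url):
    """Normalize a URL by dropping tracking parameters so duplicates collapse."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), ""))

print(cache_key("http://www.facebook.com/home.php?ref=logo"))
print(cache_key("http://www.facebook.com/home.php"))
# Both map to http://www.facebook.com/home.php -> one cached copy instead of two.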
Re: (Score:3, Insightful)
You can't ever rely on the HTTP-Referer header to be there. Much of the time, it isn't; either the user has disabled it in his browser, or some Internet security suite strips it, or something. I'm amazed at the number of sites that use it for _authentication_!
Plus (Score:3, Insightful)
The HTTP-Referer isn't a replacement for ?ref=somesource.
Your stats software wants to know whether more people click through to your page via the logo (?ref=mylogo) or via a link in the story (?ref=story). The Referer can't give you that info.
The HTTP-Referer is also no good for aggregation. It only gives you a URL. If you didn't append something like ?campaign=longurl, it would be almost impossible to track things like ad campaigns.
HTTP-Referers *are* good for dealing with MySpace image leeches. If you haven't, I sugges
Really? (Score:2)
How many times are the original pages called? Is this really the resource hog?
What about compressing images, or trimming them to their final display resolution?
How about banishing the refresh tags that cause pages to refresh while otherwise inactive? Drudgereport.com is but one example where the page refreshes unless you browse away from it...
If you really want to cut down on bandwidth usage, eliminate political commenting and there will never be a need for Internet 2!
Re: (Score:2)
Replacing all the images with random links to adult sites would save considerable bandwidth and I doubt the users would notice the difference.
Wow. Just wow. (Score:4, Informative)
75 whole freaking megabits? WOWSERS!!!!
They must be doing gigabits for images, then. Complaining about the URLs is like complaining about the 2 watts your wall-wart uses when idle while running a 2kW air conditioner.
Re: (Score:2)
Re: (Score:2)
Even worse, it's like complaining about one person's wall-wart in an entire city of homes using air-conditioners.
Mental Masturbation (Score:5, Insightful)
This is a stupid exercise. Oh my gosh, there's an extra few characters wasted. They're talking about 150 characters, which would be 150 bytes, or (gasp) 0.150KB.
10 times the bandwidth could be saved by removing a 1.5KB image from the destination page, or doing a little added compression to the rest of the images. The same can be said for sending out the page itself gzipped.
We did this exercise at my old work. We had relatively small pages. 10 pictures per page, roughly 300x300, a logo, and a very few layout images. We saved a fortune in bandwidth by compressing the pictures just a very little bit more. Not a lot. Just enough to make a difference.
Consider taking 100,000,000 hits in a day. Bringing a 15KB image to 14KB would be .... wait for it .... 100GB per day saved in transfers.
The same can be said for conserving the size of the page itself. Badly written pages (and oh are there a lot of them out there) not only take up more bandwidth because they have a lot of crap code in them, but they also tend to take longer to render.
I took one huge badly written page, stripped out the crap content (like, do you need a font tag on every word?), cleaned up the table structure (this was pre-CSS), and the page loaded much faster. That wasn't just the bandwidth savings, that was a lot of overhead on the browser where it didn't have to parse all the extra crap in it.
I know they're talking about the inbound bandwidth (relative to the server), which is usually less than 10% of the traffic. Most of the bandwidth is wasted in the outbound bandwidth. That's all anyone really cares about. Server farms only look at outbound bandwidth, because that's always the higher number, and the driving factor of their 95th percentile. Home users all care about their download bandwidth, because that's what sucks up the most for them. Well, unless they're running P2P software. I know I was a rare (but not unique) exception, where I was frequently sending original graphics in huge formats, and ISO's to and from work.
Re: (Score:3, Informative)
It's actually not even 0.15KB, it's 0.146KB >;)
And 100 million hits at 1KB saved = 95.36GB saved.
You mixed up marketing kilos and in-use computer kilos: 1KB !== 1000 bytes, 1KB === 1024 bytes :)
Re: (Score:2)
Nah, I just never converted the KB (bytes) of file size and string size (8-bit characters are 1 byte) down to Kb/s (kilobits per second) for bandwidth measurement. :)
Re: (Score:2)
While you have a good point, your argument can be summed up as "I've already been shot, so it's okay to stab me."
Re: (Score:2)
Naw, it's more like, I'd rather be poked with that blunt stick than shot with a cannon. :)
Re:Mental Masturbation Try the new ebay (Score:3, Insightful)
eBay has "upgraded" their local site http://my.ebay.com.au/ and "my eBay" is now a 1-megabyte download. That's ONE MILLION BYTES to show about 7K of text and about 20 x 2KB thumbnails.
The best bit is that the .htm file itself is over half a megabyte. Then there are two 150K+ .js files and a 150K+ .css file.
Web "designers" should be for
Re: (Score:2)
This is a stupid exercise. Oh my gosh, there's an extra few characters wasted. They're talking about 150 characters, which would be 150 bytes, or (gasp) 0.150KB.
Perhaps, but I'm reminded of the time when I started getting into the habit of stripping Unsubscribe footers (and unnecessarily quoted Unsubscribe footers) from all the mailing lists (many high volume) that I subscribed to. During testing, I found the average mbox was reduced in size by between 20 and 30%.
If you accept the premise that waste is
what is that as a proportion? (Score:2)
Re: (Score:2)
tag: dropinthebucket (Score:5, Insightful)
Seriously. Long URLs as wasters of bandwidth? There's a Flash animation ad running at the moment (unless you're an ad-blocking anti-capitalist), and I would expect it uses as much bandwidth when I move my mouse past it as a hundred long URLs.
I'm not apologizing for bandwidth hogs... back in the dialup days (which are still in effect in many situations), I was a proud "member" of the Bandwidth Conservation Society [blackpearlcomputing.com], dutifully reducing my .jpgs instead of just changing the Height/Width tags. My "Wallpaper Heaven" website (RIP) pushed small tiling backgrounds over massive multi-megabyte images. But even then, I don't think a 150-character URL would have appeared on their threat radar.
It's a drop in the bucket. There are plenty of things wrong with 150-character URLs, but bandwidth usage isn't one of them.
Re: (Score:2)
lol, i used to run Wallpaper Haven :)
People came to me complaining, "it's not haven, it's heaven!" Ugh... I didn't know what "haven" meant :D
5kb per typed page (Score:2)
Besides, if not for those incredibly long and in-need-of-shortening URLs, how else would we be able to feed Rick Astley's music video YouTube link into tiny
Re: (Score:3, Interesting)
Actually, when I had my web page designed (going on 4 years ago), I specifically asked that all of the pages load in less than 10 seconds on a 56k dialup connection. That was a pretty tall order back then, but it's a good standard to try and hit. It's somewhat more critical now that there are more mobile devices accessing the web, and the vast majority of the country won't even get a sniff at 3G speeds for more than a decade. There is very little that can be added to a page with all the fancy programming we
Compared to what? (Score:2)
What's the percentage savings? Is it enough to care or is it just another fun fact?
Simplifying / minifying / consolidating JavaScript and reducing the number of sockets required to load a page would probably be more bang for the buck. Is it worth worrying about?
Re: (Score:2)
Simple JavaScript compression would probably save them 1000x as much as shortening URLs from 150 chars to 50.
Re: (Score:3, Interesting)
Interestingly (or maybe not), Google doesn't gzip their analytics javascript file...
absolute number and 'wasted' (Score:2)
Second, define 'waste'. Most rational people would argue that facebook i
Whatever (Score:2, Funny)
75 MBit/s? What's that in Libraries of Congress per decifortnight?
Re: (Score:2)
Giraffe
It's all about the evolution of "it" (Score:2)
Currently, we can apply said metaphor to internet connections. We started with JPEGs over low-baud modems. We then moved on to moving pictures we needed to download. They upped it to cable. Now we are at the point where fiber to the house is going to be needed in most situations.
Think how we've moved from dumb terminals to workstati
I can top that. Try the Globe and Mail! (Score:5, Interesting)
For an even more egregious example of web design / CMS fail, take a look at the HTML on this page [theglobeandmail.com].
$ wc wtf.html
12480 9590 166629 wtf.html
I'm not puzzled by the fact that it took 166 kilobytes of HTML to write 50 kilobytes of text. That's actually not too bad. What takes it from bloated into WTF-land is the fact that that page is 12,480 lines long. Moreover...
$ vi wtf.html
...shows that the first 1831 lines (!) of the page are blank. Attention Globe and Mail web designers: when your idiot print newspaper editor tells you to make liberal use of whitespace, this is not what he had in mind!
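For anyone who wants to repeat the exercise without vi, a small sketch that reports size, line count, and leading blank lines of a saved page (the filename follows the comment above):

def bloat_report(path):
    """Report size, line count, and leading blank lines of a saved HTML file."""
    with open(path, "rb") as f:
        data = f.read()
    lines = data.splitlines()
    leading_blank = 0
    for line in lines:
        if line.strip():
            break
        leading_blank += 1
    print(f"{path}: {len(data)} bytes, {len(lines)} lines, "
          f"{leading_blank} leading blank lines")

# bloat_report("wtf.html")   # the saved Globe and Mail page discussed above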
And that wouldn't have mattered... (Score:3, Insightful)
...except they aren't using mod_gzip/deflate. At first I thought you browsed the web RMS style [lwn.net] and maybe wc* didn't support compression** and you were just getting what you deserved***, but then I checked in firefox and lo and behold:
Response Headers - http://www.theglobeandmail.com/blogs/wgtgameblog0301/ [theglobeandmail.com]
Date: Fri, 27 Mar 2009 23:39:54 GMT
Server: Apache
P3P: policyref="http://www.theglobeandmail.com/w3c/p3p.xml", CP="CAO DSP COR CURa ADMa DEVa TAIa PSAa PSDa CONi OUR NOR IND PHY ONL UNI COM NAV INT DEM STA
Re:I can top that. Try the Globe and Mail! (Score:5, Interesting)
...the first 1831 lines (!) of the page are blank...Attention Globe and Mail web designers: When your idiot print newspaper editor tells you to make liberal use of whitespace, this is not what he had in mind!
Believe it or not, someone had it in mind. This is most likely a really, really stupid attempt at security by obscurity.
PHB: My kid was showing me something on our website, and then he just clicked some buttons and the entire source code was available for him to look at. You need to do something about that.
WebGuy: You mean the HTML code? Well, that actually does need to get transferred. You see, the browser does the display transformation on the client's computer...
PHB: The source code is our intellectual property!
WebGuy: ::whispering to WebGuy #2:: Just add a bunch of empty lines. When the boss looks at it, he won't think to scroll down much before he gives up. ::to PHB:: Fine. We'll handle it.
PHB: Ah, I see that when I try to look at the source it now shows up blank! Good work!
Re: (Score:3, Interesting)
Wow. Judging by the patterns that I see in the "empty" lines, it looks like their CMS tool has a bug in it that is causing some sections to overwrite (or in this case, append instead).
I'd bet that every time they change their template, they are adding another set of "empty" lines here and there, rather than replacing them.
Customer bulletin (Score:5, Funny)
In order to maximize the web experience for all customers, effective immediately all websites with URLs in excess of 16 characters will be bandwidth throttled.
Sincerely,
Comcast
An elliptic assault on Net Neutrality. (Score:3, Interesting)
It's not the URL in the GET, it's URLs in the HTML (Score:2, Insightful)
I hope this is obvious to most people here, but reading some comments, I'm not sure, so...
The issue is that a typical Facebook page has 150 links on it. If you can shorten *each* of those URLs in the HTML by 100 characters, that's almost 15KB you knocked off the size of that one page. Not huge, but add that up over a visit, and for each visit, and it really does add up.
I've been paying very close attention to URL length on all of my sites for years, for just this reason.
Better idea (Score:5, Funny)
Just use a smaller font for the URL!
No (Score:5, Insightful)
No. But this article is.
Focusing on the wrong problem... (Score:4, Insightful)
Isn't Facebook itself the huge waste of bandwidth as opposed to just the verbose URLs it generates?
Notsomuch (Score:2)
Usability is the problem, not bandwidth (Score:3, Insightful)
The problem isn't bandwidth, it is that long URLs are a pain from a usability standpoint. They cause problems in any context where they are spelled out in plain text (instead of being hidden as a link). For example, they often get broken in two when sent in plain text email. When posting a URL into a simple forum that only accepts text (no markup), a long URL can blow out the width of the page.
Where does this problem come from? It comes from SEO. Website operators realized that Google and other search engines were taking URLs into account, so CMSs and websites switched from using simple URLs (like a numeric document ID) to stuffing whole article titles into the URL to try to boost search rankings. One of the results of this is that when someone finds a typo in an article title and fixes it, the CMS either creates a duplicate page with a slightly different URL, or the URL with the typo ends up giving a 404 error and breaks any links that point to it.
What I don't understand is why search engines bother to look at anything beyond the domain name when determining how to rank search results. How often do you see anything useful in the URL that isn't also in the <title> tag or in a <h1> tag? If search engines would stop using URLs as a factor in ranking pages, people would use URLs that were efficient and useful instead of filling them with junk. The whole thing reminds me of <meta> keyword tags -- to the extent that users don't often look at URLs while search engines do, website operators have an opportunity to manipulate the search engines by stuffing them with junk.
Acrnym (Score:3, Funny)
To fthr sav our bdwdth, ^A txt shld be cmprsd into acrnyms!
Re: (Score:2)
This is precisely the kind of thing that makes me annoyed with people in general... if people didn't do this all the time (complain about the dike leaking a drip when six feet over there's a ten-foot hole) I wouldn't be so anti-social.