The Internet

Netscape Restores RSS DTD, Until July

Randall Bennett writes "RSS 0.91's DTD has been restored to its rightful location on my.netscape.com, but it'll only stay there till July 1st, 2007. Then, Netscape will remove the DTD, which is loaded four million times each day. Devs, start your caching engines."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Redirect (Score:3, Insightful)

    by cynicalmoose ( 720691 ) <giles.robertson@westminster.org.uk> on Wednesday January 17, 2007 @11:30AM (#17646718) Homepage
    And they can't set up a redirect to the new hosting location?
  • Re:Redirect (Score:3, Insightful)

    by Otter ( 3800 ) on Wednesday January 17, 2007 @11:44AM (#17646978) Journal
    Wouldn't they then be serving 4 million redirects per day? The point is that they need to eventually break it to make people stop relying on that path.
  • Re:Redirect (Score:5, Insightful)

    by werewolf1031 ( 869837 ) on Wednesday January 17, 2007 @11:45AM (#17646986)
    And they can't set up a redirect to the new hosting location?
    What in the world would be the point? That would merely duplicate the problem to a different location. As was clearly stated in the article by Mr. Finke, four million hits every day is a crapload of bandwidth wasted re-downloading a file that will never change. The RSS 0.91 spec is finished, complete, and yes, for all intents and purposes, written in stone. Stop looking at it every damned day. It will not change. Ever. It's truly stupid for client-side software to be accessing it over the Internet to read its forever-static contents. That's like checking the writings of a dead poet every day to see if anything's changed.

    And any dev who codes his app to check a file like this every day instead of caching it client-side should be smacked oh-my-god-so-frickin-hard.
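
As an illustration of the client-side caching the parent comment calls for, here is a minimal sketch in Python using lxml (assumed available); the DTD URL and the local file name are illustrative stand-ins rather than confirmed values:

    # Resolve the RSS 0.91 DTD from a bundled local copy instead of the network.
    from lxml import etree

    NETSCAPE_DTD_URL = "http://my.netscape.com/publish/formats/rss-0.91.dtd"  # assumed URL
    LOCAL_DTD_PATH = "rss-0.91.dtd"  # copy shipped with the application

    class CachedDTDResolver(etree.Resolver):
        """Serve the DTD from disk so the parser never hits my.netscape.com."""
        def resolve(self, system_url, public_id, context):
            if system_url == NETSCAPE_DTD_URL:
                return self.resolve_filename(LOCAL_DTD_PATH, context)
            return None  # defer to default resolution for anything else

    parser = etree.XMLParser(load_dtd=True, no_network=True)
    parser.resolvers.add(CachedDTDResolver())
    feed = etree.parse("feed.xml", parser)  # DTD is read locally, not re-downloaded
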
  • by 140Mandak262Jamuna ( 970587 ) on Wednesday January 17, 2007 @11:51AM (#17647046) Journal
    No one ever writes a new XML (or most other Web 2.0) application from scratch. They all take an app they are familiar with and modify it to do new things. And some of the initial boot-strap processes are never looked into; an "if it works, don't mess with it" attitude is pervasive. So someone long ago, maybe in a galaxy far away, wrote an application that was replicated and mutated by developers, and others took it and made more mutations, and it propagated. One side effect of this and similar cut&paste code development tactics is that bugs, security holes, inefficient algorithms, and brain-dead implementations also propagate.

    Richard Dawkins asks this very fundamental question: why reproduce (sexually or asexually) using seeds and embryos? Why not propagate by cuttings and cloning? It happens in nature. Many fern-like plants do it. Bananas have been reproducing by new shoots. Then he discusses how harmful mutations propagate too, and how going back to basics and recreating the embryo selects for the beneficial mutations and puts a check on deleterious mutations. Books: The Selfish Gene, Climbing Mount Improbable.

  • Re:I don't get it (Score:3, Insightful)

    by Anonymous Coward on Wednesday January 17, 2007 @12:25PM (#17647566)
    PUBLIC doctypes simply give the URI of the DTD, and are expected to always resolve to the same content. But there's no requirement that you use the default resolver.
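
To illustrate the point about not using the default resolver, here is a rough sketch with the Python standard library's SAX interface; the public identifier string and the exact behavior of the default expat backend for external DTD subsets are assumptions worth verifying:

    # Map the well-known PUBLIC identifier to a local file; never dereference the URI.
    import xml.sax
    from xml.sax import handler

    RSS_091_PUBLIC_ID = "-//Netscape Communications//DTD RSS 0.91//EN"  # assumed value

    class LocalEntityResolver(handler.EntityResolver):
        def resolveEntity(self, publicId, systemId):
            if publicId == RSS_091_PUBLIC_ID:
                return "rss-0.91.dtd"  # local copy; resolved by the parser
            return systemId  # default behaviour: hand back the original system ID

    parser = xml.sax.make_parser()
    parser.setContentHandler(handler.ContentHandler())
    parser.setEntityResolver(LocalEntityResolver())
    # External entity processing must be enabled for the resolver to be consulted.
    parser.setFeature(handler.feature_external_ges, True)
    parser.parse("feed.xml")
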
  • by Anonymous Coward on Wednesday January 17, 2007 @12:48PM (#17648024)
    It seems to me that the ability to track the source and destination address of (nearly) every website viewed would be a huge financial gain to companies willing to sell that information. Netscape (read AOL) never really struck me as a "feel good, do good" company and I am surprised that they would not try to profit off of this. I distinctly remember thinking of this as a motive back when they declared everyone must use their DTD in the first place.
  • by kabdib ( 81955 ) on Wednesday January 17, 2007 @01:09PM (#17648386) Homepage
    This is why, whenever I hear the words "architecture" and "web" in the same sentence, I snicker. Impolite, but OMFG, who designed this junk?

    Oh, right. Nobody, really. It's amazing it works at all (... and sometimes it doesn't!)

    Dijkstra's quip, "If programmers built houses the way they build programs, the first woodpecker to come along would topple civilization," was and remains insightful.
  • they don't (Score:3, Insightful)

    by jonasj ( 538692 ) on Wednesday January 17, 2007 @01:39PM (#17648898)
    until this story broke, I didn't realize they still existed.
    They don't. They haven't existed since 2003. AOL is just using the name for a portal and IIRC a dial-up ISP service.

    http://www.google.com/search?q=%22Brand+Necrophilia%22&safe=off [google.com]
  • Re:Redirect (Score:3, Insightful)

    by Albanach ( 527650 ) on Wednesday January 17, 2007 @01:41PM (#17648908) Homepage
    To be fair, the article points out that they have already put in place a redirect.

    They point out that it might not be entirely sensible for millions of newsreaders to rely upon downloading a static file from the web each time they open a feed. Most newsreaders (like the one built into Firefox) use a local cached copy.

    They restored the file so these newsreaders will continue to work for a period long enough that they can be altered to use a local copy.

    Whether it's reasonable or not for them to remove the file is, I guess, up to the reader to decide. Personally though, I think it's a fair point that you should never rely on a file hosted on a server which you have no control over - the file can be altered, vandalised, or in this case simply removed without warning and without you being able to do anything about it.
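
A rough sketch of the "keep a local copy, never depend on the remote file" approach the parent describes, in Python with only the standard library; the URL and cache path are illustrative:

    # Fetch the static DTD at most once and fall back gracefully if it is gone.
    import os
    import urllib.request

    DTD_URL = "http://my.netscape.com/publish/formats/rss-0.91.dtd"  # assumed URL
    CACHE_PATH = os.path.expanduser("~/.cache/myreader/rss-0.91.dtd")

    def local_dtd_path():
        """Return a cached local copy of the DTD, or None if none can be obtained."""
        if os.path.exists(CACHE_PATH):
            return CACHE_PATH
        try:
            os.makedirs(os.path.dirname(CACHE_PATH), exist_ok=True)
            with urllib.request.urlopen(DTD_URL, timeout=10) as resp:
                data = resp.read()
            with open(CACHE_PATH, "wb") as out:
                out.write(data)
            return CACHE_PATH
        except OSError:
            return None  # caller should parse without DTD validation in this case
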
  • by Anonymous Coward on Wednesday January 17, 2007 @01:52PM (#17649088)
    You need to put a certain DTD URI into your documents because they essentially act like "magic cookie" values in binary file formats. It's the only way to tell if you're supposed to treat a document as HTML 1.0, 2.0, 3.0, 4.01, XHTML, HTML strict, HTML transitional, whatever. That information isn't encoded in the DTD, so there's no way to identify a file format simply by pointing at a random location with the identical DTD.

    The point of the URI is to act as an opaque identifier for a particular file format. Being able to fetch it is just a bonus, and a good programmer shouldn't rely on the resource being there at run time. URIs are used because the domain name system already delegates responsibility for namespaces; a different scheme could be used, but using DNS leverages the existing infrastructure. It's not perfect (as the RSS 0.91 example shows), but it works 90% of the time.
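
For illustration, treating the declared identifiers purely as opaque format tags, as the parent describes, might look like this in Python with lxml; the mapping entries are examples, not an exhaustive or verified list:

    # Read the DOCTYPE identifiers without ever dereferencing the DTD URI.
    from lxml import etree

    KNOWN_FORMATS = {
        "-//Netscape Communications//DTD RSS 0.91//EN": "rss-0.91",   # assumed ID
        "-//W3C//DTD XHTML 1.0 Strict//EN": "xhtml-1.0-strict",
    }

    def sniff_format(path):
        docinfo = etree.parse(path).docinfo  # default parser does not load the DTD
        return KNOWN_FORMATS.get(docinfo.public_id, "unknown")
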
  • by Kelson ( 129150 ) * on Wednesday January 17, 2007 @03:08PM (#17650358) Homepage Journal
    Sending Expires and Cache-Control headers [slashdot.org] that say "Don't bother retrying for 3 years" might help mitigate some of the bandwidth waste.

    That said, he's got a point that the feed readers should work if the DTD isn't retrievable -- but deliberately removing it looks like a great way to say "Netscape isn't reliable."
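
A minimal sketch of serving the file with far-future caching headers, as the parent suggests, using Python's wsgiref; the three-year lifetime comes from the comment above and the file name is a stand-in:

    # Serve a static DTD with Expires/Cache-Control so clients stop re-fetching it.
    import time
    from wsgiref.handlers import format_date_time
    from wsgiref.simple_server import make_server

    THREE_YEARS = 3 * 365 * 24 * 3600

    def app(environ, start_response):
        with open("rss-0.91.dtd", "rb") as f:
            body = f.read()
        start_response("200 OK", [
            ("Content-Type", "application/xml-dtd"),
            ("Cache-Control", "public, max-age=%d" % THREE_YEARS),
            ("Expires", format_date_time(time.time() + THREE_YEARS)),
        ])
        return [body]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()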
