XHTML 2 Cancelled

Jake Lazaroff writes "According to the W3 News Archive, the charter for the XHTML2 Working Group — set to expire on December 31st, 2009 — will not be renewed. What does this mean? XHTML2 will never be a W3C recommendation, so get on the HTML 5 bandwagon now. According to the XHTML FAQ, however, the W3C does 'plan for the XML serialization of HTML to remain compatible with XML.' Looks like with HTML 5, we'll get the best of both worlds."
This discussion has been archived. No new comments can be posted.

  • Good (Score:5, Informative)

    by orta ( 786013 ) on Friday July 03, 2009 @11:37AM (#28572213) Homepage Journal
    I know a lot of web developers who don't know the difference between XHTML and HTML, and I hear XHTML as a buzzword all the time. The less confusion the better, in my opinion. The HTML5 spec is quite readable, but if you've not taken a stab at working with HTML5 (it runs in all browsers) yet, this article should be pretty useful: http://www.phpguru.org/static/html5 [phpguru.org]
    • I know a lot of web developers who don't know the difference between XHTML and HTML

      I've used both, and I prefer XHTML because of its strictness and lowercase tags. Maybe there are only a few people it bothers, but all-caps tags bother me. I also like that XHTML separates content from structure. I don't know much about HTML5, but I hope it includes these.

      The HTML5 spec is quite readable, but if you've not taken a stab at working with HTML5 (it runs in all browsers) yet this article sho

      • HTML5 is very similar to XHTML. I think the only major difference in the code is the self-closing tags. Just like XHTML, most attributes that have to do with styling have been deprecated.
      • The thing is, you're not really serving XHTML to the browser, the browser still interprets it as text/html. The DOCTYPE does nothing except trigger standards mode in IE.

        Unless you're actually getting your server to send Content-Type: application/xhtml+xml in the response header (which IE6 can't handle, so nobody does it), the browser just treats it as malformed HTML (technically, <br /> is invalid HTML).

        I code in XHTML "style" (lowercase, self-closing tags, etc) as well, but "strictness" doesn't real

        • If you're going to send the header, you might as well detect what it supports first, and send application/xhtml+xml if it's supported. It's a simple stristr() on HTTP_ACCEPT. I've been doing it on all of my sites since I started with PHP, and do it with Perl and Python as well.
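The stristr()-on-HTTP_ACCEPT idea above, sketched in Python rather than the poster's PHP (the function name is mine, and a thorough implementation would also honour q-values in the Accept header):

```python
def pick_content_type(accept_header):
    # Serve application/xhtml+xml only to clients that advertise it
    # (IE6 never does, so those fall back to text/html).
    if "application/xhtml+xml" in accept_header.lower():
        return "application/xhtml+xml"
    return "text/html"
```

A Firefox-style Accept header selects the XML type; an old-IE header does not.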

      • by Homburg ( 213427 )

        I've used both, and I prefer XHTML because of its strictness and lowercase tags. Maybe there are only a few people it bothers, but all-caps tags bother me. I also like that XHTML separates content from structure.

        Neither of these is unique to XHTML. HTML is case-insensitive, so you can use lowercase tags if you wish, and XHTML 1 has exactly the same semantics as HTML 4.01, so XHTML and HTML are equally strict and separate content from structure to exactly the same extent.
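The case-insensitivity point is easy to check with a lenient parser. A quick Python sketch (the example markup and class name are mine, not the poster's) shows mixed-case HTML 4-style tags reported as the same lowercase names XHTML would require:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record the tag names the parser reports."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
p.feed("<P>hello <B>world</B></P>")  # shouty old-school markup
# p.tags now holds the normalised, lowercase tag names.
```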

    • Re: (Score:3, Funny)

      by schon ( 31600 )

      I know a lot of web developers who don't know the difference between XHTML and HTML, and I hear XHTML as a buzzword all the time.

      Duh. Everyone knows that the "X" is for Xtreme! It's Xtreme HTML, right?

      • Duh. Everyone knows that the "X" is for Xtreme! It's Xtreme HTML, right?

        Brought to you by Doritos, Mountain Dew and the W3C.

    • Re:Good (Score:5, Insightful)

      by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Friday July 03, 2009 @01:20PM (#28573209)

      The key point is that, while HTML5 is based on the superior SGML (because of more freedom), XHTML had started to enforce strictness and cleanness. This meant the browser did not have to support a ton of typos just because the editor was a freakin' lazy ass. Imagine a compiler that would eat any typo. Missing brackets, braces, semicolons, object-function separators, completely meaningless semantic messes. HTML4 browsers eat it all.

      It is horrible, and actively supports the dumbing down of people. (Those who want to write websites.)
      Face it: if they have to, they will learn it. Nobody is too stupid for that. Some just tell themselves so often that they are stupid that they actually become stupid. But this can be reversed in exactly the same way. (Ask any psychotherapist about self-fulfilling prophecies.)

      Another great feature of XHTML was its modularity and cross-language features.
      You could integrate XHTML, SVG, MathML, etc., into one document. Imagine a P tag inside an SVG circle, containing a math formula, and you begin to understand the sheer power of that concept.
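That compound-document idea is visible to any namespace-aware XML parser. A toy Python sketch (the markup here is illustrative, not a real page): because each vocabulary carries its own namespace, one generic parser walks XHTML and SVG in a single tree.

```python
import xml.etree.ElementTree as ET

# XHTML wrapping SVG, each vocabulary in its own XML namespace.
doc = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p>A circle:</p>
    <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20">
      <circle cx="10" cy="10" r="10"/>
    </svg>
  </body>
</html>"""

tree = ET.fromstring(doc)
# Address the SVG element by its namespace, straight through the XHTML.
circle = tree.find(".//{http://www.w3.org/2000/svg}circle")
```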

      Now if they implement HTML5 right, and we get the same cleanness that XHTML 1.1 had (Strict only. No transitional shit.), and they add cross-language abilities too (through SGML), then I'm all for it!
      But if not, this could be a huge step backwards, into the web development mess of IE6 times!

      • Re:Good (Score:5, Interesting)

        by Reaperducer ( 871695 ) on Friday July 03, 2009 @01:50PM (#28573501)

        Imagine a compiler that would eat any typo. Missing brackets, braces, semicolons, object-function separators, completely meaningless semantic messes. HTML4 browsers eat it all.

        So, what you're saying is that the computer works for people instead of the other way around?

        • Re:Good (Score:4, Insightful)

          by css-hack ( 1038154 ) on Friday July 03, 2009 @03:03PM (#28574101)

          But by working that way, the computer encourages people to create unreadable messes that other developers can't easily understand.

          Simpler parsing rules are more a boon for the people than for the computers. Think about it.

        • Re:Good (Score:5, Insightful)

          by ultranova ( 717540 ) on Friday July 03, 2009 @04:40PM (#28574887)

          So, what you're saying is that the computer works for people instead of the other way around?

          No, what it means is that the computer tries to guess what some dyslexic jackass who insists on writing code despite being functionally illiterate and proud of it meant. Since we have no sentient computers yet, and since the markup diarrhea these people produce would be challenging even for a human to decrypt, the task is hopeless, and the websites that result will break in fascinating ways with each new browser version, or whenever whoever visits them has a different screen resolution than the "designer", or the stars are not just right. And whenever that happens, the website gets replaced by a new, equally broken version, and the designer gets paid for delivering said abomination.

          And of course whenever the browser fails to extract meaning from the chaos that would horrify even Cthulhu, it's the user who gets blamed: he didn't use the right version of the right browser, running at the right resolution, with the right versions of the right plugins installed. That, or he has Linux installed on another and unrelated machine.

      • Re:Good (Score:4, Insightful)

        by moderatorrater ( 1095745 ) on Friday July 03, 2009 @03:25PM (#28574273)

        It is horrible, and actively supports the dumbing down of people.

        This is where I take issue with your argument. I completely agree that having the page break catastrophically when there was an error would be easier and better for people who design websites professionally (like me), especially in this day and age.

        However, I don't believe that it supports the dumbing down of people, I believe it support less knowledgeable users. To use the compiler as an example, when my sister-in-law learned programming, she learned Java; to get to the point where she could do basic things like "hello world," she had to instantiate objects and call functions. My wife learned with php, and she had to type one line, a command and a string. The barrier for entry was much lower and she was rewarded much faster, thereby gaining a greater desire to learn more.
        At the time, browsers accepting incorrect HTML followed the same philosophy: you lower the barrier to entry. When someone writes a lot of web pages, they tend to become more knowledgeable, not less. There are exceptions that make everyone serious about the craft cringe and want to beat their heads against a brick wall, but for the most part skills are progressing. I don't know whether the web landscape would be better or worse right now if they'd required strict HTML for every web site, but I can tell you that a lot of people who were enthusiastic supporters and creators of web pages in the early days wouldn't have gone down that route had the barrier to entry been higher.

        • Re: (Score:3, Insightful)

          by Waccoon ( 1186667 )

          I blame development tools.

          Every web browser should have a development mode where it will tell you about simple syntax problems with HTML, CSS, JavaScript, and so on. This should have been standard since day one! I mean, really simple things that won't bloat the browser. Complex things like validation can be handled with extensions, like the Web Developer and HTML Validator extensions for Firefox.

          Most people I know who do web development aren't aware of a simple typo here and there, and it's hard to vali

      • Re: (Score:3, Insightful)

        Imagine a compiler that would eat any typo. Missing brackets, braces, semicolons, object-function separators, completely meaningless semantic messes.

        Must... resist... must... resist...PHP! Bloody PHP! Bloody E_NOTICE!

        Oh dear, there goes my karma...

        • Re: (Score:3, Insightful)

          Must... resist... must... resist...PHP! Bloody PHP! Bloody E_NOTICE!
          Oh dear, there goes my karma...

          In an attempt to preserve your "karma", I give you a solution:

          function errorHandler($code, $message, $filename, $line) {
              die($code . ': ' . $message . ' at ' . $filename . ' (' . $line . ').');
          }

          set_error_handler('errorHandler');


          You know, less talk, more action ;).

          P.S.: You could also throw an exception, which is the most convenient option, as you can handle the errors in some cases.

      • Re: (Score:3, Informative)

        by Ant P. ( 974313 )

        HTML 5 is based on the DOM. The HTML4-compatible syntax is defined from scratch; it isn't based on SGML, because no web browser actually parses SGML correctly. Most of them don't do HTML 4.01 fully, for that matter (IE doesn't do simple things like <q>, Moz doesn't support all the weird table-column align stuff...).

      • Re: (Score:3, Informative)

        by Tacvek ( 948259 )

        HTML5 comes in two forms.

        It comes in an SGML-inspired format that is not strictly SGML but matches real-world HTML almost exactly. The big difference from HTML4, besides the new tags, is that it does not use a DTD, nor does it support the shorttag features of SGML, with the exception of the short attribute feature.

        Thus "<title/</<body/" (yes, that has three open brackets, zero close brackets, and three slashes) is not valid HTML5, despite being valid HTML4 (at least once you add the DTD).

        There is also a

      • Now if they implement HTML5 right, and we get the same cleanness that XHTML 1.1 had (Strict only. No transitional shit.), and they add cross-language abilities too (through SGML), then I'm all for it!

        1. There is an XML mode for HTML5, see HTML vs. XHTML [whatwg.org]. HTML5 even uses the same xmlns="http://www.w3.org/1999/xhtml" namespace.
        2. HTML5 tries to define exactly how a browser should handle the billions of unclean documents out there. This will help browser interoperability in the real worldwide web of garbled HTML, and has huge benefits for scripts parsing HTML, because the DOM contents after reading in HTML should be somewhat similar in different browsers.
        3. Despite this, HTML5 specifies very clearly how
      • Re: (Score:3, Interesting)

        by Blakey Rat ( 99501 )

        The key point is that, while HTML5 is based on the superior SGML (because of more freedom), XHTML had started to enforce strictness and cleanness. This meant the browser did not have to support a ton of typos just because the editor was a freakin' lazy ass. Imagine a compiler that would eat any typo. Missing brackets, braces, semicolons, object-function separators, completely meaningless semantic messes. HTML4 browsers eat it all.

        Totally wrong. One of the most important rules in software is: "be liberal in

        • Re: (Score:3, Insightful)

          by Draek ( 916851 )

          In the ideal world, software would *do what I mean*, not *do what I say*.

          No it shouldn't, and the reason is quite simple. And no, it's not 'elitism' or any of those red herrings you're throwing.

          Lack of formalized languages has done enough harm elsewhere; you can take a relatively complex phrase in English and two people will come up with two different meanings for it. Perhaps they'll only differ slightly, perhaps not, but chances are they won't be perfectly interchangeable. Extrapolate that to software, and you have pretty much the same situation as today only worse: IE interpret

  • XHTML merged (Score:3, Interesting)

    by werfu ( 1487909 ) on Friday July 03, 2009 @11:41AM (#28572257)
    They should never have created XHTML. They should have XMLized HTML in the first place. But XHTML has corrected many wrong things that HTML developers used to do. Now, HTML5 should simply pick up the best of both worlds while still being XML-compliant.
    • Re:XHTML merged (Score:5, Informative)

      by RaceProUK ( 1137575 ) on Friday July 03, 2009 @11:47AM (#28572317)

      They should have XMLized HTML in the first place.

      They did. It's called XHTML.

      Unless you mean XML-ise HTML 3.2 or earlier, but I believe XML didn't exist back then.

      • Re: (Score:3, Interesting)

        They should have XMLized HTML in the first place.

        They did. It's called XHTML.

        And now it's failed. What does that tell us?

        • Re:XHTML merged (Score:4, Informative)

          by Ant P. ( 974313 ) on Friday July 03, 2009 @04:40PM (#28574891)

          That most web page authors are too incompetent to even follow XML's validity rules, let alone HTML's?

        • by Tacvek ( 948259 )

          XHTML 2 is being canceled not because it failed, but because the only advantage over XHTML 1 was being more modular, which nobody really cared about. Besides, HTML5 will define XHTML5, which will be a significant improvement on XHTML 1.

    • Ditto. XHTML is just another "combined technology" term like DHTML (although standardised), imho; it was an incomplete compromise between two still-developing technologies.

      XHTML's demise was a natural one. HTML is the foremost "static" web language, and has been for decades already; it is only natural that the "best" of other lesser-used languages be integrated into it to make a more capable whole.

    • Re: (Score:3, Insightful)

      by sakdoctor ( 1087155 )

      XHTML would have been a great standard.

      When fed invalid XHTML, the browser chokes, which would have gone a long way to eliminating much of the crap code, and crap "web developers", out there.
      I don't see why it's the browser's business to be THAT lenient, and second-guess the developer all the time.

      • Re: (Score:2, Interesting)

        by DoktorSeven ( 628331 )

        Exactly. XHTML is not that hard to get right, and it makes a web page "clean" in that there doesn't have to be any guessing going on in the browser to figure out what a page designer wants.

        The best thing in the world would have been browsers adopting a rigid HTML standard to begin with, and simply saying "Sorry, this page has invalid HTML" on bad pages.

        I can dream, can't I?

        • Re: (Score:3, Informative)

          by sakdoctor ( 1087155 )

          Yes, we can dream.
          How lazy do you have to be not to close your tags, and nest them properly? It's a low barrier, but given people's infinite laziness, they will write their code until it is just not-too-crappy to render in IE. Then call it a day.

          A strict standard would also give MS less wiggle room to subvert the standard in their IE implementation of it.

        • Re: (Score:2, Insightful)

          by Tanktalus ( 794810 )

          Getting a web page clean is a hard problem ... when you accept user input in something approaching HTML format, like /. does. Or we can all be forever subjected to incomplete wiki-style markup that can only do about half of what the user wants. I find myself constantly going back to HTML in MediaWiki to get the formatting I want - whether MediaWiki supports it or not, I don't know, because at some point the wiki markup gets to be just as convoluted and hard-to-read as HTML, so I use HTML. Other times, I k

          • Re:XHTML merged (Score:4, Informative)

            by pizzach ( 1011925 ) <pizzach@gmail.EULERcom minus math_god> on Friday July 03, 2009 @01:26PM (#28573257) Homepage

            Getting a web page clean is a hard problem ... when you accept user input in something approaching HTML format, like /. does

            No it is not. Have PHP run the user input through tidy. Even if it doesn't display as the user wanted in their browser, at least it displays consistently between browsers, which is more important imho. Just go. Install it now in PHP [php.net]. Seriously, if you are not checking HTML code coming in from users, something is not right. They could destroy your page with some of those unclosed tags.

          • Re:XHTML merged (Score:5, Insightful)

            by TheRaven64 ( 641858 ) on Friday July 03, 2009 @01:50PM (#28573499) Journal

            Getting a web page clean is a hard problem ... when you accept user input in something approaching HTML format, like /. does.

            No it isn't. You should not ever, ever, be inserting user-provided HTML directly into a document. If you do, then well done, you've just created a cross-site scripting vulnerability. And you've let pranksters submit <!-- and hide half of your page.

            The correct way of handling user-provided HTML is to parse it with an HTML parser, construct a DOM tree, walk it stripping out any tags that aren't in your whitelist, and then use the result. Most of the time, you want to run it through a very relaxed HTML parser, because hand-typed HTML in a web form is likely to be full of errors. When you dump the DOM tree as HTML, it can be XHTML 1, HTML 3.2, or any other dialect you want.
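A minimal sketch of that parse-and-whitelist approach, using Python's lenient stdlib parser (the tag whitelist and class name are mine; a production sanitizer must also filter attributes and URLs, which this deliberately does not):

```python
from html.parser import HTMLParser

class TagWhitelister(HTMLParser):
    """Re-emit only whitelisted tags; everything else is dropped."""
    ALLOWED = {"b", "i", "em", "strong", "p", "a"}

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.ALLOWED:
            self.out.append("<%s>" % tag)  # attributes deliberately dropped

    def handle_endtag(self, tag):
        if tag in self.ALLOWED:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        self.out.append(data)

def sanitize(html):
    p = TagWhitelister()
    p.feed(html)
    p.close()
    return "".join(p.out)
```

A hostile or unclosed tag is simply never emitted, so it can no longer swallow the rest of the page; comments are dropped too, because no handle_comment is defined.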

            • Seconded. Drupal doesn't parse it directly, but it does use regex replacement to strip out tags an administrator declares unacceptable (usually everything except tables, lists, the basic formatting tags, and links). The only user who defaults to having full HTML (unstripped) is the root user, and you still have to enable another module to allow direct insertion of PHP.

          • But when people put <a href="http://www.goatse.cx"> in the comments without closing the tag, you can end up with the rest of the page being a hyperlink to some horrible picture. That's not a good thing, and you need to do some checking to stop that from happening.

        • Re: (Score:3, Informative)

          by maxume ( 22995 )

          A key feature of HTML5 is that it specifies the algorithms to use when normalizing poorly formed markup. It doesn't eliminate ambiguous cases, but it gets rid of many of them, meaning that the presentation and DOM should almost always be the same, regardless of the browser.

      • Re:XHTML merged (Score:5, Insightful)

        by Phroggy ( 441 ) <slashdot3@@@phroggy...com> on Friday July 03, 2009 @12:55PM (#28572969) Homepage

        XHTML would have been a great standard.

        When fed invalid XHTML, the browser chokes, which would have gone a long way to eliminating much of the crap code, and crap "web developers", out there.
        I don't see why it's the browser's business to be THAT lenient, and second-guess the developer all the time.

        The problem is, a lot of web pages today are not a single coherent document, they're a bunch of little code fragments concatenated together (template, content, advertising, etc.). When coders get sloppy, this can result in invalid markup. When browsers choke, the content producer may have no idea how to fix the problem - it may not even be their problem.

        What HTML5 tries to do is clearly define exactly how broken markup is supposed to be handled, so all browsers can try to "second guess the developer" in exactly the same way.

        Kudos to Firefox for reigniting the browser war. In Browser War 2.0, all the major players are striving toward standards compliance, trying to bring their behavior in line with a single unified goal instead of adding their own proprietary features to HTML itself. Five years from now, when IE6 and IE7 are as distant a memory as IE4 and IE5 are now, web development is going to be a lot easier.

        • by trifish ( 826353 )

          trying to bring their behavior in line with a single unified goal instead of adding their own proprietary features to HTML itself.

          I guess that's why Mozilla implemented support for the Ogg Theora codec with the <video> tag? Because that's not in any standard. Firefox 3.5 added a proprietary extension that is not based on any existing standard.

          Drafts can change any time. HTML5 is nothing but a draft now.

          • by trifish ( 826353 )
            Hmm, I posted the message as Plain Text. Yet Slashdot stripped the <video> tag from the sentence.
            • by Phroggy ( 441 )

              Slashdot's idea of "plain text" differs from that of any rational human. The option you're looking for is labeled "extrans".

    • Re: (Score:3, Interesting)

      Agreed. XHTML was rather pointless. It didn't add any particularly interesting features, made pages more difficult to author, and its claim that it made life easier for browser authors was belied by poor support and slow rendering. Making things more "XMLish" with closed tags and quoted attributes was a good idea, but in reality writing XML-conformant CSS/JavaScript was a pain in the butt and usually not done.

      I suppose XHTML might have been useful as part of a document management/transformation system,

      • Re: (Score:2, Informative)

        by xorsyst ( 1279232 )

        We made great use of it once in an internal web-based system. There was a command-line client that basically just did a GET/POST and then parsed the XHTML with an XML parser to display the output, which made implementing that a doddle. Coding the website to be XHTML-compliant added very little overhead, much less than defining a whole separate SOAP service or similar.
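The client described above can be sketched in a few lines of Python (the page markup, function name, and choice of table cells are illustrative, not the original system's): because the response is well-formed XHTML, a plain XML parser suffices and no tag-soup HTML parser is needed.

```python
import xml.etree.ElementTree as ET

XHTML = "{http://www.w3.org/1999/xhtml}"

def extract_cells(xhtml):
    # Parse the XHTML response with a generic XML parser and pull out
    # the table cells the command-line client wants to display.
    root = ET.fromstring(xhtml)
    return [td.text for td in root.iter(XHTML + "td")]

# What a GET against such a system might return (toy example).
page = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body><table><tr><td>job1</td><td>done</td></tr></table></body>
</html>"""
```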

        • Yes, that sounds like a perfect application for XHTML.

          I wish more people were arguing along your lines instead of simply claiming that it's Better(TM) [actually it slows down page loads considerably while offering no new features], or wishfully thinking it would eliminate their competition for jobs.

      • Re: (Score:3, Insightful)

        by PenguSven ( 988769 )

        Agreed. XHTML was rather pointless. It didn't add any particularly interesting features, made pages more difficult to author, and its claim that it made life easier for browser authors was belied by poor support and slow rendering. Making things more "XMLish" with closed tags and quoted attributes was a good idea, but in reality writing XML-conformant CSS/JavaScript was a pain in the butt and usually not done. I suppose XHTML might have been useful as part of a document management/transformation syst

        • Re: (Score:3, Insightful)

          > The ability to parse a web document using native XML methods is pointless?

          In the general sense, yes. Web documents are nearly always served to web browsers, and every single web browser does a faster and better job of parsing HTML than XHTML.

          As I mentioned, there certainly are cases when XML can be useful, but the usual situation of serving content to end users isn't one of them.

      • Re: (Score:3, Insightful)

        Anyone too lazy to code nice neat xhtml shouldn't be allowed to create web pages.
        • Re: (Score:3, Interesting)

          by Haeleth ( 414428 )

          Lazy? Writing messy HTML takes more effort than writing clean XHTML. If you use a decent editor -- one that can take advantage of the structure and parseability of XML to provide validation, auto-completion, etc. on the fly -- then XHTML practically writes itself.

        • Re:XHTML merged (Score:4, Insightful)

          by Blakey Rat ( 99501 ) on Friday July 03, 2009 @08:06PM (#28576339)

          Bullshit. Every person on Earth should be allowed, and encouraged, to create web pages. I hate this elitist crap.

          • Re: (Score:3, Insightful)

            by grcumb ( 781340 )

            Bullshit. Every person on Earth should be allowed, and encouraged, to create web pages. I hate this elitist crap.

            You're conflating 'putting content on the web' with 'writing HTML'. They don't mean the same thing.

            There is something to be said for your perspective, though: The majority of the 'tag soup' that's crufted up the Web these days is software-generated, not hand-crafted by so-called stupid users.

            XHTML would have forced makers of stupid (i.e. non-XML-compliant) software applications to fix their engines. That would have required lots of effort, but the value of such an effort is philosophically similar to enforc

    • Re:XHTML merged (Score:5, Insightful)

      by Phroggy ( 441 ) <slashdot3@@@phroggy...com> on Friday July 03, 2009 @12:42PM (#28572841) Homepage

      But XHTML has corrected many wrong things that HTML developers used to do.

      No it hasn't! Writing valid code (be it HTML 4.01, XHTML, or HTML 5) and checking it with a validator [w3.org] is what has corrected many wrong things that HTML developers used to do. Valid HTML 4.01 is still just as legitimate as XHTML ever was.

  • Much like the sun rising in the east tomorrow. I never quite understood what w3c thought it was doing trying to override browser developers.

    • Re: (Score:3, Insightful)

      by werfu ( 1487909 )
      The W3C oversees the evolution of web content standards. As its name implies, it's a consortium: it brings together browser developers, server developers, application developers, and many others. It doesn't try to override browser developers; it oversees them from a technical-standards viewpoint. Browser developers submit improvements to be included in the standard. This guarantees that browsers don't diverge too much from each other.
    • Much like the sun rising in the east tomorrow. I never quite understood what w3c thought it was doing trying to override browser developers.

      Yeah, the W3C should have let the browser makers create their own incompatible markup so we'd have a worthless web. Or one dominated by a single company, sorry to be repetitive.

      Falcon

  • Yawn (Score:2, Insightful)

    by bwintx ( 813768 )
    Combined with this information and the browser manufacturers having whupped the W3C over the codecs stuff [slashdot.org], not to mention my continuing requirement to support a large number of slackjawed technophobes who don't know there's something better than IE 6, I can't help but feel I'm gonna be stuck coding "HTML 4.01 strict" for a long, long time.
  • CSS 3 spec (Score:5, Insightful)

    by Piata ( 927858 ) on Friday July 03, 2009 @12:08PM (#28572487)

    More importantly, when are they going to finish the CSS3 spec?

    I love that HTML5 is getting pushed to the forefront and browsers are advancing more than ever, but as a web designer, that CSS3 spec needs to get done and pushed on the browser developers, because it will be another 2-5 years before mass adoption and I'm pretty tired of CSS 2.1's limitations.

    • Re:CSS 3 spec (Score:4, Informative)

      by BZ ( 40346 ) on Friday July 03, 2009 @01:53PM (#28573519)

      There is no "CSS3 spec". There is a whole bunch of separate specs all advancing along the REC track separately. They're at various stages of readiness.

      For example, CSS Namespaces is in CR ("spec work done, implement it please"). It'll become a REC once there is a test suite and two interoperable implementations and the various paperwork involved in becoming a REC is done.

      Selectors Level 3, CSS Color Level 3, and CSS Multi-column Layout are all in Last Call, with the next step being either CR or PR (PR is "this is done, implemented and all; just needs sign-off from the W3C staff"). Same for Media Queries, CSS Basic User Interface, CSS Marquee Level 3, CSS Print Profile, etc.

      Was there a particular part of "CSS3" you were interested in seeing specced and implemented?

    • Re: (Score:3, Interesting)

      by jilles ( 20976 )

      That's the whole problem. All the experts are working for the browser vendors. The W3C never had any business overriding them. CSS3 will never happen (standardized & widely implemented). But of course the relevant bits have long been implemented, and now those await standardization. It would be nice if W3C bureaucracy could catch up here.

      Basically, what's wrong here is that after an agile start in the nineties, the W3C turned into yet another standards body. Essentially, for most of the past ten years they've

  • What I liked about XHTML was the conceptual clarity regarding the creation of compound documents. Like XML, XHTML is modular, precise and fully extensible via XML namespaces. This allowed one to augment XHTML without needing to fully revise the XHTML spec: one simply needed to use an alternate XHTML namespace inside of the XHTML document. So, for example, this made it very easy to use XHTML in conjunction with SVG, another XML application. I know that HTML5 defines ways in which it may be used in conjunctio
    • by TheRaven64 ( 641858 ) on Friday July 03, 2009 @12:42PM (#28572845) Journal

      XHTML 1 is basically HTML4 with the added requirement that the document must also be well-formed XML. This is useful, because it allows you to put any other arbitrary (but properly-namespaced) XML data in the same file. XHTML 2 was meant to dramatically reduce the number of valid tags, clean up HTML even more than HTML 4 did, and split the spec into a large collection of smaller standards. No one really liked it; it was developed in the traditional W3C 'let's create a new standard without thinking too hard about how it will be implemented' way.

      HTML 5 is an evolution of HTML 4 backed by people who actually implement these standards and developed in a more incremental way. Unlike HTML 4, HTML 5 doesn't specify the representation. It has SGML and XML bindings. HTML 5 with the SGML binding looks like classic HTML, HTML 5 with the XML binding looks like XHTML. HTML 5 with the XML binding has all of the advantages of XHTML; you can mix it with any other XML data in the same file, and have a unified DOM tree.

      • ``HTML 5 with the XML binding has all of the advantages of XHTML; you can mix it with any other XML data in the same file, and have a unified DOM tree.''

        Thanks for pointing that out. Suddenly, I don't resent HTML5 anymore.

        And yes, I lower my head in shame for not having found that out on my own. I've been too busy with other things to find time to try finding good things about something that has been promoted as everything I never wanted HTML to become.

      • by Xest ( 935314 ) on Friday July 03, 2009 @01:20PM (#28573207)

        "XHTML 1 is basically HTML4 with the added requirement that the document must also be well-formed XML"

        It also deprecated a lot of the older tags that were made obsolete by CSS hence encouraging better separation of document structure and presentation. Unfortunately HTML5 undoes this particular good work because it caters to the lowest common denominator (i.e. bad developers who don't "get" separation of concerns and trivially parsable markup).

        "HTML 5 is an evolution of HTML 4 backed by people who actually implement these standards and developed in a more incremental way."

        The problem is, those people implementing those standards have proven time and time again how incompetent they are at implementing those standards. The state of standards compliance in browsers has for well over a decade been utterly shameful and that really goes for Firefox as much as it does IE. I'd argue it's those who use the standards that know best - people building the biggest sites on the net because they're the ones who need the markup to be able to support large scale application development. Browser vendors need to be able to implement that standard, don't get me wrong, but putting faith in them as the ones who guide the standards has time and time again proven disastrous - look at the HTML5 video tag debacle for perhaps the most recent example.

        I'm not disagreeing with you though, XHTML2 wasn't brilliant, but I'm not convinced HTML5 is even any better than XHTML1 which was also an evolution of HTML4 and IMHO a better one. It was designed with those people building enterprise applications for the web in mind rather than joe average, who is more content using the likes of MySpace and Facebook to manage their content for them in the first place.

        Of course, HTML5 can do everything XHTML does for the reasons you state, but sadly it seems to encourage bad practice whereas XHTML discouraged it. One final beef I have with HTML5 is that accessibility seems to have been ignored in its creation; for example, there were no real efforts to ensure easy inclusion of subtitles in the previously proposed audio/video formats. Again, we really just don't seem to be any further on with web standards than we were at the start of the decade, and again, the people to blame are the browser vendors as much as the W3C. That has allowed not particularly ideal or portable proprietary tools such as Flash to gain a lot of ground.

        • by TheRaven64 ( 641858 ) on Friday July 03, 2009 @01:44PM (#28573427) Journal

          It also deprecated a lot of the older tags that were made obsolete by CSS hence encouraging better separation of document structure and presentation. Unfortunately HTML5 undoes this particular good work because it caters to the lowest common denominator (i.e. bad developers who don't "get" separation of concerns and trivially parsable markup).

          I think you read a different version of HTML 5 to me. It still deprecates or removes all of the tags that HTML 4 and XHTML 1 removed, for example removing the center and font tags which were only deprecated by HTML 4.

          Where it introduces new tags, it is for expressiveness. A lot of the 'separation of content and presentation' folks seem to think that HTML just needs three tags; span, div, and object. HTML 5 doesn't add more presentation elements, but it does add more tags with well-defined semantics. Examples of this include section and nav tags. These don't specify anything about the presentation, they just indicate that a part of the document is a section, or a set of navigation commands. A mobile browser, for example, might have an option to hide and show the nav section to conserve screen space.

          Take a look at the current draft of HTML 5 [w3.org]. You'd be hard-pressed to find anything presentation-related. Presentation still goes in the stylesheets, HTML 5 just adds tags for common things so you don't need quite so many class attributes.

          • by Xest ( 935314 )

            "Presentation still goes in the stylesheets, HTML 5 just adds tags for common things so you don't need quite so many class attributes."

            Even if that were true, it still leads to the issue of inconsistency, where you have half your markup using these pre-defined tags and the other half using the classic spans and divs, because there aren't predefined tags for everything. It also means that, more likely than not, as the web evolves some of those tags will become obsolete and just unneeded cruft in the spec.

            The reaso

          • > Examples of this include section and nav tags

            I find this encouraging because it starts to make HTML actually "semantic" for real world web pages, as opposed to the physics paper approach of pretending everything is a heading, paragraph, or generic block (DIV).

          • by Bogtha ( 906264 ) on Friday July 03, 2009 @02:34PM (#28573903)

            Take a look at the current draft of HTML 5. You'd be hard-pressed to find anything presentation-related.

            I think this attitude is more a case of wishful thinking and sometimes outright denial rather than reality. Take a look at some of these, for instance:

            1. <br> and <pre> - explicitly control line-breaking (<pre> has ASCII art as a use case!).
            2. <ul> and <ol> - the order of HTML elements already forms part of their semantics. The real reason both element types are kept around is because one is numbered and one is not.
            3. <small> - nuff said.
            4. <i> - I'll quote this, because it's utterly laughable: "The i element represents a span of text in an alternate voice or mood, or otherwise offset from the normal prose, such as a taxonomic designation, a technical term, an idiomatic phrase from another language, a thought, a ship name, or some other prose whose typical typographic presentation is italicized." - or, in other words, "let's list every case we can think of where using italics is the typographical convention so we can pretend it isn't an element type that means use italics." Are there any real shared semantics between a ship name and a thought? No, they just want to use italics.
          • Re: (Score:3, Insightful)

            Count me in as one of the "give me more expressiveness" crowd. Span, div, and object are good enough for most purposes, but have their own problems. Writing [X]HTML/CSS pages for all media--conventional browser, print, and screen readers--is a bear. Having sane defaults for tags like STRONG and EM--that is, a certain inflection for the screen reader, and a decent-looking print default--saves developers a lot of time.

      • by Bogtha ( 906264 ) on Friday July 03, 2009 @02:00PM (#28573585)

        Unlike HTML 4, HTML 5 doesn't specify the representation. It has SGML and XML bindings. HTML 5 with the SGML binding looks like classic HTML

        No, HTML 5 has an XML serialisation and a tag-soup-compatible serialisation that, yes, looks like classic HTML, but in fact isn't based on SGML. It's based on the way popular browsers parse HTML rather than what they are supposed to do according to previous HTML specifications. One consequence of this is that obscure parts of previous versions of HTML that are not well-supported by popular browsers are not supported by HTML 5 - it's not completely backwards-compatible with previous versions of HTML.

  • by MassacrE ( 763 ) on Friday July 03, 2009 @12:28PM (#28572675)

    XHTML 1 was the XML-ization of the existing HTML 4 stuff.

    XHTML 2 was a new HTML version that sought to remove lots of HTML cruft (including non-XML syntax) and add new capabilities. Basically, it was working toward a new HTML version. This effort has died, because browser makers are not behind it - they are all behind HTML 5.

    HTML 5 has always had an XML profile called XHTML 5, and that won't go away.

    • HTML 5 has always had an XML profile called XHTML 5, and that won't go away.

      So, we should still be ensuring that all tags have matching close tags? What will the document header be?

      I have been told that making pages use XML-compatible HTML makes for a more predictable browsing experience and also lowers memory requirements. This being the case, I will try to maintain the approach, on the condition that I can take advantage of HTML5.

      • by SendBot ( 29932 )

        So, we should still be ensuring that all tags have matching close tags?

        All tags that need a closing tag should have one, yeah. Singular tags like br and img simply close themselves: <br />, <img src="blah" />
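For the XML serialization specifically, an unclosed void tag isn't a style nit, it's a fatal well-formedness error. A quick sketch with Python's stdlib XML parser (the markup strings are just illustrative):

```python
import xml.etree.ElementTree as ET

# <br/> is well-formed XML; a bare <br> leaves an element open, so an XML
# parser fails on the mismatched </p> instead of recovering tag-soup style.
well_formed = "<p>line one<br/>line two</p>"
not_well_formed = "<p>line one<br>line two</p>"

ok_root = ET.fromstring(well_formed)  # parses without complaint

try:
    ET.fromstring(not_well_formed)
    rejected = False
except ET.ParseError as err:
    rejected = True
    print("rejected:", err)
```

An HTML parser would happily recover from the second string; an XML one refuses, which is the whole trade-off of the XML serialization.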

      • Well, I think the most important thing will be how strict the browsers actually are.
        If they are just as strict as with XHTML 1.1, then we will get easily parsable, nicely crawlable (e.g. by Google) and always properly rendered pages, no matter if it's XML or not. (Although it is sad that it is not SGML anymore, as I read.)
        If they are as "forgiving" (read: crappy messes of interpreters that foster laziness and stupidity) as HTML 4 Transitional browser engines, then we can say goodbye to Google's search

        • by Homburg ( 213427 )

          we can say goodbye to Google's search quality

          Yeah, it's a good thing everyone has been using valid XHTML since 1996; Google would just fall apart if it had to crawl non-valid HTML.

      • by BZ ( 40346 )

        > So, we should still be ensuring that all tags have matching close tags?

        Only if you want to use the XML serialization.

        > What will the document header be?

        Not sure what this is asking.

        > I have been told that making page uses XML compatible HTML makes for a more predictable
        > browsing experience and also lowers memory requirements.

        Tossing in random "/>" has no such effect. Properly nesting your tags (i.e. avoiding misnested markup) most certainly helps reduce memory requirements.

        With HTML5 the browsing experie

        • > What will the document header be?

          Not sure what this is asking.

          The doctype.

          • Re: (Score:3, Informative)

            The doctype.

            Not sure you'll like the answer : ) :

            <!doctype html>

            I believe because they wanted to keep it short and simple, and hixie doesn't believe in versioning HTML - having a version-less doctype forces people to keep it backwards-compatible when html6 rolls around. Perhaps someone else who followed the process better can chime in here.

          • by BZ ( 40346 )

            <!DOCTYPE html>

            And even that was only kept in for backwards-compatibility reasons: it's the shortest "doctype" needed to trigger standards mode in all current (non-HTML5-aware) browsers.

            Thing is, the XML serialization doesn't really need a doctype (and never has; XHTML1 without the doctype is not conformant, but works just fine in any reasonable XML processor), and the non-XML one is no longer an SGML application, so a doctype doesn't actually make sense.

      • So, we should still be ensuring that all tags have matching close tags?

        Well, it's not going to hurt, even if it won't magically transform stuff into "proper" XML.

        What will the document header be?

        Anything that cares is supposed to look at the MIME type that it's served with.

      • by Homburg ( 213427 )

        I have been told that making page uses XML compatible HTML makes for a more predictable browsing experience and also lowers memory requirements.

        You've been told wrong. Making your HTML or XHTML valid [w3.org] does make for a more predictable browsing experience, and may even lower memory requirements. Writing HTML that looks a bit like XML (e.g., using self-closing tags) and then serving it as HTML is completely pointless [hixie.ch].
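That point is easy to demonstrate: a tag-soup HTML parser simply discards the trailing slash, so <br> and <br/> produce identical parse events. A sketch with Python's stdlib html.parser (the class and variable names are my own):

```python
from html.parser import HTMLParser

# A tag-soup parser discards the trailing slash, so <br> and <br/> yield
# the same events; "XML-looking" markup served as text/html changes nothing.
class Events(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_startendtag(self, tag, attrs):
        # XML-style self-closing tag: recorded the same way as a start tag
        self.events.append(("start", tag))

plain, slashed = Events(), Events()
plain.feed("<p>x<br>y</p>")
slashed.feed("<p>x<br/>y</p>")
print(plain.events == slashed.events)  # True
```

The slash only carries meaning when the document is actually parsed as XML, i.e. served with an XML MIME type.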

  • by Animats ( 122034 ) on Friday July 03, 2009 @02:30PM (#28573861) Homepage

    At least with XML you have a simple, well-defined way to convert the XML text to a tree. With HTML 5, there's only "well-defined error handling". Read the sort-of specification [whatwg.org] for this.

    Here's what's supposed to happen for just one of the hard cases. (There are dozens of other cases, some at least as ugly.) Parsing is in "body" mode (normal content) and an end tag whose tag name is one of: "a", "b", "big", "code", "em", "font", "i", "nobr", "s", "small", "strike", "strong", "tt", "u" has been encountered.

    Follow these steps:

    1. Let the formatting element be the last element in the list of active formatting elements that:
      • is between the end of the list and the last scope marker in the list, if any, or the start of the list otherwise, and
      • has the same tag name as the token.

      If there is no such node, or, if that node is also in the stack of open elements but the element is not in scope, then this is a parse error; ignore the token, and abort these steps.
      Otherwise, if there is such a node, but that node is not in the stack of open elements, then this is a parse error; remove the element from the list, and abort these steps.
      Otherwise, there is a formatting element and that element is in the stack and is in scope. If the element is not the current node, this is a parse error. In any case, proceed with the algorithm as written in the following steps.

    2. Let the furthest block be the topmost node in the stack of open elements that is lower in the stack than the formatting element, and is not an element in the phrasing or formatting categories. There might not be one.
    3. If there is no furthest block, then the UA must skip the subsequent steps and instead just pop all the nodes from the bottom of the stack of open elements, from the current node up to and including the formatting element, and remove the formatting element from the list of active formatting elements.
    4. Let the common ancestor be the element immediately above the formatting element in the stack of open elements.
    5. Let a bookmark note the position of the formatting element in the list of active formatting elements relative to the elements on either side of it in the list.
    6. Let node and last node be the furthest block. Follow these steps:
      1. Let node be the element immediately above node in the stack of open elements.
      2. If node is not in the list of active formatting elements, then remove node from the stack of open elements and then go back to step 1.
      3. Otherwise, if node is the formatting element, then go to the next step in the overall algorithm.
      4. Otherwise, if last node is the furthest block, then move the aforementioned bookmark to be immediately after the node in the list of active formatting elements.
      5. Create an element for the token for which the element node was created, replace the entry for node in the list of active formatting elements with an entry for the new element, replace the entry for node in the stack of open elements with an entry for the new element, and let node be the new element.
      6. Insert last node into node, first removing it from its previous parent node if any.
      7. Let last node be node.
      8. Return to step 1 of this inner set of steps.
    7. If the common ancestor node is a table, tbody, tfoot, thead, or tr element, then, foster parent whatever last node ended up being in the previous step, first removing it from its previous parent node if any.
      Otherwise, append whatever last node ended up being in the previous step to the common ancestor node, first removing it from its previous parent node if any.
    8. Create an element for the token for which the formatting element was created.
    9. Take all of the child nodes of the furthest block and append them to the element created in the last st
    • by Tiles ( 993306 ) on Friday July 03, 2009 @02:43PM (#28573967)

      Now try to imagine Microsoft, Opera, Mozilla, and Google implementing that compatibly.

      I believe they already do, for the most part. HTML5 parsing rules were mostly reverse-engineered from existing browsers' HTML parsing rules, which are more or less consistent across modern browsers, so it's only documenting what most existing browsers already do.

      What the spec is defining is a limited subset of an SGML-like language (whose full parsing rules, if incorporated into HTML, would span pages) and how to transform it into a DOM. It isn't mandating any new parser rules; it only documents them for the benefit of new implementations of the spec, and to align what minor variations there are between browser parsing models. Compared to SGML rules (of which HTML 4.01 is technically a subset), this is a great improvement.
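To make the case concrete, here is the kind of misnested input those "adoption agency" steps exist for, fed through Python's stdlib event parser, which hands the tokens back exactly as written. Turning those tokens into the tree browsers agree on (for the input below, effectively <b>one<i>two</i></b><i>three</i>) is precisely what the spec's algorithm pins down; the class name is just for illustration:

```python
from html.parser import HTMLParser

# Echo the token stream for a misnested formatting-tag input. The parser
# reports events as written; it does not fix the nesting for you.
class Tokens(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tokens = []
    def handle_starttag(self, tag, attrs):
        self.tokens.append("<%s>" % tag)
    def handle_endtag(self, tag):
        self.tokens.append("</%s>" % tag)
    def handle_data(self, data):
        self.tokens.append(data)

t = Tokens()
t.feed("<b>one<i>two</b>three</i>")
print("".join(t.tokens))
```

Before HTML5, each browser reconciled that token stream into a tree its own way; the algorithm quoted above is what makes them all land on the same one.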

      • by Animats ( 122034 )

        Formally, HTML 5 is no longer based on SGML; the spec says this. The only SGML directive allowed is <!DOCTYPE ... >. One advantage of this is that bogus (but not uncommon) HTML comments of the form <!This is an invalid comment> can now be parsed unambiguously. There's no need to worry about parsing unexpected SGML directives inside HTML.

  • by Pfhorrest ( 545131 ) on Friday July 03, 2009 @05:19PM (#28575207) Homepage Journal

    I see a lot of debate here about XML versus SGML (or SGML-like) serialization and parsing rules, and plenty of people (rightly) pointing out that there is an XML version of HTML 5.

    But what about those features which those of us who already code strictly to spec either way really care about? New elements that were scheduled to debut in XHTML 2.0 such as nl, h and section, the ability to put href and src attributes in any element, etc [xhtml.com]?

    Those are the sorts of things which made the debate for me, not some silly XML vs SGML, strict vs lenient debate - either way I'll be writing my code for strict compliance with spec. I'm more concerned with what the features of the spec will be! Less so with how it deals with people out of compliance with it.

    So any news on whether X/HTML 5 will be incorporating any of these, now that it's a W3C project and XHTML 2 is dead?
