The Internet / Technology

W3C and WHATWG Sign Agreement To Collaborate on a Single Version of HTML and DOM (w3.org) 104

W3C and the WHATWG signed an agreement today to collaborate on the development of a single version of the HTML and DOM specifications. From a blog post: The Memorandum of Understanding, jointly published as the WHATWG/W3C Joint Working Mode, gives the specifics of this collaboration. This is the culmination of a careful exploration of effective partnership mechanisms since December 2017, after the WHATWG adopted many shared features as their work mode and an IPR policy. The HTML Working Group, which we will soon recharter, will assist the W3C community in raising issues and proposing solutions for the HTML and DOM specifications, and bring WHATWG Review Drafts to Recommendation.

Motivated by the belief that having two distinct HTML and DOM specifications claiming to be normative is generally harmful for the community, and the mutual desire to bring the work back together, W3C and WHATWG agree to the following terms:

  • W3C and WHATWG work together on HTML and DOM, in the WHATWG repositories, to produce a Living Standard and Recommendation/Review Draft snapshots.
  • WHATWG maintains the HTML and DOM Living Standards.
  • W3C facilitates community work directly in the WHATWG repositories (bridging communities, developing use cases, filing issues, writing tests, mediating issue resolution).
  • W3C stops independent publishing of a designated list of specifications related to HTML and DOM and instead will work to take WHATWG Review Drafts to W3C Recommendations.

This discussion has been archived. No new comments can be posted.

  • it's almost as if browsers, a frankenstein collection of disparate code execution and rendering pieces sewn together, were "designed" to allow your computer to be co-opted for marketers, spyware mongers and other darker purposes, etc.

    Just like windows, come to think of it.

    The browser needs a redesign from the ground up.

    • by Anonymous Coward

      Browsers are fine IF AND ONLY IF they are limited to displaying the results of a markup language.

      The problem came because idiots wanted to use them as an application platform, something they are particularly bad at: ill-designed for it, ill-equipped for it, and demonstrably insecure, bloated, and slow. Script execution in the browser has been responsible for more malware, identity theft, UI hijacking, scraping of personal data, and other issues than anything since Microsoft released Windows 1-3 without memory protection.

      • by CanadianMacFan ( 1900244 ) on Tuesday May 28, 2019 @10:51AM (#58666384)

        Browsers are fine IF AND ONLY IF they are limited to displaying the results of a markup language.

        Even then many designers weren't happy with that, because they wanted their web pages to look exactly the same on every browser on every computer. Never mind that my window might be open to smaller dimensions than what they expect, or that I don't let the browser change the fonts, or a wide variety of other things. If your content has to be the same everywhere then create a PDF and link to it.

        • If your content has to be the same everywhere then create a PDF and link to it.

          We had that, with PDF displayed in the browser. It turned out to be a bad idea because PDF is a security problem when fully implemented.

          • It's not all-or-nothing. PDF solved some problems and created others. We can learn from PDF, adopt what works, and toss what doesn't.

            Note that having a WYSIWYG-based front-end does NOT prevent user-side screen size adjustments. If the server knows the client screen dimensions or preferences (device type), it can format the content on the server side to fit a given user.

            This has three advantages. First, you only have to test one rendering/formatting engine (for different screen sizes). With browser

            • by tepples ( 727027 )

              It does have the drawback of using more server horsepower, and if you tilt your device, it would take longer to reformat, since it has to do a round trip to the server.

              Third, if the device loses its data connection momentarily, it also loses its ability to display the document. This can happen, for example, with a laptop or tablet used for reading HTML documents while its user is riding public transit without tethering it to a cellular hotspot.

              • by Tablizer ( 95088 )

                It would hopefully have a "retry" mechanism. You don't want to do important transactions on a dodgy connection anyhow.

      • The safe way to run applications online is to run them on the server. We could then turn off all browser-side scripting and still enjoy the flexibility of online applications. We would have to give up such features as instant feedback on field fill errors, but I would rather see the best browser features that we now run as script to become permanent additions to HTML.

        • The safe way to run applications online is to run them on the server. We could then turn off all browser-side scripting and still enjoy the flexibility of online applications. We would have to give up such features as instant feedback on field fill errors,

          It seems like this could be fixed by providing enough functionality to replace the contents of named elements from a form submission. Give a new property to forms that states where their output goes.
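
          A minimal sketch of what that might look like, using a hypothetical output-target attribute (invented here for illustration, not part of any current HTML standard) that tells the browser where to put the server's response instead of replacing the whole page:

          <!-- Hypothetical markup; "output-target" is invented for illustration. -->
          <form action="/check-zip" method="post" output-target="zip-status">
            <input type="text" name="zip">
            <button>Check</button>
          </form>

          <!-- The fragment returned by the server would replace the contents
               of this element, with no full page reload and no script. -->
          <p id="zip-status"></p>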

        • by tepples ( 727027 ) <tepples.gmail@com> on Tuesday May 28, 2019 @11:54AM (#58666694) Homepage Journal

          Say you are building an online whiteboard application, and you want to let users draw lines on a canvas. It's possible to accept a click through a form with a server-side image map, with a subsequent full page reload showing the result of a click. But I haven't seen any script-free input in HTML that accepts a drag gesture.

          Or must anything requiring drag input be distributed as OS-specific executables?
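
          For what it's worth, the script-free click input described above can be written today with a server-side image map: clicking the image submits the click coordinates to the server, which re-renders the page. A rough sketch (URLs and names are placeholders):

          <form action="/whiteboard" method="get">
            <!-- Clicking submits the click coordinates point.x and point.y to /whiteboard. -->
            <input type="image" name="point" src="/board.png" alt="whiteboard">
          </form>

          <!-- As noted above, HTML has no comparable declarative way to capture a drag gesture. -->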

          • by Tablizer ( 95088 )

            subsequent full page reload showing the result of a click.

            Full-page? Only the changes would need to be sent, a "delta", not everything.

            I too feel that more should be rendered/controlled by the server (as ranted about elsewhere in this topic). But a remote-desktop-like standard may be too far in that direction. Thus, a compromise that shifts more of the load to the server but leaves some things client-side may be the best mix.

            Rather than pixel-based, make it vector-based, and support a degree of client-side

            • by tepples ( 727027 )

              subsequent full page reload showing the result of a click.

              Full-page? Only the changes would need to be sent, a "delta", not everything.

              HTML without script-in-the-browser has no concept of "deltas", other than cacheable subresources. Instead, the standard as it exists in 2019 anticipates that each web application will implement its own delta mechanism in terms of script-in-the-browser.

              How many widgets, or how much activity, would have to be supported on the client side would probably require experimentation.

              It'd be a bikeshed-fest that rivals JavaScript itself.

              As for a drag gesture, perhaps a dashed outline of the widget or vector could be dragged (if marked as "draggable").

              In this particular case, a platform would end up needing to support two different gestures: dragging to move where only the start and end positions matter, and dragging to paint with a pencil or brush t

              • by Tablizer ( 95088 )

                HTML without script-in-the-browser has no concept of "deltas"

                I thought we were talking about ditching or reworking HTML to get something more "state-friendly".

                It'd be a bikeshed-fest that rivals JavaScript itself.

                Without experiments, one cannot really say. A "fix" to Bloated Browser Syndrome will require R&D. I truly doubt what we have now is anything close to the pinnacle of UI clients. It's a nested kludgefest that even its mother hates.

                • I truly doubt what we have now is anything close to the pinnacle of UI clients.

                  X is better.

                  • by Tablizer ( 95088 )

                    Many say X has too much latency when used over the internet. It was meant for LANs. That's why some UI I/O should be buffered, in my opinion.

        • We would have to give up such features as instant feedback on field fill errors

          We already have instant feedback (at least, really quick feedback) that goes to the server and back, for things like autocomplete. It seems to work well enough.

        • The safe way to run applications online is to run them on the server. We could then turn off all browser-side scripting and still enjoy the flexibility of online applications. We would have to give up such features as instant feedback on field fill errors, but I would rather see the best browser features that we now run as script to become permanent additions to HTML.

          Maybe for field validation we should just put a regex into the form tag, and if the user has form validation turned on, then the browser can highlight the fields based on the regex. This would be a lot safer than allowing general-purpose code to run, where the code achieves its aims through side effects. With a regex you can leave the side effects (e.g., highlighting the incomplete field) up to the browser.

          That would completely solve the field error problem.
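
          For reference, HTML5 already has a declarative version of this idea: the pattern attribute on input elements, which the browser checks itself, with styling hooks like the :invalid pseudo-class and no page script. A minimal sketch:

          <form action="/checkout" method="post">
            <label>US ZIP code:
              <!-- The browser validates against the regex and blocks submission on mismatch. -->
              <input name="zip" required pattern="[0-9]{5}(-[0-9]{4})?">
            </label>
            <button>Continue</button>
          </form>

          <style>
            /* Highlighting is left to the browser/stylesheet, not to general-purpose code. */
            input:invalid { border-color: red; }
          </style>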

    • Maybe when WebAssembly gets access to the DOM, it can be used to replace HTML/CSS/JavaScript.

      If you want your browser limited to static pages, it means more and more OS-specific apps with even more security concerns than web pages.

      • So returning to Java applets and Flash applications? No thank you.

        • by tepples ( 727027 )

          Would it be a better idea to return to "We're sorry! This application is not yet available for your operating system"?

          • by Anonymous Coward

            Would it be a better idea to return to "We're sorry! This application is not yet available for your operating system"?

            Yes it would.

            The alternative is an insecure mess that spies on everyone for commercial profit, mass witch hunts, and identity theft. A mess that wastes so many computing resources that even a 4-core CPU running at 3GHz with 8GB of RAM isn't enough to run a modern web browser at a decent speed without some form of script blocker. A mess that brings entire cities to their knees and disrupting

            • by tepples ( 727027 )

              Would it be a better idea to return to "We're sorry! This application is not yet available for your operating system"?

              Yes it would.

              I am surprised that you would embrace a situation that would be tantamount to users of macOS not being able to exchange documents with users of Windows.

              All for what amounts to a handful of "applications" that are used on a regular basis. [...] All of which have some form of alternative desktop client in some form.

              Would "some form" require purchasing a license for Microsoft Windows OS and installing it into a virtual machine? I ask because a lot of these "alternative desktop client[s]" are not available for the X11/Linux family of distributions.

              Consistent environment and API? Java / some other non web browser bytecode VM on a client. That has been a thing for decades.

              Oracle has a copyright claim over Java. It has no copyright claim over WebAssembly. Besides, Oracle has tried to segment the

          • Needs to be reworded to make it sound like the user's OS is to blame, rather than the product.
      • I'm still not clear on what WebAssembly gives us that we don't already have. I don't see that speed is the bottleneck in most cases, unless you are doing something weird. A faster mess is still a mess. Our standards are borked and speed won't fix that.

        If the browser supported the missing UI widgets for common needs, then we wouldn't need to emulate them, and scrolling speed etc. would no longer be an issue. Why do we have to still emulate GUI idioms that have been around for 30+ years?
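
        A few of those idioms have in fact landed as native, script-free HTML widgets in recent years, though browser support and styling control remain uneven; for example:

        <details>
          <summary>Collapsible section</summary>
          <p>Expands and collapses with no script.</p>
        </details>

        <input type="date">                     <!-- native date picker -->
        <input type="range" min="0" max="100">  <!-- native slider -->

        <input list="cities" name="city">       <!-- suggestions from a fixed list -->
        <datalist id="cities">
          <option value="Berlin">
          <option value="Boston">
        </datalist>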

    • I would love to build a new browser with a new engine but that's a huge undertaking, at least for one person. The browser would be for the Mac and possibly Linux (I'd want to use Swift for the language but I don't know how much support there is for building graphical applications with Swift on Linux). The focus would be on privacy, adhering to the standards, and resource optimization.

    • Just my opinion, but: What we think of as "browsers" should just be split into two different products.

      The first should be a classic browser, which is mostly a passive document viewer. It shouldn't really need extensive scripting support or access to the filesystem, hardware, or OS. The assumption should be that you're mostly viewing static pages, or pages that are rendered server-side, but not running "web applications".

      The second product should be a cross-platform application framework. Developers can then use this to make distinct applications that the user chooses to trust and run, and should have all kinds of access controls to allow users to control exactly what each application has access to.

      One of the problems we have is that the classic browser (static HTML viewer) has evolved to encompass an application framework. As a result, you might be browsing static pages, click on a link, and suddenly you've loaded a page that tries to access your webcam and mic and filesystem. The browser developers have created that functionality so that people can make web applications, but you didn't mean to run an application. You just clicked on a link.

      We've completely blurred the lines between "static document I just want to view" and "trusted application that I want to run with tons of access into my private files". Of course we're running into security and privacy concerns. Of course that lack of distinction is going to get exploited. These are totally different things, but we're treating them like there's no distinction.

      • We've completely blurred the lines between "static document I just want to view" and "trusted application that I want to run with tons of access into my private files".

        On which side of your proposed bright line do you see online shopping sites? They need to take input from users as to which products to include in a side-by-side comparison, which products to buy, where to ship, and how to pay. This includes validating postal codes, credit card numbers, and the like, and it includes sending a cookie that distinguishes one user's shopping cart from another's.

        • Unless you just bought that account from somebody, you're old enough to remember a time when websites routinely used static HTML for all those things, and it worked fine. In fact if you were to load one of those old sites in a modern browser, most of the features would display faster using the old site.

          • Late 1990s and early 2000s web stores from the pre-AJAX era did not let users select products for quick preview or side-by-side comparison without requiring a navigation or form submission and its attendant full page reload. Nor did they offer real-time feedback as to whether the credit card number or postal code was mistyped. Even the CSS checkbox hack to expand and contract product category trees wasn't well known then.
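
            For readers who haven't seen it, the checkbox hack mentioned here is plain HTML plus CSS; a stripped-down sketch of a collapsible product category (class names are arbitrary):

            <style>
              .cat-body { display: none; }
              .cat-toggle:checked ~ .cat-body { display: block; }
            </style>

            <input type="checkbox" id="electronics" class="cat-toggle">
            <label for="electronics">Electronics</label>
            <div class="cat-body">
              <a href="/cat/phones">Phones</a>
              <a href="/cat/laptops">Laptops</a>
            </div>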

            • So? Starting from the comment by nine-times, what exactly is your point?

              Are you saying that live side-by-side comparison without a reload is like the Sun and everybody dies without it, or do you have some other stupid point? I know it isn't a good point, because if you had one, you'd have included it with your comment.

              • by tepples ( 727027 )

                All other things being equal, users will choose to shop using websites that support side-by-side comparison and real-time validation of shipping and billing information over websites that do not. Websites that do not will lose sales to websites that do.

                All other things being equal, users will choose to shop using browsers that support side-by-side comparison and real-time validation of shipping and billing information over browsers that do not. Browsers that do not will lose usage share to browsers that do.

      • Just my opinion, but: What we think of as "browsers" should just be split into two different products

        You already have that ability. [mozilla.org]

        The first should be a classic browser, which is mostly a passive document viewer

        Create a profile that has everything turned off.

        The second product should be a cross-platform application framework

        Create a profile that has everything turned on.

        As a result, you might be browsing static pages, click on a link, and suddenly you've loaded a page that tries to access your webcam and mic and filesystem

        That's not a problem of the tool you're using, that's a problem of the person who created the site. You'll only fix that by fixing the developers of such a horribly created website. Lots of luck.

        • That's not a problem of the tool you're using, that's a problem of the person who created the site. You'll only fix that by fixing the developers of such a horribly created website. Lots of luck.

          That is circular and absurd.

          The fact that a tool can be used well or used poorly does not mean it is impossible to improve the tool to reduce or eliminate specific types of poor results.

          What people are talking about is the idea of taking away a bunch of the features that are frequently used poorly, and replacing them with features that couldn't be used in the same way. They could still make things suck in some way, but they'd be mostly limited to making it look awful.

    • Google (Alphabet)'s gonna handle that for us. Pretty soon, there are only going to be two browsers -- Chrome and Firefox -- that actually render (most) websites. Everything except Firefox will have Chrome guts. And eventually Firefox will probably fade away. Unfortunately, from Google's POV, their customers are the advertisers, not the users. We users? Just part of the environment. Our obligation is to keep our hardware up to date and to make sure that the latest version of their bloated advertising plat

    • The browser needs a redesign from the ground up.

      Redesign it then. There are several open source solutions to copy from. That said, redesigning it won't allay your woes here. The issue isn't the tool, it's the folks using the tool, or in this case the folks providing content to your browser. There's money to be made on the Internet, and when you solve the issue of placating people's need to make money off of the Internet, you'll actually have your solution. Short of that, you're just proposing a cat and mouse game that will have no winners.

    • by Tablizer ( 95088 )

      frankenstein collection of disparate...The browser needs a redesign from the ground up.

      Amen! And split it into sub-standards: one for mostly static documents, one for media/movies/gaming, and one for "productivity" CRUD/GUIs. Trying to be everything to everyone has created a mess.

      There will be some overlap for basic/shared features, and a given screen "section" may use a different sub-standard than another, if the user permits such.

      Web application development has turned into rocket science for even run-of-

      • I will agree that, under the right conditions, web app development can be relatively smooth, but too much has to go right in terms of stack and staff management. Most shops bungle it in practice, afraid of picking the wrong trend, confused by too much choice, and suckered by dazzling (but fragile) JavaScript gizmos.

        What is the right stack? Or even, what is a stack that works?

  • It won't happen, but: the best thing they could do would be to simplify things. Remove cruft, remove stuff meant only to support marketing and tracking (example: the "ping" attribute on hyperlinks). Do the same to CSS - actually CSS needs it even worse - and we might get our web back...

    Won't happen, of course. Standards only get more complex, never less.

    • In other words, migrate the best browser script features into the HTML standard, so we can turn off scripting.

    • Re:Simplify? (Score:4, Interesting)

      by slack_justyb ( 862874 ) on Tuesday May 28, 2019 @11:57AM (#58666704)

      Remove cruft, remove stuff meant only to support marketing and tracking (example: the "ping" attribute on hyperlinks).

      I get the motivation for this, but the example you gave runs counter to your original statement.

      the best thing they could do would be to simplify things

      The ping attribute of the a element [w3.org] was created to simplify something that was already being done either through JavaScript or via another method like tracking pixels. However, many web browsers already ignore the ping attribute, either outright [webplatform.news] or when DNT is enabled. Which means most sites already do a combination of them all to gather information. So while the "ping" attribute was meant to undo all the JS hacks and tracking pixels, it has failed to really gain any kind of traction.
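
      For readers unfamiliar with it, the attribute in question looks roughly like this; a conforming browser follows the href normally and, in the background, sends a POST to each URL listed in ping (and, as noted, several browsers simply ignore it):

      <!-- Navigation and the audit ping are decoupled; no script or tracking pixel needed. -->
      <a href="https://example.com/product"
         ping="https://tracker.example.net/click">
        Product page
      </a>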

      Won't happen, of course. Standards only get more complex, never less

      As with anything that's trying to reach a compromise between a lot of different interests.

      and we might get our web back

      I keep telling people that, no, you're not getting anything back. No matter how much you try to change the tool, the tool isn't the problem here; it's just the means by which the actual problem implements itself. The problem is that there's money to be made online and people are going to attempt to make that money. We could all go back to Gopher [wikipedia.org] and we'd eventually have all the problems we are having today. Your "solution" here is pretty much akin to "there's too many ads on TV, let's make TVs simpler to fix that".

      And I'm not denying that the standard is complex or that tracking on the Internet is a problem. What I am saying is that your reason to simplify the HTML spec is pretty hollow and your solution to tracking is a non sequitur.

  • .. is companies like Google on the committee pushing stuff to ease tracking/ads/... HTML5 canvas, embedded auto-playing video/audio, ...

  • That is what the web has evolved into. Unless the W3C/WHATWG create an alternative to Google and Mo$illa, then HTML is now CML.
  • by Errol backfiring ( 1280012 ) on Tuesday May 28, 2019 @11:06AM (#58666454) Journal
    The W3C clearly stated that there would not be a next version of HTML, and for some reason clung to XHTML, which never worked and no one wanted. That is why WHATWG took over the development of what was clearly needed. W3C made itself irrelevant with respect to HTML, so it should stop pretending to still hold the standard. They don't. They have given up on it, and left the pieces to WHATWG. Leave it at that.
  • So why do we have two standards groups?

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      From the whatwg faq [whatwg.org]:

      The WHATWG was founded by individuals of Apple, the Mozilla Foundation, and Opera Software in 2004, after a W3C workshop. Apple, Mozilla and Opera were becoming increasingly concerned about the W3C’s direction with XHTML, lack of interest in HTML, and apparent disregard for the needs of real-world web developers. So, in response, these organisations set out with a mission to address these concerns and the Web Hypertext Application Technology Working Group was born.

      That is, the w3c

    • by raymorris ( 2726007 ) on Tuesday May 28, 2019 @12:24PM (#58666876) Journal

      HTML was originally designed to describe the contents of a web page, its structure. It had tags like "top level heading". The presentation aspects, such as color and font, were to be controlled by a separate style sheet. The standard was published by W3C.

      In HTML 3.2, W3C experimented with mixing style information into the HTML structure. It introduced tags like <font> and attributes like "color". It quickly became apparent that was a mistake, so in less than a year W3C rolled out HTML 4, trying to undo the styling fuck up. Too late. HTML 3.2 was "easy" and there was no way to put the horse back in the barn. HTML had become a mess.

      W3C discussed different ways to move forward, and the decision was that, to get the language clean again and have the best foundation for the future, the next version should be valid XML, and we'd need to lose backward compatibility with the mistakes of HTML 3.2. The new, well-designed version was XHTML.
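
      Roughly, the difference being described looks like this (illustrative markup only):

      <!-- HTML 3.2 era: presentation mixed into the structure -->
      <font color="red" size="4"><b>Sale!</b></font>

      <!-- HTML 4 + CSS: structure in the markup, presentation in a style sheet -->
      <h2 class="promo">Sale!</h2>
      <style>.promo { color: red; }</style>

      <!-- XHTML: the same structure, but as well-formed XML:
           lowercase tag names, attributes quoted, every element closed (e.g. <br />). -->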

      Some people thought that losing backward compatibility with html 3.2 was too high a price to pay. They wanted to gradually try to clean things up, and they were okay with the standard being a bit messy, having some obvious problems. These people developed a new backward-compatible version of HTML. It wasn't as clean and well-designed as XHTML, but it was backward compatible with the abomination known as html 3.2. This group called themselves WHATWG.

      So we had the original group trying to make the best language, taking their time to get it right, and a new group trying to make new things fast. XHTML lost.

      After it became apparent that XHTML wasn't the future of the web, the two groups entered into a relationship similar to Fedora and Red Hat Enterprise Linux. WHATWG worked fast. They published what people were actually doing, new browser-specific HTML tags. Not what HTML SHOULD be, but what it actually WAS. They were more concerned with getting something done quickly than getting it done right.
      W3C watched what was happening with WHATWG.
      They saw that some new proprietary tags were good; others didn't turn out so well. W3C periodically published a standard of what web developers and browsers SHOULD do.
