IE Shines On Broken Code

mschaef writes "While reading Larry Osterman's blog (he's a long-time Microsoftie, having worked on products dating back to DOS 4.0), I ran across this BugTraq entry on web browser security. Basically, the story is that Michal Zalewski started feeding randomly malformed HTML into Microsoft Internet Explorer, Mozilla, Opera, Lynx, and Links and watching what happened. Bottom line: 'All browsers but Microsoft Internet Explorer kept crashing on a regular basis due to NULL pointer references, memory corruption, buffer overflows, sometimes memory exhaustion; taking several minutes on average to encounter a tag they couldn't parse.' If you want to try this at home, he's also provided the tools he used in the BugTraq entry."
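
(As a rough illustration of the approach described - not Zalewski's actual tool, which is linked from the BugTraq entry - malformed input of this sort can be produced by taking a valid page and randomly corrupting it. The helper below is purely hypothetical.)

    #include <random>
    #include <string>

    // Toy mutation step: flip, drop, or insert a few bytes in an otherwise
    // valid page to produce the kind of malformed HTML described above.
    std::string mangle(std::string page, unsigned seed) {
        std::mt19937 rng(seed);
        std::uniform_int_distribution<int> op(0, 2);
        for (int i = 0; i < 8 && !page.empty(); ++i) {
            std::size_t p = rng() % page.size();   // position re-picked against current length
            switch (op(rng)) {
                case 0: page[p] = static_cast<char>(rng() & 0xff); break;  // corrupt a byte
                case 1: page.erase(p, 1); break;                           // drop a byte
                default: page.insert(p, 1, '<'); break;                    // stray tag opener
            }
        }
        return page;
    }
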
  • by Eponymous Cowboy ( 706996 ) * on Tuesday October 19, 2004 @07:25AM (#10563394)
    Since it may not be obvious to all readers, be aware that when you can make a program crash by feeding it bad data, you can typically further manipulate the data you are sending it to take control of the program. That means a security hole. This is how buffer-overruns work. You can't always do it, but you can think of each way you can crash a program as a little "crack" in its exterior. If you can figure out a way to pry apart the crack, you've got yourself a hole.

    So many of these "bugs" in Mozilla, Opera, Lynx, and Links are likely security holes as well.

    It is interesting, then, to see that Internet Explorer did so well on this, given its notoriously bad history [trustworthycomputing.com] on security. My first instinct would be that the HTML parsing engine in Internet Explorer was written by a different team of programmers from the one that worked on the rest of the software, and that they used proper programming techniques (such as RAII [google.com] in C++, or perhaps one of the .NET languages, rather than programming in straight C like the others) which as a side effect prevented such problems. (A rough illustration of the difference is sketched after this comment.)

    Let's hope that all these bugs are taken care of in the other browsers quickly before the black hats find ways to make use of them.
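
    (A hedged illustration of the buffer-overrun and RAII points above - not code from any of the browsers - the contrast looks roughly like this:)

    #include <cstring>
    #include <string>

    // Unsafe: copies an attribute value into a fixed buffer with no length
    // check. Oversized input overruns the stack buffer - the classic "crack"
    // that a fuzzer finds and an attacker later pries open.
    void parse_attr_unsafe(const char *value) {
        char buf[64];
        std::strcpy(buf, value);      // overrun whenever value is longer than 63 chars
        // ... use buf ...
    }

    // Safer: an RAII string owns and sizes its own storage, so oversized or
    // garbage input degrades into a big allocation instead of memory corruption.
    void parse_attr_safe(const std::string &value) {
        std::string buf = value;      // bounds handled by the container
        // ... use buf ...
    }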
  • by richie2000 ( 159732 ) <rickard.olsson@gmail.com> on Tuesday October 19, 2004 @07:26AM (#10563402) Homepage Journal
    It's strangely fitting that the response I first got was the error message: "Nothing for you to see here. Please move along." The Slashdot effect has finally spread to the browser.

    However, my Mozilla passed the test without crashing. :-P

  • by Anonymous Coward on Tuesday October 19, 2004 @07:29AM (#10563417)
    Without looking at it more closely my first reaction is, well for his values of "random" this might be true. But random is a horribly abused term in computer science. First of all it was no doubt arbitrary within certain ranges rather than random. Then, we need to look at why he chose those ranges.
  • Off Topic (Score:2, Insightful)

    by z0ink ( 572154 ) on Tuesday October 19, 2004 @07:30AM (#10563429)
    With such a powerful parsing engine you would think IE could parse web standards a little better.
  • Excellent! (Score:2, Insightful)

    by Mysticalfruit ( 533341 ) on Tuesday October 19, 2004 @07:31AM (#10563434) Homepage Journal
    I suspect that the mozilla developers will be busy using this same tool to vigorously debug their application...

    *shrugs*

    So, if you feed IE random crap it doesn't crash? Too bad that when you feed it stuff you'd like it to crash on (auto execution of malicious code, etc.), it works just fine...

    When all is said and done, I still feel 100% safer surfing the web with some Gecko derivative...
  • Re:Security Issues (Score:5, Insightful)

    by mccalli ( 323026 ) on Tuesday October 19, 2004 @07:31AM (#10563436) Homepage
    Does the fact that most of the browsers crash mean that they are vulnerable in some way?

    Potentially.

    Is the fact that they do crash a good thing?

    No. Never ever is it a good idea to crash on receipt of invalid data. It's up to the program to try and parse this, realise it can't do so successfully, then act accordingly (error message, best-guess try, whatever. I prefer an error message myself, but can understand those who prefer best-guess).

    Cheers,
    Ian

  • by Darren Winsper ( 136155 ) on Tuesday October 19, 2004 @07:32AM (#10563442)
    I don't know if they still use it, but the Linux kernel developers used to use a program called "crashme" to help test kernel stability. Essentially, it generated random code and tried to execute it. Something like this for web browsers would make for a very useful procedure. Generate the code, throw it at the browser and log the code if it crashed the browser.
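
    (A hedged sketch of that loop, in the spirit of crashme rather than the actual BugTraq tool: generate random tag soup, hand it to the browser under test, and keep any input that ends the run abnormally. The browser command, file names and crash check are placeholders, and a real harness would need a timeout and proper crash-signal detection.)

    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <random>
    #include <string>

    int main() {
        const char *frag[] = {"<b>", "<table>", "</td>", "<a href=\"", "<!--", "<<", "&", "</html"};
        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<int> pick(0, 7), count(50, 500);

        for (int run = 0; run < 1000; ++run) {
            std::string soup;                          // random, deliberately malformed markup
            for (int i = 0, n = count(rng); i < n; ++i)
                soup += frag[pick(rng)];

            std::ofstream("testcase.html") << soup;

            // Placeholder command; a non-zero exit status is (crudely) treated as a crash.
            if (std::system("browser-under-test testcase.html") != 0) {
                std::ofstream("crash-" + std::to_string(run) + ".html") << soup;
                std::cout << "run " << run << " crashed; input saved\n";
            }
        }
    }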
  • Frontpage (Score:2, Insightful)

    by bmongar ( 230600 ) on Tuesday October 19, 2004 @07:32AM (#10563444)
    Of course IE can handle broken html. That's what Microsoft Frontpage produces. They have to be able to handle their own product.
  • by judmarc ( 649183 ) on Tuesday October 19, 2004 @07:33AM (#10563452)

    It encourages web authors to make pages that don't work in other (standards-compliant) browsers. But even MS is getting a bit tired of this, because (1) there are now plenty of pages that don't work even with IE (I encounter them all the time at work), and (2) all the error correction code helps to keep IE bloated and slow.

  • by InsaneCreator ( 209742 ) on Tuesday October 19, 2004 @07:37AM (#10563467)
    My first instinct would be that the HTML parsing engine in Internet Explorer was written by a different team of programmers from the one that worked on the rest of the software

    I'd say the same about Outlook Express. Most security holes in OE were due to bad "glue" between components. And if I'm not mistaken, most holes in IE are also caused by bad integration.
    It sure looks like the expert programmers create components which are then bolted together by an army of "learn programming in 24 hours" drones.
  • Re:This is known (Score:5, Insightful)

    by Mr_Silver ( 213637 ) on Tuesday October 19, 2004 @07:38AM (#10563472)
    It's quite known that broken code runs quite well on IE.

    Great, but then it also encourages people to write bad code - see all that code with broken tables and a million tags that remain unclosed?

    You're confusing two separate things here:

    1. Broken HTML which doesn't render properly.
    2. Broken HTML that causes corruptions, crashes and the potential for security issues.

    This guy has been testing for (2) and not (1). Bad HTML should never cause crashes, memory corruption and buffer overflows. Period.

    Finally, you can't go blaming the users for bad input. One of the golden rules of software design is that all software should either reject bad input or handle it gracefully. Crashing is not graceful.
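
    (A minimal sketch of that golden rule - illustrative only, not any browser's actual code: the parser reports failure through its return type, and the caller decides what "graceful" means.)

    #include <optional>
    #include <string>

    struct Document { std::string title; };

    // Returns a parsed document, or std::nullopt when the input is malformed.
    // The caller can show an error page or fall back to a best-guess render;
    // nothing here can crash or corrupt memory on bad input.
    std::optional<Document> parse_html(const std::string &input) {
        if (input.find('<') == std::string::npos)
            return std::nullopt;          // reject: not even vaguely HTML
        Document doc;
        doc.title = "untitled";           // placeholder for real parsing
        return doc;
    }
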

  • by tomstdenis ( 446163 ) <tomstdenis@gmGINSBERGail.com minus poet> on Tuesday October 19, 2004 @07:39AM (#10563484) Homepage
    Assuming this MSFT guy is not lying...

    Yes it's a slap in the face. But seriously this is what OSS is supposed to be about. Full public disclosure. If he did find scores of DoS related bugs then the OSS crowd [who like to show their names when the attention getting is good] ought to pay attention and fix the problems.

    You can't gloat how open and progressive you are if you scowl and fight every possible negative bit of news.

    And "mentioning how bad MSIE is" is not a way to make your product any better [just like "he's not bush" isn't a bonus for Kerry].

    So shape up, take it in stride and get to the board.

    Oh and while you're at it make Mozilla less bloatware. 30MB of tar.bz2 source could be your first problem....

    Tom
  • by Anonymous Coward on Tuesday October 19, 2004 @07:45AM (#10563521)
    I'm just wondering which is better: IE keeping running in a weird state after malformed HTML (in some cases bypassing all security zones), or the browser crashing and forcing the user to start it fresh.
  • by Anonymous Coward on Tuesday October 19, 2004 @07:46AM (#10563523)
    They hire people right out of school, then get as much work out of them as possible. A lot of their programmers have probably never had time to read books like that, or to understand the reasons why they should.
  • by Zarf ( 5735 ) on Tuesday October 19, 2004 @07:46AM (#10563527) Journal
    The same person tells us [asp.net] that Apache [secunia.com] sucks when compared [asp.net] with IIS [secunia.com]. Does this mean we've all been wrong about Microsoft products? If we take Microsoft's word for it, we have indeed and should seriously consider switching back to IIS. After all, [THE FOLLOWING IS SARCASM:] this conclusively proves that IIS is far superior to the Linux Apache Mysql Perl/Python/Php system.
  • Re:Excellent! (Score:3, Insightful)

    by LiquidCoooled ( 634315 ) on Tuesday October 19, 2004 @07:47AM (#10563532) Homepage Journal
    I still have a link on my xp desktop here called "crashme.htm". I used to be able to bring IE to its knees with it.

    It consists of 11 characters - a Style opening tag and some malformed crap after it. It doesn't make it crash anymore, but I keep it there as a reminder.

    MS must've done a major cleanup of the code to prevent egg on their faces. SP2 has also done a lot to cure problems of this nature.

    I think MS might actually (finally) have an upper hand with this. Throwing manpower and resources at the problem will no doubt have assisted. However, this is only one facet in a much larger stack of cards.

    Of course, just like you I don't see it as a problem and the OS developers will cure all issues allowing us to browse easy once again :)
  • by MadFarmAnimalz ( 460972 ) on Tuesday October 19, 2004 @07:47AM (#10563534) Homepage
    His only criterion is whether the browser crashes or not. Somehow, it disturbs me more that IE doesn't crash; what precisely is the effect of the bad code then?

    I agree that these are all potential security holes, yet the article author mistakenly correlates crashing with vulnerability.

    What if IE is similarly vulnerable yet simply doesn't crash?

    From that point of view, at least crashing is an indication that something is not right.

    All the same, I consider it a good beginning.
  • by Anonymous Coward on Tuesday October 19, 2004 @07:51AM (#10563558)
    My Firefox managed to handle them all without incident too. Either the article is out of date or we have a good case of FUD. Which reminds me, I really need to update Firefox from 0.9.3.
  • by poptones ( 653660 ) on Tuesday October 19, 2004 @07:52AM (#10563565) Journal
    I cat binaries all the time - no biggie. Did you cat it to stdout or to your browser or something? Choosing to do something bad on your desktop and causing a crash isn't nearly equatable to something that could allow Ivan's porn TGP to rootkit your machine simply by sending it a properly formed TEXT file.
  • Re:This is known (Score:5, Insightful)

    by StrawberryFrog ( 67065 ) on Tuesday October 19, 2004 @07:53AM (#10563572) Homepage Journal
    No. I don't care how bad the input is, if my program reads the input and throws an access violation, then it is my job to fix my program, test the input more, assume less about it or whatever, until my program does something more sensible and less dangerous with the input - like giving up with an error message or even an assertion failure.

    I repeat: code that crashes with a null pointer error is wrong. End of story.
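
    (A minimal, hypothetical sketch of the point: the null pointer crash comes from dereferencing something that parsing failed to fill in, and the fix is to check it and fail loudly instead.)

    #include <iostream>

    struct Node { const char *text = nullptr; };   // text is null when parsing failed

    // Wrong: assumes parsing always filled in text; on malformed input this
    // dereferences a null pointer - exactly the access violation described above.
    void render_wrong(const Node &n) {
        std::cout << n.text << "\n";               // undefined behaviour when text is null
    }

    // Better: notice the bad input and report it instead of faulting.
    void render_better(const Node &n) {
        if (n.text == nullptr) {
            std::cerr << "malformed element: nothing to render\n";
            return;
        }
        std::cout << n.text << "\n";
    }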
  • by Anonymous Coward on Tuesday October 19, 2004 @07:55AM (#10563587)
    It seems unlikely that the same programmers who wrote this marvelous HTML parsing engine, which can take anything thrown at it without even so much as choking, would also have written code that does so little in the way of data validation in all other parts of IE.

    There are around fifty critical security updates listed on the Critical Security Updates for IE [microsoft.com] page, going back to 2001. How many of those mention HTML parsing? Zero.

    If it's the same programmers, something went seriously wrong with them when they switched to working on the other parts of IE.
  • Re:Frontpage (Score:1, Insightful)

    by Angostura ( 703910 ) on Tuesday October 19, 2004 @07:55AM (#10563590)
    Why does this tedious MS bashing get modded insightful. Funny, at a push. But insightful?
  • by ClosedSource ( 238333 ) on Tuesday October 19, 2004 @07:56AM (#10563600)
    So your theory is that having access to the source of these browsers has enabled an individual to crash them, but thousands of people having the source code hasn't resulted in the problems being fixed already.
  • by nmg196 ( 184961 ) on Tuesday October 19, 2004 @08:01AM (#10563633)
    > all the error correction code helps to keep IE bloated and slow.

    Bloated compared to what?!

    Slow compared to what?

    IE has quite a small footprint for a web browser. I've opened this page in IE and Firefox. Currently IE is using 19MB of RAM and Firefox is using 28MB. In fact, currently the top three processes using the most RAM on my machine are all open source products (the top two being Firefox and the enormously memory hungry Thunderbird, which is currently using 58MB of RAM). All the commercial software comes later.

    IE also tends to render pages faster than Firefox under most circumstances (except where Linux advocate article authors have carefully crafted CSS heavy pages which cause IE to slow down a bit).
  • by metlin ( 258108 ) * on Tuesday October 19, 2004 @08:02AM (#10563641) Journal
    No, my theory is that while these bugs are serious in themselves, if I had access to IE's source I may be able to come up with code that would crash it, too.

    Although thousands _can_ see them, very few actually do. There's a difference.

    Either way, kudos to him, and a good job at finding the bugs. But that still wouldn't make me change to IE anytime soon.
  • by grinder ( 825 ) on Tuesday October 19, 2004 @08:02AM (#10563645) Homepage

    Case in point.

    Last week I wrote some Perl to process an mbox mail folder. I just wanted a quick and dirty way to view its contents in a web page. A couple of CPAN modules and a few dozen lines of code and the thing was done. Then I started to get fancy, dealing with stuff like embedded MIME-encoded GIF images. This was pretty simple to do, but I made a mistake. Once I had the decoded GIF data lying around, I wrote it to the HTML file of the current e-mail message, rather than writing it to a separate file and writing <img src="foo.gif"> in the HTML file.

    I was viewing the results with Firefox 0.10.1. When it got to a message with an embedded GIF, with a big slodge of GIF binary data sitting in the middle of the page, Firefox either just sat there spinning its hourglass, or crashed and burned.

    Then I looked at the same file with IE, and the GIF image showed up. I was puzzled for a while until I noticed that in the directory where I had created the file, no GIF files had been created. It is of course arguable that IE should not have attempted to render the GIF image from the binary data sitting in the middle of the page, but it did so without complaint. Not rendering it would also be acceptable.

    Firefox, on the other hand, has a number of better alternatives to crashing or hanging. Should it display gibberish (like when you forget to set up your bz2 association correctly) or nothing, or the image? I don't know, and don't particularly care about which course of action is taken. Anything is better than crashing, especially when IE doesn't.

    Anyway, I fixed the Perl code, and all is well.

    The End

  • by Erasmus Darwin ( 183180 ) on Tuesday October 19, 2004 @08:04AM (#10563649)
    "I'm not saying that the bugs do not exist, but if I had access to all that code (and presumably to IE too, since he's been at MS that long) - then it's quite conceivable that he came up with stuff that will crash on these browsers."

    Except that he 1) provided a copy of the random malformed HTML generating tool that he used and 2) managed to crash the closed-source Opera, as well.

    It's a little ridiculous to suspect that he spent countless hours searching the mozilla, links, and lynx source code to find HTML-interpreting crash-causing bugs and then created a random malformed HTML generator as a cover story as to how he found the bugs.

  • by Anonymous Brave Guy ( 457657 ) on Tuesday October 19, 2004 @08:04AM (#10563652)
    I don't see how this is a bad thing. This just means that IE does not catch some of the malformed code people use to cause havoc on the net.

    But the other browsers not only didn't catch it, they actually crashed when parsing it. I'm all for compatibility and standards compliance where possible, but a crash/potential security hole is far more serious an issue than letting through some sloppy HTML. (Besides which, as a user, I find it infuriating that Mozilla/Firefox are so stuck up on perfectly standard HTML that they just don't work with some web sites that are perfectly usable in IE anyway.)

  • Re:Excellent! (Score:2, Insightful)

    by Anonymous Coward on Tuesday October 19, 2004 @08:05AM (#10563660)
    contrary to belief, code being accessible to thousands of programmers does not necessarily mean everyone looks at it

    there is a difference
  • by fwitness ( 195565 ) on Tuesday October 19, 2004 @08:05AM (#10563662)
    While it's great that IE can handle 'bad' web code, it really is a separate issue from security. Now, when the other browsers actually *crash*, this is a concern. Yes, crashes *can* be used to determine an exploit, but that doesn't mean they *do*.

    To beat the dead horse of the car analogy, if my car doesn't start, it may be the entire electrical system, or maybe my battery is just dead. The moral is don't try to make a mountain out of a mole hill.

    Meanwhile, I absolutely despise the fact that IE does handle a lot of 'bad' code. This is a side effect of the IE monopoly on the browsing world. We're not talking about it handling variables that aren't declared before they are used or somesuch. We're talking about code which *should* be causing errors. Since they don't cause errors most of the time (or are hidden from the user) and most web authors only test with IE, there is a massive amount of bad code on the net which is never fixed.

    Now I'm glad that the author has found these crashing bugs in the other browsers. This obviously needs fixing, and I'm glad IE is at least stable when it encounters malformed code, but more error reporting needs to be done to the user on all browsers.

    Summary: Good review, brings up great points, kudos to MS for stability. Now everyone go back to work on your browsers and add blatant *THIS WEBSITE AUTHOR DOES NOT WRITE PROPER CODE* dialogs to all your error messages. It's the web author's fault; it's time we told them so.
  • by Anonymous Coward on Tuesday October 19, 2004 @08:05AM (#10563664)
    Idiot. Crashing = denial of service attack.

    *Your* first lesson in computer security is, and write this a thousand times: *crashing* on malicious code is *BAD*, whereas *recovering* from the situation and responding with an *error message* is *GOOD*.
  • by Dink Paisy ( 823325 ) on Tuesday October 19, 2004 @08:17AM (#10563724) Homepage
    Perhaps Konqueror is better than other browsers, or perhaps the involvement of Apple means that Safari is better tested than Mozilla or Opera.

    Unfortunately, handling the test cases provided with the post doesn't mean it's ok. Browsers were run with randomly generated test cases, and the article says that usually a hundred or so tests were required to cause a crash. The probability of a poorly written browser crashing on any given case is low.

    The chances of any single test case being problematic are low, so passing the cases provided doesn't mean Safari is safe. If someone downloaded the cgi that automatically generated tests and left Safari with that for a few hours without crashes, we could then say that it lacks the flaws that Mozilla and Opera have.

  • by fstrauss ( 78250 ) on Tuesday October 19, 2004 @08:20AM (#10563737) Homepage
    You aren't a developer are you?

    Programs crash because they execute invalid code in memory. Somewhere a pointer changed, making the program execute code it never should get to; this can be exploited if you can put your own instructions in the new place where the program starts executing.

    Getting invalid data shouldn't crash the program, that's very wrong, it should exit cleanly or carry on executing, ignoring the invalid data.

    I've never before heard anyone claim that crashing is a good way for a program to deal with incorrect data.

    if( data == confusing ) // what the hell???!?
    x = x / 0; // abort abort abort!
  • by Hobbex ( 41473 ) on Tuesday October 19, 2004 @08:23AM (#10563753)
    Furthermore, this kind of test is standard within Microsoft (feed random inputs to all possible input locations).

    So what you are saying is that this article consists of a Microsoft employee applying one type of stability test, one that happens to be used inside Microsoft, to their own browser, which has been patched against exactly this test, and others. Permit me to say I am somewhat underwhelmed by IE's amazing performance.

    This is the security equivalent of Microsoft's "benchmarks" where the benchmark is decided first, then just those operations are optimized, and, wow and amazement, Microsoft's products perform great.

    While it is bad that the open source browsers crash on random input, this is only one, rather limited, test of security. Security against targeted attacks is a much harder, different problem. (CRC32 performs great at spotting random changes in input - want to use it to digitally sign your payments?)
  • Re:Off Topic (Score:4, Insightful)

    by SpaghettiPattern ( 609814 ) on Tuesday October 19, 2004 @08:24AM (#10563761)
    With such a powerful parsing engine you would think IE could parse web standards a little better.

    Has it ever occurred to you that it is in MS's interest to parse bad HTML? Maybe even to encourage bad HTML so IE is considered the best browser by the man in the street. Now where's my tin foil hat?
  • by Khazunga ( 176423 ) * on Tuesday October 19, 2004 @08:24AM (#10563765)
    I think it all depends on the definition of crash. A top-level thrown Exception is a good crash. A GPF is the kind of crash that might evolve into a buffer overflow.
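
    (A schematic illustration of that distinction, with a made-up parser: an exception that reaches a top-level handler ends the operation in a controlled, diagnosable way - the "good crash" - whereas a GPF means memory was already being trampled.)

    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Hypothetical parser: throws on input it cannot make sense of.
    void load_page(const std::string &html) {
        if (html.find("<<") != std::string::npos)
            throw std::runtime_error("unparseable markup");
        // ... real parsing would go here ...
    }

    int main() {
        try {
            load_page("<totally <<malformed>>");
        } catch (const std::exception &e) {
            // The failure surfaces here as a message, not as a fault mid-render.
            std::cerr << "could not load page: " << e.what() << "\n";
            return 1;
        }
    }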
  • by cascadingstylesheet ( 140919 ) on Tuesday October 19, 2004 @08:25AM (#10563767) Journal

    ... and here's why.

    With correct data (in this case, HTML), there is a specified action that is "correct". In other words, a correctly marked up table will get laid out according to the W3C rules for laying out tables. A paragraph will get formatted as a paragraph, etc.

    With malformed markup, the "correct" thing to do is indeterminate. If every browser just takes its best guess, they will all diverge, and the behavior is wildly unpredictable. Even from version to version of the same browser, the "best guess" will change.

    "So? You've just described the web!" Well, exactly, but it could have been avoided. Bad markup shouldn't render. It ain't rocket science to do (or generate, though that can be a harder problem) correct markup. If you had do it to get your pages viewed, you would. Ultimately, it wouldn't cost anymore, and would actually cost less (measure twice, cut once).

    Of course, what I just wrote only really applies in a heterogeneous environment ... which MS doesn't want ... fault tolerance in your own little fiefdom can make sense.

  • by DrSkwid ( 118965 ) on Tuesday October 19, 2004 @08:27AM (#10563783) Journal

    did you count in the memory of the Windows DLLs IE loads?

    You remember those, the ones that have arbitrary IE functions spread across them to make removing IE from Windows "unpossible"

  • by Anonymous Coward on Tuesday October 19, 2004 @08:34AM (#10563826)
    That's not actually accurate. Konqueror only shows that bug when there are javascript errors in the page.
  • HTML is out there, and millions of malformed pages exist. Most of this is a result of mistakes by authors, but some of it is a result of the moving target that HTML has presented in the past.
    While your argument is attractive in principle, in practice it's misguided. The horse has bolted. In 2004, no one would use a browser that didn't work with a huge proportion of the web's content. This is an area where pragmatism is required.
    And to respond to the ubiquitous MS-bash, let's step back and remind ourselves that this /. story is also about how various browsers, including the saintly Firefox, can be made to *crash* given certain input. Just thought that should get a mention :)
    (And BTW, I speak as a Firefox user)
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday October 19, 2004 @08:56AM (#10563989)
    Comment removed based on user account deletion
  • by wheany ( 460585 ) <wheany+sd@iki.fi> on Tuesday October 19, 2004 @09:03AM (#10564049) Homepage Journal
    A program must never crash because it received bad data. You always have to validate user input and there must always be sanity checks. If the browser receives malformed code, at worst it can give an error message, but it must never crash.

    Crashes are always considered bugs.
  • by BabyDriver ( 749379 ) on Tuesday October 19, 2004 @09:05AM (#10564058)
    IE also tends to render pages faster than Firefox under most circumstances (except where Linux advocate article authors have carefully crafted CSS heavy pages which cause IE to slow down a bit).

    A person writing a 'Linux advocate article' is likely to be making extensive use of CSS in order to properly separate the document structure and style. In other words they create documents that are easier to maintain and comply with the standards.

    I don't suppose most of them will cry if IE renders it slowly but I doubt that's their primary reason for doing it, although they'll certainly be aware that IE's CSS compliance sucks.

  • by mistered ( 28404 ) on Tuesday October 19, 2004 @09:12AM (#10564119)
    Oh come on. While I'm as much of a Microsoft-basher as the next guy, this is a very useful technique (and not some obscure Microsoft idea). It's a pretty good idea for looking for bugs at all levels of protocol parsing. You send as much valid data as is needed to get the random garbage into the layer or module you want to test. If malformed data can crash the browser, there's a good chance it could be exploitable too.

  • by Tim C ( 15259 ) on Tuesday October 19, 2004 @09:13AM (#10564129)
    With malformed markup, the "correct" thing to do is indeterminate.

    Well, that's debatable, but one thing that's for certain is that you absolutely should not crash due to malformed data. That's bad enough in a browser that doesn't support tabs, but in a tab-capable browser it's unforgivable. That one unparsable webpage that causes the browser to crash is a serious annoyance if it takes out another dozen or so tabs in the process.

    Even ignoring questions of pragmatism (raised by other respondents), at the very most the browser should display some sort of "malformed page, unable to display" error. User input (which this is essentially) should not be able to crash an application. My compiler doesn't crash because of syntax errors in my code, why should my web browser?
  • Re:Security Issues (Score:5, Insightful)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Tuesday October 19, 2004 @09:17AM (#10564154) Homepage Journal
    But is that according to the people who wrote the XHTML standard, or the user who just wants to see the web page?

    Just to be clear, unparseable XHTML is not XHTML. In "Matrix" terms, there is no web page. Instead, there is a string of text that may resemble XHTML to the casual observer but that doesn't really represent anything at all.

    Arguing that browsers should half-support broken XHTML is like saying that a C compiler should do something whenever it encounters invalid C, since the user obviously wants to run the code and isn't interested in bowing to the pedantic demands of some irrelevant standards committee.

    One is rather more important than the other in this context.

    I agree completely, but I don't think it's the one that you picked.

  • by jlipkin ( 638357 ) on Tuesday October 19, 2004 @09:18AM (#10564158)
    The reason that Mozilla and Firefox don't work with some sites that do work for IE is that IE doesn't work very well with web standards. Developers are forced to write code that is non-standards compliant so that it will render properly in IE. This causes Mozilla and Firefox, which are standards compliant, to display pages 'improperly'. What some developers do is to write their HTML and CSS according to W3C guidelines, so they have valid code, then create a series of workarounds so that the code will render properly in IE. You can look at sites like simplebits.com and stopdesign.com to see how they've done this.
  • by MarkedMan ( 523274 ) on Tuesday October 19, 2004 @09:19AM (#10564170)
    This is all to the good, but bugs and holes are not the real source of the Windows et al. vulnerabilities. Microsoft products are insecure *by design*. Giving a scripting language (Visual Basic for Applications) the equivalent of root privileges so fundamentally violates security that it can't be fixed. Ditto for components (ActiveX) on unknown web pages. You at Microsoft obviously know this, since your solution is to tell users to turn these features off. However, you run smack up against your business plan, which is to encourage end users to depend on these sorts of proprietary "features". It's a real bind for you guys - and as much as buttoning up your code is a good thing (TM), it's like fixing the acne on a plague victim...
  • by CausticPuppy ( 82139 ) on Tuesday October 19, 2004 @09:29AM (#10564262)
    I don't see how this is a bad thing. This just means that IE does not catch some of the malformed code people use to cause havoc on the net.

    Let's turn it around... if it was IE that was crashing on bad HTML, and the other browsers simply ignored it, would you be making the same argument? IMO, the slashdot headline would then be "IE Crashes on simple malformed HTML."

    How is it a bad thing when other browsers refuse to read that code? Isn't that a good thing? A good example is a compiler: most compilers catch overflows and don't allow you to finish compiling.

    NO, no, no, no!! It is a BAD thing, because at the very minimum it's a sign of non-existent exception handling. You should never get a runtime error from bad input. In some cases, you create an infinite loop-- is there any excuse for that?
    And considering the nature of the crashes (one of the links caused Firefox 1.0PR to die with a windows memory error, shutting down ALL instances of firefox) this means that some memory was accessed that shouldn't have been, which means that you could conceivably put executable code into memory simply by constructing the right "invalid" HTML. Lo and behold, you now have a buffer overflow exploit for Firefox. And we're telling all the IE users on Windows to switch to Firefox!

    I'm a firefox user, and there's no way I'm switching back to IE, but this MUST be fixed. Now that it's well known, I'm sure there will be a patch for Firefox fairly soon, though I have a feeling the code changes will be somewhat involved.

  • by Hatta ( 162192 ) on Tuesday October 19, 2004 @09:36AM (#10564317) Journal
    Besides which, as a user, I find it infuriating that Mozilla/Firefox are so stuck up on perfectly standard HTML that they just don't work with some web sites that are perfectly usable in IE anyway.

    As a user, I find it infuriating that people write non-standard compliant HTML that only works in one proprietary browser.
  • by David Gerard ( 12369 ) <slashdot@@@davidgerard...co...uk> on Tuesday October 19, 2004 @09:40AM (#10564343) Homepage
    This is why they do nightlies. Whereas compiling Mozilla or Firefox for yourself is extremely laborious, you can use this device to generate crashers, reduce them to test cases (a toy reducer is sketched after this comment), see which nightlies they break, and file the bug reports and talkbacks.

    This could be the greatest Mozilla stability enhancement tool yet seen!
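
    (A hedged sketch of the "reduce them to test cases" step - a toy, not Mozilla's actual tooling: repeatedly drop chunks of the crashing input and keep any smaller version that still crashes. The crash check is passed in as a callback, since in practice it would relaunch the browser on the candidate file.)

    #include <cstddef>
    #include <functional>
    #include <string>

    // Greedy reducer: tries removing progressively smaller chunks and keeps
    // each removal whenever the input still reproduces the crash.
    std::string reduce(std::string input,
                       const std::function<bool(const std::string &)> &still_crashes) {
        for (std::size_t chunk = input.size() / 2; chunk >= 1; chunk /= 2) {
            for (std::size_t pos = 0; pos + chunk <= input.size(); ) {
                std::string candidate = input;
                candidate.erase(pos, chunk);
                if (still_crashes(candidate))
                    input = candidate;        // keep the smaller reproducer
                else
                    pos += chunk;             // that chunk was needed; move past it
            }
        }
        return input;
    }
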

  • by Anonymous Coward on Tuesday October 19, 2004 @09:40AM (#10564346)
    Not to mention the horribly annoying bug that hasn't been fixed since 0.8 that renders Slashdot (HTML 3.2?) incorrectly. You'd think they'd want to fix that bug since a lot of their users read Slashdot.
  • by David Gerard ( 12369 ) <slashdot@@@davidgerard...co...uk> on Tuesday October 19, 2004 @09:42AM (#10564373) Homepage
    Ah, sorry, I should read before posting. Gecko's internals are arcane indeed, but if you really want to dive in then:

    1. Get it compiling on your system.

    2. See if you can help with a bug that's in the system already (a crasher or even a misrendering).

    3. Find the Gecko hackers and pick their brains.

  • by Alomex ( 148003 ) on Tuesday October 19, 2004 @09:45AM (#10564396) Homepage
    Does this mean we've all been wrong about Microsoft products?

    Actually yes. People here always talk about Microsoft products being buggier than the average, without any evidence to back it up beyond their own prejudices.

    They used to laugh at the "much inferior" IE code, until the Mozilla project got started and it turned out Netscape had the inferior code base.

    OSSers used to laugh at the "bloat" of the windows source code.... until Linux got to have a decent user interface that is, and guess what? source code size is comparable to Windows.

    There are many reasons to loathe the evil empire (monopolistic bully for one), but buggy code is not one of them. That is just something OSSers tell each other to feel better about what they do.

  • Is it just me... (Score:2, Insightful)

    by HerculesMO ( 693085 ) on Tuesday October 19, 2004 @09:46AM (#10564406)
    Or does it seem rather amusing that IE, a poorly written browser with many security holes that *brilliantly* links into the OS (not), allows poor code not to crash it...

    Then again... properly written code it seems to have problems with.

    Oh well.. I guess that's what you get.
  • by jefftp ( 35835 ) on Tuesday October 19, 2004 @10:19AM (#10564695)
    Jon Postel [postel.org] said it best: "Be liberal in what you accept, and conservative in what you send." A web browser that crashes due to invalid HTML fails this test. Execution must stop at once... yes, but the program should handle the error. A program that crashes is the Operating System handling a program with a bug. See RFC 1122 [ietf.org] for more details about the Requirements for Internet Hosts. Section 1.2.2 about the Robustness Principle explains better than I can why you're wrong.
  • If you take it seriously, you'd also note that out of the 20+ patches released on "patch day" this month, only ONE was for XP-SP2. All the rest were for legacy code written before the SWI program was in place.
  • this could be bad (Score:3, Insightful)

    by An ominous Cow art ( 320322 ) * on Tuesday October 19, 2004 @10:54AM (#10565102) Journal
    I think a lot of people are missing the point here. I don't have time to personally verify whether the author's claims are correct; let's assume they are. The type of errors he saw are potentially the type that could be exploited via the tried-and-true buffer overflow method. This is the kind of thing that leads to "execution of arbitrary code", in other words, 0wnage. If Bad Guys craft a special web page, they could target those of us who use these non-IE browsers.

    In other words, the people who have been defending IE (and Microsoft in general) by saying "your Mozillas and Operas have been safe from security problems only because nobody uses them" will not only have a field day, but now the clock is ticking. There's proof that there are promising means of attack against these browsers. Someone is surely going to research this. I really hope the good guys developing these browsers rise to the challenge and tighten up the code before we (the people who have been recommending them for years) start losing credibility. I'm inspired to look into helping them out.

    Sorry if this seems incoherent, I keep getting interrupted as I type this. Stupid work...
  • by Patoski ( 121455 ) on Tuesday October 19, 2004 @10:55AM (#10565113) Homepage Journal
    If you take it seriously, you'd also note that out of the 20+ patches released on "patch day" this month, only ONE was for XP-SP2.

    We're talking about security and security bulletins, of which there are "only" 10 this month. I'd love to move to XP SP2, but to be honest there is still some software we're waiting on to become fully SP2 compatible, and SP2 is still too wet behind the ears for us to deploy anyhow. That said, we are testing XP SP2 ATM and are addressing several issues we've found with it in our environment.

    http://www.microsoft.com/technet/security/current.aspx

    All the rest were for legacy code written before the SWI program was in place.

    Not true. 70% of these vulns listed 2003 Server as vulnerable which was released way after SWI.
  • by julesh ( 229690 ) on Tuesday October 19, 2004 @11:48AM (#10565869)
    Microsoft's is called JScript, not Javascript.

    So that'll be why it calls it when I use a url of the form "javascript:[code]", or a tag like "<script language=javascript>" then?

    MS's software all internally understands that this language is called Javascript. It's only MS marketing that decided to use a different name for it.
  • by Anonymous Coward on Tuesday October 19, 2004 @11:53AM (#10565924)
    I ran into similar issues with IE ignoring stuff and Mozilla catching it a few years ago when I was developing one of my first web applications with servlets. Mozilla was a great browser for testing my web apps during development. I remember I had a bug in my code that was supposed to populate a dropdown with options from a database but instead choked somewhere in the model layer and populated it with a bunch of breakspaces. Mozilla would show me the space characters in the dropdown where IE simply ignored them and pretended like the dropdown was empty.

    It's a real pain in the neck for IE to not try to show what's actually there because when I first looked at the page in IE I assumed that I just wasn't getting what I wanted from the database since my dropdown was empty. In reality it wasn't empty, IE just didn't want to show me 1000 breakspaces as an option in my dropdown which is bad from a developer's standpoint. However, masking and hiding bad code and data is something that I absolutely want a browser to do when the application is out in production being used by my clients.

    The bottom line is you should always develop your web applications with a browser like Mozilla that is going to catch your mistakes, but once your application is out the door it's better for clients to be using a browser that will hide any mistakes you didn't catch!
  • by drfuchs ( 599179 ) * on Tuesday October 19, 2004 @12:12PM (#10566163)
    Netscape made the first widely-used browser. Netscape's browser accepted non-conforming HTML all over the place (perhaps because many HTML files were hand-crafted).

    IE came later. IE was forced to maintain compatibility with all the non-conforming stuff that Netscape introduced; otherwise, the user experience was "NS works; IE doesn't". And, having worked at the time on yet a different browser, let me tell you that it was a huge pain to try to even figure out the exact language that Netscape accepted, and what the exact semantics of all the non-conforming HTML that it accepted were. I don't know how IE managed to get so much of it to match. (It's hard enough to write to a spec; but when you've also got a, well, white box that you're trying to match the behavior of, good luck).

    So, all this business about how it's all a MS plot may be fun to claim, but it doesn't match up with the reality of history. (OK, ok, this is not to say that once IE gained the lion's share of the market that they didn't pull the sort of trick being suggested; but they sure weren't first.) The world would be a much better place today if Netscape had been rigid in the language it accepted; by the time IE came on the scene, it just didn't have the option to enforce strictly-conforming HTML.
  • by danila ( 69889 ) on Tuesday October 19, 2004 @12:21PM (#10566283) Homepage
    You don't need to be paranoid. Even if there was no evil plan, rendering broken HTML correctly is a trait that benefits the survival of the browser, just as producing standard code is a beneficial trait for an HTML-authoring tool.

    Mozilla, Opera, Safari and the rest are simply less suited in this respect. They can argue as much as they want about their adherence to standards, but in the end they must learn to display malformed HTML/CSS/JS correctly or fail miserably (because some pages will only work in IE).
  • by |<amikaze ( 155975 ) on Tuesday October 19, 2004 @12:22PM (#10566300)

    That's why you have to do it over and over :). Make some noise. Get the problem noticed. If it's a commercial site, and they start believing they're losing customers over it, then they might take notice. Or they might lose you as a customer.

  • by toolz ( 2119 ) on Tuesday October 19, 2004 @12:32PM (#10566439) Homepage Journal
    I wonder how much of IE's "stability" is because of good code and good parsing, and how much of it is due to the fact that IE didn't *know* its eggs were scrambled?

    It wouldn't be the first time that a program went on "running", only to *eventually* go out to lunch and not come back.

    At least the others *noticed* that something was wrong - and died.
  • by rjkimble ( 97437 ) on Tuesday October 19, 2004 @01:55PM (#10567346) Homepage Journal
    I don't think that anybody is suggesting that IE is the best browser on the market. This guy has performed a useful service by devising a test script that highlights the flaws in other browsers and demonstrates that IE's HTML rendering engine doesn't have these same flaws. Nothing could be simpler. There is no need for all the open source advocates to run around flaming everything in sight, just because we now have an example that highlights a place where IE is better than our browsers of choice.

    Pretty simple stuff, really.
  • by WebCowboy ( 196209 ) on Tuesday October 19, 2004 @02:42PM (#10567776)
    It mentions there is no scripting or stylesheets in this test--just random HTML tag soup. So yes, there was a very defined, arbitrary range established. The results are disappointing for IE competitors for sure, but given IE is at major release SIX and at the end of its life cycle, while the likes of Thunderbird are far less mature (barely at 1.0 and at the early stages of its life cycle), I would expect and demand no less than perfection from IE.

    Given the arbitrary limits on this test, it appears to be designed specifically to make IE look better than its competitors and prove some point rather than be an objective investigation. It is well known that the most serious problems in IE are with scripting and CSS support being unstable, broken or incomplete. A similar test of IE should be conducted with these included. Kudos to the author of the bugtraq entry for doing this kind of testing, but I don't think the editorial commentary regarding the amount of testing of these browsers or their attention to security is warranted or productive.

    The author freely admits he did not seriously analyse the source code for the root cause of these crashes (and in the case of IE, he cannot do so even if he wanted to--but that doesn't stop him from proclaiming it as superior quality). He also provides no evidence that these bugs compromise security in any way beyond consuming system resources, so it was not exactly appropriate to attack their security abilities without further study.

    As to the jibe about lack of testing... Many of these alternatives are open source projects, not yet at an official 1.0 release, people! Being open source, the whole point of exposure is to get many eyes looking at the code, and to get people involved in improving the code. He seems to know a great deal about programming, so I suggest he volunteer some of his spare time to the Mozilla project to make things right, if he is indeed THAT concerned about the issue.

  • by DunbarTheInept ( 764 ) on Tuesday October 19, 2004 @02:50PM (#10567854) Homepage
    You said:
    "there's nothing stopping me from doing it in slashdot's code."

    What about that bit at the bottom of the "Post Comment" form that always says:

    Allowed HTML <B> <I> <P> <A> <LI> <OL> <UL> <EM> <BR> <TT> <STRONG> <BLOCKQUOTE> <DIV> <ECODE> <DL> <DT> <DD> (Use "ECODE" instead of "PRE" or "CODE".)
  • by mdfst13 ( 664665 ) on Tuesday October 19, 2004 @03:12PM (#10568127)
    "The fact that IE passes a test, while other's don't, that it was made to pass, that says somethign positive about IE's security, and is not to be blown off."

    No, I disagree with that. It is reasonable to blow off that IE passes its own test cases. What is not reasonable is to blow off that other browsers do not.

    IE still includes some basic security flaws due to faulty design. For example, there is a phishing attack that displays http://www.bankname.com/ on mouseover but goes to http://ip.nu.mb.er on click. This is a security flaw in IE that should not exist (the same routine should be used to determine the URL for both mouseover and click). Incidentally, this flaw does not exist in Firefox.

    More relevant test cases are always good. New versions of Firefox et al. should be able to handle these test cases as well as those that they handle now that IE does not.
