IE Shines On Broken Code

mschaef writes "While reading Larry Osterman's blog (he's a long-time Microsoftie, having worked on products dating back to DOS 4.0), I ran across this BugTraq entry on web browser security. Basically, the story is that Michal Zalewski started feeding randomly malformed HTML into Microsoft Internet Explorer, Mozilla, Opera, Lynx, and Links and watching what happened. Bottom line: 'All browsers but Microsoft Internet Explorer kept crashing on a regular basis due to NULL pointer references, memory corruption, buffer overflows, sometimes memory exhaustion; taking several minutes on average to encounter a tag they couldn't parse.' If you want to try this at home, he's also provided the tools he used in the BugTraq entry."
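
The mangling idea itself is easy to sketch. The following is not Zalewski's mangleme tool (that one is a CGI script you point the browser at and reload); it is just a minimal, hypothetical C program in the same spirit: it emits a page of randomly broken tags, which you can redirect to a file and open in the browser under test, over and over.

    /* mangle.c - hypothetical mini-mangler, not Zalewski's mangleme.
       Usage: ./mangle > test.html, then open test.html in the browser
       under test and repeat until something falls over. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static const char *tags[]  = { "table", "tr", "td", "font", "a",
                                   "img", "marquee", "form", "input", "frameset" };
    static const char *attrs[] = { "width", "height", "size", "href",
                                   "src", "color", "rowspan", "colspan" };

    static void random_value(void)
    {
        switch (rand() % 4) {
        case 0: printf("%d", rand() - RAND_MAX / 2); break;   /* huge or negative number */
        case 1: printf("%d%%", rand()); break;                /* absurd percentage */
        case 2: for (int i = rand() % 2000; i > 0; i--) putchar('A'); break;
        default: putchar(rand() % 256);                       /* raw random byte */
        }
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        printf("<HTML><BODY>\n");
        for (int i = 0; i < 200; i++) {
            printf("<%s%s", rand() % 3 ? "" : "/", tags[rand() % 10]);
            for (int a = rand() % 4; a > 0; a--) {
                printf(" %s=", attrs[rand() % 8]);
                random_value();
            }
            if (rand() % 4) printf(">");      /* sometimes leave the tag open */
            if (rand() % 8 == 0) printf(">"); /* sometimes close it twice */
            printf("x\n");
        }
        printf("</BODY></HTML>\n");
        return 0;
    }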
This discussion has been archived. No new comments can be posted.
  • by LiquidCoooled ( 634315 ) on Tuesday October 19, 2004 @07:30AM (#10563426) Homepage Journal
    My guess is this was recompiled with the new SP2 compilers?

    But I guess I would have to rtfa for that (which I'm gonna do now)

    One thing, if it is the compiler that's automagically cleaning up the code, does the gcc compiler support the same optimisations?

    If not, why not, if so Woooohooooooo get recompiling.
  • by GabrielPM ( 633823 ) <gabriel&individualism,ro> on Tuesday October 19, 2004 @07:31AM (#10563439) Homepage
    OK, IE is the king of malformed HTML. It can take all the sh*t in the world and still run. Good job coding the parser.

    But what about valid code? What about standard code?

    I'd rather have full XHTML 1.1 support and abstain from visiting sites made by monkeys with typewriters, or by Frontpage authors, than have an uncrashable parser for a browser that choked on the proper XHTML content type.
  • by UfoZ ( 680310 ) on Tuesday October 19, 2004 @07:34AM (#10563456) Homepage
    or perhaps used one of their .NET languages, rather than programming in straight C like the others

    Not likely, since IE was created ages before .NET, and I don't quite think they decided to scrap and rewrite the entire parsing engine since then :)

    As for the malformed HTML, it didn't crash my firefox, but I'll try again a couple of times just in case ;)
  • All Other Browsers? (Score:3, Interesting)

    by polyp2000 ( 444682 ) on Tuesday October 19, 2004 @07:37AM (#10563470) Homepage Journal
    While I must admit that this is a great technique that the various alternative browser vendors, such as the Firefox team, can employ to weed out problems, with IE's track record I find it rather dubious that the guy was unable to crash it. I'm willing to bet there are a couple of people here on Slashdot who know a few tricks that will crash IE with nothing more than a couple of lines of code, which would inevitably point to a flaw in his system. If anything, this highlights IE's highly forgiving HTML parsing.
  • by BrianHursey ( 738430 ) on Tuesday October 19, 2004 @07:39AM (#10563478) Homepage Journal
    I don't see how this is a bad thing. This just means that IE does not catch some of the malformed code people use to cause havoc on the net. Malformed JavaScript and HTML have been known to automatically download things like adware via security holes. How is it a bad thing when other browsers refuse to read that code? Isn't that a good thing? A good example is a compiler: most compilers catch overflows and don't let you finish compiling. From what I am reading here, IE allows errors like this to keep on running. To me this is a very, very bad thing.
  • by freedom_india ( 780002 ) on Tuesday October 19, 2004 @07:40AM (#10563490) Homepage Journal
    I don't get it.

    Microsoft Press writes the BEST books on how to write good code, like Code Complete; but their "manufacturing" dept. does not follow their own best practices, and produces crap like IE 5.0/5.5.

  • by dioscaido ( 541037 ) on Tuesday October 19, 2004 @07:41AM (#10563492)
    Your first instinct would be wrong, at least when it comes to it being built by a separate team. The fact is, as hard to believe as it is, for the past year Microsoft has put in place for every product systematic development techniques that directly target the security of an application (Threat Modeling, secure coding techniques). Furthermore, this kind of test is standard within Microsoft (feed random inputs to all possible input locations). And once all the coding is done, the source still has to pass inspection through a security group within Microsoft! You can read about this stuff at the secure windows initiative [microsoft.com].

    And this shift is working. The trend per-product is a significant reduction in security vulnerabilities. That is not to say there aren't any, that would be impossible, but if you look at the vulnerability graph for, say, Win2k Server since its release, and Win2k3 Server since its release, there is a significant drop in the number of vulnerabilities that have come in since the release of the product. Furthermore, a large part of the vulnerabilities are found from within the company. The same thing can be said for most products, including IE, IIS, Office, etc... We're getting there....

    Now, go off and run as LUA [asp.net], and nip this stupid spyware problem in the bud.
  • by hwestiii ( 11787 ) on Tuesday October 19, 2004 @07:41AM (#10563498) Homepage
    I saw something like this (not quite, but similar) a few years ago working with JavaScript.

    I wasn't that experienced with it, and as a result, certain pieces of my code were syntactically incorrect. Specifically, I was using the wrong characters for array indexing; I think I was using "()" instead of "[]". I would never have known there was even a problem if I hadn't been doing side by side testing with IE and Mozilla. A page that rendered correctly in IE would always show errors in Mozilla. This made absolutely no sense to me.

    It wasn't until I viewed the source generated by each browser that I discovered the problem. IE was dynamically rewriting my JavaScript, replacing the incorrect delimiters with the correct ones, whereas Mozilla was simply taking my buggy code at face value.
  • Re:Security Issues (Score:5, Interesting)

    by Trillan ( 597339 ) on Tuesday October 19, 2004 @07:44AM (#10563510) Homepage Journal
    XHTML is supposed to be refused if malformed; HTML prior to 4.0 is supposed to be best-guessed. I'm not sure what the behaviour of 4.0 Transitional and 4.0 Strict is supposed to be, but I'm sure it's documented as part of the spec.
  • by Zarf ( 5735 ) on Tuesday October 19, 2004 @07:50AM (#10563549) Journal
    I think I was using "()" instead of "[]".

    MSIE was embracing and extending your new syntax. They were effectively defining their own JavaScript variant, meaning their JavaScript was a superset of the real JavaScript standard. That means you can more easily fall into the trap of writing MSIE-only JavaScript and inadvertently force your clients/customers/company to adopt MSIE as your standard browser.
  • Re:Tested Konqueror (Score:5, Interesting)

    by Anonymous Coward on Tuesday October 19, 2004 @07:50AM (#10563550)
    http://lcamtuf.coredump.cx/mangleme/mangle.cgi

    You're right, none of the samples work with Konqueror, however after doing a little testing myself with the above page it just took me about five tries to make it crash.

    Bad luck? Maybe, but just try it yourself.
  • by ragnar ( 3268 ) on Tuesday October 19, 2004 @07:50AM (#10563554) Homepage
    I may be a little paranoid (heck, I actually am) but I've long suspected the IE support for loose HTML was a strategic decision. Go back to the days when Netscape would render a page with an unclosed table tag as blank. IE rendered the page, and I often encountered sites that didn't work on Netscape.

    It could be a coincidence, but the loose HTML support of IE led to a situation where some webmasters concluded that Netscape had poor HTML support. You can argue about standards all day long, but if one browser renders and another crashes or comes up blank, there isn't much of a contest.
  • by dioscaido ( 541037 ) on Tuesday October 19, 2004 @07:51AM (#10563561)
    That's certainly a good point (pre 2000).

    The good news is that now people are required to know Writing Secure Code [microsoft.com], and (more recently) Threat Modelling [microsoft.com] by heart. I can tell you first hand those approaches have been adopted company wide. While Threat Modelling can be time consuming, I've personally found possible issues in code that we wouldn't have noticed without it. Plus we got other people outside our department looking at our code. All in all this is the best approach we could be taking. Microsoft is not sitting on its ass about this issue.
  • by Erasmus Darwin ( 183180 ) on Tuesday October 19, 2004 @07:51AM (#10563564)
    "My guess is this was recompiled with the new SP2 compilers?"

    My understanding of the SP2 compilation changes is that existing buffer-overflows still exist and will still cause the program to crash. The difference is that overflows which previously allowed the attacker to execute arbitrary machine code will instead crash before the code is executed.

  • by BarryNorton ( 778694 ) on Tuesday October 19, 2004 @07:57AM (#10563603)
    Not likely, since IE was created ages before .NET, and I don't quite think they decided to scrap and rewrite the entire parsing engine since then
    Indeed. It would be interesting to know how much of it is preserved from the pre-Microsoft Mosaic code...
  • by SmilingBoy ( 686281 ) on Tuesday October 19, 2004 @07:57AM (#10563611)
    The author gave some examples that are supposed to crash Mozilla, Opera, Links and Lynx at the following URL:

    http://lcamtuf.coredump.cx/mangleme/gallery/ [coredump.cx]

    I opened all the pages in tabs in Firefox 0.10.1 under Windows 2000, and Firefox did not crash. It became somewhat unresponsive, but I could still select other tabs, minimise and maximise. I could not load new pages anymore.

    Can someone else test this as well, please?

    And can someone tell us whether this has security implications or not?

  • Re:so? (Score:5, Interesting)

    by Maestro4k ( 707634 ) on Tuesday October 19, 2004 @08:02AM (#10563640) Journal
    • So what? I have never had a problem with my Firefox crashing (ever). Sure, if you try to make something crash, it eventually will. Considering how many security holes IE has, IE could be the missing link, and I still wouldn't use it.
    Just because you haven't crashed it doesn't mean it's not happening. I switched my Mom over to Firefox for her computer's safety about 2 months back. She's still using it, but it crashes for her regularly and it's becoming a big frustration for her. As she put it "why does Firefox crash so much, IE never crashed on me?" If Mozilla/Firefox/Opera/etc. hope to continue gaining ground on IE, then this type of thing needs to be addressed.

    As I see it the major problem that Mozilla/Firefox has is that the vast majority of those using it (and most definitely the vast majority bothering to report bugs/crashes) are techies. Why is that a problem? Well, we probably don't spend our time going to "silly" e-card sites and joke sites that use bad Flash/HTML. Sure, we can dismiss those sites as not important, because to us they aren't, but to a large portion of the average users out there they're one of the most important things they do in a browser, because to them they're fun.

    So I'm betting Mozilla/Firefox actually crashes regularly on non-techies simply because they visit sites that most techies don't bother to test the browser on.

  • Re:Excellent! (Score:5, Interesting)

    by eht ( 8912 ) on Tuesday October 19, 2004 @08:03AM (#10563646)
    One guy with ten minutes came up with ways to crash Mozilla, Lynx, and Links, yet hundreds of thousands of programmers with years of access to the same code haven't fixed these same bugs.
  • by Dashing Leech ( 688077 ) on Tuesday October 19, 2004 @08:05AM (#10563655)
    Uh, can somebody repeat this guy's test. It sounds like nobody can repeat his results, which is generally a sign of a poorly performed experiment. I can do it, but it will be a few weeks before I have time.
  • Re:Tested Konqueror (Score:3, Interesting)

    by Anonymous Coward on Tuesday October 19, 2004 @08:06AM (#10563668)
    Tested several (> 30) times. No crashes here!
    Version 3.3
  • Re:Security Issues (Score:5, Interesting)

    by FireFury03 ( 653718 ) <slashdot&nexusuk,org> on Tuesday October 19, 2004 @08:08AM (#10563674) Homepage
    XHTML is supposed to be refused if malformed; HTML prior to 4.0 is supposed to be best-guessed.

    This reinforces my belief that XHTML is the way forward, since it reduces the code complexity of the browser:

    XHTML: Try to parse - fail - give up
    HTML: Try to parse - fail - Try to reconstruct - hit bug - crash

    XHTML is also good because it removes the fuzzy area of what to do if the code is crap - with HTML, a web developer will write a page, won't bother to validate it and just check it works in IE. Since different browsers have different methods of fixing broken code, the results of this page are not platform independent. With XHTML, if the developer writes broken code it just plain won't work. The management who pay the web developer probably don't know anything about standards compliance and if it works in IE the developer gets paid, but if it just sits there with a parse error the developer will either have to fix it or not get paid (Good Thing).
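
    To make the two paths concrete, here is a hedged illustration using libxml2 (an assumption made purely for the example; it is not how any of the browsers in the article are implemented): the strict XML parser rejects malformed markup outright, while the HTML parser is asked to recover and hands back a best-guess tree no matter what.

        /* strict_vs_soup.c - hedged sketch, assuming libxml2 is installed.
           Build with: cc strict_vs_soup.c $(xml2-config --cflags --libs) */
        #include <stdio.h>
        #include <string.h>
        #include <libxml/parser.h>
        #include <libxml/HTMLparser.h>

        int main(void)
        {
            const char *broken = "<html><body><b><i>oops</b></i>";

            /* XHTML path: strict well-formedness check; malformed -> NULL, give up. */
            xmlDocPtr strict = xmlReadMemory(broken, (int)strlen(broken),
                                             "broken.xhtml", NULL,
                                             XML_PARSE_NOERROR | XML_PARSE_NOWARNING);
            printf("strict (XHTML-style) parse: %s\n", strict ? "accepted" : "rejected");

            /* HTML path: tag-soup recovery; you always get *some* document back. */
            htmlDocPtr soup = htmlReadMemory(broken, (int)strlen(broken),
                                             "broken.html", NULL,
                                             HTML_PARSE_RECOVER | HTML_PARSE_NOERROR |
                                             HTML_PARSE_NOWARNING);
            printf("tag-soup (HTML-style) parse: %s\n", soup ? "accepted" : "rejected");

            if (strict) xmlFreeDoc(strict);
            if (soup) xmlFreeDoc(soup);
            return 0;
        }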

    That said, IMHO there is something to be said for a couple of additions to the XHTML spec:

    1. a button on the "parse error" page which tells the browser to render it as tag soup - that way the end user can try to view the page anyway even if it's broken (whilst still being informed that it really is broken code).
    2. an automatic feedback system in which the browser will post details of the parse error back to the server. Otherwise the developer may never know there's a problem (especially important with dynamically generated markup which may not be easily validated).

    Similarly, it would be really nice, IMHO, if browsers made it clear (by placing a big X on the status bar or something) when they are viewing broken *HTML* code since this would indicate to the user why the page might not look quite right and would be an indication to the management not to pay the web designer they hired since he is obviously lacking in the ability to do his job.
  • by Anonymous Coward on Tuesday October 19, 2004 @08:09AM (#10563684)
    Sounds like an excellent idea for an apache module!
  • by Khazunga ( 176423 ) * on Tuesday October 19, 2004 @08:13AM (#10563698)
    This is all shiny and great, but ignores the fact that present IE incarnations were developed before the Secure Windows Initiative.
  • by ronobot ( 739113 ) on Tuesday October 19, 2004 @08:23AM (#10563759)

    Since you've brought up JavaScript, there's something I'd like to note:

    When IE encounters an infinite loop, it starts devouring the CPU and can only be shut down through the Task Manager. On the other hand, Mozilla (and Opera, if I recall correctly) will, after a few seconds, detect that it's an infinite loop and bring up an alert box giving you the option of shutting the script off without shutting down the browser.

  • Re:Security Issues (Score:2, Interesting)

    by xoran99 ( 745620 ) on Tuesday October 19, 2004 @08:27AM (#10563782)
    The damage is done! Malformed html is now a part of the culture. Have you ever tried to validate Microsoft.com? Madness.
  • by afidel ( 530433 ) on Tuesday October 19, 2004 @08:31AM (#10563807)
    The difference is that overflows which previously allowed the attacker to execute arbitrary machine code will instead crash before the code is executed.

    Almost. It's more like they will crash, and there is a near-zero chance of the code being executed, even by another running process, because the area has been flagged as non-executable and the CPU will refuse to run anything found in that memory space.
  • maybe it's a fluke.. (Score:3, Interesting)

    by Anonymous Coward on Tuesday October 19, 2004 @08:31AM (#10563809)
    I tried this script on both Mozilla and Firefox at least 40 times now, and it hasn't crashed yet...

    You'll also notice none of this random code tests ActiveX security, or many of the MS extensions which "enhance" security either.. So I think the tests should be taken more with a grain of salt.. Also, while he did say NULL dereferences, it's potentially all due to the same one or two flaws, and may not be exploitable at all..

    Take this with a grain of salt I'd say, because when you check the tags being tested, there aren't a great amount..
  • by LiquidCoooled ( 634315 ) on Tuesday October 19, 2004 @08:39AM (#10563867) Homepage Journal
    I have a file sitting on my desktop here at work which says IE was still growing up in July of this year.

    It was an 11 byte html file which made IE go BOOOOOOOOM. I aptly named it "crashme.htm".

    It remains on my desktop as a reminder of MS crap :)
  • by Corngood ( 736783 ) on Tuesday October 19, 2004 @08:45AM (#10563911)
    So what you are saying is that you prefer a negative end-user experience? That, and you'd like to close a dialog for basically every page you visit?
  • by Halo- ( 175936 ) on Tuesday October 19, 2004 @08:56AM (#10563991)
    Wow, what a great test tool! I do software dev for a living, and the hardest part is when a user says: "umm, I did something, and it crashed... I dunno what..." and then you can't reproduce the problem. The problem exists, but due to the complexity of software, its environment, and the subtleties between the way individuals use it, it's hard to reduce the problem down to a few variables...

    A tool like this would let the average wannabe contributor find reproducible bugs and try to fix them. Which brings me to my dumb question: Is the Mozilla Gecko engine more easily built/tested than the whole of Firefox? I love FF, and wouldn't mind throwing some cycles at improving it, but the entire build process is a bit more than I really want to take on... If I could just build and unit-test the failing component I'd be more likely to try.

    Anyone have pointers beyond the hacking section at MozillaZine?

  • by Seahawk ( 70898 ) <tts@nOsPAm.image.dk> on Tuesday October 19, 2004 @09:09AM (#10564093)
    Why are you reading slashdot then?

    Slashdot would be unreadable if your browser did not accept fucked up html?

    Not saying that I have a better solution - just pointing out that your idea is not really a great idea! :)
  • by StrawberryFrog ( 67065 ) on Tuesday October 19, 2004 @09:21AM (#10564179) Homepage Journal
    Bad markup shouldn't render.

    It shouldn't crash the browser either.

    If any input, even a pathological one, can crash my program, then I need to fix my program. Always.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Tuesday October 19, 2004 @09:22AM (#10564189)
    Comment removed based on user account deletion
  • by jallen02 ( 124384 ) on Tuesday October 19, 2004 @09:24AM (#10564208) Homepage Journal
    I think you misestimate how hard it is to manage projects with the complexity of Internet Explorer. Even teams of really good developers, without a single "non-expert" among them, can be brought down by the integration trap. It can probably all be led back to the Waterfall development paradigm where you do things in huge chunks: "Requirements, Design, Implement, Integrate, Pray, Test". Each of those is done as a discrete phase. Any development process still following that basic model tends to fall apart somewhere around Integrate. Even with better development paradigms such as agile development there are considerable challenges in integrating something as large as IE.

    But that *IS* the point of agile development, to ensure that every step of the way things are working together smoothly. The basic point is that, regardless of the paradigm, IE is a big project with many different components requiring a high degree of integration. A key problem with many different highly integrated components is that these components tend to "trust" each other too much, meaning they just assume the other component is friendly. If all integrated components were a little less trusting, I think software as large and as complex as IE could be more secure.

    This is just a guess; I don't know much about internal Microsoft culture. I have, however, seen security problems of this scale in projects I have cleaned up and worked on, and the problems stem from exactly what I describe. So it's reasonable to assume that somewhere along the way MS has made the same mistakes everyone else does in the software world. Just because they have LOTS of smart people doesn't mean they are any better at managing software processes. Just look at what they are doing with the Longhorn requirements :)

    Jeremy
  • by CTachyon ( 412849 ) <`chronos' `at' `chronos-tachyon.net'> on Tuesday October 19, 2004 @09:34AM (#10564294) Homepage

    The non-executable flagging (Data Execution Prevention in MS parlance) only applies when Windows is running on an architecture that supports it, which is pretty much only AMD64 at this point. They use stuff like stack canaries to protect x86, which makes an attack harder but not impossible.

  • by sw155kn1f3 ( 600118 ) on Tuesday October 19, 2004 @09:35AM (#10564302)
    Executed by another process? What are you talking about? Processes in Windows cannot mess with each other's address space.
    And no, it's not the NX bit fix that makes SP2 secure (NX works only on AMD64 processors and newer, last time I checked). SP2 includes NX support, but it doesn't work on the majority of computers.
    What they did is randomize things and add other software stack-protection techniques built into the compiler (VS.NET 2005).
  • by cowens ( 30752 ) on Tuesday October 19, 2004 @09:36AM (#10564310)
    The guy who writes Code Complete does not work for Microsoft. He is published by Microsoft Press. There is a large difference between the two.
  • Re:Excellent! (Score:5, Interesting)

    by roca ( 43122 ) on Tuesday October 19, 2004 @09:38AM (#10564332) Homepage
    On any given day we know of many HTML inputs that will crash Mozilla, and many that will crash IE, and ditto for other browsers. Which ones get fixed is simply a matter of priorities. And we prioritize by looking at the crash to see if it looks like it could be turned into a security hole; looking at talkback data to see which crashes people are hitting most frequently; focusing on the ones that occur on actual real websites, and maybe after that when there's nothing else to do we fix the ones exposed by artificial testcases.

    No-one has enough resources to fix every bug, not even Microsoft.
  • by Patoski ( 121455 ) on Tuesday October 19, 2004 @09:39AM (#10564340) Homepage Journal
    Your first instinct would be wrong, at least when it comes to it being built by a separate team. The fact is, as hard to believe as it is, for the past year Microsoft has put in place for every product systematic development techniques that directly target the security of an application (Threat Modeling, secure coding techniques). Furthermore, this kind of test is standard within Microsoft (feed random inputs to all possible input locations). And once all the coding is done, the source still has to pass inspection through a security group within Microsoft! You can read about this stuff at the secure windows initiative.

    Your comment really burned me up, since I have to deal with this crap every month in an enterprise environment. You can talk to me about "trend(s) per-product" all you want, but how on Earth can you brag about the great job you're doing when this month you released 10 frigging hotfixes, with 7 of those being critical? You should be keeping your head down this month, trying not to attract attention to your miserable situation. Only Microsoft would think to brag about how well they're doing with regard to security during a month like this.

    Look at the nonchalant way MS is handling the security vulns. In particular I'm thinking about MS04-028 (the JPEG vuln), which was just last month. On top of being one of the worst-written security write-ups I've ever read, the tool you initially provided to detect the problem was worse than useless. Some random guy on the web managed to produce a useful tool by himself before MS did. With all the resources MS has, and all the attention you're putting into security, how can this be? Also, how could you release such a horrible, broken tool in the first place? Surely MS knew the tool was broken when they released it, if they tested it at all!
    http://seclists.org/lists/bugtraq/2004/Sep/0328.html

    If you want me to take MS seriously wrt security then do not attempt to spin how severe a vulnerability is (like you did with MS04-028). How is anyone supposed to take you seriously when MS says things like this:
    "Microsoft does not consider this a high risk to customers given the amount of user action required to execute the attack and is not currently aware of any significant customer impact".
    http://news.com.com/Security+researchers+say+JPEG+virus+imminent/2100-7349_3-5387380.html

    If MS04-028 is not a high security risk then why did MS mark it as critical??? It is one thing to say that you think the JPEG vuln was overhyped (which it was to an extent) but to say that it isn't "high risk" while marking something as "critical" stinks of spin.

    Also, I found it hypocritical for MS to yell from the mountain tops about how security is a top priority and yet at the same time refusing to back port very important workstation security enhancements to non-XP OSes (about 1/2 your install base). Note that this also includes security enhancements to IE6.
    "We do not have plans to deliver Windows XP SP2 enhancements for Windows 2000 or other older versions of Windows," the company said in a statement. "The most secure version of Windows today is Windows XP with SP2. We recommend that customers upgrade to XP and SP2 as quickly as possible."
    http://arstechnica.com/news.ars/post/20040923-4224.html

    Finally, security is a design decision. If security is not a design decision (hello... ActiveX) then you'll constantly be chasing your tail trying to plug holes in the dam. The hack you suggest in your sig to "solve" the spyware issue really underlines this point. RunAs is broken and doesn't work in many cases, and fast user switching does not work very well unless all one does is browse the web and get email. I won't even get into all the reasons why the fast user switching idea is a non-starter, except to say that many applications *require* admin privileges to even run. Both Macs and Linux have had an elegant solution for some time (ask the user for the admin/root password) in order to elevate privileges. Why is that so hard?
  • by Anonymous Coward on Tuesday October 19, 2004 @09:58AM (#10564514)
    I hope you have reported the hang on http://bugzilla.mozilla.org . I would even mark it as security sensitive and as a possible Firefox 1.0 blocker.

    It's only when people actually report these kinds of problems that they will actually get fixed. Otherwise you can still wait a long time for a fix, as it's not very likely someone else will hit this bug.
  • by mr_mischief ( 456295 ) on Tuesday October 19, 2004 @10:25AM (#10564783) Journal
    Interestingly enough, it did crash my other instance of the oh-so-secure IE. Infinite loop variety, actually. IE 6 SP1 with 5 updates since then.

  • by Anonymous Coward on Tuesday October 19, 2004 @10:36AM (#10564900)
    A looong time ago I designed hardware. A young software guy was given the task of writing a diagnostic for a memory board I had designed. He spent a lot of time investigating bit patterns and data patterns that caused failures in the specific memory chips we used.

    About 4 weeks after he started, he came to me and congratulated me on my marvelous design: his test had run for 2 days solid with no memory failures. Not that I have trouble taking a compliment or anything... I asked to see the test setup. I unplugged one memory chip and the test still passed.

    A week later he came back and told me that now it worked. Yeah, it would detect one chip pulled in one bank but when I pulled a different chip from each bank on the memory board it only detected the first bank failure over and over again.

    The point of this story is: the software engineer was only interested in whether or not the test ran. I was interested in whether or not the test actually caught errors.

    Now: why in the hell did it take a Microsoftie to code up this kind of test and run it on these browsers? How did the coders of these browsers test? By just feeding each browser properly formed HTML and making sure that they rendered right? That is only 1/2 the job!

    I now write embedded firmware for a living. I only spend about 1/4 of my time getting the software to do what it is supposed to do with well-formed input. I spend the other 3/4 of the time making sure that mal-formed input does not cause the code to do anything hinky!

    Credit where credit is due: kudos to Microsoft for this particular facet of their browser. And shame, shame, SHAME on the open source coders responsible for those browsers!
  • by Anonymous Coward on Tuesday October 19, 2004 @10:38AM (#10564912)
    > Reality Distortion Fields ON!

    What the hell are you trolling about? It's a _FACT_ that mozilla and the other browsers crash on the HTML code he provided, regardless of what his opinions on Apache and ASP.NET are.
  • by dark_panda ( 177006 ) on Tuesday October 19, 2004 @10:50AM (#10565059)
    If it's of any help, I tried all of the examples in both Konqueror from KDE 3.3.1 and the latest Safari, and none of them caused either of the browsers to crash. (konq and safari both use KHTML, of course...)

    J
  • Re:Tested Konqueror (Score:2, Interesting)

    by crazy blade ( 519548 ) on Tuesday October 19, 2004 @11:23AM (#10565534)

    Which version people?

    I've loaded the above URL at least 50 times in Konqueror 3.3 (not even 3.3.1) with no crashes.

  • Plugins (Score:3, Interesting)

    by phorm ( 591458 ) on Tuesday October 19, 2004 @11:28AM (#10565599) Journal
    I used to have Firefox crash and burn regularly on various webpages - linux and windows. The browser would segfault and be gone, sometimes without visible errors

    Eventually I noticed it seemed to be mostly with pages having Flash content. I ended up nuking my plugins folder and reinstalling the flash/Java plugins, now it crashes a lot less.

    While the article indicates it is possible to crash on bad code, you might want to check your plugins too just in case.
  • by Anonymous Coward on Tuesday October 19, 2004 @11:45AM (#10565830)
    If you think this is cool, you should also google on something called 'Delta Debugging'. DD takes over after the random test generator finds a bug-exposing input -- it systematically and automatically simplifies the input until a minimal bug-inducing input is found.

    Random testing got a bum rap decades ago when running tests was expensive. Today it's extremely cheap, and being able to produce and run millions of tests on idle machines gives you enormous testing power with little manual effort.
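
    For the curious, here is a hedged and much-simplified sketch of that reduction step (plain chunk removal in the spirit of ddmin, not Zeller's full algorithm). The failing() oracle is a toy stand-in; a browser-fuzzing harness would instead write each candidate to disk and check whether the browser still crashes on it.

        /* reduce.c - hypothetical chunk-removal reducer, ddmin-flavoured. */
        #include <stdio.h>
        #include <string.h>

        /* Toy oracle: the "bug" reproduces whenever the input still contains
           the broken tag "<table<". A real harness would launch the browser
           on the candidate input and report whether it crashed. */
        static int failing(const char *s, size_t len)
        {
            const char *needle = "<table<";
            size_t nl = strlen(needle);
            for (size_t i = 0; i + nl <= len; i++)
                if (memcmp(s + i, needle, nl) == 0) return 1;
            return 0;
        }

        int main(void)
        {
            char buf[] = "<html><body><b><i><table<tr><blink>junk</body>";
            size_t len = strlen(buf);
            size_t chunk = len / 2;

            while (chunk >= 1) {
                int removed_any = 0;
                for (size_t start = 0; start + chunk <= len; ) {
                    char cand[sizeof buf];
                    /* candidate = buf with the bytes [start, start+chunk) cut out */
                    memcpy(cand, buf, start);
                    memcpy(cand + start, buf + start + chunk, len - start - chunk);
                    if (failing(cand, len - chunk)) {   /* still reproduces: keep the cut */
                        len -= chunk;
                        memcpy(buf, cand, len);
                        removed_any = 1;                /* retry at the same offset */
                    } else {
                        start += chunk;                 /* that piece was needed: move on */
                    }
                }
                if (!removed_any) chunk /= 2;           /* no luck: refine the granularity */
            }
            buf[len] = '\0';
            printf("reduced crasher: %s\n", buf);
            return 0;
        }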
  • I love random input (Score:3, Interesting)

    by John Jorsett ( 171560 ) on Tuesday October 19, 2004 @11:51AM (#10565896)
    I did much the same to test a user interface written by another programmer on a project we were assigned to. The interface wasn't a gui, it was a pure ASCII type, so I wrote a random character generator and threw the output at her interface for days at a time. Crash. Crash. Crash. It was wonderful. I don't know that it found every flaw, but I'll bet no one ever killed her interface by leaning on the keyboard (as actually happened on an earlier project I'd heard about).
  • by Anonymous Coward on Tuesday October 19, 2004 @12:05PM (#10566095)
    NCSA Mosaic successfully survives all of these pages. This just shows that programmers these days Just Plain Suck, and Mozilla is the living embodiment of this.

    There's no good reason that code should be vulnerable to integer or buffer overflow issues like this.
  • by sharper56 ( 142142 ) <antisharper@NospaM.hotmail.com> on Tuesday October 19, 2004 @12:05PM (#10566096) Journal
    Those pages were just examples to prove that various browsers had repeatable crashable points.

    If you run his mangle.cgi test against Konqueror it takes 15 secs for it to crash the browser!

    Better fire up the KHTML bug track. :-)
  • by divad27182 ( 823483 ) on Tuesday October 19, 2004 @12:24PM (#10566321)
    I have to ask:

    When saying that Microsoft Internet Explorer didn't crash, does he mean that the window never went away, or that the program iexplore.exe stayed running? I can't prove it, but I suspect that the "IE" window would survive a crash of the rendering engine, because the window is actually provided by explorer.exe, which is the desktop manager.

    I also suspect that several of the open source browsers could defend themselves against this kind of crash within a day or two, simply by using a two-process model. Personally, I would rather they did not! (I want to see it fail; otherwise I would not know something was wrong.)
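
    As a hedged illustration only (this is not how IE, Mozilla, or Konqueror were actually structured at the time), here is a small POSIX sketch of the two-process idea: do the risky rendering in a forked child, so a crash on hostile markup kills the child while the "UI" parent survives to report it and carry on.

        /* two_proc.c - hypothetical sketch of crash isolation via fork(). */
        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* Stand-in for a rendering engine: dereferences NULL on one kind of
           input to simulate the parser crashes the fuzzing turned up. */
        static void render(const char *html)
        {
            if (html[0] == '!') { volatile char *p = NULL; *p = 1; }
            printf("rendered: %s\n", html);
        }

        static void render_isolated(const char *html)
        {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); return; }
            if (pid == 0) {                /* child: do the risky work */
                render(html);
                _exit(0);
            }
            int status = 0;
            waitpid(pid, &status, 0);      /* parent: just observe the outcome */
            if (WIFSIGNALED(status))
                fprintf(stderr, "renderer died (signal %d), window stays up\n",
                        WTERMSIG(status));
        }

        int main(void)
        {
            render_isolated("<p>hello</p>");
            render_isolated("!<malformed>");    /* child crashes, parent survives */
            render_isolated("<p>still here</p>");
            return 0;
        }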
  • by davidsyes ( 765062 ) on Tuesday October 19, 2004 @01:05PM (#10566837) Homepage Journal
    I watched "Moon Child", the VCD verison, and BOTH times when Xine reached the end of the VCD, as the last of the credits rolled, the entire laptop locke up.

    Nothing worked, not even ctrl+fn+someletter would work. I am an avid Lxr, and I only use win98 inside Win4Lin, but last night I even had that off, I reniced MySQL to something like +15, and at one point reniced Xine to -9 or -10 before shutting it and restarting it.

    However, I was off-line, and did not even have Firestarter nor Etherape running, so my CPU was not being overloaded. I do notice, unrelated to Xine, that running Win4Lin in Xfce starts MUCH faster (as when KDE is FRESHLY run) than currently in KDE. I imagine one of my hidden dot files or authority or temp files is trashed, or maybe something in a win4lin path is hosed. Maybe my box has been cracked weeks ago. I dunno. Have to check my logs.

    However, I accept that Xine is a .9.x release, and has not normally done this on other DVDs or VCDs I've watched. Just with Moon Child, the Japanese version with English & Jpns/Chns subtitles. Periodically, after taking 10 screen snaps with Xine's camera icon, Xine would just disappear. I'd then have to restart Alsa by running /etc/rc5.d/S17alsa force-restart.

    And if Xine was alive at that point, the force-restart would kill it dead.

    I know this is not exactly along the lines of browsers, but if VCDs and DVDs are being watched while a user is on-line, who knows if a batch of bad code is being fed to the userspace tools? Who knows if other compromises already on the box are aggregating to worsen things with a new breach's arrival. Fortunately, I prefer to yank my ethernet anytime I'm not surfing. I tend to yank my bband router, too, to keep its visibility low.

  • by Animats ( 122034 ) on Tuesday October 19, 2004 @01:21PM (#10566981) Homepage
    There's also "ntcrash2" [attrition.org] which generates random Win32 calls. It saves what it's doing in a file, so when you crash, there's a record. After the reboot, it starts up again, avoiding all recorded crashes in its log. Microsoft was very upset about that.

    That's not even a very tough test. A tougher test would be to generate calls which are permuted slightly from valid ones.
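
    The "log it before you run it" trick is what lets such a fuzzer survive taking the whole machine down, and it is worth spelling out. A hedged, hypothetical sketch follows (not the real ntcrash2; run_case() is a dummy that crashes on a few inputs rather than issuing random Win32 calls):

        /* crashlog.c - hypothetical "record before run" harness, not ntcrash2. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define MAX_SEEN 100000
        static unsigned seen[MAX_SEEN];
        static size_t n_seen;

        static int already_tried(unsigned c)
        {
            for (size_t i = 0; i < n_seen; i++)
                if (seen[i] == c) return 1;
            return 0;
        }

        /* Dummy stand-in for "issue one random API call with random args";
           a few unlucky cases crash the whole process via a NULL write. */
        static void run_case(unsigned c)
        {
            volatile char *p = NULL;
            if (c % 100000 == 0)
                *p = 1;
        }

        int main(void)
        {
            const char *logname = "crash.log";
            FILE *log = fopen(logname, "r");
            if (log) {                             /* reload every case tried in past runs */
                unsigned c;
                while (n_seen < MAX_SEEN && fscanf(log, "%u", &c) == 1)
                    seen[n_seen++] = c;
                fclose(log);
            }
            srand((unsigned)time(NULL));
            for (;;) {
                unsigned c = (unsigned)rand();
                if (already_tried(c)) continue;    /* skip anything already on record */
                log = fopen(logname, "a");
                if (!log) return 1;
                fprintf(log, "%u\n", c);           /* write the record BEFORE running */
                fclose(log);                       /* make sure it hits the disk */
                if (n_seen < MAX_SEEN) seen[n_seen++] = c;
                run_case(c);                       /* this call may never return */
            }
        }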

  • by CTachyon ( 412849 ) <`chronos' `at' `chronos-tachyon.net'> on Tuesday October 19, 2004 @01:23PM (#10567000) Homepage

    No, this is the stack canary in action. To emulate per-page NX on a processor without it, Windows would have to single-step all your programs, making it slower than VMware. (VMware doesn't even emulate at that level of detail.)

    (Technically, it could get by without single-stepping: it could mark your NX pages no-read, then handle the page fault by checking the instruction at the fault address, emulating a MOV or similar instruction but killing the program on a RET or similar. However, that's horrendously slow, since each page fault involves two context switches (one into ring 0, one back to ring 3), which would easily slow your program by 100-fold. Your 3GHz computer would effectively max out at 300MHz.)

  • by CTachyon ( 412849 ) <`chronos' `at' `chronos-tachyon.net'> on Tuesday October 19, 2004 @01:25PM (#10567025) Homepage

    I forgot to point out that you can prove this by compiling your program with an older or non-MS compiler. Write up a test C program, then compile it with Cygwin or MinGW GCC, and run it on an XP SP2 system running on a plain x86 processor. It should still overflow normally. Switch to Microsoft's compiler, and it should raise an error instead.
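
    Along those lines, a hedged example of the sort of throwaway test program the parent is describing. The assumption is the 2004-era toolchains: MinGW/Cygwin GCC of that day built this with no stack protection by default, while Microsoft's compiler with /GS inserts a stack cookie and kills the process when the overrun is detected, before the clobbered return address is ever used.

        /* buggy.c - textbook stack smash; hypothetical demo, not IE code. */
        #include <stdio.h>
        #include <string.h>

        static void overflow(const char *attacker_controlled)
        {
            char buf[16];
            strcpy(buf, attacker_controlled);   /* classic bug: no bounds check */
            printf("copied: %s\n", buf);
        }

        int main(void)
        {
            char payload[128];
            memset(payload, 'A', sizeof payload - 1);
            payload[sizeof payload - 1] = '\0';
            overflow(payload);                  /* smashes the saved return address */
            /* With no protection this may "work", crash, or jump into the weeds;
               with /GS-style cookies the process is terminated before returning. */
            printf("returned to main\n");
            return 0;
        }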

  • by Anonymous Brave Guy ( 457657 ) on Tuesday October 19, 2004 @01:59PM (#10567384)
    Just to be clear, unparseable XHTML is not XHTML. In "Matrix" terms, there is no web page.

    Just to be clear, the user doesn't care. In "user" terms, there is a web page, and a browser that fails to render it is broken. This is intensely annoying to those of us (and this includes me as well as you) with technical mindsets, but it is true nonetheless.

    I'm well aware of the nature of XML, and what the standards people would like to have happen. I'm also well aware of how frustrating it is to run a web site that has to work in browsers that don't support the standards; I maintain a large site myself using XML, XSLT, HTML and CSS amongst other buzzabbreviations. However, the simple fact is that in the real world, standards are simply a means to an end (or not). If you're writing software for users, then the user experience is all that matters. Anything else -- standards, security, UI, file formats, whatever -- matters only to the extent that it affects the user experience.

    As an interesting aside, you might like to read the AC reply to your previous post as well, and note that in fact C compilers do frequently have to deal with not-quite-right code because C programmers, like web developers, make mistakes.

  • Meanwhile... (Score:3, Interesting)

    by tsarin ( 217882 ) on Tuesday October 19, 2004 @02:28PM (#10567645)

    100% valid CSS and XHTML continues [tudelft.nl] to crash IE.
  • by malfunct ( 120790 ) on Tuesday October 19, 2004 @02:32PM (#10567683) Homepage
    The new compiler has a whole slew of tricks to prevent arbitrary code execution by buffer overrun. Most of it seems to be memory re-ordering as well as extra detection. It's pretty good stuff from what I've seen, but it doesn't replace correct coding. If he is testing the SP2 version of IE, the fact that IE has fewer crashes would have a lot to do with the new standards at MS in both development and testing.
  • by Hockney Twang ( 769594 ) on Tuesday October 19, 2004 @03:01PM (#10567995)
    Wanna see something neat? Copy their
    <IMG
    SRC="data:image/gif;base64,R0lGODdhMAAwAPAAAAAAAP ///ywAAAAAMAAw
    AAAC8IyPqcvt3wCcDkiLc7C0qwyGHhSWpjQu5yqmCYsapyuvU UlvONmOZtfzgFz
    ByTB10QgxOR0TqBQejhRNzOfkVJ+5YiUqrXF5Y5lKh/DeuNcP 5yLWGsEbtLiOSp
    a/TPg7JpJHxyendzWTBfX0cxOnKPjgBzi4diinWGdkF8kjdfn ycQZXZeYGejmJl
    ZeGl9i2icVqaNVailT6F5iJ90m6mvuTS4OK05M0vDk0Q4XUtw vKOzrcd3iq9uis
    F81M1OIcR7lEewwcLp7tuNNkM3uNna3F2JQFo97Vriy/Xl4/f 1cf5VWzXyym7PH
    hhx4dbgYKAAA7"
    ALT="Larry">
    into an html file, open it in IE, red X, open in FF, displays perfectly.
  • Lynx gallery example (Score:3, Interesting)

    by lahvak ( 69490 ) on Tuesday October 19, 2004 @03:50PM (#10568569) Homepage Journal
    Tried the gallery examples: Firefox crashes reliably, and Links does too (from the error messages it looks like they actually catch the NULL pointer - it says "malloc returned NULL pointer" - but don't react to it).

    However, I was not able to crash lynx with the example. It takes a while to render the page, but it renders it just fine (considering it is actually invalid HTML). Perhaps it depends on the amount of memory you have.

    If I remember correctly, a while ago there were rumors circulating that IE is specifically designed to deal well with invalid HTML. A lot of people were of the opinion that this is really bad, and that invalid HTML code should be rejected. They said IE basically encouraged sloppy web design.
  • by spectecjr ( 31235 ) on Wednesday October 20, 2004 @07:14PM (#10581079) Homepage
    LoL not counting all the ram used by windows itself where M$ keeps the large amount of the rendering engine. Just cause you hit ctrl-alt-del and saw 19mb next to iexplore.exe DOESN'T mean that IE is only using 19mb of ram. The LARGE majority of things that IE needs are preloaded by the OS.

    Yes, it does mean that actually. The value you see in Task Manager is the Working Set Size of IE - that is, the total in-memory space of all DLLs and memory allocations currently in use by IE.

    What IE ostensibly gets from being "preloaded" is faster loading times. Which is actually NOT the case - if you compile Mozilla from the Win32 source and let it run to completion, you'll find that they finally added support for rebasing and binding their DLLs. Which means that Mozilla loads at the same speed as IE if you turn off its splash screen with the /nosplash option.

    If you want to argue this point, you can do other things - like use PerfMon or ProcExp (from Sysinternals.com) to look at the Working set size of the app. Which, by the way, is the same size as the value listed in Task Manager.

    For your argument to have any teeth, Task Manager would have to be displaying the "Private Bytes" value - that is, the un-shared bytes used by the process - which it is not.

    Please don't try to speak authoritatively regarding Windows when you plainly do not know what you are talking about.

    Working Set Size - Private Bytes = amount of IE which is shared with other processes.

    Note that Mozilla can gain similar benefits by using other DLLs used by the OS.
