
IE Shines On Broken Code 900

Posted by timothy
from the crashing-is-unsafe dept.
mschaef writes "While reading Larry Osterman's blog (he's a long-time Microsoftie, having worked on products dating back to DOS 4.0), I ran across this BugTraq entry on web browser security. Basically, the story is that Michal Zalewski started feeding randomly malformed HTML into Microsoft Internet Explorer, Mozilla, Opera, Lynx, and Links and watching what happened. Bottom line: 'All browsers but Microsoft Internet Explorer kept crashing on a regular basis due to NULL pointer references, memory corruption, buffer overflows, sometimes memory exhaustion; taking several minutes on average to encounter a tag they couldn't parse.' If you want to try this at home, he's also provided the tools he used in the BugTraq entry."
  • by Eponymous Cowboy (706996) * on Tuesday October 19, 2004 @07:25AM (#10563394)
    Since it may not be obvious to all readers, be aware that when you can make a program crash by feeding it bad data, you can typically further manipulate the data you are sending it to take control of the program. That means a security hole. This is how buffer-overruns work. You can't always do it, but you can think of each way you can crash a program as a little "crack" in its exterior. If you can figure out a way to pry apart the crack, you've got yourself a hole.

    So many of these "bugs" in Mozilla, Opera, Lynx, and Links are likely security holes as well.

    It is interesting, then, to see that Internet Explorer did so well on this, with its notoriously bad history [trustworthycomputing.com] on security. My first instinct would be that the HTML parsing engine in Internet Explorer was written by a different team of programmers than worked on the rest of the software, and they used proper programming techniques (such as RAII [google.com] in C++, or perhaps used one of their .NET languages, rather than programming in straight C like the others) which as a side effect prevented such problems.

    Let's hope that all these bugs are taken care of in the other browsers quickly before the black hats find ways to make use of them.
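The "crack in the exterior" mechanics described above can be sketched in miniature. Below is a toy Python model (illustrative only: real overflows happen in C's unchecked flat memory, while Python itself bounds-checks). A fixed-size buffer sits directly before a saved return address, and an unchecked copy of oversized input overwrites that address with attacker-chosen bytes.

```python
BUF_SIZE = 16

def make_frame():
    # Toy "stack frame": a 16-byte buffer followed by a 4-byte saved
    # return address (0xDEADBEEF, stored little-endian).
    return bytearray(BUF_SIZE) + bytearray((0xEF, 0xBE, 0xAD, 0xDE))

def naive_copy(frame, data):
    # The bug: no bounds check, like strcpy() into a local buffer.
    for i, b in enumerate(data):
        frame[i] = b

frame = make_frame()
attacker_input = b"A" * BUF_SIZE + bytes((0x41, 0x42, 0x43, 0x44))
naive_copy(frame, attacker_input)

# The four bytes past the buffer, the "return address", are now
# attacker-chosen: the crash has become a potential hole.
print(frame[BUF_SIZE:].hex())  # -> 41424344
```

In a real C program those four bytes would be the address the CPU jumps to on function return, which is why a reproducible crash on malformed input is treated as a potential exploit rather than a mere nuisance.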
    • My guess is this was recompiled with the new SP2 compilers?

      But I guess I would have to rtfa for that (which I'm gonna do now)

      One thing: if it is the compiler that's automagically cleaning up the code, does the gcc compiler support the same optimisations?

      If not, why not, if so Woooohooooooo get recompiling.
      • by Erasmus Darwin (183180) on Tuesday October 19, 2004 @07:51AM (#10563564)
        "My guess is this was recompiled with the new SP2 compilers?"

        My understanding of the SP2 compilation changes is that existing buffer-overflows still exist and will still cause the program to crash. The difference is that overflows which previously allowed the attacker to execute arbitrary machine code will instead crash before the code is executed.

        • by afidel (530433) on Tuesday October 19, 2004 @08:31AM (#10563807)
          The difference is that overflows which previously allowed the attacker to execute arbitrary machine code will instead crash before the code is executed.

          Almost; it's more like they will crash, and there is a near-zero chance of the code being executed, even by another running process, because the area has been flagged as non-executable and the CPU will refuse to run anything found in that memory space.
          • by CTachyon (412849) <chronosNO@SPAMchronos-tachyon.net> on Tuesday October 19, 2004 @09:34AM (#10564294) Homepage

            The non-executable flagging (Data Execution Prevention in MS parlance) only applies when Windows is running on an architecture that supports it, which is pretty much only AMD64 at this point. They use stuff like stack canaries to protect x86, which makes an attack harder but not impossible.
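The stack-canary mitigation mentioned above can be illustrated with a toy Python model (purely illustrative; real canaries are emitted by the compiler, e.g. Visual C++'s /GS switch, and verified in the function epilogue). A secret value sits between the buffer and the saved return address; an overflow must clobber it on the way past, and the check then aborts instead of returning through attacker data.

```python
import os

# Secret canary placed between buffer and return address.  The leading
# NUL byte is a classic "terminator canary" trick: it also stops
# string-copy overflows, which cannot write a NUL mid-string.
CANARY = b"\x00" + os.urandom(3)

def run_with_canary(copy_len):
    frame = bytearray(16) + bytearray(CANARY) + bytearray(4)  # buf|canary|ret
    for i in range(copy_len):          # unchecked copy of 0x41 ("A") bytes
        frame[i] = 0x41
    if bytes(frame[16:20]) != CANARY:  # epilogue check before "returning"
        return "abort"                 # crash safely; attacker code never runs
    return "return"

print(run_with_canary(10))  # in-bounds write -> return
print(run_with_canary(24))  # overflow clobbers canary -> abort
```

Note how this matches the parent's point: the canary turns a hijacked return into a controlled abort, so the attack is harder but the underlying overflow bug is still there.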

    • by UfoZ (680310) on Tuesday October 19, 2004 @07:34AM (#10563456) Homepage
      or perhaps used one of their .NET languages, rather than programming in straight C like the others

      Not likely, since IE was created ages before .NET, and I don't quite think they decided to scrap and rewrite the entire parsing engine since then :)

      As for the malformed HTML, it didn't crash my firefox, but I'll try again a couple of times just in case ;)
    • by InsaneCreator (209742) on Tuesday October 19, 2004 @07:37AM (#10563467)
      My first instinct would be that the HTML parsing engine in Internet Explorer was written by a different team of programmers than worked on the rest of the software

      I'd say the same about Outlook Express. Most security holes in OE were due to bad "glue" between components. And if I'm not mistaken, most holes in IE are also caused by bad integration.
      It sure looks like the expert programmers create components which are then bolted together by an army of "learn programming in 24 hours" drones.
      • by jallen02 (124384) on Tuesday October 19, 2004 @09:24AM (#10564208) Homepage Journal
        I think you underestimate how hard it is to manage projects with the complexity of Internet Explorer. Even teams of really good developers with no one "non-expert" can be brought down by the integration trap. It can probably all be traced back to the Waterfall development paradigm, where you do things in huge chunks: "Requirements, Design, Implement, Integrate, Pray, Test". Each of those is done as a discrete phase. Any development process still following that basic model tends to fall apart somewhere around Integrate. Even with better development paradigms such as agile development, there are considerable challenges in integrating something as large as IE.

        But that *IS* the point of agile development: to ensure that every step of the way things are working together smoothly. The basic point is that, regardless of the paradigm, IE is a big project with many different components requiring a high degree of integration. A key problem with many highly integrated components is that they tend to "trust" each other too much, meaning they just assume the other component is friendly. If all integrated components were a little less trusting, I think software as large and as complex as IE could be more secure.

        This is just a guess; I don't know much about internal Microsoft culture. I have, however, seen security problems of this scale in projects I have cleaned up and worked on, and the problems stem from exactly what I describe. So it's reasonable to assume that somewhere along the way MS has made the same mistakes everyone else does in the software world. Just because they have LOTS of smart people doesn't mean they are any better at managing software processes. Just look at what they are doing with the Longhorn requirements :)

        Jeremy
    • I don't get it.

    Microsoft Press publishes the BEST books on how to write good code, like Code Complete; but their "manufacturing" dept. does not follow its own best practices, and produces crap like IE 5.0/5.5.

      • by dioscaido (541037) on Tuesday October 19, 2004 @07:51AM (#10563561)
        That's certainly a good point (pre 2000).

        The good news is that now people are required to know Writing Secure Code [microsoft.com] and (more recently) Threat Modelling [microsoft.com] by heart. I can tell you first-hand that those approaches have been adopted company-wide. While Threat Modelling can be time-consuming, I've personally found possible issues in code that we wouldn't have noticed without it. Plus we got other people outside our department looking at our code. All in all, this is the best approach we could be taking. Microsoft is not sitting on its ass about this issue.
    • by dioscaido (541037) on Tuesday October 19, 2004 @07:41AM (#10563492)
      Your first instinct would be wrong, at least when it comes to it being built by a separate team. The fact is, as hard to believe at it is, for the past year Microsoft has put in place for every product systematic development techniques that directly target the security of an application (Threat Modeling, Secure coding techniques). Furthermore, this kind of test is standard within Microsoft (feed random inputs to all possible input locations). And once all the coding is done, the source still has to pass inspection through a security group within Microsoft! You can read about this stuff at the secure windows initiative [microsoft.com].

      And this shift is working. The trend per product is a significant reduction in security vulnerabilities. That is not to say there aren't any, that would be impossible, but if you look at the vulnerability graph for, say, Win2k Server since its release, and Win2k3 Server since its release, there is a significant drop in the number of vulnerabilities coming in since the release of the product. Furthermore, a large part of the vulnerabilities are found from within the company. The same can be said for most products, including IE, IIS, Office, etc... We're getting there....

      Now, go off and run as LUA [asp.net], and nip this stupid spyware problem in the bud.
      • This is all shiny and great, but ignores the fact that present IE incarnations were developed before the Secure Windows Initiative.
      • Furthermore, this kind of test is standard within Microsoft (feed random inputs to all possible input locations).

        So what you are saying is that this article consists of a Microsoft employee applying one type of stability test, one that happens to be used inside Microsoft, to their own browser, which has been patched against exactly this test, among others. Permit me to say I am somewhat underwhelmed by IE's amazing performance.

        This is the security equivalent of Microsoft's "benchmarks" where the benchmark i
        • by Erik Hollensbe (808) on Tuesday October 19, 2004 @08:56AM (#10563989) Homepage
          This is a simple, nearly infallible rule of detecting exploits, to the point where I even know it. :)

          If you can get a program to write past the end of its allocated memory segment, you can overwrite all sorts of fun stuff with things like shellcode and anything else you want to throw in the executable stack.

          The program (I read the SF post yesterday) generates the standard things that would confuse an HTML parser: NUL (ASCII 0) characters, overly large integers (Opera, IIRC, brought his system to a halt with a giant colspan="" attribute), things that need to be checked pre-emptively.

          Regardless of his "bias", this is a problem. In fact, sometimes the people with the most to gain do a great job giving the others the opportunity to gain instead. Either way, he just upped the bar for browser security, which benefits us all.

          Don't just blow him off.
  • by ideatrack (702667) on Tuesday October 19, 2004 @07:26AM (#10563397)
    There's a good phrase I can use to explain this one:

    If you work in a monkey house, you expect to be pelted with shit.
  • hmmm (Score:4, Funny)

    by Anonymous Coward on Tuesday October 19, 2004 @07:26AM (#10563400)
    I'd love to read the article, but the page seems to contain malformed HTML...
  • by richie2000 (159732) <rickard.olsson@gmail.com> on Tuesday October 19, 2004 @07:26AM (#10563402) Homepage Journal
    It's strangely fitting that the response I first got was the error message: "Nothing for you to see here. Please move along." The Slashdot effect has finally spread to the browser.

    However, my Mozilla passed the test without crashing. :-P

  • by Anonymous Coward on Tuesday October 19, 2004 @07:27AM (#10563409)
    They didn't say that IE also started randomly installing Bonzi Buddy et al. during the test, the users' credit card numbers were automagically emailed to Romania, there was a sudden increase in outbound port 25 traffic from the system, and they ended the session with about 37 more toolbars installed than they started with.
  • by jonwil (467024) on Tuesday October 19, 2004 @07:29AM (#10563415)
    Apparently, XPSP2 (including the new IE) was recompiled with the latest Visual Studio and with all the options turned on to better catch issues.
  • by Darren Winsper (136155) on Tuesday October 19, 2004 @07:32AM (#10563442) Homepage
    I don't know if they still use it, but the Linux kernel developers used to use a program called "crashme" to help test kernel stability. Essentially, it generated random code and tried to execute it. Something like this for web browsers would make for a very useful procedure. Generate the code, throw it at the browser and log the code if it crashed the browser.
    • by pohl (872) on Tuesday October 19, 2004 @09:22AM (#10564193) Homepage
      I remember crashme, and I just checked the debian packages and anybody can "apt-get install crashme" to give it a whirl.

      I'd like to second the AC's suggestion of taking these HTML test cases and constructing an Apache module that creates garbage HTML like this. The result would be a great contribution to all browsers.

      The Mozilla project did have a test that sent the browser to random pages across the web, which exposed it to all sorts of garbled HTML, I'm sure, but generating randomly garbled HTML would probably be a more strenuous test.
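A minimal sketch of such a garbage-HTML generator in Python. The tag list and mutations below (embedded NULs, huge integers, mismatched or missing closing tags) are my own guesses at mangleme-style cases, not Zalewski's actual tool:

```python
import random

TAGS = ["table", "td", "tr", "a", "img", "font", "marquee", "frameset"]

def mangled_tag(rng):
    tag = rng.choice(TAGS)
    attr = rng.choice([
        "",
        ' colspan="%d"' % rng.randrange(2**31),        # absurdly large integer
        ' href="\x00\x00"',                            # embedded NUL characters
        ' width="' + "A" * rng.randrange(500) + '"',   # oversized attribute value
    ])
    # Randomly omit the closing tag, or close the wrong one entirely.
    closer = rng.choice(["", "</%s>" % tag, "</%s>" % rng.choice(TAGS)])
    return "<%s%s>x%s" % (tag, attr, closer)

def mangled_page(seed, n=50):
    # Seeded RNG, so a page that crashes a browser can be regenerated
    # from its seed and attached to a bug report.
    rng = random.Random(seed)
    return "<html><body>" + "".join(mangled_tag(rng) for _ in range(n))

page = mangled_page(1234)
print(page[:12])  # -> <html><body>
```

Serving output like this from a CGI script behind a meta-refresh loop, and logging each seed, would give exactly the reproducibility the parent asks for: when the browser dies, the last logged seed regenerates the killer page.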
  • by Anonymous Coward on Tuesday October 19, 2004 @07:32AM (#10563445)
    Nothing crashed. I got blank pages, all the weird HTML and all, but no errors and nothing crashed. w00t.
  • Konqueror and bugs (Score:3, Informative)

    by Anonymous Coward on Tuesday October 19, 2004 @07:35AM (#10563458)
    Konqueror shows a neat bug symbol in the lower-right corner when displaying buggy HTML code.
    I think this is a nice feature.
    I wish Konqueror had been tested. It's a good browser.
  • All Other Browsers? (Score:3, Interesting)

    by polyp2000 (444682) on Tuesday October 19, 2004 @07:37AM (#10563470) Homepage Journal
    While I must admit that this is a great technique that can be employed by the various alternative browser vendors, such as the Firefox team, to weed out problems, with its track record I find it rather dubious that the guy was unable to crash IE. I'm willing to bet there are a couple of people here on Slashdot who know a few tricks that will crash IE with nothing more than a couple of lines of code, which would inevitably point to a flaw in his system. If anything, this highlights IE's highly forgiving HTML parsing.
  • by tomstdenis (446163) <tomstdenis@@@gmail...com> on Tuesday October 19, 2004 @07:39AM (#10563484) Homepage
    Assuming this MSFT guy is not lying...

    Yes it's a slap in the face. But seriously this is what OSS is supposed to be about. Full public disclosure. If he did find scores of DoS related bugs then the OSS crowd [who like to show their names when the attention getting is good] ought to pay attention and fix the problems.

    You can't gloat how open and progressive you are if you scowl and fight every possible negative bit of news.

    And "mentioning how bad MSIE is" is not a way to make your product any better [just like "he's not bush" isn't a bonus for Kerry].

    So shape up, take it in stride and get to the board.

    Oh and while you're at it make Mozilla less bloatware. 30MB of tar.bz2 source could be your first problem....

    Tom
  • Tested Konqueror (Score:5, Informative)

    by unixmaster (573907) on Tuesday October 19, 2004 @07:41AM (#10563495) Homepage Journal
    None of the samples in http://lcamtuf.coredump.cx/mangleme/gallery/ [coredump.cx] was able to crash Konqueror from KDE CVS HEAD. Heheh, time to praise the KHTML developers again!
    • Re:Tested Konqueror (Score:5, Interesting)

      by Anonymous Coward on Tuesday October 19, 2004 @07:50AM (#10563550)
      http://lcamtuf.coredump.cx/mangleme/mangle.cgi

      You're right, none of the samples work with Konqueror, however after doing a little testing myself with the above page it just took me about five tries to make it crash.

      Bad luck? Maybe, but just try it yourself.
  • by hwestiii (11787) on Tuesday October 19, 2004 @07:41AM (#10563498) Homepage
    I saw something like this (not quite, but similar) a few years ago working with JavaScript.

    I wasn't that experienced with it, and as a result, certain pieces of my code were syntactically incorrect. Specifically, I was using the wrong characters for array indexing; I think I was using "()" instead of "[]". I would never have known there was even a problem if I hadn't been doing side by side testing with IE and Mozilla. A page that rendered correctly in IE would always show errors in Mozilla. This made absolutely no sense to me.

    It wasn't until I viewed the source generated by each browser that I discovered the problem. IE was dynamically rewriting my JavaScript, replacing the incorrect delimiters with the correct ones, whereas Mozilla was simply taking my buggy code at face value.
    • by Zarf (5735) on Tuesday October 19, 2004 @07:50AM (#10563549) Journal
      I think I was using "()" instead of "[]".

      MSIE was embracing and extending your new syntax. They were effectively defining their own JavaScript variant, meaning their JavaScript was a superset of the real JavaScript standard. That means you can more easily fall into the trap of writing MSIE-only JavaScript and inadvertently force your clients/customers/company to adopt MSIE as your standard browser.
  • by Zarf (5735) on Tuesday October 19, 2004 @07:46AM (#10563527) Journal
    The same person tells us [asp.net] that Apache [secunia.com] sucks when compared [asp.net] with IIS [secunia.com]. Does this mean we've all been wrong about Microsoft products? If we take Microsoft's word for it, we have indeed, and should seriously consider switching back to IIS. After all, [THE FOLLOWING IS SARCASM:] this conclusively proves that IIS is far superior to the Linux Apache MySQL Perl/Python/PHP system.
    • by Alomex (148003) on Tuesday October 19, 2004 @09:45AM (#10564396) Homepage
      Does this mean we've all been wrong about Microsoft products?

      Actually yes. People here always talk about Microsoft products being buggier than the average, without any evidence to back it up beyond their own prejudices.

      They used to laugh at the "much inferior" IE code, until the Mozilla project got started and it turned out Netscape had the inferior code base.

      OSSers used to laugh at the "bloat" of the Windows source code... until Linux got a decent user interface, that is, and guess what? Its source code size is comparable to Windows'.

      There are many reasons to loathe the evil empire (monopolistic bully for one), but buggy code is not one of them. That is just something OSSers tell each other to feel better about what they do.

  • by ragnar (3268) on Tuesday October 19, 2004 @07:50AM (#10563554) Homepage
    I may be a little paranoid (heck, I actually am), but I've long suspected the IE support for loose HTML was a strategic decision. Go back to the days when Netscape would render a page with an unclosed table tag as blank. IE rendered the page, and I often encountered sites that didn't work on Netscape.

    It could be a coincidence, but the loose HTML support of IE led to a situation where some webmasters concluded that Netscape had poor HTML support. You can argue about standards all day long, but if one browser renders and another crashes or comes up blank, there isn't much of a contest.
  • by SmilingBoy (686281) on Tuesday October 19, 2004 @07:57AM (#10563611)
    The author gave some examples that are supposed to crash Mozilla, Opera, Links and Lynx at the following URL:

    http://lcamtuf.coredump.cx/mangleme/gallery/ [coredump.cx]

    I opened all the pages in tabs in Firefox 0.10.1 under Windows 2000, and Firefox did not crash. It became somewhat unresponsive, but I could still select other tabs, minimise and maximise. I could not load new pages anymore.

    Can someone else test this as well, please?

    And can someone tell us whether this has security implications or not?

  • Who's Who (Score:5, Informative)

    by Effugas (2378) * on Tuesday October 19, 2004 @08:00AM (#10563626) Homepage
    Ugh. Not the best written Slashdot entry.

    Larry Osterman -- former Microsoft guy; someone forwarded him a post to Bugtraq.

    Michal Zalewski -- absurdly brilliant [coredump.cx] security engineer out of Poland. Did the pioneering work on visualizing [wox.org] the randomness [coredump.cx] of network stacks, passively identifying operating systems [coredump.cx] on networks, and way, way more.

    Nothing bad against Larry. But this is all Zalewski :-)

    --Dan
  • by Diplo (713399) on Tuesday October 19, 2004 @08:01AM (#10563636) Homepage

    Never mind using random garbage to crash a browser; you can make IE6 crash with perfectly valid strict HTML.

    Try this page [nildram.co.uk] in IE6 and then hover your pointer over the link. Crash!!!

  • by grinder (825) on Tuesday October 19, 2004 @08:02AM (#10563645) Homepage

    Case in point.

    Last week I wrote some Perl to process an mbox mail folder. I just wanted a quick and dirty way to view its contents in a web page. A couple of CPAN modules and a few dozen lines of code, and the thing was done. Then I started to get fancy, dealing with stuff like embedded MIME-encoded GIF images. This was pretty simple to do, but I made a mistake. Once I had the decoded GIF data lying around, I wrote it to the HTML file of the current e-mail message, rather than writing it to a separate file and writing <img src="foo.gif"> in the HTML file.

    I was viewing the results with Firefox 0.10.1. When it got to a message with an embedded GIF, with a big splodge of GIF binary data sitting in the middle of the page, Firefox either just sat there spinning its hourglass, or crashed and burned.

    Then I looked at the same file with IE, and the GIF image showed up. I was puzzled for a while until I noticed that in the directory where I had created the file, no GIF files had been created. It is of course arguable that IE should not have attempted to render the GIF image from the binary data sitting in the middle of the page, but it did so without complaint. Not rendering it would also be acceptable.

    Firefox, on the other hand, has a number of better alternatives to crashing or hanging. Should it display gibberish (like when you forget to set up your bz2 association correctly) or nothing, or the image? I don't know, and don't particularly care about which course of action is taken. Anything is better than crashing, especially when IE doesn't.

    Anyway, I fixed the Perl code, and all is well.

    The End
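The fix the poster describes, writing the decoded image to its own file and referencing it from the page, can be sketched like so. His code was Perl; this is a Python rendition with made-up names (the file name foo.gif comes from the post itself):

```python
import base64
import pathlib
import tempfile

def emit_attachment(html_parts, gif_bytes, out_dir, name="foo.gif"):
    # Write the decoded image bytes to their own file on disk...
    out_dir = pathlib.Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / name).write_bytes(gif_bytes)
    # ...and reference that file from the HTML, instead of dumping
    # raw binary into the middle of the page.
    html_parts.append('<img src="%s">' % name)

parts = []
demo_dir = tempfile.mkdtemp()
# A truncated GIF89a header stands in for real decoded MIME data here.
emit_attachment(parts, base64.b64decode("R0lGODlh"), demo_dir)
print(parts[0])  # -> <img src="foo.gif">
```

The design point is the one the thread circles around: the page then contains only well-formed markup, so no browser has to guess what a blob of binary in the document body means.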

  • by fwitness (195565) on Tuesday October 19, 2004 @08:05AM (#10563662)
    While it's great that IE can handle 'bad' web code, it really is a separate issue from security. Now, when the other browsers actually *crash*, this is a concern. Yes, crashes *can* be used to develop an exploit, but that doesn't mean they *do*.

    To beat the dead horse of the car analogy: if my car doesn't start, it may be the entire electrical system, or maybe my battery is just dead. The moral is, don't try to make a mountain out of a molehill.

    Meanwhile, I absolutely despise the fact that IE does handle a lot of 'bad' code. This is a side effect of the IE monopoly on the browsing world. We're not talking about it handling variables that aren't declared before they are used or somesuch. We're talking about code which *should* be causing errors. Since these don't cause errors most of the time (or the errors are hidden from the user) and most web authors only test with IE, there is a massive amount of bad code on the net which is never fixed.

    Now I'm glad that the author has found these crashing bugs in the other browsers. This obviously needs fixing, and I'm glad IE is at least stable when it encounters malformed code, but more error reporting needs to be done to the user on all browsers.

    Summary: Good review, brings up great points, kudos to MS for stability. Now everyone go back to work on your browsers and add blatant *THIS WEBSITE AUTHOR DOES NOT WRITE PROPER CODE* dialogs to all your error messages. It's the web author's fault; it's time we told them so.
  • by cascadingstylesheet (140919) on Tuesday October 19, 2004 @08:25AM (#10563767)

    ... and here's why.

    With correct data (in this case, HTML), there is a specified action that is "correct". In other words, a correctly marked-up table will get laid out according to the W3C rules for laying out tables. A paragraph will get formatted as a paragraph, etc.

    With malformed markup, the "correct" thing to do is indeterminate. If every browser just takes its best guess, they will all diverge, and the behavior is wildly unpredictable. Even from version to version of the same browser, the "best guess" will change.

    "So? You've just described the web!" Well, exactly, but it could have been avoided. Bad markup shouldn't render. It ain't rocket science to write (or generate, though that can be a harder problem) correct markup. If you had to do it to get your pages viewed, you would. Ultimately, it wouldn't cost any more, and would actually cost less (measure twice, cut once).

    Of course, what I just wrote only really applies in a heterogeneous environment ... which MS doesn't want ... fault tolerance in your own little fiefdom can make sense.

    • HTML is out there, and millions of malformed pages exist. Most of this is a result of mistakes by authors, but some of it is a result of the moving target that HTML has presented in the past.
      While your argument is attractive in principle, in practice it's misguided. The horse has bolted. In 2004, no one would use a browser that didn't work with a huge proportion of the web's content. This is an area where pragmatism is required.
      And to respond to the ubiquitous MS-bash, let's step back and remind ourselves that this /. story is also about how various browsers, including the saintly Firefox, can be made to *crash* given certain input. Just thought that should get a mention :)
      (And BTW, I speak as a Firefox user)
  • maybe its a fluke.. (Score:3, Interesting)

    by Anonymous Coward on Tuesday October 19, 2004 @08:31AM (#10563809)
    I tried this script on Mozilla Firefox at least 40 times now, and it hasn't crashed yet...

    You'll also notice none of this random code tests ActiveX security, or many of the MS extensions which "enhance" security, either... So I think the tests should be taken with a grain of salt. Also, while he did report NULL dereferences, they're potentially all due to the same one or two flaws, and may not be exploitable at all...

    Take this with a grain of salt I'd say, because when you check the tags being tested, there aren't a great amount..
  • by koi88 (640490) on Tuesday October 19, 2004 @08:46AM (#10563920)
    From many posts here I get the idea that most people didn't get the crashes the author had...
    So can those people who have tested his code please list:
    • used browser and version number
    • OS (exact)
    • result

    PS: I'm here at work on Mac OS 9 and all browsers are pretty old, so I don't write anything...
  • by Halo- (175936) on Tuesday October 19, 2004 @08:56AM (#10563991)
    Wow, what a great test tool! I do software dev for a living, and the hardest part is when a user says: "umm, I did something, and it crashed... I dunno what..." and then you can't reproduce the problem. The problem exists, but due to the complexity of software, its environment, and the subtleties between the way individuals use it, it's hard to reduce the problem down to a few variables...

    A tool like this would let the average wannabe contributor find reproducible bugs and try to fix them. Which brings me to my dumb question: is the Mozilla Gecko engine more easily built/tested than the whole of Firefox? I love FF, and wouldn't mind throwing some cycles at improving it, but the entire build process is a bit more than I really want to take on... If I could just build and unit-test the failing component, I'd be more likely to try.

    Anyone have pointers beyond the hacking section at MozillaZine?

  • by TheLink (130905) on Tuesday October 19, 2004 @10:17AM (#10564679) Journal
    Netscape used to crash very often. Looks like the Mozilla people didn't learn much from it.

    Mozilla is just as sucky security-wise as the old non-Mozilla Netscape (3.x, 4.x). Whether it is OSS or not doesn't make it secure/insecure; it's the programmers that count. Look at Sendmail and BIND (and much other ISC software): security problems year after year for many years. Look at PHP-Nuke: security problems month after month for years. Look at OpenSSL, OpenSSH and Apache 2.x: not very good track records. Compare with Postfix, qmail and djbdns.

    Most programmers should stick to writing their programs in languages where the equivalent of "spelling and grammar" errors don't cause execution of arbitrary attacker-code. Sure after a while some writers learn how to spell and their grammar improves but it sometimes takes years. For security you need _perfection_ in critical areas, and you need to be able to identify and isolate the critical areas _perfectly_ in your architecture.

    To the ignorant people who don't get it: crashing is bad. A crash occurs when the (browser) process writes/reads data in areas it shouldn't be touching, or tries to execute code where it shouldn't be executing. This often occurs when the process somehow mistakenly executes _data_ supplied by the attacker/bug finder, or returns to addresses supplied by the attacker...

    This sort of thing is what allows people to take over your browser, and screw up your data (and possibly take over your computer if you run the browser using an account with too many privileges).

    So while the Firefox people get their code up to scratch, maybe people should reconsider IE. IE isn't so dangerous when configured correctly. Unfortunately, it's not that simple to do that.

    To make even unpatched IE browsers invulnerable to 95% of the IE problems just turn off Active Scripting and ActiveX for all zones except very trusted zones which will never have malicious data. Since I don't trust Microsoft's trusted zone (XP has *.microsoft.com as trusted even though it doesn't show up in the menus), I create a custom zone and make that MY trusted zone.

    By all zones I mean you must turn that stuff off for the My Computer zone as well - but that screws up Windows Explorer in the default view mode (which is unsafe anyway).

    For more info read this: <a href="http://support.microsoft.com/default.aspx?kbid=182569">Description of Internet Explorer security zones registry entries</a>

    To make the My Computer zone visible change:
    (for computer wide policy)
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\0\Flags

    To: 0x00000001

    (for just a particular user)
    HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\0\Flags

    To: 0x00000001

    If you don't want to edit the registry and make the My Computer zone visible, you can still control the My Computer Zone settings from the group policy editor (gpedit.msc) or the active directory policy editor.

    You just have to know some Microsoft stuff. But hey, securing an OSS O/S and _keeping_ it secure (especially when you need to run lots of 3rd-party software) also requires some in-depth knowledge.
  • by divad27182 (823483) on Tuesday October 19, 2004 @12:24PM (#10566321)
    I have to ask:

    When saying that Microsoft Internet Explorer didn't crash, does he mean that the window never went away, or that the program iexplore.exe stayed running? I can't prove it, but I suspect that the "IE" window would survive a crash of the rendering engine, because the window is actually provided by explorer.exe, which is the desktop manager.

    I also suspect that several of the open source browsers could defend themselves against this kind of crash within a day or two, simply by using a two-process model. Personally, I would rather they did not! (I want to see it fail; otherwise I would not know something was wrong.)
