Google Talks About the Dangers of User Content

An anonymous reader writes "Here's an interesting article on the Google security blog about the dangers faced by modern web applications when hosting any user-supplied data. The surprising conclusion is that it's apparently almost impossible to host images or text files safely unless you use a completely separate domain. Is it really that bad?"
  • by Tastecicles ( 1153671 ) on Thursday August 30, 2012 @03:17AM (#41175865)

    ...is it a server problem, with the way it interprets record data, or a browser problem (any browser), maybe treating data as instructions rather than markup? I'm guessing server in this case, since if the stream is intercepted and there's a referrer URL that directly references an image or other blob on the same server or another server on a subdomain, that could be used to pwn the account or whatever... I'm not up on that sort of hack (you can probably tell). I don't quite get how hosting blobs on an entirely different domain would mitigate that hack, since you would require some sort of URI that the other domain would recognise to be able to serve up the correct file, which would be in the URL request! Someone want to try and make sense of what I'm trying to say here?

    • by Sarusa ( 104047 ) on Thursday August 30, 2012 @03:49AM (#41176017)

      It's fundamentally a problem with the browsers. Without getting too technical...

      Problem 1: Browsers try real hard to be clever and interpret maltagged/malformed content, so that people with defective markup or bad MIME content headers won't say 'My page doesn't work in Browser X, Browser X is defective!'. Or, if the site is just serving up user text inside HTML, an attacker can stick some JavaScript tags in the text. Either way, you end up in a situation where someone malicious can upload some 'text' to a clipboard or document site, and the browser then executes it when the malicious person shares the URL.

      Problem 2: There are a lot of checks in most browsers against 'cross-site scripting', which is a page on site foobar.com (for instance) making data-load requests to derp.com, or looking at derp.com's cookies, or even leaving a foobar.com cookie when derp.com is the main page. But if your script is running 'from' derp.com (as above), then permissions for derp.com are almost wide open, because it would just be too annoying for most users to manage permissions within the same site. Now they can grab all your docs, submit requests to email info, whatever is allowed. This is why just changing to another domain name helps.

      There's more nitpicky stuff in the second half of TFA, but I think that's the gist of it.
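
      A rough sketch, in Python with Flask, of the kind of setup being discussed (user uploads served only from an isolated domain, with headers that discourage the browser from sniffing or rendering them inline). The domain, directory, and route names here are placeholders invented for illustration, not anything Google describes.

      # Sketch only: this app is assumed to be deployed solely on an isolated
      # "sandbox" domain (e.g. usercontent-example.com), never on the main
      # site's origin, so a smuggled script gains no useful privileges.
      from flask import Flask, send_from_directory

      UPLOAD_DIR = "/srv/usercontent"  # hypothetical storage location

      app = Flask(__name__)

      @app.route("/raw/<path:name>")
      def serve_upload(name):
          # send_from_directory rejects paths that try to escape UPLOAD_DIR.
          resp = send_from_directory(UPLOAD_DIR, name)
          # Ask the browser not to second-guess the declared Content-Type...
          resp.headers["X-Content-Type-Options"] = "nosniff"
          # ...and to offer the file as a download instead of rendering it.
          resp.headers["Content-Disposition"] = "attachment"
          return resp

      if __name__ == "__main__":
          app.run()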

      • by TubeSteak ( 669689 ) on Thursday August 30, 2012 @04:00AM (#41176049) Journal

        It's fundamentally a problem with not validating inputs. Without getting too technical...

        Problem 1: Browsers try real hard to be clever and interpret maltagged/malformed content instead of validating inputs.

        Problem 2: There are a lot of checks in most browsers against 'cross site scripting', which is fundamentally a problem of not validating inputs.

        /don't forget to validate your outputs either.

        • by Sarusa ( 104047 ) on Thursday August 30, 2012 @04:53AM (#41176231)

          This is true! You could even say it's a sooper-dooper-fundamental problem of HTTP/HTML not sufficiently separating the control channel from the data channel and/or not sufficiently encapsulating things (active code anywhere? noooo.)

          But since browsers have actively chosen to accept invalid inputs, and nobody's going to bother securing HTTP/HTML against this kind of thing any time soon, or fix the problems with cookies, or, etc., etc., etc., I figured that was a good enough high-level summary of where we're at realistically. Nobody's willing to fix the foundations or to 'break' when looking at malformed pages.

          • Throw up a warning screen whenever there's malformed input. Kinda like the warning screen with self-signed certs, without the stupid part of having to add the site to a permanent exception list.

            And if people want the convenience of whitelisting or just turning the message off entirely, put those in the options, just like the way browsers handle cookies.

            This warning page will show up a lot at first. But it would also ultimately shame people into fixing their outputs.

        • I'm actually not a big fan of validating inputs. I find proper escaping is a much more effective tool, and validation typically leads to both arbitrary restrictions of what your fields can hold and a false sense of security. It's why you can't put a + sign in e-mail fields, or have an apostrophe in your description field.

          In short, if a data type can hold something, it should be able to read every possible value of that data type, and output every possible value of that data type. That means that if you have a Unicode string field, you should accept all valid Unicode characters, and be able to output the same. If you want to restrict it, don't use a string. Create a new data type. This makes escaping easy as well. You don't have a method that can output strings, at all. You have a method that can output HTMLString, and it escapes everything it outputs. If you want to output raw HTML, you have RawHTMLString. Makes it much harder to make a mistake when you're doing Response.Write(new RawHTMLString(userField)).

          A multi-pronged approach is best, and input validation certainly has its place (ensuring that the user-supplied data conforms to the data type's domain, not trying to protect your output), but the first and primary line of defense should be making it harder to do it wrong than it is to do it right.
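
          A minimal Python sketch of that wrapper-type pattern; all class and function names here are invented for illustration (the Response.Write example above is ASP.NET-flavoured, but the idea is language-agnostic):

          import html

          class HtmlString:
              """User text, escaped exactly once at construction time."""
              def __init__(self, text):
                  self._value = html.escape(text, quote=True)
              def __str__(self):
                  return self._value

          class RawHtmlString:
              """Markup the caller explicitly vouches for; nothing is escaped."""
              def __init__(self, markup):
                  self._value = markup
              def __str__(self):
                  return self._value

          def write(out, fragment):
              # The output routine refuses plain str: you must say what you mean.
              if not isinstance(fragment, (HtmlString, RawHtmlString)):
                  raise TypeError("write() accepts only HtmlString or RawHtmlString")
              out.write(str(fragment))

          # Usage: forgetting to escape is now a type error, not a silent bug.
          import sys
          write(sys.stdout, HtmlString('<script>alert("xss")</script>'))
          write(sys.stdout, RawHtmlString("<em>trusted template markup</em>"))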

          • by dzfoo ( 772245 ) on Thursday August 30, 2012 @05:30AM (#41176361)

            I'm actually not a big fan of validating inputs. I find proper escaping is a much more effective tool, and validation typically leads to both arbitrary restrictions of what your fields can hold and a false sense of security.

            OK, fair point. How about if we expand the concept of "validating input" to include canonicalization and sanitization as well? Oh, it already does. Go figure.

            Reducing it to a mere reg-exp is missing the point. Proper canonicalization (and proper understanding of the underlying standards and protocols, but that's another argument) would allow you to use a plus-sign in an e-mail address field.

            But this won't happen as long as every kid fresh out of college wants to roll their own because they know The One True Way to fix it, this time For Real. As long as they keep ignoring everything learned before (because, you know, it's old stuff and this is the new technology of The Web, where everything old doesn't count at all), nothing will change.

            A multi-pronged approach is best, and input validation certainly has its place (ensuring that the user-supplied data conforms to the data type's domain, not trying to protect your output), but the first and primary line of defense should be making it harder to do it wrong than it is to do it right.

            "MOAR TECH!!!1" and over-wrought protocols are no silver-bullet against ignorance, naivety, and hubris.

                        -dZ.

            • Your solution appears to be, "Do exactly what we've been doing, just more." My rebuttal to that is the entire history of computer security. While it's true that proper understanding of underlying standards and protocols would go a long way toward mitigating the problems, a more complete solution is to make such detail-oriented understanding unnecessary. Compartmentalization of knowledge is, in my opinion anyway, the primary benefit of computers, and the rejection of providing that benefit to other programme

              • by dzfoo ( 772245 )

                You misunderstood my point, and then went on to suggest that the "old way" won't work, inadvertently falling into the trap I was pointing out.

                My "solution" (which really wasn't a solution per se) is not "more of the same." It is the realization that previous knowledge and practices may not be obsolete, and that we shouldn't try to find new ways to do things for the mere sake of being new.

                A lot, though not all, of the security problems encountered in modern applications have been known and addressed in t

                • It's easier for a lot of coders to just bypass the step of input parsing and validation. That ease, which IMHO amounts to sloppy coding, is a major cause of things like injection problems, downstream coding errors (far beyond simple type mismatches), and eventual corruption.

                  For every programmer shrugging it off, there's another wondering if someone did the work, and probing everything from packets to simple scriptycrap to break it open for giggles, grins, and profit. They write long tomes of ga

              • "Your solution appears to be, "Do exactly what we've been doing, just more."

                No. His solution is that people need to start doing it. Your solution is to ignore solid secure programming practices [ibm.com]. In other words, your solution is to keep failing to practice secure programming.

                "Some new approaches work better, some work worse, but we already know exactly what the old approach accomplishes."

                Right. And we have also seen what doesn't work. Another way to say it is: "What we've got here is failure to commun

              • by cdrguru ( 88047 )

                I'm not sure you have a firm grasp of the problem.

                The problem, from my reading, can be explained as analogous to having a numeric data item that sometimes gets letters put in it. Rather than rejecting this as invalid, browsers are making stuff up as they go along, so A=1, B=2, and so on and so forth. This has the obvious benefit to users of not exposing them to the improper construction of web pages, but it does create sort of a sub-standard whereby other authors recognize this work-around and decide to mak

          • You don't have a method that can output strings, at all. You have a method that can output HTMLString, and it escapes everything it outputs. If you want to output raw HTML, you have RawHTMLString. Makes it much harder to make a mistake when you're doing Response.Write(new RawHTMLString(userField)).

            Interesting technique. But how much runtime overhead do all those constructors impose for Java, C#/VB.NET, PHP, and Python?

            • But how much runtime overhead do all those constructors impose for Java, C#/VB.NET, PHP, and Python?

              Either nothing, or nothing significant, or something-but-it-fixed-a-bug-which-was-definitely-there.

            • Seriously?

              Compared to the overhead of reading from the database, building the rest of the page's HTML, and then sending over the network, practically nothing. This is not hyperbole.

              Even if it weren't nothing, it would have to be very significant, and performance would have to be a primary factor in the software's spec, before I'd consider scrapping an extremely easy-to-use security practice in exchange for a faster runtime.

              • by tepples ( 727027 )
                Interesting. Do you know of a PHP framework that uses this idea? (I have to use PHP because a lot of hosting plans lack ASP.) If I knew the canonical name of this technique, I'd search for it myself, but when I tried Googling "new RawHTMLString", the only thing it found was your comment.
                • I don't. I'm not sure it's even common enough to be considered a pattern, let alone have good libraries; those names are just things I came up with in the moment.

                  What it essentially boils down to, though, is that you create classes, and conversions between those classes, that always maintain the correct escaping. If you find yourself writing the same escaping method more than once, refactor. There should be One Version of the Truth; that is a pattern. You then write output routines that refuse to render unrecognized

          • The problem is you currently can't escape everything reliably.

            Why? Because the mainstream browser security concept is making sure that all the thousands of "Go" buttons are not pressed, aka "escaped". But people are always introducing new "Go" buttons. If your library is not aware of the latest stuff, it will not escape the latest crazy "Go" button the www/html/browser bunch have come up with.

            So in theory a perfectly safe site could suddenly become unsafe, just because someone made a new "Go" button for the l

            • by dgatwood ( 11270 )

              This is why the correct solution is always whitelisting, not blacklisting. Whitelist the allowed tags, attributes, CSS subsets, etc. that you consider safe. This way, anything added to the specification is likely to get stripped out by your filtering code.

              For example, I'm working on a website in which users provide content in a subset of HTML/XML. The only tags I'm allowing are p, span, div, select, and a couple of custom tags. The only attributes I'm allowing are the chosen value for the select elemen
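
              A toy version of that whitelist approach in Python, using only the standard library; the tag and attribute lists below are illustrative, not the actual ones from the project described above:

              # Toy whitelist sanitizer: anything not explicitly allowed is dropped,
              # so tags or attributes added to future specs are stripped by default.
              import html
              from html.parser import HTMLParser

              ALLOWED_TAGS = {"p", "span", "div", "select", "option"}   # example list
              ALLOWED_ATTRS = {"value"}                                 # example list

              class WhitelistSanitizer(HTMLParser):
                  def __init__(self):
                      super().__init__(convert_charrefs=True)
                      self.out = []

                  def handle_starttag(self, tag, attrs):
                      if tag not in ALLOWED_TAGS:
                          return  # unknown tag: drop it entirely
                      kept = [f'{name}="{html.escape(value or "", quote=True)}"'
                              for name, value in attrs if name in ALLOWED_ATTRS]
                      self.out.append("<" + " ".join([tag] + kept) + ">")

                  def handle_endtag(self, tag):
                      if tag in ALLOWED_TAGS:
                          self.out.append(f"</{tag}>")

                  def handle_data(self, data):
                      self.out.append(html.escape(data))

              def sanitize(markup):
                  s = WhitelistSanitizer()
                  s.feed(markup)
                  s.close()
                  return "".join(s.out)

              print(sanitize('<p onclick="evil()">hi <script>alert(1)</script></p>'))
              # -> <p>hi alert(1)</p>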

            • by fatphil ( 181876 )
              > The problem is you currently can't escape everything reliably.

              You can - by escaping everything.

              > If your library is not aware of the latest stuff it will not escape the latest crazy "Go" button

              It will. It escapes everything. What bit of "everything" did you not understand?

              Sure, it won't let people have crazy "go" buttons, whatever they are, but nothing of value was lost.
          • by fatphil ( 181876 )
            I'm with you, but I think we're in the minority. However, I cling to my view because I have a strange obsession with a kind of purity that probably comes from my pure mathematics background.

            If an envelope weighs less than 60g, then the postal service should deliver it. It should presume that it contains bombs, nerve poison, corrosives, etc., but it should deliver it intact, and then it's the recipient's problem. It should let the recipient know that it's not to be trusted, of course.

            If a text entry fi
          • by jgrahn ( 181062 )

            I'm actually not a big fan of validating inputs. I find proper escaping is a much more effective tool, and validation typically leads to both arbitrary restrictions of what your fields can hold and a false sense of security. It's why you can't put a + sign in e-mail fields, or [...]

            That's not validation! That is trying (and failing, because you are too ignorant to read an RFC) to guess what some other software wants, even if it's none of your business. A well-formed mail address is, for most purposes, one which /lib/sendmail will not complain about.

            That of course doesn't mean you shouldn't validate data meant to be interpreted by *you*. It's simple: if you need to interpret it, you need to validate it. Hell, you *are* validating it by interpreting it, even if you do a lousy job.

        • by ais523 ( 1172701 ) <ais523(524\)(525)x)@bham.ac.uk> on Thursday August 30, 2012 @06:05AM (#41176457)
          After seeing a demonstration of a successful XSS attack on a plaintext file (IE7 was the offending browser, incidentally), I find it hard to see what sort of validation could possibly help. After all, the offending code was a perfectly valid ASCII plain text file that didn't even look particularly like HTML, but happened to contain a few HTML tags. (Incidentally, for this reason, Wikipedia refuses to serve user-entered content as text/plain; it uses text/css instead, because it happens to render the same on all major browsers and doesn't have bizarre security issues with IE.)
        • by fatphil ( 181876 )
          > Problem 1: Browsers try real hard to be clever and interpret maltagged/malformed content instead of validating inputs.

          But XHTML saved us from that over a decade ago!

          Channelling Erik Naggum: Clearly we're not using enough XML!
  • Convert the file to the site's supported format and quality level in a sandbox.
    Tadaaaa,,,
    • by Anonymous Coward on Thursday August 30, 2012 @03:57AM (#41176037)

      As TFA points out, it is possible to create a Flash applet using nothing but alphanumeric characters. Good luck catching that in your reprocessing.

    • by Jonner ( 189691 )

      Convert the file to the site's supported format and quality level in a sandbox.

      Tadaaaa,,,

      If you'd read TFA, you'd know it covers that and explains why it's insufficient.

    • Convert the file to the site's supported format and quality level in a sandbox.

      You're applying a known transform to the image. By reversing the transform, the attacker can craft an image such that the original upload is innocent, while the reprocessed image is malicious. I've seen it done where the upload is clean, but the generated thumbnail is goatse; it shouldn't be too hard to create a clean upload that the converter turns into something IE will interpret as Javascript.

  • For all its transparency, I've yet to see a working list of security breach attempts made on Google servers. I bet there are many, and it would be useful to know just the source and method if nothing more.
  • by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Thursday August 30, 2012 @03:59AM (#41176043)

    This is what happens when you try to be lenient with markup instead of strict (note: compliant does not preclude extensible), and then proceed to use a horribly inefficient and inconsistent (by design) scripting language and a dysfunctional family of almost sane document display engines combined with a stateless protocol to produce a stateful application development platform by way of increasingly ridiculous hacks.

    When I first heard of "HTML5" I thought: Thank Fuck Almighty! They're finally going to start over and do shit right, but no, they're not. HTML5 is just taking the exact same cluster of fucks to even more dizzying degrees. HOW MANY YEARS have we been waiting for v5? I've HONESTLY lost count and any capacity to give a damn when we reached a decade -- Just looked it up, 12 years. For about one third the age of the Internet we've been stuck on v4.01... ugh. I don't, even -- no, bad. Wrong Universe! Get me out!

    In 20XX when HTML6 may be available I may reconsider "web development". As it stands, web development is chin-deep in its own filth, which it sprays with each mention onto passersby, and they receive the horrid spittle joyously not because it's good or even not-putrid, but because we've actually had worse! I can crank out a cross-platform pixel-perfect native application for Android, iOS, Linux, OSX, XP, Vista, Win7, and mother fucking BSD in one third the time it takes to make a web app work on the various flavours of IE, Firefox, Safari, Chrom(e|ium). The time goes from 1/3rd down to 1/6th when I cut out testing for BSD, Vista, W7 (runs on XP, likely runs on Vista & Win7. Runs on X11 + OpenGL + Linux, likely builds/runs on BSD & Mac).

    Long live the Internet and actual cross platform development toolchains, but fuck the web.

    • by sgrover ( 1167171 ) on Thursday August 30, 2012 @04:14AM (#41176099) Homepage

      +1, but tell us how you really feel

    • by SuricouRaven ( 1897204 ) on Thursday August 30, 2012 @04:29AM (#41176143)
      Of course it's a mess. The combination of HTTP and HTML was designed for simple, static documents displaying predominantly text, a little formatting and a few images. By this point we're using extensions to extensions to extensions. It's a miracle it works at all.
    • by svick ( 1158077 )

      HOW MANY YEARS have we been waiting for v5? I've HONESTLY lost count and any capacity to give a damn when we reached a decade -- Just looked it up, 12 years.

      But HTML 5 is already here! It's just that it's not like the standards of old, it's a living standard. And if you don't like that, you're not agile enough.

      • by arose ( 644256 )
        Remind me, is it possible to serve XHTML 1.0 across the board yet? I think it just about is, and we're at the point of "why the fuck bother anymore". If you can do better at getting shit implemented, go right ahead, but so far HTML5 has made more tangible progress than just about any other single initiative of the W3C.
        • It will be in April 2014, when Windows XP (the operating system whose latest bundled browser version is IE 8) leaves extended support.
        • by Jonner ( 189691 )

          Remind me, is it possible to serve XHTML 1.0 across the board yet? I think it just about is, and we're at the point of "why the fuck bother anymore". If you can do better at getting shit implemented, go right ahead, but so far HTML5 has made more tangible progress than just about any other single initiative of the W3C.

          I think IE 9 finally handles XHTML properly. Of course it's far too late, since XHTML is completely dead.

      • by Jonner ( 189691 )

        HOW MANY YEARS have we been waiting for v5? I've HONESTLY lost count and any capacity to give a damn when we reached a decade -- Just looked it up, 12 years.

        But HTML 5 is already here! It's just that it's not like the standards of old, it's a living standard. And if you don't like that, you're not agile enough.

        I'm not sure if they're on the right track in general, but at least the WHATWG is honestly recognizing that web developers have never waited for an official standard to use new browser features. It's a chicken and egg problem: if nobody used a new feature until it were described in an official standard, browsers wouldn't have much motivation to implement and test the feature.

    • by Skapare ( 16644 )

      It's posts like this that make me wish Slashdot could do moderations above level 5.

    • I think the same thing. I currently work doing "web systems". And do they work? They do; I managed to make a web application that can use a card printer. But at what price? I spent twice the time I would have spent if I had done a compiled desktop application, and lost count of the many horrible hacks I had to do to get similar desktop functionality using HTML
    • When I first heard of "HTML5" I thought: Thank Fuck Almighty! They're finally going to start over and do shit right, but no, they're not. HTML5 is just taking the exact same cluster of fucks to even more dizzying degrees.

      XHTML was a pretty good step in the right direction. Enforced well-formedness is a good thing (although IMHO browsers should've had a built-in "please try to fix this page" function that the user could manually run over a broken page), genericising tags is sensible (if you're going to embed a rectangular object then it makes sense to have a single <object> tag to do it for all content, for example - no need to produce a whole new revision of the language just because someone has invented a new type

    • by Jonner ( 189691 )

      Do whatever kind of development floats your boat and pays the bills. As much as some aspects of web development suck, it is getting gradually better and it can't be ignored. The answer to web development problems certainly isn't to return to platform-specific binaries.

  • by Hentes ( 2461350 ) on Thursday August 30, 2012 @05:02AM (#41176273)

    The easiest way to secure embedded content would be a sandbox tag that allows you to limit what kind of content can appear inside it.
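
    For what it's worth, HTML5 did end up specifying something close to this: the sandbox attribute on iframe. A rough example follows (the URLs are placeholders). A bare sandbox attribute gives the framed content a unique origin and disables scripts, plugins and form submission; individual capabilities can be added back with tokens like allow-forms.

    <!-- Framed user content gets an opaque origin: no scripts, no plugins,
         and no access to the embedding site's cookies or DOM. -->
    <iframe sandbox src="https://usercontent.example.com/uploads/12345.html"></iframe>

    <!-- Selectively relax restrictions, e.g. allow form submission but still no scripts: -->
    <iframe sandbox="allow-forms" src="https://usercontent.example.com/uploads/12345.html"></iframe>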

    • Stop extending HTML! HTML does not need more tags. HTML was not designed to be a presentation language for applications and certainly not to be an environment for running applications; it was designed to be a hypertext document language (yes, "hypertext" is a word with meaning beyond HTML). The worst thing we did was to allow HTML documents with embedded programs -- applets, Javascript, etc.

      The real answer is a new standard that is designed for application presentation and delivery, that does not have so much in-band signaling. We need to get it right the first time by building security into the system, not extend an already bloated monstrosity to make up for the inevitable security problems that result from turning a language for describing documents into a platform for running distributed software with malicious users.
      • The real answer is a new standard that is designed for application presentation and delivery

        That's been tried, in the form of Flex and Silverlight. Good luck getting Apple to adopt your proposed new standard.

        • by Richy_T ( 111409 )

          Flex? Silverlight was just another Microsoft attempt to abuse the market and that's a play everyone has gotten wise to by now.

          • by tepples ( 727027 )

            Flex?

            Flex was Adobe's attempt to reposition Flash Player as a rich Internet application platform.

      • by Jonner ( 189691 )

        The real answer is a new standard that is designed for application presentation and delivery, that does not have so much in-band signaling. We need to get it right the first time by building security into the system, not extend an already bloated monstrosity to make up for the inevitable security problems that result from turning a language for describing documents into a platform for running distributed software with malicious users.

        Let us know how that works out.

    • by TheLink ( 130905 )

      I suggested something like that 10 years ago: http://lists.w3.org/Archives/Public/www-html/2002May/0021.html [w3.org]
      http://www.mail-archive.com/mozilla-security@mozilla.org/msg01448.html [mail-archive.com]
      But hardly anyone was interested. If implemented, it could have prevented the Hotmail, MySpace, Yahoo and many other XSS worms.

      There's Content Security Policy now:
      https://developer.mozilla.org/en-US/docs/Security/CSP/Introducing_Content_Security_Policy [mozilla.org]

      As far as I can see, security is not a priority for the browser and W3C bunch.
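
      As a rough illustration of what shipping such a policy looks like with CSP, here is a hedged Flask sketch; the directives and the content domain are examples only, not a complete or recommended policy:

      # Sketch: attach a Content-Security-Policy header to every response.
      # Example policy: scripts only from our own origin, user images only
      # from a separate content domain, no plugins, no inline <script>.
      from flask import Flask

      app = Flask(__name__)

      @app.after_request
      def add_csp(response):
          response.headers["Content-Security-Policy"] = (
              "default-src 'self'; "
              "script-src 'self'; "
              "img-src 'self' https://usercontent-example.com; "
              "object-src 'none'"
          )
          return response

      @app.route("/")
      def index():
          return "hello"

      if __name__ == "__main__":
          app.run()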

  • by gweihir ( 88907 ) on Thursday August 30, 2012 @07:06AM (#41176659)

    Images and text can be sanitized reliably. The problem is that this strips out all of the non-essential features. Users have a hard time understanding that, because users do not understand the trade-offs involved.

    But the process is easy: Map all images to metadata-free and compression-free formats (e.g., PNM), then recompress with a trusted compressor. For text, accept plain ASCII, RTF and HTML 2.0. Everything else, convert either to images or to cleaned PDF/Postscript by "printing" and OCR'ing.
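
    A minimal sketch of the image half of that pipeline in Python, using Pillow; the library choice and file paths are mine, and, as the replies below point out, TFA argues this alone is not sufficient:

    # Sketch: scrub an uploaded image by decoding it to raw pixels and
    # re-encoding with a trusted encoder, discarding metadata on the way.
    # Per TFA, a determined attacker can still craft pixel data that survives
    # a deterministic recode, so treat this as a mitigation, not a fix.
    from PIL import Image  # Pillow

    def scrub_image(src_path, dst_path):
        with Image.open(src_path) as im:
            # Copy only the pixel data into a fresh image; EXIF, ICC profiles,
            # comments and any appended junk are left behind.
            clean = Image.new("RGB", im.size)
            clean.putdata(list(im.convert("RGB").getdata()))
        clean.save(dst_path, format="PNG")

    scrub_image("upload.jpg", "upload_scrubbed.png")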

    • by Jonner ( 189691 )

      Images and text can be sanitized reliably. The problem is that this strips out all of the non-essential features. Users have a hard time understanding that, because users do not understand the trade-offs involved.

      But the process is easy: Map all images to metadata-free and compression-free formats (e.g., PNM), then recompress with a trusted compressor. For text, accept plain ASCII, RTF and HTML 2.0. Everything else, convert either to images or to cleaned PDF/Postscript by "printing" and OCR'ing.

      If you'd read TFA, you'd know that it explains why this is insufficient:

      For a while, we focused on content sanitization as a possible workaround - but in many cases, we found it to be insufficient. For example, Aleksandr Dobkin managed to construct a purely alphanumeric Flash applet, and in our internal work the Google security team created images that can be forced to include a particular plaintext string in their body, after being scrubbed and recoded in a deterministic way.

    • Images and text can be sanitized reliably.

      The point of the article is that they can't. Internet Explorer can be coerced into interpreting JPEG images as HTML, interpreting ASCII text as Flash, and interpreting text/plain documents as text/html, among other things. You can also play games with the encoding-recognition code by tweaking the first few bytes of the file, such that a document uploaded as ISO-8859-1 is interpreted by IE as UTF-7, or whatever other encoding suits your purposes. Note that in all

  • Novel Solution (Score:3, Interesting)

    by Sentrion ( 964745 ) on Thursday August 30, 2012 @10:00AM (#41177825)

    This was a real problem back in the 1980s. Every time I would connect to a BBS, my computer would execute any code it came across, which made it very easy for viruses to infect my PC. But lucky for me, in the early '90s the world wide web came into being and I didn't have to run executable code just to view content that someone else posted. The PC was insulated from outside threats by viewing the web "pages" only through a "web browser" that only let you view the content, which could be innocuous text, graphics, images, sound, and even animation that was uploaded to the net by way of a non-executable markup language known as HTML. It was at this time that the whole world began to use their home computers to view content online, because it was now safe for amateurs and noobs to connect their PCs to the internet without any worries of being inundated with viruses and other malware.

    Today I only surf the web with browsers like Erwise, Viola, Mosaic, and Cello. People today are accessing the internet with applications that run executable code, such as Internet Explorer and Firefox. Very dangerous for amateurs and noobs.

    • Today I only surf the web with browsers like Erwise, Viola, Mosaic, and Cello. People today are accessing the internet with applications that run executable code, such as Internet Explorer and Firefox. Very dangerous for amateurs and noobs.

      So, which are you, an amateur or a noob?

  • by kent.dickey ( 685796 ) on Thursday August 30, 2012 @11:35AM (#41178939)

    The blog post was a bit terse, but I gather one of the main problems is the following:

    Google lets users upload profile photos. So when anyone views that user's page, they will see that photo. But malicious users were making their photo files contain Javascript/Java/Flash/HTML code. Browsers (I think it's always IE) are very lax and will try to interpret files however they please, regardless of what the web page says. So the web page says it's pointing to an IMG, but some browsers will interpret it as Javascript/Java/Flash/HTML anyway once they look at the file. So now a malicious user can serve up scripts that seem to be coming from Google.com, and those scripts are given a lot of access at Google.com and break its security (e.g., let you look at other people's private files).

    Their solution: user images are hosted at googleusercontent.com. Now, if a malicious user tries to put a script in there, it will only have the privileges of a script run from that domain--which is no privileges at all. Note this just protects Google's security... you're still running some other user's malicious script. Not Google's problem.

    The article then discusses how trying to sanitize images can never work, since valid images can appear to have HTML/whatever in them, and their own internal team worked out how to get HTML to appear in images even after image manipulation was done.

    Shorter summary: Browsers suck.

    • I read TFA; that's a great summary.

      It's like waking up in a crappy mirror universe where all the work that we have done on security in the past 10 years is out the window, because unbeknownst to anyone but the browser vendors, our web browsers will go ahead and execute code embedded in non-executable mimetypes.

      Would it have been so hard to limit JavaScript execution to the handful of content types where it is supposed to be found? Apparently. So now images are Turing-complete, and all your cookies can be lifted by someone who puts <script src="http://private.com/users/you/profile.jpg"></script> in a page you visit.

      • Apparently. So now images are Turing-complete, and all your cookies can be lifted by someone who puts <script src="http://private.com/users/you/profile.jpg"></script> in a page you visit.

        It's worse than that. If you're using Internet Explorer, your cookies can be lifted by someone who puts <img src="http://private.com/users/you/profile.jpg"> in a page you visit, or your flash storage tampered by <a href="http://private.com/uploads/schedule.txt">.
