
Cross Site Scripting Discovered in Google

  • by Artifex (18308) on Wednesday December 21, 2005 @11:14AM (#14309026) Journal
    From TFA:
    --[ Solution

    Google solved the aforementioned issues at 01/12/2005, by using character encoding enforcement.

    --[ Acknowledgement

    The author would like to commend the Google Security Team for their cooperation and communication regarding this vulnerability.
  • It's been fixed (Score:4, Informative)

    by b4k3d b34nz (900066) on Wednesday December 21, 2005 @11:18AM (#14309061)

    Although the article details an interesting exploit, Google fixed this on the 1st of this month -- the title is somewhat misleading. It is useful to know that Google fixed this vulnerability about two weeks after it was discovered on November 15th.

    Also, for those of us unaccustomed to the DD/MM/YYYY date format, that's the format of all dates in the article.

  • Others.. (Score:5, Informative)

    by slashkitty (21637) on Wednesday December 21, 2005 @11:19AM (#14309074) Homepage
    They've had others in the past, but were quick to fix them. They have even sent t-shirts as thanks for the help. Other sites are not so friendly or fast. This site shows active security holes [devitry.com] in various sites that have gone unresolved. (XSS, insecure logins, etc.)
  • by Anonymous Coward on Wednesday December 21, 2005 @11:20AM (#14309086)
    Noooo, say it ain't so. Who'd 'a thunk it?

    I turned javascript off in 1999; just one less glaring security issue for me to address. Before anyone starts talking smack about responsive web apps, just remind me what Ed Felten said about flying pigs.

    That's right, disable js and fix the web!

  • by thr0n (835565) on Wednesday December 21, 2005 @11:27AM (#14309151)
    I told them about the XSS (CSS) security holes 2 months ago --
    the response was something like: "We will work on it; or we won't -- but we won't tell you ;)".
    Which sucks...

    Here we go:

    Original:
    https://www.vr-ebanking.de/index.php?RZBK=0280 [vr-ebanking.de]
    MY Version (XSS):
    https://www.vr-ebanking.de/help;jsessionid=XA?Action=SelectMenu&SMID=EigenesOrderbuch&MenuName=&InitHref=http://www.consti.de/secure [vr-ebanking.de]
    / Fälschung (forgery) --> imitation /

    ... Hope they change their mind, sometime. :)

    Consti / thr0n

  • What bullshit... (Score:3, Interesting)

    by ninja_assault_kitten (883141) on Wednesday December 21, 2005 @11:27AM (#14309160)
    Now we're going to start posting every freaking XSS we find? This is a VERY low-impact XSS vuln. Hell, it's not even persistent. Who freaking cares? Are we going to post the slew of recent Yahoo XSS bugs too? What about the bug in Google Analytics which allowed you to iterate through all the customer domains?
  • by G4from128k (686170) on Wednesday December 21, 2005 @11:31AM (#14309188)
    This example illustrates the advantages of web applications. Google was able to patch the flaw and roll it out to 100% of the user base in a short time period. Providing applications online means centralized version control and patching -- there's no waiting for all the users to patch.


    The downside is that this only works if the app provider is a proprietary vendor with a closed architecture. If 3rd parties are allowed to create extensions, or if users can create their own utilities/add-ons, then centralized patching would likely introduce the same types of incompatibilities and breakages that current OS patches can introduce. Worse, centralized control might mean that users have no choice but to live with the patched version.

  • This is amazing. (Score:5, Interesting)

    by dada21 (163177) * <adam.dada@gmail.com> on Wednesday December 21, 2005 @11:32AM (#14309201) Homepage Journal
    I'm always blown away by how the Internet security market works and corrects itself without any regulation.

    A major web site has a flaw. White hat and black hat "hackers" find that flaw, exploit it, and either abuse it or let the web site know about it. The web programmers go in and close the exploit because it affects how their customers use the service and could open them up to some liability.

    This is the way the free market works. I'm a huge fan of how quickly the Internet (anthropomorphically) adapts to the changing needs of its billion or so users. Some exploits that aren't fixed by the owners of the code are fixed by third parties -- sometimes for profit and sometimes for free. Before we can even write one law to attempt to solve these problems, others are already attacking them.

    I'd like to see it stay this way. Every time we move forward to create legislation to protect the end user (see CAN-SPAM and a myriad of other laws), we see failure time and again. The loopholes in the laws make them irrelevant quickly, and all we get out of that is wasted money and wasted time.

    Let the growth and expansion occur freely. We'll see some bad times (new viruses and new spam exploits) but we'll see those fixed in short order. If they don't get fixed, why is the Internet still chugging along and growing every day?
    • by elpapacito (119485) on Wednesday December 21, 2005 @12:41PM (#14309836)
      Yeah, you're blown away, without any doubt.

      The market doesn't work, the market doesn't eat, the market doesn't do crap; the market exists only as an abstract entity, a theoretical construct.

      Techies, geeks, hackers, whatever label... it is knowledgeable, skilled people who do the fixing!
      Some of them are motivated primarily by money and secondarily by showing the company they're good at solving problems as they come up... in the hope somebody will notice when it's firing time again, but that's just daydreaming. Others, a minority, do the fixing for fixing's sake, because they like to see tight, well-working systems and like to work on them.

      Also, free-market theory wouldn't allow an entity such as Google to exist for more than a very short time, as competitors would enter Google's market and take a cut of Google's excess profits. As a matter of FACT, and not theoretical model, this hasn't happened yet (and Google wasn't invented yesterday), nor is it going to happen quickly, as there are some strong barriers to entry in this market.

      So please don't give the free-market bullcrap credit when credit is due to WORKERS: the techies who bear most if not all of the stress and problems of competition, yet are paid dimes and see their future more uncertain than it ever was.

      Certainly Google's staff is to be praised for quickly fixing potentially serious problems.

    • by ediron2 (246908) * on Wednesday December 21, 2005 @05:37PM (#14312331) Journal
      Oh, for pete's sake...

      I think the 90% of the world that doesn't like obsessing over security would disagree with you about laissez-faire and how well it is handling identity theft and other criminal conduct that has exploded thanks to the internet. My dad *deserves* legal protections from phishing attacks (a specific example: banks should be required to guarantee client accounts... that is WHAT A BANK IS!!!). And a small business should have its online transactions safe from remote fraud (with banks again being held responsible for THEIR end of any fraudulent transaction). Doing so means legally-defined minimum standards and coverages for financial institutions.

      You're quick to claim that all regulatory activity is a failure, but using your same (flawed) reasoning, technological remedies have also failed to 'solve' ID theft, viruses, trojans, spam, keyloggers, hacking, international abuses, and so on. These problems all remain, and they need a blend of tech and legal remedies. Tech wherever possible, legal to make sure that it is never cheaper/easier to deny or whitewash an expensive problem.

      We outgrew that silly business-will-self-regulate oversimplification with Love Canal and DDT, if not with child labor. Online crime is huge and growing rapidly. People's lives are being harmed. And the single biggest cause is that easy-and-unsafe technological setups are not being held accountable for damages. Time and again, the market has proven unable to accommodate safety concerns: they are ignored in a race toward the bottom line. Whether we're talking about child labor, environmental protections, social security or online fraud, the market regrettably lacks this ability. The only difference here is that it is harder to directly KILL people via online crime. Because the market seems unwilling and unable to self-correct, tech remedies alone won't solve things. Culpability and minimum standards are needed to force all businesses to work at a minimum standard of protection.

      You're wrong here because you overreach. Both tech and legal remedies fail alone because of what they're up against: a rapidly-changing landscape of attacks and remedies.

      Tech innovation is incredibly powerful. For example, as much as I hate DRM, it at least improves the aggressive segmentation of data and code, strengthens authentication (itself a two-edged sword), and gets the problem back out of joe-user's lap. And that is exactly WHERE the problem needs to not be: producers should have minimum standards of quality and be held liable whenever they undercut these minimum standards. The argument worth holding is about the threshold required, not about whether public interests are served by having legal minimum standards.

      (Really, dada, it seems like every free-market crank message I see lately is written by you. Went to Foe you a week ago and found you ALREADY are on my foes list. This is finally a flaw with mitigating my slash-addiction with alterslash.org: it can't realign you into the permanent-troll status you deserve.)
      • by dada21 (163177) * <adam.dada@gmail.com> on Wednesday December 21, 2005 @09:47PM (#14314088) Homepage Journal
        I think the 90% of the world that doesn't like obsessing over security would disagree with you about laissez-faire and how well it is handling identity theft and other criminal conduct that has exploded thanks to the internet. My dad *deserves* legal protections from phishing attacks (a specific example: banks should be required to guarantee client accounts... that is WHAT A BANK IS!!!). And a small business should have its online transactions safe from remote fraud (with banks again being held responsible for THEIR end of any fraudulent transaction). Doing so means legally-defined minimum standards and coverages for financial institutions.

        Actually, a bank is there to store your valuable money, and that's all it is meant to do. A mortgage company is for home loans; a personal line of credit company is for credit cards. Banks just store money -- they used to store your gold very safely and give you a note guaranteeing you that gold -- it was called a dollar bill. Banks do not have to guarantee you anything; in fact, in a free market, banks that didn't guarantee you safety would not last, as people would put their money in safe banks. Don't ask laws to give you what you can have for the asking.

        You're quick to claim that all regulatory activity is a failure, but using your same (flawed) reasoning, technological remedies have also failed to 'solve' ID theft, viruses, trojans, spam, keyloggers, hacking, international abuses, and so on. These problems all remain, and they need a blend of tech and legal remedies. Tech wherever possible, legal to make sure that it is never cheaper/easier to deny or whitewash an expensive problem.

        Interesting. I don't use my ID -- ever. I don't use my social security number except when I take payments from a customer and need to fill out a 1099. I don't bank, so I don't worry about banks. I don't have credit cards anymore. Why would I worry about identity theft? Everyone that knows me, KNOWS ME. Viruses are solved -- I haven't had one in years. Anyone who gets a virus is to blame, not the virus. Spam, all that? I don't get it either. My public e-mail address here got 2 spam messages last week, and I post my e-mail address for all to see!

        We outgrew that silly business-will-self-regulate oversimplification with Love Canal and DDT, if not with child labor.

        I'm glad I'm on your foe list, because you speak nonsense, seriously. I don't mean to write any flamebait, but Love Canal was proven a government problem, not a corporate one. The government you so loved made the problem what it is [harrybrowne.org]. In fact, in media publications of the time, before the disaster, many companies were warning the school board not to build there. Your government did it, not any big bad corporation.

        As for DDT, this is another greenie myth. You might have "learned" some scary myths in your pro-environment rally or in your public school, but it's all just myths [junkscience.com].

        Don't spew authoritarian rhetoric if you're against my anti-authoritarian rhetoric. We'll just both flag each other -5 and be done with it. I personally like hearing debates against my opinions, but not when it is the same MYTHS, disproven over and over and over for the last decade. Come up with new things to find false, will you?
      • by Geoff-with-a-G (762688) on Wednesday January 04, 2006 @12:59PM (#14392946)
        You're quick to claim that all regulatory activity is a failure, but using your same (flawed) reasoning, technological remedies have also failed to 'solve' ID theft, viruses, trojans, spam, keyloggers, hacking, international abuses, and so on.

        Yes, clearly the unregulated (or minimally regulated) Internet has proven vastly inferior to the legally enforced areas like theft, rape, assault, and murder. It turns out that market forces don't eliminate 100% of problems, whereas clearly government regulation does.

        Or, if we drop the sarcasm and extreme oversimplification, we discover that both the mostly laissez-faire world of Internet commerce and the mostly government-handled realms of law enforcement and personal safety fail to solve their problems 100%. Yes, viruses exist. This doesn't mean that tech security is a failure.

        Up until the big-news virus/worm epidemics a year or two back (Blaster, Nachi, Sasser, MyDoom, etc.), viruses and worms weren't really that big of a problem. Yes, they existed; yes, they infected a couple of computers. But that wasn't a big enough problem to justify spending lots of money addressing those issues.

        After the big-news problems hit, companies started taking computer security more seriously, without government regulation having to tell them to. The very large government organization where I work established a Chief Security Officer position and a whole department that hadn't been there before. Even Microsoft started massive pushes to hire more security-conscious programmers and prioritize security. Yes, it will take a while for these things to bear fruit, but large government programs don't move any faster than private ones.

        Neither extreme is perfect. Free-market security behaviors don't completely eliminate viruses and worms and identity theft, just as government law enforcement doesn't completely eliminate crime. But both approaches do quite well, and as dada points out, the self-instituted corporate responses to computer security flaws have been quite impressive. The number of zero-day exploits remains small. Recent studies show that the vast majority of identity theft never leads to any actual harm. I don't think government regulation would significantly improve this area, and it would perhaps make it worse.

        That doesn't mean that the complete laissez-faire approach solves all problems, but a mostly laissez-faire approach does mostly solve some problems, and it appears that this is one of them.
  • Encoded post.. (Score:2, Insightful)

    by slashkitty (21637) on Wednesday December 21, 2005 @11:40AM (#14309279) Homepage
    Does anyone have the real post that hasn't been mangled by the mailing list? What are these characters that they used? Does anyone have a working exploit of this type (encoded xss) on another site?
    • Re:Encoded post.. (Score:1, Informative)

      by Anonymous Coward on Wednesday December 21, 2005 @12:20PM (#14309649)
      Does anyone have the real post that hasn't been mangled by the mailing list? What are these characters that they used? Does anyone have a working exploit of this type (encoded xss) on another site?

      I think that the authors of the report did the responsible thing in informing Google first, waiting until the problem was fixed (within a reasonable amount of time) and then describing the vulnerability without providing an exploit.

      The message gives enough clues about how to create an exploit, though. You just have to know a bit about the UTF-7 encoding. Hint: this is not the same as UTF-8 or iso-8859-1. Once you know that, think about how one could fool a filter that is trying to remove "dangerous" characters from a text, knowing that the filter expects these characters to be encoded in iso-8859-1, while they are interpreted by the browser as UTF-7. Second hint: think about how a single character is encoded in multiple characters and how the bit shifting is done. Your goal in this case would be to encode some text in such a way that the filter expecting the default encoding would only see garbage, while the browser decoding the same text as UTF-7 would see something like "<script ...>". Writing the exploit is left as an exercise to the reader.
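
      For anyone who wants to see the trick concretely, here is a minimal Python sketch (my own illustration, nothing from the actual report): UTF-7 turns "<" and ">" into the innocent-looking sequences +ADw- and +AD4-, so a filter scanning the text in the assumed default encoding never sees a tag, while a browser that decides the page is UTF-7 decodes them right back.

        # Hypothetical demo of the UTF-7 trick, not the actual exploit.
        payload = "<script>alert(document.cookie)</script>"

        # UTF-7 encodes "<" and ">" as +ADw- and +AD4-, leaving only ASCII
        # letters, digits and "+/-" on the wire.
        encoded = payload.encode("utf-7").decode("ascii")
        print(encoded)  # +ADw-script+AD4-alert(document.cookie)+ADw-/script+AD4-

        # A naive filter checking the text in its assumed default encoding
        # finds nothing dangerous...
        looks_safe = "<" not in encoded and ">" not in encoded
        print(looks_safe)  # True

        # ...but a browser decoding the same bytes as UTF-7 sees the script tag.
        print(encoded.encode("ascii").decode("utf-7"))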

  • XSSholes! (Score:5, Funny)

    by digitaldc (879047) * on Wednesday December 21, 2005 @11:41AM (#14309294)
    "How common are XSS holes?"
    I had to laugh at that one.

    Only an XSShole would steal your cookies.
  • by Phosphor3k (542747) on Wednesday December 21, 2005 @11:44AM (#14309319)
    Someone is trying to get their PageRank up by submitting the story with a name of "Security Test" and linking to their shoddy website. The site has only a few links, no content, and it says the page is for sale. Will Slashdot ever get their shit together and stop posting submissions with blatant PageRank-whoring links like this?
  • by Anonymous Coward on Wednesday December 21, 2005 @11:48AM (#14309352)

    This is reported as a Google.com bug, which is partially true. But this is only one half of the problem. The other half of the problem (mentioned in the full article) is due to a dubious feature in Internet Explorer: when it gets a page without a specified character encoding, it does not rely on default values for the encoding (which should be iso-8859-1 for HTML or UTF-8 for XHTML).

    Instead, Internet Explorer tries to guess the encoding of the contents by looking at the first 4096 bytes of the page and checking the non-ASCII characters. In the case of the cross-site scripting attack described here, the problem is that IE would silently set the encoding of a page to UTF-7 if some characters in the first 4096 bytes looked like UTF-7. This silent conversion to UTF-7 by Internet Explorer, in text that Google assumed to use the default encoding, allowed the attackers to bypass the way Google was filtering "dangerous" characters in some URLs.

    The article puts the full blame for the vulnerability on Google.com. I think that a part of the blame should also be shared by the Internet Explorer designers (and any other browser that does unexpected things while trying to guess what the user "really meant").
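
    To make the fix concrete, here is a minimal sketch (a toy handler of my own, not Google's code) of what "character encoding enforcement" amounts to: declare a charset on every response, so the browser never has a reason to guess.

      # Toy illustration of charset enforcement -- not Google's actual code.
      from http.server import BaseHTTPRequestHandler, HTTPServer

      BODY = "<html><body>Not Found</body></html>".encode("utf-8")

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              self.send_response(404)
              # The explicit charset is the whole point: without it, IE 6 may
              # sniff the first bytes of the body and decide the page is UTF-7.
              self.send_header("Content-Type", "text/html; charset=UTF-8")
              self.send_header("Content-Length", str(len(BODY)))
              self.end_headers()
              self.wfile.write(BODY)

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), Handler).serve_forever()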

    • by Chmarr (18662) on Wednesday December 21, 2005 @01:13PM (#14310137)
      I don't think this is an IE-only misfeature. Having a look at the browsers I use:

      Camino: Default setting is "Automatically Detect Character Encoding"
      Firefox: Default setting is UTF-8
      Safari: Doesn't explicitly say, but I just fed UTF-8 into a text file with no declared encoding, and Safari picked it up. So I assume that its default is also "automatically detect".
    • by radtea (464814) on Wednesday December 21, 2005 @02:28PM (#14310765)
      I think that a part of the blame should also be shared by the Internet Explorer designers (and any other browser that does unexpected things while trying to guess what the user "really meant").

      The viability of the Web is entirely dependent on browsers trying to figure out from incomplete, incorrect and/or inconsistent information what users "really meant." A browser that only renders standards-compliant HTML with unambiguous character encodings would only be able to handle a few percent of the Web.

      The fundamental problem with a distributed system like the Web is that the strength of the contract between browsers and content-providers is extremely weak, and both sides are effectively encouraged to abuse that weakness by putting the blame on the other. If a browser won't render common HTML errors "properly" it is considered broken, and if a content-provider doesn't hack up their HTML to take advantage of non-standard browser extensions they are considered backward and dull.
    • As much as I hate Microsoft and Internet Explorer, it sounds to me like their crappy browser is irrelevant here. You should never trust the browser: anything the browser sends you could actually have come from a malicious user who bypassed the browser entirely in order to send deliberately broken input, and you have to deal with that. Your error messages don't have to be graceful and verbose, but you have to trap errors even if they should never occur.
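
      To put that rule in code form, a minimal sketch (illustrative only, not from any particular framework): escape anything user-supplied before it is reflected into HTML, regardless of what the client claims to be.

        # Hypothetical helper: escape untrusted input before reflecting it.
        import html

        def render_not_found(requested_path):
            # html.escape turns < > & " into entities, so a crafted path
            # cannot smuggle markup into the error page.
            return "<html><body>Not found: {}</body></html>".format(
                html.escape(requested_path))

        print(render_not_found("/<script>alert(1)</script>"))
        # <html><body>Not found: /&lt;script&gt;alert(1)&lt;/script&gt;</body></html>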
  • by 8127972 (73495) on Wednesday December 21, 2005 @12:06PM (#14309499)
    ..... seems to be very good. They acknowledged the problem quickly (the same day, if I recall correctly) and fixed it within days. Instead of treating this posting as if there is a bug out there that is a clear and present danger, perhaps we should be talking about how good their response was, and about why other software companies aren't as responsive.
  • by chunews (924590) on Wednesday December 21, 2005 @12:19PM (#14309642)
    IANAL, but I am always amazed at how these security issues are found and resolved, since the exploratory phase for white and black hats is, essentially, the same. (I have a similar pet peeve about journalists who, with their hidden cameras, are able to investigate the mysteries of illicit acts without any repercussions.)

    While it may be one thing to pull apart IE and Windows XP (that can be done offline, in an unconnected lab, with zero impact on the larger community), where does one acquire the balls to go and tinker with a hugely popular online site like Google, where the mere act of investigation -may- impact the operational stability of the site?

    Now, I know that XSS is benign, but who's to say that there wouldn't be some ping-of-death-like characteristic with a bizarre UTF-7 encoding? While it's doubtful that Google would have such poor quality in their applications, why does the white-hat security community get carte blanche access to test it out?

    I could be bitter because I sent a similar email to Google (regarding their Gmail login account and the 'continue=' variable) in March but never heard a reply. But to Google's credit, and in my defense, I only indicated that it looked highly suspicious and never took the next step to craft an actual attack and send them the code.

    If a security engineer should happen across the logs and start to see a bunch of unusual encodings, or what appears to be a recon of the website's characteristics, what level of forgiveness would be applied if the source of such network activity was from eEye, or Watchfire? And what if it was bankofamerica.com instead of google?

    I am all for giving vendors a reasonable amount of time to fix a defect and then providing full disclosure, but I'm not keen to keep paying for Watchfire (eEye, ISS, etc.) to go to school and get free press based on unauthorized access to my production systems -- where is the balance?

  • Cookies (Score:3, Interesting)

    by kernelfoobar (569784) on Wednesday December 21, 2005 @12:23PM (#14309683)
    I don't know if it's related, but I've noticed a couple of times that when I get the search results page, I get asked to set a cookie from one of the sites in the results, without clicking on them (my Firefox is configured to prompt before setting cookies). This is somewhat disturbing; I mean, if my FF were set to accept cookies automatically, I would have cookies for sites I have never visited...

    Did anyone else notice this?
  • Google vulnerable? (Score:5, Insightful)

    by Anonymous Cowhead (95009) on Wednesday December 21, 2005 @12:25PM (#14309698)
    It seems odd to blame this on Google. According to the linked mailing list posting, the problem is caused by the "auto detect character set" feature in IE (and probably other browsers), and the lack of a "charset" parameter in the HTTP response from Google. The HTTP spec is pretty clear that a missing charset parameter means ISO-8859-1, not "browser should guess", and certainly not UTF-7.

    So isn't it really the "auto detect" feature in the browser that causes the vulnerability, and not Google's lack of "charset encoding enforcement" as the mailing list posting from Watchfire Research claims? Let's put the blame where it belongs. I say we should applaud Google for going the extra kilometer to protect users with non-compliant browsers.
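
    If you are curious whether a given server declares a charset at all, a quick check is easy (a sketch; the URL is just a placeholder):

      # Quick sketch: does a given page declare a charset at all?
      from urllib.request import urlopen

      resp = urlopen("http://www.example.com/")
      print(resp.headers.get_content_type())     # e.g. text/html
      # None means the client is left to the spec default -- or to guessing.
      print(resp.headers.get_content_charset())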
    • by http101 (522275) on Wednesday December 21, 2005 @12:34PM (#14309777) Homepage
      I'm definitely with you on this one. The browser itself should be blamed for automatically assuming the encoding. It's like IE assuming anyone from Mexico is Italian; sure, the languages share common words, but that doesn't make Mexicans the makers of spicy sausage, shoes, and Ducati motorcycles.

      As for Google going the extra 0.621371192 miles to make sure the end user is protected against this, I do praise them for their efforts. The part in the article that caught my eye was the segment near the bottom showing when this problem was fixed (see below). This is hardly news.
      --[ Solution

      Google solved the aforementioned issues at 01/12/2005, by using character encoding enforcement.
    • by pembo13 (770295) on Wednesday December 21, 2005 @12:56PM (#14309976) Homepage
      Yah, but then people will call you a Google fan boy
    • by jonwil (467024) on Wednesday December 21, 2005 @09:18PM (#14313915)
      This is not the only place where Internet Explorer does something different from what HTTP says.

      As far as I know, HTTP says that if the HTTP headers have a Content-Type header, the browser should treat the data as though it were that content type, regardless of the actual contents. But IE does not do this. IE will use the Content-Type header, the file extension AND the contents of the file to decide what to do with it. This means that even though the web server sent the file as text/plain, IE may not render it as plain text (for example, sending HTML as text/plain won't work, since IE will render the HTML anyway).

      Mozilla and Firefox get it right and treat the Content-Type as authoritative (although I think there is an exception when loading an image for an IMG tag).

      Interpreting the file type based on the contents or extension should only be done if the server does not send a Content-Type header.
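
      As a sketch of that rule (the functions are illustrative stubs, not any browser's actual source): the declared Content-Type, when present, wins; guessing from the extension or from the bytes is only a fallback.

        # Illustrative decision rule, not real browser code.
        def guess_from_extension(ext):
            return {"html": "text/html", "txt": "text/plain"}.get(ext)

        def sniff_bytes(body):
            return "text/html" if body.lstrip().lower().startswith(b"<") else "text/plain"

        def choose_media_type(declared_type, url_extension, body_bytes):
            # A declared Content-Type header is authoritative: HTML sent as
            # text/plain should be shown as plain text.
            if declared_type:
                return declared_type
            # Only when the server says nothing do we fall back to guessing.
            return guess_from_extension(url_extension) or sniff_bytes(body_bytes)

        print(choose_media_type("text/plain", "html", b"<h1>hi</h1>"))  # text/plain
        print(choose_media_type(None, None, b"<h1>hi</h1>"))            # text/html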

"Your mother was a hamster, and your father smelt of elderberrys!" -- Monty Python and the Holy Grail

Working...