Google Security

Source Code To Google Authentication System Stolen

Aardvark writes "More details are coming out about the extent of the break-in at Google a few months ago. The NY Times is reporting that one of the things stolen was the source code to Google's single sign-on authentication system, called Gaia. Though Google is making changes to the system, the theft raises the possibility that attackers could analyze the code to find new exploits in the future. No wonder Eric Schmidt recently said they've become paranoid about security."
  • by Animaether ( 411575 ) on Monday April 19, 2010 @10:22PM (#31905554) Journal

    Stolen?

    What... they're no longer in possession of the source code?

  • by dudpixel ( 1429789 ) on Monday April 19, 2010 @10:48PM (#31905752)

    There was no mention of whether their security system is buggy or not. The attack came through a hacked website, with the help of an internal employee, not from someone "hacking into" the system. The weak link in the chain is always people, not software.

    Wasn't this same attack linked to MS Internet Explorer 6? Had to bring that up... of course, I could be wrong.

    Anyone know of any large company opening up the source code to their security systems?

  • Paranoia (Score:2, Interesting)

    by Internetuser1248 ( 1787630 ) on Monday April 19, 2010 @11:06PM (#31905896)
    This sounds very, very bad to me; the worst part is that security and paranoia always lead to bad decisions and breaches of rights. Even if we believe Google's "do no evil" policy, if they are pushed far enough they will become something we don't want.
  • by nomadic ( 141991 ) <nomadicworld@@@gmail...com> on Monday April 19, 2010 @11:27PM (#31906020) Homepage
    As Bruce Schneier said, security through obscurity does not work...

    That has been a mantra on Slashdot since it started, and I have never been convinced that it's necessarily true. There are plenty of examples where a security hole was discovered in 10+-year-old open source code. On the other hand, there's no way of knowing how many security holes are never exploited because the company whose systems have them keeps quiet.
  • by dr-alves ( 1612081 ) on Monday April 19, 2010 @11:31PM (#31906050)
    Not a rip-off if you give it away and, in return, gain money or increase the readiness of your pool of potential hires.
  • Re:Cloud security? (Score:5, Interesting)

    by GNUALMAFUERTE ( 697061 ) <almafuerte@@@gmail...com> on Monday April 19, 2010 @11:51PM (#31906164)

    Oh, except it was Microsoft's operating system and Microsoft's messenger. I don't understand this concept of computing where you can click on "the wrong link". I can click on whatever link I want, and that is not supposed to destroy my computer. I use Pidgin on GNU/Linux, and I can click on ANY link I want. Clicking on a link won't do anything besides opening it in a browser or asking me to download the file. Unless I sudo su and chmod +x $file and ./$file, nothing is going to happen. But we hear all the time about Windows users getting randomly infected with malware just by clicking on a fucking URL, or going to the wrong site, or just connecting to the wrong LAN. Clicking on a link is NOT supposed to give ANYTHING any kind of execute permissions.

    I don't browse with Flash, but I do keep an alternate Firefox directory with Flash installed in case I really, really need to check out something that requires it. I can't believe how invasive that thing is, and how many privileges it automatically grants to random content on the web. Same thing for JS. The simple fact that 'last measure' still works is living proof of how stupidly insecure certain technologies are.

    And, no, it's not the user's fault for clicking on a link.
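
    For what it's worth, the parent's point is easy to verify on any Linux box: a file saved by the browser arrives without the execute bit, so following a link alone can't run anything. A minimal sketch in Python (the ~/Downloads/suspect.bin path is just a made-up example):

    import os
    import stat

    # Hypothetical path standing in for a file the browser just saved.
    path = os.path.expanduser("~/Downloads/suspect.bin")

    mode = os.stat(path).st_mode
    executable = bool(mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))

    # Downloads are created according to the umask (typically 0644), so this
    # prints False until someone explicitly runs `chmod +x` on the file.
    print("executable?", executable)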

  • by TubeSteak ( 669689 ) on Tuesday April 20, 2010 @12:29AM (#31906386) Journal

    Seriously, the bad guys already have it, so enlist the help of the security community to improve it.

    There's probably a whole lot of stuff in that source code that is either a trade secret or gives clues to trade secrets Google would rather keep private.

    The most realistic course of action would be for them to hire some 3rd party pen testers and auditors to pick apart their code under a microscope.

  • by zoid.com ( 311775 ) on Tuesday April 20, 2010 @12:29AM (#31906388) Homepage Journal

    I've been sent spam recently from quite a few people whose Gmail accounts have been hacked. Look at the Gmail forums....

    http://www.google.com/support/forum/p/gmail/label?lid=65ac3f0a8251ca2d&hl=en [google.com]

    Filled with messages about spam coming from hacked accounts. Coincidence?

  • by Vellmont ( 569020 ) on Tuesday April 20, 2010 @12:35AM (#31906426) Homepage


    and simply being concerned because the bad guys have more ability to search for flaws.

    Much of the world relies on security systems that are completely open and available to everyone. One of the prime examples is OpenSSH. Another prime example is OpenSSL. I don't hear too many people worried that these systems are more vulnerable because attackers have access to the code.

    The latter is a pretty natural human reaction to an event like this, regardless of how well designed their security system is, because all designs, and all code, potentially contains flaws, even if designed and implemented by the most brilliant security researchers.

    Panic and stupidity are also natural human reactions. Since when did something being "natural" become a justification for it? I can understand the reaction, but that doesn't mean it's right.

    It's pretty stupid to rely on code remaining secret. Code is something that's very difficult to make secret as it gets copied all over the place. How many people at Google already have access to it? It seems to me that if Google really wants to be secure they should just release the damn code so "the good guys" also have access to it, since apparently "the bad guys" already do.

  • by Barryke ( 772876 ) on Tuesday April 20, 2010 @04:46AM (#31907496) Homepage

    I can imagine Google deciding to replace Gaia. They might open-source parts of the authentication or encryption code. A public audit, if you will.

  • by SharpFang ( 651121 ) on Tuesday April 20, 2010 @07:34AM (#31908160) Homepage Journal

    I worked at a big portal, and I can say it was not possible to protect our apps from -everything-.
    Some things are simply not possible - like keeping the IPs of all users ever against every page in the portal ever visited. Too much data, simply.

    We depended on obscurity - keeping the code secret - in several cases:
      - make the attacker believe the attack succeeded when it didn't, so they keep working that vector instead of trying something harder that could actually succeed
      - short-lived, statistical blacklists. If you knew you had been blacklisted, you'd mitigate it, say, by switching IPs.
      - caches that make your results unverifiable. Even if you managed to affect the current page, you'd get the result from one of 100 random nodes in the cluster, which was unaffected, so you wouldn't see the results of your attack and would decide it isn't working.
      - volatile personal caches. If you really want, you can change the way the site behaves - for you. Nothing and nobody else will be affected.
      - bulletproof pages - several levels of fallback in case of error. If you manage to DoS one service, the page falls back to its alternative, quietly and transparently. It looks like your attack didn't work. It did, but we won't let you know it.

    This is an efficient deceit that kills 99% of attacks dead in their tracks. A third party reading the sources would see through it, and we'd be pretty much fucked - implementing -proper- security would cost a fortune, increase the cluster load by a good 30%, and hold back current projects by months. But as it stands, the site is built on a million small white lies, so that if you try to break something, you never know whether you succeeded or not.
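
    To make the idea concrete, here's a rough sketch in Python of the "pretend it worked" pattern described above - every name, the TTL, and the canned page are invented for illustration, and a plain dict stands in for the shared cache. A temporarily blacklisted client, or one that manages to knock over a backend, still gets an ordinary-looking 200 page, so it never learns that it was filtered or that something actually broke:

    import time

    # Toy in-memory blacklist with short-lived entries (seconds).
    # In the setup described above this would live in a shared cache
    # such as memcache, not in process memory.
    BLACKLIST_TTL = 300
    _blacklist = {}  # client_ip -> expiry timestamp

    def blacklist(client_ip):
        _blacklist[client_ip] = time.time() + BLACKLIST_TTL

    def is_blacklisted(client_ip):
        expiry = _blacklist.get(client_ip)
        if expiry is None:
            return False
        if time.time() > expiry:
            del _blacklist[client_ip]  # entry quietly expires
            return False
        return True

    CANNED_PAGE = "<html><body>Welcome back!</body></html>"

    def handle_request(client_ip, render_page):
        """Serve the real page, or a convincing fake for filtered clients.

        render_page is whatever callable builds the real response; if it
        blows up (say, one backend is being DoSed), fall back quietly
        instead of exposing an error the attacker could use as feedback.
        """
        if is_blacklisted(client_ip):
            # Fake success: the attacker sees a normal-looking page and
            # keeps hammering a vector that is already neutralised.
            return 200, CANNED_PAGE
        try:
            return 200, render_page()
        except Exception:
            # Bulletproof fallback: degrade quietly and transparently.
            return 200, CANNED_PAGE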

  • by SharpFang ( 651121 ) on Wednesday April 21, 2010 @03:51AM (#31919544) Homepage Journal

    You seem never to have worked in a 10k+ clicks-per-second environment.

    A farm of several hundred servers runs at between 80% and 100% load at all times. Developer costs are minor/negligible compared to hardware, electricity, and bandwidth costs. A man-month spent optimizing the size of a single page by 1% is well worth the investment.

    Increasing server load by 30% to remove another 0.1% of attacks is completely unacceptable. We don't care if 1% of users won't see the page; when the farm peaks in the rush hours, about 5-10% won't anyway. So we're more concerned about those 5-10% than about that 1%.

    The core systems are properly secured - the main database and all script pages are 100% read-only from the frontend side. Devs and editors access them through a dedicated link, which is properly secured. The only vulnerable parts are the user-editable extras - fora, blogs, comments, polls, galleries. They are the first to be cut off when the system peaks, they are sandboxed safely away so breaking them won't break the main articles, and honestly, if some of them get hacked from time to time - someone takes over someone else's account, someone injects rude posts into someone's blog through some XSS, some poll gets skewed - nobody cares.

    The beauty of the "deceitful" methods is that they cost nothing. A 401 error page would have to be displayed anyway; what does it cost us to replace it with a fake 200? A lookup into memcache brings the necessary user ID along with the blacklist status, and actually -saves- us a costly commit to disk. A proxy is there to protect the front servers from the traffic; confusing the attacker is a desirable but not essential side effect. These counter-measures are okay because they cost only developer effort (cheap) and no server load (expensive).

    Also, with Google eating up the lion's share of the market, profit margins have gotten much narrower. It's not just a matter of buying another 500 servers; it's a matter of staying 3% above the break-even line instead of 3% below. And if it comes to cutting costs, developers will be the first to go.
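
    As a sketch of the cheap 401-to-fake-200 swap mentioned above (a plain dict stands in for memcache here, and the token names and cache layout are made up for illustration): a single lookup yields both the user ID and the blacklist flag, so the deceit adds no extra round trips and no disk writes.

    # A dict stands in for memcache; in production this would be a single
    # get() against the shared cache.
    session_cache = {
        # token -> (user_id, blacklisted) -- invented layout for illustration
        "tok-abc123": (42, False),
        "tok-evil99": (1337, True),
    }

    FAKE_OK_BODY = "<html><body>OK</body></html>"

    def authenticate(token):
        """One cache lookup returns identity and blacklist status together."""
        return session_cache.get(token, (None, False))

    def serve(token):
        user_id, blacklisted = authenticate(token)
        if user_id is None:
            return 401, "login required"  # genuinely unknown client
        if blacklisted:
            # The honest answer would be 401/403, but a fake 200 costs the
            # same to render and gives the attacker nothing to learn from.
            return 200, FAKE_OK_BODY
        return 200, "<html><body>hello user %d</body></html>" % user_id

    if __name__ == "__main__":
        print(serve("tok-abc123"))  # real page for a normal user
        print(serve("tok-evil99"))  # fake 200 for a blacklisted one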

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...