Facebook Security

Facebook, Researcher Spar Over Instagram Flaw Disclosure (exfiltrated.com) 31

msm1267 writes: A security researcher is in a bit of a scrum with Facebook over vulnerability disclosures that not only tested the boundaries of the social network's bug bounty program but, he said, also prompted hints of legal and criminal action, which Facebook has since denied. Wesley Wineberg, a contract employee of security company Synack, said today that he had found some weaknesses in the Instagram infrastructure that allowed him to access source code for recent versions of Instagram, SSL certificates and private keys for Instagram.com, keys used to sign authentication cookies, email server credentials, and keys for more than a half-dozen other critical functions, including iOS and Android app signing keys and iOS push notification keys. Wineberg also accessed employee accounts and passwords, some of which he cracked, and had access to Amazon buckets storing user images and other data, prompting claims of user privacy violations from Facebook.
  • by penguinoid ( 724646 ) on Friday December 18, 2015 @04:47AM (#51142129) Homepage Journal

    Post the full details, everything, on your Facebook account. That way if they don't like it they can just delete it.

  • by EmperorOfCanada ( 1332175 ) on Friday December 18, 2015 @05:25AM (#51142189)
    One of the problems with bug bounties is information control. I am not talking so much about bugs leaking out or making the company look bad as about the fact that bounty reports put in plain view of the higher-ups exactly what bugs the outside world is discovering. Those in charge of security therefore see bug bounties as career-damaging information they can't control. I am willing to guess that many submissions to bug bounty programs are complete crap. People no doubt write in vague things such as "You are using the Monkey BM operating system which is known to have many flaws. You can send the cheque to ..." So it is probably easy for the CSO to take a genuine flaw and file it under "spurious". The worse the flaw, and the clearer the evidence of how damning it is, the faster they no doubt want to make it go away. The CSO is probably used to being tyrannical toward his own employees and many others in the company. Can you imagine if he called your boss within the company and indicated that you were a threat to the company?

    So when he pulled this shit and called up a company out of the blue, he probably thought his reign of terror would apply there too.

    So if I were his boss I would not only look into this one case but also look at how many other cases he suppressed. Then I would carefully look into his behaviour in the office. I would suggest hiring an outside company to do anonymous surveying of his immediate underlings and others he has dealt with to see if he is a bully. I would also look into any firings he was involved with, especially if they were outside his direct purview. Did he have some guy escorted out of the building because he wanted his parking space?
  • Facebook’s statement:

    "We are strong advocates of the security researcher community and have built positive relationships with thousands of people through our bug bounty program. These interactions must include trust, however, and that includes reporting the details of bugs that are found and not using them to access private information in an unauthorized manner."

    They forgot to end that with "...because we're the only ones that are allowed to do that, while shoveling truckloads of money into our bank accounts".

    • by Anonymous Coward on Friday December 18, 2015 @06:07AM (#51142247)
      Facebook isn't wrong, though. There isn't a single white-hat penetration tester out there who will say it's OK to access systems you aren't given permission to access, even if it's in the act of discovering vulnerabilities that you intend to disclose. He found a vulnerability in their system and, instead of reporting it immediately, he decided to see how deep that particular rabbit hole went. He used credentials that did not belong to him to access systems he did not have permission to access, a direct violation of many countries' laws (including the US, where those servers are housed). This "security researcher" did way more than discover and disclose a vulnerability; he also took advantage of that vulnerability without permission from Facebook, in direct violation of most countries' laws. If I were Facebook I wouldn't just refuse to pay the guy, I would consider legal action as well. It should not be acceptable to hack into someone's servers as long as you report it to them later. Who knows if this individual "security researcher" or his company might have decided to keep some of those private certs and credentials around for future use. Just because this one might not have doesn't mean the next one wouldn't. This behavior is unacceptable from a supposed "security researcher", especially since he should know better.
      • by Cederic ( 9623 )

        Yeah, it's weird that he's pissed off with them after he's the one that broke multiple laws.

        Whether they're incompetent fuckwits exploiting two billion people is totally irrelevant, he still broke the law and shouldn't be surprised if legal action follows.

        If he's lucky it'll only be civil action.

      • Who knows if this individual "security researcher" or his company might have decided to keep some of those private certs and credentials around for future use.

        Actually, if there is a chance he has a copy of the signing keys, some of which cannot be changed, Facebook should just pay the bounty, and consider itself lucky that the security researcher doesn't consider himself a criminal.

        Facebook should take a page out of the US anti-nuclear proliferation playbook. If a country is trying to get the nuke, you punish it. You bomb it back to the dark ages. On the other hand, once a country already has a working nuke (especially more than one), you put on a show for

      • by bill_mcgonigle ( 4333 ) * on Friday December 18, 2015 @09:56AM (#51142749) Homepage Journal

        > There isn't a single white-hat penetration tester out there who will say its ok to access systems you aren't given permission to access, even if its in the act of discovering vulnerabilities that you intend to disclose.

        If you're not hired by FB but are probing their systems to look for vulnerabilities as their bounty system encourages, you cannot meet the criterion you outline.

        The goal apparently needs to be clearer: if FB's goal is to find as many problems as possible, then stopping at the first problem and closing that door does not achieve the goal.

        Unless we hear that he sold the info to a third party, it looks like there's no victim here and FB looks bad for overreacting when it got caught with its pants down (wait ... Instagram, not Snapchat).

  • by phantomfive ( 622387 ) on Friday December 18, 2015 @05:58AM (#51142235) Journal

    access source code for recent versions of Instagram, SSL certificates and private keys for Instagram.com, keys used to sign authentication cookies, email server credentials, and keys for more than a half-dozen other critical functions, including iOS and Android app signing keys and iOS push notification keys. Wineberg also accessed employee accounts and passwords, some of which he cracked

    Warning: if you are going to do security research, don't access all that stuff (without permission from the company); it can be completely illegal.
    People have literally gone to jail for accessing less than this guy did. Whether you think it should be illegal or not, it is illegal and you should be more careful than he was.

    • by Xest ( 935314 )

      Yeah, as much as I hate to defend Facebook here, I fail to see how Facebook is in the wrong. It's clear the guy didn't just find an exploit; he used it to scour the deepest depths of Facebook's network and exfiltrate the most sensitive of data.

      That's not security research uncovering a vulnerability, that's outright hacking into Instagram and then saying "Oh, I was just doing you a favour" after the fact.

      When you find an exploit, you report it; if instead you delve into the system and start to no

  • Does signing an authentication cookie actually accomplish anything? Couldn't the cookie just be copied byte-for-byte and used as-is? What is the point of signing it? (A short sketch of what signing buys you follows this thread.)
    • by DarkOx ( 621550 )

      Older versions of Rails deserialized cookies to a Ruby object. That is an RCE if you make a complex object. The expectation of the web application is that the cookie would deserialize to a Hash or similar object. Well, if you create an object that defines some of the methods commonly used on Hashes, like [], select, each, etc., you will be able to put whatever you want there and get it called. The security Rails had in place on that was to check the signature. If the signature was valid, then the browser faithfull

        If I were doing a test of an application for an organization with which I did not have a defined client relationship, and I saw something like this (I have actually done this), I would generally have injected something like `nslookup somewildlonguniquestring.mydomain.com` and watched my DNS server to see if it gets such a request. The web server should log requests, hopefully even things like cookies, so if you don't pop a shell it should be EASY for forensics to confirm you did what you said you did and no more.

        That's a good idea.
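
    To make the signed-cookie question above concrete, here is a minimal sketch (in Python, purely illustrative; the names and scheme are assumptions, not Facebook's or Rails' actual implementation) of an HMAC-signed cookie. Signing does not stop an attacker from copying a cookie byte-for-byte and replaying it; it stops them from minting a cookie for another user or tampering with the payload, and in the Rails case described above a valid signature is also what gates whether the payload gets deserialized at all.

        # Minimal sketch of an HMAC-signed cookie (illustrative only).
        import base64
        import hashlib
        import hmac
        import json

        SECRET_KEY = b"server-side secret: anyone who holds this can forge cookies"

        def sign_cookie(payload):
            """Serialize the payload and append an HMAC so it cannot be forged or altered."""
            body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
            mac = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
            return body + "--" + mac

        def verify_cookie(cookie):
            """Return the payload only if the HMAC checks out; otherwise reject it unread."""
            try:
                body, mac = cookie.rsplit("--", 1)
            except ValueError:
                return None
            expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(mac, expected):
                return None  # tampered or forged: never deserialize the body
            return json.loads(base64.urlsafe_b64decode(body))

        if __name__ == "__main__":
            cookie = sign_cookie({"user_id": 42})
            print(verify_cookie(cookie))  # {'user_id': 42}

            # An attacker can replay the cookie verbatim, but swapping in a
            # different payload while reusing the original MAC fails verification.
            forged_body = base64.urlsafe_b64encode(json.dumps({"user_id": 1}).encode()).decode()
            original_mac = cookie.rsplit("--", 1)[1]
            print(verify_cookie(forged_body + "--" + original_mac))  # None

    This is also why the signing keys Wineberg reportedly pulled out of Instagram's infrastructure matter so much: with the key in hand, "signed" stops meaning anything, and in older Rails versions a forged-but-validly-signed session cookie could reportedly be turned into remote code execution through deserialization.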

  • by Anonymous Coward

    Please don't use the word 'scrum'; it conjures up images of project managers and developers furiously masturbating over epics and user stories.
