Google Hands Out Web Security Scanner
An anonymous reader writes "Apparently feeling generous this week, Google has released for free another of their internally developed tools: this time, a nifty web security scanner dubbed skipfish. A vendor-sponsored study cited by InformationWeek discovered that 90% of all web applications are vulnerable to security attacks. Are Google's security people trying to change this?"
Re: (Score:1, Insightful)
You shouldn't.
Same as anyone else, trust the code.
http://code.google.com/p/skipfish
It was linked in the article..?
Re: (Score:2, Funny)
What article?
"The" - is there another?
Re: (Score:3, Funny)
Re:I don't trust it (Score:5, Insightful)
If you want the internet to remain free, you'll have to get off your lazy ass. Start by going and downloading the skipfish source - it's under an Apache license - and audit it for us. Tell us if it's got any phone-home reporting, if it leaves out any major items from its scans, etc.
We all know we should question everything, including Google's intentions. We're pretty smart, we get that. Instead of offering blind, childish rhetoric, you could offer proof and/or solutions. Just sayin'; calling Google a major privacy invader doesn't stop them.
Re: (Score:2)
That would be great, and in the meantime I would say that a tool that checks the security of web applications is a great idea.
I'm working on a semi-public web application used to handle telecom services in hospitals, so it would be a great tool for me to ensure that I have as few holes as possible where malicious persons can cause problems.
It's also a great tool to DoS a site with! (Score:2)
... since it can hit you with up to a couple of thousand requests a second as it tries all sorts of tricks to see where you're vulnerable ...
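If that request rate worries you, the usual defence is throttling in front of the app. Here's a rough token-bucket sketch in Python; the rate numbers and function names are made up for illustration, nothing skipfish-specific:

import time
from collections import defaultdict

RATE = 20.0   # allowed requests per second per client IP (arbitrary example value)
BURST = 40.0  # bucket capacity

_buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow_request(client_ip):
    """Return True if this client is still under its rate limit."""
    b = _buckets[client_ip]
    now = time.monotonic()
    # refill tokens for the time elapsed since the last request
    b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
    b["ts"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # caller should answer 429 or drop the connection

A scanner blasting 2000 requests/sec from one address gets roughly 20/sec through; everyone else is unaffected.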
As Spock would say ... (Score:2)
"At what rate of payment?"
Re: (Score:1)
Re: (Score:2, Interesting)
There's more to the internet than other people's web sites. The design of the web is intended for each server to control and serve its own information. This is broken by the fact that the vast majority of internet users want to share information via the web but do not run their own servers. The web was simply not designed for this use-case and cannot handle it sanely in the case of information that is private to a group of people who do not run their own servers.
That may be a good reason to assert that curr
Oh Please, GIVE IT A REST. (Score:5, Insightful)
Google is one of the most anti-privacy, intrusive evil corporations out there, second only to Facebook. They make a living off the promiscuous sharing of personal data. Why should I trust them?
Have they ever lied to you about what they do? I don't use Google under any misinformed idea that they *don't* track everything I do. I go into it knowing that this *is their business*.
Were you under some other impression?
Re:BS - this is important (Score:5, Insightful)
Google didn't start the censorship in China, it wasn't their idea, and they weren't the first group to comply with what is, in China, local law. They've also been pretty clearly repulsed by the rule, hence the issues they are now having with the Chinese government. They went into a crappy situation thinking that maybe they could improve things, or at least tolerate them until it had enough time to change (and it is just a matter of time, really)... apparently they were wrong, have seen the error of their ways, and are getting the heck out while they still can.
You seem to think that isn't good enough. So do you believe that because a nation makes laws which you don't agree with, private companies should be obligated to violate those laws in those countries? That failure to do so constitutes evil?
You can't possibly think that would end well.
Re: (Score:2, Insightful)
It wasn't script kiddies that attacked Google in China. It was, as they said, a "nation-state" attack, with plants/spies on the inside of Google China. That's why Google is getting consulting from the NSA now. Google can handle any script kiddie, any botnet, any DDoS, any virus. What they don't have skill in is handling nation-state attacks: ones that rely on not just attacking from the outside via the internet, but also attacking simultaneously from the inside with pro spies. The NSA, being in the spook biz,
Re: (Score:1, Troll)
That is like saying that you shouldn't badmouth Hitler, but just not go to Germany in 1942. ;)
Re: (Score:1)
Re:I don't trust it (Score:5, Insightful)
I could just bury your comment by modding you a troll, but I'd rather correct the misinformation.
Microsoft has patents on how to sell your personal information to the highest bidder. Microsoft, Yahoo, and AOL all handed over your personal search histories to the US government. They all play ball in China. Yahoo handed over bloggers to the Chinese government.
Google targets ads to you, but they don't share your personal data with anyone. Google tracks your information to serve up ads, but this is all machine controlled. It isn't like Google employees sit around all day reading your email.
If you don't want Google to have your information, then don't use their services. I happen to really like their services. I want the convenience of being able to get to my mail from any device without having to try and run my own mail server (dealing with SSH attacks, whitelisting, backups, etc. can be a pain). Google provides me a free service I enjoy, and thus I willingly accept the trade-off of targeted ads.
They are VERY upfront about what they do, and they also provide tons of great open source products. They are the primary funder of Firefox, and they fund a decent chunk of Linux development. I'm sick of people calling them evil every single day without providing one single piece of evidence.
Either provide some evidence, or stop spouting FUD and lies. Personally, I'm sick of it.
Re: (Score:2)
One could also go through various proxies and firewalls, but blocking cookies, JavaScript and Flash is enough for most people; anything beyond that is probably overdoing it.
Re: (Score:1)
Re:I don't trust it (Score:4, Informative)
Re: (Score:1, Flamebait)
I like Google and their products. I use them all the time.
But I am concerned about them and every other company which keeps information on me... It's totally out of control.
While I don't have a lot of concern about what Google does with the information today, I do worry about criminals getting hold of the information (if they haven't, it's just a matter of time). And I do worry that the company Google is today will not be the same as the company Google is tomorrow.
I agree with your assertion that you are replyin
Re:I don't trust it (Score:4, Informative)
The ACLU has an interesting video regarding data retention and proliferation: http://www.aclu.org/ordering-pizza [aclu.org]
It's not quite all here yet, but it's definitely not outside the realm of probability.
Re: (Score:1)
Re: (Score:2)
Realistically, we don't have that option. Someone sends me an email from a gmail acct, poof, there I am. And I can't reply without using gmail, because that is all they use.
I do use google products quite a lot, so I'm not trying to hide from them. But they have become so pervasive that it is hard to not use them, even tangentially.
Re:I don't trust it (Score:5, Insightful)
Someone sends me an email from a gmail acct, poof, there I am. And I can't reply without using gmail, because that is all they use.
True, but not really relevant -- if they weren't using Gmail, they'd be using something else. Do you trust Yahoo or Hotmail any more than Google? How about some random ISP?
And it's not like they can track much from that, other than your conversations with someone who already keeps all their other conversations with Google.
Re: (Score:1)
Re: (Score:1, Informative)
Here's your evidence: *.doubleclick.net (e.g., g.doubleclick.net, ad.doubleclick.net) still infests the web with its ads and cookies on a great majority of websites.
They are still using Doubleclick technologies on the web in parallel with their own technologies. Doubleclick was considered "evil" long before it was acquired by Google, and that doesn't change as long as the Doubleclick presence persists on those websites. Check it for yourself--enable your cookies and turn off your ad-blocker--Doublecli
Google API (Score:5, Interesting)
Re:Google API (Score:5, Interesting)
I'd say it's in their best interests to ensure those sites don't all become a liability to each other by way of their centralized cloud.
Given that most websites still use homebrew code and database interactions, and that injected code is the most common route of infection, this only covers a small range of possible attack vectors.
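For anyone wondering what that "injected code" looks like in homebrew database-interaction code, here's a minimal sketch (Python with the standard sqlite3 module; the table and column names are invented for illustration):

import sqlite3

conn = sqlite3.connect("example.db")  # hypothetical database

def find_user_unsafe(name):
    # Vulnerable: user input is pasted into the SQL text, so a value like
    # "x' OR '1'='1" rewrites the query and returns every row.
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized: the value is passed separately from the SQL,
    # so it can never be interpreted as SQL syntax.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

Scanners like skipfish poke at exactly this kind of gap, but as the parent says, it's only one of many attack vectors.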
Double-edged sword (Score:3, Interesting)
Re: (Score:1, Funny)
It is VERY fast; 500 requests/second have been observed against responsive internet servers, 2000/sec on the same LAN...
Wow, it's almost like you read the FAQ [google.com] or something:
500+ requests per second against responsive Internet targets, 2000+ requests per second on LAN / MAN networks...
Re: (Score:2)
Yeah, because no one else can write a C web client any more, only Google.
</sarcasm>
Really, do you work for Fox News or something?
Can someone explain this (Score:2)
When I click on "View a sample screenshot", my browser downloads the damn PNG file instead of simply displaying it like it should. Is it something wrong on Google's side or is it my browser?
Re: (Score:2)
That is weird. Given Google Chrome does it, too, I'd assume it's something wrong on their side.
In particular, the headers for that URL are:
200 OK
Cache-Control: public, max-age=604800
Connection: close
Date: Sun, 21 Mar 2010 11:57:00 GMT
Accept-Ranges: bytes
Age: 18380
Server: DFE/largefile
Content-Length: 146941
Content-Type: image/png
Expires: Sun, 28 Mar 2010 11:57:00 GMT
Last-Modified: Thu, 18 Mar 2010 19:13:33 GMT
Client-Date: Sun, 21 Mar 2010 17:03:20 GMT
Client-Peer: 209.85.225.82:80
Client-Response-Num: 1
Content-Disposition: attachment; filename="skipfish-screen.png"
X-XSS-Protection: 0
In other words, the server is deliberately telling your browser to treat it as an opaque attachment to be downloaded (and saved with that filename), and not something to be displayed.
Re: (Score:2)
Is there any way to work around websites that do that for files that you know your browser can display by itself, such as PDF files?
Re: (Score:2)
The Open in Browser plug-in for Firefox works for files that Firefox supports natively, not sure if it can help with PDFs.
Re: (Score:2)
Yes, but it's annoying enough to be pointless. Your options are pretty much to patch your browser or to set up a proxy that filters that header. Either way, you need to think about how you're going to identify it -- with content-type, or with the filename extension? (I'd suggest content-type.)
Besides which, it actually makes sense to have this functionality. Sometimes, you have a button that says "download" explicitly. In this case, some idiot put the screenshot in the "files" area, which is intended for do
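To make the parent's proxy idea concrete, the filtering step itself is tiny once you decide to key on Content-Type. A rough Python sketch of just that decision (the function name and the list of "renderable" types are my own invention):

RENDERABLE = ("image/", "text/html", "text/plain", "application/pdf")

def filter_headers(headers):
    """Drop Content-Disposition when the body is something a browser can show inline."""
    ctype = headers.get("Content-Type", "").split(";")[0].strip().lower()
    if any(ctype.startswith(prefix) for prefix in RENDERABLE):
        headers = {k: v for k, v in headers.items()
                   if k.lower() != "content-disposition"}
    return headers

With the headers quoted above (Content-Type: image/png), the "Content-Disposition: attachment" line would be removed and the browser would just display the PNG.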
Re: (Score:2)
Well, they are linking to the "downloads" section (check out the downloads section, it's the same URL). It makes sense that the "downloads" section should be serving stuff up as downloads rather than embedded content.
Re: (Score:2)
Yeah, it just doesn't make sense that they put the screenshot in the downloads section.
Re: (Score:2)
Ironically, when I clicked that link, I thought "Woah! The server's trying to send me a file that's not an image! It must be 0wned!"
But I carried on anyway because of my blind faith in all things Google, and was greeted by a rather ugly screenshot. And maybe an infected desktop or something...
The 90% figure is wrong (Score:1)
I peeked at the report, out of curiosity. They don't claim that 90% of web applications are vulnerable; they DO claim that 90% (well, 89%) of all web vulnerabilities are in web applications (which is quite a different thing).
Here it is on GitHub (Score:1)
Skipfish vulnerability scanner (Score:3, Informative)
We configured skipfish and pointed it at our custom platform with full administrator rights, and entered our system's custom file extensions into the skipfish dictionary.
Overall the performance is quite good (>3k HTTP requests per second) after tweaking concurrent connection count. Orders of magnitude better than any scanner we have ever used.
The report UI seemed polished and provided quite a bit of useful data, with summaries and drill-down to detail. It would really help if, instead of simply posting raw request/response data, it would highlight the sections of the response that led it to make an assumption WRT a particular vulnerability.
In terms of scan results, they look for quite a number of common vulnerabilities, and some of the checks are quite creative. I especially liked the check for "interesting" contents. Some of our test data tripped them - this was perfectly reasonable given the content.
We aborted the scanner at the 5 million HTTP request mark, ~20 minutes later.
In terms of actual results against our system, out of the several dozen possible vulnerabilities reported (XSRF, injection, etc.), there were no actual problems discovered - 100% false alarms.
There is something really odd about some of the requests being made. I don't know if it's intentional (to discover bugs), but the folder/file parsing looks to be broken and it's building nonsensical path names with the filename as a subfolder. This seems to be causing most of the UI not to be crawled, since those requests end up in the 404 category. Maybe this is my fault in dictionary configuration, but the system wastes way too many requests throwing the dictionary at each resource and not nearly enough time crawling the site and discovering what's available to exploit.
I then took a cursory glance at the source code: all of the rule checking is hard-coded in C (see analysis.c), which to me seems quite stupid and useless.
The tool is a start, and already better than many freebie tools I have used over the years.
My advice is first and foremost to abstract the analysis details out of the C code. Focus more on crawling, even if it's dynamic content, and bolt in some intelligence/expert system to direct its activities.
Re: (Score:2)
No, he wants the rules moved out of the source code for the same reason that anti-virus definitions are not compiled-in to anti-virus products and Nessus plugins are not compiled-in to Nessus.
New attacks are developed all the time, new vulnerabilities are discovered all the time. Having to write C code for this and re-compile the entire scanner is a massive pain and waste of time. Writing a rule should be quick and easy. And yes, even non-coders (say, sysadmins who may have never touched C or maybe anyth
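To illustrate what "moving the rules out of the C code" could look like in practice: the signatures live in a data file that the scanner loads at startup, the way AV products load definitions. A toy sketch in Python (the rule format is invented, not skipfish's actual design):

import json
import re

# rules.json might contain entries like:
# [{"id": "sql-error", "severity": "high",
#   "pattern": "You have an error in your SQL syntax"},
#  {"id": "stack-trace", "severity": "medium",
#   "pattern": "Traceback \\(most recent call last\\)"}]

def load_rules(path):
    with open(path) as f:
        return [(r["id"], r["severity"], re.compile(r["pattern"], re.I))
                for r in json.load(f)]

def check_response(body, rules):
    """Return (id, severity) for every rule whose pattern matches the response body."""
    return [(rid, sev) for rid, sev, rx in rules if rx.search(body)]

Adding a new detection is then a one-line edit to the rules file, with no recompile of the scanner.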
90% is probably low (Score:1)
I wouldn't be surprised if the actual number is much, much higher. This has always been a problem with software development, I'm not sure why anyone thought it got better when apps became web-based. When your business depends on apps being up and running (or running the newest, coolest features) security is usually not the highest priority.
As a vendor I sit in meetings all the time with app architects and even security people (up to and including CISOs) at some of the biggest corporations in the world who
Many people are working to address app insecurity. (Score:2, Insightful)