
New Web Application Attack - Insecure Indexing

An anonymous reader writes "Take a look at 'The Insecure Indexing Vulnerability - Attacks Against Local Search Engines' by Amit Klein. This is a new article about 'insecure indexing.' It's a good read -- it shows you how to find 'invisible files' on a web server and, moreover, how to see the contents of files you'd usually get a 401/403 response for, using a locally installed search engine that indexes files (not URLs)."
  • by Capt'n Hector ( 650760 ) on Monday February 28, 2005 @08:00PM (#11808226)
    Never give web-executable scripts more permissions than absolutely required. If the search engine has permission to read sensitive documents, and web users have access to this engine... well duh. It's just common sense.
  • by caryw ( 131578 ) <carywiedemann@@@gmail...com> on Monday February 28, 2005 @08:00PM (#11808228) Homepage
    Basically the article says that some site-installed search engines that simply index all the files in /var/www or whatever are insecure, because they will index things that httpd would return a 401 or 403 for. Makes sense. A smarter way to do such a thing would be to "crawl" the whole site on localhost:80 instead of just indexing files; that way .htaccess restrictions and such would be respected throughout (a rough sketch follows this comment).
    Does anyone know if the Google Search Appliance is affected by this?
    - Cary
    --Fairfax Underground [fairfaxunderground.com]: Where Fairfax County comes out to play
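    A minimal sketch of that localhost-crawl approach, assuming only Python's standard library; the start URL, the page limit, and the idea of handing pages to "the real indexer" are placeholders, not anything from the article:

        # Index only what the web server will actually serve: anything answered
        # with 401/403/404 (or never linked) simply never reaches the index.
        import urllib.request
        import urllib.error
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urlparse

        class LinkExtractor(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self.links += [v for k, v in attrs if k == "href" and v]

        def crawl(start="http://localhost/", limit=500):
            seen, queue, index = set(), [start], {}
            while queue and len(seen) < limit:
                url = queue.pop(0)
                if url in seen:
                    continue
                seen.add(url)
                try:
                    with urllib.request.urlopen(url) as resp:
                        body = resp.read().decode("utf-8", errors="replace")
                except urllib.error.URLError:   # 401/403/404, timeouts, etc.
                    continue
                index[url] = body               # hand off to the real indexer here
                parser = LinkExtractor()
                parser.feed(body)
                for href in parser.links:
                    nxt = urljoin(url, href)
                    if urlparse(nxt).netloc == urlparse(start).netloc:
                        queue.append(nxt)
            return index

    Because the crawler goes through httpd like any other client, whatever .htaccess forbids stays out of the index by construction.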
  • News at 11! (Score:3, Insightful)

    by tetromino ( 807969 ) on Monday February 28, 2005 @08:01PM (#11808233)
    Search engines let you find stuff! This is precisely why Google, Yahoo, and all the rest obey robots.txt. Personally, I would be amazed if local search engines didn't have their own equivalent of robots.txt that limited the directories they are allowed to crawl (a minimal check along those lines is sketched below).
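    For what it's worth, a tiny sketch of such a check, assuming Python's standard library and an indexer that fetches over HTTP; the agent name and URLs are made up:

        # Ask robots.txt before the indexer touches a URL.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser("http://localhost/robots.txt")
        rp.read()

        def allowed(url, agent="local-indexer"):
            return rp.can_fetch(agent, url)

        print(allowed("http://localhost/private/report.doc"))  # False if disallowed

    Of course this only helps an indexer that crawls URLs; one that walks the filesystem directly never sees robots.txt at all.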
  • by jacquesm ( 154384 ) <j@NoSpam.ww.com> on Monday February 28, 2005 @08:05PM (#11808260) Homepage
    Sure, and Konqueror never had it :)


    That's all nice and good; personally, I think files that were never meant to be indexed make for the best reading by far!


  • by Eberlin ( 570874 ) on Monday February 28, 2005 @08:08PM (#11808288) Homepage
    The instances mentioned all seem to revolve around the idea of indexing files. Could the same be used for database-driven sites? You know, like the old search for "or 1=1" trick? (A hedged illustration follows this comment.)

    Then again, it's about being organized, isn't it? A check of what should and shouldn't be allowed to go public, some sort of flag so that even if a document shows up in the results, it had better not make its way into the HTML being sent back. (I figure that's more DB-centric, though.)

    Last madman rant -- Don't put anything up there that shouldn't be for public consumption to begin with!!! If you're the kind to leave private XLS, DOC, MDB, and other sensitive data on a PUBLIC server thinking it's safe just because nobody can "see" it, to put it delicately, you're an idiot.
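    To illustrate the "or 1=1" trick the parent asks about, a hedged sketch in Python with sqlite3 as a stand-in backend; the table, column names, and injected string are invented for the example:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE docs (title TEXT, public INTEGER)")
        conn.execute("INSERT INTO docs VALUES ('press release', 1), ('salaries.xls', 0)")

        term = "x' OR '1' LIKE '1"   # attacker-supplied "search term"

        # Vulnerable: concatenation lets the quote break out of the string literal,
        # and the injected OR clause defeats the public = 1 filter.
        rows = conn.execute(
            "SELECT title FROM docs WHERE public = 1 AND title LIKE '%" + term + "%'"
        ).fetchall()
        print(rows)   # both rows, including the non-public one

        # Safer: a bound parameter keeps the term inert.
        rows = conn.execute(
            "SELECT title FROM docs WHERE public = 1 AND title LIKE ?",
            ("%" + term + "%",),
        ).fetchall()
        print(rows)   # nothing matches the literal string

    Same lesson as the indexing case: the search layer has to enforce the same restrictions the rest of the application does.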
  • by XorNand ( 517466 ) on Monday February 28, 2005 @08:09PM (#11808292)
    A smarter way to do such a thing would be to "crawl" the whole site on localhost:80 instead of just indexing files; that way .htaccess restrictions and such would be respected throughout.
    Yes, that would be safer. But one of the powers of local search engines is the ability to index content that isn't linked elsewhere on the site, e.g. old press releases, discontinued product documentation, etc. Sometimes you don't want to clutter up your site with irrelevant content, but you want to allow people who know what they're looking for to find it. This article isn't really groundbreaking. It's just another example of how technology can be a double-edged sword.
  • by WiFiBro ( 784621 ) on Monday February 28, 2005 @08:11PM (#11808316)
    The document's first paragraphs describe how to get at files that are not public. So you also need to move the sensitive files out of the public directory, which is easy but hardly ever done. (You can easily write a script to serve files from non-public directories to those entitled to them; a rough sketch follows this comment.)
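    A rough sketch of such a gatekeeper script, assuming Flask; the directory path and the check_user() rule are placeholders for a real permission lookup:

        from flask import Flask, abort, request, send_from_directory

        app = Flask(__name__)
        PRIVATE_DIR = "/srv/private-docs"   # outside the web root, so httpd never serves it

        def check_user(auth):
            # stand-in for a real user/permission check
            return auth is not None and auth.username == "staff" and auth.password == "secret"

        @app.route("/docs/<path:filename>")
        def private_doc(filename):
            if not check_user(request.authorization):
                abort(401)
            # send_from_directory refuses paths that escape PRIVATE_DIR
            return send_from_directory(PRIVATE_DIR, filename)

        if __name__ == "__main__":
            app.run()

    The files never sit under the document root, so neither httpd nor a filesystem-walking indexer can hand them out by accident.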
  • Re:News at 11! (Score:1, Insightful)

    by WiFiBro ( 784621 ) on Monday February 28, 2005 @08:13PM (#11808335)
    With a scripting language capable of listing directory contents and opening files (PHP, ASP, Python, etc.), anyone can write such a search engine. No degree required. (The few lines below show just how little it takes.)
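    For illustration, a quick-and-dirty indexer of the kind the parent means, sketched in Python; the document root and query string are illustrative. It walks the filesystem directly, so it happily indexes files httpd would answer 401 or 403 for:

        import os

        DOCROOT = "/var/www"

        def build_index(root=DOCROOT):
            index = {}
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, errors="replace") as f:
                            index[path] = f.read()
                    except OSError:
                        pass
            return index

        def search(index, term):
            return [p for p, text in index.items() if term.lower() in text.lower()]

        # search(build_index(), "confidential") turns up protected and unlinked files alike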
  • obvious? (Score:5, Insightful)

    by jnf ( 846084 ) on Monday February 28, 2005 @08:15PM (#11808362)
    I read the article and it seems to be like a good chunk of today's security papers: 'here's a long, drawn-out explanation of the obvious.' I suppose it wasn't as long as it could be, but really... using a search engine to find a list of files on a website? I suppose someone has to document it...

    I mean, I understand it's a little more complex as described in the article -- but I would hardly call this a 'new web application attack'; at best it's one of those humorous advisories where the author overstates things and creates much ado about nothing. Or at least that's my take.

    -1 not profound
  • by Anonymous Coward on Monday February 28, 2005 @08:37PM (#11808525)
    Oh, we are terribly sorry for taking so long!
    Don't worry, we will give you a full refund.
  • by jnf ( 846084 ) on Monday February 28, 2005 @08:40PM (#11808544)
    Thank you. That's the real security risk -- not the indexing agent, but rather: why is there internal documentation marked 'private' or 'confidential' within the webroot of an externally accessible webserver?
  • by Grax ( 529699 ) on Monday February 28, 2005 @08:44PM (#11808579) Homepage
    On a site with mixed security levels (i.e. some anonymous and some permission-based access), the "proper" thing to do is to check permissions on the results the search engine returns (a rough sketch follows this comment).

    That way an anonymous user would see only results for documents that have read permissions for anonymous while a logged-in user would see results for anything they had permissions to.

    Of course this idea works fine for a special purpose database-backed web site but takes a bit more work on just your average web site.

    Crawling the site via localhost:80 is the most secure method for a normal site. This would index only documents available to the anonymous user already and would ignore any unlinked documents as well.
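    A bare-bones sketch of that result filtering, with an invented ACL table and user names standing in for whatever permission model the site actually uses:

        # Drop any hit the requesting user is not allowed to read.
        ACL = {
            "/press/2004-q3.html": {"anonymous", "staff"},
            "/internal/salaries.xls": {"staff"},
        }

        def filter_results(hits, user="anonymous"):
            return [doc for doc in hits if user in ACL.get(doc, set())]

        hits = ["/press/2004-q3.html", "/internal/salaries.xls"]
        print(filter_results(hits))             # anonymous sees only the press release
        print(filter_results(hits, "staff"))    # staff sees both

    The point is simply that the check happens at query time, against the caller's identity, not once at indexing time.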
  • This is old. (Score:4, Insightful)

    by brennz ( 715237 ) on Monday February 28, 2005 @09:29PM (#11808881)
    Why is this being labeled as something new? I remember this being a problem back in 1997 when I was still working as a webmaster.

    Whoever posted this as a "new" item is behind the times.

    OWASP covers it! [owasp.org]

    Let's not rehash old things!

  • by Anonymous Coward on Monday February 28, 2005 @09:45PM (#11808994)
    Give me a freaking break. This is the same guy who found the "HTTP RESPONSE SPLITTING" vulnerability -- last year's catchphrase among the wankers at Ernest and Young and Accidenture, the same type of people who consider an HTTP TRACE XSS a vulnerability. I guess it's been a slow freaking year for security research.

    Amit Klein at least used to work for Watchfire, formerly known as Scrotum (Sanctum), the same company that tried to patent the application security assessment process. I guess it's been a really slow year for vulnerability research. They need new terminology to scare the executives at Fortune 500 corporations and sell their useless products.

    People tend to forget that to compromise data, it's easier to steal the tape from the back of a plane than it is to hack up some stupid search engine.
  • solution (Score:3, Insightful)

    by Anonymous Coward on Monday February 28, 2005 @09:58PM (#11809064)
    Here's a solution that's been tried and seems to work: create metadata for each page as an XML/RDF file (or DB field). XPath can be used to scrape content from HTML et al. to automate the process, as can capture from a CMS or other doc-management solution. Create a manifest per site or sub-site that is an XML-RDF tree structure containing references to the metadata files and mirroring your site structure. Finally, assuming you have an API for your search solution (and don't b*gger around with ones that don't), code the indexing application to parse only the XML-RDF files, beginning with the structural manifest and then down into the metadata files. Your index will then contain relevant data, site structure, and, thanks to XPath, hyperlinks for the web site. No need to directly traverse the HTML. Still standards-based. Security perms only need to allow the indexer access to the XML-RDF files, which means process perms only are needed; user perms are irrelevant. (A rough sketch of this pipeline follows the comment.)

    There are variations and contingencies, but the bottom line is: even if someone cracked into the location of an XML metadata file, it's not the data itself, and while it may reveal a few things about the page or file it relates to, it is, bottom line, much less of a risk than full access to other file types on the server.

    Here's another tip for free: because you now have metadata in RDF, with a few more lines of code you can output it as RSS.
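    A loose sketch of that pipeline, assuming lxml for the XPath scraping; plain XML stands in for the RDF the parent describes, and every path and element name here is invented:

        # Pull title/description out of each page with XPath, write a small
        # per-page metadata file, and let the indexer read only those files.
        import glob
        import xml.etree.ElementTree as ET
        from lxml import html

        def extract_metadata(html_path):
            doc = html.parse(html_path)
            title = (doc.xpath("string(//title)") or html_path).strip()
            desc = doc.xpath("string(//meta[@name='description']/@content)").strip()
            return title, desc

        def write_metadata(html_path, out_path):
            title, desc = extract_metadata(html_path)
            page = ET.Element("page", href=html_path)
            ET.SubElement(page, "title").text = title
            ET.SubElement(page, "description").text = desc
            ET.ElementTree(page).write(out_path, encoding="utf-8", xml_declaration=True)

        def index_metadata(pattern="/srv/metadata/*.xml"):
            # the search engine only ever opens these files, never the originals
            index = {}
            for path in glob.glob(pattern):
                root = ET.parse(path).getroot()
                index[root.get("href")] = (root.findtext("title"), root.findtext("description"))
            return index

    The same per-page elements map straight onto RSS item titles and descriptions, which is the "few more lines of code" the parent mentions.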
  • by tagish ( 113544 ) on Tuesday March 01, 2005 @04:24AM (#11810726) Homepage
    Bleedingly obvious, and written in a sufficiently pompous style that you feel obliged to read the whole thing just to verify that there really is nothing there that hasn't been common knowledge for the better part of the last decade.

    Of course in those days people actually built their sites using static HTML...
  • by Anonymous Coward on Tuesday March 01, 2005 @04:26AM (#11810733)
    Anything I put on a publicly accessible web server, I want publicly accessible, and I want it to be as easily accessed as possible.

    Anything else goes on a pocket network or not at all.

    The only exception would be an order form, and that will be very narrowly designed to do exactly one thing securely.
  • by DrSkwid ( 118965 ) on Tuesday March 01, 2005 @12:24PM (#11813042) Journal
    Incidentally, it also breaks properly-designed retrieval mechanisms

    If they break, how can they be properly designed?
