Google Goofs On Firefox's Anti-Phishing List

Stephen writes "While phishing is a problem, giving one company the power to block any site that it wishes at the browser level never seemed like a good idea. Today Google blocked a host of legitimate web sites by listing mine.nu. mine.nu is available as a dynamic DNS domain, and anybody can claim a subdomain. All subdomains are blocked regardless of whether phishing actually occurs on them or not. Several Linux enthusiast sites are caught up in the net, including Hostfile Ad Blocking and the Berry Linux Bootable CD."
This discussion has been archived. No new comments can be posted.


  • Good idea? (Score:5, Interesting)

    by grasshoppa ( 657393 ) on Sunday September 21, 2008 @02:17PM (#25095549) Homepage

    While phishing is a problem, giving one company the power to block any site that it wishes at the browser level never seemed like a good idea

    Actually, giving a single company this kind of authority is usually not a bad idea. Spamhaus and email, for example.

    The issue is about trust. Even with this goof-up, I trust Google (although their response to this could change that). Hell, I trust MS here too, to a limited extent.

    • Re:Trust (Score:5, Insightful)

      by Bieeanda ( 961632 ) on Sunday September 21, 2008 @02:27PM (#25095663)
      Yeah. While I reflexively rankle at the idea of blocking a whole swathe of domains like that, it's unfortunately clear that services like dyndns and mine.nu are going to be overrun with phishers and scammers because they're just as convenient to them as they are to non-malicious Internet users.
      • Re:Trust (Score:5, Insightful)

        by calmofthestorm ( 1344385 ) on Sunday September 21, 2008 @02:46PM (#25095895)

        We need to educate users to check the URL before entering anything. Any time you rely on a technological solution to a social problem you end up with woes.

        • Re:Trust (Score:5, Insightful)

          by santiagodraco ( 1254708 ) on Sunday September 21, 2008 @03:41PM (#25096399)

          It's just not going to happen. We like to think that "everyone" is capable of understanding what is going on when they browse the web, but that's wishful thinking.

          It will be a LONG time until you can ever hope that the general public is as smart as the malicious few out there. Until then, technological solutions will continue to be needed, desired, and our best bet at combating this. Hell, they always will be.

        • Comment removed based on user account deletion
          • Maybe, but I'm still against the false sense of security these "anti-phishing" tools provide. And although I see it may be a necessary evil, it bugs me how many legitimate sites are going to be burned by this.

      • by spazdor ( 902907 )

        My position is that dynamic DNS services have nothing to do with phishing and scamming. Either way the URL is phony, so there's not much practical difference between running a fake Hotmail site at http://h0tm4il.mine.ru/ and running it at http://24.64.197.48/ instead. There aren't many people out there who would be fooled by one but not the other.

        • Re: (Score:3, Funny)

          by caluml ( 551744 )
          Well, there's a pair of boobies [sedoparking.com] poking out at you on the mine.ru page.
          • Actually, only the torso to which said boobies are attached. The rest of the body is not visible, so we don't really know whether she's human or some alien race we haven't met before.

        • Maybe not h0tm4il, but what about hotmailsecure.mine.ru, or www.hotmail.mine.ru? Not everyone knows that Russia is a key area for phishing, and almost nobody non-technical would make the link between .ru and Russia anyway. Most people wouldn't be able to tell you how a URL is formed (the mix of little-endian hostnames and big-endian paths can be very confusing), and even if you do get the basic concepts, there are other techniques that can be used to obfuscate the URL.
      • Comment removed based on user account deletion
        • Re:Trust (Score:5, Insightful)

          by GIL_Dude ( 850471 ) on Sunday September 21, 2008 @03:19PM (#25096189) Homepage
          I don't know anything about the FWT site; it may be fine. However, do remember that just because a site is trustworthy over time doesn't mean it is trustworthy today, on this visit.

          I just had that driven home for me the other day. In my off time, I am a youth soccer coach. The website for our league has been fine for several years. Last week I visited it and got the malware warning from Firefox. I checked with the webmaster and, sure enough, they had been hit with an SQL injection attack and did indeed have malware of some sort hosted on the site.

          So, FWT may be a false positive - but it is at least possible that they also got successfully attacked.

          We really don't have a good system to evaluate trust on the fly due to the dynamic nature of internet content. A page that was fine 20 minutes ago may attack you now.
          • Comment removed based on user account deletion
            • I don't think it's a very bright idea to visit a reported attack site regardless of what browser and security add-ons you have. I'd leave it on and not use FWT until the alert goes away (presumably it's there because the site was hacked or is serving an evil third-party advertisement).
    • Actually, giving a single company this kind of authority is usually not a bad idea. Spamhaus and email, for example.

      I respectfully disagree. Giving a single, unaccountable group the effective power to completely kill some domain's e-mail is a bad idea, too. It's far too easy to game any one blacklist, and it's far too hard to get a domain that was added incorrectly (or that has been taken over by someone new who has no connection to the previous registrant) removed from the list again. I don't believe any sysadmin worth their salt filters based only on input from a single blacklist.

    • Actually, giving a single company this kind of authority is usually not a bad idea. Spamhaus and email, for example.

      Here's a suggestion that might help you in future debates. If you're going to provide an example to support your argument, it shouldn't be one that proves the other side's point. Spamhaus and all the other email blacklists are run by a bunch of power-hungry nerds and should never be used. Giving any single organization that much control over your Internet is just setting yourself up to be abused.

  • by Restil ( 31903 ) on Sunday September 21, 2008 @02:24PM (#25095625) Homepage

    Granted, I can see there are opportunities for abuse here, but if the owners of dynamic dns domains don't properly police their "customers" and spammers and/or other malicious websites start using it, then Google has every right to blacklist the entire domain. Of course, it's arguable exactly how much can be done to prevent it, but if you're really concerned about not getting your site blocked, go ahead and blow the $7 a year on your own domain, or use a smaller ddns service that can actually pay attention to the nature of the hosts it's serving.

    As far as having any one third party responsible for maintaining a blacklist, exactly how else do you intend to do it? You can always create your own blacklist, but that would first require you to "enjoy" the sites you would prefer get blocked automatically. You'll just have to trust someone to make that reasonable decision for you. Sure, there will be some mistakes, but that's the price you pay for protection.

    -Restil

    • by ccguy ( 1116865 ) * on Sunday September 21, 2008 @02:43PM (#25095865) Homepage

      Granted, I can see there are opportunities for abuse here, but if the owners of dynamic dns domains don't properly police their "customers" and spammers and/or other malicious websites start using it, then Google has every right to blacklist the entire domain.

      Countries have been banned from sites, email, IRC channels and so on with this argument.

      Just so you know, some ISPs have de facto monopolies in their countries, and everyone there gets the same domain. Any idiot who says 'let's ban *.il, or *.es, because I got 10 spam messages from there' should be fired on the spot.

      In fact, if he works at Google, whoever hired him should be fired, too.

      • by caluml ( 551744 ) on Sunday September 21, 2008 @04:06PM (#25096651) Homepage
        Sorry dude. I block whole netblocks that I/we don't have any business with, and that fill up my logs with annoying connection attempts, and portscans, etc. I'll show you my method for blocking about 80% of probes, scans, password guessing bots, etc:

        # List of APNIC /8 blocks generated with:
        #   wget -o /dev/null -O - http://www.iana.org/assignments/ipv4-address-space/ | grep whois.apnic.net | grep ALLOCATED | cut -d " " -f 1 | xargs
        # (that prints prefixes like 058/8, so they need expanding to 58.0.0.0/8 by hand)
        fw=/sbin/iptables   # or wherever your iptables binary lives
        for asia in 58.0.0.0/8 59.0.0.0/8 60.0.0.0/8 61.0.0.0/8 112.0.0.0/8 113.0.0.0/8 114.0.0.0/8 115.0.0.0/8 116.0.0.0/8 117.0.0.0/8 118.0.0.0/8 119.0.0.0/8 120.0.0.0/8 121.0.0.0/8 122.0.0.0/8 123.0.0.0/8 124.0.0.0/8 125.0.0.0/8 126.0.0.0/8 202.0.0.0/8 203.0.0.0/8 210.0.0.0/8 211.0.0.0/8 218.0.0.0/8 219.0.0.0/8 220.0.0.0/8 221.0.0.0/8 222.0.0.0/8
        do
            $fw -A INPUT -s $asia -j DROP
        done

        I don't get why you are getting annoyed that I (and probably many others) do things like this?

        • by ccguy ( 1116865 ) *

          I block whole netblocks that I/we don't have any business with,

          Until you happen to admin a major mail provider, I couldn't care less.

        • Re: (Score:3, Informative)

          by novakreo ( 598689 )

          Sorry dude. I block whole netblocks that I/we don't have any business with, and that fill up my logs with annoying connection attempts, and portscans, etc. I'll show you my method for blocking about 80% of probes, scans, password guessing bots, etc:

          # wget -o /dev/null -O - http://www.iana.org/assignments/ipv4-address-space/ | grep whois.apnic.net | grep ALLOCATED | cut -d " " -f 1 | xargs
          # need to add in .0.0.0 though
          for asia in 58.0.0.0/8 59.0.0.0/8 60.0.0.0/8 61.0.0.0/8 112.0.0.0/8 113.0.0.0/8 114.0.0.0/8 115.0.0.0/8 116.0.0.0/8 117.0.0.0/8 118.0.0.0/8 119.0.0.0/8 120.0.0.0/8 121.0.0.0/8 122.0.0.0/8 123.0.0.0/8 124.0.0.0/8 125.0.0.0/8 126.0.0.0/8 202.0.0.0/8 203.0.0.0/8 210.0.0.0/8 211.0.0.0/8 218.0.0.0/8 219.0.0.0/8 220.0.0.0/8 221.0.0.0/8 222.0.0.0/8
          do
          $fw -A INPUT -s $asia -j DROP
          done

          I don't get why you are getting annoyed that I (and probably many others) do things like this?

          Your rule blocks most Australian IP addresses, for starters.

          • by caluml ( 551744 )
            I actually was worried about this, but from what I can tell (resolving some Australian sites, www.gov.au, etc., and checking that they didn't fall in the list), they don't get caught under this system. I can't remember why now - I know you're thinking that APNIC do Australia too, which they do, but the "grep ALLOCATED" misses them - they are listed as "LEGACY" in there.
            But if you know of any Australian netblocks I've caught, please let me know.
            • Every IP address I can ever remember having falls in one of those 'ALLOCATED' blocks. In particular, 61/8, 121/8, 203/8, 210/8, and 211/8, but there are definitely more.

              I guess by checking .gov.au sites and the like, you've only found organizations who jumped on the internet bandwagon pre-APNIC.

              • by caluml ( 551744 )
                Yeah, that's what I was banking on - that most ranges would be "LEGACY". Hmm. This looks quite promising. http://www.ipdeny.com/ipblocks/data/countries/au.zone [ipdeny.com]

                Could you have a look in there, and see if netblocks you know are in there?
                • That list looks pretty comprehensive, at least for the handful of ISPs I've used.
                  I trust you'll be adding firewall exceptions for other APNIC states such as New Zealand, right?

                  Nonetheless, I still think this kind of blocking is a bad idea. It relies upon an up-to-date list of netblocks, and you'll never know if a legitimate customer from a netblock you've deemed suspicious has simply taken their business elsewhere. But that's for you to worry about, not me.
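
                  For anyone who does go the country-zone route, the usual trick is to ACCEPT the zones you care about ahead of the blanket drops. A rough sketch only (ipdeny's files are one CIDR block per line, and $fw and the paths are the same kind of local assumption as in the script above):

                  # Whitelist Australian and New Zealand ranges before the broad APNIC drops.
                  fw=/sbin/iptables   # adjust to wherever iptables lives
                  for zone in au nz; do
                      wget -q -O - "http://www.ipdeny.com/ipblocks/data/countries/$zone.zone" |
                      while read net; do
                          $fw -A INPUT -s "$net" -j ACCEPT
                      done
                  done
                  # ...and only then append the /8-wide DROP rules, so the ACCEPTs match first.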

    • Google links to an enormous number of malicious sites. Should they be blocked in all web browsers for failing to police all of the sites they point people to? Can we really trust their competence in this situation if they just copy and paste sites into the block list (not to mention copying and pasting EULAs) without actually looking at what they're blocking? How do we know they don't just look at certain top-level domains and assume nothing of value would come from that area?
    • by sjames ( 1099 )

      Granted, I can see there are opportunities for abuse here, but if the owners of dynamic dns domains don't properly police their "customers" and spammers and/or other malicious websites start using it, then Google has every right to blacklist the entire domain. Of course, it's arguable exactly how much can be done to prevent it, but if you're really concerned about not getting your site blocked, go ahead and blow the $7 a year on your own domain, or use a smaller ddns service that can actually pay attention to the nature of the hosts it's serving.

      Of course, .com seems even more popular for abuse. Shall we block it?

      I definitely do NOT trust any single entity to make the right decision. No matter who it is or how well-intentioned it starts out, eventually some combination of power trip and laziness takes over. Next thing, the standard of evidence becomes "hearsay is good enough".

      For email, I take a poll of several RBLs. Anyone can land in a single RBL through collateral damage or other screwups. Landing in 3 or 4 generally indicates a real spammer.
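
      A minimal sketch of that kind of poll, assuming a one-off shell check rather than a real MTA's RBL support, an example address, and a handful of well-known DNSBL zones:

      # Count how many DNSBLs list a given IP; one hit is noise, several looks real.
      ip=192.0.2.1   # address to check (example value)
      rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
      hits=0
      for zone in zen.spamhaus.org bl.spamcop.net dnsbl.sorbs.net psbl.surriel.com; do
          if host "$rev.$zone" >/dev/null 2>&1; then
              hits=$((hits + 1))
              echo "listed in $zone"
          fi
      done
      echo "$ip is listed in $hits zone(s)"
      [ "$hits" -ge 3 ] && echo "looks like a real spammer" || echo "probably collateral damage"

      A real mail setup would let the MTA or SpamAssassin do the lookups and scoring, but the principle is the same: one listing is noise, agreement across several lists is signal.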

  • If people think this is a useful service, split it off, or ask someone like Spamhaus to do it, and add some more checks and balances.

    Better yet, release the code to the web service, and allow any sysadmin to host the server-side portion themselves, of course with the ability to update from a central list and accept 0% - 100% of a given list as they see fit.

  • Great, if the blocked site makes use of frames, you just can't bypass the warning. And there's no way to permanently unblock a site...

    <sarcasm>I feel safer already</sarcasm>
  • by Anonymous Coward on Sunday September 21, 2008 @02:28PM (#25095677)

    In my mind, giving this power to Google is the most objectionable thing related to the company. I know somebody who has had his legitimate business ruined because Google mistakenly added his site to this list. Why? Because it was hosted on the same physical server as a truly objectionable web site.

    People need to stop childishly sneering at Windows users and take their focus away from Microsoft. The terrible Goliath is clearly Google now. Even when it's not being evil it causes trouble just by being *clumsy*.

    • Comment removed (Score:4, Insightful)

      by account_deleted ( 4530225 ) on Sunday September 21, 2008 @02:32PM (#25095743)
      Comment removed based on user account deletion
      • by Anonymous Coward on Sunday September 21, 2008 @02:36PM (#25095781)

        What? How can you misunderstand everything quite so much?

        No, Google doesn't filter by IP address. But because the site was hosted on the same server as a bad site, it added a URL block for the innocent one too. Do you see?

        Secondly, the issue isn't about me using Firefox/Google. It's about customers who did and were told that the site they had browsed to was malicious. The business lost a valuable customer this way and folded.

        • Comment removed (Score:4, Insightful)

          by account_deleted ( 4530225 ) on Sunday September 21, 2008 @02:47PM (#25095905)
          Comment removed based on user account deletion
  • first time (Score:5, Interesting)

    by Toveling ( 834894 ) * on Sunday September 21, 2008 @02:29PM (#25095681)
    This is the first time we've heard about Google (or anyone else) making a bad block. As long as Google fixes this expeditiously, I'd say it's an acceptable margin of error, and the number of phishing sites blocked is by far worth it. Now, if Wikileaks suddenly gets blocked for 'phishing', something is definitely awry.
    • What makes you think that Google will change their minds? They have automated the collection of information.

      Google information for jumpbump.mine.nu:
      "Of the 4329 pages we tested on the site over the past 90 days, 0 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 09/21/2008, and suspicious content was never found on this site within the past 90 days.

      Malicious software includes 7523 scripting exploit(s), 2911 trojan(s). Successful infection resulted in an average of 0 new processes on the target machine."

      • by Tacvek ( 948259 )

        The big reason I think they will change is the fact that they have already de-blocked mine.nu.
        I think (hope) they may have placed the site on a list of sites to block only at the third-level domain, not the second level. It may take time for the block list to be purged from browsers. On the other hand, my copy of Fx never got the version of the list with mine.nu included. I base the de-blocking on the removal of the warning page from clicking on the link, and the notice that the site is not currently listed w

    • Basically any site that includes a forum can get blocked if someone in the forum links to something considered malware. I actually have no idea how places like Slashdot haven't gotten blocked for that (maybe they special-case high-profile sites?), but a bunch of smaller sites with forums like ratebeer [ratebeer.com] and Gamasutra [gamasutra.com] have gotten blocked repeatedly.

  • The summary reads as though it was Google's fault that the entire domain was blacklisted, while it's more of a Mozilla issue. Mozilla releases this list of "Attack Sites" and Google Search automatically blocks them. Even if I get to the site without Google, FF3 still lists it as dangerous and warns me.
    If anyone should receive blame (which IMO they shouldn't), it's Mozilla and their blacklist.
  • I dunno how much good it could do, but I suppose people could do the "Report Incorrect Forgery Alert [google.com]" thing. I'd think it really would be better if they added the malicious subdomains individually, rather than blocking the entire domain, which (I'd guess) contains legitimate, or at least non-harmful, sites as a majority.

    (Oh, and btw, here's Google's Safe Browsing report for mine.nu [google.com].)

  • by Mr. Gus ( 58458 ) on Sunday September 21, 2008 @02:33PM (#25095749) Homepage

    Any maintained blacklist of any reasonable size is going to end up with false positives. It's one of those things you just have to accept. People notice and report it, the entry gets removed, and we move on.

    • Re: (Score:2, Informative)

      by fxkr ( 1343139 )

      Any maintained blacklist of any reasonable size is going to end up with false positives. It's one of those things you just have to accept. People notice and report it, the entry gets removed, and we move on.

      *If* the entry gets removed.

  • by Anders ( 395 ) on Sunday September 21, 2008 @02:50PM (#25095929)

    Note that the anti-phishing feature makes Firefox slow [opensuse.org] over time.

  • by CSMatt ( 1175471 ) on Sunday September 21, 2008 @02:52PM (#25095949)

    Putting anti-phishing filters into browsers just shifts the responsibility of good security practices from the user to some blacklisting company. What incentive is there to be wary of suspicious sites if you can count on the almighty Google to hold your hand while you browse the Web? This makes about as much sense as someone installing parental controls on their machine and declaring that their Internet connection is now "kid-friendly."

    I've never had these filters turned on, and I've never exposed my financial data to others by accident. Usually this has something to do with me hovering the mouse over links and checking the URL in the status bar.

  • by Animats ( 122034 ) on Sunday September 21, 2008 @02:54PM (#25095965) Homepage

    If you're serious about blocking phishing sites, you have to accept some collateral damage. Blocking by URL stopped working last year; most attacks have unique URLs now. Many have unique subdomains. So you have to block at the second-level domain level to be effective.

    We publish a list of major domains being exploited by phishing scams. [sitetruth.com] Today, there are 46 domains listed. eBay, for example, is on the list, because eBay has an open redirector exploit. [ebay.com] Click on that URL. It says "ebay.com", right? It looks like eBay, right? It's not.

    On the other hand, "tinyurl.com", which used to be popular with phishers, has been able to get off the blacklist by cracking down on misuse of their service. It's possible to do redirection competently.

    When we started our list last year, it had about 175 exploited domains. After some serious nagging and an article in The Register, we're down to 46. And only 11 have been on the list for more than three months; the others come and go as exploits are reported and holes plugged. So this is a problem that can be solved.

    I'm glad to see Google taking a hard line on this. It's necessary that sites that do redirection feel the pain when they accept redirects to hostile sites. Google can apply much more pain than we can. Few sites will want to be on Google's blacklist for long.
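
    For what it's worth, here is roughly what blocking at the second level means in practice. A toy sketch only: naive last-two-label matching, made-up hostnames, and none of the Public Suffix List handling a real list needs for things like .co.uk:

    # Once a second-level domain is listed, every subdomain under it matches.
    blacklist="mine.nu phish-example.com"   # hypothetical list entries
    for host in berry.mine.nu hostfile.mine.nu www.hotmail.com login.phish-example.com; do
        sld=$(echo "$host" | awk -F. '{print $(NF-1) "." $NF}')
        verdict=allowed
        for bad in $blacklist; do
            [ "$sld" = "$bad" ] && verdict=BLOCKED
        done
        echo "$host -> $verdict"
    done

    Which is exactly why the legitimate mine.nu subdomains from the summary get swept up along with any actual phish.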

    • If you're serious about blocking phishing sites, you have to accept some collateral damage. Blocking by URL stopped working last year; most attacks have unique URLs now. Many have unique subdomains. So you have to block at the second-level domain level to be effective.

      This line of reasoning ends only when the whole net is blocked.

      • by SnowZero ( 92219 )

        This line of reasoning ends only when the whole net is blocked.

        There are shades of gray, and you don't have to pick one of two extremes[1]. You can ban nuclear bombs without banning pocket knives, even though they might both be weapons someone would like to own.

        [1] It might not seem like that in an election year though.

        • Unfortunately, if you've got a bias to one side or another and your "shade of gray" solution is ineffective, the tendency is to keep moving towards the extreme.

      • by Animats ( 122034 )

        This line of reasoning ends only when the whole net is blocked.

        No. That was the conventional wisdom when we (SiteTruth) started putting out that report. We originally thought that thousands of domains might be on that list. But no. The number of well-known domains (and we're using Open Directory, which is 1.4 million or so domains, to define "well known") being exploited stays around 50 ± 25, and as previously mentioned, only 11 of them have been on the list for more than three months. It's nece

  • by TheDarkener ( 198348 ) on Sunday September 21, 2008 @02:57PM (#25095997) Homepage

    This strikes me as the first time Firefox really pushed something out by default that shouldn't have been. Just for one example, people who are on LTSP networks, say, 200 users, will ALL download anti-phishing and anti-malware blacklists from Google, each into their own home directory. There's no way that I know of, anyway, to share this data - SQLite seems to make it impossible. That's the first mistake in creating a compatible, light web browser.

    The second mistake is enabling website blocking based on 3rd-party blacklists by default. This is basically Microsoft UI thinking - "You *need* this because you don't know any better." Screw that. I mean, make it a checkbox on setup - "Use Google-provided anti-malware blacklists." Simple as that. I spent weeks trying to find out why, after just a few Firefox instances were launched on an LTSP server, no more would load - part of it was that every user logging in was trying to download the anti-malware stuff from Google, saturating the line and preventing Firefox from loading for the first time.

    I hope the Firefox devs will take all scenarios into account when making changes. It seems lame that every user needs all of the stuff in places.sqlite. And even if you argue with that, at the LEAST make it cross-DB compatible, so you can put everyone's in a nice big central MySQL database.
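
    In the meantime, an admin on a shared LTSP box can at least get that checkbox back at the machine level by shipping site-wide prefs. A sketch, assuming Firefox 3's safebrowsing pref names and the standard AutoConfig mechanism; the install path is illustrative:

    # Run once as root on the LTSP server; every user then inherits these settings.
    FFDIR=/usr/lib/firefox   # adjust to your distro's Firefox directory

    # Point Firefox at a site-wide AutoConfig file...
    printf '%s\n' \
        'pref("general.config.filename", "mozilla.cfg");' \
        'pref("general.config.obscure_value", 0);' \
        > "$FFDIR/defaults/pref/autoconfig.js"

    # ...and put the actual settings in it (the first line must be a comment).
    printf '%s\n' \
        '// site-wide defaults for LTSP users' \
        'lockPref("browser.safebrowsing.enabled", false);' \
        'lockPref("browser.safebrowsing.malware.enabled", false);' \
        > "$FFDIR/mozilla.cfg"

    It doesn't solve the per-user database duplication, but it does put the decision back in the admin's hands (use defaultPref() instead of lockPref() if users should be able to turn the blacklists back on).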

    • Re: (Score:3, Insightful)

      by Karellen ( 104380 )

      "There's no way that I know of, anyway, to share this data - SQLite seems to make it impossible."

      Well, I doubt it's SQLite that makes it impossible; it's more that you don't want ordinary users writing to a single shared blacklist. Because if a user can download and write good data to it, they can write bad data to it.

      Suddenly all it takes is for one user to click on the dancing bunnies, and they're running a daemon without knowing it that writes bad data to the blacklist, monitors the list for changes, and

      • *snip* ...and rewrites it if any of the other users change it back to what it "should" be. That fucks things up for *everyone*, which kind of defeats the whole idea of having separate user accounts that protect everyone from each other.

        I think you're misunderstanding the usage of FF's anti-phishing blacklists. Think of it as anti-virus definitions. You only need ONE copy. See http://www.mozilla.com/en-US/firefox/phishing-protection/ [mozilla.com] for more information. Downloading individual blacklists per-user would be l

        • "Think of it as anti-virus definitions. You only need ONE copy."

          Yes, but how is that one copy updated? If it's not by a central daemon/service that runs even if no-one is logged in, then it has to be run by a user while they're running Firefox. If that is the case, that user needs write access to the shared database in order to write the updated definition. In which case, if you have a malicious user (or code running as a malicious user, thanks to a dancing bunnies error) who can write to the database, they

    • Comment removed based on user account deletion
  • by RAMMS+EIN ( 578166 ) on Sunday September 21, 2008 @03:08PM (#25096119) Homepage Journal

    Never ascribe to malice what can be equally ascribed to incompetence.

    The corollary of this is, of course, that you should still be wary of single points of failure, even if you do not believe they will fail you on purpose.

  • by lattyware ( 934246 ) <gareth@lattyware.co.uk> on Sunday September 21, 2008 @03:30PM (#25096301) Homepage Journal
    Shit happens. Yes, it sucks, but it happens. Now, should we try to blow up the Googleplex? No. Google are not blocking based on a secret agenda here, and you can bypass it or turn off the feature. OK, it'd be nice if you could choose who provides the service, but overall, it's not that big a deal.
  • by LingNoi ( 1066278 ) on Sunday September 21, 2008 @04:10PM (#25096691)

    Safe Browsing
    Diagnostic page for mine.nu/

    What is the current listing status for mine.nu/?

            Site is listed as suspicious - visiting this web site may harm your computer.

            Part of this site was listed for suspicious activity 3 time(s) over the past 90 days.

    What happened when Google visited this site?

            Of the 4329 pages we tested on the site over the past 90 days, 0 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 09/21/2008, and suspicious content was never found on this site within the past 90 days.

            Malicious software includes 7523 scripting exploit(s), 2911 trojan(s). Successful infection resulted in an average of 0 new processes on the target machine.

    Has this site acted as an intermediary resulting in further distribution of malware?

            Over the past 90 days, mine.nu/ appeared to function as an intermediary for the infection of 183 site(s) including culportal.info, mipt.ru, baikal-discovery.ru.

    Has this site hosted malware?

            Yes, this site has hosted malicious software over the past 90 days. It infected 932 domain(s), including bernard-becker.com, mipt.ru, dhammasara.com.

    How did this happen?

            In some cases, third parties can add malicious code to legitimate sites, which would cause us to show the warning message.

    Next steps:

            * Return to the previous page.
            * If you are the owner of this web site, you can request a review of your site using Google Webmaster Tools. More information about the review process is available in Google's Webmaster Help Center.
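
    If you want to check any other host the same way, the diagnostic page above can be fetched directly; a one-liner sketch, assuming the URL format Google currently uses for these pages:

    # Fetch the Safe Browsing diagnostic page for a given site (default: mine.nu).
    site=${1:-mine.nu}
    wget -q -O - "http://www.google.com/safebrowsing/diagnostic?site=$site"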

  • First of all, let me point out that I started PhishTank.com, which is a free, community-managed version of what Google's anti-phishing service does. Our service is in use by OpenDNS, Yahoo Mail, Kaspersky and countless other large and small companies (and researchers, too), so my thoughts are both highly informed and admittedly biased.

    The main issue comes down to control. When something is blocked incorrectly, as it inevitably will be, do you have the ability to bypass it easily? If you are the webmaster, d
  • The biggest problem for me isn't the default blocking and a need for me to manually verify if I do indeed want to visit the site after seeing the warning. It's that I can't then tell it to go away. Even if I click "ignore", it'll then load the site, but it'll pop up the red block screen every single time I click on another link to another part of the site. It also throws away POST data when doing this, so I can't use search features on sites. There's no way to add an exception, like "foo.com really is OK, I

  • I do not see any proof, or any attempt to prove, that the information is harmful (to a child).

    Never mind, people just use their power. Do you?
