Security | The Internet | IT | Technology

Web Hosts — One-Stop-Shops For Mass Hacking?

jjp9999 writes "More than 70,000 websites were compromised in a recent breach of InMotion. Thousands of sites were defaced, and others were altered to make it harder for users to access their accounts and repair the damage. A similar attack hit JustHost back in June, and in a breach of Australian Web host DistributeIT just prior to that, hackers completely deleted more than 4,800 websites that the company was unable to recover. The incidents raise concern that hacker groups are bypassing individual targets and hitting Web hosts directly, giving them access to tens of thousands of websites at once. While the attacks caused damage, they weren't as malicious as they could have been: rather than defacing and deleting, the hackers could have quietly planted malware on the sites or stolen customer data. Web hosting companies could be one of the largest holes in non-government cybersecurity, since malicious hackers can gain access through openings left by the Web host, regardless of the security of a given site."

Comments Filter:
  • "incidents raise concern" -> as if this is something new ? it has been so since internet had become available for masses to host websites personally. anyone who had remotely got affiliated with hosting industry knows that.

    why the fuck is this submitted and accepted as if it is something new ?
    • "incidents raise concern" -> as if this is something new ? it has been so since internet had become available for masses to host websites personally. anyone who had remotely got affiliated with hosting industry knows that.

      Yup. And this is one of the reasons I host my own site myself, at home. Where there have been no intrusions (not yet, anyway). Where the backup system works, with an off-site copy updated weekly. It's not a very important site to anyone else (typically only 30 GB/month in traffic), but it's important enough to me that I look after it.

      My reaction to discovering that there are bozos with web sites who don't have backups and trust others with their site security: Sorry, fellas, but I sure hope you enjoyed g

      • Aye. That was my first thought: "How the fuck are you 'unable to recover' data these days?" Assuming that you're (a) a company that depends on data. (There is no (b)).

        I mean seriously, if you're a web hosting company, would you not back things up? Maybe they'd lose a day's or a week's worth of updates, but losing everything? Geez.
  • unable to recover? (Score:5, Insightful)

    by joshuac ( 53492 ) on Saturday October 01, 2011 @02:13PM (#37578766) Journal

    completely deleted more than 4,800 websites that the company was unable to recover

    They host (at least) 4,800 websites yet they don't have a working backup system in place? Amazing.

    • by Anonymous Coward

      Most hosters have a "no guaranteed backup" policy. It isn't their data and they're not getting paid to operate an archive. Their job is to host web sites. You have to keep your own backups of your own data. Backups at hosting facilities are for convenience only, so that you can restore from a source that is close to the servers.

      • Still, the other side of that is the host's obligation to keep your data secure. Yes, you should back up your own data, but that doesn't mean the hosting company has a right to leave its systems vulnerable to penetration. After all, deleting websites is one thing; what about data theft?

        • Why would it need a "right to leave its systems vulnerable to penetration?" You could as easily say that "customers of hosting providers don't have a right to rely on the security of providers." You have to pick where to allocate responsibility. As the hosting provider usually writes the contract, guess where responsibility usually lies?

    • by guruevi ( 827432 )

      You would be amazed at how much data gets lost daily because enterprises are unable to keep a working backup system.

      I've worked for a web host, and the more money they threw at their backup solution (this one is shiny, this one is integrated with your management platform, this one gives control to your customers, this one gives blowjobs, ...), the more unpredictable it got. Backups would fail to complete and/or fail to notify; one tape would fail, and apparently the metadata for the whole backup set was on that one tape.

      • by icebike ( 68054 )

        Rsync is not a backup.

        Most often the cause of data loss is a brain fart or a fat finger on the part of the operator, and a backup system that syncs a mass deletion to another drive is not something you want to put a lot of trust in. Perfect copies of a corrupted database are equally useless.

        Periodic full-image backups are essential, which is why the industry went to tape in the first place.
        Substituting disks for tape tempts people to take a cheap and dangerous way out.

        • Rsync is not a backup.

          rsync creates a copy, and is thus a backup [wikipedia.org].

          Add in some domain knowledge (e.g., the database must be quiesced before the filesystem snapshot is taken, and you rsync from the snapshot), and with the "--backup" flag you can even keep multiple versions without duplication, which acts much like a full/incremental tape rotation. This gives you an easy-to-maintain, reliable backup system. And if you still need your crutch, you can back up the rsync copy to tape.
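          For illustration, a minimal sketch of that "--backup" idea as a small wrapper script; the paths and host below are invented, not anything from the article:

```python
# Rough sketch: rsync mirror that stashes replaced/deleted files in a
# per-day directory via --backup/--backup-dir (all paths are hypothetical).
import datetime
import subprocess

SRC = "/srv/www/"                          # tree to protect (assumed)
DEST = "backuphost:/backups/www/current/"  # rsync destination (assumed)
stamp = datetime.date.today().isoformat()

subprocess.run(
    [
        "rsync", "-a", "--delete",
        "--backup",                              # keep old copies instead of overwriting...
        "--backup-dir", f"../versions/{stamp}",  # ...in a dated directory next to the mirror
        SRC, DEST,
    ],
    check=True,
)
```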

          Periodic full-image backups are essential

          That depends on the hosting company. If the

        • Nonsense. Incremental rsyncs with periodic full rsyncs make a perfectly good, more efficient, and faster backup system. Importantly, recovery is lightning fast compared to tape. Do some research: you can rsync only the files which have changed, or everything. With a simple script on the backup machine that uses redundant storage (or machines, some offsite), you can archive these snapshots by day, week, month, whatever.

          We haven't used tape for over a decade and recovering a specific file with a specific timestamp or
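          As a sketch of that day/week/month snapshot rotation (paths invented), hard-linked snapshots with rsync's --link-dest look roughly like this:

```python
# Hypothetical daily snapshot job: each dated directory is a complete,
# directly browsable tree, but unchanged files are hard links into the
# previous snapshot, so the incremental cost is small.
import datetime
import subprocess
from pathlib import Path

SRC = "/srv/www/"                # assumed source
SNAPDIR = Path("/backups/www")   # assumed snapshot root
SNAPDIR.mkdir(parents=True, exist_ok=True)

today = SNAPDIR / datetime.date.today().isoformat()
previous = sorted(p for p in SNAPDIR.iterdir() if p.is_dir() and p != today)
link_dest = ["--link-dest", str(previous[-1])] if previous else []

subprocess.run(
    ["rsync", "-a", "--delete", *link_dest, SRC, f"{today}/"],
    check=True,
)
# Pruning by day/week/month is then just deleting old snapshot directories.
```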

        • by kmoser ( 1469707 )
          Sometimes a "good enough" solution is better than a "perfect" theory.
        • Rsync is not a backup.

          No, but it is a great tool for creating and maintaining both online and offline backups. With a little scripting you can create a very efficient (in terms of storage space and bandwidth) system of snapshots in your online backups. Large files such as massive database backups need slightly different handling than smaller objects, but rsync can help you with them too. And if you are backing up live database files, you are doing it wrong. Any good RDBMS features the ability to produce a consistent backup without
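          For example (a sketch only; the database name and output path are invented, and it assumes InnoDB tables), MySQL can produce a consistent dump without taking the server offline:

```python
# Hypothetical nightly dump: --single-transaction gives a consistent
# snapshot of InnoDB tables without blocking writers for the duration.
import subprocess

with open("/backups/db/sitedb.sql", "wb") as out:   # assumed path
    subprocess.run(
        ["mysqldump", "--single-transaction", "--quick", "sitedb"],
        stdout=out,
        check=True,
    )
```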

      • You know, I have had great success in the past with bacula [bacula.org]. I think it still uses rsync for the actual transfer of files across the network. It maximizes disk space on the backup server by only storing one copy of any given file. I am not entirely sure how it does it, but you can provide network backup for 50+ clients (this was my use case) with only about twice the disk space of a single client, assuming all your clients are running the same OS.
        • by rev0lt ( 1950662 )
          We use bacula as a virtual tape backup solution for several servers, and I love it. We run a weekly full backup, followed by daily incremental snapshots. That said, bacula has its shortcomings: it is not a disaster-recovery solution, and it works on files rather than at the filesystem level, so if you need to back up live data applications such as databases, either you create a dump first, which isn't practical to do daily on big databases, or you use a filesystem that supports snapshotting and back up from a snapshot
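          A rough sketch of that snapshot route, using LVM as one example (a snapshotting filesystem like ZFS would work similarly); volume names, sizes, and mount points are placeholders, and a database would still want a brief lock or rely on its crash recovery:

```python
# Hypothetical snapshot-then-backup job: freeze a point-in-time view of the
# volume, copy from the read-only snapshot, then drop the snapshot.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

run("lvcreate", "--snapshot", "--size", "5G",
    "--name", "data_snap", "/dev/vg0/data")
run("mount", "-o", "ro", "/dev/vg0/data_snap", "/mnt/data_snap")
try:
    run("rsync", "-a", "/mnt/data_snap/", "/backups/data-snapshot/")
finally:
    run("umount", "/mnt/data_snap")
    run("lvremove", "-f", "/dev/vg0/data_snap")
```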
      • Backing up remotely hosted web applications, even on a dedicated server you control, is amazingly hard to do. Let's start with MySQL. We back up our database daily using Zmanda. A few months ago, we had to restore it. It took THREE DAYS. Why? Because MySQL's backup and restore workflow has basically gone nowhere in 5 years, and hasn't scaled well to accommodate gigabyte-sized partitioned databases. From what I've read, the main problem is that it has no efficient bulk-insertion protocol. It inserts one reco

        • "Let's start with MySQL. We back up our database daily using Zmanda. A few months ago, we had to restore it. It took THREE DAYS."

          Well, I can't say exactly what happens with MySQL, since I've never restored a MySQL database, but it could be a problem with Zmanda. (Didn't try Zmanda either; as a rule I avoid products whose websites yell "The leader in XYZ".)

          "Suppose your server has about 100 gigabytes of data on the hard drive, 95% of which is related to your application. Tar it, download the tarball, and you've

        • A few months ago, we had to restore it. It took THREE DAYS. Why? Because MySQL's backup and restore workflow has basically gone nowhere in 5 years, and hasn't scaled well to accommodate gigabyte-sized partitioned databases.

          Use InnoDB and set "innodb_file_per_table" to "ON", and you can back up and restore database files without using SQL INSERT commands.

          It's still going to take a while with 100GB of data, but then you're limited by hard drive (and maybe network) speed rather than CPU.
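          A sketch of the file-level restore path this implies (service name and paths are placeholders; it assumes the backup was itself a consistent file-level copy and that the server can be stopped briefly):

```python
# Hypothetical file-level restore: with innodb_file_per_table each table
# lives in its own .ibd file, so restoring is a file copy rather than
# replaying millions of INSERTs. The server must be down for the copy.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

run("service", "mysql", "stop")
run("rsync", "-a", "--delete",
    "/backups/mysql-datadir/", "/var/lib/mysql/")
run("chown", "-R", "mysql:mysql", "/var/lib/mysql")
run("service", "mysql", "start")
```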

        • by lgarner ( 694957 )

          You can bitch about MySQL all you want (many do, I don't), but if it takes you three days you're using the wrong database or engine. If you're using a cheesy open-source CMS that requires MySQL, you're using the wrong CMS. And so forth. Of course, I'm sure that you have other nodes running your vital site so you're not offline that whole time.

          Plus, your issue doesn't sound like it has anything to do with remote hosting, so it's not relevant here. What you describe will also occur on a local system.

          Also,

          • I used MySQL because it's free, and because it's the database I grew up with. The problem is, the moment you move more than a baby step away from mysqldump, you can pretty much forget about good documentation and free software... and non-free in this context almost always means "several hundred or thousand dollars". As a matter of reflex, I pretty much quit reading the moment I see anything that requires commercial licensing, because I know I can't afford it. MySQL used to do a decent job of maintaining it

        • by rev0lt ( 1950662 )
          You could try using master/slave replication to do a binary backup: synchronize the slave, then shut it down and copy the datafiles. The backup is somewhat version-dependent, but it is a lot quicker than running an SQL script. Also, you may want to have a look at CDP [wikipedia.org] tools.
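          A sketch of that replica-copy approach (hostnames, paths, and service names are placeholders, not anyone's actual setup):

```python
# Hypothetical job run on the replica: pause replication, do a cold copy of
# the datadir, then let the replica catch back up from the master's binlog.
import subprocess

def mysql(sql):
    subprocess.run(["mysql", "-e", sql], check=True)

def run(*cmd):
    subprocess.run(cmd, check=True)

mysql("STOP SLAVE;")               # stop applying changes from the master
run("service", "mysql", "stop")    # cold copy = consistent datafiles
run("rsync", "-a", "/var/lib/mysql/", "backuphost:/backups/mysql/")
run("service", "mysql", "start")   # replication resumes and catches up
                                   # (unless skip_slave_start is set)
```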
        • by cduffy ( 652 )

          Let's start with MySQL.

          You could, but I'd rather start with PostgreSQL. As long as you have log archival, your backup process looks like this: Run pg_start_backup(), rsync the actual live database while it's still being written to, run pg_stop_backup(), done.

          (Restore? Copy those files back, start up the database, and it replays operations from the archive logs to get back from the restored dump to the current point in time... or a point in time you specify, if you'd rather replay to, say, just before someon
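          The flow described above, as a sketch (the data directory path and backup label are assumptions, and WAL archiving must already be configured):

```python
# Hypothetical base-backup job: pg_start_backup()/pg_stop_backup() bracket an
# rsync of the live data directory; archived WAL makes the copy consistent.
import subprocess

def psql(sql):
    subprocess.run(["psql", "-U", "postgres", "-c", sql], check=True)

psql("SELECT pg_start_backup('nightly');")
try:
    subprocess.run(
        ["rsync", "-a", "--delete",
         "--exclude", "pg_xlog",          # WAL is replayed from the archive
         "/var/lib/postgresql/9.1/main/", "/backups/pgdata/"],
        check=True,
    )
finally:
    psql("SELECT pg_stop_backup();")
```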

      • If you are backing it up to disk, take a look at rdiff-backup. It is quite similar to rsync, but creates a versioned backup so it won't propagate a corrupted database or deleted file to your backup (unless you tell it to do so).
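        For instance (paths invented), the backup run and a point-in-time restore look roughly like this:

```python
# Hypothetical rdiff-backup usage: each run keeps reverse increments, so a
# file deleted or corrupted today can still be pulled from an older state.
import subprocess

# nightly backup run
subprocess.run(
    ["rdiff-backup", "/srv/www/", "/backups/www-rdiff/"],
    check=True,
)

# restore a single file as it existed three days ago
subprocess.run(
    ["rdiff-backup", "-r", "3D",
     "/backups/www-rdiff/site/index.php", "/tmp/index.php.3daysago"],
    check=True,
)
```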

      • I've worked for a web host and the more money they threw at their backup solution (this one is shiny, this one is integrated with your management platform, this one gives control to your customers, this one gives blowjobs, ...)

        Which one gives blowjobs? Is it suitable to be installed at home? :-)

    • http://www.smh.com.au/technology/security/4800-aussie-sites-evaporate-after-hack-20110621-1gd1h.html [smh.com.au]

      "In assessing the situation, our greatest fears have been confirmed that not only was the production data erased during the attack, but also key backups, snapshots and other information that would allow us to reconstruct these servers from the remaining data."

    • Have you ever tried doing a cron job with cloud computing? I am trying to stay clear of the evil cloud!

  • by Anonymous Coward

    Okay. Let me start off by saying that I am a highly qualified individual with an online degree in chemical mathematics again. After reading the summary, I have not only come to the conclusion that it is incorrect, but that it is also not stargazer. It is actually pew pew along the lines of magazine.

    Sorry I came to the garbage of this place and realized it.

  • This is an ongoing problem when services are concentrated under one roof: it gives potential attackers a much richer target, with many more juicy pieces of low-hanging fruit in a convenient, easy-to-hit area.

    Cloud and remote-hosting services are not bad; in many cases they are a wonderfully effective deployment tool. Customers must be careful, though, to ensure their provider implements good security practices and that their backup solution truly allows for service recovery after a disaster.

    Unfortunately, t

    • by morcego ( 260031 )

      Say what you will, but I refuse to use shared hosting.

      All my stuff is hosted on dedicated (self-managed) servers.

      When I see stuff that is made with the sole intent of "making things easier for the user", like Plesk, cPanel, etc., I raise an eyebrow. I can't criticize Plesk directly, having very little knowledge of its internals. But from the little I've seen of cPanel, they use some very customized, less-than-fully-patched versions that make me unwilling to trust my sites to their product.

      Also, shared hostin

  • Is it necessary to point out that they could have done worse? The bank robber who could have murdered all the hostages and set fire to the bank is still a bank robber and still a criminal.

    What is the intent of writing things this way, to make us think they were doing us a favor?

  • The hosting industry really has segmented itself along pricing lines. The overhead to start a small hosting business is so low that there are hundreds if not thousands of hosting 'companies' that offer a very mediocre product but can get by on providing for the cheap and the clueless.

    When you see these types of operations with 'unlimited' resource plans starting at 2 or 3 bucks a month, is it any surprise that system security is not a core competency?

    While not a universal truth I've found you most often get

    • by fermion ( 181285 )
      I do not think that the low price represents security; I think it represents uptime and general service. I have used services with low prices and the only issue was uptime. I suppose for very low-priced services there might be an issue with backups. I also suppose that, with very low prices, there is going to be an issue of bandwidth and processing power.

      As has been shown, even the high-end services are extremely vulnerable to attacks. No one seems to have that core competency, or at least be willing to pay for i

  • by zx2c4 ( 716139 ) <SlashDot@zx2cNETBSD4.com minus bsd> on Saturday October 01, 2011 @02:53PM (#37579000) Homepage

    Most quality web hosting provides customers with shell access to the web server, or, in cases where it doesn't, something like PHP is usually installed that allows arbitrary code execution.

    On a web server that hosts a few thousand sites, using the Bing IP Search [bing.com], you can find a list of all the domains. Usually there will be a lowest hanging fruit that's easy enough to pluck. Or, if you can't get shell access through a front-facing attack, you can always just sign up for an account with the hosting company yourself.

    So once you have shell, then it's a matter of being a few steps ahead of the web host's kernel patching cycle. Most shared web hosting services don't utilize expensive services like ksplice and don't want to reboot their systems too often due to downtime concerns. So usually it's possible to pwn the kernel and get root with some script-kiddie-friendly exploit off exploit-db. And if not, no doubt some hacker collectives have repositories of unpatched 0-day properly weaponized exploits for most kernels. And even if they do keep their kernel up to date and strip out unused modules and the like, maybe they've failed to keep some [custom] userland suid executables up to date. Or perhaps their suid executables are fine, but their dynamic linker suffers from a flaw like the one Tavis found in 2010 [pastie.org]. And the list goes on and on -- "local privilege escalation" is a fun and well-known art that hackers have been at for years.

    So the rest of the story should be pretty obvious... you get root and defeat SELinux or whatever protections they probably don't even have running, and then you have access to their NFS shares of mounted websites, and you run some idiotic defacing script while brute-forcing their /etc/shadow, yada yada yada.

    The moral of the story is -- if you let strangers execute code on your box, be it via a proper shell or just via php's system() or passthru() or whatever, sooner or later if you're not at the very tip top of your game, you're going to get pwn'd.

  • by preaction ( 1526109 ) on Saturday October 01, 2011 @03:02PM (#37579030)

    Every day someone comes into #httpd on freenode asking "How do I protect one user's site from another user's site when both are using PHP or CGI or whatever else?" and the answer is invariably "It will cost too much to bother."

    If you are a business and you are taking in customer information, you should be held responsible when another user on that server actually figures out how much money that information is worth.

    There is no excuse. A VM is about $20 a month. A DynDNS account is less. Shared hosting is for personal home pages, not businesses.

    • Re: (Score:3, Informative)

      by ista ( 71787 )

      Sorry, but VMs are just a different flavor of shared hosting, and your recommendation doesn't do any good. VMs, VPSes, or dedicated servers hosted on a network operated by clueless network admins simply give you a new kind of insecurity. For example, when some other dedicated server is sending out spoofed ARP replies to take over your default gateway, your box is open to simple man-in-the-middle attacks.
      And dedicated servers won't help if you're operating them with a clueless admin - and exactly tho

  • by bsDaemon ( 87307 ) on Saturday October 01, 2011 @03:46PM (#37579314)

    I was going to mod, but I decided to post instead. I used to work at one of the companies mentioned, and what I hear through my channels is kind of retarded. One of the so-called "admins", who really ought to have known better, set up a tunnel from a personal VPS to an internal machine which had no internet-accessible address -- just the tunnel. The VPS got popped and that gave them access to an internal machine which had SSH keys as root to every single VM node and shared hosting box, as well as every dedicated machine on which the customer didn't have root access.

    All the VPS accounts were vulnerable because the host nodes were compromised, so even if a VPS customer had root, they were vulnerable, too. However, that was the kind of irresponsible, non-professional crap I saw going on there, and it's why I left about two years ago: I assumed that the longer I stayed, the more likely it was to tarnish my reputation and ruin my career. Well, that and the fact that they paid for shit and worked me like a slave tied to a shift bench on a factory floor. But then, I don't really know what anyone can expect; web hosting is pretty much the fast food of IT, and that's the level of talent one can reasonably expect to retain for very long, or attract in the first place in most cases.

    Somehow the VPS that I left hosted there didn't get whacked, though. I guess they just forgot about me.

    • Did you work for Inmotion? I have a website there, and I'm wondering if I should move it. This is not the first incident I've had there. Anyone have a suggestion for a secure and affordable web host?
  • I've seen mass compromises on Aruba in Italy and Dreamhost in the US too over recent weeks.
  • Are you guys lacking original material? Did the script kiddie's g/f give the moderator a blowjob?
  • As long as they leave godaddy alone, we are all safe, we can all go back to work now, phew!
