Cross-Distro Remote Package Administration?

tobiasly writes "I administer several Ubuntu desktops and numerous CentOS servers. One of the biggest headaches is keeping them up-to-date with each distro's latest bugfix and security patches. I currently have to log in to each system, run the appropriate apt-get or yum command to list available updates, determine which ones I need, then run the appropriate install commands. I'd love to have a distro-independent equivalent of the Red Hat Network where I could do all of this remotely using a web-based interface. PackageKit seems to have solved some of the issues regarding cross-distro package maintenance, but their FAQ explicitly states that remote administration is not a goal of their project. Has anyone put together such a system?"
  • Tools exist (Score:5, Informative)

    by PeterBrett ( 780946 ) on Monday April 27, 2009 @05:14AM (#27727577) Homepage
    1. Create a local package repository for each distro.
    2. Set apt/yum to point at only the local repository.
    3. Create a cron job on each box to automatically update daily.
    4. When you want to push a package update out to all boxes, copy it from the public repository to the local one.
    5. Profit!
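    A minimal sketch of steps 1 and 4, assuming a CentOS box and hypothetical paths; the metadata rebuild is only attempted if createrepo is installed:

    ```shell
    #!/bin/sh
    # Hypothetical layout: PUBLIC holds packages synced from the distro
    # mirror, LOCAL is the only repo the client boxes point at.
    PUBLIC=/srv/mirror/centos-updates
    LOCAL=/srv/repo/centos

    # Step 4: stage one tested package into the local repo, then rebuild
    # the metadata that the yum clients will pull.
    stage() {
        cp "$PUBLIC/$1" "$LOCAL/" || return 1
        # rebuild repo metadata if createrepo is available on this host
        if command -v createrepo >/dev/null 2>&1; then
            createrepo --update "$LOCAL"
        fi
    }
    # stage bash-3.2-32.el5.x86_64.rpm   (package name illustrative)
    ```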
  • by hax0r_this ( 1073148 ) on Monday April 27, 2009 @05:28AM (#27727637)
    Look into Puppet or CFEngine (we use CFEngine but are considering switching to Puppet eventually). They're both extremely flexible management tools that will trivially handle package management, and you can use them to accomplish almost any management task you can imagine: managing or editing any file you want, running shell scripts, etc.

    The work flow goes something like this:
    1. Identify packages that need update (have a cron job run on every box to email you packages that need updating, or just security updates, however you want to do it)
    2. Update the desired versions in your local checkout of your cfengine/puppet files (the syntax isn't easily described here, but it's very simple to learn).
    3. Commit/push (note that this is the easy way to have multiple administrators) your changes. Optionally have a post commit hook to update a "master files" location, or just do the version control directly in that location.
    4. Every box has an (hourly? Whatever you like) cron job to update against your master files location. At this time (with splay so you don't hammer your network) each cfengine/puppet client connects to the master server, updates any packages, configs, etc, runs any scripts you associated with those updates, then emails (or for extra credit build your own webapp) you the results.
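    The client side of step 4 can be sketched like this, with the splay implemented as a bounded random delay (the puppet/cf-agent invocations are illustrative):

    ```shell
    #!/bin/bash
    # Runs from cron on every box. The random "splay" delay spreads the
    # check-ins over time so all clients don't hit the master at once.
    splay() {
        # print a random number of seconds in [0, $1)
        echo $((RANDOM % $1))
    }
    # In the hourly cron job, before contacting the master:
    #   sleep "$(splay 600)"                     # up to 10 minutes of jitter
    #   puppet agent --onetime --no-daemonize    # or: cf-agent -K
    ```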
  • Re:Can You Script? (Score:5, Informative)

    by dns_server ( 696283 ) on Monday April 27, 2009 @05:32AM (#27727659)
    The corporate product is Landscape: http://www.canonical.com/projects/landscape [canonical.com]
  • by shitzu ( 931108 ) on Monday April 27, 2009 @05:33AM (#27727661)

    /etc/init.d/yum start

    and what do you know - the updates are installed automagically without any manual intervention

  • Re:You don't want it (Score:5, Informative)

    by galorin ( 837773 ) on Monday April 27, 2009 @05:41AM (#27727707)

    Depending on how uniform your servers are, keep one version of CentOS and one version of Ubuntu running in a VM, and have these notify you when updates are available. When updates are available, test against these VMs, and do the local repository thing suggested by another person here. Do one system at a time to make sure something doesn't kill everything at once.

    Web based apps with admin privs are fine as long as they're only accessible via the intranet, strongly passworded, and no one else knows they're there. If you need to administer remotely, VPN in to the site and SSH into each box. You're an Administrator, start administratorizing. Some things just shouldn't be automated.

  • by MrMr ( 219533 ) on Monday April 27, 2009 @05:49AM (#27727753)
    1) yum erase whateveryoudontneed
    2) chkconfig yum-updatesd on
    3) Make sure do_update = yes, do_download_deps = yes, etc. are set in yum-updatesd.conf
    4) /etc/init.d/yum-updatesd start
    This makes your yum system self-updating.
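    For reference, those knobs live in /etc/yum/yum-updatesd.conf on CentOS 5; a stanza along these lines (exact key names may vary by release) turns on fully automatic updates:

    ```ini
    [main]
    run_interval = 3600
    emit_via = email
    email_to = root
    do_download = yes
    do_download_deps = yes
    do_update = yes
    ```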
  • Re:Tools exist (Score:5, Informative)

    by Jurily ( 900488 ) <jurily&gmail,com> on Monday April 27, 2009 @05:54AM (#27727773)

    When you want to push a package update out to all boxes, copy it from the public repository to the local one.

    Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

    I suggest tentakel [biskalar.de]; the OP could have found it in 2 minutes with Google. I did.

    http://www.google.co.uk/search?q=multiple+linux+remote+administration [google.co.uk] The first hit mentions it.

  • by ^Case^ ( 135042 ) on Monday April 27, 2009 @06:16AM (#27727847)

    Puppet [reductivelabs.com] is a tool made to do exactly what you're asking for by abstracting the specific operating system into a generic interface. It might be worth checking out. Also there's a newcomer called chef [opscode.com]. And then there's the oldies like cfengine [cfengine.org].

  • by supernova_hq ( 1014429 ) on Monday April 27, 2009 @06:22AM (#27727883)
    Sorry to reply to my own post, but circlingthesun actually posted the name of it below!

    clusterssh
  • by walt-sjc ( 145127 ) on Monday April 27, 2009 @06:39AM (#27727955)

    It's called "dssh". Google is your "search" friend (we will ignore the evil side of Google at the moment... :-)

  • by BruceCage ( 882117 ) on Monday April 27, 2009 @06:51AM (#27728007)

    I'd say that it depends on a lot of factors really.

    First of all it depends on how mission critical the services that run on that system are considered and what kind of chances you're willing to take that a particular package might break something. The experience and available time of your system administrator also plays a significant role.

    There's also the highly unlikely scenario that a certain update might include "something bad", for example when the update servers are compromised. See Debian's compromises at Debian Investigation Report after Server Compromises [debian.org] from 2003, Debian Server restored after Compromise [debian.org] from 2006, and Fedora's at Infrastructure report, 2008-08-22 UTC 1200 [redhat.com].

    I currently manage just a single box (combination of a public web server and internal supporting infrastructure) for the company I work at and have it automatically install both security and normal updates.

    I personally trust the distro maintainers to properly QA everything that is packaged. Also, I don't think any single system administrator has the experience or knowledge to verify in advance whether an update will install without problems. The best effort one can make is to determine whether an update is really needed and then keep an eye on the server while it is being applied.

    In the case of security updates it's a no-brainer for me: they need to be applied ASAP. I haven't had the energy to set up a proper monitoring solution and I've never even seen Red Hat Network in action. So if I had to manually verify available updates (or even set up some shell scripts to help me here) it would be just too much effort considering the low mission criticality of the server. If there does happen to be a problem with the server I'll find out about it fast enough, then I'll take a peek at the APT log and take it from there.

  • Re:Tools exist (Score:4, Informative)

    by comcn ( 194756 ) on Monday April 27, 2009 @07:06AM (#27728085) Journal

    As another "sub 7-digit guy" - there is a reason for this... There is no way I'm going to let over 60 servers automatically install patches without me checking them first! Download, yes. Install, no.

    At work we use cfengine to manage the servers, with a home-built script that allows servers to install packages (of a specified version). Package is checked and OK? Add it to the bottom of the text file, GPG sign it, and push it into the repository. cfengine takes care of the rest (our cf system is slightly non-standard, so everything has to be signed and go through subversion to actually work).

  • Re:Tools exist (Score:3, Informative)

    by Antique Geekmeister ( 740220 ) on Monday April 27, 2009 @07:11AM (#27728103)

    For RedHat? No. 'yum update' is fairly resource intensive. And changing applications in the middle of system operations is _BAD, BAD, BAD_ practice. I do _not_ want you silently updating MySQL or HTTPD in the midst of production services, because the update process may introduce an incompatibility with the existing configurations, especially if some fool has been doing things like installing CPAN or PHP modules without using RPM packages, or has manually stripped out and replaced Apache without RPM management.

    And heaven forbid that you have a kernel with local modifications and special patches for special hardware whose version number is exceeded by the next Red Hat kernel, and it replace yours, and the hardware fail to boot at the next reboot. That is very painful indeed to cope with if you haven't set up remote boot management or spent extra effort to lock down your packages.

    We oldtimers, low uid or not, have been burned enough to know not to lick the frozen lightpole.

  • by dna_(c)(tm)(r) ( 618003 ) on Monday April 27, 2009 @07:49AM (#27728279)

    Never use ssh+password. Always use ssh+PKI. I think you missed "key" and focused on "[no] password".

    From Ubuntu wiki SSHHowto [ubuntu.com]:

    If your SSH server is visible over the Internet, you should use public key authentication instead of passwords if at all possible.
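    A sketch of that setup (the key path, user, and host are hypothetical):

    ```shell
    #!/bin/sh
    # Generate a keypair for management use; -N sets the passphrase
    # (empty by default here for illustration; a real passphrase plus
    # ssh-agent is the safer combination).
    make_key() {
        ssh-keygen -q -t ed25519 -f "$1" -N "${2:-}"
    }
    # make_key ~/.ssh/id_mgmt 'a-long-passphrase'
    # ssh-copy-id -i ~/.ssh/id_mgmt.pub admin@server
    # Then in /etc/ssh/sshd_config on each server (and restart sshd):
    #   PasswordAuthentication no
    ```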

  • by MikeBabcock ( 65886 ) <mtb-slashdot@mikebabcock.ca> on Monday April 27, 2009 @09:13AM (#27728783) Homepage Journal

    IMHO, a good sysadmin is one who sees such issues and finds or writes a script to handle them.

    (That's "you're both right")

    for address in $ADDRESSES; do
          ssh "$address" 'touch /var/lib/triggers/update'
    done

    With an obvious job on the machines watching for such trigger files (to avoid root access, etc.)

    if [ -f "$TRIGGERFILE" ]; then
          yum -y update 2>&1 | tee /tmp/yum_trigger.log \
          && rm -f "$TRIGGERFILE"
    fi

  • Re:Tools exist (Score:3, Informative)

    by MikeBabcock ( 65886 ) <mtb-slashdot@mikebabcock.ca> on Monday April 27, 2009 @09:15AM (#27728811) Homepage Journal

    It's easier to run your own update mirror and have your clients pull the updates list from that.

    IMHO.

  • by Anonymous Coward on Monday April 27, 2009 @09:58AM (#27729279)

    Using keyfiles doesn't mean no password. Most ssh guides that talk about keyfiles actually suggest protecting them with a passphrase. Why? Well, like the GP said (which you obviously somehow missed), as soon as one client is compromised you would have root access to every server.

    Using passwords doesn't mean you don't use keyfiles. And using keyfiles doesn't mean you don't have to enter passwords. The way to go is ssh+PKI+password+ssh_agent.
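    That workflow, sketched (the key path is illustrative): start an agent, load the passphrase-protected key once, and every subsequent ssh call in the session authenticates silently.

    ```shell
    #!/bin/sh
    # One agent per login session; the passphrase is typed once, not
    # once per server.
    eval "$(ssh-agent -s)" >/dev/null      # starts agent, exports SSH_AUTH_SOCK
    # ssh-add ~/.ssh/id_mgmt               # prompts for the passphrase once
    # ssh admin@server1 'yum check-update' # agent signs; no prompt
    ssh-agent -k >/dev/null 2>&1           # kill the agent when finished
    ```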

  • Re:Exactly... (Score:3, Informative)

    by smoker2 ( 750216 ) on Monday April 27, 2009 @10:21AM (#27729583) Homepage Journal
    The internet.
    Actually, /var/log/messages to find the issue, then Google to see who else has the same or similar issue. Or troll a few web hosting forums. Seriously, you need to take more interest in what's happening to your machines. My servers send me status reports by email every morning regarding patches, rootkit detection, unauthorised accesses, tripwire incursions, etc. I have a server which has had minimal patching since 2002 and still runs fine with no rootkits. I have had no unavoidable downtime since 2002. Yes, I run Red Hat Enterprise.

    This is what disturbs me about the push for linux on the desktop. Most people can't be bothered to research and implement simple monitoring measures, then complain that everything's gone tits up. The whole point of linux is - YOU are in control, ultimate control. Use the power wisely or suffer the consequences. Don't go blaming the distro, the maintainers, the world for your inability to control something designed to react to your hand and your hand alone. A poor workman blames his tools, and GNU/linux is merely a tool.

    If you have root, fucking act like it !
  • by BruceCage ( 882117 ) on Monday April 27, 2009 @11:08AM (#27730299)

    Exactly, the confusion here might be in the terminology: password versus passphrase.

    Anyways, just using keys doesn't magically make everything more secure; it just negates brute-force password attacks. From the few high-profile cases I remember, the compromise was the result of somebody's private key being compromised (e.g. the Debian compromises).

    The only true solution is a combination of the principle of least privilege, sandboxing (SELinux etc.), proper monitoring, and a whole host of other security measures.
