
Cross-Distro Remote Package Administration?

tobiasly writes "I administer several Ubuntu desktops and numerous CentOS servers. One of the biggest headaches is keeping them up-to-date with each distro's latest bugfix and security patches. I currently have to log in to each system, run the appropriate apt-get or yum command to list available updates, determine which ones I need, then run the appropriate install commands. I'd love to have a distro-independent equivalent of the Red Hat Network where I could do all of this remotely using a web-based interface. PackageKit seems to have solved some of the issues regarding cross-distro package maintenance, but their FAQ explicitly states that remote administration is not a goal of their project. Has anyone put together such a system?"
This discussion has been archived. No new comments can be posted.
  • Tools exist (Score:5, Informative)

    by PeterBrett ( 780946 ) on Monday April 27, 2009 @04:14AM (#27727577) Homepage
    1. Create a local package repository for each distro.
    2. Set apt/yum to point at only the local repository.
    3. Create a cron job on each box to automatically update daily.
    4. When you want to push a package update out to all boxes, copy it from the public repository to the local one (a minimal staging sketch follows this list).
    5. Profit!
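
    A minimal sketch of step 4 on the CentOS side (the paths and package name are made up for illustration; createrepo regenerates the repo metadata after each copy):

        # stage one vetted update into the local repo, then rebuild the metadata
        cp /srv/mirror/public/centos/updates/foo-1.2-3.el5.x86_64.rpm /srv/repo/centos/updates/
        createrepo /srv/repo/centos/updates/
        # the Debian/Ubuntu side can do the equivalent with reprepro or apt-ftparchive

    The boxes then pick the package up on their next daily cron run.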
    • by drsmithy ( 35869 )

      Exactly. This is basically your DIY RH Satellite server. It's the model we use, although we don't have Ubuntu machines to deal with.

    • Re:Tools exist (Score:5, Informative)

      by Jurily ( 900488 ) <<jurily> <at> <gmail.com>> on Monday April 27, 2009 @04:54AM (#27727773)

      When you want to push a package update out to all boxes, copy it from the public repository to the local one.

      Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

      I suggest tentakel [biskalar.de], and that OP could have found it in 2 minutes with Google. I did.

      http://www.google.co.uk/search?q=multiple+linux+remote+administration [google.co.uk] The first hit mentions it.

      • Re:Tools exist (Score:5, Interesting)

        by value_added ( 719364 ) on Monday April 27, 2009 @05:33AM (#27727929)

        Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

        And segregating things on the system that hosts the public repository is impossible?

        I don't think any of this is exactly rocket science. On my home LAN where I use FreeBSD, for example, I have a motley collection of hardware ranging from Soekris boxes to Opterons. Everything gets built on a central build server and distributed automagically from there using a setup similar to what's been suggested to the OP. Not a single box has the same collection of userland software installed, while certain boxes do get their own custom world/kernel. None of this really requires more effort or involvement on my part than some careful thought beforehand.

        One of the nice advantages of a centralised setup is that it accommodates a clean way of testing things beforehand. Rolling out the latest but broken version of "foo" to multiple systems is something to be avoided.

      • Re: (Score:3, Insightful)

        Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

        Regarding having the same packages installed, you "only" need to make sure your local repos have all the packages that are used across your install base. The machines will then pull only their own updates, with no fuss. Regarding the heterogeneity... tough cookie. Either you have something more or less homogeneous and you can automate the process, or you're stuck doing things by hand. Especially once you enter the realm of "review each available update by hand and determine whether it's relevant", as the OP

        • It's obvious that my biggest mistake when submitting this question is the assumption that many people would actually know what the Red Hat Network does. I want an open source system that I can run on my own server that duplicates the functionality of RHN. That means:

          1. Each machine runs its own cron job to determine (through the regular yum repo mechanism) which updates are available
          2. Each machine publishes that list to the server (a rough sketch of these first two steps follows the list).
          3. I can log in to the server, see which updates are available for each machine, vie
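
          A rough sketch of those first two steps on a CentOS client (the report host, path and scp transport are assumptions; RHN itself uses its own protocol):

              #!/bin/sh
              # /etc/cron.daily/report-updates -- illustrative only
              HOST=$(hostname -f)
              # 'yum check-update' prints the available updates and exits 100 when there are any
              yum -q check-update > /tmp/updates.$HOST
              # publish the list to the central box
              scp -q /tmp/updates.$HOST updates@reports.example.com:/var/lib/update-reports/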
      • Assuming of course all boxes have the same version of the OS, the same packages installed, etc.

        How so? I'm fairly sure most repositories deal with multiple versions, and just because a repo has a package doesn't mean it's installed on all computers that connect to it. And a single box can serve as both a Debian and an RPM repo.

      • Yes, I could have found that myself (in much less than 2 minutes) if a distributed command execution tool were anything at all close to what I was looking for.
    • Re: (Score:3, Interesting)

      by JohnnyKlunk ( 568221 )
      Totally agree, I know this is /. and we hate windows - but it's similar to the way WSUS works - and since the introduction of WSUS I haven't given this question a second thought. You can set up different boxes to get updates on different schedules so the pilot boxes always get them first, then production boxes over a few days in a rolling pattern.
    • by nick_urbanik ( 534101 ) <nicku@@@nicku...org> on Monday April 27, 2009 @05:02AM (#27727801) Homepage
      I work in a large ISP, and this is the way we manage updates for the various Linux platforms we use. Quite simple, really. You can build tools that help: diff between the downloaded updates and what you have in your own repository, and mail you the ones that you are not using. I find lwn.net's security pages [lwn.net] useful in keeping track of what security updates matter to us.
      • by Dunkirk ( 238653 ) *

        How about a sample? I use Gentoo. My servers do this every night:

            /usr/bin/emerge --quiet --color n --sync && update-eix --quiet && glsa-check -l affected

        I could just as well apply every fix automatically, but I like to see it before it goes in.

        • by Bert64 ( 520050 ) <bertNO@SPAMslashdot.firenzee.com> on Monday April 27, 2009 @05:40AM (#27727963) Homepage

          You could also use nagios and check_apt/check_yum to alert you of outstanding security updates, put a script for installing updates on every box (a different script for centos/ubuntu, but with the same syntax), create a user who is added to sudoers for only that script, and create an ssh key for authentication...
          Then you can feed the list of hosts that need updating into a script which will ssh to each one in sequence and execute the update script, followed if necessary by a reboot.
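
          A minimal sketch of that driver loop, assuming the per-box script is installed as /usr/local/sbin/run-updates, a dedicated 'updater' account holds the sudo rule, and the host list comes out of the nagios alerts (all of those names are illustrative):

              #!/bin/sh
              # run the update script on every host that nagios flagged
              while read host; do
                  # -n keeps ssh from eating the rest of the host list on stdin
                  ssh -n -i ~/.ssh/updater_key "updater@$host" "sudo /usr/local/sbin/run-updates" \
                      && echo "updated: $host" || echo "FAILED: $host" >&2
              done < hosts-to-update.txt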

          • Resolving the large numbers of packages and their minor variations such as i386, x86_64, etc. across all the systems means a potentially large number of distinct Nagios query templates. Generating and managing those directly is a fascinating problem. Building it from scratch for a single in-house usage is a lot of work, usually better spent elsewhere.

            It really makes me wish that RedHat Satellite Server did _not_ require an Oracle backend. For RedHat themselves, with so many thousands of clients, it makes s

    • Re: (Score:3, Insightful)

      A cron job? Just set the update-manager to run every morning and automatically download AND install all updates.

      You sub 7-digit uid guys always do everything the hard way!
      • Re:Tools exist (Score:4, Informative)

        by comcn ( 194756 ) on Monday April 27, 2009 @06:06AM (#27728085) Journal

        As another "sub 7-digit guy" - there is a reason for this... There is no way I'm going to let over 60 servers automatically install patches without me checking them first! Download, yes. Install, no.

        At work we use cfengine to manage the servers, with a home-built script that allows servers to install packages (of a specified version). Package is checked and OK? Add it to the bottom of the text file, GPG sign it, and push it into the repository. cfengine takes care of the rest (our cf system is slightly non-standard, so everything has to be signed and go through subversion to actually work).
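
        The approval step might look roughly like this (file names and the svn layout are guesses; the home-built script that consumes the signed list isn't shown):

            # append the vetted package/version, sign the list, and push it through subversion
            echo "httpd-2.2.3-22.el5" >> approved-packages.txt
            gpg --yes --clearsign approved-packages.txt      # writes approved-packages.txt.asc
            svn commit -m "approve httpd-2.2.3-22.el5" approved-packages.txt approved-packages.txt.asc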

        • You have 60 servers and you don't have a controlled local repository?? I feel sorry for the shared repos that have to deal with all you guys updating your 60+ servers all at the same time...
        • Re: (Score:3, Informative)

          by MikeBabcock ( 65886 )

          It's easier to run your own update mirror and have your clients pull the updates list from that.

          IMHO.

      • Re: (Score:3, Informative)

        For RedHat? No. 'yum update' is fairly resource intensive. And changing applications in the middle of system operations is _BAD, BAD, BAD_ practice. I do _not_ want you silently updating MySQL or HTTPD in the midst of production services, because the update process may introduce an incompatibility with the existing configurations, especially if some fool has been doing things like installing CPAN or PHP modules without using RPM packages, or has manually stripped out and replaced Apache without RPM manageme

        • Just use the exclude option in yum. When you set up a box, it is easy to create a yum script (run automatically or manually, by whatever method you prefer) that excludes the packages you don't want updated. Exclude the kernel on boxes that have a custom kernel and exclude production apps on server machines.

          The downside is that it is up to your staff to keep track of what doesn't auto update on what machine. This is easy with RHN service. I've heard good things about puppet, but haven't

    • You missed the RedHat model. 5. Insert Oracle server for no good reason. 6. Profit!
    • What PeterBrett said. Where I work, we can't do it the easy way because of security requirements, but the methodology still works. We just have to rsync the various repos each day, and the cron job just points to the local repo on each box instead of central ones hanging out somewhere.

    • Thanks for the suggestion, but this and other solutions like Puppet and CFEngine solve only half the problem, which is actually pushing out the patches. The part that's really missing is a way to easily identify the available patches from the distro for that particular machine. When I log on to Red Hat Network it has a list like "10 of your 12 systems are up to date" and you can see which systems need updating, which packages are available for install on them, and actually schedule those patches f

  • Webmin (Score:5, Interesting)

    by trendzetter ( 777091 ) on Monday April 27, 2009 @04:19AM (#27727603) Homepage Journal
    I recommend Webmin [webmin.com], which is 100% FOSS. I have found it reliable, flexible and feature-rich.
    • Re: (Score:3, Interesting)

      I have to second this. Webmin has everything you ask for and then some. If you have an update script on each machine, you could easily update all of your machines at once with the cluster management tools. I know it works well with APT (having used it myself), but I can't speak for any of the other package managers. In the worst case, it's still easy to push an update command to the non-apt machines through the Webmin cluster tools.
      • it works well with YUM too - in fact, Webmin is one of the best admin things around. I think every project should be mandated to create a webmin administrative module before being allowed into the wild :)

    • by altp ( 108775 )

      Just be sure to turn off the root user, set up SSL, and change the port number to something else. I also like to limit webmin to a list of known IP addresses via its admin interface AND in iptables.

      Been using Webmin for longer than I can remember.

  • clusterssh (Score:5, Interesting)

    by circlingthesun ( 1327623 ) on Monday April 27, 2009 @04:20AM (#27727613)
    allows you to ssh into multiple machines and execute the same command on all of them from one terminal window. So if you set up a shell script that detects a host's distro and then executes the relevant update command, you should be sorted.
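
    A minimal version of such a script, assuming only apt- and yum-based hosts (detection via the package tools themselves rather than release files):

        #!/bin/sh
        # run on every host via clusterssh; picks the right update command
        if command -v apt-get >/dev/null 2>&1; then
            apt-get -qq update && apt-get -y upgrade
        elif command -v yum >/dev/null 2>&1; then
            yum -y update
        else
            echo "unknown package manager on $(hostname)" >&2
            exit 1
        fi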
    • by skeeto ( 1138903 )

      So if you want to check the filesystems on many machines would it be called a clusterfsck?

      Or maybe that's what you call it when all the filesystems are damaged at the same time.

  • You don't want it (Score:5, Interesting)

    by mcrbids ( 148650 ) on Monday April 27, 2009 @04:23AM (#27727617) Journal

    I admin several busy CentOS servers for my company. You probably don't want a fully web-based application:

    1) What happens when some RPM goes awry and borks your server(s)? Yes, it's pretty rare, but it DOES happen. In my case, I WANT to do them one by one in ascending order of importance so that if anything is borked, it's most likely to be my least important systems!

    2) How secure is it? You are effectively granting root privs to a website - not always a good idea (read: rarely, if ever).

    Me? I have a web doohickey to let me know when updates are available. A cron job runs yum nightly and a pattern match identifies whether or not updates are needed, which shows up on my homepage. So it doesn't DO the update, but it makes it easy to see what needs to be done.
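
    A nightly cron job along these lines does much the same (the status path is an assumption; checking the exit code of 'yum check-update', which is 100 when updates are pending, stands in for the pattern match):

        #!/bin/sh
        # /etc/cron.daily/update-status -- write a one-line status for the admin homepage
        yum -q check-update >/dev/null 2>&1
        if [ $? -eq 100 ]; then
            echo "$(hostname): updates pending" > /var/www/html/status/$(hostname).txt
        else
            echo "$(hostname): up to date" > /var/www/html/status/$(hostname).txt
        fi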

    • Re:You don't want it (Score:5, Informative)

      by galorin ( 837773 ) on Monday April 27, 2009 @04:41AM (#27727707)

      Depending on how uniform your servers are, keep one version of CentOS and one version of Ubuntu running in a VM, and have these notify you when updates are available. When updates are available, test against these VMs, and do the local repository thing suggested by another person here. Do one system at a time to make sure something doesn't kill everything at once.

      Web based apps with admin privs are fine as long as they're only accessible via the intranet, strongly passworded, and no one else knows they're there. If you need to do it remotely, VPN in to the site and SSH into each box. You're an Administrator, start administratorizing. Some things just shouldn't be automated.

    • Me? I have a web doohickey to let me know when updates are available. A cron job runs yum nightly and a pattern match identifies whether or not updates are needed, which shows up on my homepage. So it doesn't DO the update, but it makes it easy to see what needs to be done.

      Care to publish this somewhere? To be honest, knowing which updates are available is a bigger concern to me than actually installing them. The latter is more of a nice-to-have, but all the other suggestions concentrate mostly on that aspect.

      I'm close to just knuckling down and writing such a thing myself but it's always nice to not start from scratch.

    • 2) How secure is it? You are effectively granting root privs to a website - not always a good idea.

      Forgot to reply to that part... the way the RHN works is that there is a cron job that runs on the client to ask the server once an hour whether there are any requested updates. The server just provides the list of packages that the user wants updated, and the local cron job pulls those packages from the pre-approved repository/ies only.

      So yeah, it's still granting a fair deal of power to the website but it's not like someone could use it to run arbitrary commands on the client. But as long as everything is
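
      A DIY imitation of that client-side poll could be as small as an hourly cron job like this (the URL and list format are assumptions; RHN's real protocol is more involved):

          #!/bin/sh
          # /etc/cron.hourly/pull-approved-updates -- illustrative only
          LIST=$(curl -fs "http://updates.example.com/approved/$(hostname).txt") || exit 0
          # update only the packages the admin queued for this host, from the enabled repos only
          [ -n "$LIST" ] && yum -y -q update $LIST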

  • If you can script, it should be fairly easy. Here is how I would do it (we run mostly Gentoo servers and a mixture of Windows, Ubuntu (and Ubuntu-based) and RPM distros, but the guys using Linux customise so heavily and are tech savvy enough to keep themselves up to date.)

    Set up sshd on every desktop, with key authorization (we do this with the gentoo servers.)

    With a script and cron job you should be able to push them to run updates regularly. But you can just use the normal update tools and a local repo that

    • Re:Can You Script? (Score:5, Informative)

      by dns_server ( 696283 ) on Monday April 27, 2009 @04:32AM (#27727659)
      The corporate product is Landscape: http://www.canonical.com/projects/landscape [canonical.com]
      • Re: (Score:3, Insightful)

        by AlterRNow ( 1215236 )

        I have a question about Landscape.

        Is it possible to run your own server?
        If not, isn't it just another piece of vendor lock-in?

        I'm interested in using it but I don't want to depend on Canonical. For example, what if my internet connection goes down? I'd lose the ability to use Landscape at all, right?

        • by Bert64 ( 520050 )

          If your internet connection goes down, where will you get updates from?

          • The mirror on my LAN that finished updating 5 minutes before the connection dropped.
            I would probably use it more to monitor the machines than to do updates anyway, as I only own 4 of the 7 active ones on the network; the others belong to other members of the family.

            • Mirroring CentOS or Fedora is fairly easy, although if you can afford it, please contribute back to the community by making your mirror available externally. (Rate limit it, but make it available: please don't be a freeloader.)

              Mirroring RHEL, on which CentOS is based, is pretty awkward. Since the 'yum-rhn-plugin' knows to talk only to the authorized, secured RedHat repository or a RedHat Satellite Server, you pretty much have to find a way to build a mirror machine for _each RedHat Distribution_, whether i3

              • I only mirror Ubuntu as I don't have any Fedora systems.
                And I do not have the connection to make my mirror public (75% throttle after 500MB upload), otherwise I would.

          • Re: (Score:3, Funny)

            by magarity ( 164372 )

            If your internet connection goes down, where will you get updates from?
             
            Congrats, you just volunteered to mail him the floppies.

          • If your server's Internet connection goes down, you have bigger things to worry about than updating.

            • You're assuming the server in question is outward facing. Maybe it's the mail server, which can still store and forward external mail and keep the internal mail flowing. Maybe it's one of the (probably many) Intranet servers that don't need Internet access to do their job. Maybe it's a computational server which can still be processing batch jobs while the connection is down. In most large corporate environments only a fraction of the servers have to actually access the Internet to do their jobs. Obviously

      • Thanks, I hadn't heard of Landscape before, although it seems to be Ubuntu-specific. I am rather surprised that no vendor has created a similar non-distro-specific solution.
  • by hax0r_this ( 1073148 ) on Monday April 27, 2009 @04:28AM (#27727637)
    Look into Puppet or CFEngine (we use CFEngine but are considering switching to Puppet eventually). They're both extremely flexible management tools that will trivially handle package management, but you can use them to accomplish almost any management task you can imagine, with the ability to manage or edit any file you want, run shell scripts, etc.

    The work flow goes something like this:
    1. Identify packages that need update (have a cron job run on every box to email you packages that need updating, or just security updates, however you want to do it)
    2. Update the desired versions in your local checkout of your cfengine/puppet files (the syntax isn't easily described here, but it's very simple to learn).
    3. Commit/push (note that this is the easy way to have multiple administrators) your changes. Optionally have a post commit hook to update a "master files" location, or just do the version control directly in that location.
    4. Every box has an (hourly? Whatever you like) cron job to update against your master files location. At this time (with splay so you don't hammer your network) each cfengine/puppet client connects to the master server, updates any packages, configs, etc., runs any scripts you associated with those updates, then emails you the results (or for extra credit build your own webapp). A minimal cron wrapper along these lines is sketched after this list.
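
    A minimal version of that cron wrapper, assuming a 0.24-era puppetd client (the urandom read is a crude stand-in for proper splay; cfengine users would call cfagent instead):

        #!/bin/sh
        # /usr/local/sbin/config-run -- called hourly from cron
        # sleep 0-599 seconds so the clients don't all hit the master at once
        sleep $(( $(od -An -N2 -tu2 /dev/urandom) % 600 ))
        exec /usr/sbin/puppetd --onetime --no-daemonize    # or: /usr/sbin/cfagent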
    • We use cfengine with close to 100 machines and it works quite well. My only gripe is that on Ubuntu 8.04 it has a bug such that it can't determine which packages are already installed. And since the desktop is Ubuntu's first priority, their maintenance of software for larger environments is abysmal.
    • by Dunkirk ( 238653 ) *

      My boss was going to have us use Puppet, but changed his mind before we got going. Instead, he now wants us to use Chef. We haven't gotten to the point of needing one yet, so I haven't checked out either, but he definitely knows what he's doing around a computer, so I thought it worth throwing out.

  • by shitzu ( 931108 )

    /etc/init.d/yum start

    and what do you know - the updates are installed automagically without any manual intervention

    • by cerberusss ( 660701 ) on Monday April 27, 2009 @05:06AM (#27727819) Journal

      the updates are installed automagically without any manual intervention

      I'm not sure that's a good idea on a server. Why would you mindlessly update packages on a server when there's no actual reason to do so?

      • by supernova_hq ( 1014429 ) on Monday April 27, 2009 @05:32AM (#27727925)
        Because as any decent linux-server-farm admin, you have a closely controlled local repository mirror that only has updates you specifically add.
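
        For example, each box can carry a repo definition that points only at the controlled mirror (the repo id and URL below are made up), so an unattended 'yum -y update' can never pull in anything unvetted:

            # /etc/yum.repos.d/local-updates.repo (illustrative)
            [local-updates]
            name=Controlled local updates
            baseurl=http://repo.example.com/centos/5/updates/
            enabled=1
            gpgcheck=1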
      • by BruceCage ( 882117 ) on Monday April 27, 2009 @05:51AM (#27728007)

        I'd say that it depends on a lot of factors really.

        First of all it depends on how mission critical the services that run on that system are considered and what kind of chances you're willing to take that a particular package might break something. The experience and available time of your system administrator also plays a significant role.

        There's also the very highly unlikely scenario that a certain update might include "something bad", for example when the update servers are compromised. See Debian's compromises at Debian Investigation Report after Server Compromises [debian.org] from 2003, Debian Server restored after Compromise [debian.org] from 2006, and Fedora's at Infrastructure report, 2008-08-22 UTC 1200 [redhat.com].

        I currently manage just a single box (combination of a public web server and internal supporting infrastructure) for the company I work at and have it automatically install both security and normal updates.

        I personally trust the distro maintainers to properly QA everything that is packaged. Also, I don't think any single system administrator has the experience or knowledge to be able to actually verify whether or not an update is going to be installed without any problems. The best effort one can make is determine whether or not an update is really needed and then keep an eye on the server while the update is being applied.

        In the case of security updates it's a no-brainer for me, they need to be applied ASAP. I haven't had the energy to set up a proper monitoring solution and I've never even seen Red Hat Network in action. So if I had to manually verify available updates (or even set up some shell scripts to help me here) it would be just too much effort considering the low mission criticality of the server. If there does happen to be a problem with the server I'll find out about it fast enough, and then I'll take a peek at the APT log and take it from there.

  • Func (Score:2, Interesting)

    by foobat ( 954034 )

    https://fedorahosted.org/func/ [fedorahosted.org]

    I know it's got Fedora in its name, but it's been accepted as a package into Debian (and thus Ubuntu).

    It's pretty cool: designed to control a lot of systems at once and avoid having to ssh into them all individually, has a built-in certificate system, a bunch of modules written for it already, is usable from the command line so you can easily add it into your scripts, and has a python api so if you really wanted to you could throw together some django magic if you wanted a web

  • Why don't you determine which packages you DON'T want updated automatically and add them to an exclude list on each machine? Then you can run yum update from cron.daily to update the accepted packages, and set up a cron job to run an hour or so after the update which checks for other available package updates. It's pretty simple to run yum check-update and pipe the output into an email.

    I have no idea if you can do this with apt but I don't see why not.
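
    Roughly, per box (the exclude patterns and address are examples; --exclude can be given several times or set permanently via the exclude= line in yum.conf):

        #!/bin/sh
        # /etc/cron.daily/auto-update -- apply everything except the excluded packages,
        # then mail whatever is still outstanding so it can be handled by hand
        yum -y -q --exclude='kernel*' --exclude='mysql*' update
        HELD=$(yum -q check-update)
        [ -n "$HELD" ] && echo "$HELD" | mail -s "$(hostname): updates held back" admin@example.com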
    • Re: (Score:3, Insightful)

      by richlv ( 778496 )

      that won't quite work - most likely, the submitter does not want a particular list of packages to never update, but instead wants to evaluate individual patches, so the decision is based on the exact patch, not made for all possible patches to a particular package.

      • by smoker2 ( 750216 )
        Hence the yum check-update and subsequent email! You have to get involved if you want to be picky. Unless you know an update is available, how do you know whether or not to apply it?
  • by AVee ( 557523 ) <slashdot@a[ ].org ['vee' in gap]> on Monday April 27, 2009 @04:39AM (#27727689) Homepage
    Instead of a fancy web solution, you could use the script and scriptreplay commands on each system. Do whatever you need to do once on one system, but record it using script. After that, replay the script on each of the other systems. You could either manually or automatically log on to each system and start the replay, or you could set up a cron job which fetches and replays the script you publish somewhere.

    Not very fancy, but it will get the job done, and it will work for more than just updates; you could also use it to e.g. change settings or add packages. Or basically anything else you can do from a shell in a repeatable way.

    Check man 1 script and man 1 scriptreplay for details.
  • by MrMr ( 219533 ) on Monday April 27, 2009 @04:49AM (#27727753)
    1) yum -e whateveryoudontneed
    2) chkconfig yum-updatesd on
    3) Make sure do_update = yes, download_deps = yes, etc are set in yum-updatesd.conf
    4) /etc/init.d/yum-updatesd start
    This makes your yum system self-updating (a sample yum-updatesd.conf excerpt follows).
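
    Step 3 in /etc/yum/yum-updatesd.conf would end up looking something like this (an excerpt only; check the key names against the packaged default file, since the dependency option is usually spelled do_download_deps):

        [main]
        run_interval = 3600
        emit_via = syslog
        do_download = yes
        do_download_deps = yes
        do_update = yes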
    • - not cross-distribution
      - yum can make mistakes (e.g. move your config files around)
      - even if your binaries are updated, the running servers are still executing the old (unlinked) code; you'll have to restart your services eventually
      - if there is a critical kernel patch, you'll even have to reboot (probably less problematic if you run some virtualisation like Xen)
      There is no such thing as a perfect self-updating system that doesn't need your supervision (although I heard good things

      • You'd also need to handle the situation where a package needs to update its configuration file (because of a configuration format change); in that case the package should be downloaded but not installed, because you'll need to tweak the file.

      • by MrMr ( 219533 )
        not cross-distribution
        http://packages.debian.org/unstable/admin/yum [debian.org]
        yum can make mistakes (e.g. move your config files around)
        rpm saves modified files, and sends a warning. What more do you need?
        you'll have to restart your services eventually
        any good post-installation for a service comes with a 'service X restart' command.
        if there is a critical kernel patch, you'll even have to reboot
        So critical kernel patches are the only thing requiring a single ssh reboot command.
  • Great! (Score:5, Funny)

    by Frogbert ( 589961 ) <frogbertNO@SPAMgmail.com> on Monday April 27, 2009 @04:58AM (#27727785)

    I for one look forward to the rational, well thought out debate on the various pros and cons of Linux distributions and their package managers that this story will become.

  • by ^Case^ ( 135042 ) on Monday April 27, 2009 @05:16AM (#27727847)

    Puppet [reductivelabs.com] is a tool made to do exactly what you're asking for by abstracting the specific operating system into a generic interface. It might be worth checking out. Also there's a newcomer called chef [opscode.com]. And then there's the oldies like cfengine [cfengine.org].

  • On Debian type systems cron-apt is extremely useful for having remote machines notify via email and/or syslog of available updates. By default it downloads but does not install new packages, though it can be set up to do anything you can do with apt-get, so for example you could set it up to automatically install security patches but not other packages. I don't have enough similar machines to benefit from using clusterssh, but cron-apt+clusterssh would seem to be ideal for remote package management of m
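
    One common recipe for the "security patches only" case is a second sources list holding just the security lines plus a cron-apt action file that upgrades from it (the file names are examples, and the exact action-file syntax should be checked against the cron-apt documentation):

        # /etc/apt/security.sources.list -- security archive only
        deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe

        # /etc/cron-apt/action.d/5-security -- arguments handed to apt-get
        upgrade -y -o quiet=1 -o Dir::Etc::SourceList=/etc/apt/security.sources.list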
  • I would say that FAI [uni-koeln.de] is worth looking at. You will have full control over which updates are applied.
    • by HogGeek ( 456673 )

      Are you seriously suggesting software that hasn't seen an update since 2005?

      • by rolfc ( 842110 )
        Where have you got that idea from?


        As far as I can see the last update is Wed, 22 Apr 2009 13:39:58 +0200.
        • by HogGeek ( 456673 )

          Actually, I meant to reply to the previous post...

          arusha hasn't seen any activity since at least 2007, and nothing announced since 2005...

  • I think that you are not putting your efforts where it matters.
    What is important is that the critical services run properly on each server. Sure that can be affected by patching but also by many other factors. So don't focus solely on the patching, focus instead on making sure all the services are running properly.
    You should have your own scripts that check that each server is responding as required. Make your test suite as strong as you can and improve it each time a new problem crops up that wasn't caught

  • There are oodles of configuration management tools out there that do at least most of what you want. My personal recommendation is Bcfg: http://www.bcfg2.org/ [bcfg2.org] . It doesn't quite have the entire web interface (yet), but it is fantastic for keeping everything up to date and clean and telling you when you have outliers. I currently use it for the 350 or so support machines for the 5th fastest computer in the world [anl.gov], and I know _much_ larger installations are using it too.

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Monday April 27, 2009 @07:13AM (#27728397)

    That's your job. Bash CLI, the CLI toolkit, CLI Emacs, key-based ssh and a well-maintained, well-documented pack of your own scripts in your favourite interpreted PL are just what it takes to do this sort of thing. No fancy bling-bling required or wanted. It would make your life worse, not easier, in the long run.

  • by viralMeme ( 1461143 ) on Monday April 27, 2009 @07:19AM (#27728433)
    "I administer several Ubuntu desktops and numerous CentOS servers. One of the biggest headaches is keeping them up-to-date with each distro's latest bugfix and security patches"

    My advice is, if it ain't broke don't fix it, especially on a production server. Have two identical systems and test the latest bugfix on the spare before you roll it out to the live system. You don't know what someone else's bugfix is going to break, and you would have no way of rolling it back.
  • If a system is working properly, then applying updates cannot make it better. It can leave it the same or make it worse. I install my servers and leave them alone for about 3 years, then re-install and run them for another 3 years, then I toss them away and install new ones. Most Linux problems are caused by finger trouble, so a security problem has to be very serious before I would apply a patch.
    • Re: (Score:3, Insightful)

      by neurovish ( 315867 )

      You must not handle any credit cards, financial data, or health records then. Some of us are forced to keep servers updated to within a couple of weeks of the latest patches or risk a dreaded "non-compliance" status.

      Personally, I wouldn't want any of my data trusted to an infrastructure that is only updated once every three years, but in some places that approach makes sense as well. We've certainly taken that approach with SLES servers, since Novell's updates are usually more malicious than whatever hole they

  • In all honesty I wouldn't set up a solution. Keep in mind that with different release schedules and different release criteria there may often be a day or two of lag between them, and you still need to validate the patch/fix before putting it into a production environment. So given Ubuntu and CentOS, let's say the CentOS patch comes out 2 days later: you are either doing a regression test twice on the two systems, or waiting till both are released and running a single regression test.

    If you are hell bent on

  • At the behest of my manager, I've been working on implementing Spacewalk here at my current contract. It's the open-sourced version of Redhat's Satellite.

    Downside: they've only recently released 0.5, and there are still lots of bugs (help doesn't, in general, work).

    But you can inventory, and push, updates to rpms; ditto for configuration files, and there's kickstart support. It also supports distros other than RH and Fedora; I've been working with CentOS (yeah, I know, just the serial numbers filed off), but it app

  • Regarding the CentOS side, you could try func [fedorahosted.org] or if you have an oracle installation lying around, spacewalk [redhat.com].
