Cross-Distro Remote Package Administration?
tobiasly writes "I administer several Ubuntu desktops and numerous CentOS servers. One of the biggest headaches is keeping them up-to-date with each distro's latest bugfix and security patches. I currently have to log in to each system, run the appropriate apt-get or yum command to list available updates, determine which ones I need, then run the appropriate install commands. I'd love to have a distro-independent equivalent of the Red Hat Network where I could do all of this remotely using a web-based interface. PackageKit seems to have solved some of the issues regarding cross-distro package maintenance, but their FAQ explicitly states that remote administration is not a goal of their project. Has anyone put together such a system?"
Tools exist (Score:5, Informative)
Re: (Score:2)
Exactly. This is basically your DIY RH Satellite server. It's the model we use, although we don't have the Ubuntu machines to deal with.
Re:Tools exist (Score:5, Informative)
When you want to push a package update out to all boxes, copy it from the public repository to the local one.
Assuming of course all boxes have the same version of the OS, the same packages installed, etc.
I suggest tentakel [biskalar.de], which the OP could have found in 2 minutes with Google. I did.
http://www.google.co.uk/search?q=multiple+linux+remote+administration [google.co.uk] The first hit mentions it.
Re:Tools exist (Score:5, Interesting)
Assuming of course all boxes have the same version of the OS, the same packages installed, etc.
And segregating things on the system that hosts the public repository is impossible?
I don't think any of this is exactly rocket science. On my home LAN where I use FreeBSD, for example, I have a motley collection of hardware ranging from Soekris boxes to Opterons. Everything gets built on a central build server and distributed automagically from there using a setup similar to what was suggested to the OP. Not a single box has the same collection of userland software installed, while certain boxes do get their own custom world/kernel. None of this really requires more effort or involvement on my part than some careful thought beforehand.
One of the nice advantages of a centralised setup is that it accommodates a clean way of testing things beforehand. Rolling out the latest but broken version of "foo" to multiple systems is something to be avoided.
Re: (Score:3, Insightful)
Assuming of course all boxes have the same version of the OS, the same packages installed, etc.
Regarding having the same packages installed, you "only" need to make sure your local repos have all the packages that are used across your install base. The machines will then pull only their own updates, with no fuss. Regarding the heterogeneity... tough cookie. Either you have something more or less homogeneous and you can automate the process, or you're stuck doing things by hand. Especially once you enter the realm of "review each available update by hand and determine whether it's relevant", as the OP wants to do.
Re: (Score:2)
It's obvious that my biggest mistake when submitting this question was the assumption that many people would actually know what the Red Hat Network does. I want an open source system that I can run on my own server that duplicates the functionality of RHN. That means:
Re: (Score:2)
Assuming of course all boxes have the same version of the OS, the same packages installed, etc.
How so? I'm fairly sure most repositories deal with multiple versions, and just because a repo has a package doesn't mean it's installed on all computers that connect to it. A single box can serve as both a Debian and an RPM repo.
Re: (Score:2)
Re: (Score:3, Interesting)
Re:Tools exist: we do it this way. (Score:5, Interesting)
Re: (Score:2)
How about a sample? I use Gentoo. My servers do this every night:
/usr/bin/emerge --quiet --color n --sync && update-eix --quiet && glsa-check -l affected
I could just as well apply every fix automatically, but I like to see it before it goes in.
Re:Tools exist: we do it this way. (Score:4, Insightful)
You could also use nagios and check_apt/check_yum to alert you when security updates are outstanding, put a script for installing updates on every box (different script for centos/ubuntu, but same syntax), create a user who is added to sudoers for only that script, and create an ssh key for authentication...
Then you can feed the list of hosts that need updating into a script which will ssh to each one in sequence and execute the update script followed if necessary by a reboot..
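A minimal sketch of that final loop, assuming the per-box script is installed as /usr/local/sbin/run-updates and the restricted account is called updater (both names invented here for illustration):

#!/bin/sh
# hosts.txt: one hostname per line, e.g. generated from the Nagios alerts
while read host; do
    echo "== $host =="
    # sudoers on each box permits 'updater' to run only this one script
    ssh -i ~/.ssh/updater_key updater@"$host" 'sudo /usr/local/sbin/run-updates'
done < hosts.txt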
Re: (Score:2)
Resolving the large numbers of packages and their minor variations such as i386, x86_64, etc. across all the systems means a potentially large number of distinct Nagios query templates. Generating and managing those directly is a fascinating problem. Building it from scratch for a single in-house usage is a lot of work, usually better spent elsewhere.
It really makes me wish that RedHat Satellite Server did _not_ require an Oracle backend. For RedHat themselves, with so many thousands of clients, it makes sense.
Re: (Score:3, Insightful)
You sub 7-digit uid guys always do everything the hard way!
Re:Tools exist (Score:4, Informative)
As another "sub 7-digit guy" - there is a reason for this... There is no way I'm going to let over 60 servers automatically install patches without me checking them first! Download, yes. Install, no.
At work we use cfengine to manage the servers, with a home-built script that allows servers to install packages (of a specified version). Package is checked and OK? Add it to the bottom of the text file, GPG sign it, and push it into the repository. cfengine takes care of the rest (our cf system is slightly non-standard, so everything has to be signed and go through subversion to actually work).
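A rough sketch of that approve-sign-push step, assuming the approved list is a plain text file tracked in the same subversion repo (the file names and package version here are invented):

# append the approved package version to the signed list
echo "nginx-0.6.35-1.el5.x86_64" >> approved-packages.txt
# re-sign the list so the clients will accept it
gpg --armor --detach-sign --yes approved-packages.txt
# push through subversion, as described above
svn commit -m "approve nginx update" approved-packages.txt approved-packages.txt.asc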
Re: (Score:2)
Re: (Score:3, Informative)
It's easier to run your own update mirror and have your clients pull the updates list from that.
IMHO.
Re: (Score:3, Informative)
For RedHat? No. 'yum update' is fairly resource intensive. And changing applications in the middle of system operations is _BAD, BAD, BAD_ practice. I do _not_ want you silently updating MySQL or HTTPD in the midst of production services, because the update process may introduce an incompatibility with the existing configurations, especially if some fool has been doing things like installing CPAN or PHP modules without using RPM packages, or has manually stripped out and replaced Apache without RPM management.
Re: (Score:2)
Just use the exclude option in yum. When you set up a box, it's easy to create a yum script, run automatically or manually by whatever method you prefer, that excludes the packages you don't want installed. Exclude the kernel on boxes that have a custom kernel and exclude production apps on server machines.
The downside is that it is up to your staff to keep track of what doesn't auto-update on what machine. This is easy with the RHN service. I've heard good things about puppet, but haven't tried it yet.
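For reference, the exclude option mentioned above goes in yum's config; a sketch (the package globs are examples only):

# /etc/yum.conf
[main]
exclude=kernel* mysql* httpd*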
Re: (Score:2)
Re: (Score:2)
What PeterBrett said. Where I work, we can't do it the easy way because of security requirements, but the methodology still works. We just have to rsync the various repos each day, and the cron job just points to the local repo on each box instead of central ones hanging out somewhere.
Re: (Score:2)
Thanks for the suggestion, but this and other solutions like Puppet and CFEngine solve only half the problem, which is actually pushing out the patches. The part that's really missing is a way to easily identify the available patches from the distro for that particular machine. When I log on to Red Hat Network it has a list like "10 of your 12 systems are up to date" and you can see which systems need updating, which packages are available for install on them, and actually schedule those patches for installation.
Webmin (Score:5, Interesting)
Re: (Score:3, Interesting)
Re: (Score:2)
it works well with YUM too - in fact, Webmin is one of the best admin things around. I think every project should be mandated to create a webmin administrative module before being allowed into the wild :)
Re: (Score:2)
Just be sure to turn off the root user, set up SSL, and change the port number to something else. I also like to limit webmin to a list of known IP addresses via its admin interface AND in iptables.
Been using Webmin for longer than i can remember.
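A sketch of the iptables half of that advice, assuming Webmin was moved from its default port 10000 to 12321 and the admin workstations live in 192.0.2.0/24 (both assumptions):

iptables -A INPUT -p tcp --dport 12321 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 12321 -j DROP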
clusterssh (Score:5, Interesting)
Re: (Score:2)
So if you want to check the filesystems on many machines would it be called a clusterfsck?
Or maybe that's what you call it when all the filesystems are damaged at the same time.
You don't want it (Score:5, Interesting)
I admin several busy CentOS servers for my company. You probably don't want a fully web-based application:
1) what happens when some RPM goes awry and borks your server(s)? Yes, it's pretty rare, but it DOES happen. In my case, I WANT to do them one by one in ascending order of importance so that if anything is borked, it's most likely to be my least important systems!
2) How secure is it? You are effectively granting root privs to a website - not always a good idea. (rarely, never)
Me? I have a web doohickey to let me know when updates are available. A cron job runs yum nightly and a pattern match identifies whether or not updates are needed, which shows on my homepage. So it doesn't DO the update, but it makes it easy to see what needs to be done.
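Something like that doohickey could be approximated in a few lines of shell; a sketch (the output paths are invented, and error handling is omitted):

#!/bin/sh
# nightly cron: 'yum check-update' exits 0 when clean and 100 when updates are pending
if yum -q check-update > /var/www/html/pending-updates.txt 2>/dev/null; then
    echo "up to date" > /var/www/html/update-status.txt
else
    echo "updates pending" > /var/www/html/update-status.txt
fi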
Re:You don't want it (Score:5, Informative)
Depending on how uniform your servers are, keep one version of CentOS and one version of Ubuntu running in a VM, and have these notify you when updates are available. When updates are available, test against these VMs, and do the local repository thing suggested by another person here. Do one system at a time to make sure something doesn't kill everything at once.
Web based apps with admin privs are fine as long as they're only accessible via the intranet, strongly passworded, and no one else knows they're there. If you need to do it remotely, VPN in to the site, and SSH into each box. You're an Administrator, start administratorizing. Some things just shouldn't be automated.
Re: (Score:2)
Certainly not englishicating :-)
Re: (Score:2)
Me? I have a web doohickey to let me know when updates are available. A cron job runs yum nightly and a pattern match identifies whether or not updates are needed, which shows on my homepage. So it doesn't DO the update, but it makes it easy to see what needs to be done.
Care to publish this somewhere? To be honest, knowing which updates are available is a bigger concern to me than actually installing them. The latter is more of a nice-to-have, but all the other suggestions concentrate mostly on that aspect.
I'm close to just knuckling down and writing such a thing myself but it's always nice to not start from scratch.
Re: (Score:2)
2) How secure is it? You are effectively granting root privs to a website - not always a good idea.
Forgot to reply to that part... the way the RHN works is that there is a cron job that runs on the client to ask the server once an hour whether there are any requested updates. The server just provides the list of packages that the user wants updated, and the local cron job pulls those packages from the pre-approved repository/ies only.
So yeah, it's still granting a fair deal of power to the website but it's not like someone could use it to run arbitrary commands on the client. But as long as everything is
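A sketch of that pull model from the client side, with the server URL and file locations invented for illustration:

#!/bin/sh
# hourly cron: fetch the list of packages the admin has approved for this host
curl -fsS "https://mgmt.example.com/approved/$(hostname).txt" -o /tmp/approved.txt || exit 0
# update only those packages, pulled from the pre-approved repositories in yum's config
[ -s /tmp/approved.txt ] && xargs yum -y update < /tmp/approved.txt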
Can You Script? (Score:2)
If you can script, it should be fairly easy. Here is how I would do it (we run mostly gentoo servers and a mixture of windows, Ubuntu (and Ubuntu-based) and RPM distros, but the guys using Linux customise so heavily and are tech-savvy enough that they keep themselves up to date.)
Set up sshd on every desktop, with key authorization (we do this with the gentoo servers.)
With a script and cron job you should be able to push them to run updates regularly. But you can just use the normal update tools and a local repo that
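The one-time key setup mentioned above looks roughly like this (the key file name and host are placeholders):

# generate a dedicated update key, then install the public half on each desktop
ssh-keygen -t rsa -f ~/.ssh/update_key
ssh-copy-id -i ~/.ssh/update_key.pub user@desktop1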
Re:Can You Script? (Score:5, Informative)
Re: (Score:3, Insightful)
I have a question about Landscape.
Is it possible to run your own server?
If not, isn't it just another piece of vendor lock-in?
I'm interested in using it but I don't want to depend on Canonical. For example, what if my internet connection goes down? I'd lose the ability to use Landscape at all, right?
Re: (Score:2)
If your internet connection goes down, where will you get updates from?
Re: (Score:2)
The mirror on my LAN that finished updating 5 minutes before the connection dropped.
I would probably use it more to monitor the machines, as I only own 4 of the 7 active ones on the network, the others are other members of the family, than use it to do updates anyway.
Re: (Score:2)
Mirroring CentOS or Fedora is fairly easy, although if you can afford it, please contribute back to the community by making your mirror available externally. (Rate limit it, but make it available: please don't be a freeloader.)
Mirroring RHEL, on which CentOS is based, is pretty awkward. Since the 'yum-rhn-plugin' knows to talk only to the authorized, secured RedHat repository or a RedHat Satellite Server, you pretty much have to find a way to build a mirror machine for _each RedHat Distribution_, whether i386 or x86_64.
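For the easy CentOS/Fedora case, the mirror itself is usually just a cron'd rsync; a sketch (the mirror host and local path are placeholders, and --bwlimit honours the rate-limiting suggestion above):

rsync -avz --delete --bwlimit=2000 rsync://mirror.example.org/centos/ /srv/mirror/centos/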
Re: (Score:2)
I only mirror Ubuntu as I don't have any Fedora systems.
And I do not have the connection to make my mirror public ( 75% throttle after 500mb upload ) otherwise, I would.
Re: (Score:3, Funny)
If your internet connection goes down, where will you get updates from?
Congrats, you just volunteered to mail him the floppies.
Re: (Score:2)
If your server's Internet connection goes down, you have bigger things to worry about than updating.
Re: (Score:2)
You're assuming the server in question is outward facing. Maybe it's the mail server, which can still store and forward external mail and keep the internal mail flowing. Maybe it's one of the (probably many) Intranet servers that don't need Internet access to do their job. Maybe it's a computational server which can still be processing batch jobs while the connection is down. In most large corporate environments only a fraction of the servers have to actually access the Internet to do their jobs. Obviously
Re: (Score:2)
Puppet or CFEngine + Version Control (Score:5, Informative)
The work flow goes something like this:
1. Identify packages that need update (have a cron job run on every box to email you packages that need updating, or just security updates, however you want to do it)
2. Update the desired versions in your local checkout of your cfengine/puppet files (the syntax isn't easily described here, but it's very simple to learn; a sketch follows this list).
3. Commit/push your changes (note that this is the easy way to have multiple administrators). Optionally have a post-commit hook to update a "master files" location, or just do the version control directly in that location.
4. Every box has an (hourly? Whatever you like) cron job to update against your master files location. At this time (with splay so you don't hammer your network) each cfengine/puppet client connects to the master server, updates any packages, configs, etc, runs any scripts you associated with those updates, then emails (or for extra credit build your own webapp) you the results.
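As a sketch of step 2 in Puppet's syntax (the package name and version are examples only; cfengine's syntax differs):

package { 'openssl':
  ensure => '0.9.8e-12.el5',
}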
Re: (Score:2)
Re: (Score:2)
My boss was going to have us use Puppet, but changed his mind before we got going. Instead, he now wants us to use chef. We haven't gotten to the point of needing either one yet, so I haven't checked out either one, but he definitely knows what he's doing around a computer, so I thought it worth throwing out.
Re: (Score:3, Funny)
I don't want the Swedish chef Bork Bork Borking up the systems...
Re: (Score:2)
[Citation needed] Ye ol' Swedish Chef is getting a little long in the tooth. For those with higher UID's:
http://www.youtube.com/watch?v=jc_UCc8EQcQ&NR=1 [youtube.com]
In centos you could try (Score:2, Informative)
/etc/init.d/yum start
and what do you know - the updates are installed automagically without any manual intervention
Re:In centos you could try (Score:4, Interesting)
the updates are installed automagically without any manual intervention
I'm not sure that's a good idea on a server. Why would you mindlessly update packages on a server when there's no actual reason to do so?
Re:In centos you could try (Score:4, Insightful)
Re: (Score:3, Funny)
Re:In centos you could try (Score:4, Informative)
I'd say that it depends on a lot of factors really.
First of all it depends on how mission critical the services that run on that system are considered and what kind of chances you're willing to take that a particular package might break something. The experience and available time of your system administrator also plays a significant role.
There's also the very highly unlikely scenario that a certain update might include "something bad", for example when the update servers are compromised. See Debian's compromises at Debian Investigation Report after Server Compromises [debian.org] from 2003, Debian Server restored after Compromise [debian.org] from 2006, and Fedora's at Infrastructure report, 2008-08-22 UTC 1200 [redhat.com].
I currently manage just a single box (combination of a public web server and internal supporting infrastructure) for the company I work at and have it automatically install both security and normal updates.
I personally trust the distro maintainers to properly QA everything that is packaged. Also, I don't think any single system administrator has the experience or knowledge to be able to actually verify whether or not an update is going to be installed without any problems. The best effort one can make is determine whether or not an update is really needed and then keep an eye on the server while the update is being applied.
In the case of security updates it's a no-brainer for me: they need to be applied ASAP. I haven't had the energy to set up a proper monitoring solution and I've never even seen Red Hat Network in action. So if I had to manually verify available updates (or even set up some shell scripts to help me here) it would be just too much effort considering the low mission criticality of the server. If there does happen to be a problem with the server I'll find out about it fast enough, and then I'll take a peek at the APT log and take it from there.
Func (Score:2, Interesting)
https://fedorahosted.org/func/ [fedorahosted.org]
I know it's got Fedora in its name, but it's been accepted as a package into Debian (and thus Ubuntu).
It's pretty cool: designed to control a lot of systems at once and avoid having to ssh into them all individually, it has a built-in certification system, a bunch of modules already written for it, and is usable from the command line so you can easily add it into your scripts. It also has a python api, so if you really wanted to you could throw together some django magic for a web interface.
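Func's command line follows a target/module/method pattern; a sketch of checking for updates across many boxes at once (the host glob is an example):

func "*.example.org" call command run "yum -y check-update"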
Up2date ? (Score:2)
I have no idea if you can do this with apt but I don't see why not.
Re: (Score:3, Insightful)
that won't quite work. Most likely, the submitter does not want a particular list of packages to never update; instead he wants to evaluate individual patches, so the decision is based on the exact patch, not made for all possible patches to a particular package.
Re: (Score:2)
Use script + scriptreplay (Score:3, Insightful)
Not very fancy, but it will get the job done, and it will work for more than just updates; you could also use it to e.g. change settings or add packages. Or basically anything else you can do from a shell in a repeatable way.
Check man 1 script and man 1 scriptreplay for details.
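A sketch of the record-and-replay cycle (file names are arbitrary); note that scriptreplay replays the recorded output for review rather than re-executing the commands:

# record a session along with timing data
script -t 2> update.tm update.log
# ...run the update inside the recorded shell, then 'exit'...
# later, replay the session at its original speed
scriptreplay update.tm update.log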
yum-updatesd is meant for that (Score:5, Informative)
2) chkconfig yum-updatesd on
3) Make sure do_update = yes, download_deps = yes, etc are set in yum-updatesd.conf
4)
This makes your yum system self-updating.
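A sketch of the relevant /etc/yum/yum-updatesd.conf stanza from step 3 (option names as given above; exact names can vary by version, so check the man page):

[main]
run_interval = 3600
do_update = yes
download_deps = yes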
Re: (Score:2)
- not cross-distribution
- yum can make mistakes (e.g. move your config files around)
- even if your binaries are updated, the running servers are still executing the old (unlinked) code, so you'll have to restart your services eventually
- if there is a critical kernel patch, you'll even have to reboot (probably less problematic if you run some virtualisation like Xen)
There is no such thing as a perfect self-updating system that doesn't need your supervision (although I heard good things
Re: (Score:2)
You'd also need an exclusion mechanism: if the package claims to need to update the configuration file (because of a configuration format change), the package should be downloaded but not installed, because you'll need to tweak the file.
Re: (Score:2)
http://packages.debian.org/unstable/admin/yum [debian.org]
yum can make mistakes (e.g. move your config files around)
rpm saves modified files, and sends a warning. What more do you need?
you'll have to restart your services eventually
any good post installation for a service comes with a 'service X restart' command.
if there is a critical kernel patch, you'll even have to reboot
So critical kernel patches are the only thing requiring a single ssh reboot command.
Great! (Score:5, Funny)
I for one look forward to the rational, well-thought-out debate on the various pros and cons of Linux distributions and their package managers that this story will become.
Re: (Score:3, Funny)
Dunno, should this be modded funny, redundant or flamebait? :)
Re: (Score:2)
Puppet, chef, cfengine (Score:3, Informative)
Puppet [reductivelabs.com] is a tool made to do exactly what you're asking for by abstracting the specific operating system into a generic interface. It might be worth checking out. Also there's a newcomer called chef [opscode.com]. And then there's the oldies like cfengine [cfengine.org].
cron-apt+clusterssh (Score:2)
Fully automatic install (Score:2)
Re: (Score:2)
Are you seriously suggesting software that hasn't seen an update since 2005?
Re: (Score:2)
As far as I can see the last update is Wed, 22 Apr 2009 13:39:58 +0200.
Re: (Score:2)
Actually, I meant to reply to the previous post...
arusha hasn't seen any activity since at least 2007, and nothing announced since 2005...
Look at the problem differently (Score:2, Insightful)
I think that you are not putting your efforts where it matters.
What is important is that the critical services run properly on each server. Sure that can be affected by patching but also by many other factors. So don't focus solely on the patching, focus instead on making sure all the services are running properly.
You should have your own scripts that check that each server is responding as required. Make your test suite as strong as you can and improve it each time a new problem crops up that wasn't caught
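A trivially small example of the kind of check meant here, with the hosts and URLs invented:

#!/bin/sh
# alert (here: just print) when a critical service stops responding
for url in http://web1.example.com/ http://mail.example.com/; do
    curl -fsS -o /dev/null "$url" || echo "CHECK FAILED: $url"
done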
Configuration management (Score:2)
There are oodles of configuration management tools out there that do at least most of what you want. My personal recommendation is Bcfg2: http://www.bcfg2.org/ [bcfg2.org] . It doesn't quite have the entire web interface (yet), but it is fantastic for keeping everything up to date and clean and telling you when you have outliers. I currently use it for the 350 or so support machines for the 5th fastest computer in the world [anl.gov], and I know _much_ larger installations are using it too.
Answer: That's your job. (Score:3, Insightful)
That's your job. Bash CLI, the CLI toolkit, CLI Emacs, key-based ssh and a well-maintained, well-documented pack of your own scripts in your favourite interpreted PL are just what it takes to do this sort of thing. No fancy bling-bling required or wanted. It would make your life worse, not easier, in the long run.
never update a live system .. (Score:3, Interesting)
My advice is, if it ain't broke don't fix it, especially on a production server. Have two identical systems and test the latest bugfix on one, before you roll it out to the live system. You don't know what someone else's bugfix is going to break, and you would have no way of rolling it back.
Don't fix it if it ain't broke. (Score:2)
Re: (Score:3, Insightful)
You must not handle any credit cards, financial data, or health records then. Some of us are forced to keep servers updated to within a couple weeks of the latest patches or risk a dreaded "uncompliance" status.
Personally, I wouldn't want any of my data trusted to an infrastructure that is only updated once every three years, but in some places that approach makes sense as well. We've certainly taken that approach with SLES servers since Novell's updates are usually more malicious than whatever hole they
I wouldn't (Score:2)
In all honesty I wouldn't set up a solution. Keep in mind that with different release schedules and different release criteria there may often be a day or two of lag between them, and you still need to validate the patch/fix before putting it into a production environment. So given Ubuntu and CentOS, let's say the CentOS patch comes out 2 days later: you are either doing a regression test twice on the two systems, or waiting till both are released and running a single regression test.
If you are hell bent on
Spacewalk (Score:2)
At the behest of my manager, I've been working on implementing Spacewalk here at my current contract. It's the open-sourced version of Redhat's Satellite.
Downside: they've only recently released 0.5, and there are still lots of bugs (help doesn't, in general, work).
But you can inventory, and push, updates to rpms; ditto for configuration files, and there's kickstart support. It also supports distros other than RH and Fedora; I've been working with CentOS (yeah, I know, just the serial numbers filed off), but it appears to work.
func (Score:2)
Re:Remote admin of a UNIX box? (Score:4, Insightful)
Re:Remote admin of a UNIX box? (Score:5, Insightful)
Uh, right. Like putting ssh commands into a script?
ssh user@host aptitude update
Set up key based login and you don't even have to type passwords. By the sounds of it he needs to pay some attention to each individual machine anyway, as he has multiple distros and wants to determine which patches he needs for each box.
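A sketch building on that, using aptitude's simulate flag so you see what would change without applying it (hosts.txt is a placeholder):

for host in $(cat hosts.txt); do
    echo "== $host =="
    ssh user@"$host" 'aptitude update > /dev/null && aptitude -s -y safe-upgrade'
done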
Re: (Score:2)
Damned if I can find it...
Re:Remote admin of a UNIX box? (Score:5, Informative)
clusterssh
Re:Remote admin of a UNIX box? (Score:5, Informative)
It's called "dssh". Google is your "search" friend (we will ignore the evil side of Google at the moment... :-)
Re: (Score:2, Interesting)
http://www.debian-administration.org/article/Use_ssh_on_multiple_servers_at_one_time [debian-adm...ration.org]
Re: (Score:3, Funny)
Actually, I remember reading a year or so ago about a program that would allow you to run a specified command via ssh on a list of machines. You could do this with a shell-script (pass arguments), but I think the program also did it all in parallel and showed some output as well.
I think many rootkits do the same thing. You can run a command (via irc) on a list of machines and return output to the channel (in parallel). The best update and control solution is to rootkit your own boxes and maintain them via
Re:Remote admin of a UNIX box? (Score:5, Interesting)
Set up key based login and you don't even have to type passwords.
Since you basically need root access to do updates, this definitely poses a security hazard: if your client is compromised there is direct access to the server. Then again, an attacker could always use a keylogger to capture the password anyways.
If you even attempt to do this, I'd set up a different user account specifically for the process of updating, limit its rights accordingly, and then restrict the commands that can be executed (you can do this per key).
There may actually be better ways but I'm not a very experienced sysadmin. Most experience I have is from managing a single web server and my local desktop obviously. Be sure to correct me (in a friendly manner) if I'm wrong.
Then again, if you do this from the same machine as your normal account is located on you'll still have the same issues in case of a compromised client. Probably just best to limit every single account to just that what is specifically needed and setup proper host based intrusion detection (OSSEC?) to be notified when something goes wrong. This stuff is hard...
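Restricting what a key may do is done in authorized_keys; a sketch (the script path and key material are placeholders):

# ~/.ssh/authorized_keys on the managed server: this key may run only the update script
command="/usr/local/sbin/run-updates",no-port-forwarding,no-pty ssh-rsa AAAA... updater@adminbox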
Re: (Score:2)
You create a local script that runs on each server that "pulls" updates and installs them, logging the results (alerting if something failed.) If you need to do Out Of Schedule updates, you can manually kick off the updates using a limited priv account that has explicit (restricted) sudo ability.
Local packages are easier if they are all the same style package (I prefer dpkg's - apt is available for CentOS too.) Running a mixed distro system still means you have to build packages multiple times, which is a PITA.
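The restricted sudo ability mentioned here is one visudo line; a sketch (the account and script path are invented):

# /etc/sudoers fragment, edited via visudo
updater ALL = NOPASSWD: /usr/local/sbin/pull-updates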
Re: (Score:3, Informative)
Never use ssh+password. Always use ssh+PKI. I think you missed "key" and focused on "[no] password"
From Ubuntu wiki SSHHowto [ubuntu.com]:
If your SSH server is visible over the Internet, you should use public key authentication instead of passwords if at all possible.
Re: (Score:2, Informative)
Using keyfiles doesn't mean no password. Most ssh guides that talk about keyfiles actually suggest protecting them with a password. Why? Well, like the GP said (which you obviously somehow missed), as soon as one client is somehow compromised an attacker would have root access to every server.
Using passwords doesn't mean you don't use keyfiles. And using keyfiles doesn't mean you don't have to enter passwords. The way to go is ssh+PKI+password+ssh_agent.
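Concretely, ssh+PKI+password+ssh_agent means the key is passphrase-protected but you type the passphrase only once per session; a minimal sketch:

eval $(ssh-agent)
ssh-add ~/.ssh/update_key    # prompts for the key's passphrase once
ssh admin@host 'yum check-update'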
Re: (Score:3, Informative)
Exactly, the confusion here might be in the terminology: password versus passphrase.
Anyways, just using keys doesn't magically make everything more secure, it just negates brute force password attacks. From the few high profile cases I remember the compromise was the result of somebody's private key being compromised (e.g. the Debian compromises).
The only true solution is a combination of the principle of least privilege, sandboxing (SELinux etc.), proper monitoring and a whole host of other security measures.
Re: (Score:2)
The best way to handle a situation like this is to have an account used only for updating, have all logs going to a central syslog server, and have a log monitoring system like OSSEC send you alerts whenever the special account is used.
You should never see these alerts, except during maintenance windows.
Re:Remote admin of a UNIX box? (Score:4, Informative)
IMHO, a good sysadmin is the one who sees such issues and finds or writes a script to do it for them.
(That's "you're both right")
for address in $addresses; do
    ssh $address 'touch /var/lib/triggers/update'
done
With an obvious job on the machines watching for such trigger files (to avoid root access, etc.)
if [ -f $TRIGGERFILE ]; then
    yum -y update 2>&1 | tee /tmp/yum_trigger.log \
        && rm -f $TRIGGERFILE
fi
Re: (Score:2)
Note that Spacewalk currently requires Oracle, which means it might not be the best solution.
Re: (Score:2)
Even with Oracle, it's still a decent solution and works fairly well. I've had it up and running since the 0.1 release.
It has its problems, not least of which is the absolute resource hog that Oracle is, but it's better than having to log into every machine when an update comes out.
Exactly... (Score:2)
You hit the nail on the head. I have a cron job to do exactly that, and for 3 days after an "auto update", my Mythbuntu box shuts down on its own after running for about 54 hours. I am now wondering whether an update I should have avoided is the culprit.
By the way, can anyone point me to a resource that could help me determine what's going on on my box? Thanks.
Re: (Score:3, Informative)
Actually,
Re: (Score:2)
And is there such a repository for Windows that has roughly the same amount of software?