Patch the Linux Kernel Without Reboots
evanbro writes "ZDNet is reporting on ksplice, a system for applying patches to the Linux kernel without rebooting. ksplice requires no kernel modifications, just the source, the config files, and a patch. Author Jeff Arnold discusses the system in a technical overview paper (PDF). Ted Ts'o comments, 'Users in the carrier grade linux space have been clamoring for this for a while. If you are a carrier in telephony and don't want downtime, this stuff is pure gold.'"
Update: 04/24 10:04 GMT by KD : Tomasz Chmielewski writes on LKML that the idea seems to be patented by Microsoft.
Re:Needed that bad? (Score:3, Interesting)
Two things:
First, the very fact that there is load balancing means that every server is likely to have active connections going through it. If you currently have connections going through a specific server, you don't want to drop them just to reboot that particular machine. Hot patching allows updates to a live machine.
Second, this is telephony, meaning it is the infrastructure on which the internet is based. There are no DNS or TCP/IP tricks you can use to send people to a different "server" when that server is the switch connected to your fiber backbone. Basically, there are points in the infrastructure that are, by necessity, single chokepoints.
As to how often these things collide, and how much of a pain it is to actually stop a server for some amount of time, I can't say. But I can see situations where being able to hot-swap a kernel would be useful.
Re:Needed that bad? (Score:3, Interesting)
Start up a new virtual machine with the new kernel; then, when you're sure it's working, switch everything across from the old to the new and shut down the old virtual instance.
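To make that concrete, here's a minimal sketch of such a switch-over. The VirtualMachine class and its methods are hypothetical stand-ins for whatever hypervisor API you'd actually use, not a real library:

```python
import time

class VirtualMachine:
    """Hypothetical stand-in for a real hypervisor's guest handle."""
    def __init__(self, name, kernel):
        self.name = name
        self.kernel = kernel
        self.running = False

    def boot(self):
        self.running = True

    def healthy(self, checks=3, interval=0.1):
        # Poll a health check a few times before trusting the new instance.
        for _ in range(checks):
            if not self.running:
                return False
            time.sleep(interval)
        return True

    def shutdown(self):
        self.running = False

def switch_over(old_vm, new_kernel):
    """Boot a replacement on the new kernel; cut over only if it's healthy."""
    new_vm = VirtualMachine(old_vm.name + "-new", new_kernel)
    new_vm.boot()
    if not new_vm.healthy():
        new_vm.shutdown()      # keep serving from the old instance
        return old_vm
    old_vm.shutdown()          # traffic now goes to the new instance
    return new_vm

current = VirtualMachine("db01", kernel="2.6.24")
current.boot()
current = switch_over(current, new_kernel="2.6.25")
print(current.name, current.kernel)   # db01-new 2.6.25
```

The real cost hides in the "switch everything across" step: connection draining and state migration, which is exactly what a live kernel patch lets you skip.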
Not only the CEO (Score:5, Interesting)
It always irks me to see people just not understand concepts like "peak" vs "average", or "failing over".
- A cluster of, say, 4 machines (a small application, really) loaded to 90% of capacity: if one dies, the other 3 are now at 120% of capacity each (sanity-checked in the sketch below). If you're lucky, it just crawls; if you're unlucky, Java clutches its chest and keels over with an "OutOfMemoryError" or such.
- If you're at 90% most of the time, then fear Monday 9:00 AM, when every single business partner on that B2B application comes to work and opens a browser. Or fear the massive year-end batch jobs, when the machine/cluster that was sized barely large enough to finish the normal midnight jobs by 9 AM, so those users can see their new offers and orders in their browsers, suddenly has to do 20 times as much in a burst.
Basically it amazes me how many people just don't seem to get the simple rule of thumb of clusters: you're either getting nearly 100% uptime and nearly guaranteed response times, _or_ you're getting that extra hardware fully used to support a bigger load. Not both. Or not until the cluster is so large that one or two failed servers add negligible load to the remaining machines.
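That rule of thumb is easy to sanity-check. A quick back-of-the-envelope script (the 4-machines-at-90% numbers are the example above, not measurements):

```python
def load_after_failures(nodes, utilization, failed):
    """Per-surviving-node load once `failed` nodes drop out of an
    evenly balanced cluster running at `utilization` per node."""
    survivors = nodes - failed
    if survivors <= 0:
        raise ValueError("no survivors left to carry the load")
    return nodes * utilization / survivors

# The example above: 4 nodes at 90% each, one dies -> 120% per survivor.
print(load_after_failures(4, 0.90, 1))   # 1.2

# How large does the cluster need to be before one failure is negligible?
for n in (4, 10, 50, 100):
    print(n, round(load_after_failures(n, 0.90, 1), 3))
```

At 100 nodes the survivors land at about 90.9% each, which is the regime where a failed server adds negligible load.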
No, No, No and No again. (Score:5, Interesting)
1. We connect to several load-balanced systems, and the complexity introduced by load balancers translates into inexplicable downtime. No load balancers means a pretty steady diet of the latest and greatest server hardware, but no downtime. A few minutes of downtime costs more than the server hardware.
2. High availability translates roughly into nodes that can fail (e.g. power off) and not take the cluster down. This boils down to active-passive application architecture more than just using Heartbeat.
As an FYI, PostgreSQL clustering is a killer application for me. Erlang is also great in many ways, but it requires an application architecture with active-passive node awareness, which isn't present in things like Yaws, or even in my other favorite non-Erlang app, nginx. Heartbeat is the solution there, but I'd like to see Yaws be cluster-aware on its own. http://yaws.hyber.org/
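For what "active-passive node awareness" means in practice, here's a toy model: the passive node promotes itself when the active one misses enough heartbeats. This is purely illustrative pseudologic, not Heartbeat's (or Yaws') actual API:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.active = False

def elect(nodes, missed_heartbeats, threshold=3):
    """Promote the first live passive node if the active one looks dead."""
    active = next((n for n in nodes if n.active), None)
    if active and missed_heartbeats.get(active.name, 0) < threshold:
        return active                  # active node still answering heartbeats
    if active:
        active.active = False          # demote the node that went quiet
    for n in nodes:
        if n.alive and not n.active:
            n.active = True
            return n
    return None                        # cluster is fully down

pair = [Node("yaws-a"), Node("yaws-b")]
pair[0].active = True
pair[0].alive = False                  # simulate a crash on the active node
print(elect(pair, {"yaws-a": 5}).name) # yaws-b takes over
```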
Re:Needed that bad? (Score:3, Interesting)
I hot-swap whole networks.
HTH.
Re:Unless it fails. (Score:2, Interesting)
100 - (((30 seconds) / (1 year)) * 100) = 99.9999049
So if you're trying to keep up 6 9s for some super-critical system, that one 30-second reboot has just used a year's worth of downtime.
Even for lower numbers of nines, you still don't get many minutes per year for patching, assuming no hardware failures ever.
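Generalizing that arithmetic (a 365-day year assumed throughout):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000; leap years ignored

def downtime_budget(nines):
    """Seconds of downtime allowed per year at N nines of uptime."""
    return SECONDS_PER_YEAR * 10 ** -nines

for n in range(3, 7):
    print(f"{n} nines: {downtime_budget(n):12.2f} seconds/year")
# 3 nines: 31536.00 (~8.8 hours) ... 6 nines: 31.54

# The 30-second reboot from above, against a full year:
print(100 - (30 / SECONDS_PER_YEAR) * 100)  # 99.99990487...
```

A 30-second reboot fits into a five-nines budget (about 315 seconds/year) roughly ten times; at six nines it consumes essentially the whole year.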
Re:Needed that bad? (Score:4, Interesting)
So you have this security hotfix you really want to apply, but it's going to take 25 hours on average to fix a box and 50 hours to fix them all.
You could, with ksplice and a good concurrent control system, make your average time to fix 5 minutes in over 80% of kernel upgrade scenarios rated "Critical". Your boxes could still be rebooted on a regular basis later.
Which do you prefer?
Re:Over-engineered solution to a non-existent prob (Score:3, Interesting)
Every production environment I've ever administered had a smaller version set aside for testing. We'd configure the machines identically and just make the cluster smaller. Then we'd test on the test machines any action that was to be made part of the admin process of the production machines. If it passes on the test machine and fails in production, then you didn't make the machines sufficiently similar.
Round-robin upgrades take ((time_to_idle + time_to_upgrade + time_to_reboot) * machines) / 2 on average to get any given machine upgraded. If you have a "Critical" upgrade, that might be longer than you want.
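In code, with illustrative numbers (the drain/upgrade/reboot times below are assumptions, chosen so the totals line up with the 25-hours-average / 50-hours-total figures upthread):

```python
def avg_wait_for_upgrade(time_to_idle, time_to_upgrade, time_to_reboot, machines):
    """Expected wait (in hours) for any given machine in a
    one-at-a-time round-robin upgrade: half the total wall time."""
    per_machine = time_to_idle + time_to_upgrade + time_to_reboot
    return per_machine * machines / 2

# Assumed: 1 h to drain connections, 20 min to upgrade, 10 min to reboot,
# across a 33-machine cluster -> 1.5 h per machine, ~49.5 h total.
print(avg_wait_for_upgrade(1.0, 20 / 60, 10 / 60, 33))   # ~24.75 hours
```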
Not everyone has the exact same QA requirements you do, either. Some of us are happy with proving that it works, then proving that it worked on the production machine, then resuming our normally scheduled maintenance.
Re:Needed that bad? (Score:2, Interesting)
>How do you know a configuration change hasn't rendered one of your startup scripts ineffective?
And the answer is: because every configuration change that happens in the production environment happened many times in the lab on an identical machine. If there was any danger of this occurring, an image of the production machine would be made "hot" during the maintenance window and applied to the lab system, the config change applied there, and the system thoroughly tested, including reboots.
For those of us with a dozen boot options on our GRUB screen, the danger of a "change that makes the system reboot into an odd state" may seem very real, but in production telco environments it's been factored out of the equation.