Why You Shouldn't Reboot Unix Servers

GMGruman writes "It's a persistent myth: reboot your Unix box when something goes wrong or to clean it out. Paul Venezia explains why you should almost never reboot a Unix server, unlike, say, Windows."
  • Uptime (Score:5, Funny)

    by cdoggyd ( 1118901 ) on Monday February 21, 2011 @12:46PM (#35269456)
    Because you won't be able to brag about your uptime numbers.
    • Re:Uptime (Score:5, Funny)

      by Anrego ( 830717 ) * on Monday February 21, 2011 @12:55PM (#35269560)

      I once had to move my router (a 486 running Slackware, with a multi-year uptime) across the room it was in. It was connected to a UPS; however, the cable going from the UPS to the computer was wrapped through the leg of the table it was sitting on.

      I actually _removed the table leg_ so I could haul the 486, still plugged into the UPS, across the room and quickly plug it in before it powered down!

      And then we had the first real substantial power failure in years, just a few months later... and the thing had to go down :(

      But yeah.. now I reboot frequently to verify that everything still comes up properly.

      • And then we had the first real substantial power failure in years, just a few months later... and the thing had to go down :(

        Perhaps caused by minor hard drive damage from relocating the system while under power?

        A rotary-media hard drive is fairly robust, if static. If spinning, it's more fragile than a Slashdotter's ego.

        I mean, it's your server, and it's an ancient 486 and all, so respect the hardware to the limit and extent you want to, but for me, if it's mine and uses hard drives, it doesn't move 2

        • Re:Uptime (Score:5, Funny)

          by Anrego ( 830717 ) * on Monday February 21, 2011 @01:18PM (#35269868)

          I meant mains power.. due to a hurricane, actually (Hurricane Juan).

          The machine came out fine (and actually still runs... though I don't use it as a router any more). Those old drives are surprisingly robust...

          But yeah.. I was actually surprised.. and I did it more for the sake of doing it (the only reason I even left the machine going was because of the uptime). I'd never pull a stunt like that with a real machine :D

      • Re: (Score:3, Funny)

        by mallyone ( 541741 )
        I bet any Slashdotter that he still has a 3-legged table! :)
    • by kju ( 327 ) *

      I once suffered from this illness myself. Thankfully I was able to overcome it.

  • Persistent myth? (Score:5, Interesting)

    by 6031769 ( 829845 ) on Monday February 21, 2011 @12:48PM (#35269470) Homepage Journal

    This is not a myth I had heard before. In fact, none of the *nix sysadmins I know would dream of rebooting the box to clear a problem except as a last resort. Where has this come from?

    • Re:Persistent myth? (Score:5, Informative)

      by SCHecklerX ( 229973 ) <greg@gksnetworks.com> on Monday February 21, 2011 @12:52PM (#35269518) Homepage

      Windoze admins who are now in charge of Linux boxen. I'm now cleaning up after a bunch of them at my new job. *sigh*

      - root logins everywhere
      - passwords stored in the clear in LDAP (WTF??)
      - require HTTPS over HTTP to devices, yet still have telnet access enabled.
      - set up sudo ... to allow everyone to do everything
      - iptables rulesets that allow all outbound from all systems. Allow ICMP everywhere, etc.

      • by arth1 ( 260657 ) on Monday February 21, 2011 @01:04PM (#35269680) Homepage Journal

        Don't forget 777 and 666 permissions all over the place, and SELinux and iptables disabled.

        As for "ALL(ALL) ALL" entries in sudoers, Ubuntu, I hate you for ruining an entire generation of linux users by aping Windows privacy escalations by abusing sudo. Learn to use groups, setfattr and setuid/setgid properly, leave admin commands to administrators, and you won't need sudo.

        find /home/* -user 0 -print

        If this returns ANY files, you've almost certainly abused sudo and run root commands in the context of a user - a serious security blunder in itself.
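
        For what it's worth, a minimal sketch of the groups-plus-setgid approach advocated above (the group name, user name, and directory are hypothetical examples, not from the post):

        # create a shared group and add the users who need write access
        groupadd webdev
        usermod -aG webdev alice

        # group-own the tree; the setgid bit makes new files inherit the group
        chgrp -R webdev /srv/www
        chmod -R g+rwX /srv/www
        find /srv/www -type d -exec chmod g+s {} +

        With that in place, members of the group can work in the tree as themselves, and nothing in /home ends up owned by root.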

        • Re:Persistent myth? (Score:5, Interesting)

          by element-o.p. ( 939033 ) on Monday February 21, 2011 @02:30PM (#35270744) Homepage

          As for "ALL(ALL) ALL" entries in sudoers, Ubuntu, I hate you for ruining an entire generation of linux users by aping Windows privacy escalations by abusing sudo.

          Yeah, I agree with you in principle, although to be fair, Ubuntu can't know what user account you are going to set up before you actually set it up, so there isn't really a way for it to create an appropriate sudoers entry for the server admin ahead of time.

          Learn to use groups, setfattr...properly...

          Okay, agreed...

          Learn to use...setuid/setgid properly...

          Ugh...setuid and setgid, IMHO, should be used as little as possible. If there's a security hole in your app, having it setuid/setgid gives a sufficiently skilled user a path to elevated privileges. I'd much prefer to use sudoers to give specific people I trust access to specific apps than to give any user access to an app I "trust" through setuid/setgid.

          ...leave admin commands to administrators, and you won't need sudo.

          Maybe I'm just missing something, but that sounds really stupid to me. While I'm a reasonably skilled Linux admin, I don't pretend to know everything, and maybe you can teach me something I've missed in my experience so far. If so, cool. But from my perspective, sudo is an ideal tool for granting appropriate permissions as required to trusted individuals. Sudo logs the user name and command, so if someone is abusing sudo, you know. Sudo can e-mail failures to admin staff, so if someone is habitually trying to exceed their permissions, you know.

          Sudo also allows pretty fine-grained access based on group or user name, so you can allocate permissions as required (well, relatively easily, anyway) -- much more fine-grained than Unix user/group/other permissions would allow. For example, with sudo you could allow senior admins (group: admin) and web developers (group: www-dev) read/write permissions to CGI script directories, junior admins (group: jadmin) read-only permissions, and all other users (group: users) no access. Uh-oh... we've got four groups here: admin, jadmin, www-dev and users, so doing that with standard Unix permissions is going to be kind of difficult (admins could be members of the www-dev group, I suppose, but I can imagine cases where group A might need permissions to a subset of files that group B owns but shouldn't have access to another subset, which would really complicate things).

          Sudo is a powerful tool, and just like all the other tools you mentioned, it should be used appropriately as a component of overall system security.

          find /home/* -user 0 -print

          If this returns ANY files, you've almost certainly abused sudo and run root commands in the context of a user - a serious security blunder in itself.

          Maybe. I see what you are saying, but as a counter-example, I sometimes run tcpdump from within my home directory when troubleshooting problems. tcpdump has to run as superuser, and I have a lot more faith in giving myself and other admins permission to run "sudo tcpdump" than running tcpdump setuid 0. Again, maybe I'm just missing something, but I really don't have a huge problem with tcpdump (or other admin tools) writing UID 0 data to an admin user's home directory.
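
          For reference, a minimal /etc/sudoers sketch (edited via visudo) of the kind of group-based policy described above; the group names, commands, and paths are hypothetical illustrations, not a recommended policy:

          # senior admins may run anything; every invocation is logged by sudo
          %admin    ALL=(ALL) ALL
          # web developers may edit CGI scripts via sudoedit only
          %www-dev  ALL=(root) sudoedit /var/www/cgi-bin/*
          # junior admins get a small read-only whitelist
          %jadmin   ALL=(root) /usr/bin/tail /var/log/httpd/*

          Each rule is per-group and per-command, which is the fine-grained, logged access being argued for here.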

          • Re: (Score:3, Informative)

            Maybe. I see what you are saying, but as a counter-example, I sometimes run tcpdump from within my home directory when troubleshooting problems. tcpdump has to run as superuser, and I have a lot more faith in giving myself and other admins permission to run "sudo tcpdump" than running tcpdump setuid 0. Again, maybe I'm just missing something, but I really don't have a huge problem with tcpdump (or other admin tools) writing UID 0 data to an admin user's home directory.

            You don't have to be root to use tcpdump. On Ubuntu, do this:

            sudo aptitude install libcap2-bin
            sudo setcap cap_net_raw,cap_net_admin=eip `which tcpdump`

            Then run:

            getcap `which tcpdump`

            If it shows /usr/sbin/tcpdump = cap_net_admin,cap_net_raw+eip then you're good to go. Now try running tcpdump as a regular user.

      • by BlueBlade ( 123303 ) <<moc.liamg> <ta> <reitrofam>> on Monday February 21, 2011 @09:52PM (#35274846)

        - iptables rulesets that allow all outbound from all systems. Allow ICMP everywhere, etc.

        As a network admin, I have violent fantasies of driving hot nails through the privates of the "let's block all ICMP by default" admins whenever I show up at a new client's site to troubleshoot some complex networking issue. If you block ICMP echo, you'd better have an extremely good reason for it. If it's on a public WAN link facing the internet, then *maybe* you might have a case (but most often not). If it's on a web server or other public-facing service, you PROBABLY DON'T HAVE A VALID REASON. If you block traceroutes from anywhere except edge firewalls, you are a clueless idiot. And even then, requests coming from inside interfaces should be let through. THIS IS ESPECIALLY TRUE OVER MPLS AND SITE-TO-SITE VPN LINKS!

        Whew, that felt good. Seriously, blocking ICMP doesn't do *anything* for security. If you are getting flooded by ICMP packets, just configure a flood threshold. These days, any ICMP DoS flood that is bad enough to actually interrupt services very likely doesn't need the extra "reply" traffic to work. And your clever "security" of not replying to pings on anything that has ports open is stupid, as a simple port scan will reveal the host anyway.

        Please, for the sake of every network admin's sanity, leave ICMP alone. Thank you.
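
        For reference, a minimal iptables sketch of the "configure a flood threshold" approach mentioned above, rather than dropping ICMP wholesale (the rate numbers are arbitrary examples):

        # rate-limit echo requests instead of blocking ICMP outright
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 5/second --limit-burst 20 -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
        # leave the ICMP errors that path MTU discovery and traceroute depend on alone
        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT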

    • People who don't know any better. On Windows systems, the machine sometimes gets into a state where a bit of corrupted memory prevents a program from running correctly unless the computer is completely shut off and left to sit for a few seconds before being turned back on. I was personally skeptical until I saw it work for myself. I still don't really understand why that's the case, but IIRC it had to do with some errors you could run into with AutoCAD.

      I'm not familiar with Unix itself enough to comment, bu

      • by Dracos ( 107777 ) on Monday February 21, 2011 @01:13PM (#35269784)

        I'm not familiar with Unix itself enough to comment, but with both Linux and *BSD...

        I'm not sure how to respond to that.

        • Re: (Score:3, Interesting)

          by mini me ( 132455 )

          He is quite correct in his assertion that Linux and BSD are not Unix. Without experience with real Unix systems, it would be impossible for him to verify that they exhibit the same behaviour. However, Mac OS X is Unix. I find it hard to believe that someone posting on Slashdot has not at least spent some time evaluating OS X, even if they ultimately decided it was not for them.

          • The BSD variants are descended from the original Berkeley Unix codebase, which is simply an enhanced form of the original AT&T Unix. BSD is Unix. However, I think that of the BSD variants in use today, only Apple has had theirs certified by the Open Group, which makes it not just Unix but Unix(tm).

          • I find it hard to believe that someone posting on Slashdot has not at least spent some time evaluating OS X, even if they ultimately decided it was not for them.

            - hmmm. I've worked with computers since about '91 and professionally since '95, and I only really touched the old Apple machines a few times. So yeah, there are people here who haven't evaluated OS X (and don't intend to).

        • by Thyrsus ( 13292 )

          Unix is a trademark owned by The Open Group, and you may use that trademark to describe your system if you pay money to have them run their tests to verify compliance with the Single UNIX Specification. I believe Red Hat has done that in the past, and that particular version of Linux was thus bona fide Unix(R), but it seems Red Hat has not chosen to continue certifying their systems. Someone please correct me if I'm wrong.

          I believe Red Hat sent back upstream all the changes they needed to make to pass

          • by Xtifr ( 1323 )

            I believe Red Hat has done that in the past

            A company called Lasermoon [skytel.co.cr] once got their flavor certified. I don't believe that Red Hat ever did so.

            At the time, the biggest issue was a feature called STREAMS [wikipedia.org] (all-caps), which Linus refused to include in the kernel, arguing that it was unnecessary for a system that came with source. Caldera (now SCO) acquired Lasermoon and included STREAMS in some of their versions of Linux, and was lobbying to have it included as a standard feature despite Linus's objections, but I don't believe that any flavor of STR

    • by afabbro ( 33948 ) on Monday February 21, 2011 @12:54PM (#35269548) Homepage

      This is not a myth I had heard before.

      +1. This article should be held up as a perfect example of building a strawman.

      "It's a persistent myth that some natural phenomena travel faster than the speed of light, but at least one physicist says it's impossible..."

      "It's a persistent myth that calling free() after malloc() is unnecessary, but some software engineers disagree..."

      "It's a persistent myth that only the beating of tom-toms restores the sun after an eclipse. But is that really true?"

      • So I took 2 minutes and actually read the article. The point was that Unix is not Windows and reboots are not a fix-all. No straw man, just common-sense advice for those MCSEs out there. It's also good advice for Windows, but after a few attempts to discover root cause only to find out that the MF'n Event Log is "corrupt and cannot be read", I don't blame people for just rebooting/reinstalling. Hell, it's what MS says to do, which just goes to show they don't even know how their black box works...
    • I came here to say the same thing: I've never thought to reboot a Unix box to fix a problem. In fact, in the face of a serious operating system issue I want to do everything I can to avoid the temporary purgatory that is a reboot.
    • This is not a myth I had heard before. In fact, none of the *nix sysadmins I know would dream of rebooting the box to clear a problem except as a last resort. Where has this come from?

      The idea that you ought to just reboot to fix things comes from the Windows world.

      I've got several Windows servers that absolutely have to be rebooted nightly to keep them running happily. This isn't because I'm some crappy admin or anything like that... Rather, the software running on them just isn't stable. It's actually the vendor's suggestion that these servers be rebooted nightly. Not that particular services need to be restarted - but that the entire box should be rebooted.

      I'm not entirely sure wh

      • That sounds like horrible software.

        Thinking of the Windows servers I admin and used to be an assistant admin for - we usually used reboots only after a large number of other diagnostics were tried. For our desktop users, yes, we said reboot first - but anything on the server should be stable enough not to need a reboot.

        Actually, I am to blame for the one Windows server restart at my last job that wasn't due to a patch that required it. Long day, logged onto the backup domain controller and accidentally rest

      • by theCoder ( 23772 )

        At work I have a Windows box that, I kid you not, can only run about 25000 processes between reboots. It doesn't seem to be able to reuse process IDs, and once it gets to about PID 100000 or so (Windows PIDs are always multiples of 4), it just can't reliably spawn new processes. Including the process to shut down and reboot the computer (the equivalent of `shutdown'). Windows seems to generate PIDs somewhat randomly, so sometimes creating a process is able to find a good PID and it works, but other times it c

    • I wouldn't read too much into it. From what I can tell, the author is an idiot. He knows some stuff, probably to an impressive extent even, but he's too arrogant and one-size-fits-all.

      I don't know of any Unix admin who reboots early on. Even the few I know (myself included) who came over from Windows (or still admin it).

    • by vux984 ( 928602 )

      In fact, none of the *nix sysadmins I know would dream of rebooting the box to clear a problem except as a last resort. Where has this come from?

      I take a somewhat contrary stance: rebooting is like testing the backup recovery procedure, or the backup power system... you have to do it to know that you can do it.

      If you are afraid to reboot your server when it's working fine because you don't know whether it will come back up, then you ALREADY HAVE A PROBLEM.

      That said, I fully understand the desire not to reboot espe

  • Uh.. no (Score:5, Informative)

    by Anrego ( 830717 ) * on Monday February 21, 2011 @12:48PM (#35269474)

    I for one believe in frequent-ish reboots.

    I agree it shouldn't be relied upon as a troubleshooting step (you need to know what broke, why, and why it won't happen again). That said, if you go years without rebooting a machine... there is a good chance that if you ever do (to replace hardware for instance) it won't come back up without issue. Verifying that the system still boots correctly is imo a good idea.

    Also, all that fancy high availability failover stuff... it's good to verify that it's still working as well.

    The "my servers been up 3 years" e-pene days are gone folks.

    • by Anonymous Coward

      Disagree.

      Rebooting is bad. It booted the first time; why would it not boot the second?

      If you don't have proper controls, then you should not have anyone touching the box.

      • Re:Uh.. no (Score:5, Insightful)

        by Anrego ( 830717 ) * on Monday February 21, 2011 @01:03PM (#35269662)

        Maybe true if the box is set up and then never touched. If anything new has been installed on it.. or updated.. I think it's a good idea to verify that it still boots while the change is still fresh in your head. Yes, you have changelogs (or should), but all the time spent reading various documentation and experimenting on your proto box (if you have one) is long gone. There's lots of stuff you can install and start using that could easily not come up properly on boot.

        And why are reboots bad? If downtime is that big a deal, you should have a redundant setup. If you have a redundant setup, rebooting should be no issue. I've seen a very common trend where people get some "out of the box" redundancy solution running... then check off "redundancy" on the "list of shit we need" and forget about it. Actually verifying from time to time that your system can handle the loss of a box without issue is important (in my view).

      • by OzPeter ( 195038 )

        Disagree.

        Rebooting is bad. It booted the first time; why would it not boot the second?

        If you don't have proper controls, then you should not have anyone touching the box.

        Even with controls, you are assuming that anybody who touched the box between boots performed their work flawlessly and/or that the actions they performed will behave as expected. Yes, you can replicate an environment and practice changing things and rebooting, but unless you have replicated things 100%, all you are testing is your assumption that the replication was complete. So it still comes down to an assumption that can only be tested by a physical reboot.

      • Re:Uh.. no (Score:4, Insightful)

        by OzPeter ( 195038 ) on Monday February 21, 2011 @01:14PM (#35269798)

        (wishing that /. would allow edits)

        To add to my previous comment. The general consensus of disaster recovery best practice is that you do not test a backup strategy, you test a restore strategy. Rebooting a server is testing a system restore process.

    • That said, if you go years without rebooting a machine... there is a good chance that if you ever do (to replace hardware for instance) it won't come back up without issue.

      We reboot our Unix server once a month for exactly this reason; we were bitten once, so we learned this the hard way.

    • Well, that's the thing: with cloud computing you generally don't have to waste resources on a server that's up all the time and sized to cover the full load. Depending on the service or setup, it's definitely possible to arrange for additional capacity to come online as needed, and for the most part those extra servers are pretty much identical.

    • I agree with you. I used to build it into cron to reboot every Sunday at 11:00 p.m. The medical practice management software that ran on there tended to build up temp files and not remove them automatically; this was a fault of the application. My startup script would remove them and keep the hard drive (a whopping 4GB) from filling up. Since the services that needed to run were appropriately added to the same script, there was never an issue of them not starting, which is one of the main reasons you wouldn
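
      For reference, the scheduled reboot described above boils down to a single root crontab entry (a sketch; the temp-file cleanup would live in the startup script the poster mentions):

      # root crontab: reboot every Sunday at 23:00
      0 23 * * 0  /sbin/shutdown -r now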
      • by GreyLurk ( 35139 )

        Why reboot? Why not just kill off the process, clear the temp files, and restart the process?

          • We also had serial terminals attached through DigiBoard and Stallion boards; both were notorious for flaking out unless rebooted regularly as well. Maybe this was a unique situation, but every *nix machine I built after I left the medical field received the same treatment.
    • I for one believe in frequent-ish reboots.

      I agree it shouldn't be relied upon as a troubleshooting step (you need to know what broke, why, and why it won't happen again). That said, if you go years without rebooting a machine... there is a good chance that if you ever do (to replace hardware for instance) it won't come back up without issue. Verifying that the system still boots correctly is imo a good idea.

      Also, all that fancy high availability failover stuff... it's good to verify that it's still working as well.

      The "my servers been up 3 years" e-pene days are gone folks.

      Well, you make a point, but shouldn't a server be replaced when it gets old enough anyway? Wouldn't it be nice to have a server up for 3 years of reliability? At this point, who really cares if a reboot would cause a failure? You have backups, plan to replace the aging hardware. It doesn't pay to be miserly with server hardware, especially because its quality has gone on a downward trend as demand for cheaper pricing goes up. And how does verifying a system boot really ensure the server is working

      • by Kjella ( 173770 )

        Well, you make a point, but shouldn't a server be replaced when it gets old enough anyway? Wouldn't it be nice to have a server up for 3 years of reliability? At this point, who really cares if a reboot would cause a failure? You have backups, plan to replace the aging hardware.

        You care because it's 2:30 in the morning, your manager is yelling at you because the all-important end-of-quarter stuff is due in the morning, the server is full of a day's production data that isn't backed up yet, and even though you have money in the budget you don't have a hot server with the exact same software/patch level/configuration ready to dump your backups onto?

        Very few systems are so critical they can't have some planned downtime. Unplanned downtime on the other hand can be extremely costly, a

    • Comment removed based on user account deletion
      • by Anrego ( 830717 ) *

        I recommend you actually read my post ;p

        I clearly said.. right there in the second paragraph.. that I agree with him on not using reboot as a troubleshooting mechanism.

    • by jcoy42 ( 412359 )

      Well, that's your opinion.

      The boot-up process generates a lot of extra electrical noise in the box by spinning up all the fans, HDs, probing things, etc. That's usually when something breaks. What I have seen is that boxes which get rebooted frequently tend to burn out faster. I have had 2 otherwise equivalent machines, purchased at the same time, one used for dev and one for production, and the dev machine burned out 2 years before we retired the production machine (burned out means too many fan/disk/CPU f

    • by dAzED1 ( 33635 )
      err...or, you could figure out why there was a problem. Rebooting a system removes a lot of forensic data, and you should know long before it's dying that there is a problem.
      There's nothing a reboot "fixes."
    • by dch24 ( 904899 )
      Everything old is new again. "I can reboot an instance, it's cloud-based with HA!" That means you are not the target market for this article.

      Who do you think keeps your magical cloud running with five-9's of uptime? You can't seriously think the VM host will run better after a reboot. Who do you think manages the HA load balancer? (Hint: it is managed, just like everything else.) What if they had to reboot it?

      "I need to reboot every month/week/solar cycle because otherwise I have no disaster recovery!"
    • I never reboot unless the system hangs up completely. In recent years I had to reboot once, when the air conditioning failed and a server had a bad memory alarm.

      By keeping reboot as an extreme measure, I know when something truly bad happened. If I reboot without reason, I lose that information.

    • by Yaur ( 1069446 )
      Totally agree. If things fail you want it to happen when you have control of the situation, not whenever some retard decides to pull the wrong cable.
    • Interesting, but not true.

      "Frequent-ish" reboots can work in non-enterprise environments where you have downtime windows. In international organizations that run 24-7, this is rarely the case without lots of coordination. Now, if you design a system with high availability and redundancy, you can very well take down one node in a cluster for maintenance... Or if you virtualization you could migrate the VM to another host transparently. Alas, in many enterprises there are one-off systems that exist for a pa

    • by tbuskey ( 135499 )

      Reboots to fix problems should never be done.

      Reboots as a matter of policy aren't a bad idea.

      If your system reboots periodically, you force network disconnections, memory cleanup, etc.

      Users who logged on months ago are no longer tying up resources. Maybe they don't need the session but forgot to log out. Or their client died, so there's a zombie on the server.

    • Agreed. Servers should be rebooted periodically. Once every 3 months is a good number. Almost every time we've had a server up for a year or two, there were problems bringing it back up when it went down unexpectedly or for some sort of hardware maintenance. Of course, many of the people that were the sysadmins had gone elsewhere, and hours went by before they finally figured out some startup script was copied and altered just to get it to come up the last time. Better off scheduling a shutdown and

    • Agreed. A reboot isn't a panacea for troubleshooting, but they still should be performed. I view them as akin to drills in the military - they drill and practice so that flaws in the process can be identified early on.

  • by Anonymous Coward on Monday February 21, 2011 @12:48PM (#35269476)

    I'm really tired of this semi-technical stuff on Slashdot that seems aimed at semi-competent manager-types.

  • by Syncerus ( 213609 ) on Monday February 21, 2011 @12:49PM (#35269486)

    One minor point of disagreement. I'm a fan of the pre-emptive reboot at specific intervals; whether the interval is 30 days, 60 days, or 90 days is up to you. In the past, I've found the pre-emptive reboot will trigger hidden system problems, but at a time when you're actually ready for them rather than at a time when they happen spontaneously (2:30 in the morning).

    • by Wovel ( 964431 )

      Interestingly, all his arguments against rebooting would bolster your argument for periodic planned reboots. One of his points was that someone may have screwed up the system; it would be better to find that out in a controlled environment.

      I will stay away from periodic reboots and remain firmly entrenched in the land of if it ain't broke, don't fix it.

  • by pipatron ( 966506 ) <pipatron@gmail.com> on Monday February 21, 2011 @12:51PM (#35269498) Homepage

    FTFA:

    Some argued that other risks arise if you don't reboot, such as the possibility certain critical services aren't set to start at boot, which can cause problems. This is true, but it shouldn't be an issue if you're a good admin. Forgetting to set service startup parameters is a rookie mistake.

    This is retarded. A good admin will test that everything works before it gets a chance to actually break. Anyone can fuck up, forget something, whatever. Doesn't matter how experienced you are. Murphy's law. The only way to test whether it will come up correctly during unplanned downtime is to actually reboot while you have everything fresh in memory and while you're still around and can fix it. Rebooting in that case is not a bad thing; it's the responsible thing to do.
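
    One cheap way to do that sanity check on a 2011-era Red Hat-style box (a sketch, not from the article; adjust for your init system):

    # what is registered to start in runlevel 3...
    chkconfig --list | grep '3:on'
    # ...versus what is actually running right now
    service --status-all

    Comparing the two before and after a planned reboot catches the "forgot to enable it at boot" class of mistake the article waves away.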

  • What a load of BS (Score:5, Insightful)

    by kju ( 327 ) * on Monday February 21, 2011 @12:52PM (#35269508)

    I RTFA (shame on me) and in my opinion it is absolutely stupid.

    There is actually only one real reason given, and that is that if you reboot after some services have ceased working, you might end up with an unbootable machine.

    In my opinion this outcome is absolutely great. Ok, maybe not great, but it is important and rightful. It forces you to fix the problem properly instead of ignoring the known problems and missing yet-unknown problems which might bite you in the .... shortly after.

    Also: when services start being flaky on my system, I usually want to run an fsck. In 16 years of Linux/Unix administration I have found quite a few times that the FS was corrupted for no apparent reason and had gone unnoticed. So an fsck is usually a good thing to run when strange things happen, and to be able to run it I nearly always need to reboot.

    I can't grasp what kind of thinking it must be to continue running a server where some services fail or behave strangely. You could end up with more damage than an outage from a failed reboot would cause. You just might want to do the reboot at off-peak hours.

  • This is like *NIX 101.

    But then, try changing the locale on a running system...

  • By and large there is really no need to reboot a UNIX machine unless you are making a change to the kernel, i.e. an upgrade or a recompile with an added feature. Other than that, the author is correct. I have machines with uptimes of two years. It would have been more had I not had to power the machine down for a physical move.
  • by Sycraft-fu ( 314770 ) on Monday February 21, 2011 @12:53PM (#35269520)

    More or less it is "You shouldn't reboot UNIX servers because UNIX admins are tough guys, and we'd rather spend days looking for a solution than ruin our precious uptime!"

    That is NOT a reason not to reboot a UNIX server. In fact, it sounds like if you've got a properly designed environment with redundant servers for things, a reboot might be just the thing. Who cares about uptime? You don't win awards for having big uptime numbers; it is all about your systems working well, providing what they need to provide, and not blowing up in a crisis.

    Now, there well may be technical reasons why a reboot is a bad idea, but this article doesn't present any. If you want to claim "You shouldn't reboot," then you need to present technical reasons why not. Just having more uptime or being somehow "better" than Windows admins is not a reason, it is silly posturing.

    • by pz ( 113803 )

      Please point out exactly where in the article the issue of uptime is raised. I fail to see it. Many others have also suggested that long uptime ("e-pene" as one poster put it) is the reason for avoiding reboots. There has been no such suggestion that I could find. I authored a post in the previous thread about the origins of the Unix attitude against reboots that was highly rated, and nowhere in that post, or in the follow-on replies, was uptime ever considered an issue.

      The issue -- the only issue -- is

    • by GreyLurk ( 35139 )

      I don't think the article was expounding on Unix "manliness" and uptime metrics... It mostly just highlighted the mistake that a lot of junior admins make (both Windows and Unix): thinking that it doesn't matter whether you understand why the problem is happening, and that just mashing the power button until it goes away is the best route forward.

      Rather than presenting technical reasons why you shouldn't reboot, it's actually probably better to ask for technical reasons why you *should* reboot. Rebooting a server to try and fix a

    • by Jose ( 15075 )

      Now, there well may be technical reasons why a reboot is a bad idea, but this article doesn't present any.

      hrm, the article states: ...If you shrug and reboot the box after looking around for a few minutes, you may have missed the fact that a junior admin inadvertently deleted /boot and some portions of /etc and /usr/lib64 due to a runaway script they were writing. That's what was causing the segfaults and the wonky behavior. But since you rebooted the server without digging into the problem, you've made it

    • Comment removed based on user account deletion
  • by RedK ( 112790 ) on Monday February 21, 2011 @12:53PM (#35269526)

    You lie.

    Seriously. I don't know what HP is doing, but NFS hangs/stuck processes that you can't kill -9 your way out of is just wrong.

    • by inflex ( 123318 )

      NFS is designed to be like that - block/hang until the connection is restored... though I'm not sure about the resilience to kill -9. You do now have the option on some NFS systems to have a soft-block.

      • by RedK ( 112790 )

        I've had HP-UX systems that could rpcinfo/showmount against the NFS server and yet still had hung filesystems. Soft, hard, whatever mount option - it's random. Then when you try to shut down the NFS subsystem, the rpc processes get stuck; you try to kill -9 and they simply don't die. umount -f doesn't work. Nothing works.

        You really have to have experience on HP-UX to understand the pain... And if only I were talking about the old 11iv1 instead of the brand spanking new 11iv3 with ONCplus up to date.

    • Re:HP-UX says... (Score:5, Informative)

      by sribe ( 304414 ) on Monday February 21, 2011 @01:12PM (#35269768)

      Seriously. I don't know what HP is doing, but NFS hangs/stuck processes that you can't kill -9 your way out of is just wrong.

      Kind of a well-known, if very old, problem. From Use of NFS Considered Harmful [time-travellers.org]:

      k. Unkillable Processes

      When an NFS server is unavailable, the client will typically not return an error to the process attempting to use it. Rather the client will retry the operation. At some point, it will eventually give up and return an error to the process.
      In Unix there are two kinds of devices, slow and fast. The semantics of I/O operations vary depending on the type of device. For example, a read on a fast device will always fill a buffer, whereas a read on a slow device will return any data ready, even if the buffer is not filled. Disks (even floppy disks or CD-ROM's) are considered fast devices.

      The Unix kernel typically does not allow fast I/O operations to be interrupted. The idea is to avoid the overhead of putting a process into a suspended state until data is available, because the data is always either available or not. For disk reads, this is not a problem, because a delay of even hundreds of milliseconds waiting for I/O to be interrupted is not often harmful to system operation.

      NFS mounts, since they are intended to mimic disks, are also considered fast devices. However, in the event of a server failure, an NFS disk can take minutes to eventually return success or failure to the application. A program using data on an NFS mount, however, can remain in an uninterruptable state until a final timeout occurs.

      Workaround: Don't panic when a process will not terminate from repeated kill -9 commands. If ps reports the process is in state D, there is a good chance that it is waiting on an NFS mount. Wait 10 minutes, and if the process has still not terminated, then panic.
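
      A quick way to spot the state-D processes that workaround describes, using standard procps options (a sketch, not from the quoted page):

      # list processes in uninterruptible sleep (state D) and what they are waiting on
      ps -eo pid,stat,wchan:32,args | awk '$2 ~ /^D/'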

      • by Svartalf ( 2997 )

        That's why I'm all for coming up with something OTHER than NFS for a server framework. Seriously. And using it in an HP/HA cluster is... verging on insane... It's an old, crufty design meant for a simpler time with simpler conditions - and it wasn't all that great then.

  • by Anonymous Showered ( 1443719 ) on Monday February 21, 2011 @12:53PM (#35269538)

    I run web servers for a few dozen clients, and rebooting a remote machine was always scary. There was the possibility that something might not come up during startup (e.g. sshd) and I would be locked out. I would then have to travel to my data center downtown (about 30 minutes away) and troubleshoot the problem. Since I don't have 24/7 access to the DC (I don't have enough business with the DC to warrant an owned security pass...), I have to wait until they open to the general clientèle in the morning.

    With ESXi, however, I'm not that scared anymore. If something does go wrong, I have a console to the VM through vCenter client (the application that manages virtual machines on the server). It's happened once where a significant upgrade of FreeBSD 7.2 to 8.1 was problematic. Coincidentally, it was because I didn't upgrade the VMware tools (open-vmware-tools port). Nonetheless, I managed to fix the problem through vCenter.

    This is why I love virtualization in general. It's making managing servers easier for me.

    • by inflex ( 123318 )

      It's why a "good" server has a lights-out system in it that lets you gain access to the machine as it boots as if you were there with a keyboard/console.

      Of course, yes, the VM-route is nice, I do that too now ( so long as you don't mess up the host :D ).

    • by Spad ( 470073 )

      Not to mention the joy of snapshots.

  • I read TFA (Score:3, Interesting)

    by pak9rabid ( 1011935 ) on Monday February 21, 2011 @12:54PM (#35269542)
    What a load of horse shit.
  • by Anonymous Coward

    Often system upgrades (e.g. security fixes) include new versions of libraries and such. It's impossible for the package manager to know which processes are using those libraries, so it can't automatically restart everything. Consider if you have custom processes running: the package manager wouldn't even know about them.

    Therefore you have to do it manually, but then you have the same problem. It's damn hard to know which processes are using the libraries that were upgraded. Really, really hard if it's a b
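
    One common heuristic for that, on Linux, is to look for processes still mapping files that have been deleted or replaced on disk; a sketch (it relies on the kernel tagging unlinked-but-mapped files as "(deleted)" in /proc/PID/maps):

    # processes that still map a file which has since been deleted or replaced
    for pid in $(grep -l ' (deleted)' /proc/[0-9]*/maps 2>/dev/null | cut -d/ -f3); do
        ps -o pid=,user=,comm= -p "$pid"
    done | sort -u

    It is noisy (any deleted mapped file shows up, not just libraries), which is part of why a scheduled reboot after big updates is often the simpler answer.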

  • by cpct0 ( 558171 ) <slashdot@micheldona i s . com> on Monday February 21, 2011 @12:56PM (#35269574) Homepage Journal

    Quotes from stupid people:
    You should never reboot a Mac, it's not like Windows.
    You should never reboot Unix/Linux, it's not like Windows.

    Well, you shouldn't reboot Windows either. You reboot it when it goes sour. Our Windows servers seldom go sour, so we don't reboot them. Same for Mac or *nix.

    Problem is when it starts to cause problems. Like our /var/spool partition deciding it has better things to do than exist... or the ever so important NFS or iSCSI mount that decides to Go West, and gives us the ??? ls we all dread ... with umounting impossible, so remounting impossible, and all these stale files and stuff. You either tweak these things for hours cleaning up all processes, or you reboot.

    In fact, being a good sysadmin, all my servers are MEANT to be rebooted if something goes sour. One SVN project goes sour? Check whether it's the repository itself that has problems, or whether the system needs to save something to exit safely... and if not, reboot the server. Everything magically restarts itself and does its little sanity check, and a quick look at a remote syslog makes certain everything is all right. Two minutes lost for everyone, not 3 hours of trying to clean up the mess left by some stray process somewhere or trying to kill the 100 rogue compression and rsync jobs that got started, eating up all RAM, CPU and network.

    Since all our servers are single-process and are either VMs or single machines, it's a breeze to do this. iSCSI will diligently wait until the machine is back up before trying to reconnect. NFS will keep its locked files up and will reconnect to them. No, seriously, everything simply reconnects!

    Of course, the idea is to minimize these occurrences, so we learn from it, and we try to repair what could've caused the problem in the first place. And there's a place to do this: in a server crash postmortem. But there's no need to make users wait while we try to figure out WTH.

  • by Enry ( 630 ) <enry@@@wayga...net> on Monday February 21, 2011 @12:57PM (#35269588) Journal

    While it's true servers don't need to be restarted as often as their Windows counterparts, there are valid reasons for restarting a server:

    - new kernel, new features
    - new kernel, new security patches (yes, these are distinct reasons)
    - ensure all services restart in the event of a real failure
    - we have cases where memory fills and the system starts thrashing. It may cure itself eventually, but you can't get in via SSH or console (and no, the OOM killer doesn't kick in).

    I think item #3 is important. If you have a crusty system that's been in place for a while and it reboots for some reason, you now have to spend time to make sure everything started, figure out what didn't start, and why. This doesn't mean you need to restart once a week, but every 6-12 months is certainly reasonable.

    • by sjames ( 1099 )

      Did you RTFA? He did NOT say NEVER reboot, just that it is not a valid troubleshooting step (it MAY be part of a valid solution to a diagnosed problem). He explicitly named your first 2 points as good reasons to reboot. The 3rd is a bit of a rookie mistake, but as long as sshd and basic networking start, the rest can be resolved in the unlikely case that a reboot does happen. Arguably, that's a test procedure and not a troubleshooting solution.

      The thrashing case is one he didn't mention. It is sometimes the o

  • This is a myth? (Score:5, Interesting)

    by pclminion ( 145572 ) on Monday February 21, 2011 @12:57PM (#35269590)

    I've heard a lot of myths. I've never heard a myth stating "You need to reboot a UNIX system to fix problems." If anything I've heard the opposite myth. Who promulgates this shit?

    I do remember ONE time a UNIX system needed a reboot. We (developer team) were managing our own cluster of build machines. The head System God was out of town for two weeks. We were having problems with a build host, and tried everything. Day after day. Finally, on the last day before System God was due to return, it occurred to me that the one thing we hadn't tried was to reboot the machine. The reboot fixed the problem, whatever it was.

    I felt stupid. One, for not figuring out the problem in a way that could avoid a reboot. Two, for not recording enough information to determine root cause in a post-mortem analysis. Three, for configuring a system in such a way that a reboot might be required in order to fix a problem.

    To this day I believe that reboot was unnecessary, although at the time it was the fastest way to resolve the immediate blocking issue.

  • by PPH ( 736903 ) on Monday February 21, 2011 @12:57PM (#35269596)

    ... the crap I read on Slashdot is so unbelievable, I have to reboot my laptop in the hopes that it will go away.

  • by Spad ( 470073 ) <[slashdot] [at] [spad.co.uk]> on Monday February 21, 2011 @12:59PM (#35269604) Homepage

    The same argument can be applied to Windows servers; sometimes rebooting will only make things worse, or at least not make things any better. Unfortunately, these days the trusty reboot is often the first option instead of the last resort; at the very least, some basic troubleshooting needs to be done to identify potential causes before you erase half the evidence.

    I suffer from a desktop variant of this issue at work, whereby re-imaging has become the "troubleshooting" tool of choice, to the point that all thought has left the support process: I've witnessed an engineer re-image a PC 3 times (at 30+ minutes each time) before someone else identified that the issue was caused by a BIOS setting and that re-imaging was a complete waste of time.

    Let's face it, if your admin/support staff are lazy and/or stupid, then it doesn't matter which approach they take because they're not going to fix the problem anyway.

  • by aztektum ( 170569 ) on Monday February 21, 2011 @01:04PM (#35269672)

    /. editors: I propose a new rule. Submissions with links to PCWorld, InfoWorld, PCMagazine, Computerworld, CNet, or any other technology periodical you'd see in the checkout line of a Walgreens should be immediately deleted with prejudice.

    They're the Oprah Magazine of the tech world. They exist to sell ads by writing articles with grabby headlines and little substance.

  • Did anyone else notice the reek of the No True Scotsman fallacy? If you agree with him, he brags about it. If you don't, he says it's because you aren't a TRUE pro Unix admin.

    Sorta grates on my nerves a bit.
  • The new crop of sysadmins is sort of funny. I wasn't aware that there was a myth among the Unix ranks that rebooting a server fixed anything. Of course it doesn't fix anything.
    Are the people spreading this myth the same folks that log in as root because hey - they're the sysadmin, and access controls are for wimps?
  • by ledow ( 319597 ) on Monday February 21, 2011 @01:16PM (#35269840) Homepage

    - Design system
    - Build system (involves inevitable reboots)
    - Test system (involves inevitable reboots)
    - Move system into production.

    Once the services you need start up the way you want, don't play with it. Put it into service and have backups of the original image, any changes you make and a working replacement (Yes, have a working replacement - there is *nothing* better than having another machine sitting next to your server that can take over its job with the flick of a switch while you repair it - it also lets you test changes safely, and whenever you're sure the system is how you want it, you push the same image to your "copy of" server).

    If you do it properly, that machine will then stay up until hardware failure. Sometimes that *can* be years away. If you do it properly, you shouldn't ever, ever, ever be rebooting a server that's in production - you're just masking the real problem. Yeah, it'll work most of the time, but it's just a way of papering over the cracks. The server hung, the service died, the settings got out of sync, or whatever, for a reason. Just rebooting is ignoring that reason for the sake of service continuance - if the service is that vital, you should have high enough availability to cover such incidents or that same problem will come back to bite you later.

    Nobody cares about enormous uptimes, but having a server that you haven't NEEDED to touch in months is a good thing. It means that it has a well-defined function and has been performing correctly - that's your "stable" version and should be treated as such. Every time you make a change to a server, it then becomes a "current/experimental" version that you should be wary of.

    At worst, when a problem appears, you turn ON a replacement server and fix the one that is showing problems. If its role is well-specified, you don't get "feature creep" where it's running a million things that it never used to and they're not in your startup properly because it's never rebooted enough for you to test them.

    On Windows, or Unix, you shouldn't have to reboot. If you do, it's to test something or correctly reinitialise after fixing a problem (a post-solution reboot just to make sure it works as required isn't a bad thing but certainly not "required"). The worry of hardware failure on boot shouldn't stop you rebooting, and similarly you shouldn't reboot just to "spot" problems. Both suggest inattention and lack of suitable backups/replacements/high availability solutions.

    Systems can easily go 3-4 years in operation without requiring a reboot. If your hardware is good quality, you're monitoring the server as you should be, you have adequate backups/replacements and the role it performs isn't changed, there's no need to ever reboot it past initial testing. I have internal school servers that only get rebooted in the summer (i.e. once per annum) and that's only because the power goes off to upgrade the electrics each year.

    If it wasn't for that, I'd just leave them running. They don't need kernel 2.6.192830921830 and they have been doing that same job reliably for a LONG time. I'm not going to kick them into a reboot "just because". Similarly even the tiniest memory leak in their processes would cause me problems that I would spot immediately.

    As it is, 450 happy users all day long for years. The last one I installed actually took a whack from a collapsed networking cabinet coming off the wall (full of fully-populated Gigabit switches) and dropping six feet onto it. Apart from a small dent it carried on just fine; the disks were idle, and SMART / data integrity showed no problems. I rebuilt the entire network cabling around it because switching it off wasn't necessary. If it did reboot and it didn't come up in the expected state? There's a copy of it on another machine on the other side of the room - its predecessor, which also didn't reboot for years but wasn't fast enough to run the amount of PHP / MySQL we needed it to among its other functions. Having the replacement machine

  • by idontgno ( 624372 ) on Monday February 21, 2011 @01:25PM (#35269960) Journal

    courtesy of Appendix A of the Jargon File [catb.org].

    Tom Knight and the Lisp Machine

    A novice was trying to fix a broken Lisp machine by turning the power off and on.

    Knight, seeing what the student was doing, spoke sternly: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong."

    Knight turned the machine off and on.

    The machine worked.

  • by Anonymous Coward on Monday February 21, 2011 @01:27PM (#35269996)

    It makes a nice figure. Ten years. HP-UX running a few more or less referential databases. 3650 days. Was it patched properly? Did anyone *really* look after it? The only thing that can be said is that it apparently was quite a stable machine room, with 10 full years of electrical and other provisions more or less intact.

    Then it was shut down for good.

    I'd rather see regular maintenance breaks and maintenance windows (pun not entirely intended) than collect numbers in the uptime command's output. But the story is true: after I left that company, not a single soul ever rebooted it. Ten years later they sent me an email with a PuTTY session attached. Ten years. :)

  • After making configuration changes, it makes _a lot_ of sense to reboot if possible. That way you can confirm that your changes indeed load properly after a reboot. You don't want that kind of surprise when you have long since forgotten all the little tweaks you put in place.
