
 




Automating Unix and Linux Administration 167

nead writes "If you are a disciple in the church of Wall, or like me you believe that laziness is the father of invention, or if you simply have more than a couple of *nix machines to administer, Kirk Bauer's new book Automating Unix and Linux Administration is definitely for you. From the creator of the popular open source projects AutoRPM and LogWatch comes a thorough - and, believe it or not, entertaining - look at how one can leverage the power of a few common tools to significantly reduce the time and effort system administrators spend doing their jobs." Read on below for the rest of nead's review.
Automating Unix and Linux Administration
author Kirk Bauer
pages 547
publisher Apress Inc.
rating 8.0
reviewer Nick Downey
ISBN 1590592123
summary Tools and methods for automating *nix administration for a couple (or a few thousand) computers.

From the outset, Bauer takes a straightforward and principled approach to problem analysis. Usually starting with anecdotal example scenarios (many of which will have you saying "been there before") and progressing through ideals, goals and consequences, he examines many of the common issues facing system administrators with candor and realism. Almost nowhere in the book does the author assume an authoritarian stance; he questions his own decision making process and encourages the reader to come up with exceptions to his rules. Fundamentally Bauer has one goal -- to develop a comprehensive system for reliably automating the tedious but important tasks that all system administrators face on a recurring basis.

Admittedly, it would be a fallacy for any book to claim complete and comprehensive coverage of all things related to system administration and Bauer does no such thing. When the author touches on topics that obviously require more depth than a single chapter can afford, he is certain to include at least one reference (and in many instances more) to alternate publications without bias to any particular publisher or author. Having said that, the book's scope and depth of topic coverage is impressive. Starting with an exhaustive examination of SSH and progressing through cfengine, NFS, LDAP, RPM and Tripwire (just to name a few) Bauer provides carefully detailed instruction on how to automate tasks ranging from simple network management and software packaging to security, monitoring and backups. The author even goes so far as to suggest methods for efficiently front-ending automation systems for the less technical of users.

Although not expressly stated in the text, the overall theme of the book is "walk on the shoulders of giants." Starting with simple example scripts (in both Bash and Perl) and many single-line commands, Bauer builds on the content of each previous chapter as the book progresses. Examples shown in early chapters are incorporated into more complex systems one step at a time. Following along is easy: each script or command is detailed on a line-by-line basis, and because of Bauer's principle-based approach the reader is rarely left wondering why the author has chosen a particular tool or implementation. More often than not the elegance of how Bauer pieces together methods and procedures will excite you about the possibilities for automation of your own systems.

Although Bauer explicitly states that readers are presumed to have more than a modicum of experience in system administration, even the novice administrator, as well as those responsible for only a handful of machines, will find this book invaluable. Also included are three appendices which provide an easy introduction to basic shell tools, creating your own Red Hat distribution, and packaging software as RPMs. These portions of the book alone justify the less-than-$40 price tag, but for those who run clusters or data centers, this book stands to save you countless hours of repetitive headaches. Published by Apress and boasting nearly 600 pages, this lively read has made itself a permanent addition to at least one reference library.


You can purchase Automating Unix and Linux Administration from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.

Automating Unix and Linux Administration

Comments Filter:
  • by dnotj ( 633262 ) on Thursday October 09, 2003 @12:12PM (#7174074) Homepage
    RANT

    Have book reviews on slashdot become about who can get the earliest links to their amazon.com partner site?

    This books looks interesting (to me) and I might actually take a trip to the book store to check it out. But the comments (so far) aren't about the book.

    /RANT

    • Have book reviews on slashdot become about who can get the earliest links to their amazon.com partner site?

      No - it's about misrepresenting an advertisement for Barnes & Noble as 'news' instead of as a paid commercial.
    • ...the reviewers should be obligated to explain why the book wasn't a 10/10. In many cases they do say why, but in this case I thought that 8/10 was very stingy considering the compliments that he gave. He could have kept the same compliments & the score if he just explained why he took off 2 points.
    • But the comments (so far) aren't about the book.

      This is slashdot. Nobody reads the articles before posting. Do you really expect them to read a whole BOOK before posting?

      ;)

  • by Gudlyf ( 544445 ) <<gudlyf> <at> <realistek.com>> on Thursday October 09, 2003 @12:17PM (#7174129) Homepage Journal
    Obligatory O'Reilly plug:

    Perl for System Administration [amazon.com].

  • That just sounds so bad. I prefer the term "minimal keystroke solution". -B
  • by Anonymous Coward on Thursday October 09, 2003 @12:27PM (#7174243)
    1. Decide that automating takes too much time
    2. Do everything by hand
    3. Fuck up once too often
    4. Decide that automating is necessary

    Don't know about the rest of you.
    • I had to go through steps 5 through 8
      5. Screw up the automation process
      6. Restore from tape
      7. Scramble for two months of data
      8. Debug scripts

      Moral of the story, boys and girls: if you're an idiot, buy a book!
    • by KeithH ( 15061 ) on Thursday October 09, 2003 @01:35PM (#7174904)
      1. Type the same thing three times in a row
      2. Decide that this task could be replaced by a shell script.
      3. Spend the afternoon perfecting and documenting the 400-line shell/perl/expect/... script so that I can save 30 seconds a day for the next few months.
      4. Find a better solution on sourceforge
  • Little rants.. (Score:3, Interesting)

    by Karamchand ( 607798 ) on Thursday October 09, 2003 @12:28PM (#7174253)
    • If this book obviously doesn't have any downsides (at least you didn't mention any) - why did it get only eight points? (assuming maximum would be 10, as usual. Or do you mean eight out of eight points?;)
    • 547 pages - I'd say that's nearer 500 pages than 600 pages. Or simply around 550 pages. But certainly not nearly 600 pages.
    At all - thanks for the review!
  • Autocrash (Score:3, Insightful)

    by G3ckoG33k ( 647276 ) on Thursday October 09, 2003 @12:31PM (#7174286)
    Call me a cynic, but I am under the impression that without knowledgeable personnel (i.e. people who don't need auto-whatever) there will be, almost as surely as a natural law, a corrupt server or an autocrash. Don't do away with knowledge - see what happened in the Windows world.
    • Agreed, that is why it is important for admins to automate their tasks by writing scripts themselves. The task of writing many of these administration scripts helps one understand more fully the ins and outs of the programs they are using and the tools they need in order to do their daily monitoring of processes, performance, and logs.

      Books such as this one (and others, both about administration and tools, such as "Perl for System Administration", and about the unix tools themselves, such as "Mastering
    • Re:Autocrash (Score:5, Insightful)

      by Chanc_Gorkon ( 94133 ) <gorkon@gmai[ ]om ['l.c' in gap]> on Thursday October 09, 2003 @01:17PM (#7174745)
      And then reality smacks you in the face. If you're in a shop with a staff of 10 sysadmins and you have 2000 servers to look after, you NEED automation. Anyone who thinks automation puts sysadmins out of work is full of it. Who do you think has to write/customize these scripts? SYSADMINS! Do you as a sysadmin really have time to properly pore over the logs by hand, or would writing a script to do this for you help you do the other things you need to do, like:

      Patching
      Fixing user passwords (unless you have a help desk)
      Working on upgrades and installs.
      Planning for future growth
      Work on your disaster recovery plan
      Possible Machine Room moves
      etc
      etc

      Sysadmins do more than watch over a system. We need to realize that automation is NOT a panacea, but yet another tool in the sysadmin's bag. Besides... if everything was supposed to be done by hand, why was cron created?
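      The log-combing chore mentioned above is a natural first target for cron. As a hedged sketch (the summarize_log name and the log path in the comment are invented for illustration, not taken from the book), here is the kind of tiny shell helper you might schedule nightly:

```shell
# Hypothetical helper: count how many times a pattern appears in a log,
# so cron can mail you a one-line summary instead of you reading logs by hand.
summarize_log() {
  local pattern="$1" logfile="$2"
  local count
  count=$(grep -c -- "$pattern" "$logfile")
  echo "$pattern: $count occurrence(s) in $logfile"
}

# A crontab entry could then run it each morning, e.g. (paths hypothetical):
#   10 6 * * *  summarize_log 'Failed password' /var/log/auth.log | mail -s 'auth summary' root
```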
  • I bought the book (Score:5, Interesting)

    by klieber ( 124032 ) on Thursday October 09, 2003 @12:34PM (#7174298) Homepage
    I own the book and have been using it for a couple of weeks now. All in all, I think it's a great resource if you already have a fair amount of linux knowledge. I purchased it primarily because of its coverage of cfengine [cfengine.org] but found it useful for other purposes as well.

    Definitely not for the newbie system administrator (nor does it pretend to be). But it is a great resource if you're looking to administer more boxes with fewer bodies.
  • Unix/Linux ratio?? (Score:3, Insightful)

    by swordgeek ( 112599 ) on Thursday October 09, 2003 @12:35PM (#7174309) Journal
    Simple question, that isn't really answered in the review. How much of this book is generic Unix/Unixlike information, how much is specific to a single vendor OS, and how much is specific to Linux?

    I'd like to think that most of this stuff is fairly transportable, but when I hear about "bash scripts," I wonder if it's the reviewer or the book that's pushing Linux-centricisms. (and yes, I know that bash is available everywhere, blah blah blah. It still doesn't make it a valid replacement for /sbin/sh, for admin scripting)
    • by jaymz666 ( 34050 )
      Coupled with the fact that the reviewer liked it for the RPM information, it really sounds like it's very Red Hat-centric
      • by archen ( 447353 )
        Building rpm's might be red hat centric (although other systems use rpm too now), but the idea is probably more important. Whatever packaging system you use, the only real differences are in the package system which you should have a fundamental understanding of anyway.

        On my FreeBSD boxes I just use 'make package' off of the box that keeps the source tree in sync. Then I use rsync to push them to the other servers, where cron picks up the updates and installs them. I could just as easily replace that with
      • by kaybee ( 101750 )
        Disclaimer: I'm the author of this book.

        Very little of the book is only applicable to Linux, and even less is only applicable to Red Hat Linux. Basically, one appendix is on Red Hat Linux. RPM is covered more than other package managers (but RPM is also the most common package manager to use across different Unix variants). Solaris patches are also covered to some degree. Everything else is pretty generic.
    • by kaybee ( 101750 ) on Thursday October 09, 2003 @01:23PM (#7174795) Homepage
      Disclaimer: I am the author of this book.

      The book is aimed towards all Unix variants (as is Cfengine, which is a big part of the book). But I prefer Linux and use Linux for many of the examples... but all that usually means is it begins with #!/bin/bash at the top instead of #!/usr/local/bin/bash or #!/bin/sh.

      One appendix is on RPMs (which is used on other systems besides Linux) and another is on Red Hat Linux specifically.
    • I have yet to encounter a Unix platform that bash wouldn't run on. I also have yet to encounter one that I didn't choose to install bash on anyway. So bash scripts are perfectly transportable.

      I'd say that bash has certainly been around long enough to be considered standard issue.
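      One common hedge in this bash-location debate (my addition, not something the thread itself proposes) is to let env locate the interpreter, so a script need not care whether bash sits in /bin or /usr/local/bin:

```shell
#!/usr/bin/env bash
# env searches $PATH for the named interpreter, so the same shebang works
# whether bash lives in /bin (Linux), /usr/local/bin (BSD, Solaris), or
# anywhere else on the search path.
echo "env would launch: $(command -v bash)"
```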

        • It's not really a question of whether it runs on it; it's a question of whether it's installed. Bash is only default on Linux; even the BSDs don't install it by default, and there may be times when the senior admin just doesn't want it on there, whether you want it or not, and that is his prerogative. A good generic book, in my opinion, will use tcsh or sh for its example scripts, unless the script is specifically for Linux.
          • It's not really a question of whether it runs on it; it's a question of whether it's installed.

          Since the book is for admins, presumably if they want it, it will be installed! I have seen csh and ksh more often than tcsh (until recently that is). In the end, you just have to pick one and be sure it's available for all or at least most platforms. I can't really think of any complete combination of tools that is guaranteed to be installed on any Unix platform, especially if you count platforms where the tool exists but

          • I agree -- and this is just what I say in the book, and why I picked it as an example here. If you want bash on all of your systems, it is easy to do. The book even talks about such a task.

            But, if for some reason you don't want to use bash, none of the examples are so complicated that they couldn't be converted to any other language.
  • Nothing new here (Score:4, Interesting)

    by AKAImBatman ( 238306 ) <`akaimbatman' `at' `gmail.com'> on Thursday October 09, 2003 @12:39PM (#7174355) Homepage Journal
    This is nothing new. Unix admins have been automating machines since before Linux was even a glimmer in Torvalds' eye. The only difference, I think, is that there are a great deal more admins today who don't know their craft very well. Too many fuzzy GUI widgets (that invariably screw things up) getting in the way. (You hear me, Red Hat?!)

    • (You hear me RedHat?!)

      OT: Does anyone actually like Disk Druid? If you do, did you use it only for a single-boot box without a pre-existing installation of any other OS?

      • I prefer fdisk over disk druid, possibly because I know fdisk, and don't know disk druid. I was surprised to see disk druid the default partitioner and fdisk the "experts only" option on a redhat install.

        Likewise, when they switched to grub, I was whizzed. I spent all this time learning lilo, just to have grub dropped on me, which I have gotten used to and now prefer :)

        I have no intention of "getting used to" disk druid though, as long as fdisk is still around, I'll use that.

  • I know it doesn't apply in all cases, but if you're just running a web server my experience is that running it all out of Knoppix RAMDisk just makes sense in every way. It's faster, it's cheaper and if it screws up, just start from scratch. But since it's so cheap why not run redundant servers? It's a winner from every angle.
    Yeah, you need to make a few little scripts to automate your rebuilding process, but once you've done that it's about as maintenance free as you can possibly imagine.
    Of course a
  • If RPM weren't such a mess, it might be more convenient to make RPMs of the packages you build from source when you want to install the exact same binary on all the other machines. I just make Slackware-style tarballs, so it's real easy; no spec file needed.

    • RPM tracks dependencies, which is the main reason to use it.

      It is a user-hostile, old-skool *nix horror, but it's a hell of an improvement on HP-UX's dreaded "depot" system.
      • Which is why I don't use it. I compile all the critical software, and a lot of other software, on my systems from the original source. Some packages even have local source mods (patches). The reason to use a binary packaging system in this case is that it forms a convenient way to compile once, and install on many systems. Unless doing a source compile on each machine, this ensures each machine has a checksum verifiable identical copy of every file. I don't need the dependency tracking for the purpose

  • i haven't been getting much bonuses or raises lately. why waste money on a book?

    i get most of my stuff from reading periodicals while sipping chai at Borders. and websites like the Linux Documentation Project.

    http://www.tldp.org

  • by Anonymous Coward
    RE: "laziness is the father of invention"

    and necessity is the mother of invention, so does this mean laziness and necessity get together and have nasty sex before inventing something? :)
  • by bsDaemon ( 87307 ) on Thursday October 09, 2003 @12:56PM (#7174542)
    man cron
    • For those of you running on crippled (MS Windows) systems here it is:

      CRON(8)

      NAME
      cron - daemon to execute scheduled commands (Vixie Cron)
      SYNOPSIS
      cron
      DESCRIPTION
      Cron should be started from /etc/rc or /etc/rc.local. It will return immediately, so you don't need to start it with '&'.

      Cron searches /var/spool/cron for crontab files which are named after accounts in /etc/passwd; crontabs found are loaded into memory. Cron also searches for /etc/crontab and
  • by _ph1ux_ ( 216706 ) on Thursday October 09, 2003 @01:05PM (#7174612)
    I have just about any task down to issuing one command:

    "Brian, go over to server X and do such-and-such"
  • Multiple Machines (Score:5, Interesting)

    by BrookHarty ( 9119 ) on Thursday October 09, 2003 @01:48PM (#7175005) Journal
    One of the problems we have is when you have clusters with 100+ machines and need to push configs or gather stats off each box.

    On Solaris, we run a script called "shout" that does a for/next loop that ssh's into each box and runs a command for us. We also have one called "Scream" which does some root-privileged ssh-enabled commands.

    Nortel has a nice program called CLIManager (used to be called CLImax) that allows you to telnet into multiple Passports and run commands. Same idea, but the program formats the data for display. Say you wanted to display "ipconfig" on 50 machines; this would format it so you have columns of data, easy to read and put in reports.

    Also, it has a "Watch" command that will repeat a command and format the data. Say you want to display counters.

    I have not seen an open source program that does the same as CLIManager, but it has to be one of the best ideas that should be implemented in open source. Basically, it logs into multiple machines, parses and displays data, and outputs all errors on another window to keep your main screen clean.

    Think of logging into 10 machines, and doing a tail -f on an active log file. Then the program would parse the data, display it in a table, and all updates would be highlighted.

    I haven't spoken to the author of CLIManager, but I guess he also hated logging into multiple machines and running the same command. This program has been updated over the years and is now the standard interface to the nodes. It just uses telnet and a command line, but you can log into hundreds of nodes at once.

    Wish I could post pics and the tgz file; maybe someone from Nortel can comment. (Runs on Solaris, NT and Linux)
    • Nortel has a nice program called CLIManager (used to be called CLImax),


      thank heck the final word wasn't one beginning with "T"....
    • > Nortel has a nice program called CLIManager (use
      > to be called CLImax), that allows you telnet into
      > multiple passports and run commands.

      Fermilab has available a tool called rgang that does (minus the output formatting) something like this:

      http://fermitools.fnal.gov/abstracts/rgang/abstract.html

      We use it regularly on a cluster of 176 machines. Its biggest flaw is that it tends to hang when one of the machines it encounters is down.

      But it is free so I won't complain. :)
    • One of the problems we have is when you have clusters with 100+ machines and need to push configs or gather stats off each box. On Solaris, we run a script called "shout" that does a for/next loop that ssh's into each box and runs a command for us. We also have one called "Scream" which does some root-privileged ssh-enabled commands.

      While the serial approach of looping through machines is a huge improvement over making changes by hand, for large scale environments, you need to use a parallel approach,
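      A minimal sketch of that parallel fan-out, assuming passwordless ssh keys and a machines.txt host list as in the parent post (the pshout name is invented here, echoing the "shout" script above):

```shell
# Run the same command on every host in machines.txt concurrently,
# then collect the per-host output once all background sshs finish.
pshout() {
  local cmd="$*" host
  while read -r host; do
    # -n detaches ssh from our stdin; BatchMode fails fast if keys are missing
    ssh -n -o BatchMode=yes "$host" "$cmd" > "out.$host" 2>&1 &
  done < machines.txt
  wait  # block until every background ssh has exited
  for f in out.*; do
    echo "=== ${f#out.} ==="
    cat "$f"
  done
}
```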

    • I do pretty much the same thing this way:

      Generate ssh key file.
      Put pub key file in $HOME/.ssh/authorized_keys2 on the remote machines.

      Have a text file with a list of all the names the machines resolve to.

      for i in `cat machinelist.txt`; do echo "running blah on $i"; ssh user@$i 'some command I want to run on all machines'; echo " "; done

      It comes in handy for stuff like checking the mail queues or doing a tail -50 on a log file. Mundane stuff like that. Every once in a while I'll do basically the same
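      Wrapped in a function, the loop above becomes a little "shout"-style tool; a sketch under the same assumptions as the parent post (passwordless keys already pushed, a machinelist.txt of hostnames):

```shell
# Serial version: visit each host in turn, run the same command,
# print a banner per host, and flag any box where the command fails.
shout() {
  local cmd="$*" host
  while read -r host; do
    echo "=== $host ==="
    # -n keeps ssh from swallowing the loop's stdin (machinelist.txt);
    # BatchMode makes it fail fast instead of prompting for a password.
    ssh -n -o BatchMode=yes "$host" "$cmd" || echo "FAILED on $host"
  done < machinelist.txt
}
```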
  • Learn to script (Score:4, Interesting)

    by holden_t ( 444907 ) <holden_t.yahoo@com> on Thursday October 09, 2003 @03:09PM (#7175570)
    Certainly I haven't read the book but it looks as if Kirk is offering examples of how to write scripts to handle everyday gruntwork. Good idea.

    But I say to those that call themselves sys.admins, Learn how to script!!!

    I work at a large bankrupt telecom :) and it's amazing the number of admins that don't have the slightest idea how to write the simplest loop. Or use ksh, bash, or csh's command history. Or vi.

    Maybe this is just a corporate thing. They were raised, in a sense, in a setting where all they had to do was add users and replace disks. Maybe they never learned how to do anything else.

    Back in '83 I took manuals home and pored over every page, every weekend for months. That didn't make me a good admin, but it gave me a good foundation. From there I had to just halfway use my head (imagination?) and start writing scripts. Ugly? Sure. Did they get better? Of course!

    Now I play admin on 110+ machines, and I stay bored. Why? Because I've written a response engine in Expect that handles most of my everyday problems. I call it AGE, Automated Gruntwork Eliminator.

    There's no way I could have done this if I had just sat back and floated, not put in a bit of effort to learn new things.

    T.
    • Now I play admin on 110+ machines, and I stay bored. Why? Because I've written a response engine in Expect that handles most of my everyday problems. I call it AGE, Automated Gruntwork Eliminator.

      Boredom is good. Boredom means nothing bad is happening. If you want to not be bored, do something stupid.

      I have been interested in learning how to write scripts in Bash. Any recommendations?

      • Try the bash man page. No, I'm actually serious -- read it over, about three times. It may actually start making sense at that point. There is a lot to it though...
      • O'Reilly's book helped me quite a bit.

        http://www.oreilly.com/catalog/bash2/

        In addition, Debian has a new package called abs-guide that I haven't checked out yet.

        http://packages.debian.org/unstable/doc/abs-guide.html

        --I've written a bunch of helpful bash scripts to help me with everyday stuff, as well as aliases and functions. If you want, email me - kingneutron at yahoo NOSPAM dot com and put "Request for bash scripts" in the subject line, and I'll send you a tarball.
    • Now I play admin on 110+ machines, and I stay bored. Why? Because I've written a response engine in Expect that handles most of my everyday problems. I call it AGE, Automated Gruntwork Eliminator.


      and have you published this tool??? So others can share and enjoy and possibly improve it for you???
  • by Moderation abuser ( 184013 ) on Thursday October 09, 2003 @04:00PM (#7176232)
    If you're thinking of this computer or that computer then you won't make an effective systems administrator. You have to see the network of all of the computers as a single whole and treat them as such.

    Once you've got the mindset change sorted, 10, 100, or 1000 systems makes no difference; it's just as simple to manage. You aren't managing individual computers, you're managing an infrastructure.

    Course, you actually have to be competent as well... Obviously.

  • by lysium ( 644252 ) on Thursday October 09, 2003 @04:45PM (#7176708)
    I thoroughly browsed the book at my local B&N cafe, and recommend it highly. It is a well-written, knowledgeable book for admins/techs of intermediate sysadmin skills. I mean truly intermediate, for there are no lengthy chapters on Installing Linux, The History of Unix, The History of the Internet, or any such thing. Just useful instruction, insight into the application and usage of certain software packages, and enough scripts to keep one happy. The author's tone is similarly refreshing, as it avoids the blandness of other (good) tech books I've read.

    It is definitely on my list of Expensive Books ($50. Am I cheap?) to Buy.


  • This might very well be a book I'll pick up sometime. I'm always looking for more ideas.

    I maintain ~170 remote Linux boxes (in our company's retail stores and warehouses), as well as our ~30 or so in-house servers.

    I went through a lot of work to enable our rollout and conversion to go more smoothly. The network and methodology for users, printers, etc. is extremely simplified and patterned.

    For each of the 3 'models' of PCs we use, I have a master system that I produced. I used Mondo Rescue [mondorescue.com] to

  • Full details, including sample chapter, here [apress.com].

    I see the /.gods have already got to this thread: "Duh, it's easy, just use cron/telnet/syslog!" Do any of you people have more than a home PC to maintain? Come to that, would anyone trust you with more than that?

    Ade_
    /
  • On a similar topic, I am a longtime Unix/Linux admin who has inherited a large farm of Windows servers (don't ask, I'm not happy about it either). This is probably about the worst place to ask this, but I'll give it a shot:

    Do any of you have recommendations for books/URLs on how to effectively manage a large Windows cluster using automated methods?

    Thanks in advance for any useful information.
    • At one point the publisher wanted at least a chapter on this in the book. But they figured that Unix people wouldn't want it in the book and that Windows people wouldn't buy a Unix book.

      Personally, I really feel for anybody who has to manage more than two windows machines. But I do think there are methods that Microsoft will be sure to sell you.
    • I did this in a test environment (testing hardware) using Linux and VNC. You can automate quite a bit of mouseclicking via replays and recordings, etc.. Check in to it.

      As far as a large cluster, I had a dozen racks full of 1u and 2u machines, does that count as large? :)
