
How To Adopt 10 'Good' Unix Habits 360

An anonymous reader writes to mention an article at the IBM site from earlier this week, which purports to offer good Unix 'habits' to learn. The ten simple suggestions may be common sense to the seasoned admin, but users with less experience may find some helpful hints here. From the article: "Quote variables with caution - Always be careful with shell expansion and variable names. It is generally a good idea to enclose variable calls in double quotation marks, unless you have a good reason not to. Similarly, if you are directly following a variable name with alphanumeric text, be sure also to enclose the variable name in square brackets ([]) to distinguish it from the surrounding text. Otherwise, the shell interprets the trailing text as part of your variable name -- and most likely returns a null value."
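A minimal illustration of the quoting behavior the article describes (variable names here are invented; note that, as commenters point out below, the delimiters are actually curly braces, not square brackets, despite what the article's text says):

    greeting="hello"
    echo "$greetingworld"     # shell looks up a variable named 'greetingworld', which is unset -> empty line
    echo "${greeting}world"   # braces delimit the variable name -> prints 'helloworld'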
  • FP? (Score:5, Funny)

    by Shadow-isoHunt ( 1014539 ) on Saturday December 16, 2006 @06:17AM (#17267396) Homepage
    export POST="first"
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Imbecile! POST wasn't defined yet! This won't work on /bin/sh, only with the proprietary bash shell.
      • Re: (Score:3, Informative)

        by lahi ( 316099 )
Actually it will work with Korn shell as well, and probably zsh too. Not to mention that many systems have a /bin/sh which is Bourne-compatible but enhanced and supports this, not just bash-based Linux systems.

        -Lasse
  • welll.. (Score:5, Funny)

    by macadamia_harold ( 947445 ) on Saturday December 16, 2006 @06:21AM (#17267414) Homepage
    An anonymous reader writes to mention an article at the IBM site from earlier this week, which purports to offer good Unix 'habits' to learn.

    I seriously doubt reading this article is going to get anyone to start showering on a regular basis.
    • Re:welll.. (Score:5, Insightful)

      by AchiIIe ( 974900 ) on Saturday December 16, 2006 @06:55AM (#17267556)
Some of the points he is making are BS. They are not good `Unix habits`; they are simply hacks that marginally reduce the workload but (arguably) increase complexity.

I.e. there is NOTHING bad about piping cats. While you might indeed get a ~30% performance increase if you skip the cat, the complexity increases. We often sacrifice performance in order to increase abstraction and understanding.

What makes unix so powerful is its modularity, the fact that you can pipe any output from any application to any application's stdin. This makes it possible to use common tools: app1 | app2, app1longoutput | grep thingsIwant. It's the possibility to mix and match common elements that (arguably) makes unix powerful.

Advice that says "stop piping cats" is akin to "stop using helper functions, they overload the stack; instead do everything in one function".

      --
A better-articulated article on the programmer's intellectual ability vs. proper abstraction techniques:
      http://www.acm.org/classics/oct95/ [acm.org] - Dijkstra, Edsger - "Go To Statement Considered Harmful"
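To make the mix-and-match point concrete, a hypothetical pipeline (log file name and field position invented, assuming common log format where field 7 is the request path); each stage is a small single-purpose tool, and any stage can be swapped without rethinking the rest:

    # count the most frequent request paths among 404s in a web log
    cat access.log | grep ' 404 ' | awk '{print $7}' | sort | uniq -c | sort -rn | head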
      • Re:welll.. (Score:5, Funny)

        by SharpFang ( 651121 ) on Saturday December 16, 2006 @07:26AM (#17267680) Homepage Journal
I.e. there is NOTHING bad about piping cats

        PETA would disagree.
      • Re: (Score:3, Insightful)

        by t_ban ( 875088 )

Some of the points he is making are BS. They are not good `Unix habits`; they are simply hacks that marginally reduce the workload but (arguably) increase complexity. I.e. there is NOTHING bad about piping cats. While you might indeed get a ~30% performance increase if you skip the cat, the complexity increases. We often sacrifice performance in order to increase abstraction and understanding. What makes unix so powerful is its modularity, the fact that you can pipe any output from any application to any ap

        • Re:welll.. (Score:5, Insightful)

          by Znork ( 31774 ) on Saturday December 16, 2006 @08:57AM (#17268128)
I'd tend to agree with the GP. Consider, for example, if you have excessively badly named files like '-whatever' in a particular directory; cat has very few destructive ways it can go wrong, while other commands may be less forgiving and cause much more surprise.

          Further, the assembly line abstraction of cat as 'input the contents of these files into the beginning of my pipeline' is predictable, simple and very clear and readable. Using the filenames in the commands means you have to be certain each command will take filenames, and if you replace the first step (from a grep to an awk, for example), you have to rethink your input method semantics again.

          Any typing speed gains and performance improvements you may get will probably get shot the first time some command does something unexpected, or by the extra steps of thought.

And if performance really was a serious concern, you probably shouldn't be writing it as a shell script...
People who argue that piping a single file via cat is the best method are wrong. The following method has all of the advantages you cite, but is also shorter to type, uses fewer system resources (no cat process, no pipe(2) object), and doesn't require you to "rethink the input method" in case you want to change the grep command:

                $ <file.txt grep foobar

            Few people know that input-redirection can be established before the command name :)
        • Re:welll.. (Score:5, Informative)

          by hackstraw ( 262471 ) * on Saturday December 16, 2006 @10:11AM (#17268534)
          cat-ing a file and then piping it to grep. surely that is a good point he is making, because grep already takes filenames as an argument?

          That list was fairly arbitrary, but the piping cat thing is something that basically only annoys the most anal of anal, and they probably do it sometimes too.

It's common for me to do cat foo and then hit the up arrow and append a pipe to another command instead of editing the whole command line. Computers are pretty fast, and real anal people would use fgrep instead of grep; but then again I always use egrep, because I never know when a regular expression will be edited into a more complex one, and to me all of the speeds are the same.

My #1 thing to tell people, although it is not a habit but just where to start, is to learn your shell. No, science guys, csh is not a worthy shell in 2006. If you have to suffer with the wacky behavior of a csh variant, at least use tcsh.

          My #2 thing to learn is a text editor.

As far as habits go: first and foremost, unalias cp, mv, and rm if they're aliased to have the -i flag. In my opinion, that is a BAD habit to start. You WILL lose files sooner or later, and the more painful it is, the harder you will think before doing it again. The -i flag will NOT stop you from redirecting into a file, and most dangerously, the -rf flags to rm will override that -i. Remote copies via rcp or scp will not honor the -i flag. Unarchiving an archive will not honor the -i flag. There are tons of ways to lose files, and you will lose them. It's a much better habit to universally save yourself from yourself so as not to lose them: test with -i, work off of a copy, think before you hit return, and create new directories to eliminate clobbering files. NEVER, EVER do tar cf foo.tar . or tar cf foo.tar *. You will piss off yourself and others by doing that.
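To make the tar advice concrete, a minimal sketch (archive and directory names invented):

    # BAD: entries are relative to '.', so extracting later sprays files
    # all over whoever's current directory
    tar cf foo.tar .

    # Better: archive the directory itself, so extraction creates a
    # single top-level directory
    cd .. && tar cf foo.tar mydir/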

          Actually, this top 10 list is pretty lame, and should be ignored.

        • Re: (Score:2, Informative)

          The best reason to pipe to grep is to keep filenames out of the output. grep foo ?.txt will produce
          a.txt: foo
          b.txt: foo
          c.txt: foo
          whereas cat ?.txt|grep foo produces
          foo
          foo
          foo

          I've also seen Unixes where their shells are linked against spectacularly broken libc's. Under Tru64's Bourne and Korn shells, for example, a multithreaded program foo fork bombs when run as foo < z.txt, but works fine as cat z.txt|foo (foo < z.txt under Bash works, though, because Bash is linked against the GNU
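For what it's worth, if the goal is only to keep filenames out of the output, GNU and BSD grep also offer -h to suppress the filename prefix (it is not in the POSIX grep spec, so the cat form remains the portable one):

    grep -h foo ?.txt    # same matches as cat ?.txt|grep foo, no 'a.txt:' prefixes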

      • Re:welll.. (Score:4, Informative)

        by bugg ( 65930 ) on Saturday December 16, 2006 @10:28AM (#17268666) Homepage
        I don't think it ever makes sense to use cat with one file - something I have seen far too many people do. To do so, logically, is to tell the commands to run through the file twice.

First you are telling cat to output the entire file, and then you are telling grep to go through the entire output of cat. If you're working with gigabytes of data, that can quickly become a frustrating exercise! Folks who are in the mentality of using cat | grep, and even a visual editor like vi instead of sed, are up the creek when they find themselves needing to manipulate and extract portions of very large data sets.
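For instance (file name and line numbers invented), sed can stream a slice out of a huge file without an editor ever loading it:

    # print only lines 10000-20000, and quit reading after that
    sed -n '10000,20000p; 20000q' hugefile.log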
      • Re: (Score:3, Funny)

        by db32 ( 862117 )
I frequently cat bleh | grep whatever just because it was the first way I learned, and I reflexively do it that way. grep whatever * actually tells me which file it found the match in, rather than just showing me the line. So while piping cats isn't inherently bad, there are frequently better ways to do it that also give more correct output, but it depends on what the desired output is. However, "I grepped all the files for 'bleh'" sounds infinitely less disturbing than "I grepped what came out of the cat I pi
  • by Beolach ( 518512 ) <beolach AT juno DOT com> on Saturday December 16, 2006 @06:27AM (#17267436) Homepage Journal
    enclose the variable name in square brackets ([])

    ~ $ ls tmp/
    a b
    ~ $ VAR="tmp/*"
    ~ $ echo $VARa

    ~ $ echo "$VARa"

    ~ $ echo "${VAR}a"
    tmp/*a
    ~ $ echo ${VAR}a
    tmp/a
    Their example correctly uses Curly brackets, {}, but their text says square brackets []. That seems like a typo to me.
    • by tigga ( 559880 )
      Their example correctly uses Curly brackets, {}, but their text says square brackets []. That seems like a typo to me.

      I think somebody needs glasses ;)

      • by thc69 ( 98798 )
        I think somebody needs glasses ;)
        Don't you mean:
        I think somebody needs glasses ;}

        Anyway...Curly brackets? Nyuk nyuk nyuk!
    • by lmfr ( 567586 ) on Saturday December 16, 2006 @07:44AM (#17267748) Journal
      The correct form is {}, not []. There are other things you can use with ${VAR}:
      • ${VAR:-text if $VAR is empty}
      • ${VAR:=text if $VAR is empty and set VAR to this}
      • ${VAR:+text if $VAR is set}
      • ${#VAR} -> length of $VAR
      • ${VAR#pattern} or ${VAR##pattern} -> remove match of pattern from beginning of $VAR (## -> longest match)
      • ${VAR%pattern} or ${VAR%%pattern} -> remove match of pattern from end of $VAR (%% -> longest match)
There are other formats (see the man page), but these are the ones I use the most. E.g.:

      for i in *.png; do convert "$i" "${i%.*}.jpg"; done
  • Typo (Score:3, Informative)

    by seebs ( 15766 ) on Saturday December 16, 2006 @06:31AM (#17267454) Homepage
    The quoted paragraph from the article is incorrect -- and it is in the article too -- but the example immediately following it correctly shows the use of braces ("curly brackets"), not square brackets, for variable names in shell.
  • mkdir (Score:3, Insightful)

    by pfafrich ( 647460 ) <rich@sin[ ]rf.org ['gsu' in gap]> on Saturday December 16, 2006 @06:34AM (#17267468) Homepage
His example of a good habit with mkdir did not convince me:

    $ cd tmp/a/b/c || mkdir -p tmp/a/b/c

    If the directory exists you end up in the directory, if it does not it creates the directory but leaves you where you first started. Hence you don't know which directory you will be in after the command is executed!

    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Saturday December 16, 2006 @06:48AM (#17267526)
      Comment removed based on user account deletion
      • Save a few keystrokes and typos:

( A=tmp/a/b/c; cd $A || { mkdir -p $A && cd $A; } )
        But this is silly: mkdir -p succeeds even if the directory already exists, so this suffices:
        mkdir -p a/b/c && cd a/b/c
        I also resent the use of zillions of little command options to "be more efficient". The -C to tar is just plain silly:
        ( cd a/b/c ; tar xf $ELSEWHERE/foo.tar )
does the same thing, without having to look up the fricking -C each time you want to use it. Three guesses which technique has th

        • by lahi ( 316099 )
Not only is it silly, but the example given in Listing 4 is WRONG: tar xvf -C tmp/a/b/c newarc.tar.gz will not work, unless there is a version of tar which implies -z given -C. (Someone else already mentioned the misplacement of -C as the filename for -f.) And of course, relying on the -z option for tar will not work portably (including on versions of AIX, though I'm not sure about AIX 5). This is a case where zcat (given a zcat that knows gzip) should be used: zcat arc.tar.gz | tar xvf -

          The latter also works nice
          • The latter also works nicely across machine boundaries:
zcat arc.tar.gz | ssh user@foo 'cd tmp/a/b/c ; tar xvf -'

Only that isn't portable. Some flavors of unix zcat will try to uncompress $1.Z; clearly there wouldn't be an arc.tar.gz file.

            Fortunately, there's an easy fix for that: Use redirected stdin rather than a filename for zcat :

            zcat <arc.tar.gz | ssh user@foo 'cd tmp/a/b/c ; tar xvf -'

            Some people call me a 'Grammar Nazi' for being so picky about how people use English. I get frustrated

clearly there wouldn't be an arc.tar.gz.Z file
            • Re: (Score:3, Interesting)

You should be using gzcat, not zcat, anyhow. Portably, zcat can only be assumed to be compress -d.

              gzcat will never be broken in the way described, hence the following is fine and portable IME:

              gzcat arc.tar.gz | ssh user@foo 'cd tmp/a/b/c && tar -xvf -'

              HOWEVER, I find that even vaguely modern CPUs are much faster at gunzipping than typical internet speeds. So, I would use this myself:

              cat arc.tar.gz | ssh user@foo 'cd tmp/a/b/c && gzcat | tar -xvf -'

On the other hand, I would never actually write that,
      • In that case, do this:
        mkdir -p tmp/a/b/c; cd tmp/a/b/c
    • Re:mkdir (Score:5, Insightful)

      by SharpFang ( 651121 ) on Saturday December 16, 2006 @07:36AM (#17267716) Homepage Journal
Especially the habit of using || and && on the command line seems ridiculous to me. These have a place in two situations:
      - scripts
      - commands that take long enough that you go have a coffee.

      This makes sense:

      make install && lilo && reboot

      This doesn't:

      cd tmp/a/b/c || mkdir -p tmp/a/b/c

      If you fail the first part, well, you typed " || " instead of pressing enter.
      If you succeed the first part, you typed " || mkdir -p tmp/a/b/c" without a bloody reason.

      Type first part. Press enter. Observe result.
      If necessary, type the second part, otherwise correct the first without baggage of the second one hanging around.

First, let me be clear that this list is bogus. These 'Good' habits are not that advantageous at all, and I've been using UNIX for quite some time now, and I'm pretty good at it.

Now, being pedantic: the cd tmp/a/b/c || mkdir -p tmp/a/b/c is silly; just do the mkdir -p tmp/a/b/c if that is what you want. Don't test for it, just do it!

His example of a good habit with mkdir did not convince me:

      $ cd tmp/a/b/c || mkdir -p tmp/a/b/c

      If the directory exists you end up in the directory, if it does not it creates the directory but leaves you where you first started. Hence you don't know which directory you will be in after the command is executed!

      Actually, the example is a good habit.

      If the change directory fails, the script will not make the directory in the wrong place. It is a good practice.

This article is really just about good (in the author's opinion) TTY shell usage. There's more to Unix than just its shell
    (plus he didn't mention my favourite shortcut: shell history)

    How about being more inclusive and expanding this to deal with security features (surely the single biggest benefit?) and the ease of working on remote boxes?
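For reference, a few of the usual history shortcuts the poster presumably has in mind (these are bash idioms; the csh family pioneered most of them):

    !!          # rerun the previous command
    sudo !!     # rerun it with sudo prepended
    !$          # last argument of the previous command
    ^old^new    # rerun the previous command with the first 'old' replaced by 'new'
    # plus Ctrl-R for interactive reverse search through history in bash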

  • Quote variables with caution

Hey, FON [slashdot.org], are you reading [datenbruch.de] this? Sadly, even shell code in consumer devices is often deeply flawed. Which is a good thing if you want to install a new firmware, and a bad thing if an attacker does it for you.

  • This article... (Score:5, Informative)

    by nevali ( 942731 ) on Saturday December 16, 2006 @06:43AM (#17267506) Homepage
...is so littered with basic errors that it really shouldn't be recommended to anybody. How is 'tar xvf -C tmp/a/b/c newarc.tar.gz' expected to work, for example? Quote variables with square brackets? Running subshell commands using ; instead of && ? No mention of 'xargs -0'? Don't pipe from cat to grep? Does anybody actually care that people do this (primarily so that the syntax is consistent between a munged and unmunged grep, and also so that the order of the command line is logical from a human point of view)? Plus, of course, it's possible that cat | grep could yield better performance than grep alone: if cat uses mmap() to efficiently read the input files, and the kernel's pipe implementation is good, then it could do better than a grep implementation that simply read()s the files.
    • by delire ( 809063 )

      Agreed, the article is incorrect regarding use of tar's -C switch.

      This works: tar -C /tmp/foo -xvf /path/to/some.tar.gz

      This doesn't: tar xvf -C /tmp/foo /path/to/some.tar.gz

      I agree that '&&' is a better practice to encourage than ';'. Chaining commands without heeding their exit status can make a real mess and cause much confusion.

      Aside from that I don't think it's a bad top-ten. I think there should've been something in there about command completion. If productivity is important to this
      • by shird ( 566377 )
I think what's really missing in the tar command is the '-z', due to it being a .gz
        • by delire ( 809063 )
          It works anyway, try it. -z is redundant in this case.
        • by Haeleth ( 414428 )

I think what's really missing in the tar command is the '-z', due to it being a .gz

          This article is about Unix, where there is no such option (see the specification [opengroup.org]).

          You may be thinking of GNU tar. I suggest you look up what the acronym GNU stands for, with particular reference to the N.

          (This is not to dispute that the example given in the article is broken.)

    • Re:This article... (Score:4, Informative)

      by treat ( 84622 ) on Saturday December 16, 2006 @12:08PM (#17269340)
You're the only one who has mentioned xargs -0. I think it's important to elaborate on this. You should never do "find | xargs" or "find | cpio"; you should always do "find -print0 | xargs -0" and "find -print0 | cpio -0". The former will break if filenames have spaces or newlines in them. You break xargs if filenames have quotes, backslashes, or spaces in them. I have never come across a large data set where you can do find | xargs without the -0 options.

      If you are encountering data created by untrusted users, don't forget the strange consequences of filenames that contain newlines.

      Failing to use -0 is dangerous malpractice.
  • by EvanED ( 569694 ) <evanedNO@SPAMgmail.com> on Saturday December 16, 2006 @07:15AM (#17267628)
    I really like this example:

    ~ $ time grep and tmp/a/longfile.txt | wc -l
    2811

    real 0m0.097s
    user 0m0.006s
    sys 0m0.032s
    ~ $ time grep -c and tmp/a/longfile.txt
    2811

    real 0m0.013s
    user 0m0.006s
    sys 0m0.005s


    I am so glad that he showed what a difference can make, because I was *really* getting annoyed at having to wait that extra .084 seconds.
    • Re: (Score:3, Funny)

      by trip11 ( 160832 ) *
      There is no such thing as .084 seconds. Surely you mean .084 hours.
I'll gladly trade 0.1 seconds of CPU time if it means I don't need to remember an extra command-line option and can just pipe to wc.

      That's why I'm of the v7 Unix philosophy. Make a tool that does one thing, and does it well.

      If I later need to optimize for speed, then I'll read the man page.
    • Re: (Score:3, Informative)

      Especially since that example doesn't account for filesystem caching effects. There's no way of knowing if the bulk of the gain is because of the changed command or because the file is already in RAM, some background process was running, etc.

      When timing commands, it's best to repeat the command several times and see if the times change significantly.
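A minimal way to do that, reusing the article's file name:

    # run it three times; the first run pays the disk I/O cost,
    # later runs are served from the page cache
    for i in 1 2 3; do time grep -c and tmp/a/longfile.txt; done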

    • I am so glad that he showed what a difference can make, because I was *really* getting annoyed at having to wait that extra .084 seconds.

But what if it's in perl, python, or java??? Worse yet, what about a for loop over 1 million files?

  • by gd23ka ( 324741 ) on Saturday December 16, 2006 @07:28AM (#17267686) Homepage
    1. Don't rm with an absolute path because you could easily

    #rm -r -f / tmp/dir

    when "all" you wanted was

    #rm -r -f /tmp/dir

    instead do this:

    #(cd /tmp ; rm -r -f dir)

    or even better use sudo if you have it:

$ (cd /tmp ; sudo rm -r -f dir)

    2. When logged on as root or when using sudo on a production system think things over
    at least twice before hitting enter.

3. Make sure at all times you're on the right machine, logged on as the right user, in the right directory.
Set up your shell prompt to look like this: user@host /path$
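In bash, for example, that prompt is a one-liner (other shells use different escape syntax):

    PS1='\u@\h \w\$ '    # e.g. alice@web01 /var/log$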
    • Also DO remember 'su -' changes working directory.

      % cd ~
      % cd somejunk
      % ls
      just junk, junk...
      % rm -Rf *
      somejunk: permission denied
      % su -
      Password:
      # rm -Rf *^H^H^H^H^H^H^H^Hpwd
      /

    • Re: (Score:2, Interesting)

      by sarathmenon ( 751376 )

      you could easily
      #rm -r -f / tmp/dir
      when "all" you wanted was
      #rm -r -f /tmp/dir

You are forgetting one thing: there's no solution for stupidity and lack of common sense. While tips like these are generally useful, the person who's going to screw up a system will ignore you, and the zillion other tips that people have taken the effort to write. I've seen people who've run rm -rf /bin to clean the recycle bin and then wondered what happened.

It's hard to bring in any improvement in the average unix admin,

When I do an rm which removes more than a single file, I quite often first type it prefixed with echo, look at the output, and after having verified that it is really what I want, I go back in the command history, remove the echo, and press enter. This is especially good with wildcards, which might match something I didn't intend to remove.
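For example (file pattern invented), the dry run looks like this:

    $ echo rm *.tmp
    rm a.tmp b.tmp notes.tmp    # the expansion is what I expected; now
                                # recall the line, drop 'echo', and run it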
    • Comment removed based on user account deletion
  • absolute drivel (Score:4, Informative)

    by Anonymous Coward on Saturday December 16, 2006 @07:35AM (#17267714)
    This is, without a doubt, the most worthless article I have ever seen, both on Slashdot and on ibm.com, of which I thought better. It is not that the article is boring, but that it is factually incorrect in some places.
    "the only excuse to define directories individually was that your mkdir implementation did not support this option, but this is no longer true on most systems. IBM, AIX®, mkdir, GNU mkdir, and others that conform to the Single UNIX Specification now have this option."
    This is nonsense. The expansion of the path components in the {braces} is not a function of mkdir(1), but of the shell, and how its argument expansion is configured. I cannot believe that anyone "with 20 years of experience" is brazenly quoting names of standards in an effort to give his ramblings an air of credibility. Actually, wait a minute...
    Another bad usage pattern is moving a .tar archive file to a certain directory because it happens to be the directory you want to extract it ...
    Better is to check what's in the archive before extracting it in case some inconsiderate fool has failed to put a top-level directory in it.
    His research interests include digital publishing and the future of the book.
    Let me give you a couple [lotfi.net] of hints [ginpetty.com].
    • by portnoy ( 16520 )
      He's not talking about the braces construct, he's talking about the -p option to mkdir. Just because you can't read doesn't mean the article is factually incorrect.
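Both features exist, which may be where the confusion comes from; a short sketch of the distinction (directory names invented; brace expansion is a bash/ksh/zsh feature, not plain sh):

    # mkdir's own feature: -p creates missing intermediate directories
    mkdir -p tmp/a/b/c

    # the shell's feature: brace expansion builds multiple arguments
    # before mkdir ever runs
    mkdir -p project/{src,doc,test}
    # ...which the shell rewrites to:
    # mkdir -p project/src project/doc project/test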
I don't know whether it is complete BS, as I stopped reading after the 3rd or 4th "good habit". Maybe AIX has an exceedingly slow 'cat'; Linux does not. Maybe the other stuff is valid on some OSes, but then I have not encountered them. By the same token, really important stuff is missing completely. This looks a bit like a Unix novice's initial list of things he/she thought about in the first week. Really not worth your time.
  • The title should be 10 Good Unix Hints. Not Habits.

These things he is stating are like when another Unix guy is looking over your shoulder and, when you type in a command that works perfectly fine, has to correct you to do it his way, to save the extra millisecond of CPU while wasting about 5 seconds retyping the command.

Secondly, depending on what you are doing on Unix, you may or may not need to use these commands often enough to remember all the parameters for every tool, like the mkdir -p. While I know mkd
    • Re: (Score:3, Insightful)

      by jgrahn ( 181062 )
      The title should be 10 Good Unix Hints. Not Habits.

Yes -- and habits are what people desperately need. The people I know primarily need three habits: RTFM when they don't understand something; adjusting their behavior based on the FM; and managing their use of the current directory (i.e. you don't have to cd into a directory to use a file which lives there).

    • like the mkdir -p, While I know mkdir could create the entire directory Tree with some parameter, I just never bothered using it, or looking it up.

Interesting... I'm used to using mkdirhier for the same effect. I thought it was simply an alias for mkdir -p, but it turns out to be a shell script (part of the X11 imake package) that uses plain mkdir. I'd better update my scripts to the more portable version now...

  • by Taagehornet ( 984739 ) on Saturday December 16, 2006 @08:32AM (#17267974)

    "10 good habits that improve your UNIX command line efficiency" would probably have been a better title.

    The title did however bring back fond memories of Eric Raymond's The Art of Unix Programming [catb.org]. The book is available online, and if you were hoping for something a bit more substantial as well, then the section Basics of the Unix Philosophy [catb.org] might be worth a read.

    • Yes, some idiots (e.g. Jim Thompson) claim that Eric isn't a good programmer, and yet he somehow has gotten the insight to write this book. Wonder of wonders!
  • tar comment (Score:3, Informative)

    by thomasa ( 17495 ) on Saturday December 16, 2006 @09:18AM (#17268250)
In their example with tar they did

tar xvf

without the dash (e.g., tar -xvf). While that does work, I prefer to add the dash as it makes it more consistent with the other commands. So I consider that a bad example. tar is one of the older commands, like dd, that have weird command-line syntax.

  • by Artraze ( 600366 ) on Saturday December 16, 2006 @09:21AM (#17268272)
    As has been pointed out, this article is riddled with errors. It's also not very interesting. So in the interest of perhaps actually providing some interesting tips:

In scripts, prefix dangerous commands with an 'echo' for a test run (so you can catch all those rm -rf /).

Single quotes are the best quotes for plain strings. The only reason to use double quotes is if you need to quote a variable or a single quote.

Completion is fun, but using wildcards is more flexible (though you'll only want to use benign commands like cd, less, etc.):
        nano /etc/modules.autoload.d/kernel-2.6
        nano /etc/m*a*d/*6

    Note that the use of subpaths reduces the amount of flexibility.
        cd /etc/m* -> /etc/mail
        cd /etc/m*d -> /etc/modules.d
        nano /*/m*/*6 -> /etc/modules.autoload.d/kernel-2.6, and /etc/modules.d/i386 (not quite!)

    Finally, as a comment for the article, using:
        test -e $DIR || mkdir -p $DIR
is much better than their suggestion, and probably faster anyway. Though I'd just do "mkdir -p $DIR" and maybe "&>/dev/null" under most circumstances anyway.

    That's all I can think of at this point. Anyone else have tips?
  • by xyloplax ( 607967 ) on Saturday December 16, 2006 @10:37AM (#17268706)
Here are some more important general IT rules (Unix rules can easily be OS- and version-dependent and frequently come from usage in YOUR environment):
    • Copy before edit
    • Tape backup before delete/decommission
    • READ YOUR COMMAND before hitting return
    • Check where things are symlinked to
    • Echo in your scripts instead of destructive commands as a test run
    • Test your changes on a lesser-importance box
    • Use proper Change Control procedures
• Cover your ass and capture your terminal output (one way is sketched after this list)
    • When taking something out of service, turn it off for a few days/weeks before deleting/purging it
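On the capture-your-output point, the standard script(1) utility is one way to do it (the log path here is invented):

    script -a ~/logs/session-$(date +%Y%m%d).log
    # ...work as usual; everything shown on the terminal is appended to the log...
    exit    # or Ctrl-D to stop recording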
  • by Junta ( 36770 ) on Saturday December 16, 2006 @02:00PM (#17270434)
I'm evaluating the tips as prescriptions for interactive shell use, since that seems to be the theme; writing scripts changes the situation enough to make some of the tips valuable. My number one tip in response to these is: don't try to be too clever (particularly when the biggest benefit of the approach is to say 'look how clever it is!'). Maybe it's because I don't work in a vacuum and all too many times have been called in to clean up where an administrator tried to do something too complicated for their understanding.

mkdir -p is a convenience people should be aware of, but telling people to start getting overly creative with shell expansion behavior is asking for mistakes and trouble. Also, the fact that a mkdirhier script exists in case the expansion isn't supported on all shells is an indication that you shouldn't get overly cozy with it if you are going to be dealing with a lot of different systems and users with different default shells. The amount of time a lot of people take to figure out the 'clever' way, in terms of how to phrase the expansion so the shell will expand it right, is often longer than just typing the two extra lines the less clever way takes. Not saying this isn't useful, but in my experience too many people mess things up too frequently, or take too long to think up the expressions, to believe that trying to be clever saves the time they think it does.

Changing the path instead of moving the archive is not that dire to do normally, but if you do avoid it, to me it's just easier to be in the target directory and use the full path to the archive.

On combining commands, I second that ; can be dangerous and that && as a default will make the chain more ready to break, but again I say don't try to be so clever as to put all you want on one line. Some things go wrong that aren't reflected in return codes; doing it one command at a time lets you think of those. True, though, that with && you never assume the first command works, while your fingers may keep moving and hit enter on the next command before your brain realizes the previous one failed. So && may have merit, but then again taking your time may have more merit.

On the quotation thing, true enough: you must understand how quoting works to do remotely complex things, particularly in nested circumstances (i.e. ssh to a system to run a command, where the command line will be parsed by two shells.)

On the breaking up long lines thing: in a shell script it may be more necessary, but on an interactive command line it could also indicate you are trying too hard to do things in one chunk. I admit sometimes it does get too wide, but less experienced admins in particular should consider whether there is a simpler way to do it in smaller chunks they won't screw up.

    Grouping commands is important to know, and harmless (better than repeating the same pipe over and over and more powerful).

I will say xargs is way, way over-rated. Too many people, particularly those dealing with directory trees containing spaces, get into trouble piping the output of find into anything, when IFS causes something like "/tmp/Monthly Report" to be parsed as two different files. find has a competent filtering mechanism (-type, -iname, -name, etc...) and its own -exec, and find is well aware of the state of each file. You could assign IFS to try to avoid it, but using find's built-ins where possible alleviates it.
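A sketch of the find-only approach (file pattern invented); the POSIX '+' terminator batches arguments the way xargs does, but names with spaces or newlines are passed intact as single arguments:

    # spaces and newlines in names are harmless: find hands each
    # path to rm as one argument, no word-splitting involved
    find /tmp -type f -name '*.tmp' -exec rm -- {} +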

When you are talking about interactive shell operation, picking the .01s instead of the .09s operation is a bad example. He could have set up a much larger demonstration that would have been useful, but this just makes people mock the example. In any event, this seems like an okay thing to convey, but I dunno if it would've made my top 10.

    Probably a more valid point about using awk, and a common trap I do see people stuck in.

    On piping cat, that seems like more an annoyance than anything constructive. Some people use the cat | grep construct because it is so unambiguou
  • bad ibm no cookie (Score:4, Informative)

    by illuminatedwax ( 537131 ) <stdrange&alumni,uchicago,edu> on Saturday December 16, 2006 @11:28PM (#17274296) Journal
    Great, IBM, way to ignore the dreaded "xargs" security bug! Seriously, IBM notices some kind of obscure danger about underscores, but completely ignores the fact that xargs separates arguments by newlines??

    Let's say I'm a sysadmin and I'm running as root, trying to remove all the files in the /tmp directory by a certain user for some reason:
    find /tmp -user 1001 | xargs rm

User 1001 has a directory in /tmp called "haxor\n" (note the trailing newline in the name). Inside there he puts another directory "etc", and inside there he puts a file called "passwd".

    Can you guess what happens?
find prints:
/tmp/tmp43cc91
/tmp/haxor

/tmp/haxor
/etc/passwd

xargs sees: ["/tmp/tmp43cc91", "/tmp/haxor", "", "/tmp/haxor", "/etc/passwd"]
    Oops!! You just hosed your system!

The correct way to use xargs is with the -0 switch, which makes it separate the input by null characters, which cannot appear in filenames. find has a handy -print0 option which produces exactly that output:

    find /tmp -user 1001 -print0 | xargs -0 rm

    And your system is safe.
  • by Vince ( 4999 ) on Sunday December 17, 2006 @04:14AM (#17275638) Homepage
    Does anybody else notice these benchmarks are flawed? For an article discussing the shell, we should know that in this first benchmark, time is only counting the execution time of grep, and not wc, and is thus undercounting how much CPU time is actually used. How about a neat shell trick to correctly run that benchmark?

    > ~ $ time grep and tmp/a/longfile.txt | wc -l
    > 2811
    >
    > real 0m0.097s
    > user 0m0.006s
    > sys 0m0.032s
    > ~ $ time grep -c and tmp/a/longfile.txt
    > 2811
    >
    > real 0m0.013s
    > user 0m0.006s
    > sys 0m0.005s
    > ~ $
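One trick that would answer the poster's question: run the whole pipeline in a child shell, so that time accounts for every stage (reusing the example's file path):

    time sh -c 'grep and tmp/a/longfile.txt | wc -l'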
