Don't Overlook Efficient C/C++ Cmd Line Processing

An anonymous reader writes "Command-line processing is historically one of the most ignored areas in software development. Just about any relatively complicated software has dozens of available command-line options. The GNU tool gperf is a perfect-hash-function generator: given a set of user-provided strings, it generates C/C++ code for a hash table, a hash function, and a lookup function. This article offers a good discussion of how to use gperf for efficient command-line processing in your C/C++ code."
  • by tot ( 30740 ) on Sunday July 29, 2007 @11:40AM (#20032263)
    I would not consider the speed of command-line option processing to be a bottleneck in any application; the overhead of starting the program is far greater.
    • by ScrewMaster ( 602015 ) on Sunday July 29, 2007 @11:42AM (#20032279)
      I'd say the speed of human motor activity is an even greater limiting factor.
      • Re: (Score:3, Informative)

        by pete-classic ( 75983 )
        What a limited point of view. See "man system", for example.

        -Peter
      • Re: (Score:3, Insightful)

        by timeOday ( 582209 )

        I'd say the speed of human motor activity is an even greater limiting factor.


        I wouldn't bet on that. The command line is not just a human/computer interface, but also a computer/computer interface. It's very common for one script to fire off many others.


        That said, I agree with the grandparent that it's hard to imagine a program where command line processing is a significant runtime expense.

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      It's still handy to have a fairly comfortable way of generating code for the things you need every time (or at least very, very often), in an easily applied and well-optimized form. I like it.
    • by ChronosWS ( 706209 ) on Sunday July 29, 2007 @11:46AM (#20032301)
      Indeed, what the hell? Now you need another tool and another source file for what is essentially declaring a dictionary in C++, something that should already be in any good developer's library. Yeesh.

      If you don't like the nasty nested ifs, make the keys in your dictionary the command-line options and the values delegates, then just loop through the list of options passed on the command line, invoking the delegate as appropriate. That eliminates the ifs (there are no switch statements either), and each of your command-line arguments is now handled by a function dedicated to it, bringing all the benefits of compartmentalizing your code rather than stringing it out in one huge processing function.
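
      A minimal sketch of that delegate-map idea (the option strings and handler names here are invented for illustration, in era-appropriate C++03):

        #include <cstdio>
        #include <map>
        #include <string>

        static void opt_verbose() { std::puts("verbose on"); }
        static void opt_version() { std::puts("demo 1.0"); }

        int main(int argc, char *argv[]) {
            // Dictionary: option string -> handler ("delegate") for it.
            std::map<std::string, void (*)()> handlers;
            handlers["--verbose"] = opt_verbose;
            handlers["--version"] = opt_version;

            for (int i = 1; i < argc; ++i) {
                std::map<std::string, void (*)()>::const_iterator it =
                    handlers.find(argv[i]);
                if (it != handlers.end())
                    it->second();  // dispatch to the dedicated function
                else
                    std::fprintf(stderr, "unknown option: %s\n", argv[i]);
            }
            return 0;
        }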
      • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Sunday July 29, 2007 @12:06PM (#20032439) Homepage Journal

        Now you have to have another tool and another source file for what is essentially declaring a dictionary in C++, which should be in any good developer's library?
        Due to the brokenness of how some linkers handle virtual method lookup tables, using anything from the C++ standard library tends to bring in a large chunk of dead code [wikipedia.org] from the standard library. I compiled hello-iostream.cpp using MinGW and the executable was over 200 KiB after running strip, compared to the 6 KiB executable produced from hello-cstdio.cpp. Sometimes NIH syndrome produces runtime efficiency, and on a handheld system, efficiency can mean the difference between fitting your app into widely deployed hardware and having to build custom, much more expensive hardware.
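
        For reference, the two test programs were presumably something like this (reconstructed; the poster's exact sources aren't shown):

          // hello-iostream.cpp -- pulls in the C++ iostream machinery
          #include <iostream>
          int main() { std::cout << "Hello, world!" << std::endl; }

          // hello-cstdio.cpp -- only touches the C stdio layer
          #include <cstdio>
          int main() { std::puts("Hello, world!"); }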
        • by sentientbrendan ( 316150 ) on Sunday July 29, 2007 @02:36PM (#20033479)
          It sounds like the author is statically linking his library and running on an embedded system. It is not surprising in that case that the C++ standard library brings in much more code than the C standard library, but it should be made clear that this is not relevant to desktop developers, pretty much all of whom dynamically link with glibc.

          Again, to be clear: dynamically linking with the C++ standard library is not going to increase your executable size. Please don't try to roll your own versions of code that exists in the standard library. It is a real nuisance when people do that.

          I should qualify that by saying that template instantiations do (of course) increase executable size, but they do so no more than if you had rolled your own.
          • It is not surprising in that case that the C++ standard library brings in much more code than the C standard library, but it should be made clear that this is not relevant to desktop developers, pretty much all of whom dynamically link with glibc.

            On MinGW, the port of GCC to Windows, my programs dynamically link with msvcrt, not glibc. Also on MinGW, libstdc++ is static, just as in the embedded toolchain. Are you implying that one of the C++ toolchains for Windows uses a dynamic libstdc++? Which toolchain, for which operating system widely deployed on home desktop computers, are you talking about?

    • by hxnwix ( 652290 )
      Except on Windows XP, where pipe performance degraded by an order of magnitude compared to Windows 2000.
    • by Anonymous Coward on Sunday July 29, 2007 @12:11PM (#20032473)
      You're not a real programmer if you won't over-optimize irrelevant parts of your code.
    • by canuck57 ( 662392 ) on Sunday July 29, 2007 @12:11PM (#20032479)

      I would not consider the speed of command-line option processing to be a bottleneck in any application; the overhead of starting the program is far greater.

      You're just experiencing this with Java, Perl, or some other high-overhead, bloated program. When people pull out a heavyweight needing a 90 MB VM, or a 5-10 MB base library calling in a cat's breakfast of shared libraries, I would agree. But let's take a look at C-based awk, for example: it is only an 80 KB draw. It runs fast, is nice and general-purpose, and does a good job of what it was designed to do. It can be pipelined in and out and used directly on the command line, as it has proper support for stdin, stdout, and stderr. On my system it takes only 10 disk blocks to load.

      While fewer people are proficient at it, C/C++ will outlast us all as a language. Virtually every commodity computer today uses it at its core. Many others have come and gone, yet all our OSes and scripting tools rely on it. So any doomsday predictions would be premature, and if you want fast, efficient, lean code, you do C/C++....

      • Re: (Score:3, Insightful)

        by ultranova ( 717540 )

        While fewer people are proficient at it, C/C++ will outlast us all as a language. Virtually every commodity computer today uses it at its core.

        Which is why they are so crash-prone. With C/C++, any mistake whatsoever will likely crash the program or machine, and possibly also allow crackers to make the program execute arbitrary code.

        Many others have come and gone, yet all our OSes and scripting tools rely on it. So any doomsday predictions would be premature, and if you want fast, efficient, lean code…

      • I would not consider the speed of command-line option processing to be a bottleneck in any application; the overhead of starting the program is far greater.

        You're just experiencing this with Java, Perl, or some other high-overhead, bloated program.

        No, even with the most naive command-line argument parsing code, it is highly unlikely to take a significant amount of time compared to the effort required for the kernel to fork off a new process, exec the binary, and for the dynamic loader to set it up for execution…

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Indeed. The applications of perfect hashing (and minimal perfect hashing) are quite limited. Basically it only makes sense if you need to quickly identify strings from a fixed, finite set of strings known at compile time. And, as with all optimizations, only if that part of your program is a bottleneck, or you are prepared to optimize all other aspects of your program as well.

      The traditional example application for perfect hashing was identifying keyword tokens when building a compiler, but for complex modern languages…
    • by ai3 ( 916858 ) on Sunday July 29, 2007 @01:11PM (#20032857)
      You must not have seen the recent proposal for GNU tools options, which will require four dashes instead of two and a minimum of four words per option. Under a UN/EU-funded program to ease the transition to intelligent machines, developers are rewarded for implementing full-sentence options and/or prose. But initial experiments showed that many users were unwilling to wait for the parsing of the command "remove-files --recursively-from-root-directory --do-not-ask-for-confirmation-just-delete --i-really-want-this!" just to be 1337, which led to whatever development efforts are mentioned in the article, which I didn't read.
    • Use gcc much?
    • I would not consider speed of command line option processing to be bottleneck in any application, the overhead of starting of the program is far greater.

      Have you tested this using getopt() and getopt_long(), or do you mean parsing them manually?
  • Too much (Score:4, Insightful)

    by bytesex ( 112972 ) on Sunday July 29, 2007 @11:41AM (#20032269) Homepage
    I'm not sure that for the usually simple task of command line processing, I'd like to learn a whole new lex/yacc syntax thingy.
    • Re:Too much (Score:5, Insightful)

      by hackstraw ( 262471 ) on Sunday July 29, 2007 @07:26PM (#20036061)
      I'm not sure that for the usually simple task of command line processing, I'd like to learn a whole new lex/yacc syntax thingy.

      The syntax for gperf is not that bad, but it's simply the wrong tool for the job as far as command-line processing goes.

      gperf simply makes a "perfect" hash function for searching a predetermined, static lookup table. It provides no mechanism for arbitrary arguments like input filenames or modifiers (like a filter for including/excluding things, or increasing/decreasing something), nor does it check for conflicting or missing options.

      gperf would give you nothing besides a match of input to a state. It would provide nothing for a common command line like: --include="*.txt" --exclude="*.backup" --with-match="some text|or this text" --limit-input=5megabytes

      getopt, or just rolling your own if/else-if ladder or switch statement, would provide much more flexibility than gperf.
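
      For comparison, a bare-bones getopt_long() sketch handling argument-carrying options like the ones above (the option names are taken from the example; handlers are reduced to printf for brevity):

        #include <getopt.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[]) {
            static const struct option longopts[] = {
                { "include", required_argument, NULL, 'i' },
                { "exclude", required_argument, NULL, 'e' },
                { NULL, 0, NULL, 0 }
            };
            int c;
            while ((c = getopt_long(argc, argv, "i:e:", longopts, NULL)) != -1) {
                switch (c) {
                case 'i': printf("include pattern: %s\n", optarg); break;
                case 'e': printf("exclude pattern: %s\n", optarg); break;
                default:  exit(EXIT_FAILURE);  /* unknown or malformed option */
                }
            }
            return 0;
        }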

      Now, gperf might help with parsing a configuration file, but for processing command-line arguments it is simply the wrong tool for the job.

      This is like the second or third Slashdot posting from IBM's developerWorks that is simply well-formatted nonsense. Past examples are http://developers.slashdot.org/article.pl?sid=07/04/09/1539255 [slashdot.org] and http://developers.slashdot.org/article.pl?sid=07/04/09/1539255 [slashdot.org]

      This is silly on both Slashdot's and IBM's part.

  • by V. Mole ( 9567 ) on Sunday July 29, 2007 @11:50AM (#20032331) Homepage
    Does the phrase "reinvent the wheel" strike a chord with anyone?
    • Yeah, because getopt(3) is a real bottleneck
      getopt() is declared in the header <unistd.h>, which is in POSIX, not ANSI. POSIX facilities are not guaranteed to be present on W*nd?ws systems. It also handles only short options, not long options. For those, you have to use getopt_long() from <getopt.h>, which isn't even in POSIX.
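
      For anyone who hasn't used it, the POSIX getopt() loop looks roughly like this (short options only, as noted; the option letters are invented for illustration):

        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char *argv[]) {
            int c;
            /* "vo:" declares -v (no argument) and -o (takes an argument) */
            while ((c = getopt(argc, argv, "vo:")) != -1) {
                switch (c) {
                case 'v': puts("verbose"); break;
                case 'o': printf("output file: %s\n", optarg); break;
                default:  return 1;  /* getopt already printed a diagnostic */
                }
            }
            return 0;
        }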

      Does the phrase "reinvent the wheel" strike a chord with anyone?
      If the wheel isn't licensed appropriately, copyright law requires you to reinvent it. Specifically, using software under the GNU Lesser General Public License [gnu.org] in a proprietary program intended to run on a platform whose executables are ordinarily statically linked, such as a handheld or otherwise embedded system, is cumbersome.
      • Re: (Score:3, Interesting)

        by tqbf ( 59350 )

        Are you seriously trying to argue that gperf is more portable than getopt?

        • by tepples ( 727027 )

          Are you seriously trying to argue that gperf is more portable than getopt?
          I'm not arguing specifically in favor of gperf, but arguing generally that reinventing the standard library has its justifications at times.
      • Well, if you're going to reinvent the wheel, you might as well do it compatibly. You can get a BSD-style-licensed implementation of getopt and getopt_long [geocities.com] that is portable to Windows. From the README:

        WHY RE-INVENT THE WHEEL?

        I re-implemented getopt, getopt_long, and getopt_long_only because there were noticeable bugs in several versions of the GNU implementations, and because the GNU versions aren't always available on some systems (*BSD, for example). Other systems don't include any sort of standard argument parser…

      • When faced with this issue, I simply wrote a Windows version of getopt. Took about a day.

        Even when reinventing the wheel, it is important to reinvent as little as possible. If you need functionality that isn't there, at least keep the same interface.
        • Re: (Score:2, Informative)

          by __aawavt7683 ( 72055 )
          When faced with the issue of implementing getopt on Windows, I merely took the code from FreeBSD: src/lib/libc/stdlib/getopt.c

          I love FreeBSD. (I once changed the motherboard, rebooted, went, "Oh... shit," and proceeded to log in. All drivers are compiled as modules, in less time than my lean Linux kernel. :-/)

          I sidestepped the license issue, stripped out extraneous header files, changed a couple of references to _getprogname() (either to the static string "" or to a global var, as it is in libc), read the man page to…
  • by Anonymous Coward on Sunday July 29, 2007 @12:02PM (#20032399)
    Good grief. What a strawman of an example.
    Anyone writing or maintaining command-line programs knows that they should be using getopt() or getopt_long(). There are standards for how command-line options and arguments are to be processed; they should be followed for portability and ease of maintenance.
    • I agree... (Score:3, Insightful)

      by SuperKendall ( 25149 )
      There's a time and place for gperf; command-line argument processing is not it!

      Actually, I've never really come across a case where I knew ahead of time the whole universe of strings I would be accepting, and so I never ended up using it. gperf is a great idea, but this seems to be a case of someone looking really hard for somewhere to shoehorn in gperf just for the sake of using it.
    • From what I can see in the article, it's not meant to replace getopt/getopt_long.

      I am currently writing an application (for my employer) where this may be useful. Although it uses command-line parameters (via getopt_long), it also receives commands in ASCII over a network connection; that is what I believe this article targets.

      Because the commands I receive can have almost any series of parameters in any sequence, however, I prefer to do what another poster here already stated: you look for keywords…

    • Anyone writing or maintaining command-line programs knows that they should be using getopt() or getopt_long().


      There is no getopt or getopt_long in the C or C++ standard.
      • There is no getopt or getopt_long in the C or C++ standard.

        getopt is in POSIX.

        getopt_long is a GNU extension, though.

    • Re: (Score:2, Informative)

      by JNighthawk ( 769575 )
      Yes, because we should be using functions that are NOT IN THE STANDARD to maintain portability.

      Oh, and as far as I know, those functions aren't in VC++, which is where a hefty chunk of C/C++ development is done.
  • Correction... (Score:2, Insightful)

    by Pedrito ( 94783 )
    Just about any relatively complicated software has dozens of available command-line options.

    That should probably be rephrased to "Just about any relatively complicated software that inflicts command lines on its users..."

    This is clearly a very Unix-oriented post, as there are relatively few command-line Windows apps and few Windows GUI apps that accept command-line arguments. But this is also a topic that's about as old as programming itself, which clearly takes the "new" out of "news".
    • by AuMatar ( 183847 )
      Umm, most Windows apps accept command-line input; it's just not the default way of using them. But type one in at the command line and you'd be surprised. A few that come to mind: VC++'s compiler and Internet Explorer.
  • by geophile ( 16995 ) <(jao) (at) (geophile.com)> on Sunday July 29, 2007 @01:05PM (#20032819) Homepage
    Perfect hash functions are curiosities. If you have a static set of keys, then with enough work you can generate a perfect (i.e. collision-free) hash function. This has been known for many years. The applicability is highly limited, because you don't usually have a static set of keys, and because the cost of generating the perfect hash is usually not worth it.
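
    To make that concrete, here is a toy perfect hash over a fixed key set, hashing on length alone (real generators like gperf pick character positions too; this example is hand-made for illustration):

      #include <string.h>

      /* The lengths of "if", "else", "while", "return" (2, 4, 5, 6) happen
         to be distinct, so strlen() is already a perfect hash for this set. */
      static const char *keys[7] = {
          NULL, NULL, "if", NULL, "else", "while", "return"
      };

      int in_word_set(const char *s) {
          size_t len = strlen(s);
          return len <= 6 && keys[len] != NULL && strcmp(s, keys[len]) == 0;
      }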

    Gperf might be reasonable as a perfect hash generator for those incredibly rare situations when the extra work due to a hash collision is really the one thing standing between you and acceptable performance of your application.

    I thought maybe we were seeing a bad writeup, but no, it's the authors themselves who talk about the need for high-performance command-line processing, and who give the performance of processing N arguments as O(N)*[N*O(1)]. I cannot conceive of a situation in which command-line processing is a bottleneck. And their use of O() notation is wrong: they are effectively claiming O(N**2), which they really don't want to do, not least because it's incorrect. O() notation shows how performance grows with input size. Unless they are worrying about thousands or millions of command-line arguments, O() notation in this context is just ludicrous.

    I don't know why I'm going on at such length -- the extreme dumbness of this article just set me off.
    • Gperf might be reasonable as a perfect hash generator for those incredibly rare situations when the extra work due to a hash collision is really the one thing standing between you and acceptable performance of your application.

      The primary REAL use of gperf is generating keyword recognizers for language parsers. It's another tool in the same vein as lex and yacc.
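
      For that job the input file really is tiny. From memory (check the gperf manual for the exact invocation and defaults), a keyword recognizer starts from something like:

        %%
        if
        else
        while
        return
        %%

      and running gperf over it emits a hash function plus an in_word_set(str, len) lookup that answers membership with a couple of character probes and a single strcmp(). The in_word_set name is the documented default as I recall it, not verified against a current release.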

    • O() notation shows how performance grows with input size.

      Really?
      I'd really like to see an algorithm whose performance grew with input size...
  • Historically? (Score:4, Insightful)

    by ClosedSource ( 238333 ) on Sunday July 29, 2007 @01:07PM (#20032839)
    "Command-line processing is historically one of the most ignored areas in software development."

    This is like saying that walking is historically one of the most ignored areas in human transportation.
  • is this a joke? (Score:3, Insightful)

    by oohshiny ( 998054 ) on Sunday July 29, 2007 @01:14PM (#20032879)
    If it's not, the author of that article should be kept as far away from writing software as possible; he epitomizes the attitude that so frequently gets C++ programmers into trouble.
    • Re: (Score:3, Insightful)

      by turgid ( 580780 )

      Well, what do you expect from IBM? It's just another one of their look-Ethel-it's-open-source-and-look-at-us-helping-the-community content-free PR fluff pieces. Ignore them and they'll crawl back into their mainframe cave.

  • by pclminion ( 145572 ) on Sunday July 29, 2007 @01:21PM (#20032927)
    Where's the Foot icon? Optimizing command line parsing? Oh God, my sides are splitting.
    • Re: (Score:3, Insightful)

      by moosesocks ( 264553 )
      The weird bit is that, despite being a somewhat silly article, it launched one of the most intelligent discussions I've seen on /. in a while.
  • First of all, how many programs have command line parsing as a bottleneck?

    Secondly, they should put this functionality into GCC instead, so that it creates a perfect hash for any large switch statement.

  • Something Eric Allman wrote many moons ago. I found it and modified it to support "native" command line syntax on MS-DOS, VMS, and AmigaDOS, and added some support for improved self-documentation... and then Brad Appleton saw it and rapidly enhanced it to support a plethora of shells and interfaces until it took up 10 posts in comp.sources.misc.

    The following two directories should bring it up to the latest version I know of.

    This is not efficient, mind you. Command line parsing doesn't generally need to be efficient, even by my miserly standards, honed when a PDP-11 was something you hoped to upgrade to... some day...

    ftp://ftp.uu.net/usenet/comp.sources.misc/volume29/parseargs/ [uu.net]
    ftp://ftp.uu.net/usenet/comp.sources.misc/volume30/parseargs/ [uu.net]

    PARSEARGS
     
                            extracted from Eric Allman's
     
                                NIFTY UTILITY LIBRARY
     
                              Created by Eric P. Allman
                                <eric@Berkeley.EDU>
     
                            Modified by Peter da Silva
                                <peter@Ferranti.COM>
     
                      Modified and Rewritten by Brad Appleton
                              <brad@SSD.CSD.Harris.COM>
    Brad's latest work in this area seems to be here:

    http://www.cmcrossroads.com/bradapp/ftp/src/libs/C++/CmdLine.html [cmcrossroads.com]

    http://www.cmcrossroads.com/bradapp/ftp/src/libs/C++/Options.html [cmcrossroads.com]

    • What about Boost.Program_Options [boost.org]? I thought I'd see a post on it here somewhere, but not one person has mentioned it (yet).

      A few months ago, I was looking around for a C++ library for parsing command-line options. I checked out getopt and thought that there must be something that uses std::string instead of char*. After some googling, I found Boost.Program_Options, which seemed to be exactly what I was looking for. It supports long and short options (-s, --short) and I was able to start using it quite easily…
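
      For the curious, basic usage is close to this (adapted from memory of the Boost tutorial; details may differ by version):

        #include <boost/program_options.hpp>
        #include <iostream>
        namespace po = boost::program_options;

        int main(int argc, char *argv[]) {
            po::options_description desc("Allowed options");
            desc.add_options()
                ("help,h", "produce help message")
                ("compression,c", po::value<int>(), "set compression level");

            po::variables_map vm;
            po::store(po::parse_command_line(argc, argv, desc), vm);
            po::notify(vm);

            if (vm.count("help")) {
                std::cout << desc << "\n";  // prints the formatted option summary
                return 0;
            }
            if (vm.count("compression"))
                std::cout << "compression level: "
                          << vm["compression"].as<int>() << "\n";
            return 0;
        }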

      • by abdulla ( 523920 )
        Boost.Program_Options has some odd problems with the GCC visibility flags that cause it to return invalid values. However, after wrapping the header with visibility pragmas it works, and it works well for my needs. I needed a library that would allow me to specify a library on the command line, load that library, possibly add more command-line options, and then continue processing all other arguments. PO is rather bloated compared to the other options available, but at least it isn't leaking memory like…
  • This is kinda silly. If you only have a few keywords you don't need anything sophisticated. If you have more than a few, but not more than a few dozen, it's usually easiest just to arrange them in a linear array and do an index lookup based on the first character to find the starting point for your scan (see the sketch below). More than that and you will want to hash them or arrange them in some sort of topology such as a red-black tree.

    Generally speaking, hashes are very CPU- and cache-inefficient beasts, especially if one can rea…
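
    A sketch of that first-character dispatch (toy keyword list; a real start table would be generated, not hand-filled):

      #include <string.h>

      static const char *kw[5] = { "and", "array", "begin", "break", "call" };

      /* first[c - 'a'] is the index in kw[] where keywords starting with c
         begin; first[c - 'a' + 1] is one past the end of that group. */
      static const unsigned char first[27] = {
          0, 2, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
          5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
      };

      int is_keyword(const char *s) {
          int i;
          if (s[0] < 'a' || s[0] > 'z')
              return 0;
          for (i = first[s[0] - 'a']; i < first[s[0] - 'a' + 1]; ++i)
              if (strcmp(kw[i], s) == 0)
                  return 1;
          return 0;
      }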
  • Which means that using it at the command line is "linking" it. Doing so, of course, means your upstream code must be GPL as well, ad infinitum. Sorry, but the bulk of C/C++ code out there is not GPL-licensed and therefore can take no advantage of tools such as this.
  • by stupendou ( 466135 ) on Sunday July 29, 2007 @04:35PM (#20034467)
    Try supergetopt instead. Much easier to use and also open source.
    http://www.ibiblio.org/pub/Linux/devel/sugerget-1.1.tgz [ibiblio.org]

    With this code, you simply specify command-line strings and variables in a printf()-style format. E.g.,

      supergetopt( argc, argv,
                   "string1", "%d %d", function1,
                   "string2", "%s", function2 )

    will call function1( int a, int b ) when string1 is on the command line, and will call function2( char *s ) when string2 is used on the command line.

    A whole lot easier than gperf, IMHO.
