Bjarne Stroustrup on the Problems With Programming 605

Hobart writes "MIT's Technology Review has a Q&A with C++ inventor Bjarne Stroustrup. Highlights include Bjarne's answers on the trade-offs involved in the design of C++, how they apply today, and his thoughts on how to solve the problems. From the interview: 'Software developers have become adept at the difficult art of building reasonably reliable systems out of unreliable parts. The snag is that often we do not know exactly how we did it.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • In my experience... (Score:3, Informative)

    by Wizard052 ( 1003511 ) on Tuesday December 05, 2006 @12:47AM (#17109726)
    ...at a university I know, they start teaching you programming in...Visual Basic. I can imagine the effect that has on the poor confused students, many of whom never go on to grasp other languages (including VB).

    Maybe if they started with something like Pascal instead...but that's just not 'modern' or cutting edge nowadays...

    I think this is the case at many institutions, leading to low-quality coders.
    • by bogaboga ( 793279 ) on Tuesday December 05, 2006 @01:00AM (#17109812)
      In my case, we were presented with a problem and asked to "produce" a possible solution in a month. Of the tools we had, VB was the most obvious. Nobody dictated what we should use in our solutions.

      With a little research, nothing could beat MS-Access with its VB. We quickly had working GUIs integrated with business logic. Things were beautiful. PHP was available, but its abilities at the time were very limited.

      Sadly, there is still no real answer to MS-Access' programming paradigm in the Linux world. Gambas http://gambas.sourceforge.net/ [sourceforge.net] comes close. So does RealBasic http://www.realbasic.com/ [realbasic.com]. Other wannabe environments are simply wasting time at present, and do not appear to be serious.

      I am given to understand that Kross http://conference2006.kde.org/conference/talks/2.php [kde.org] is progressing well, but I was not impressed when I tried it.

      Having powerful programming environments that are friendly to newbies is OK, but making them actively hostile to power users is insane. The two aren't mutually exclusive, but Linux programmers tend to think they are - sadly.

      • Re: (Score:3, Insightful)

        by Wizard052 ( 1003511 )
        Actually, I like VB. I believe it's good for what it's designed for - RAD - if properly used (as applies to any tool). But too often it's seen as some kind of panacea for IT education, at least in this part of the world. It's probably the power of the Microsoft brand (if not the product(s)) at work here...
      • by Garridan ( 597129 ) on Tuesday December 05, 2006 @01:47AM (#17110054)
        VB is a rapid prototyping environment. And just like an RP machine, it makes a flimsy product that you can send back to the drawing board without much expense. But you don't ship a product you've made on an RP machine -- it's crap. You take your prototype and make a real product out of it using sturdy materials. The same goes for VB. You make something that works the way you expect, then you make it work in a real language. The good thing about VB is that you can replace pieces one at a time with DLLs compiled from C++. If that isn't a part of the VB curriculum, it's a waste of time.
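
        For illustration, a minimal sketch of that piecewise replacement, assuming a hypothetical mathlib.dll exporting a hypothetical AddValues function (neither name is from the thread):

          // mathlib.cpp -- build as mathlib.dll with a C++ compiler.
          // VB6 calls DLL functions via __stdcall; extern "C" suppresses C++
          // name mangling (MSVC still decorates __stdcall names, so a .def
          // file exporting AddValues is typically needed as well).
          extern "C" double __stdcall AddValues(double a, double b)
          {
              return a + b;   // stand-in for whatever the VB prototype computed
          }

          // On the VB6 side, the prototype's routine is then replaced with:
          //   Declare Function AddValues Lib "mathlib.dll" _
          //       (ByVal a As Double, ByVal b As Double) As Double
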
        • by weicco ( 645927 ) on Tuesday December 05, 2006 @02:25AM (#17110234)
          I've worked on what was/is probably the largest VB project in the world. It started in '93 and I don't think they are going to end it soon. I personally hate VB, its not-so-strongly-typed variables, funny rounding rules and so on, but I wouldn't say that all you can do with it is crap!

          The software we were making just works. It has worked for 13 years and keeps working. Maybe it could be a little faster if written with some other language and tools, or it might have a fancier UI, blah blah blah, but it doesn't need those. And rewriting those hundreds of thousands of lines of code... Let's just say that I wouldn't like to be on that team.
          • by mosel-saar-ruwer ( 732341 ) on Tuesday December 05, 2006 @10:59AM (#17113886)

            its not-so-strongly-typed variables, funny rounding rules and so on

            I know they're like [pagan] Gods to an awful lotta people in the CS community, but The Founders of The Art, guys like Kernighan, and Ritchie, who had the chance to insist that a declaration actually mean something, but hesitated, and hemmed and hawed, and got all wishy-washy, and finally decided [really deferred a decision until it was too late to make a decision] that a declaration could mean any-damned*-thing that the implementor wanted to interpret it to mean, well those guys, those pagan Gods of the Founding Arts, seriously - someone should take them out behind the toolshed and whip their asses** [if not shoot them outright].

            So now, fast forward 30 or 40 years, and we've got:

            "floats" in the ATI GPU world that are only 24 bits in length
            "floats" in the nVidia/Microsoft/IBM/Sony/Cell world that are 32 bits in length
            "floats" in the classical Unix world [e.g. SunOS/Solaris] that are 64 bits in length
            etc etc etc
            And then you go to do something in VB, or in Javascript, and you get shit** like

            2 + 2 = 4.0000000000012459
            or, what's even worse,

            2 + 2 = 22
            and you end up having to write shit**** like

            var i = parseInt(2);
            var j = parseInt(2);
            var k = parseInt(parseInt(i) + parseInt(j));
            window.alert(i + " + " + j + " = " + k);
            and you scream at your computer, "YES, THESE ARE NUMBERS, NOT CHARACTER STRINGS, YOU GOD-DAMNED***** COMPILER/INTERPRETER/SYNTAX/PARADIGM/NIGHTMARE OF A SACK OF SHIT******!!!!!"

            PS: There is a special circle in Hell******* for the sonuva bitch******** who dreamed up the idea of interpreting variable types on the fly...

            *Pardon my French.
            **Pardon my French a second time.
            ***Pardon my French a third time.
            ****Pardon my French a fourth time.
            *****Pardon my French a fifth time.
            ******Pardon my French a sixth time.
            *******Pardon my French a seventh time.
            ********Pardon my French a final time.
            • Re: (Score:3, Informative)

              by 2short ( 466733 )
              K&R did not commit Javascript, and they aren't responsible for the problems in your examples.

              K&R invented C. In it, and in the myriad languages that follow its lead, variables will sometimes get converted to closely related types for the purposes of expressions that need them. I for one want to be able to write

              float x = 2.0 + 2;

              without the compiler throwing up its hands in a panic, not knowing what to do.

              But numbers will not be converted to strings, and of course conversions will have no lasting effect.
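
              To make the distinction concrete, a minimal C++ sketch (hypothetical code, not from the thread): mixed arithmetic compiles via the usual implicit conversions, while a number never silently becomes a string.

                #include <iostream>
                #include <string>

                int main()
                {
                    float x = 2.0 + 2;  // int 2 converts to double; the sum narrows to float: 4.0f
                    double y = 1 / 2;   // both operands are ints, so integer division: y == 0.0

                    // std::string s = 2 + 2;               // error: int never becomes a string
                    std::string s = std::to_string(2 + 2);  // explicit conversion (C++11): "4"

                    std::cout << x << " " << y << " " << s << "\n";  // prints: 4 0 4
                    return 0;
                }
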
        • by Flodis ( 998453 ) on Tuesday December 05, 2006 @09:29AM (#17112742)
          VB is a rapid prototyping environment. And just like an RP machine, it makes a flimsy product that you can send back to the drawing board without much expense. But you don't ship a product you've made on an RP machine -- it's crap. You take your prototype and make a real product out of it using sturdy materials. The same goes for VB. You make something that works the way you expect, then you make it work in a real language. The good thing about VB is that you can replace pieces one at a time with DLLs compiled from C++. If that isn't a part of the VB curriculum, it's a waste of time.
          Sigh... To me, this sounds like a typical rant from someone who doesn't have any actual experience.

          Anyway... I think the problem may be that VB is too easy to use. People who would not be able to write the makefile for their 'Hello World' program in C++ are able to write working but very rickety/flimsy VB programs.

          I happen to make a living as a computer consultant. This means I get to see a lot of different organizations and their in-house software... which means a LOT of VB code... And of that VB code, a lot (maybe 90%) is written by people who may know their business but don't have a clue about programming. I can definitely see how that would create the reputation that VB programmers are bad, but not how it makes the LANGUAGE bad.

          As for stability, I can promise you that some of my VB programs are a hell of a lot more stable than the memory-leaking, SEGV/GPF-ing C++ hacks they replaced. In case you didn't know - it's perfectly possible to write shitty C++ code too. It's just that you have to get above a certain level to even get the compiler to work, so most of the would-be self-made computer wizards turn to something easier instead... like VB.

          The big question here is: Is it better to have a flimsy but functioning VB program or a defunct makefile? I'm not sure of the answer myself. A defunct makefile is a 5-minute job to fix, whereas some of the VB messes I've seen would literally take years to get straightened out. (I hate people who think they can program just because their $h!+ compiles.)
      • by Anne Honime ( 828246 ) on Tuesday December 05, 2006 @03:07AM (#17110504)

        If I had mod points, I'd gladly give them all to you; I'm not a programmer by education, but I've programmed tools for as long as I've had a computer. Basic could be abused in the past (in fact abuse was more or less a requirement with MS-BASIC on 8-bit computers - 48 KB of RAM!), but since OOP became widespread, you just can't beat the language for day-to-day scripting, SQL access, etc. Even in the mid-80s, if you were lucky enough to have a better PC than the average plastic toy, you could go with Basic-E or CBasic, which were in many respects precursors to Java.

        The sad truth is that today's Basics (VB, Gambas...) have an unfounded bad reputation; you can't really abuse them anymore, and with a bit of care they make a very good entry point into the programming realm for everybody. And if Linux is to become relevant on the desktop, it needs power users to be able to move the enormous base of custom VB applications, built for every business out there, over to Linux. The VB6 converter in Gambas might soon become the killer app of Linux in that respect, combined with superior DB access and tight KDE integration (yes, you can use DCOP in Gambas).

        To me, Gambas, being free software, fills the same spot MBasic filled on the Amstrad CPC or Commodore 64. It gives control to the user, and that is priceless. Since my 8-bit days, I've learned bits of x86 ASM, Clipper, C, C++ and Perl, and liked the extra power they gave me; but I've indulged in Gambas for a couple of months, and realistically, it's the only way to create a cool-looking, desktop-integrated application in your spare time in a pinch. If I were again the teen I was, I'd want to begin programming with it, because it would be the quickest rewarding experience in programming. You get to love programming cool things you can show to the world before you actually begin to like programming correctly for its own sake.

    • by atomicstrawberry ( 955148 ) on Tuesday December 05, 2006 @01:07AM (#17109858)
      My university started everyone out on C. Having seen some of the horrible code that some students produced even in the final year, I'd say that the problem lies deeper than the language they started out on.

      Though I'd hate to have started with Visual Basic all the same.
      • by arivanov ( 12034 ) on Tuesday December 05, 2006 @02:21AM (#17110208) Homepage

        Exactly.

        C, C++, Java and God forbid VB should be prohibited by law for university courses, and any person teaching them during the first 2 semesters of CS should be prosecuted for child abuse. Pascal (even without the object-oriented extensions) remains the best language for teaching the first years of CS. Once students are past their data structures course and know how to deal with linked lists, pointers, objects, hashes and the like, you can switch to C, C++ or Java with minimal fuss. Before that it's outright criminal. In fact, the total number of hours spent until the point when students can produce something that will pay their daily bread will most likely end up being less than that required when teaching directly in C/C++.

        There was a very good article on the subject by Joel called The Perils of Java Schools [joelonsoftware.com] and I tend to agree with it 100%. In fact, I will extend its reasoning further to C and C++. Probably the most important part of teaching a data structures course is to teach it in a language that has a clear syntax and "one way to get it right" for pointers, linked lists and the like. C and C++ are neither sufficiently clear nor unambiguous. Java simply does not allow you to do half of the things you need in that course.

        Many people advocate the use of Java and especially VB from the perspective of "look how fast I can learn to program in these". That is irrelevant as far as university courses are concerned. What is relevant is whether the student will learn to produce literate, commercially viable code or not. If he has been subjected to VB - never; Java or C++ - not bloody likely; C - it may work, but it will be anything but readable for the first 10 years of his career.

        • by sgt101 ( 120604 ) on Tuesday December 05, 2006 @05:39AM (#17111280)
          The languages students need to study are:

          Prolog
          Miranda/SML/Haskell
          Java/C++/C#/Smalltalk/any other imperative with OO

          Because these show the different choices in representation that programmers essentially have: declarative, functional, imperative (scripts). OO is a useful concept to describe to students because it gets them used to the ideas of abstraction and forces good programming practice like information hiding.

          Later on it would be good if universities taught web development (PHP, for example) and database development (SQL, possibly Microsoft tools).

          Interestingly, universities do not teach - and I think rightly - the most common activity that CS grads end up doing in the real world, which is installation, integration, customisation and configuration of COTS products like CRM systems.
      • Re: (Score:3, Interesting)

        There's a good reason why MIT's introductory computer science courses are taught in the Lisp dialect Scheme: so that they can focus on teaching algorithms, modular design and other high-level concepts rather than doing the grease-monkey work of dealing with manual memory allocation and an old CPU design while the world quickly changes to a more parallelized approach.
    • Re: (Score:3, Interesting)

      by omeg ( 907329 )
      You know what, this may sound strange, but I would recommend ActionScript for beginning programmers. Why, you ask? Well, aside from the fact that ActionScript is very simple, it's fully integrated with the capabilities of the SWF file format. That means you can make some kind of visual representation of what you're doing extremely quickly, and you won't have to worry about advanced rendering code, since it's all already there. You can have people algorithmically draw lines and shapes and graphs in virtually no time
    • by americangame ( 1025646 ) on Tuesday December 05, 2006 @01:38AM (#17110004)
      Well, in my experience they had me learning C on Unix, and then Bjarne Stroustrup (along with another professor) taught us C++. I must say that learning a programming language from its creator isn't the best way to do so, as he will go into the extreme detail of how a pointer in C++ works with no regard for the fact that it might be too much information for the first week of class. But it is a great way to scare the ever-living piss out of college freshmen who are considering becoming computer engineers.
    • by RAMMS+EIN ( 578166 ) on Tuesday December 05, 2006 @02:41AM (#17110340) Homepage Journal
      I don't think the main issue is which language you start with. What's important is that you teach people multiple paradigms and multiple languages, so that they are aware of them and their strengths and weaknesses.
    • by $pearhead ( 1021201 ) on Tuesday December 05, 2006 @03:05AM (#17110492)
      I disagree. Programming isn't something you learn from someone else. Programming is something you learn by yourself. Of course, you can get excellent help/lectures/tips/advice/insights/whatnot at a university, for example, but my point is that in the end you have to sit down, think, and then write some code (and figure out why it doesn't work) by yourself. I would say it doesn't matter whether you start with Visual Basic or Pascal; if you haven't got the ambition/drive/whatever to really sit down by yourself and figure things out, you will never be a (good) programmer.
    • Re: (Score:3, Insightful)

      by Alioth ( 221270 )
      Grrr.

      If that's a computer science course, or any other degree that purports to teach fundamentals, that's so wrong it's not even wrong.

      You have to learn the fundamentals, not use ready-made components. Indeed, I'd advocate at least some assembly language programming, because this forces you to think about HOW the machine actually does things. It needn't be x86 or anything particularly fancy - but something that will at least teach the student on an absolutely fundamental level what happens when you get a buffer overflow
  • by jarich ( 733129 ) on Tuesday December 05, 2006 @12:47AM (#17109728) Homepage Journal
    "Software developers have become adept at the difficult art of building reasonably reliable systems out of unreliable parts. The snag is that often we do not know exactly how we did it."

    So he doesn't remember how he created C++ huh? That explains a ~lot~!

    ;)

  • by pchan- ( 118053 ) on Tuesday December 05, 2006 @12:48AM (#17109734) Journal
    I wouldn't take programming advice from a guy who overloads the bit-shift operator to perform I/O.
    • by _merlin ( 160982 ) on Tuesday December 05, 2006 @01:02AM (#17109836) Homepage Journal
      That's the outflow of an inherent problem with allowing operators to be overloaded. People will inevitably make them do different things on different types, making it impossible to know what an operator does without knowing something about the types of the arguments.

      Of course, there are arguments for the other side, too. One is that people will create similarly named methods on different objects that do completely different things, and ambiguous operators are no worse than ambiguous method calls. Another is that in cases where the normal operation of an operator is meaningless, it should be acceptable to overload it with different functionality.

      Overloading the bit shift operator on I/O streams is a case of the second way of thinking: a bit shift makes no sense on an object, so why not use it for something else?
      • by QuantumG ( 50515 ) * <qg@biodome.org> on Tuesday December 05, 2006 @01:36AM (#17109990) Homepage Journal
        Some say C++ didn't go far enough, in that you can't define arbitrary operators. As such, you have a small, limited number to choose from, and therefore overloading is all you can do. I'd love to be able to define an operator like .= to do string concatenation, but I can't, so I use += and live with the confusion and possible errors that causes.
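
        A minimal sketch of that workaround, assuming a hypothetical Text wrapper type (not from the thread):

          #include <string>

          // C++ won't let you invent a '.=' token, so += on a user-defined
          // type is the closest thing to a dedicated concatenation operator.
          struct Text {
              std::string value;

              Text& operator+=(const std::string& rhs) {
                  value += rhs;   // append, mirroring the .= semantics of Perl/PHP
                  return *this;   // return *this so calls can chain
              }
          };

          int main() {
              Text t{"foo"};
              t += "bar";         // t.value is now "foobar"
              return 0;
          }
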
        • You want Lisp. (Score:5, Insightful)

          by piranha(jpl) ( 229201 ) on Tuesday December 05, 2006 @05:24AM (#17111184) Homepage

          You want Lisp. Hear me out.

          Of course, the character syntax is superficially different. Operators use prefix notation ("(+ 1 2)" is analogous to "1 + 2") and have the same character syntax as function calls ("+", an operator in Lisp jargon, may be implemented as a function).

          If you can sleep at night after that, you can define your own higher-level language syntax that looks exactly like any other Lisp syntax. Lisp is extremely flexible in its naming of functions and variables (symbols). If you'd like, you could define an operator named .= as a function: (.= string new-character-strings ...) would modify the given string object, string, in place, appending each specified new-character-string to the end.

          Recognizing the downside of modifying random strings in place, perhaps you'd rather have your .= operator assign a newly instantiated string to the variable referenced by string. You could, by writing the operator as a macro. The macro would act like a function, taking as input each "raw" argument - symbols and lists, structured as they appear in your program, before evaluation - and returning as output replacement Lisp code to evaluate in its place. So your .= operator form of (.= out "lalala") is semantically equivalent to (setf out (concatenate 'string out "lalala")) (like out = out . "lalala"; in other languages).

          It's not just simple textual substitution. You can use any function or macro in your macro definition to transform your input arguments into whatever replacement code you'd like. I'm using macros in Common Lisp to generate recursive-descent parsers based on a grammar production expression: the following form defines a function named obs-text that takes a string as input and returns a list of matches found as output:

          (defproduction obs-text
          (LF :* CR :* (obs-char LF :* CR :*) :*))

          This function is defined in place and evaluated and compiled immediately by the Common Lisp implementation.

          Macros can be abused, but they add a tremendously powerful capability of abstraction not possible with many other languages.

      • Re: (Score:3, Insightful)

        by misleb ( 129952 )

        Overloading the bit shift operator on I/O streams is a case of the second way of thinking: a bit shift makes no sense on an object, so why not use it for something else?

        Especially when you can make it do something intuitive (if only visually). I mean, "<<" looks like "I/O" to me. It looks like they are sending the item on the right towards/into the item on the left. Makes sense to me.

        If only the rest of C++ were that intuitive. ;-)

        -matthew

      • Re: (Score:3, Informative)

        That's the outflow of an inherent problem with allowing operators to be overloaded. People will inevitably make them do different things on different types, making it impossible to know what an operator does without knowing something about the types of the arguments.

        You mindlessly claim that allowing people to make operators do different things on different types is a bad thing. Do you actually know what's good about supporting operator overloading? It's the ability to make operators do different things

    • by 2short ( 466733 ) on Tuesday December 05, 2006 @01:07AM (#17109866)
      Why, because you've been confused by that? Because anyone, ever, has been confused by that? So you see:

      cout << "You are a bazooty head";

      and you think, obviously, that is supposed to shift the bits of the standard output stream left by "You are a bazooty head"?

      I wouldn't even call it an overloaded operator except in an overly technical sense. It's an operator that means two different things, and while that may in general be a bad idea, in this case the possible contexts for those meanings are so different, it's not anything close to a problem.

      Now I'm sure people will deluge me with examples of cryptic, intentionally obtuse code that dumps the results of shift expressions directly to streams, and thus abuses this construct to create confusion. That's not the point. In decently written code, it's not a problem.
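
      A minimal sketch of the two meanings side by side, assuming a hypothetical Point type (not from the thread):

        #include <iostream>

        struct Point {
            int x, y;
        };

        // In the context 'stream << value', << is unambiguously insertion...
        std::ostream& operator<<(std::ostream& os, const Point& p) {
            return os << "(" << p.x << ", " << p.y << ")";  // return os so calls chain
        }

        int main() {
            Point p{3, 4};
            std::cout << p << "\n";         // prints (3, 4)
            std::cout << (1 << 4) << "\n";  // ...while 'int << int' is still a bit shift: 16
            return 0;
        }
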
      • by epine ( 68316 ) on Tuesday December 05, 2006 @02:25AM (#17110228)
        I've been involved in more threads than I care to recall slinging mud at C++, and there is always a strong representation from the crowd who aren't willing to invest the time to understand the object of their criticism. The criticism fundamentally boils down to: why should a language force me to think?

        The fact of the matter is that the conceptual challenge of writing pointer-correct code is isomorphic to other forms of resource-correctness which one must still confront in whatever saintly language one employs. When I worked with microcontrollers (fairly hefty ones), in actual practice I never lost any sleep over pointer correctness. However, I did sweat bullets over real-time response in my nested interrupt handlers. Pointers were small potatoes compared to fundamental challenges posed by the design of the hardware we employed. A few small changes to the hardware design would have saved enormous challenges in the software layer. No language would have spared me that challenge.

        Certainly overloading can be abused. Has it ever caused me a problem? Never. Excess delegation in an object-oriented framework? Nightmares.

        Another post blames C++ for having an accretion-based design process. Oh, that stings. It was an explicit design approach to gain real-world understanding of one feature before designing the next. The two areas where the C++ design process got ahead of itself were multiple inheritance and templates. The former, Stroustrup has confessed, was perhaps a misguided priority. The latter was caused by the discovery, very late in the standardization process, that templates were an exceptionally fertile mechanism. C++ templates evaluate at compile time as a pure functional language. What makes templates difficult is that they are too much like other languages (e.g. Haskell) that the same people go around praising.

        If one fully understands the cascade of implications of the original decision to take a relatively hard line on backwards compatibility with C, there isn't much in C++ that strikes me as "could have been vastly better". OTOH, I've come to the opinion that for someone who lacks that deep historical perspective, the overhead involved in mastering all the syntactic quirks that stemmed from that root is excessive. I don't regard C++ as a language that justifies the learning curve unless the person is suited to the kind of challenge involved in writing a real-time correct interrupt handler on a random piece of hardware that wasn't necessarily designed to make this easy.

        Just the other day I commented out a section of PHP code in a website skin (a language I use irregularly) to roughly this effect:

        <!--
        <markup> ... </markup> <?php require ("somefile.php"); ?> <markup> ... </markup>
        -->

        somefile.php executed regardless and emitted an HTML comment which closed my open comment in the first line above, leaving my closing comment exposed in the rendered document. Sigh.

        At the end of the day, I find the sentiment that some kind of pure language design can save us from this misery extremely obnoxious. There is no salvation to be found among programmers who brag mostly about thinking less.
        • by 2short ( 466733 ) on Tuesday December 05, 2006 @03:01AM (#17110476)
          As a former co-worker once put it, "C++ is a professional's language"; while at first this sounds like snobbishly looking down one's nose at other languages, it's not. If you're going to be spending much of your productive work hours over some significant chunk of your career writing code, C++ may be the language you want to do it in. If not, it's probably not.
          • Re: (Score:3, Interesting)

            by radish ( 98371 )
            Speaking as a professional who doesn't use (or want to use) C++, I disagree. Whilst I agree that there are tiers of languages, and there are plenty I certainly wouldn't want to use on a regular basis, C++ is by no means the only one worthy of serious consideration. Java is one other obvious candidate, also (as much as I hate to say it) C#. And I'm sure we all know a 20+ year pro who uses Perl for everything :)
            • Re: (Score:3, Informative)

              by 2short ( 466733 )

              Which is why I said C++ may be the language you want; certainly there are other candidates. I'm saying I would not recommend anyone use C++ on an irregular basis. If you're a biologist who does some coding on the side, don't use C++; and don't be surprised if it seems unsuited to your needs.

              By analogy, I'm a coder who occasionally does some welding in the garage on weekends. Professional welders would scoff at my hobbyist-level equipment, and be insanely frustrated by its limited capabilities. But wer
              • Re: (Score:3, Informative)

                by radish ( 98371 )
                Completely agree, on rereading I realise I misunderstood the last line of your post as implying that if you don't want to use C++ you're not a professional. Mea culpa :)
        • Comment removed (Score:4, Insightful)

          by account_deleted ( 4530225 ) on Tuesday December 05, 2006 @03:53AM (#17110746)
          Comment removed based on user account deletion
          • Re: (Score:3, Insightful)

            by 2short ( 466733 )
            Well, quite a few of us see the benefit (and frankly don't think it's all that complex). If you don't, don't use it; nobody is holding a gun to your head. Nobody is holding a gun to the head of the many, many programmers who do choose C++. C++ is a wildly popular language; if you can't figure out the reason why, that doesn't mean there isn't one.
        • I've been involved in more threads than I wish to recall slinging mud at C++ and there is always a strong representation from the crowd who aren't willing to invest the time to understand the object they are criticizing. The criticism fundamentally boils down to: why should a language force me to think?

          That's a good question. The purpose of a programming language is much the same as that of mathematical notation, which is to allow you to think at the level of abstraction of your problem while not wasting

    • Re: (Score:2, Interesting)

      There's nothing intrinsically wrong with operator overloading, as other posters have indicated. One thing I do think C++ could do better is to have operators in families. For instance, == and != have well-understood and complementary functions. When we define equality on a type, the definition of inequality is pretty obvious. In the spirit of C++, there should be a way to specify completely different functions for them, of course, but generally bool operator!=(const X &x1, const X &x2) { return !(x1 == x2); } is what you want.
    • by hey! ( 33014 ) on Tuesday December 05, 2006 @09:06AM (#17112520) Homepage Journal

      I wouldn't take programming advice from a guy who overloads the bit-shift operator to perform I/O.


      Well, in the real world we have these things which often seriously limit the elegance of our designs. They're called constraints.

      In the case of C++, Stroustrup wanted to add extensions to C that would turn it into a complete object-oriented programming language. With the hindsight of years of experience, some things that were then thought to be critically important turned out to be of only marginal value. Multiple inheritance, for one thing. Another was allowing object classes to act as "first-class types", which implies the need to create and overload operators. However, given the state of knowledge at the time, they were reasonable goals.

      So, Stroustrup needed to implement operator overloading. He also chose to implement C++ as a preprocessor that converted C++ into C. There were some undesirable consequences of this, but for the most part it was a good decision for the language. What he accomplished at one stroke was to make a complete and highly capable object-oriented programming implementation available on a vast number of systems. The big advantage of C is that its small size made it the most portable language ever; piggybacking on it brought much of this advantage to C++ with minimal effort (another real-world constraint).

      IIRC, one of the undesirable consequences of his implementation approach was that it was much more convenient to limit C++ operators to tokens that are recognized as tokens by the C compiler. This means that to allow classes to be first-class types, the operators we define on those classes had to be "overloaded" C operators.

      From a design standpoint, this kind of "overloading" is a totally different kettle of fish from normal operator overloading. "Overloading" proper implements a kind of conceptual parallelism: floating-point addition is analogous to integer addition, even though it has a totally different implementation. True OO operator overloading plays the same role in expressions that polymorphism does in method calls. The C++ use of existing C operators to implement new concepts (e.g. I/O) is a pure kludge.

      This is what is known in the real world as a trade-off.

      We thought, back in 1979, that making classes first class types with their own operators was pretty important. Stroustrup needed to implement it then, but he also wanted to piggyback C++ on the existing C compiler for the reasons noted above. This trade-off satisfies both constraints at the cost of some aesthetic inelegance. Redefining the bitwise shift operator for I/O is conceptually inelegant, but it gets the job done and creates no confusion in practice. This is also a good trade-off.

      In retrospect, Stroustrup could have left certain features out of C++, because either they have proved more problematic than they are worth (e.g., multiple inheritance) or they are not really as useful as people thought they would be (operator overloading). Perhaps what we really needed was something more like Objective-C. But C++ became the dominant systems programming language, and Objective-C did not. Speaking as somebody who worked through the era of C++'s rise to dominance, this is a direct result of Stroustrup's choice of trade-offs. C++ was more widely ported. And C++ was a convincingly complete implementation of nearly everything we thought it was important to have in an OO language at the time.

      There is no doubt that C++ is a work of genius -- what's more, a rare mix of pragmatic and theoretical genius. If you need proof, consider that after twenty-five years, C++ remains an indispensable systems programming language, if not the indispensable language. You can hardly fault Stroustrup if it is not quite what we'd come up with today.
      • by Kupek ( 75469 ) on Tuesday December 05, 2006 @11:28AM (#17114336)
        I'm curious why you think multiple inheritance in C++ is more trouble than it's worth.

        As far as operator overloading is concerned, the intent was to provide the conceptual parallelism you explained. In D&E, he talks about C++ users asking for the capability for things like matrix addition. Using << and >> for stream input and output was an afterthought. Further, I don't think it was leveraging the C compiler that precluded him from overloading operators other than those already in C. He easily could have supported new operators, as Cfront was not just a preprocessor; it was a full compiler that happened to compile down to C. Since I've never read anywhere (in interviews, D&E, or TC++PL) why he chose not to allow arbitrary operators, I assume it was because he didn't feel they were necessary. I know that D&E has a discussion of an exponent operator, which was eventually ruled out.
        • by hey! ( 33014 ) on Tuesday December 05, 2006 @01:05PM (#17115758) Homepage Journal

          I'm curious why you think multiple inheritance in C++ is more trouble than it's worth.


          Because composition is nearly always a better way to deal with the design problems multiple inheritance attempts to solve, especially as the situation becomes more and more complex. Also, inheritance often implies more than necessary -- multiple inheritance multiply so. You usually are most concerned with guaranteeing an object's behavior when you use inheritance, but you also get an implementation whether you want it or not. This creates unnecessary complexity and problems when you use multiple inheritance simply to ensure that object class members provide certain services.

          I'm not saying it's never useful of course. But it is never necessary and often a bad thing.
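
          A minimal sketch of the composition alternative, assuming hypothetical Logger and Server classes (not from the thread):

            #include <iostream>
            #include <string>

            class Logger {
            public:
                void log(const std::string& msg) { std::clog << msg << "\n"; }
            };

            // Inheritance version: class Server : public Connection, public Logger {...}
            // hands Server a Logger implementation whether it wants one or not.
            // Composition version: Server *uses* a Logger without becoming one.
            class Server {
            public:
                void handleRequest() {
                    logger_.log("request handled");  // forward just the service needed
                }
            private:
                Logger logger_;  // composed member: no extra base-class baggage
            };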


          He easily could have supported new operators, as Cfront was not just a preprocessor; it was a full compiler that happened to compile down to C. Since I've never read anywhere (in interviews, D&E, or TC++PL) why he chose not to allow arbitrary operators, I assume it was because he didn't feel they were necessary. I know that D&E has a discussion of an exponent operator, which was eventually ruled out.


          You make some good points. My guess is this: allowing user-created operators probably made lexing difficult or impossible. You wouldn't be able to tell whether a sequence of characters was an operator or something else until you had parsed the operator's definition. You couldn't have a fixed grammar either, which might preclude further parsing even if you had a clever way of guessing that some string is probably an operator.

  • by Salvance ( 1014001 ) * on Tuesday December 05, 2006 @12:49AM (#17109742) Homepage Journal
    Bjarne says:
    Think of the Mars Rovers, Google, and the Human Genome Project. That's quality software
    but then goes on to say:
    On the other hand, looking at "average" pieces of code can make me cry. The structure is appalling, and the programmers clearly didn't think deeply about correctness, algorithms, data structures, or maintainability
    I doubt he has seen the code to the Mars Rovers, Google, or many other applications that he/we consider quality. He's judging it based on the software's function. If we were to judge software purely on how it worked, quite a bit of software could be considered quality. But if you were to look at the same software's code, you'd probably "cry" like Bjarne. Look at Firefox. That is a Quality application, but programmers I've spoken to said the code is a mess.
    • by Anonymous Coward on Tuesday December 05, 2006 @01:04AM (#17109848)
      The Firefox codebase is indeed a mess. Don't take my word for it, view it yourself: http://lxr.mozilla.org/seamonkey/source/ [mozilla.org].

      Part of the problem is the severe over-architecting, which has added much unnecessary complexity to the overall design of Gecko and Firefox. Much of it is "justified" in the name of portability. But then we find that other frameworks, including wxWidgets and GTK+, do just fine without the overly complex and confusing architecture of Gecko and Firefox.

      It's just not easy for most developers to get up to speed with the Mozilla codebase because of all this added complexity. Unless a volunteer developer has literally months to spend learning even the small portion of the code they're interested in working on, it's basically inaccessible to most programmers.

      The constraints of the real-world often come into play, and we have developers modifying code they don't necessarily understand fully. And so we get the frequent crashes, glitches, memory leaks and security problems that Firefox 1.5.x and 2.x have become famous for.

      Ideally, Mozilla would rewrite a vast portion of their code, keeping simplicity in mind. That likely won't happen, and thus we will most assuredly keep running into problems with Firefox and Gecko, problems caused directly by the overcomplication of the Mozilla architecture.

    • by jZnat ( 793348 ) *
      Look at Firefox. That is a Quality application, but programmers I've spoken to said the code is a mess.

      nsCOMPtr<pr_bool>(No) nsIt(s) NS_NOT
  • by sammy baby ( 14909 ) on Tuesday December 05, 2006 @12:50AM (#17109752) Journal
    Stroustrup:
    On the other hand, looking at "average" pieces of code can make me cry. The structure is appalling, and the programmers clearly didn't think deeply about correctness, algorithms, data structures, or maintainability. Most people don't actually read code; they just see Internet Explorer "freeze."


    Now that is just ridiculous. I'm using IE7 to post this article, and have been using it since its release, and I can say
    • by grammar fascist ( 239789 ) on Tuesday December 05, 2006 @02:07AM (#17110150) Homepage
      Now that is just ridiculous. I'm using IE7 to post this article, and have been using it since its release, and I can say

      You can say that it's magical, because it managed to post for you just before it crashed. Though that's pretty nifty, I've seen Firefox tack on a "NO CARRIER" before. Maybe you should submit a feature request.
  • by jgannon ( 687662 ) on Tuesday December 05, 2006 @12:51AM (#17109764) Homepage
    This is only my second favorite Stroustrup interview. The first is here: http://www.chunder.com/text/ididit.html [chunder.com] (Yes, I know it's a hoax.)
  • by TransEurope ( 889206 ) <[ed.znelbok-inu] [ta] [caine]> on Tuesday December 05, 2006 @01:03AM (#17109840)
    "...looking at "average" pieces of code can make me cry. The structure is appalling, and the programmers clearly didn't think deeply about correctness, algorithms, data structures, or maintainability."

    Maybe it's because the average programmer is enslaved by company business. They don't have the time to create masterpieces of art in programming. Instead, they are forced to create something adequate in a given time. That happens almost every time science becomes business. I don't like it, you don't like it, no one likes it, but that's the way commercial industries work (at the moment).
    • by 2short ( 466733 ) on Tuesday December 05, 2006 @01:24AM (#17109932)

      I deal every day with programmers who don't think they have time to deal with things like correctness, algorithms, data structures, or maintainability. In their panic to create something adequate in a given time, they invariably run over time and create something inadequate. They'd have been much better off doing it the "right" way, because the whole reason it's called the "right" way is it's the fastest way to get the bloody job done.

      Like it or not, writing code that has to be done on some deadline, and work, is how commercial (and much non-commercial) coding is; at the moment, at all previous moments, and for all future moments. So learn to write good code in that environment or get a different job; don't write bad code and blame it on obvious truisms.

      Sorry, long day :)
      • by SageMusings ( 463344 ) on Tuesday December 05, 2006 @01:56AM (#17110086) Journal
        So learn to write good code in that environment or get a different job; don't write bad code and blame it on obvious truisms

        Sure....

        And the moment you demonstrate to the organization you can write a quality app in 3 months, they'll decide they can ask for the next one in 2 months. You should come and try my environment some time.

        I wouldn't say we write bad code. We just write adequate code in a survival mode to appease customers who were assured by our sales team that we can change anything they like.
        • by 2short ( 466733 ) on Tuesday December 05, 2006 @02:44AM (#17110368)
          I generally have a release deadline every 2 weeks or so. I have some code I've been building on for 8 years; and some I've re-invented and re-written so many times I shudder to think what might have been accomplished with the time I'd have saved if I hadn't tried to take a shortcut the first time I wrote it.

          If there is anything that my job has firmly beaten in to me it's that doing it right saves you time over taking the shortcut; and not even down the road, but right there, the first time. The stupefyingly huge savings in maintainability and reusability are just gravy.

          It sounds to me like I would say you write bad code, and I'd recommend trying to write the best code you can because it will get things done faster. Salesmen promising customers unreasonable things won't change, so it's no reason to make things worse. If the things they sell are truly unreasonable, in that they cost more to do than someone pays, and they don't get fired, then your management is incompetent. In that case, you're screwed, but still no reason to make it worse. :)
      • Re: (Score:3, Insightful)

        by loconet ( 415875 )
        "the whole reason it's called the "right" way is it's the fastest way to get the bloody job done"

        Wrong. Why do I keep hearing this argument? It makes no sense whatsoever. I've heard it from many managers; is that what they teach them? I'm obviously missing something. My experience and common sense tell me that it takes longer to sit down and design a solution using a well-thought-out process (i.e. the "right" way) than it takes to throw together a hack to address the problem quickly.

        If the "right" way is th
        • Re: (Score:3, Insightful)

          by qwijibo ( 101731 )
          I think the "right" way is the one that takes the least time in the long run. The problem is that business is focused on "good enough right now" and doesn't care if spending a week instead of a month on something now may still cost the same month later, plus another couple of months of converting everything from the wrong way to the right way. From the business perspective, "later" is someone else's problem. Businesses succeed or fail based on how effectively they determine which things can be put off un
    • Agreed... (Score:5, Interesting)

      by Bamafan77 ( 565893 ) on Tuesday December 05, 2006 @01:42AM (#17110032)
      Maybe it's because the average programmer is enslaved by company business. They don't have the time to create masterpieces of art in programming. Instead, they are forced to create something adequate in a given time. That happens almost every time science becomes business. I don't like it, you don't like it, no one likes it, but that's the way commercial industries work (at the moment).
      Agreed, but the problem is complicated. Sometimes code is bad because the programmer is not very good (the vast majority of cases). Other times it's bad because a good programmer wasn't given enough time to do the job. I once inherited something where a customer wasn't happy with a product, and I pulled open the hood expecting a mess. Instead, what I got was extremely well-documented code explaining the layout, sanely named variables, and some fairly complicated things happening in an understandable manner. The guy I got this from was a very good programmer (heh, how often does THAT happen?!). Then it occurred to me that the customer simply wanted the impossible done.

      Anyway, the typical unsophisticated (software-development-wise) customer can't tell the difference between the two. This is made worse when many managers who were supposedly professional programmers themselves can't tell the difference either. As far as I can tell, the only way for a programmer to deal with this is to simply BE great and be ready to move on if the customer can't see that greatness. Eventually they'll land somewhere that will appreciate it.

      I also cover some of this in another reply. [slashdot.org]

      • Re:Agreed... (Score:4, Insightful)

        by squoozer ( 730327 ) on Tuesday December 05, 2006 @06:30AM (#17111540)

        Sometimes code is bad because the programmer is not very good (the vast majority of cases).

        I hear this quite a bit, and I think it's probably a flawed assumption, or at least too simple a statement to describe the truth. The vast majority of developers can't be below average, or the average would drop. What we can say is that a good portion of developers seem to have a poor grasp of basic software development skills. What we need to ask then is why.

        In my experience there seems to be far more variation in skill level between software developers than I have seen in any other profession. Perhaps this is simply because I am only familiar with software development and there is the same spectrum width in other professions as well, but I somehow doubt it. I suspect that software development is actually a very, very hard process that only a small number of people truly have the mental discipline for. Since that number is less than the number of developers required, we need to do something to make software development easier for the masses of developers. This is similar to the way cabinets were made: the master cabinet maker would produce the top and front, and the less skilled apprentice would produce the frame (since it's easier).

    • by misleb ( 129952 )
      Oh please. Look at just about every open source project out there. No management holding them back. No marketing departments making unreasonable demands. No accountants cutting budgets. And yet most projects are full of flaws and crappiness. I'm not knocking Open Source. I love it, but at the same time I have to overlook a lot of shortcomings. When it comes right down to it, the Linux kernel, for example, is not significantly better than, say, the Solaris kernel. They each have their own strengths and weaknesses
  • Comment removed based on user account deletion
    • by r00t ( 33219 ) on Tuesday December 05, 2006 @01:48AM (#17110058) Journal
      The KISS principle is totally lost on that guy.

      The moment you have 2 people doing C++ on 1 project, at least 1 person will be faced with code written using features they just don't understand. C++ has features to spare.

      Think you know C++? No, you don't. Heck, the compiler developers are often unsure.

      This is a recipe for disaster, as we often see.

      C was hard enough. Few people truly understood all the dark corners. (sequence points, aliasing rules, etc.)

      C++ is addictive. Everybody wants one cool feature. C code is somewhat easy to convert. Soon you're using enough of C++ that you can't go back, and hey, more is better, right? The next thing you know, some programmer on your team gets the wise-ass idea to use Boost lambda functions (for no good reason), and you find yourself with 14 different string classes and... a mess that no single developer can fully deal with.

  • PIC microcontrollers in particular. Not only do you know exactly what the chip will do, but since it's programming on a reasonably small scale, you can have complete control -- right down to the actual bits -- of everything on the chip. Small, efficient, fast -- and with a bit more effort, you can do a quick mathematical proof that the software is airtight.

    I realize this is completely impractical on a level of an operating system -- but TFA is right; if we could put a little less emphasis on having the sh
  • by Bamafan77 ( 565893 ) on Tuesday December 05, 2006 @01:26AM (#17109938)
    From the article:

    TR: How can we fix the mess we are in?

    BS: In theory, the answer is simple: educate our software developers better, use more-appropriate design methods, and design for flexibility and for the long haul. Reward correct, solid, and safe systems. Punish sloppiness.

    In reality, that's impossible. People reward developers who deliver software that is cheap, buggy, and first. That's because people want fancy new gadgets now. They don't want inconvenience, don't want to learn new ways of interacting with their computers, don't want delays in delivery, and don't want to pay extra for quality (unless it's obvious up front--and often not even then). And without real changes in user behavior, software suppliers are unlikely to change.

    There ya go! Time pressures and price are fundamentally incompatible with code quality, even amongst the best programmers. Ergo, great programming is incompatible with most business models (i.e., most businesses don't have the money to make the software they want at the quality they want). It's sort of like wanting a Ferrari but only having enough money to buy a Gremlin. Sadly, many (most?) programming projects are nothing more than an arms race between getting something out the door that hangs together reasonably well and the bottom of the client's bank account.

    The good thing about working in software-centric companies (besides their understanding of the programmer psyche) is that they often don't balk as much at being told something can't be done in a timeframe. Blizzard doesn't blink an eye when it has to delay a game by a year (probably more like 2 or 3 years compared to internal, non-public dates). Microsoft decided to nuke WinFS once it finally conceded that you're not going to get it within this decade, no matter how many chairs they throw. Google apparently has almost no schedules [blogspot.com].


  • Its crazy (Score:3, Insightful)

    by JustNiz ( 692889 ) on Tuesday December 05, 2006 @01:31AM (#17109970)
    To all those people saying C++ is too dangerous/prone to errors and Java/C# is the way ahead:
    Stop blaming the tools and look to yourselves.

    C++ is like a sharp scalpel. Yes you can hurt yourself if you're unskilled, inexperienced or sloppy.
    Java and C# are like those scissors with rounded ends for kids. Totally inefficient, but safe for beginners.

    Unfortunately, it seems that there are a lot of people out there who like to call themselves programmers but have no actual ability. Java/C# does a good job of removing their need to think and hiding their innate lack of skill, which is why they prefer it.

    But there's a reason why surgeons don't use plastic scissors. The same applies to good software engineers.
    • Re: (Score:3, Interesting)

      by Coryoth ( 254751 )

      C++ is like a sharp scalpel. Yes you can hurt yourself if you're unskilled, inexperienced or sloppy.
      Java and C# are like those scissors with rounded ends for kids. Totally inefficient, but safe for beginners.

      So where does something like Eiffel fit in? It has all the usual bits to stop you shooting yourself in the foot (a strong static type system, garbage collection, etc.) plus added extras to make your code even more maintainable, and make it even harder to shoot yourself in the foot (design by contract, SCOOP concurrency [se.ethz.ch], e

    • Re:Its crazy (Score:5, Insightful)

      by EvanED ( 569694 ) <evaned@noSpam.gmail.com> on Tuesday December 05, 2006 @01:51AM (#17110072)
      C++ is like a sharp scalpel. Yes you can hurt yourself if you're unskilled, inexperienced or sloppy

      "C++ gives you enough rope to shoot yourself in the foot"

      Java and C# are like those scissors with rounded ends for kids. Totally inefficient, but safe for beginners.

      I'm not convinced by the "totally inefficient" bit. I think you'd be pushing it for time-critical systems (indeed, current GC is more or less incompatible with realtime systems), OSes, etc., but I'm not convinced that they're not just fine for applications. This especially applies to C#, because C# GUIs are actually responsive. (Swing and, to a lesser extent, SWT lag a little.)

      But there's a reason why surgeons don't use plastic scissors.

      There's also a reason carpenters don't use scalpels. It's because different tools are good for different jobs.
    • Of course surgeons don't use plastic scissors. But how often do you need a surgeon? Most of the time you need a cheap tailor to make you a suit. And that's when scissors come in handy.

      C++ can do wonders when used by highly experienced people. But most of the time, it is more cost-effective to get entry-level coders and use PHP/Java/C#/whatever. You will get a (somewhat) working product cheaper and possibly faster. And time to market and cost are often more important than maintainability/quality.

      And don
    • Re:It's crazy (Score:4, Insightful)

      by QuantumG ( 50515 ) * <qg@biodome.org> on Tuesday December 05, 2006 @02:06AM (#17110136) Homepage Journal
      As much as your comment is flamebait, I have to agree with you to a point. The "virtual machine" aspect of the Java programming environment has probably done more to harm the quality of programmers than anything else. I know Java programmers who don't understand how a computer works. They ask me questions about "how the processor loads strings into registers" and such. Being able to not think about the nitty-gritty of the processor you are writing your code for is great, but that doesn't justify not knowing the basics of how a processor actually works. You might as well be coding in LOGO.

      This, of course, is not true of all Java programmers. It probably isn't true of most Java programmers, but I feel safe to say that it's true of more Java programmers than it is of C or C++ programmers.
      • Re: (Score:3, Insightful)

        by arevos ( 659374 )

        The "virtual machine" aspect of the Java programming environment has probably done more to harm the quality of programmers than anything else. I know java programmers who don't understand how a computer works. They ask me questions about "how the processor loads strings into registers" and such. Being able to not think about the nitty gritty of the processor you are writing your code for is great, but that doesn't justify not knowing the basics of how a processor actually works.

        You don't explain how knowin

  • by CPE1704TKS ( 995414 ) on Tuesday December 05, 2006 @02:06AM (#17110138)
    The problem with programming is that too many people who lack the talent are in the programming business. I know because I work with many of them. They are not detail-oriented, they don't think strategically or long-term, and they just make a mess of code. They only want to fix the problem in front of them without worrying about the effect it will have on the rest of the system. This is what causes bad programs. Programming is easy enough that any moron can make something work, but making something continue to work requires an engineering understanding, and that is something most people don't have. It's unfortunate.
  • by shess ( 31691 ) on Tuesday December 05, 2006 @03:07AM (#17110502) Homepage
    Many of the points in the interview implied that software was simply soaking up all the hardware performance, and perhaps we could squeeze more out of the software. I completely agree, except ...

    http://www-inst.eecs.berkeley.edu/~maratb/readings/NoSilverBullet.html [berkeley.edu]

    The problem is that the software is an order of magnitude slower than it needs to be because the hardware has increased in performance by 2 and 3 and 4 orders of magnitude. If we had held the software to the same standards we used back when the hardware cost more than the programmers, it would be more efficient - but it would only be able to make use of a couple of megabytes of RAM and disk. The looseness of current software is part and parcel of harnessing the hardware. The hardware didn't just allow us to get loose with the software we wrote - it allowed us to use abstractions which were measurably less efficient, but which had the side effect of allowing us to harness the hardware in the first place.

    As a pair of trivial examples, take arrays and dictionaries. When I ask interview questions like "Design a hashtable" or "Reverse a linked list", many candidates have to actually step back and think about the question! 30 years ago, designing a good hashing function was the mark of true talent, and gains were to be had by selecting the linked-list scheme which best suited the problem at hand. These days, many people don't really know why you'd use a map versus a hash_map, or a vector versus a deque. And, for the most part, they don't really need to.
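
    For reference, the "reverse a linked list" answer those candidates are groping for is only a few lines. A minimal C++ sketch (the Node type is invented for illustration):

        #include <cstddef>

        struct Node {
            int value;
            Node* next;
        };

        // Reverses the list in place and returns the new head.
        // O(n) time, O(1) extra space.
        Node* reverse(Node* head) {
            Node* prev = NULL;
            while (head != NULL) {
                Node* next = head->next;  // remember the rest of the list
                head->next = prev;        // point the current node backwards
                prev = head;              // advance both pointers
                head = next;
            }
            return prev;
        }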
  • The C++ Hater (Score:3, Insightful)

    by Zobeid ( 314469 ) on Tuesday December 05, 2006 @10:28AM (#17113418)
    Not enough is made of the sheer obfuscatory nature of C++. C was already somewhat cryptic, but at least it was small. C++ is cryptic and large, and that's really a bad combination. At one point in the interview Stroustrup recounts that half the programmers he polled who said they disliked C++ then admitted they had never programmed an application using C++. He calls that prejudice. I call it perfectly normal human behavior; if you begin to study a language and quickly discover it to be a load of tailings, then you will be disinclined to program applications using it.

    That was my experience anyhow. I began studying C++, and at some point I stopped and asked myself, "Why must I endure this? Surely there must be better options." And I was right.

    I really have to grit my teeth when Stroustrup talks about C++ "winning" against competing languages. C++ is successful for the same reason that COBOL and Microsoft Windows have been successful: because they happened to appear in the right place at the right time, and were promoted by the right people, to become entrenched. Once entrenched, the world was saddled with them for decades to come. It has nothing to do with their inherent qualities or advantages, it's little more than random chance.
  • by Mock ( 29603 ) on Tuesday December 05, 2006 @03:28PM (#17118290)
    While you can bash the tools as much as you want, the point remains that the majority of the fault for bad programs lies with the programmer.

    Just as an adept sculptor can build a beautiful (though somewhat rough) art piece using a chainsaw, so can a good programmer make do in situations where he is forced to use the wrong tool for the job.

    Namespaces can be simulated with a good naming convention.
    OO can be accomplished in a procedural language.
    Technologies can be married together, and even replaced in part with other technologies at a later date (it's called refactoring, folks!)
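
    To make those first two points concrete, here is a contrived C-style C++ sketch (all names invented for illustration): a naming prefix standing in for a namespace, and a function pointer in a struct standing in for a virtual method.

        #include <cstdio>

        // "Namespace" by convention: everything in this module carries
        // the audio_ prefix instead of living in namespace audio.
        void audio_init();
        void audio_play(const char* clip);

        // OO in a procedural style: data plus a function pointer slot
        // that plays the role of a virtual method.
        struct Shape {
            double width;
            double height;
            double (*area)(const Shape*);
        };

        double rect_area(const Shape* s)     { return s->width * s->height; }
        double triangle_area(const Shape* s) { return 0.5 * s->width * s->height; }

        int main() {
            Shape shapes[] = {
                { 3.0, 4.0, rect_area },
                { 3.0, 4.0, triangle_area },
            };
            for (int i = 0; i < 2; ++i)
                std::printf("area: %f\n", shapes[i].area(&shapes[i]));
            return 0;
        }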

    I currently program exclusively in Java. I learned from the ground up (Analog electronics -> Digital logic -> Machine code & Assembly -> C -> C++ -> other OO languages & scripting languages, AOP, etc.)

    I'd love to have multiple inheritance in Java, but I hate the fact that you can't rebind a reference in C++.
    I'd love to have real properties and closures supported by the Java language proper, but I make do with standardized boilerplate code in the meantime.
    I love the quick UI building you can do in VB, but I certainly wouldn't want to write business logic with it!
    Access is great for building quick and simple systems, and does its job well, but I'm not going to store 10 million records in it!
    Nothing beats the speed of a library written in assembler, but I'm certainly not going to write database access code in it!
    Perl is a great tool in the right hands. In the wrong hands, it is the worst disaster ever, and the first thing I get rid of when I take over support of a project (except for 10% of the time when the previous programmer was competent).

    I've seen horrendous code in every language I've ever encountered, and it's always a result of the programmer not understanding what he's working with. My personal opinion is that you shouldn't be programming unless you understand at least one layer below where you're working.

    Do you know how to examine a core dump? How about interpreting a Java HotSpot dump?
    Do you understand how the technology you're working with interacts with the operating system?
    Do you know how the auto-code generator deep within the script overlay you're using actually works? Have you even once looked at the intermediate output?
    How about the .S files created by your compiler? Do you even know what they are? Could you load an object file into a disassembler?
    What will you do when something goes wrong with it? Give up?

    Do you follow the generally accepted practices used in your domain? Do you even know what they are?
    Do you know what domain driven design is?
    Do you understand when it's a bad idea to use inheritance? (Answer: most of the time)
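
    To illustrate that last point, one common answer is "prefer composition." A small C++ sketch (class names invented for illustration): a stack merely uses a vector, so inheriting from one leaks operations that break the stack discipline.

        #include <vector>

        // Risky: publicly inherits the entire vector interface,
        // including operator[] and insert(), which violate LIFO order.
        class StackByInheritance : public std::vector<int> {};

        // Safer: composition exposes only what a stack should allow.
        class StackByComposition {
            std::vector<int> items_;  // implementation detail, kept private
        public:
            void push(int x)   { items_.push_back(x); }
            void pop()         { items_.pop_back(); }
            int  top() const   { return items_.back(); }
            bool empty() const { return items_.empty(); }
        };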

    I could go on forever. The point is that good programmers find the right tool for the job because they understand how it all works. Hackers do it fast, but forget to make it readable or maintainable. Bad programmers just plain do a bad job and make things shitty for everyone.

"You'll pay to know what you really think." -- J.R. "Bob" Dobbs

Working...