Donald Knuth Rips On Unit Tests and More

eldavojohn writes "You may be familiar with Donald Knuth from his famous Art of Computer Programming books, but he's also the father of TeX and, arguably, one of the founders of open source. There's an interesting interview where he says a lot of things I wouldn't have predicted. One of the first surprises to me was that he didn't seem to be a huge proponent of unit tests. I use JUnit to test parts of my projects maybe 200 times a day, but Knuth calls that kind of practice a 'waste of time' and claims 'nothing needs to be "mocked up."' He also states that methods to write software that takes advantage of parallel hardware (like the multi-core systems we've discussed) are too difficult for him to tackle due to ever-changing hardware. He even goes so far as to vent about his unhappiness with chipmakers for forcing us into the multicore realm. He pitches his idea of 'literate programming,' which I must admit I've never heard of but find intriguing. At the end, he even repeats his adage that young people shouldn't do things just because they're trendy. Whether you love him or hate him, he sure has some interesting/flame-bait things to say."
This discussion has been archived. No new comments can be posted.

  • by Gordonjcp ( 186804 ) on Saturday April 26, 2008 @11:50AM (#23207686) Homepage
    ... looks like it falls into the same trap as COBOL. The idea that by making programming languages incredibly verbose, they will somehow become easier to use is a fallacy.

    Using "MULTIPLYBY" instead of "*" isn't going to make your code easier to read.
    • You misunderstand (Score:5, Interesting)

      by Rix ( 54095 ) on Saturday April 26, 2008 @12:01PM (#23207750)
      Using "MULTIPLYBY" instead of "*" is asinine, because both are equally descriptive. Putting a comment above the line telling people why you're doing it isn't.
      • Re:You misunderstand (Score:5, Informative)

        by Anonymous Coward on Saturday April 26, 2008 @03:07PM (#23208660)
        The GP must have been confused by the example on Wikipedia, which a) wasn't literate programming and b) used a shitty made-up language where "multiplyby" was one of the operators. Literate programming is programming (in your favourite language) with a code-in-documentation approach instead of the usual documentation-in-code approach. So, for example, the flow of your literate program is defined by how best to explain what's happening to a human reader, rather than being constrained by the order the compiler requires. You run your literate program through a tool and it spits out compilable code or pretty documentation.
      • by PingPongBoy ( 303994 ) on Sunday April 27, 2008 @11:12AM (#23214642)
        Putting a comment above the line telling people why you're doing it isn't.

        I don't have to tell people why I do it. I do it for money. Well, sometimes for love and world peace, but that's rare.
    • by CastrTroy ( 595695 ) on Saturday April 26, 2008 @12:04PM (#23207760)
      I'm not sure if MultiplyBy is any better. It takes longer to type, and most people have understood that * (or x) meant multiply since second grade. The thing I like about more verbose languages like VB.Net is that they force you to write out things that most good programmers would write out anyway as comments. At the end of a while loop, you write "End While". Most people using C++/Java/C# end up writing "} //end while" anyway, especially if the loop goes over 6 or 7 lines, and even more so if there's a nested construct like an if statement in there. Seeing the "End While" lets you know what you are finishing off, without scrolling higher and trying to line up and see which bracket matches, which even with visual brace matching can sometimes be difficult.
      • by Eunuchswear ( 210685 ) on Saturday April 26, 2008 @12:18PM (#23207832) Journal

        Most people using C++/Java/C# end up writing "} //end while" anyway,
        Pray god I never have to work on code written by these fictitious "most people".

        I'd kill any colleague of mine who wrote such a vacuous comment. With a golf club, in front of its cow-orkers, to drive the lesson home.
        • by 1729 ( 581437 ) <slashdot1729@nospam.gmail.com> on Saturday April 26, 2008 @12:22PM (#23207856)

          Most people using C++/Java/C# end up writing "} //end while" anyway,
          Pray god I never have to work on code written by these fictitious "most people".

          I'd kill any colleague of mine who wrote such a vacuous comment. With a golf club, in front of its cow-orkers, to drive the lesson home.
          I sometimes add comments like that if the brace is closing a deeply nested block, but then the comment indicates which particular loop or block is ending.
          • by numbsafari ( 139135 ) <swilson@bsd4usBLUE.org minus berry> on Saturday April 26, 2008 @12:51PM (#23207990)
            A better way to handle that is to turn the loop body into a function or group of functions. That makes the code easier to read, and a good compiler will inline the function so there's no performance loss.
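The extraction suggested above can be sketched like this. This is a hypothetical Python example (the names `keep_record` and `process_records` are invented for illustration); note the inlining point applies to compiled languages like C/C++, so in Python the gain is purely readability.

```python
# Instead of one deeply nested loop with an inline compound test,
# the loop body's condition gets a descriptive name of its own.

def keep_record(record):
    # One clearly named condition instead of an inline compound test.
    return record["active"] and record["score"] > 0

def process_records(records):
    # The call site now reads as a summary of what the loop does.
    return [r["name"] for r in records if keep_record(r)]

records = [
    {"name": "a", "active": True, "score": 2},
    {"name": "b", "active": False, "score": 5},
    {"name": "c", "active": True, "score": 0},
]
print(process_records(records))  # → ['a']
```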
            • Spaghetti-O Code (Score:5, Insightful)

              by illegalcortex ( 1007791 ) on Saturday April 26, 2008 @01:33PM (#23208214)
              I somewhat disagree with what you and... *sigh* Monkeybaister posted. Yes, there are many times when long stretches of code should be broken out into functions. But I tend to do that mostly when the same bit of code is used in several different cases. The reason is that when you start modularizing off all your while loops that are more than a dozen lines long, you create a whole new type of spaghetti code. I'm going to coin a term and call it "spaghetti-O code." You try to track down a bug, and what would have been a straightforward couple of pages of code now has all kinds of functions at different places in the code. As such, it can often make debugging or forming a mental map of the code much harder.
              • Re: (Score:3, Insightful)

                by fishbowl ( 7759 )
                >The reason being is that when you start modularizing off all your while loops that are
                >more than a dozen lines long, you create a whole new type of spaghetti code.

                There is also a risk that you or a maintenance programmer might re-use such a "function"
                that was created simply to make a while loop more aesthetically pleasing, and introduce a bug because that function was not designed or tested for use in isolation.

                And in the spirit of the topic, such functions become awkward to unit test, since you're e
              • Re:Spaghetti-O Code (Score:5, Interesting)

                by Kent Recal ( 714863 ) on Saturday April 26, 2008 @03:14PM (#23208702)
                I know this problem very well from the dark days when I was still writing java.
                There doesn't seem to be a satisfactory solution, it's always a tradeoff.

                While reading this thread I realized a funny thing: This particular annoyance
                totally vanished from my day-to-day headaches when I switched to python about
                a year ago.

                It's a bit weird because Python doesn't even use braces so one would expect
                it to be even harder to identify where a block begins and where it ends.
                But the opposite has been the case for me: The clean syntax and language
                design has led me to write, on average, shorter blocks with very little
                nesting.

              • Re:Spaghetti-O Code (Score:5, Informative)

                by mykdavies ( 1369 ) on Saturday April 26, 2008 @04:08PM (#23208962)
                It's called ravioli code [wiktionary.org]
        • by Swizec ( 978239 ) on Saturday April 26, 2008 @12:33PM (#23207906) Homepage
          I concur. Comments should tell WHY the while block is there and what it DOES. Not where it starts and where it ends, the code tells us that descriptively enough.

          I've met code blocks several hundred lines long and it was never ambiguous where they started and ended.
        • by rascher ( 1069376 ) on Saturday April 26, 2008 @01:46PM (#23208276)
          I myself solve the problem using this construct:

          #define BeginWhile {
          #define EndWhile }

          #define BeginFor {
          #define EndFor }
          ...
        • Re: (Score:3, Insightful)

          by johannesg ( 664142 )
          I worked with Captain Endif for a while. It gets very, very, VERY tiring at some point. Especially in cases like this:

          #define TRUE_VAL true
          #define FALSE_VAL false

          // if theVar is true
          if (theVar == TRUE_VAL) {
              // set theVar to false
              theVar = FALSE_VAL;
          } // end if

          (I made this up, but sadly it is not that far removed from actual examples...)

          I also worked with a guy (another one) who left a blank line between every two lines of code. ALWAYS.

          Anyway, if you are in the neighbourhood, feel free to com
        • by mkcmkc ( 197982 ) on Saturday April 26, 2008 @04:26PM (#23209032)

          Most people using C++/Java/C# end up writing "} //end while" anyway,
          Pray god I never have to work on code written by these fictitious "most people".
          Well, actually, once you've programmed in Python for a while, all of those spurious '}'s to close blocks really start to look as annoying and useless as "} //end while".
          • Re: (Score:3, Insightful)

            Yeah, that's the one thing I hate about Ruby now -- seeing the end of a file that looks like this:

            # for some reason, Slashdot won't indent the first line...
            end
            end
            end
            end
            end
            end

            Especially when the whole culture around things like Ruby on Rails is "Convention over Configuration" (thus, your code should always be indented properly anyway) and "Don't Repeat Yourself" (tons of 'end' statements isn't par

      • Re: (Score:3, Insightful)

        by davolfman ( 1245316 )
        I don't. It's pointless if you format your code decently. The bracket and the knockdown in tabbing should be enough. The only place I can see it being useful is when you have a truckload of nested brackets, and even then you want something a lot more useful than "end while" - it should at least name the stinking loop.
      • Re: (Score:3, Informative)

        Something like this might help: folding in vim [toronto.edu]. Emacs probably already has an 11-note chord that does this.
    • That's not at all what he's referring to. It's the practice of using white space and proper comments to document the code in a clear way. Since nobody who's going to be reading code is going to need MULTIPLYBY for clarity over *, that wouldn't be an appropriate choice. Further, using the former rather than the latter would cause more headaches by making the code more difficult to read.

      As a general rule, no matter where you are in the code, no screen should be without at least a couple of comments, if you can sc
    • by 1729 ( 581437 ) <slashdot1729@nospam.gmail.com> on Saturday April 26, 2008 @12:09PM (#23207790)

      ... looks like it falls into the same trap as COBOL. The idea that by making programming languages incredibly verbose, they will somehow become easier to use is a fallacy.

      Using "MULTIPLYBY" instead of "*" isn't going to make your code easier to read.
      From what I've seen (particularly of CWEB), literate programming doesn't change the programming language itself, it just adds a TeX style markup to the comments so that detailed (and nicely typeset) documentation can be generated from the source code. Take a look at some of Knuth's CWEB code, such as his implementation of Adventure:

      http://sunburn.stanford.edu/~knuth/programs/advent.w.gz [stanford.edu]

      It appears to be ordinary C once the CWEB documentation is stripped out.
      • Re: (Score:2, Insightful)

        by CastrTroy ( 595695 )
        So basically it's the same as the XML comments you can put in your .Net or Java code to create JavaDocs, or whatever they are called in .Net, based on the comments in the code?
        • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Saturday April 26, 2008 @12:24PM (#23207864) Homepage Journal

          So basically it's the same as the XML comments you can put in your .Net or Java code to create JavaDocs, or whatever they are called in .Net, based on the comments in the code?
          Not quite. In Javadoc (or the C/C++ equivalent) the C/Java code is the source, and documentation is generated from that. In literate programming, the documentation is the source, and it has code snippets, like you would see in a Knuth textbook.


          The snippets have markup to indicate when some snippet needs to come textually before another to keep a compiler happy, but mostly this is figured out automatically. But in general, the resulting C code is in a different order than it appears in the source documentation. For instance, the core algorithm might come first, with all the declarations and other housekeeping at the end. (With documentation about why you're using this supporting library and not that, of course.)

        • by Goaway ( 82658 )
          No. It also allows re-ordering code for better readability.
    • by Basilius ( 184226 ) on Saturday April 26, 2008 @12:10PM (#23207792)
      That's not literate programming at all. A tad more research on your part is required. I actually remember when "web" in a computing context meant a literate programming tool rather than that thing you're surfing right now.

      Literate Programming interleaves the documentation (written in TeX, naturally) and code into a single document. You then run that (Web) document through one of two processors (Tangle or Weave) to produce code or documentation respectively. The code is then compiled, and the documentation built with your TeX distribution. The documentation includes the nicely formatted source code within.

      You can use literate programming in any language you want. I even wrote rules for Microsoft C 7.0's Programmer's Workbench to use it within the MSC environment.

      I've frequently thought about going back. Javadoc and/or Sandcastle are poor alternatives.
      • Excuse my ignorance, but please explain how this is different from (or superior to) doxygen or any of the many systems that do just this. I'm not meaning to be rude, I'm just asking.
        • Re: (Score:3, Informative)

          by Sancho ( 17056 ) *
          From my brief look at doxygen, it looks like the biggest difference is semantic. Literate Programming with web is effectively documentation with code bits and metacode to indicate where the code bits should go. This means that the code bits can be (and should be) in the order that makes the most sense for the documentation. This is not necessarily the order that makes the most sense for the code.

          Doxygen looks like it just extracts properly formatted comments in code in order to generate documentation. W
        • Re: (Score:3, Informative)

          by Coryoth ( 254751 )

          Excuse my ignorance, but please explain how this this different (or superior) to doxygen or any of the many systems that do just this. I'm not meaning to be rude, I'm just asking.

          I think the prime difference is that literate programming allows you to re-order the code; that is, you include snippets of code within the documentation, and attach tags to the snippets that allow them to be reassembled in a different order. That doesn't sound like much, but it means that you can just write the documentation and have code appear as it is relevant to the documentation, rather than having the program structure dictate things. Take a look at some [literateprograms.org] examples [literateprograms.org] (in various languages [literateprograms.org]) to see what I mean.

        • Re: (Score:3, Informative)

          by sholden ( 12227 )
          It predates it.

          And the philosophy is different, literate program is essentially embedding the code in the documentation. Doxygen is more about embedding documentation in the code.

          So doxygen gives you fancy comments and a way of generating documentation from them and from the code structure itself. CWEB lets you write the documentation and put the code in it deriving the code structure from the documentation, sample cweb program: http://www-cs-faculty.stanford.edu/~knuth/programs/prime-sieve.w [stanford.edu]

          Literate progra
      • Re: (Score:3, Interesting)

        by jacquesm ( 154384 )
        if you feel like experimenting with literate programming try finding the 'leo' editor (written in python)
    • by Junta ( 36770 )
      I read a sample and I must confess that I think the sample was more confusing than most typical code. I'm talking about how multiply gives only one operand, and seems to be using an implicit default variable. Perl also can do this, and when I see people use it in complex situations, it's hard to tell when the last time default would have been set at times.

      I do agree making the primitives verbose doesn't help. Ultimately, it isn't the fact that '*' is hard to understand, or that braces enclose blocks, the
    • by Nicolas Roard ( 96016 ) on Saturday April 26, 2008 @01:56PM (#23208320) Homepage
      Literate Programming is not about making programming languages incredibly verbose; it's about *describing* your program in a normal, human way, by explaining it step by step and quoting bits and pieces of the program. Sounds ideal from a documentation point of view, right? Only, if that was all there was to Literate Programming, it would be a stupid idea, as documentation has a nasty habit of not keeping up with code modifications.

      The really cool idea with LP is that the code snippets you use in the documentation are then woven together to generate the "real" code of your program. So an LP document is BOTH the documentation and the code. A code snippet can contain references ("includes") to other code snippets, and you can add stuff to an existing code snippet.

      Let me show you an example in simple (invented) syntax:

      {my program}

      {title}My super program{/title}

      Blablabla we'd need to have the code organized in the following loop:

      {main}:
          {for all inputs}:
              {filter inputs}
              {apply processing on the filtered inputs}
          {/}
      {/}

      The {for all inputs} consists of the following actions:

      {for all inputs}:
          some code
      {/}

      The filtering first removes all blue inputs:

      {filter inputs}:
        remove all blue inputs
      {/}

      ...and then removes all the green inputs:

      {filter inputs}+:
        remove all green inputs
      {/}

      etc.

      {/}

      The above is purely to illustrate the idea; the actual CWEB syntax is a bit different. But you can see how, starting with a single source document, you could generate both the code and the documentation of the code, and how you can introduce and explain your code gradually, explaining things in whichever way makes the most sense (bottom-up, top-down, a mix of those...).
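To make the "weaving together" step concrete, here is a minimal, hypothetical "tangle" pass in Python for the invented syntax in the comment above (not real CWEB; the parsing rules are assumptions for illustration): it collects `{name}:` ... `{/}` definitions, treats `{name}+:` as appending to an earlier definition, and recursively expands lines that are references to other snippets.

```python
import re

def parse(doc):
    # Collect snippet bodies; both "{name}:" and "{name}+:" accumulate
    # into the same list, which is how "+" (append) works in this sketch.
    snippets, name = {}, None
    for line in doc.splitlines():
        stripped = line.strip()
        m = re.fullmatch(r"\{([^{}]+)\}\+?:", stripped)
        if m:
            name = m.group(1)
            snippets.setdefault(name, [])
        elif stripped == "{/}":
            name = None
        elif name is not None and stripped:
            snippets[name].append(stripped)
    return snippets

def tangle(snippets, root):
    # Expand a snippet: a line that is exactly "{name}" is a reference.
    out = []
    for line in snippets[root]:
        m = re.fullmatch(r"\{([^{}]+)\}", line)
        if m and m.group(1) in snippets:
            out.extend(tangle(snippets, m.group(1)))
        else:
            out.append(line)
    return out

doc = """
{main}:
{for all inputs}
{/}

{for all inputs}:
{filter inputs}
{/}

{filter inputs}:
remove all blue inputs
{/}

{filter inputs}+:
remove all green inputs
{/}
"""

print("\n".join(tangle(parse(doc), "main")))
```

Running it expands `main` through both levels of references, yielding the two filter actions in definition order.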

      In a way, Doxygen or JavaDoc have similar goals: put documentation and code together and generate documentation. But they take the problem in reverse from what literate programming proposes; with Doxygen/JavaDoc, you create your program, put in some little snippets of documentation, and you automatically generate documentation of your code. With LP, you write your documentation describing your program and you generate the program.

      Those two approaches produce radically different results -- the "documentation" created by Doxygen/JavaDoc is more a "reference" kind of documentation, and does little to explain the reasoning behind the program, the choices leading to the different functions or classes, or even something as important as the relationships between classes. With some effort it's possible to have such a doc system be the basis of nice documentation (Apple's Cocoa documentation is great in that respect, for example), but 1/ this requires more work (Cocoa has descriptive documents in addition to the javadoc-like reference) 2/ it really only works well for stuff like libraries and frameworks.

      LP is great because the documentation is really meant for humans, not for computers. It's also great because by nature it will produce better documentation and better code. It's not so great in that it integrates poorly with the way we write code nowadays, and it integrates poorly with OOP.

      But somehow I've always been thinking that there is a fundamentally good idea to explore there, just waiting for better tools/ide to exploit it :-P

      (also, the eponymous book from Knuth is a great read)
    • That's a mischaracterization of literate programming.

      The whole idea of literate programming is to basically write good technical documentation -- think (readable) academic CS papers -- that you can in effect execute. What many people do with Mathematica and Maple worksheets is effectively literate programming.

      It has nothing to do with what language you use, and is certainly not about making your code more COBOL-esque.

      Maybe think of it this way: Good documentation should accurately describe what your c

  • Shocked (Score:5, Interesting)

    by gowen ( 141411 ) <gwowen@gmail.com> on Saturday April 26, 2008 @11:52AM (#23207690) Homepage Journal

    He pitches his idea of "literate programming" which I must admit I've never heard of
    I'm shocked to discover that Knuth is taking an opportunity to push literate programming, given that he's been pushing literate programming at every opportunity for at least 25 years.

    Now, I've no problem with literate programming, but given that even semi-literate practices like "write good comments" haven't caught on in many places, I think Don is flogging a dead horse by suggesting that code should be entirely documentation driven.
    • Re:Shocked (Score:5, Insightful)

      by Coryoth ( 254751 ) on Saturday April 26, 2008 @12:37PM (#23207912) Homepage Journal

      Now, I've no problem with literate programming, but given that even semi-literate practices like "write good comments" hasn't caught on in many places, I think Don is flogging a dead horse by suggesting that code should be entirely documentation driven.
      To be fair to Knuth, I don't think the failure to write good comments detracts from literate programming. What Knuth wants is an inversion of the traditional code/documentation relationship: you write the documentation and embed the code within that, as opposed to concentrating on code and incidentally embedding documentation (as comments) within the code. Ultimately the failure of good comments and good documentation is because people are focussing on the code; as long as documentation and comments are an afterthought they will be forgotten or poorly written. If you switch things around and focus on the documentation, inserting the code, comment-like, within that, then you're focussing on the documentation and it will be good.

      The reason I think literate programming doesn't catch on has mostly to do with the fact that a great many programmers don't bother to think through what they want to do before they code it: they are doing precisely what Knuth mentions he does use unit testing for -- experimental feeling out of ideas. Because they don't start with a clear idea in their heads, of course they don't want to start by writing documentation: you can't document what you haven't thought through. This is the same reason why things like design by contract [wikipedia.org] don't catch on: to write contracts it helps to have a clear idea of what your functions and classes are doing (so you can write your pre-conditions, post-conditions and invariants) before you start hammering out code. The "think first" school of programming is very out of favour (probably mostly because it actually involves thinking).
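As a tiny illustration of the contract-first style mentioned above, here is a hand-rolled design-by-contract sketch in Python (the function name is invented; real DbC systems like Eiffel's have dedicated syntax, this just uses assertions): the pre- and post-conditions are written down before the body is fleshed out, which forces exactly the "think first" step being discussed.

```python
# Hand-rolled design-by-contract sketch: the contract (pre/post-conditions)
# is stated explicitly; the body just has to satisfy it.

def isqrt_floor(n):
    """Integer (floor) square root."""
    # Precondition: n is a non-negative integer.
    assert isinstance(n, int) and n >= 0, "pre: n must be a non-negative int"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is the largest integer whose square is <= n.
    assert r * r <= n < (r + 1) * (r + 1), "post: floor square root"
    return r

print(isqrt_floor(10))  # → 3
```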
      • Out of favor (Score:4, Interesting)

        by illegalcortex ( 1007791 ) on Saturday April 26, 2008 @01:54PM (#23208312)
        It's also out of favor because of how much of the real world of programming works. My very small company does a lot of work for a very, very large company. At my small company, we have one layer of management - the owner of the company. Everyone else is in the level of "not an owner of the company."

        At the large company, there are a multitude of layers of management. Any software they build requires extensive specifications and documentation far in advance of laying down any code. They decide all the aspects of the software before it's written. At my company, the boss just gives us a general outline of what he's thinking about and asks us to feel out the idea. We use a RAD environment and will often have a first iteration within a week. This version tends to get completely reworked, sometimes multiple times. We do not document any of this in advance because the usable version may differ so much from the original ideas.

        At the large company, their projects tend to take years and years, go far over budget and typically are much less useful than they had originally hoped. As a bonus, they are usually bug-ridden and unstable. Many times they just eventually get canceled by the new layer of management, who then get awards for this "cost saving measure." At my company, our projects are typically finished far in advance for a tiny price. They are typically of very high quality, with very minor bugs which we fix rather quickly.

        This large company frequently hires our company to build software rather than trying to do it internally. They are usually amazed at the things we can do.

        Something like "literate programming" is completely anathema to how our company works. If we started trying to write specifications in advance of figuring out what product our clients actually want (as opposed to what they think they want at the start of the process), we'd never get anywhere.

        Now, I will state that our company only works because we don't hire idiots or slackers. Also, I am fully aware that this is not a good way to, for example, design nuclear power plant software or a baggage control system. But for businesses, all that documentation and "thinking" can just cloak the fact that the people building the software don't know what they are doing.
        • Re: (Score:3, Informative)

          by Coryoth ( 254751 )
          I think, perhaps, you're missing the point. Go ahead, build a prototype and try out ideas. Do the Brooks thing, and build one to throw away. Work out exactly what it is you want to do via experimentation. None of that contradicts literate programming, or "thinking first": the prototypes, the messing around, that's part of the thinking (stage one really). Once you've gone through your iterations and want to finalise something... well at that point you do have some specs, you should know what you want to buil
  • It is probably folklore. But the story during my grad school days was that Knuth offered a $1000 prize to anyone finding a bug in TeX, and he doubled it a couple of times. And it was never claimed. If that was true, it is very unlikely he was just flame baiting.
    • by paulbd ( 118132 ) on Saturday April 26, 2008 @12:01PM (#23207748) Homepage
      The prize was not US$1000. It started out very small. Knuth did indeed pay out, and indeed doubled it, several times. From Wikipedia: "The award per bug started at $2.56 (one "hexadecimal dollar") and doubled every year until it was frozen at its current value of $327.68. This has not made Knuth poor, however, as there have been very few bugs claimed. In addition, people have been known to frame a check proving they found a bug in TeX instead of cashing it."
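The schedule quoted from Wikipedia is easy to check: $327.68 is exactly seven doublings of $2.56 (working in cents to avoid floating point):

```python
# Reward doubling: one "hexadecimal dollar" (0x100 = 256 cents),
# doubled yearly until frozen at $327.68 (32768 cents).
reward_cents = 0x100  # $2.56
doublings = 0
while reward_cents < 32768:
    reward_cents *= 2
    doublings += 1
print(doublings, reward_cents)  # → 7 32768
```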
      • by SEMW ( 967629 ) on Saturday April 26, 2008 @03:56PM (#23208904)
        If you define "bug" to mean "unexpected undocumented behaviour", as Knuth seems to, then it's not surprising that there have been very few bugs claimed, since TeX is so very well documented.

        But most people -- and certainly the majority of open source projects these days -- define "bug" as "undesirable behaviour"; and by that standard, TeX is chock full of bugs. To pick a couple of obvious examples: incorrect handling of ASCII 0x22 quotation marks, and treating "etc." as the end of a sentence. These aren't "bugs" to Knuth, since the incorrect behaviour is well documented, but by many people's standards they are.
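For readers who haven't hit these, both behaviours are easy to reproduce in plain TeX/LaTeX, and both have well-known workarounds (a minimal sketch, not a complete style guide):

```tex
% ASCII 0x22 quotes: "quoted" typesets with closing curly quotes on
% both sides; the intended input uses backticks for the opening pair.
``quoted''   % correct
"quoted"     % wrong: renders as ''quoted''

% A period after a lowercase letter is treated as a sentence end, so
% "etc. and" gets extra inter-sentence space; escape the space instead:
etc.\ and    % correct: ordinary interword space
etc. and     % wrong: sentence-ending space after "etc."
```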
    • Re: (Score:3, Interesting)

      by nuzak ( 959558 )
      > But the story during my grad school days was that Knuth offered a $1000 prize to anyone finding a bug in TeX and he doubled it a couple of times.

      The $1000 bounty was from Dan Bernstein with respect to qmail. He's always found a reason to weasel out of ever paying.

      Knuth started the bounty at $2.56 (one "hexidollar") and doubled it every year til it reached $327.68. Several people have claimed it, most people never cashed the checks. One of the first bug finders had his check framed.
    • by TheRaven64 ( 641858 ) on Saturday April 26, 2008 @12:05PM (#23207772) Journal
      The original prize was $2.56 (i.e. 2^8 cents), and he doubled it every time someone found a bug until it reached $327.68. Over 400 bugs have been fixed in TeX, and that's just counting the core VM and typesetting algorithms - all of the rest is in metaprogrammed packages, many of which contain numerous bugs. I'm fairly sure that most programmers could write bug-free code if the only place where bugs counted was in a simple VM with a few dozen instructions that interpreted all of the rest of the code...
    • by 1729 ( 581437 ) <slashdot1729@nospam.gmail.com> on Saturday April 26, 2008 @12:15PM (#23207822)

      It is probably folklore. But the story during my grad school days was that Knuth offered a $1000 prize to anyone finding a bug in TeX, and he doubled it a couple of times. And it was never claimed. If that was true, it is very unlikely he was just flame baiting.
      He offers rewards for reporting errors in his books and for finding bugs in his code:

      http://en.wikipedia.org/wiki/Knuth_reward_check [wikipedia.org]

      Many people save these (usually small) checks as souvenirs. My father -- a frugal mathematician -- received a few $2.56 checks from Knuth, and he promptly cashed each one.
    • He's written some checks. [wikipedia.org] Few of them are cashed. On page 10 of this document (pdf) [tug.org] he explains one.

  • At the end, he even remarks on his adage that young people shouldn't do things just because they're trendy.
    They shouldn't, but we all know that they do it anyway. Peer pressure has a big impact on the lives of "young people."
  • What? (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Saturday April 26, 2008 @12:00PM (#23207732) Journal
    You've heard of TeX, written in Web, the language designed for Literate Programming, but you've not heard of Literate Programming?

    I have a lot of respect for Knuth as an algorithms guy, but anything he says about programming needs to be taken with a grain of salt. When he created the TeX language, he lost all credibility - designing a language in 1978 which makes structured programming almost impossible is just insane. TeX gets a lot of praise as being 'bug free,' but that's really only half true. The core of TeX is a virtual machine and a set of typesetting algorithms, both of which are very simple pieces of code (they'd have to be to run on a PDP-10). Most of the bits people actually use are then metaprogrammed on top of the virtual machine, and frequently contain bugs which are a colossal pain to track down because of the inherent flaws in the language (no scoping, for example).

    If you want to learn about algorithms, listen to Donald Knuth and you will learn a great deal. If you want to learn about programming, listen to Edsger Dijkstra or Alan Kay.

    • Re:What? (Score:5, Insightful)

      by gowen ( 141411 ) <gwowen@gmail.com> on Saturday April 26, 2008 @12:07PM (#23207780) Homepage Journal
      Amen about TeX (and even LaTeX). I consider myself pretty knowledgeable about many computing languages, but every time I've hit a non-trivial problem with making TeX do what I want, I've had to consult with a TeXpert (i.e. the utterly invaluable comp.text.tex). And, sadly, in almost every case the solution has been either insanely baroque, or there's been no real solution at all. LaTeX makes brilliant looking documents, but Jesus wept, it's hard to make your documents look like YOU want, as opposed to how it thinks they should look.
    • Re:What? (Score:5, Interesting)

      by cbart387 ( 1192883 ) on Saturday April 26, 2008 @12:40PM (#23207932)

      If you want to learn about algorithms, listen to Donald Knuth and you will learn a great deal. If you want to learn about programming, listen to Edsger Dijkstra or Alan Kay.
      For those who didn't read the article: Knuth expressed criticism on several of the questions asked, but he didn't want to just duck them. On the 'trendy' question, for instance, he said this:

      With the caveat that there's no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development
      • Re: (Score:3, Funny)

        by jeremyp ( 130771 )
        I don't know. I've found that Dijkstra hasn't said anything new or profound in more than five years.
    • Re: (Score:3, Interesting)

      by sc00p18 ( 536811 )
      Fortune cookie at the bottom of the page for me:

      They are relatively good but absolutely terrible. -- Alan Kay, commenting on Apollos
  • by TuringTest ( 533084 ) on Saturday April 26, 2008 @12:01PM (#23207746) Journal
    Literate programming is an old friend for developers of functional programming languages. I see it like "code for the human mind": it provides a source code that is well adjusted to the needs of the developer, not just the machine.

    It interleaves code and documentation in the same files, and provides a specialized compiler to tell the two kinds of content apart. Just like Doxygen and Javadoc can extract the comments from a source code project, the "tangle" process can extract all the code from a Literate program and pass it to a classic compiler.

    Now that C and C++ seem to have a declining popularity [slashdot.org], maybe we can look for better ways of getting away from the bare metal (which, don't forget it, is why those languages became popular in the beginning). Don't get me wrong, they served us well for 36 years, but I think it's time again to begin caring more for the developers' requirements and less for the hardware requirements.
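    The tangle step can be sketched in a few lines of Python. The chunk delimiters below mimic noweb-style syntax and are purely an illustration, not Knuth's actual WEB format:

```python
# Minimal "tangle" sketch: extract code chunks from a literate source,
# discarding the surrounding prose. The <<name>>= / @ delimiters are a
# noweb-style assumption for illustration only.

def tangle(literate_source):
    """Return only the code lines, dropping the documentation."""
    code_lines = []
    in_code = False
    for line in literate_source.splitlines():
        if line.startswith("<<") and line.rstrip().endswith(">>="):
            in_code = True          # a named code chunk begins
            continue
        if line.strip() == "@":
            in_code = False         # back to documentation
            continue
        if in_code:
            code_lines.append(line)
    return "\n".join(code_lines)

doc = """Here we explain, in prose, what the function does.
<<square>>=
def square(x):
    return x * x
@
More prose may follow the chunk."""

print(tangle(doc))
```

    A real tool (CWEB, noweb) also handles chunk references and reordering; the point here is just that the "strip the text, keep runnable code" step is mechanical.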
  • by LargeWu ( 766266 ) on Saturday April 26, 2008 @12:06PM (#23207774)
    The reason for his dismissive attitude toward unit tests -- that he knows exactly how all of his code works, and what impact a change will have -- is exactly the reason you need them. In the real world, most programmers do in fact have to share their code with others. You're not always going to know the ramifications of refactoring a particular block of code, even if you wrote it yourself. And if you can keep all of that in your head at once, either your program is trivial, or you are some sort of supergenius. Now while I think the TDD guys are a little bit overzealous sometimes with their "100% coverage or die" attitude, unit testing is still a good habit to get into, regardless of what Knuth thinks.
    • by CodeBuster ( 516420 ) on Saturday April 26, 2008 @12:25PM (#23207876)

      Unit tests, especially if written and organized in an intelligent fashion, can be of tremendous value in eliminating small coding errors that were not intended but are bound to creep in if the project is large enough. If you are clever about your tests then you can usually inherit the same test multiple times and get multiple uses out of a test or part of a test. If unit tests are not used then it is more likely that several small errors in seemingly unrelated classes or methods can combine in an unforeseen way to produce nasty and unexpected emergent behavior that is difficult to debug when the whole system is put together. Unit tests will not make up for crappy design, but they will help detect minor flaws in a good design that might otherwise have gone undetected until final system integration, where they could be much more difficult to debug.

      I actually have a great deal of respect for Knuth, but I think that he is wrong about unit tests. Perhaps it is the difference between the academic computer scientist and the career programmer who admires the ivory tower, but is willing to make concessions in the name of expedience and getting work done on time.

      • by ivan256 ( 17499 ) on Saturday April 26, 2008 @01:17PM (#23208124)
        I think that the arguments about unit tests often go too far in one direction or the other. People either treat unit tests as a religion, or they disavow them entirely.

        People in the first group end up with a project full of tests where many are valid, many end up testing basic language functions, and many end up not helping due to oversimplification of behavior in the mocked interfaces.

        People in the second group end up missing simple problems that could have been caught had they exercised their code.

        Both groups waste a lot of time.

        Perhaps this is what you were trying to say when you said "TDD guys are overzealous". I think there are other problems with TDD. Namely, you can only use it when you don't need to learn how to solve the problem as you go... and most interesting programs require exactly that kind of learning.

        Really, people need to use good testing judgement.

      • by seaturnip ( 1068078 ) on Saturday April 26, 2008 @01:59PM (#23208340)
        It actually doesn't sound to me like Knuth has heard of the term 'unit test' before this interview at all. It sounds like he thinks it means prototyping a function before writing the real version. Given that he likes to push his model of documentation-driven programming, I think he might be more sympathetic to unit tests if he understood that they can serve as a kind of formalized documentation.
    • Re: (Score:2, Insightful)

      by Jeff DeMaagd ( 2015 )
      I would agree. I was probably doing "unit tests" in programs before it was given a name. As far as I'm concerned, not doing them is a waste of time. I learned that the hard way, so I got in the habit of writing code to test code, to make sure it was providing all the right results for several different circumstances. If I make changes, the test is re-run to be sure I didn't miss something.

      I think it's possible that this person, despite his earlier genius, has ceased to be as useful as his previous s
  • No, that isn't arguable.

    TeX got started in 1977, after Unix (1974), well after SPICE (1973), and about even with BSD.
  • by free space ( 13714 ) on Saturday April 26, 2008 @12:20PM (#23207848)

    ...the idea of immediate compilation and "unit tests" appeals to me only rarely, when I'm feeling my way in a totally unknown environment and need feedback about what works and what doesn't. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."


    I'm not sure, but I think he's talking personally about his own work on his code. Remember that he comes from an era where people had the goal of mathematically proving that the code is indeed correct. He isn't necessarily doing this now, but my personal guess is that he prefers statically checking the code to checking a running program. In certain kinds of mathematical/scientific applications this could make sense.
  • He's right (Score:5, Insightful)

    by Brandybuck ( 704397 ) on Saturday April 26, 2008 @12:33PM (#23207904) Homepage Journal
    He's right about unit tests... sort of. Just as most coders shouldn't be designing interfaces, most coders don't know how to test. It can often be more work writing the unit tests than writing the code.

    If you have a function that multiplies two integers, most coders will write a test that multiplies two numbers. That's not good enough. You need to consider boundary conditions. For example, can you multiply MAX_INT by MAX_INT? MAX_INT by -MAX_INT? Etc. With real world functions you are going to have boundaries up the whazoo. In addition, if you have a function that takes data coming from the user, check for invalid cases even if another function is validating. Check for null or indeterminate values. Write tests that you expect to fail.
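    A concrete sketch of what that looks like in practice. The `safe_mul` function and its 32-bit overflow behaviour are invented purely to give the boundary tests something to bite on, not taken from any real codebase:

```python
# Hypothetical 32-bit checked multiply, invented to illustrate
# boundary-condition testing. Python ints don't overflow, so the
# 32-bit range check is simulated explicitly.
MAX_INT = 2**31 - 1
MIN_INT = -2**31

def safe_mul(a, b):
    """Multiply two ints, raising OverflowError outside 32-bit range."""
    if not isinstance(a, int) or not isinstance(b, int):
        raise TypeError("safe_mul expects integers")
    result = a * b
    if result > MAX_INT or result < MIN_INT:
        raise OverflowError("product out of 32-bit range")
    return result

# The naive test most coders would write and stop at:
assert safe_mul(6, 7) == 42

# The boundary tests the parent argues are actually needed:
assert safe_mul(MAX_INT, 1) == MAX_INT
try:
    safe_mul(MAX_INT, MAX_INT)   # a test you EXPECT to fail
except OverflowError:
    pass
try:
    safe_mul(None, 3)            # invalid input, even if validated elsewhere
except TypeError:
    pass
```

    The last two cases are the "tests you expect to fail" the parent mentions: you assert that the failure happens, and happens in the right way.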
    • Re:He's right (Score:4, Insightful)

      by Brandybuck ( 704397 ) on Saturday April 26, 2008 @12:37PM (#23207918) Homepage Journal
      Conclusion: Knuth is somewhat right, in that most unit tests written by coders are useless. But unit tests themselves are not.
    • Re:He's right (Score:5, Interesting)

      by clap_hands ( 320732 ) on Saturday April 26, 2008 @01:12PM (#23208094) Homepage
      The thing about unit testing is that it's subject to the law of diminishing returns. A simple test of the basic functionality gets you a lot for minimal effort. Writing dozens of carefully chosen tests to examine boundary conditions etc. gives you a little bit more, but for a great deal more effort. Whether or not it's worth it depends very much on the situation and the nature of the code you're writing.
  • .... use a spell checker.

    Ultimately literate programming is a matter of translation.
    When you boil it all down to what the machine understands, it comes out binary.

    To achieve the literate programming goal, it's clear there needs to be a programming language designed for it and a translator, be it a compiler or interpreter, that can take the result and convert it to machine-understandable binary that runs as intended by the programmer/writer.
  • On multicore (Score:3, Insightful)

    by cryptoluddite ( 658517 ) on Saturday April 26, 2008 @12:45PM (#23207948)
    Making a program parallel will always be too hard for most programmers. But that's exactly why you don't have normal programmers do it... have the libraries do it automatically. Functions like qsort(3) are already black boxes, so they can be made to always run in parallel when the input is large enough. Other functions like, say, Ruby's .collect can run in parallel. For things like .each there can be a parallel and a sequential version, and the programmer can pick whichever is appropriate.

    But to do this we need operating systems that can efficiently and reliably schedule code across cores. Add an ability to 'bind' threads together, so that they always schedule at the same time but on separate real processors. This gives the program the ability to know that an operation split between these threads will always complete faster than running sequentially, without the vagaries of scheduling possibly starving one thread and making it run much slower.

    Once you have this then you can automatically get some speedups from multiple cores on programs that are designed to only run sequentially, and more speedup on programs with just minor tweaks. You aren't going to get perfect scaling this way, but you will get substantial improvements at virtually no cost to the programmer.
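    The "parallel only when the input is large enough" dispatch described above might look something like this inside a library. The threshold value and the thread-based pool are arbitrary illustrative choices, not tuned constants:

```python
# Sketch of a library-level map that silently goes parallel for large
# inputs and stays sequential for small ones, as the parent suggests.
# The threshold is an arbitrary illustrative value.
from concurrent.futures import ThreadPoolExecutor

PARALLEL_THRESHOLD = 1000

def smart_map(fn, items, workers=4):
    """Apply fn to every item; parallelize only when it may pay off."""
    items = list(items)
    if len(items) < PARALLEL_THRESHOLD:
        return [fn(x) for x in items]          # small input: sequential
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))       # large input: parallel

# Callers never choose; the library decides, as with a parallel qsort.
print(smart_map(lambda x: x * x, range(5)))
print(smart_map(lambda x: x + 1, range(2000))[:3])
```

    The caller's code is identical in both cases, which is the whole point: the parallelism lives behind the black-box interface.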
    • Re:On multicore (Score:5, Insightful)

      by Nomen Publicus ( 1150725 ) on Saturday April 26, 2008 @01:43PM (#23208264)
      I've been involved with "parallel programming" for 20 years. There is no mystery involved. The C family of languages are particularly ill suited for parallel programming which may be why we are seeing reports recently claiming that it is "too difficult". Pointers in particular make life difficult for both the programmer and the compiler.

      There are a few techniques to be mastered and using a language designed with parallelism in mind helps hugely with the picky details.

    • Nonsense (Score:3, Interesting)

      by MarkusQ ( 450076 )

      Making a program parallel will always be too hard for most programmers.

      Nonsense. The problem isn't with the programmers, it's with the languages. Writing object oriented code in Fortran is too difficult for most programmers, but that doesn't mean the programmers aren't up to the task; it means the language they are using isn't well suited to the job.

      Learn a little Erlang, or Haskell, to see how easy writing massively parallel programs can be. --MarkusQ

  • by CaptKilljoy ( 687808 ) on Saturday April 26, 2008 @12:48PM (#23207968)
    The headline is misleading. Donald Knuth represents the epitome of the solitary super-programmer stereotype, so it's only natural that he sees no need for unit tests to catch mistakes or extreme programming to improve team development practices. I don't think he's necessarily saying that those things are without value for ordinary programmers.
  • by DirtySouthAfrican ( 984664 ) on Saturday April 26, 2008 @12:56PM (#23208016) Homepage
    "I've only proven that it works, I haven't tested it" - Knuth
  • by Cal Paterson ( 881180 ) * on Saturday April 26, 2008 @12:59PM (#23208022)
    Knuth said many of these supposedly outrageous things in passing, and did so while noting that he is an academic. Most of the claims in the summary vastly exaggerate the strength of the claims in the interview. Knuth specifically states:

    there's no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development.
    Knuth doesn't claim that unit testing is a waste of time for everyone, just that it is a waste of time for him, in his circumstances. This makes sense, considering he follows his own (diametrically opposed) doctrine of "literate programming", which, if the summary author has never heard of, should cause him to be cautious about interpreting Knuth.
  • by superwiz ( 655733 ) on Saturday April 26, 2008 @01:11PM (#23208092) Journal
    and their unit test. in my days, if you needed a language, you wrote your own assembly. and when you couldn't document it, you wrote your own mark up language.... and your own fonts. phew... multiple cores. who needs them?!
  • by zullnero ( 833754 ) on Saturday April 26, 2008 @01:37PM (#23208240) Homepage
    Seriously, if you're "religious" about unit testing and mock objects, then you really need to revise the way you live your life.

    It's just a good habit to get into, if you take it seriously and don't just create tests for silly little things like "is my text box centered where I slapped it on the form with the GUI form tool". That's kind of the point he's trying to make: program intelligently in the first place so you avoid an insane number of redundant tests to pass on each build.

    I've been doing literate programming (well, as close as you can with C and its derivative languages) for a long time now. I've watched XP coders take that literacy and chop it all up because "it didn't look pretty enough". The idea with making something literate is to make it so clear that you can reduce the total number of tests needed to only the ones that test the actual expected outputs of that function. That's something that intelligent coders who don't just follow the Agile rulebook, but apply it effectively, can do. I don't know how many times I'd see a piece of code that did one simple task and had one test checking its output, and then another coder drops in 3 more tests because they "didn't feel comfortable with only one", without specifying WHY. That is how you end up with redundant tests that muck up your test infrastructure.
  • by galimore ( 461274 ) on Saturday April 26, 2008 @01:37PM (#23208246)
    I'll forgive you for being a Java developer, but the fathers of C have always cited readability first (The C Programming Language ~1978). They don't call it "literate programming", which is simply a trendy buzzword, but the idea of programming for readability has been around for an extremely long time.
  • Worst Summary Ever (Score:5, Insightful)

    by TerranFury ( 726743 ) on Saturday April 26, 2008 @01:58PM (#23208332)

    The summary sounds like it was written by the headline-producing monkeys at Fox, CNN -- or hell, at the Jerry Springer show. Donald Knuth is not "playing hardball." Nobody needs to call the interview "raw and uncut," or "unplugged."

    The interview has almost nothing to do with unit testing and the little Knuth does have to say about the practice is hardly "ripping."

    When will people stop sullying peoples' good names by sensationalizing everything they say?

    Knuth is a well-respected figure who makes moderate, thoughtful statements. From the summary, you'd think he was a trash-talking pro-wrestler.

  • heresy (Score:4, Insightful)

    by tsotha ( 720379 ) on Saturday April 26, 2008 @02:00PM (#23208342)

    After initially being a proponent, I've come to the same conclusion about unit tests myself. I don't think they're worthless, but the time you spend developing or maintaining unit tests could be more profitably spent elsewhere. Especially maintaining.

    That's my experience, anyway. I suppose it's pretty heavily dependent on your environment, your customers, and exactly how tolerant your application is of bugs. Avionics software for a new jet fighter has a different set of demands than ye olde "display records from the database" business application. More applications fall in the second category than the first.

  • by pslam ( 97660 ) on Saturday April 26, 2008 @02:44PM (#23208560) Homepage Journal

    I expected much, much more from Knuth than what I've just seen in that interview and after reading the design of MMIX.

    Knuth dismisses multi-core and multi-threading as a fad and an excuse by manufacturers to stop innovating. I'm amazed someone of his intelligence has managed not to read up on exactly WHY this is happening:

    • Faster single-thread processing means faster clocking.
    • Faster clocking means smaller feature size.
    • Eventually you run into the limit of your process, shrink further and continue.
    • Finally, you run into the thermal limit of your process, and you cannot go faster in the same area.
    • To go faster you have to go sideways - more parallelism.

    So he dismisses the technical problems that manufacturers have been falling over for the last few years as merely a lack of imagination. No - parallelism is here to stay, and people need to realise it rather than just wishing up some magical world where physics aren't a problem.

    He dismisses multi-threading as too hard. It isn't, if you're not unfair to the concept. Nobody is getting 100% out of their single-threaded algorithms. You always have stalls due to cache misses, branching, the CPU not having exactly the right instructions you need, linkage, whatever. Nobody EXPECTS you to get 100% of 1 CPU's theoretical speed. So why do people piss all over multi-core/multi-threading when it doesn't achieve perfect speed-ups?

    If you achieve only a 50% speed-up using 2 cores compared to 1, you've done a good job, in my opinion. That means you could have a dual-core 3GHz CPU or a single-core 4.5GHz CPU. Spot which of those actually exists. Getting a 25-50% speed-up from multi-core is easy. The 100% speed-up is HARD. If you stop concentrating on perfection, you'll notice that multi-threading is a) actually not hard to implement, and b) worthwhile.
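    That 50%-on-2-cores arithmetic is essentially Amdahl's law. A quick sketch of the numbers (the serial fractions here are illustrative guesses, not measurements):

```python
# Amdahl's law: speedup = 1 / (serial + (1 - serial) / n_cores).
def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup for a program with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A program that is 1/3 serial gets exactly the 50% gain on 2 cores:
print(round(amdahl_speedup(1/3, 2), 2))   # 1.5x, i.e. a 50% speed-up
# A perfect 2x would require a 0% serial fraction:
print(amdahl_speedup(0.0, 2))             # 2.0
# Even with 1000 cores, a half-serial program never reaches 2x:
print(round(amdahl_speedup(0.5, 1000), 3))
```

    Which is exactly the point: the 100% speed-up demands eliminating the serial fraction entirely, while the 25-50% speed-up tolerates quite a lot of it.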

    Then there's MMIX. Knuth thinks that simplicity has to work all the way down to the CPU design. Yes, but not simplicity by way of having instructions made up of an 8-bit opcode and 3x8-bit register indexes. A CPU doesn't give a crap how elegant that looks. It's also BAD design - 256 registers make for a SLOW register file. It'll either end up being the slow critical path in the CPU (limiting top clock speed) or taking multiple cycles to access. There's also no reason to have 256 opcodes. He should have a look at ARM - it manages roughly the same functionality with far fewer opcodes.

    It almost pains me to see the MMIX design and how it's a) not original, b) done better in existing systems already on the market, e.g. ARM, and c) doesn't solve any of the performance limit problems he complains about. What's going on with Knuth?

    • Remember that MMIX is not designed to be a practical hardware computer architecture. It's designed to illustrate algorithms written in assembly language. It's optimized for humans to read and write, not for computers to execute quickly. I'm glad that he's keeping assembly as part of his books, and that's he's updated them to a 64-bit RISC architecture. Reading MMIX assembly programs is the closest to hardware that some readers will ever get, so he has one chance to show those readers how computers actually
  • Literate programming (Score:3, Informative)

    by pikine ( 771084 ) on Saturday April 26, 2008 @06:57PM (#23210020) Journal

    I think most people who post here don't know what literate programming is. It's more like writing a textbook explaining how your code works, but you can strip away the text and actually have runnable code. This code can be in any language of your choice. It makes sense from Knuth's point of view, but for many of us, we don't write textbooks for a living.

    Knuth also doesn't need unit testing because he probably runs (or type checks) the program in his head. Again, for most of us, seeing the program run provides additional assurance that it works. Unit tests also provide a specification of your program. It doesn't have to be just b = f(a). For example, if your code implements a balanced binary search tree, a unit test could check the depth of all subtrees to make sure the tree is balanced. Another unit test would check that the tree is ordered. You can prove by the structure of your program that these properties hold, but a layman doesn't want to write proofs for the code he writes, so the second best alternative is to use unit tests.
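    Those two balanced-tree properties are easy to express as test code. A minimal sketch, using ad-hoc tuples as tree nodes (the representation is an assumption for illustration, not any particular library's):

```python
# Each node is (key, left, right); None is the empty tree. This ad-hoc
# representation exists only to illustrate property-style unit tests.

def depth(node):
    if node is None:
        return 0
    _, left, right = node
    return 1 + max(depth(left), depth(right))

def is_balanced(node):
    """Every node's subtrees differ in depth by at most one."""
    if node is None:
        return True
    _, left, right = node
    return (abs(depth(left) - depth(right)) <= 1
            and is_balanced(left) and is_balanced(right))

def in_order(node):
    if node is None:
        return []
    key, left, right = node
    return in_order(left) + [key] + in_order(right)

# A small tree that should satisfy both properties:
tree = (4, (2, (1, None, None), (3, None, None)),
           (6, (5, None, None), (7, None, None)))

# The two "specification" tests described above:
assert is_balanced(tree)
assert in_order(tree) == sorted(in_order(tree))
```

    Neither test cares how the tree was built; they only check the invariants, which is what makes them a specification rather than a proof.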

    About parallel programming, Knuth is actually right. Many high-performance parallel programs are tightly coupled to the underlying architecture. But we can write a high-level, essentially sequential program that uses libraries to compute things like FFT and matrix multiplication in parallel. This tends to be the trend anyway.
