Donald Knuth Rips On Unit Tests and More

eldavojohn writes "You may be familiar with Donald Knuth from his famous Art of Computer Programming books, but he's also the father of TeX and, arguably, one of the founders of open source. There's an interesting interview where he says a lot of stuff I wouldn't have predicted. One of the first surprises to me was that he didn't seem to be a huge proponent of unit tests. I use JUnit to test parts of my projects maybe 200 times a day, but Knuth calls that kind of practice a 'waste of time' and claims 'nothing needs to be "mocked up."' He also states that methods of writing software to take advantage of parallel hardware (like the multi-core systems we've discussed) are too difficult for him to tackle due to ever-changing hardware. He even goes so far as to vent about his unhappiness with chipmakers for forcing us into the multicore realm. He pitches his idea of 'literate programming,' which I must admit I'd never heard of but find intriguing. At the end, he even repeats his adage that young people shouldn't do things just because they're trendy. Whether you love him or hate him, he sure has some interesting/flame-bait things to say."
  • by Gordonjcp ( 186804 ) on Saturday April 26, 2008 @12:50PM (#23207686) Homepage
    ... looks like it falls into the same trap as COBOL. The idea that by making programming languages incredibly verbose, they will somehow become easier to use is a fallacy.

    Using "MULTIPLYBY" instead of "*" isn't going to make your code easier to read.
  • by TuringTest ( 533084 ) on Saturday April 26, 2008 @01:01PM (#23207746) Journal
    Literate programming is an old friend for developers of functional programming languages. I see it like "code for the human mind": it provides a source code that is well adjusted to the needs of the developer, not just the machine.

    It interleaves code and documentation in the same files, and provides a specialized tool to tell the two kinds of content apart. Just like Doxygen and Javadoc can extract the comments from a source code project, the "tangle" process can extract all the code from a literate program and pass it to a classic compiler.
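    As a rough sketch of that "tangle" step, here is a toy extractor. The chunk syntax is hypothetical, loosely modeled on noweb rather than Knuth's actual WEB format:

```python
import re

# A toy literate source: prose interleaved with named code chunks in
# <<name>>= ... @ blocks (hypothetical syntax, not Knuth's real WEB format).
doc = """
We start by greeting the user.
<<main>>=
print(greeting())
@
The greeting itself is kept separate so the prose can discuss it first.
<<greeting>>=
def greeting():
    return "Hello, literate world"
@
"""

def tangle(source):
    """Extract every code chunk; definitions first, the main chunk last."""
    chunks = re.findall(r"<<(\w+)>>=\n(.*?)\n@", source, re.DOTALL)
    # Sort so that non-main chunks (definitions) precede the program body.
    ordered = sorted(chunks, key=lambda c: c[0] == "main")
    return "\n".join(body for _, body in ordered)

code = tangle(doc)
print(code)
```

    The prose order serves the reader; the tangle step reorders chunks for the compiler, which is the point of the scheme.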

    Now that C and C++ seem to have a declining popularity [slashdot.org], maybe we can look for better ways of getting away from the bare metal (which, don't forget, is why those languages became popular in the first place). Don't get me wrong, they served us well for 36 years, but I think it's time again to begin caring more about the developers' requirements and less about the hardware's.
  • by CastrTroy ( 595695 ) on Saturday April 26, 2008 @01:04PM (#23207760)
    I'm not sure if MultiplyBy is any better. It takes longer to type, and most people have understood that * (or x) means multiply since second grade. The thing I like about more verbose languages like VB.Net is that they force you to write out things that most good programmers would write as comments anyway. At the end of a while loop, you write "End While". Most people using C++/Java/C# end up writing "} //end while" anyway, especially if the loop goes over 6 or 7 lines, and even more likely if there's a nested construct like an if statement in there. Seeing the "End While" lets you know what you are finishing off, without scrolling up and trying to see which bracket matches, which even with visual brace matching can sometimes be difficult.
  • by LargeWu ( 766266 ) on Saturday April 26, 2008 @01:06PM (#23207774)
    The reason for his dismissive attitude toward unit tests - that he knows exactly how all of his code works, and what impact a change will have - is exactly the reason you need them. In the real world, most programmers do in fact have to share their code with others. You're not always going to know the ramifications of refactoring a particular block of code, even if you wrote it yourself. And if you can keep all of that in your head at once, either your program is trivial, or you are some sort of supergenius. Now while I think the TDD guys are a little bit overzealous sometimes with their "100% coverage or die" attitude, unit testing is still a good habit to get into, regardless of what Knuth thinks.
  • Re:What? (Score:5, Insightful)

    by gowen ( 141411 ) <gwowen@gmail.com> on Saturday April 26, 2008 @01:07PM (#23207780) Homepage Journal
    Amen about TeX (and even LaTeX). I consider myself pretty knowledgeable about many computing languages, but every time I've hit a non-trivial problem with making TeX do what I want, I've had to consult with a TeXpert (i.e. the utterly invaluable comp.text.tex). And, sadly, in almost every case the solution has been either insanely baroque, or there's been no real solution at all. LaTeX makes brilliant looking documents, but Jesus wept, it's hard to make your documents look like YOU want, as opposed to how it thinks they should look.
  • by CastrTroy ( 595695 ) on Saturday April 26, 2008 @01:13PM (#23207818)
    So basically it's the same as the XML comments you can put in your .Net or Java code to create JavaDocs, or whatever they are called in .Net, based on the comments in the code?
  • by Eunuchswear ( 210685 ) on Saturday April 26, 2008 @01:18PM (#23207832) Journal

    Most people using C++/Java/C# end up writing "} //end while" anyway,
    Pray god I never have to work on code written by these fictitious "most people".

    I'd kill any colleague of mine who wrote such a vacuous comment. With a golf club, in front of its cow-orkers to drive the lesson home.
  • by davolfman ( 1245316 ) on Saturday April 26, 2008 @01:20PM (#23207844)
    I don't. It's pointless if you format your code decently. The closing bracket and the decrease in indentation should be enough. The only place I can see it being useful is when you have a truckload of nested brackets, and even then you want something a lot more useful than "end while"; it should at least name the stinking loop.
  • by 1729 ( 581437 ) <slashdot1729@gma i l .com> on Saturday April 26, 2008 @01:22PM (#23207856)

    Most people using C++/Java/C# end up writing "} //end while" anyway,
    Pray god I never have to work on code written by these fictitious "most people".

    I'd kill any colleague of mine who wrote such a vacuous comment. With a golf club, in front of its cow-orkers to drive the lesson home.
    I sometimes add comments like that if the brace is closing a deeply nested block, but then the comment indicates which particular loop or block is ending.
  • by CodeBuster ( 516420 ) on Saturday April 26, 2008 @01:25PM (#23207876)

    Unit tests, especially if written and organized in an intelligent fashion, can be of tremendous value in eliminating small coding errors that were not intended but are bound to creep in if the project is large enough. If you are clever about your tests then you can usually inherit the same test multiple times and get multiple uses out of a test or part of a test. If unit tests are not used then it is more likely that several small errors in seemingly unrelated classes or methods can combine in an unforeseen way to produce nasty and unexpected emergent behavior that is difficult to debug when the whole system is put together. Unit tests will not make up for crappy design, but they will help detect minor flaws in a good design that might otherwise have gone undetected until final system integration, where they could be much more difficult to debug.

    I actually have a great deal of respect for Knuth, but I think that he is wrong about unit tests. Perhaps it is the difference between the academic computer scientist and the career programmer who admires the ivory tower, but is willing to make concessions in the name of expedience and getting work done on time.

  • He's right (Score:5, Insightful)

    by Brandybuck ( 704397 ) on Saturday April 26, 2008 @01:33PM (#23207904) Homepage Journal
    He's right about unit tests... sort of. Just as most coders shouldn't be designing interfaces, most coders don't know how to test. It can often be more work writing the unit tests than writing the code.

    If you have a function that multiplies two integers, most coders will write a test that multiplies two numbers. That's not good enough. You need to consider boundary conditions. For example, can you multiply MAX_INT by MAX_INT? MAX_INT by -MAX_INT? Etc. With real world functions you are going to have boundaries up the whazoo. In addition, if you have a function that takes data coming from the user, check for invalid cases even if another function is validating. Check for null or indeterminate values. Write tests that you expect to fail.
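    A minimal sketch of that boundary testing in Python's unittest; the multiply wrapper and its 32-bit overflow rule are hypothetical, just to make the boundaries concrete:

```python
import unittest

MAX_INT = 2**31 - 1  # boundary for a 32-bit signed int

def multiply(a, b):
    """Hypothetical wrapper that must reject results outside 32-bit range."""
    result = a * b
    if result > MAX_INT or result < -MAX_INT - 1:
        raise OverflowError("product out of 32-bit range")
    return result

class TestMultiplyBoundaries(unittest.TestCase):
    def test_happy_path(self):
        # The only case most coders write.
        self.assertEqual(multiply(6, 7), 42)

    def test_max_by_one(self):
        # Exactly at the boundary: must still succeed.
        self.assertEqual(multiply(MAX_INT, 1), MAX_INT)

    def test_max_by_max_overflows(self):
        # The case naive tests skip: MAX_INT * MAX_INT must fail loudly.
        with self.assertRaises(OverflowError):
            multiply(MAX_INT, MAX_INT)

    def test_max_by_negative_max_overflows(self):
        with self.assertRaises(OverflowError):
            multiply(MAX_INT, -MAX_INT)
```

    Run it with python -m unittest; the MAX_INT cases are the ones a happy-path-only suite never exercises.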
  • by Swizec ( 978239 ) on Saturday April 26, 2008 @01:33PM (#23207906) Homepage
    I concur. Comments should tell WHY the while block is there and what it DOES. Not where it starts and where it ends, the code tells us that descriptively enough.

    I've seen code blocks several hundred lines long and it was never ambiguous where they started and ended.
  • Re:Shocked (Score:5, Insightful)

    by Coryoth ( 254751 ) on Saturday April 26, 2008 @01:37PM (#23207912) Homepage Journal

    Now, I've no problem with literate programming, but given that even semi-literate practices like "write good comments" haven't caught on in many places, I think Don is flogging a dead horse by suggesting that code should be entirely documentation driven.
    To be fair to Knuth, I don't think the failure to write good comments detracts from literate programming. What Knuth wants is an inversion of the traditional code/documentation relationship: you write the documentation and embed the code within that, as opposed to concentrating on code and incidentally embedding documentation (as comments) within the code. Ultimately the failure of good comments and good documentation is because people are focussing on the code; as long as documentation and comments are an afterthought they will be forgotten or poorly written. If you switch things around and focus on the documentation and insert the code, comment-like, within that, then you're focussing on the documentation and it will be good.

    The reason I think literate programming doesn't catch on has mostly to do with the fact that a great many programmers don't bother to think through what they want to do before they code it: they are doing precisely what Knuth mentions he does use unit testing for -- experimental feeling out of ideas. Because they don't start with a clear idea in their heads, of course they don't want to start by writing documentation: you can't document what you haven't thought through. This is the same reason why things like design by contract [wikipedia.org] don't catch on: to write contracts it helps to have a clear idea of what your functions and classes are doing (so you can write your pre-conditions, post-conditions and invariants) before you start hammering out code. The "think first" school of programming is very out of favour (probably mostly because it actually involves thinking).
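    A minimal design-by-contract sketch, using plain assertions since most mainstream languages lack native contract support; the function itself is hypothetical:

```python
def sqrt_floor(n):
    """Integer square root, with its contract written down before the body."""
    # Pre-condition: the caller must supply a non-negative integer.
    assert isinstance(n, int) and n >= 0, "pre: n must be a non-negative int"

    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1

    # Post-condition: r is the largest integer whose square does not exceed n.
    assert r * r <= n < (r + 1) * (r + 1), "post: floor-sqrt property violated"
    return r

print(sqrt_floor(10))
```

    Writing the pre- and post-conditions first forces exactly the up-front thinking the parent describes; the code then has to satisfy a spec that already exists.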
  • Re:He's right (Score:4, Insightful)

    by Brandybuck ( 704397 ) on Saturday April 26, 2008 @01:37PM (#23207918) Homepage Journal
    Conclusion: Knuth is somewhat right, in that most unit tests written by coders are useless. But unit tests themselves are not.
  • by Jeff DeMaagd ( 2015 ) on Saturday April 26, 2008 @01:40PM (#23207930) Homepage Journal
    I would agree. I was probably doing "unit tests" in programs before the practice was given a name. As far as I'm concerned, not doing them is the real waste of time; I learned that the hard way, so I got in the habit of writing code to test code, to make sure it was providing all the right results under several different circumstances. If I make changes, the tests are re-run to be sure I didn't miss something.

    I think it's possible that this person, despite his earlier genius, has ceased to be as useful as his previous self. Genius is very often like that, they make a good body of work at one point in their life, and their previous success seems to alter them to the point that later work is suspect or just wrong. Sometimes it's ego, other times it's just being stuck in a mental rut, or whatever other reason there may be.
  • On multicore (Score:3, Insightful)

    by cryptoluddite ( 658517 ) on Saturday April 26, 2008 @01:45PM (#23207948)
    Making a program parallel will always be too hard for most programmers. But that's exactly why you don't have normal programmers do it... have the libraries do it automatically. Functions like qsort(3) are already black boxes, so they can be made to always run in parallel when the input is large enough. Other functions, like Ruby's .collect, can run in parallel. For things like .each there can be a parallel and a sequential version, and the programmer can pick whichever is appropriate.

    But to do this we need operating systems that can efficiently and reliably schedule code across cores. Add the ability to 'bind' threads together, so that they always schedule at the same time but on separate physical processors. This gives the program the ability to know that an operation split between these threads will always complete faster than running sequentially, without the vagaries of scheduling possibly starving one thread and making it run much slower.

    Once you have this then you can automatically get some speedups from multiple cores on programs that are designed to only run sequentially, and more speedup on programs with just minor tweaks. You aren't going to get perfect scaling this way, but you will get substantial improvements at virtually no cost to the programmer.
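    A sketch of that library-does-it-for-you idea in Python; the function name and threshold are made up, and with CPython's GIL this illustrates the API shape rather than a real speedup:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_collect(func, items, threshold=1000, workers=4):
    """A library-level .collect/map that decides for itself whether to go parallel.

    Small inputs run sequentially; large ones are farmed out to a pool,
    so the caller never has to think about threads (the parent's point).
    """
    if len(items) < threshold:
        return [func(x) for x in items]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so the result is identical
        # to the sequential path by construction.
        return list(pool.map(func, items))

squares = parallel_collect(lambda x: x * x, range(2000))
```

    The caller writes the same one-liner either way; the threshold and worker count live inside the library, where a specialist can tune them.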
  • by CaptKilljoy ( 687808 ) on Saturday April 26, 2008 @01:48PM (#23207968)
    The headline is misleading. Donald Knuth represents the epitome of the solitary super-programmer stereotype, so it's only natural that he sees no need for unit tests to catch mistakes or extreme programming to improve team development practices. I don't think he's necessarily saying that those things are without value for ordinary programmers.
  • Dismissive of DAK (Score:3, Insightful)

    by symbolset ( 646467 ) on Saturday April 26, 2008 @01:50PM (#23207982) Journal

    That's a brave stance. He's old, but he hasn't reached his dotage yet. The good doctor has contributed more to the science of information than most, and almost certainly more than you.

    Debugging ALGOL on punch cards as he has done would be brutally painful, of course, but here we are in 2008 with no punch cards or ALGOL.

    One of the reasons why we're reinventing so much over and over with nuisances like VB and C# is that developers are architecting grand toolchains based on ideas that were proven incorrect back in the 1960s. They get a lot of profit from their workarounds, and then we burn it all down and start over because they all contain the same fatal flaws.

    my dualcore laptop really has no problem with that.

    That would be because you haven't installed Vista on it yet.

    Having watched this tragedy unfold for a quarter century I've often shaken my head and wondered what y'all were thinking. And then I remember that I once thought my parents were fools too. If you can read TAOCP and understand a good fraction of it you will come away with a firmer foundation for the way all things work. It's a tough slog, though, and not everybody is capable.

  • by ivan256 ( 17499 ) on Saturday April 26, 2008 @02:17PM (#23208124)
    I think that the arguments about unit tests often go too far in one direction or the other. People either treat unit tests as a religion, or they disavow them entirely.

    People in the first group end up with a project full of tests where many are valid, many end up testing basic language features, and many end up not helping due to oversimplification of behavior in the mocked interfaces.

    People in the second group end up missing simple problems that could have been caught had they exercised their code.

    Both groups waste a lot of time.

    Perhaps this is what you were trying to say when you said "TDD guys are overzealous". I think there are other problems with TDD. Namely, you can only use it when you don't need to learn how to solve the problem as you go... and most interesting programs require exactly that kind of learning as you go.

    Really, people need to use good testing judgement.

  • Knuth is hardcore (Score:2, Insightful)

    by Sits ( 117492 ) on Saturday April 26, 2008 @02:26PM (#23208166) Homepage Journal
    FVWM on Ubuntu Linux. Emacs with special modes using a homemade bitmap font. Mac OSX for Illustrator and Photoshop...

    Now that's breadth AND depth.
  • Spaghetti-O Code (Score:5, Insightful)

    by illegalcortex ( 1007791 ) on Saturday April 26, 2008 @02:33PM (#23208214)
    I somewhat disagree with what you and... *sigh* Monkeybaister posted. Yes, there are many times when long stretches of code should be broken out into functions. But I tend to do that mostly when the same bit of code is used in several different cases. The reason is that when you start modularizing off all your while loops that are more than a dozen lines long, you create a whole new type of spaghetti code. I'm going to coin a term and call it "spaghetti-O code." You try to track down a bug, and what would have been a straightforward couple pages of code now jumps through all kinds of functions scattered around the codebase. As such, it can often make debugging or forming a mental map of the code much harder.
  • Re:On multicore (Score:5, Insightful)

    by Nomen Publicus ( 1150725 ) on Saturday April 26, 2008 @02:43PM (#23208264)
    I've been involved with "parallel programming" for 20 years. There is no mystery involved. The C family of languages is particularly ill-suited for parallel programming, which may be why we are seeing reports recently claiming that it is "too difficult". Pointers in particular make life difficult for both the programmer and the compiler.

    There are a few techniques to be mastered and using a language designed with parallelism in mind helps hugely with the picky details.

  • by fishbowl ( 7759 ) on Saturday April 26, 2008 @02:52PM (#23208306)
    >The reason being is that when you start modularizing off all your while loops that are
    >more than a dozen lines long, you create a whole new type of spaghetti code.

    There is also a risk that you or a maintenance programmer might re-use such a "function"
    that was created simply to make a while loop more aesthetically pleasing, and introduce a bug because that function was not designed or tested for use in isolation.

    And in the spirit of the topic, such functions become awkward to unit test, since you're extracting a unit of work out of a loop or control structure, that logically lives there.
  • Worst Summary Ever (Score:5, Insightful)

    by TerranFury ( 726743 ) on Saturday April 26, 2008 @02:58PM (#23208332)

    The summary sounds like it was written by the headline-producing monkeys at Fox, CNN -- or hell, at the Jerry Springer show. Donald Knuth is not "playing hardball." Nobody needs to call the interview "raw and uncut," or "unplugged."

    The interview has almost nothing to do with unit testing and the little Knuth does have to say about the practice is hardly "ripping."

    When will people stop sullying peoples' good names by sensationalizing everything they say?

    Knuth is a well-respected figure who makes moderate, thoughtful statements. From the summary, you'd think he was a trash-talking pro-wrestler.

  • heresy (Score:4, Insightful)

    by tsotha ( 720379 ) on Saturday April 26, 2008 @03:00PM (#23208342)

    After initially being a proponent, I've come to the same conclusion about unit tests myself. I don't think they're worthless, but the time you spend developing or maintaining unit tests could be more profitably spent elsewhere. Especially maintaining.

    That's my experience, anyway. I suppose it's pretty heavily dependent on your environment, your customers, and exactly how tolerant your application is of bugs. Avionics software for a new jet fighter has a different set of demands than ye olde "display records from the database" business application. More applications fall in the second category than the first.

  • Re:He's right (Score:3, Insightful)

    by cgranade ( 702534 ) <cgranade@@@gmail...com> on Saturday April 26, 2008 @03:05PM (#23208364) Homepage Journal
    Unit testing is often about detecting regressions, and so writing a unit test to catch some failure that you found and fixed can often be very helpful. To borrow the MAX_INT * MAX_INT example above: if, after getting that case to work right and writing a unit test to confirm it, you decide to improve the performance of your integer multiplication routines (silly, I know... imagine a better example if you have to, like that they're matrices and you're implementing Strassen's algorithm), then the unit test can tell you if you introduced bugs back into your code.
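    A minimal sketch of that regression-test habit; the mean function and its fixed empty-input bug are hypothetical:

```python
def mean(xs):
    """Average of a list. An early version crashed on empty input;
    the fix was pinned down with a regression test at the time."""
    if not xs:
        return 0.0
    return sum(xs) / len(xs)

# The regression test written at fix time: if a future "optimization"
# of mean() reintroduces the empty-list crash, this fails immediately.
def test_mean_empty_regression():
    assert mean([]) == 0.0

def test_mean_basic():
    assert mean([1, 2, 3]) == 2.0

test_mean_empty_regression()
test_mean_basic()
```

    The test costs a few lines once, then guards the fixed behaviour through every later rewrite.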
  • by MrSteveSD ( 801820 ) on Saturday April 26, 2008 @03:26PM (#23208446)

    Not all programmers are uber-elite, and many are only slightly better than not being there at all.


    I don't think there's anything elite about writing short concise functions and breaking things up. The problem is when people first go into programming, they make these kinds of mistakes unless there are proper code reviews/training (things which often don't happen). When you are at university, the programs you write tend to be quite short and because of that, you don't realise how bad a programmer you actually are at that stage. It's only when you leap into the workplace and start writing a lot that your inadequacies become evident.

    To me, programming is a discipline requiring a fair bit of intelligence, but all too often companies hire programmers as if they were just hiring shelf-stackers or something. I think there is a lot more professionalism in Open Source projects than in many software houses.
  • by rawler ( 1005089 ) <{ulrik.mikaelsson} {at} {gmail.com}> on Saturday April 26, 2008 @03:44PM (#23208556)
    "my dualcore laptop really has no problem with that"
    Moore's law says (well, indirectly at least) that machines from 2007 should be roughly 256 times as powerful as machines from 1995.

    Somehow, the actual performance difference (starting the computer, starting a web browser, editing text, etc.) between running Win95 on hardware from its time and running Vista on today's hardware seems to be nowhere near a 256-times improvement.

    I can only conclude that while the hardware industry has improved itself again and again, the software industry has eaten almost all of those improvements instead of passing them on to the users.
  • by pslam ( 97660 ) on Saturday April 26, 2008 @03:44PM (#23208560) Homepage Journal

    I expected much, much more from Knuth than what I've just seen in that interview and after reading the design of MMIX.

    Knuth dismisses multi-core and multi-threading as a fad and an excuse by manufacturers to stop innovating. I'm amazed someone of his intelligence has managed not to read up on exactly WHY this is happening:

    • Faster single-thread processing means faster clocking.
    • Faster clocking requires a smaller feature size.
    • Eventually you run into the limits of your process, so you shrink further and continue.
    • Finally, you run into the thermal limit of your process, and you cannot go any faster in the same area.
    • To go faster you have to go sideways: more parallelism.

    So he dismisses the technical problems that manufacturers have been falling over for the last few years as merely a lack of imagination. No - parallelism is here to stay, and people need to realise it rather than just wishing up some magical world where physics aren't a problem.

    He dismisses multi-threading as too hard. It isn't, if you're not unfair to the concept. Nobody is getting 100% out of their single-threaded algorithms. You always have stalls due to cache misses, branching, the CPU not having exactly the right instructions you need, linkage, whatever. Nobody EXPECTS you to get 100% of 1 CPU's theoretical speed. So why do people piss all over multi-core/multi-threading when it doesn't achieve perfect speed-ups?

    If you achieve only a 50% speed-up using 2 cores compared to 1, you've done a good job, in my opinion. That means you could have a dual-core 3GHz CPU or a single-core 4.5GHz CPU. Spot which of those actually exists. Getting a 25-50% speed-up from multi-core is easy. The 100% speed-up is HARD. If you stop concentrating on perfection, you'll notice that multi-threading is a) actually not hard to implement, and b) worthwhile.

    Then there's MMIX. Knuth thinks that simplicity has to work all the way down to the CPU design. Yes, but not simplicity by way of having instructions made up of an 8-bit opcode and 3x8-bit register indexes. A CPU doesn't give a crap how elegant that looks. It's also BAD design - 256 registers make for a SLOW register file. It'll either end up being the slow critical path in the CPU (limiting top clock speed) or taking multiple cycles to access. There's also no reason to have 256 opcodes. He should have a look at ARM - it manages roughly the same functionality with far fewer opcodes.

    It almost pains me to see the MMIX design and how it's a) not original, b) done better in existing systems already on the market, e.g. ARM, and c) doesn't solve any of the performance limit problems he complains about. What's going on with Knuth?

  • by eldavojohn ( 898314 ) * <eldavojohn.gmail@com> on Saturday April 26, 2008 @04:08PM (#23208666) Journal

    Worst Summary Ever
    Thanks. I really appreciate the amount of respect and appreciation I get from this site.

    Donald Knuth is not "playing hardball." Nobody needs to call the interview "raw and uncut," or "unplugged."
    Wow, where exactly did I (or CmdrTaco) use any of those phrases?

    Calling something a "complete waste of time" is, in my book at least, "ripping" on something. I didn't "sully his good name," I posted what I found interesting. You should also point out he has prostate cancer and I left that out. God, what horrible spin I used! You'd think I was talking about someone whose life wasn't at risk, the way I spun that summary!

    Knuth is a well-respected figure who makes moderate, thoughtful statements.
    I happen to disagree with his stances on multi-core chips and unit testing. I didn't find anything thoughtful about what he said and really wish he would have elaborated on why unit testing is a complete waste of time.

    From the summary, you'd think he was a trash-talking pro-wrestler.
    Actually, after reading the article, I did find him to be a bit preachy. Apparently you and everyone else find him unquestionably correct in all his statements from that interview.

    And also, people are claiming he said these things "in passing," which I find to be a phrase used when you want to avoid owning up to something you said. If I call you a "whiney bitch" in passing, that doesn't lessen it one bit. Knuth claims no one should listen to him. Why is he publishing books if no one should listen to him?

    The guy said some inflammatory comments. If you read the following posts, you'll realize that I wasn't the only one that found them inflammatory or controversial.
  • by Anonymous Coward on Saturday April 26, 2008 @04:12PM (#23208692)
    C and C++ only seem to be declining because they are the de facto standards and everyone already knows how to use them. There is no need for news articles, blog entries, etc. They may not be hip, but they power everything and are used more than anything else.
  • by Estanislao Martínez ( 203477 ) on Saturday April 26, 2008 @04:40PM (#23208808) Homepage

    To be fair, I find that a lot of high-level (business layer) code I write today consists of "foreach (...) { ... }". It would seem that there are quite a few opportunities to parallelize there.

    ...which the compiler can't discover, because foreach describes a mechanism (looping through a sequence, in order), and not a high-level transformation.

    Compare foreach with map. Map is a higher order function that takes a function and a collection, and results in a collection of the same size and structure as the original, but with each element replaced by the result of applying the supplied function to it. Note that the value of each element in the result depends only on the corresponding element of the input. It's trivial to parallelize map.

    You can parallelize map easily because it has a favorable contract that specifies the relationship between its inputs and its outputs, and it just so happens that this contract is amenable to parallel execution. A smart compiler, upon seeing a use of map, can trivially tag it as a parallelism candidate.

    But since foreach specifies a sequential looping mechanism, there are no guaranteed relations between the input and output (in fact, not even any simple way to determine what should be treated as inputs and outputs). When you write a foreach loop to perform the equivalent of a map, you're underspecifying the transformation you're performing on your collection, and overspecifying the mechanism. That's bad programming.

    You mention Parallel LINQ, and this is very relevant. LINQ is based on operations similar to map, that transform sets into sets. LINQ queries, since they abstractly describe the relation between an input and the desired output, can be executed in a number of ways: (a) the system can translate them into SQL queries and send them to a database server to execute; (b) the system can execute them serially; (c) the system can execute them in parallel.
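    A sketch of the foreach-versus-map contrast in Python; the pool size is arbitrary, and multiprocessing.dummy is used only because its thread-backed Pool keeps the demo self-contained:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, for a compact demo

def square(x):
    return x * x

data = list(range(10))

# foreach-style: an ordered loop with arbitrary effects. Only the mechanism
# is visible, so a runtime must execute it exactly as written, in order.
out_foreach = []
for x in data:
    out_foreach.append(square(x))

# map-style: a declared element-wise transformation. Because each output
# depends only on its own input, a runtime may execute it serially...
out_serial = list(map(square, data))

# ...or in parallel, with an identical result guaranteed by map's contract.
with Pool(4) as pool:
    out_parallel = pool.map(square, data)

assert out_foreach == out_serial == out_parallel
```

    The three results are equal, but only the map forms carry a contract that licenses the parallel execution; the foreach form merely happens to be equivalent.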

  • by SEMW ( 967629 ) on Saturday April 26, 2008 @04:56PM (#23208904)
    If you define "bug" to mean "unexpected undocumented behaviour", as Knuth seems to, then it's not surprising that there have been very few bugs claimed, since TeX is so very well documented.

    But most people -- and certainly the majority of open source projects these days -- define "bug" as "undesirable behaviour"; and by that standard, TeX is chock full of bugs. To pick a couple of obvious examples: incorrect handling of ASCII 0x22 quotation marks, and treating "etc." as the end of a sentence. These aren't "bugs" to Knuth, since the incorrect behaviour is well documented, but by many people's standards they are.
  • by Anonymous Brave Guy ( 457657 ) on Saturday April 26, 2008 @05:03PM (#23208948)

    No offence meant, but I think your preconceptions may be clouding your judgement here.

    You claim that today's programming field is not about clever tricks and fast algorithms. I claim that if more people understood these old-fashioned concepts, we would have much better software today. For a start, anyone developing those "libraries implemented by specialists" you mentioned had better be very good, since a lot of other people's code is going to depend on them. Having worked in groups that develop various kinds of library, I can assure you that a little more general programming knowledge about clever tricks and fast algorithms wouldn't go amiss.

    You claim that today's programming field is about big systems with many programmers. I claim that this is because management and technical leadership in most places isn't competent enough to divide up a big system in modular fashion and allow smaller, more flexible teams to solve the little problems before multiplying them up to solve the big ones. Instead, the guys at the top tend to reduce all problems to a least-common-denominator, "throw enough people at it and we'll win eventually" philosophy. This explains how a small company with a few dozen employees can produce software that is better in every way than the competing offering from a larger company with hundreds of developers. You don't even need a few dozen genius programmers; you just need to understand that there are O(n^2) lines of communication between n individuals working in a single large team, but if your project is divided hierarchically then logarithms start coming into play, and if you can split a problem into several properly independent smaller ones this becomes a constant-factor overhead. This elementary mathematics seems to be beyond a lot of senior management in the software business, and that has far more to do with the need to build monolithic systems maintained by zillions of developers than any actual project requirements do.
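    The communication arithmetic above can be sketched with illustrative numbers (the team sizes here are made up):

```python
def comm_paths(n):
    # Lines of communication in one flat team of n people: n*(n-1)/2, i.e. O(n^2).
    return n * (n - 1) // 2

def comm_paths_split(n, teams):
    # Same headcount split into independent teams of n // teams people each,
    # plus one path between each pair of team leads.
    per_team = comm_paths(n // teams)
    return teams * per_team + comm_paths(teams)

flat = comm_paths(120)             # one 120-person team: 7140 paths
split = comm_paths_split(120, 10)  # ten 12-person teams: 10*66 + 45 = 705 paths
print(flat, split)
```

    A tenfold drop in communication paths from the same headcount, which is the parent's case for modular decomposition.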

  • by johannesg ( 664142 ) on Saturday April 26, 2008 @05:24PM (#23209018)
    I worked with Captain Endif for a while. It gets very, very, VERY tiring at some point. Especially in cases like this:

    #define TRUE_VAL true
    #define FALSE_VAL false

    // if theVar is true
    if (theVar == TRUE_VAL) {
        // set theVar to false
        theVar = FALSE_VAL;
    } // end if

    (I made this up, but sadly it is not that far removed from actual examples...)

    I also worked with a guy (another one) who left a blank line between every two lines of code. ALWAYS.

    Anyway, if you are in the neighbourhood, feel free to come over to our office. If you forgot your golf club, I'm sure we can rig something up using parts from the paper cutter...
  • by 1729 ( 581437 ) <slashdot1729@gma i l .com> on Saturday April 26, 2008 @05:38PM (#23209100)

    "I sometimes add comments like that if the brace is closing a deeply nested block"
    "Deeply nested blocks is a sure sign of crap code. Don't document crap code, rewrite it."
    A sure sign? Some code is necessarily complex. Conditional blocks inside nested loops are sometimes necessary for a logical, efficient, and human-readable implementation of an algorithm.

    Furthermore, I don't have the luxury of rewriting millions of lines of existing code. I document the parts I touch and I try not to make anything worse, but rewriting "crap code" is not always an option.
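
    As a small, hypothetical illustration of the point about necessary nesting: a conditional inside nested loops that would arguably get harder to follow if it were forcibly flattened into helper functions.

    ```python
    # Find the first cell in a grid that holds a target value AND has a
    # larger right-hand neighbour. The nesting mirrors the problem's shape.

    def find_rising_pair(grid, target):
        for r, row in enumerate(grid):       # outer loop: rows
            for c in range(len(row) - 1):    # inner loop: columns
                if row[c] == target:         # conditional inside nested loops
                    if row[c + 1] > target:
                        return (r, c)
        return None

    grid = [[3, 1, 4],
            [1, 5, 9],
            [2, 6, 5]]
    print(find_rising_pair(grid, 5))  # (1, 1): the 5 followed by a 9
    ```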
  • by Mr. Slippery ( 47854 ) <{ten.suomafni} {ta} {smt}> on Saturday April 26, 2008 @05:39PM (#23209106) Homepage

    "This applies mostly to programmers who write functions. Developers who create objects with methods usually don't require blocks of code 'pages' long."

    Oh, please. Don't put on object-oriented airs. A "method" ain't nothing more than a function associated with a class.

    A function (method, procedure, subroutine) should be just as long as it has to be to encapsulate the work it's doing. Sometimes that's one line. Sometimes it's pages.

    Breaking those pages of code into a bunch of other subroutines, solely on some misguided notion that a function shouldn't be longer than N lines, makes for code that is harder to understand and maintain.

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Saturday April 26, 2008 @05:53PM (#23209188) Journal
    Yeah, that's the one thing I hate about Ruby now -- seeing the end of a file that looks like this:

    # for some reason, Slashdot won't indent the first line...
              end
            end
          end
        end
      end
    end
    Especially when the whole culture around things like Ruby on Rails is "Convention over Configuration" (thus, your code should always be indented properly anyway) and "Don't Repeat Yourself" (tons of 'end' statements isn't particularly DRY).

    I will say one thing, though: After haml, [hamptoncatlin.com] I never want to write any raw HTML, or any XML, by hand again. Ever.
  • Re:Out of favor (Score:3, Insightful)

    by Coryoth ( 254751 ) on Saturday April 26, 2008 @06:49PM (#23209560) Homepage Journal
    One would hope that you get to the final system design by iterative prototyping, which you then clean up, refactor, and rewrite into a nice codebase that doesn't have the evolved and iterative cruft, and is suitably documented (it's this last step where literate programming can come in, though you can introduce some incrementally as modules and subsystems get frozen and rewritten). Who knows, maybe you do just ship the first thing you can get to work. What I do know is that in the business world where time is money, the clean, well-documented final version is going to involve a lot less time (and hence a lot less money) to maintain, debug, and patch. And what if I want a new version next year? Well, I would certainly prefer to have a clean, readable, well-documented codebase to work from than "whatever worked" code. Maybe that's just me, though. Different people work differently, and I'm sure you do what works best for you.
  • by Anonymous Coward on Saturday April 26, 2008 @08:03PM (#23210060)
    I've heard such overly-modularized code referred to as "ravioli code". Usually it has to do with object orientation though. When you have a maze of interdependent functions, you have a plain old hairball.
  • by Gorimek ( 61128 ) on Sunday April 27, 2008 @02:47AM (#23212132) Homepage
    I've never seen any interesting and useful software that is ever "finished". You always need to add and change things, and there is always far more functionality wanted than you can produce.

    If you clean up and refactor as you go, rather than at "the end", what you have described is Agile/XP development.
