Apache Software

Covalent's Version of Apache 2.0 To Drop Monday

kilaasi points out this CNET story about the planned release on Monday of Apache 2.0, "or at least the version that has proprietary extensions. Covalent sells the core of Apache and its own extensions, which make it easier to adapt for specific areas and simpler to administer. Covalent is confident that the next-generation Apache is mature and ready for prime time. Covalent employs some of the core members of the Apache development team." XRayX adds a link to Covalent's press release, writing: "It's not clear when the Open Source Edition (or whatever) will come out and I didn't find anything at the official Apache Site." Update: 11/10 16:37 GMT by T : Note that the product name is Covalent Enterprise Ready Server; though it's based on Apache software, this is not Apache 2.0 per se. Thanks to Sascha Schumann of the ASF for the pointer.

  • by chrysalis ( 50680 ) on Saturday November 10, 2001 @09:29AM (#2548290) Homepage
    One of the most annoying things in Apache 1.x is that when PHP is compiled into the server (not run through CGI), all scripts run as "www", "nobody", or whatever anonymous user your Apache daemon runs as.
    There's no way to have PHP scripts run as different users (the way suexec does when spawning external CGI programs).
    Sure, PHP has a so-called "safe mode", but it's still not that secure, especially when it comes to creating files or accessing shared memory pages.
    I was told that Apache 2.0 has a mechanism that could make user switching for PHP scripts possible. Has anyone experimented with it?

    • There's no way to have PHP scripts run as different users (the way suexec does when spawning external CGI programs)

      Actually, there is. You have to use PHP in CGI mode, where it ISN'T compiled into Apache as a module. I've never used it in that mode myself (I only have one simple PHP script on my entire server); however, a search on google for php+suexec [google.com] turns up some info. Apparently, CGI mode does work, but not quite as well as module mode (some people seem to indicate that it runs like a dog).
    • At my work (an ISP), I tweaked cgi-wrap and run PHP through it. The cgi-wrap tweak provides the safety of running as the user, along with other checks (is the PHP script world-writable, is it owned by the user, etc.), and it takes out the necessity of putting #!/path/to/php at the top of every PHP file.

      If you are interested in this, email me.
    • I've run PHP+suEXEC since PHP 4.0.1RC2, as far as I remember. Works fine on NetBSD with Apache 1.3.x, although a little slower than when compiled as a module.
      But we run PHP scripts the same way we run CGIs written in C, Python, Perl, etc.
    • by cehf2 ( 101100 ) on Saturday November 10, 2001 @11:10AM (#2548474)
      With any application running on a web server there is a trade-off between performance and security. Because the PHP module runs inside the core of the web server, it should be fairly fast; however, if you want the ability to change which users PHP scripts run as, your only option is to use CGI scripts. CGI by its very nature is *very* slow, due to the overhead of forking, execing, and loading the program on every request.
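
      Roughly, that per-request cost is what this minimal sketch shows: a fork(), an exec of a whole interpreter, and a wait. (Illustration only; error handling is trimmed, and the PHP binary path and script name are made up.)

      /* What a classic CGI dispatch costs per hit: fork, exec, wait. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void)
      {
          pid_t pid = fork();                     /* duplicate the server process */
          if (pid == 0) {
              char *argv[] = { "php", "/var/www/script.php", NULL };  /* made-up path */
              char *envp[] = { "GATEWAY_INTERFACE=CGI/1.1", NULL };
              execve("/usr/bin/php", argv, envp); /* load a whole interpreter */
              _exit(127);                         /* only reached if execve() failed */
          } else if (pid > 0) {
              int status;
              waitpid(pid, &status, 0);           /* block until the child finishes */
          } else {
              perror("fork");
              return EXIT_FAILURE;
          }
          return 0;
      }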

      You may also be able to compile PHP as a FastCGI program; you could then run several external FastCGIs as different users and configure Apache to run a particular script with a particular FastCGI program. I have no idea how to do this with Apache, as I use Zeus [zeus.com] myself.

      If Apache 2 does have a way to switch users for PHP scripts, it will not be secure. Under UNIX, once you have dropped your permissions you can never gain them again. The workaround is to have 'real' and 'effective' users that programs run as. As long as you only change your effective user, you can regain permissions; but then anything else in the process can regain them too. You can also only change users when you are root. This would be a big security hole: if there were a buffer overflow, root could trivially be obtained by anyone.
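
      To make the real-vs-effective distinction concrete, here's a rough C sketch (it assumes the process starts as root; the UID 1001 is made up):

      /* Rough sketch of real vs. effective UIDs. Assumes we start as root. */
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          /* Temporary drop: only the effective UID changes, so root can be regained. */
          if (seteuid(1001) != 0) perror("seteuid");
          printf("euid now %d, and we can still switch back\n", (int)geteuid());
          if (seteuid(0) != 0) perror("seteuid back to 0");

          /* Permanent drop: real, effective and saved UIDs all change; no way back. */
          if (setuid(1001) != 0) perror("setuid");
          if (seteuid(0) == 0)
              printf("this should never print\n");
          else
              printf("root is gone for good, euid is %d\n", (int)geteuid());
          return 0;
      }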

      security, performance, configurability - pick 2

      • Actually, the way they do it is that there is an MPM called perchild. With the perchild scheme each child process runs under a different user ID (to replace suexec), so you could have PHP scripts run as different users.

        You can see more about MPMs here [apache.org]
      • I think you overestimate the intrinsic speed problems of CGI. Sure, you have to start a new process, but that doesn't take that many resources. If you have to start up a complicated interpreter, as you would for PHP, then yes it's slow. But a small C program starts fairly quickly.

        When testing different adapters for an application server I was playing with, there were persistent versions written in Python, for use with mod_python/mod_snake -- the adapters were essentially small scripts that contacted the application server. Those persistent Python versions were actually slower than an equivalent C CGI program. Of course, the C version built as an Apache module was somewhat faster, but both were at the point where neither was a significant bottleneck. So CGI can be pretty fast.
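
        For scale, the kind of tiny C CGI program being talked about is barely more than this (trivial sketch; the per-hit cost is essentially the fork/exec plus whatever the program does):

        /* About the smallest possible C CGI program. */
        #include <stdio.h>

        int main(void)
        {
            printf("Content-Type: text/plain\r\n\r\n");
            printf("hello from a freshly exec'd process\n");
            return 0;
        }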

        You can actually do what is essentially CGI through PHP too -- if you have something that needs to be run suid, then run it through system() (which loads up a shell, which is annoying and slow) or some other way (I don't know of a way to call a program directly in PHP...?)

        Or you can go the FastCGI (or FastCGI-like) direction, where you have a sub-server that handles certain requests. I don't know how easy that is to do in PHP -- it's very useful to have object serialization at that point, and I don't think PHP has that (?)

  • by imrdkl ( 302224 ) on Saturday November 10, 2001 @09:30AM (#2548292) Homepage Journal
    This thing better weave with golden thread(s)
  • "It's not clear when the Open Source Edition (or whatever) will come out and I didn't find anything at the official Apache Site."

    Here is the Apache 2.0 documentation [apache.org], and you can download [apache.org] 2.0.16 (public beta) or 2.0.18 (an alpha). But what do you want them to open source? The 2.0 core (it is) or the proprietary enhancements (yeah, right)?

    Kenny


    at least slashdot didn't change my urls into http://slashdot.org/httpd.apache.org this time.
    • by huftis ( 135056 ) on Saturday November 10, 2001 @09:54AM (#2548341) Homepage
      It's not clear when the Open Source Edition (or whatever) will come out and I didn't find anything at the official Apache Site.

      Apache Week has more information [apacheweek.com] on this:

      Those waiting since April for a new 2.0 beta will have to keep on waiting after another release candidate, 2.0.27, was abandoned this week when a bug was discovered while running the code on the live apache.org server. Some httpd processes were found to be stuck in infinite loops while reading POST requests; the bug was traced to the code handling request bodies. After fixes for this bug and a build problem on BSD/OS were checked in, the tree was tagged ready for a 2.0.28 release.
  • Time warp? (Score:5, Funny)

    by carm$y$ ( 532675 ) on Saturday November 10, 2001 @09:44AM (#2548321) Homepage
    From the press release:
    SAN FRANCISCO -- November 12, 2001 -- In conjunction with the launch of Enterprise Ready Server, Covalent Technologies today announced a coalition of support for its new enterprise solution for the Apache Web server.

    Is this a little bit confusing, or what? I mean, I had a meeting on Monday the 12th... well... which I don't recall yet. :)
  • by imrdkl ( 302224 ) on Saturday November 10, 2001 @10:10AM (#2548371) Homepage Journal
    I've always been a bit suspicious of threads, even the latest and greatest kernel threads. Is there someone who can speak to the wisdom and tradeoffs in doing this? I like my fu^Horking apache just the way it is. Programming threads is also hard. What about all of the cool API stuff and plugins, I suppose they all have to be rewritten? Mod_rewrite, mod_perl, etc, etc, yes?
    • You don't have to use the threaded model with Apache 2.0 on Unix. There is a 1.3-style processing model available.

      However, a module for Apache 2.0 probably would want to be thread-aware to avoid requiring that the admin use the 1.3-style processing model.

      On some platforms threads won't beat fork for speed, but certainly the total virtual memory usage for a threaded Apache deployment should be less than for a non-threaded deployment on any platform. For most people this is a non-issue, but in some environments Apache 1.3 is a big problem because of the memory footprint required by the zillions of processes required.

      • On some platforms threads won't beat fork for speed

        You care to substantiate this claim? fork() generally dupes the current process in memory, an expensive operation. Threads involve no such operation, instead relying upon a simple, lightweight Thread object to manage execution and, in the case of servers and servlets, utilizing already-instantiated server objects to execute.
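
        For comparison, spawning a worker thread in C looks roughly like this (minimal sketch, compile with -pthread; the "request" being handled is obviously made up):

        /* A thread shares the parent's address space instead of duplicating it. */
        #include <pthread.h>
        #include <stdio.h>

        static void *handle_request(void *arg)
        {
            printf("handling request %d in a worker thread\n", *(int *)arg);
            return NULL;
        }

        int main(void)
        {
            pthread_t tid;
            int request_id = 42;                      /* made-up request handle */
            if (pthread_create(&tid, NULL, handle_request, &request_id) != 0) {
                perror("pthread_create");
                return 1;
            }
            pthread_join(tid, NULL);                  /* wait for the worker */
            return 0;
        }
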
        • Yes, but forking isn't an issue because Apache pre-forks a number of "worker processes". So it should be true that a threaded Apache would give little advantage on many operating systems.
          • Yes, but forking isn't an issue because Apache pre-forks a number of "worker processes". So it should be true that a threaded Apache would give little advantage on many operating systems.

            But forked or pre-forked, each process, which will handle only one "hit" at a time, has the same memory burden as a full apache process (coz that's what it is.)

            Now compare this to the threaded version, where threads are objects, minuscule next to an Apache process, and where many of the other objects used by a thread are reused, not regenerated.

            My experience in running Apache servers is that memory is consumed before bandwidth or processor... with threads it'll be CPU first, coz you'll be able to handle a much higher number of concurrent requests.

            The earlier point about thread-based Apache being more vulnerable to a dying process than process-based Apache *is* true, so maybe a mix of processes and threads will give some margin of fail-safety. Don't run all server threads under just one process; have multiple processes, if that's possible.
    • mod_perl (Score:2, Informative)

      by m_ilya ( 311437 )
      What about all of the cool API stuff and plugins, I suppose they all have to be rewritten? Mod_rewrite, mod_perl, etc, etc, yes?

      AFAIK Apache's API has been changed, and indeed all its modules will need to be rewritten for the new Apache.

      I don't know about all modules, but here's some info about mod_perl. There already exists a rewrite [apache.org] of mod_perl for Apache 2.0 with thread support. It has many tasty features. Check [apache.org] for yourself.

    • by jilles ( 20976 ) on Saturday November 10, 2001 @10:47AM (#2548437) Homepage
      Programming threads is just as hard as programming with processes on a conceptual level. The types of problems you encounter are the same.

      However, process handling is potentially more expensive, since processes have separate address spaces and require special mechanisms for communication between those address spaces. From the point of view of system resources and scalability you are better off with threads than with processes. Typically the number of threads an OS can handle is much larger than the number of processes it can handle. And with multiprocessor systems becoming more prevalent, multithreaded systems are required to use all the processors effectively and distribute the load evenly.

      The primary reason why you would want to use processes anyway is stability. When the mother process holding a bunch of threads dies, all its threads die too. If your application consists of 1 process and 1000 threads, a single thread can bring down the entire application. At the process level, you have the OS shielding each process's address space from the other processes, so that gives you some level of protection against misbehaving processes. Running Apache as multiple processes therefore gives you some protection: if one of the httpd processes dies, the other processes can take over and continue to handle requests.

      The use of high-level languages & APIs (e.g. Java and its threading facilities) addresses these stability issues and makes it safer (not perfectly safe) to use threads. Java, for instance, offers memory management facilities that basically prevent such things as buffer overflows or illegal memory access. This largely removes the need for the kind of memory protection an OS offers for processes.

      Apache 2.0 is specifically designed to be more scalable than the 1.3.x series. Threading is a key architectural change in this respect. Sadly it is not written in Java, which, despite what some people on Slashdot believe, is very capable of competing with lower-level languages in this type of server application. Presumably the Apache developers are using a few well-developed C APIs to provide some protection against stability issues.
      • try {
        If your application consists of 1 process and 1000 threads, a single thread can bring down the entire application
        }
        catch (IllegalFUDOperation excep) {
        Only if you're not on top of your exception handling!
        }
      • Programming threads is just as hard as programming with processes on a conceptual level. The types of problems you encounter are the same.

        This makes it sound as if the two models present equivalent obstacles, and that neither is easier than the other. It's true that separate processes are used for stability reasons, but that stability isn't gained only because one process can crash without taking all the other processes with it. The main problem with threads that doesn't exist with processes is shared memory. Any variable on the heap can potentially be accessed by two threads at any given time, and access to it must be synchronized. Bugs related to these race conditions can be very hard to track down, and many people would rather forgo the problem entirely and just use processes.
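
        In C terms, the discipline shared memory forces on you looks roughly like this (sketch, compile with -pthread; drop the lock calls and the final count is no longer deterministic):

        /* Two threads bumping one shared counter: the mutex is what keeps it correct. */
        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *bump(void *arg)
        {
            (void)arg;
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);      /* forget this and you have a race */
                counter++;
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, bump, NULL);
            pthread_create(&b, NULL, bump, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("counter = %ld (should be 200000)\n", counter);
            return 0;
        }
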
        • Shared data is inevitable in distributed systems. If you isolate the data for each process using memory protection, that implies that there has to be some means of transferring data from one process to another (e.g. pipes). Typically such mechanisms are cumbersome and make context switches expensive.
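
          The kind of cross-address-space plumbing meant here, as a bare C sketch: two processes can only share this one integer by serializing it through a pipe.

          /* Sharing an int between processes means pushing it through a pipe. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/types.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              int fds[2];
              if (pipe(fds) != 0) { perror("pipe"); return EXIT_FAILURE; }

              pid_t pid = fork();
              if (pid < 0) { perror("fork"); return EXIT_FAILURE; }
              if (pid == 0) {                            /* child: produce a value */
                  int value = 7;
                  close(fds[0]);
                  if (write(fds[1], &value, sizeof value) != (ssize_t)sizeof value)
                      perror("write");
                  close(fds[1]);
                  _exit(0);
              }

              int value = 0;                             /* parent: read it back */
              close(fds[1]);
              if (read(fds[0], &value, sizeof value) != (ssize_t)sizeof value)
                  perror("read");
              close(fds[0]);
              waitpid(pid, NULL, 0);
              printf("got %d from the child\n", value);
              return 0;
          }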

          My whole point is that with high-level languages, such as Java, the language encapsulates most of the complexity of dealing with synchronization. Java does not have a process concept other than the (typically single) JVM process that hosts all the threads.

          Strong typing and OO further enhance stability and consistency. Emulating such mechanisms in a language like C is hard and requires intimate knowledge of parallel programming and discipline from the programmers.

          Therefore multithreading wasn't very popular until very recently. Only since the 2.2 and 2.4 Linux kernels were introduced has threading become somewhat feasible in terms of performance. Using the new threading features requires that you think beyond the heap as a central storage facility for data. In Java the heap is something the JVM uses to store and manage objects. At the programming level you only have objects. Objects are referred to by other objects (which may be threads) and may refer to or create objects themselves. Access to the data in the objects is done through accessor methods, and where applicable you make those methods synchronized (i.e. you include the synchronized keyword in the method signature or employ a synchronized code block somewhere) to ensure no other objects interfere.

          Each time you employ (or should employ) a synchronization mechanism, you would have had to employ a similar mechanism if you had been using processes. The only problem is that that mechanism would probably be much more expensive to use since you are accessing data across address space boundaries.

          With this in mind, the use of processes is limited to situations where there is little or no communication between the processes. Implementing such software using threads should be dead simple, since you will only have a few situations where the threads are accessing each other's data, so there is no real risk of race conditions. Such situations you can deal with using well-designed APIs and by preventing dirty pointer arithmetic. A company I have worked with that writes large embedded software systems for an OS without memory protection between processes has successfully built a rock-solid system this way in C++. By their own account they have actually encountered very few race conditions in their system. My guess is that the Apache people have employed similar techniques and coding guidelines to avoid the kind of bugs you are talking about.

          So if you are encountering race conditions in your code, using processes rather than threads won't solve your problems because you still need to synchronize data. You can do so more cheaply with threads than with processes.
          • You're still glossing over things. When using threads, *anything* on the heap can potentially be accessed by two threads simultaneously. If you're using processes, you know exactly when and where data is being shared (it's kind of hard to miss data moving through a pipe). It's much easier to control, but it does come at the expense of efficiency. The only really efficient IPC mechanism is shared memory, which of course has the exact same problems as multithreaded code.

            Threads do have their place--whenever you need concurrency and a large amount of data needs to be shared, go with them. But saying that you should use them when you have largely independent tasks which don't share data is silly. That's exactly what processes are for, and you eliminate any risk of threads stomping on each other. If you need to have thousands of them, maybe you should look into threads, but it would probably be best to check your algorithm. Any time you think you need huge numbers of processes or threads, you'd best think again. Context switches are going to kill whether you're using threads or processes.

            • You can only access anything on the heap if your programming language allows it (e.g. C or C++) in which case you need to constrain how the language is used. I've seen quite a few companies who work with C/C++ employ strict coding guidelines that accomplish this.

              If you have a lot of independent tasks which don't share data, you use threads because that will give you a more scalable system. Of course your system will be riddled with bugs if you start doing all sorts of pointer arithmetic, which, in general, is a bad idea (even in non-distributed systems). If two threads are accessing the same data they are sharing it; if they shouldn't be, it's a bug. The only reason processes are useful is that they force you to employ methods other than pointers to access shared data (so if you create a bug by doing funky pointer arithmetic it will only affect one process).

              Multithreaded applications are known to scale to several thousand threads on relatively modest hardware. Context switches typically occur when different threads/processes on the same processor are accessing different data. Context switching for processes is more expensive than for threads on modern operating systems.

              You are calling me silly for recommending threads as a good alternative to processes in situations that require scalability. Yet, IMHO, this is exactly the reason why Apache 2.0 is using threads.
              • Have you seen thousands of threads running on one of Linus' kernels? He isn't particularly fond of the idea, and isn't going to tweak the kernel to support that sort of thing. There's a reason Linux native threads JVMs do poorly on the scalability portion of VolanoMark...

                Everything you're saying makes sense on a system where processes really are heavyweight monsters. On Linux, processes and threads are much more similar. The difference is copy-on-write semantics for memory pages. Up until you actually modify a page, it is shared by child and parent. This means that using processes instead of threads doesn't automatically mean that you're grossly increasing memory needs.
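
                The copy-on-write behaviour being described can be seen in a small C sketch (the table size is arbitrary, just big enough to matter):

                /* After fork() pages are shared copy-on-write; the parent never
                   sees the child's modification, and no big copy happens up front. */
                #include <stdio.h>
                #include <sys/types.h>
                #include <sys/wait.h>
                #include <unistd.h>

                static int big_table[1 << 20];   /* ~4 MB, shared after fork */

                int main(void)
                {
                    big_table[0] = 1;
                    pid_t pid = fork();          /* no 4 MB copy happens here */
                    if (pid == 0) {
                        big_table[0] = 2;        /* only the touched page gets copied */
                        _exit(0);
                    }
                    waitpid(pid, NULL, 0);
                    printf("parent still sees %d\n", big_table[0]);   /* prints 1 */
                    return 0;
                }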

    • If you like your prefork server, just build Apache 2.0 with the "prefork" MPM. Some platforms are not supported by it; there's an MPM specific to Win32, for instance.

      Threads programming is made hard when you are communicating between the threads, or when a thread goes haywire and overwrites another thread's memory regions. The former is not a large issue for most C or (especially) mod_perl Apache modules, since they don't try to share state. These should port rather easily to a multithreaded environment.

      The real issue is for C modules that get a little funky with the 1.3 (or older) API: there's a *lot* new under the hood in Apache 2.0 and such modules may require a complete rewrite. Many will only require minor rewrites, though complete rewrites to leverage Apache 2.0's input and output filters will be quite beneficial. Imagine writing a filter module that can alter content retrieved by the new mod_proxy, and optionally cached locally before or after the filter alters it :).

      Debugging is often more difficult with threads, but there are command line options to make it easier to debug, and there's always compiling it with the prefork MPM.

      Yes, many modules and C libraries are not thread safe; this will be a source of painful problems for advanced modules for years to come. But most modules should port relatively painlessly, and many people don't go farther than those modules that ship with Apache; those, of course, are already being ported and debugged.

      The prefork MPM is likely to be more safe in the face of memory bugs and deadlock issues due to the isolation imposed by the OS between processes, but is likely to be slower than the threaded MPMs on many platforms.

      FWIW, mod_perl 2.0 is shaping up very nicely and perl5 seems to be resolving most of the major obstacles to full, safe multithreading in a manner that will prevent unexpected variable sharing problems (all variables are thread-local unless specified otherwise). mod_perl 2.0 boots and runs multithreaded now, and as soon as the core Perl interpreter becomes more threadsafe, it should be ready for trial use.

      At least one mod_perl production site has been tested on mod_perl 2.0 (though not in production :). mod_perl 2.0 has a compatibility layer that will help existing modules run with little or no modification.

      Life's looking good for Apache 2.0 and mod_perl 2.0.
    • I talked with Dirk-Willem van Gulik a few days ago. The way he explained the use of the two models available in Apache 2.0 was to run the apps you trust not to crash under the threaded model, and apps that you may be having problems with, or really important, high-usage apps, under the process model.

      As far as rewritten modules go, some of them will need to be, since modules can now also be used as filters. With Apache 2.0, it's possible to use the output of one module as the input to another module, such as running the output from mod_php through mod_include and then through mod_rewrite. Really cool stuff!

      The major modules have already been rewritten. The API is changed as well, to give it more power, such as a filename to filename hook. (Finally!)

      I believe he said something about the capability of 1.3 modules to still be used, but only in the old way, not as filters. But I am not completely sure that is what he said. (He talks insanely fast! Even sitting next to him I sometimes had trouble keeping up with his accent. Not his fault; I just haven't talked to a lot of people from the Netherlands, so I'm not used to it.)

  • "It's not clear when the Open Source Edition (or whatever) will come out..."


    Is it just me, or does this "or whatever" kind of attitude strike you as strange? Granted, Apache has been seriously draggin' ass on 2.0 and I can see folks getting a little anxious to have it out already...

    • by Anonymous Coward
      It's not going to happen. Look at Ken Coar's editorial in the last Apache Week. The ASF is spinning their wheels at this point. One person will go in to fix a single bug and instead rewrite the entire system (for instance the URL parser). They fix one bug but create several more. They have no concept of a code freeze.
      The 1.3 tree is getting very long in the tooth and patches are pretty much rejected because "work is now in the 2.0 tree". The way that the ASF is playing it, they will cause the Open Source community to lose the web server biz.
      The silly politics alone that keep SSL, EAPI and three different methods of compiling Apache are enough to make sure it is doomed. Why has IIS taken over the SSL market? Because it ships with EAPI.
      It's really sad.
      • -1 FUD (Score:2, Informative)

        by jslag ( 21657 )
        Look at Ken Coar's editorial in the last Apache Week. The ASF is spinning their wheels at this point.


        The article [apacheweek.com]
        in question says nothing of the sort. It notes that the development processes of apache have changed over the years, with associated wins and losses.


        Why has IIS taken over the SSL market? Because it ships with EAPI.


        Thanks for the laugh.

  • by markcox ( 236503 ) on Saturday November 10, 2001 @11:09AM (#2548472) Homepage
    Although the CNet article tells you otherwise, the open source version of Apache 2.0 is not available on Monday, and as stated in Apache Week, it is only just becoming stable enough for another beta release. Covalent are launching a commercial product that is based on Apache 2.0 but with proprietary extensions (the Apache license, unlike the GPL, allows this). IBM's httpd server has been based on a 2.0 beta for a number of months. Since Covalent say they've made it Enterprise Ready, they must have cured the performance and stability problems; when these get contributed back to the main Apache 2.0 tree, everyone wins.

    Mark Cox, Red Hat
  • I've read somewhere that Apache 2.0 is using the underlying code of Mozilla, NSPR (Netscape Portable Runtime), for all the core stuff such as threading and memory allocation. It's good to see that an app like Mozilla can be really useful to other open source applications such as Apache.
  • I fully realize that this is talking about Covalent's Apache-based software, but I'm still wondering how ready the Apache 2.0 codebase is... I've been playing with the 2.0.16 beta for a while now on one of my test servers without any problems, but that doesn't mean diddly. I'm looking forward to version 2.0, but not without extensive testing. Version 1.3.22 works way too well for me to make a switch anytime soon.
    • At this point, I would judge the current httpd-2.0 codebase as beta-quality. There have been lots of improvements made to the Apache 2.0 codebase since 2.0.16 was released - I would expect that we have a much better codebase now than was in 2.0.16. I would expect you to have an even better experience with our next release whenever it occurs (or you may use CVS to obtain the up-to-the-minute version!).

      Yes, we're way overdue releasing Apache 2.0 as a GA (we started thinking about 2.x in 1997), but that is a testament to our quality - we will NOT release Apache 2.0 as a general availability release until we are all satisfied that it meets our expectations. "It's ready when it's ready."

      We have a very good stable product in Apache 1.3. We must match the quality expectations we've set for ourselves in the past. And, almost everyone in the group is keenly aware of that.
  • as if Covalent is trying to put a 'feather in its cap'.

    (security through obscurity does not work, so I'm trying humor thru obscurity.)

    I'll admit, I'm not versed in marketoid speak, but this caught my attention:
    Covalent has taken a great web server -- Apache -- and added key functionality that enhances enterprise customers' experience."

    What this says to me is "Apache kicks ass, now any idio^H^H^H^enterprise customer can use it with our new point-and-click GUI!"

    (shaking head)

    A few minutes on freshmeat.net, dudes, would probably solve most of your problems if you are looking for a GUI to configure this stuff.

    If that is not the case, well, my programming days are over, and the comments on the trade-offs with what Covalent is doing just leave me hoping it does not reflect badly on Apache.
  • The release announcement by Covalent on top of this week's announcement of a proprietary version of SourceForge by VA [2001-11-06 20:04:54 VA Embraces Closed Source (articles,va) (rejected)] should have us all wondering where things are heading during this period of revision for open source business models. Are we headed for a world where ostensibly free programs are deliberately crippled relative to proprietary versions of the same code?

    Covalent funds a great deal of Apache development directly, as well as contributing board members and other members to the Apache Software Foundation. It's clearly not doing this primarily to help the open source version of Apache along, but to advance its own proprietary version of Apache. Eventually Apache 2.0 may come out in an open source version, but it doesn't seem to be a priority of the main contributor to Apache to make that happen. A conspiracy-theory approach might even suggest that they are deliberately applying a flawed, destabilizing model to the open source tree (commit then review, no feature freeze) while presumably they use a tighter and more controlled process to get the proprietary version out.

    People have suggested that the internal versions of GNAT distributed in a semi-proprietary way by ACT may be better than the open source versions, while ACT says the opposite -- that their private versions are less tested, require technical support, and would only hinder those who don't have support contracts. I don't know the truth of the matter there, and this is not meant to point the finger at ACT, but this phased-release strategy by Covalent raises some of the same questions.

    VA's proprietary SourceForge conjures a similar spectre. There will still be a free SourceForge, but improvements are going primarily into the proprietary version. Perhaps outside engineers will start playing catch-up and adding clones of the proprietary features to an open source branch of SourceForge, but at best the open source version will still lag behind, and it may happen that it will always be so far behind as to be relatively crippled compared with the proprietary version.

    Is open source heading toward a model where some of its dominant programs are available for free only in crippled versions lagging behind the proprietary releases? And if so, what does that say about unpaid volunteer contributions? Are they really for the public benefit, or for the benefit of a proprietary developer? If the latter, why volunteer?

    Other problems with crippled free versions have been noted here before, such as having to pay for documentation on ostensibly free software, or needing a proprietary installer to effectively install a supposedly free system. This week's events involving VA and Covalent show that this may be becoming a trend with significant impact on the whole open source and free software movement.

    Tim
    • Perhaps outside engineers will start playing catch-up and adding clones of the proprietary features to an open source branch of SourceForge, but at best the open source version will still lag behind, and it may happen that it will always be so far behind as to be relatively crippled compared with the proprietary version.

      I think that's far from certain. One of the premises of the BSD license is that even if someone does take the code and release a proprietary fork, the Open Source model has enough advantages that the community should be able to keep up and even surpass them.

      That seems likely to happen at some point.
      • the Open Source model has enough advantages that the community should be able to keep up and even surpass them.

        I don't think there's any historical evidence for the popular idea that open source software improves faster than proprietary software. As this post [slashdot.org] from an IBM open source developer points out, there are serious management overheads and inefficiencies associated with the model.

        One of the advantages of being closed is control. You get to choose exactly where each programmer works; you get to choose exactly which pieces of the system change, and which don't. When you open it, suddenly, you lose control. You can't just make decisions anymore; you need to work with your contributor base, which is a much slower process than managerial decree. And you need to deal with the fact that people will be changing things all over the place, and be capable of integrating those changes into your own ongoing work. That costs time (possibly a lot of time), and time costs money.

        If managing engineers under normal conditions is like herding cats, open source development is like harnessing a swarm of bees.

        Tim

    • Is open source heading toward a model where some of its dominant programs are available for free only in crippled versions lagging behind the proprietary releases?

      I doubt that. As an active Apache developer who doesn't really have any ties to a company with a vested interest in Apache, I work with the Covalent people every day. And, I doubt that the open-source version of Apache HTTPD will lag behind any version that Covalent or IBM has. In fact, I bet that the version that Covalent will release on Monday will include some bugs that have already been fixed in the open-source version.

      Where I think companies like Covalent come in is to support corporations that *require* support. Their price ($1495/CPU or something like that) isn't targeted towards people who would be interested in the open-source version, but for corporations that can't ever afford to have their web server go down.

      Covalent also offers some freebies (such as mod_ftp). I think under Apache 2.0, it is sufficiently easy for someone to come in and write a module that handles FTP. It's just that no one has had the inclination to write one. And, I bet if someone did, it just might eventually be better than the one Covalent wrote.

      VA is a little different from Covalent as, IIRC, they are the sole owners of Sourceforge, but Covalent is just a part of the Apache community (an active one though).

      And if so, what does that say about unpaid volunteer contributions? Are they really for the public benefit, or for the benefit of a proprietary developer? If the latter, why volunteer?

      I work on what I want to work on. People who work at Covalent have a "direction" on things to work on. As an unpaid volunteer, I get to work on whatever I feel like at the moment. I'll take that any day of the week. But, there is a definite value to getting paid to work solely on Apache.

      Other problems with crippled free versions have been noted here before, such as having to pay for documentation on ostensibly free software, or needing a proprietary installer to effectively install a supposedly free system.

      FWIW, I believe this is definitely not the case with Apache. The docs are freely available and the Win32 installer is one donated by IBM (I think, I forget - someone donated it).

  • They (or somebody they bought from) harvested my email address from the Network Solutions database and blessed me with UCE yesterday:

    Subject: Buy Covalent's Apache Web Server and Get a FREE Entrust Certificate

    I can tell because I use unique email addresses for everyone.
    • If you get an unsolicited e-mail in Covalent's name, write directly to the company and tell them about it. I know a couple of the guys who work there, and I'm confident that they didn't move halfway across the country just to join the spamming industry. Maybe the list got polluted with your name somehow, or maybe they farmed out the PR stuff to another company. Either way, just give them a chance to fix it.

      --Will

  • dude, it's not an album. stop pretending you're not a geek.
