Apache Software

Covalent's Version of Apache 2.0 To Drop Monday

kilaasi points out this CNET story about the planned release on Monday of Apache 2.0, "or at least the version that has proprietary extensions. Covalent sells the core of Apache plus its own extensions, which make it easier to adapt for specific areas and simpler to administer. Covalent is confident that the next-generation Apache is mature and ready for prime time. Covalent employs some of the core members of the Apache development team." XRayX adds a link to Covalent's press release, writing: "It's not clear when the Open Source Edition (or whatever) will come out, and I didn't find anything at the official Apache site." Update: 11/10 16:37 GMT by T: Note that the product name is Covalent Enterprise Ready Server; though it's based on Apache software, this is not Apache 2.0 per se. Thanks to Sascha Schumann of the ASF for the pointer.
  • by jilles ( 20976 ) on Saturday November 10, 2001 @11:47AM (#2548437) Homepage
    Programming with threads is just as hard as programming with processes at a conceptual level. The types of problems you encounter are the same.

    However, process handling is potentially more expensive, since processes have separate address spaces and require special mechanisms for communication between those address spaces. From the point of view of system resources and scalability, you are better off with threads than with processes. Typically the number of threads an OS can handle is much larger than the number of processes it can handle. With multiprocessor systems becoming more prevalent, multithreaded designs are needed to use all the processors effectively and distribute the load evenly.

    The primary reason why you would still want to use processes is stability. When the parent process holding a bunch of threads dies, all of its threads die too: if your application consists of 1 process and 1000 threads, a single thread can bring down the entire application. At the process level, the OS shields each process's address space from the other processes, which gives you some protection against misbehaving processes. Running Apache as multiple processes therefore gives you some protection: if one of the httpd processes dies, the other processes can take over and continue to handle requests.

    The use of high-level languages and APIs (e.g., Java and its threading facilities) addresses these stability issues and makes it safer (though not perfectly safe) to use threads. Java, for instance, offers memory management facilities that essentially prevent things like buffer overflows and illegal memory accesses. This largely removes the need for the kind of memory protection an OS offers for processes (see the small sketch at the end of this comment).

    Apache 2.0 is specifically designed to be more scalable than the 1.3.x series, and threading is a key architectural change in this respect. Sadly it is not written in Java, which, contrary to what some people on Slashdot believe, is quite capable of competing with lower-level languages for this type of server application. Presumably the Apache developers are using a few well-developed C APIs to provide some protection against stability issues.
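
    For illustration only, here is a minimal Java sketch of the memory-safety point above (class and variable names are hypothetical, nothing from Apache or Covalent). A bad array access surfaces as an exception confined to the offending code path instead of corrupting the process the way a stray pointer write in C could:

        public class SafeWorkerDemo {
            public static void main(String[] args) throws InterruptedException {
                Runnable badAccess = new Runnable() {
                    public void run() {
                        int[] buffer = new int[8];
                        try {
                            buffer[42] = 1; // out of bounds: throws, never corrupts memory
                        } catch (ArrayIndexOutOfBoundsException e) {
                            System.err.println("illegal access caught: " + e);
                        }
                    }
                };
                Thread worker = new Thread(badAccess);
                worker.start();
                worker.join();
                System.out.println("the rest of the application keeps running");
            }
        }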
  • by Anonymous Coward on Saturday November 10, 2001 @12:44PM (#2548553)
    It's not going to happen. Look at Ken Coar's editorial in the last Apache Week. The ASF is spinning its wheels at this point. One person will go in to fix a single bug and instead rewrite an entire subsystem (the URL parser, for instance). They fix one bug but create several more. They have no concept of a code freeze.
    The 1.3 tree is getting very long in the tooth, and patches are pretty much rejected because "work is now in the 2.0 tree". The way the ASF is playing it, they will cause the Open Source community to lose the web server business.
    The silly politics alone that keep SSL and EAPI out of the core, and that leave three different methods of compiling Apache, are enough to make sure it is doomed. Why has IIS taken over the SSL market? Because it ships with SSL, while Apache makes you patch in EAPI.
    It's really sad.
  • by jilles ( 20976 ) on Saturday November 10, 2001 @02:10PM (#2548732) Homepage
    Shared data is inevitable in distributed systems. If you isolate each process's data using memory protection, there has to be some means of transferring data from one process to another (e.g., pipes). Such mechanisms are typically cumbersome and make context switches expensive.

    My whole point is that with high-level languages such as Java, the language encapsulates most of the complexity of dealing with synchronization. Java does not have a process concept other than the (typically single) JVM process that hosts all the threads.

    Strong typing and OO further enhance stability and consistency. Emulating such mechanisms in a language like C is hard and requires intimate knowledge of parallel programming and discipline from the programmers.

    That is why multithreading wasn't very popular until quite recently. Only since the introduction of the 2.2 and 2.4 Linux kernels has threading become reasonably feasible in terms of performance. Using the new threading features requires that you think beyond the heap as a central storage facility for data. In Java the heap is something the JVM uses to store and manage objects; at the programming level you only have objects. Objects are referred to by other objects (which may be threads) and may refer to or create objects themselves. Access to the data in an object goes through access methods, and where applicable you make those methods synchronized (i.e., you include the synchronized keyword in the method signature or employ a synchronized code block somewhere) to ensure no other threads interfere (see the sketch at the end of this comment).

    Each time you employ (or should employ) a synchronization mechanism, you would have had to employ a similar mechanism if you had been using processes. The only problem is that such a mechanism would probably be much more expensive to use, since you would be accessing data across address-space boundaries.

    With this in mind, the use of processes is limited to situations where there is little or no communication between the processes. Implementing such software using threads should be dead simple, since there will be only a few places where the threads access each other's data, so there is little real risk of race conditions. Those situations can be handled with well-designed APIs and by avoiding dirty pointer arithmetic. A company I have worked with, which writes large embedded software systems for an OS without per-process memory protection, has successfully built a rock-solid system this way in C++. By their own account they have encountered very few race conditions in their system. My guess is that the Apache people have employed similar techniques and coding guidelines to avoid the kind of bugs you are talking about.

    So if you are encountering race conditions in your code, using processes rather than threads won't solve your problems because you still need to synchronize data. You can do so more cheaply with threads than with processes.
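
    As a concrete (and purely illustrative) sketch of the synchronization mechanism described above, here is a minimal Java example with hypothetical names. Two threads update one shared object through a synchronized access method; the JVM guarantees that only one thread executes the method at a time, with no cross-process IPC involved:

        public class SharedCounter {
            private long value = 0;

            public synchronized void increment() { // only one thread at a time
                value = value + 1;
            }

            public synchronized long get() {
                return value;
            }

            public static void main(String[] args) throws InterruptedException {
                final SharedCounter counter = new SharedCounter();
                Runnable task = new Runnable() {
                    public void run() {
                        for (int i = 0; i < 100000; i++) {
                            counter.increment();
                        }
                    }
                };
                Thread a = new Thread(task);
                Thread b = new Thread(task);
                a.start();
                b.start();
                a.join();
                b.join();
                System.out.println(counter.get()); // always 200000 with synchronization
            }
        }

    Without the synchronized keyword on increment(), the two threads' read-modify-write steps could interleave and the final count would come out short; that is exactly the race condition the access method guards against.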
  • by tim_maroney ( 239442 ) on Saturday November 10, 2001 @05:38PM (#2549177) Homepage
    The release announcement by Covalent on top of this week's announcement of a proprietary version of SourceForge by VA [2001-11-06 20:04:54 VA Embraces Closed Source (articles,va) (rejected)] should have us all wondering where things are heading during this period of revision for open source business models. Are we headed for a world where ostensibly free programs are deliberately crippled relative to proprietary versions of the same code?

    Covalent funds a great deal of Apache development directly, as well as contributing board members and other members to the Apache Software Foundation. It's clearly not doing this primarily to help the open source version of Apache along, but to advance its own proprietary version of Apache. Eventually Apache 2.0 may come out in an open source version, but it doesn't seem to be a priority of the main contributor to Apache to make that happen. A conspiracy-theory approach might even suggest that they are deliberately applying a flawed, destabilizing model to the open source tree (commit then review, no feature freeze) while presumably they use a tighter and more controlled process to get the proprietary version out.

    People have suggested that the internal versions of GNAT distributed in a semi-proprietary way by ACT may be better than the open source versions, while ACT says the opposite -- that their private versions are less tested, require technical support, and would only hinder those who don't have support contracts. I don't know the truth of the matter there, and this is not meant to point the finger at ACT, but this phased-release strategy by Covalent raises some of the same questions.

    VA's proprietary SourceForge conjures a similar spectre. There will still be a free SourceForge, but improvements are going primarily into the proprietary version. Perhaps outside engineers will start playing catch-up and adding clones of the proprietary features to an open source branch of SourceForge, but at best the open source version will still lag behind, and it may happen that it will always be so far behind as to be relatively crippled compared with the proprietary version.

    Is open source heading toward a model where some of its dominant programs are available for free only in crippled versions lagging behind the proprietary releases? And if so, what does that say about unpaid volunteer contributions? Are they really for the public benefit, or for the benefit of a proprietary developer? If the latter, why volunteer?

    Other problems with crippled free versions have been noted here before, such as having to pay for documentation on ostensibly free software, or needing a proprietary installer to effectively install a supposedly free system. This week's events involving VA and Covalent show that this may be becoming a trend with significant impact on the whole open source and free software movement.

    Tim
  • by Jerenk ( 10262 ) on Saturday November 10, 2001 @06:03PM (#2549254) Homepage
    At this point, I would judge the current httpd-2.0 codebase as beta-quality. There have been lots of improvements made to the Apache 2.0 codebase since 2.0.16 was released - I would expect that we have a much better codebase now than what was in 2.0.16. I would expect you to have an even better experience with our next release, whenever it occurs (or you may use CVS to obtain the up-to-the-minute version!).

    Yes, we're way overdue releasing Apache 2.0 as a GA (we started thinking about 2.x in 1997), but that is a testament to our quality - we will NOT release Apache 2.0 as a general availability release until we are all satisfied that it meets our expectations. "It's ready when it's ready."

    We have a very good, stable product in Apache 1.3. We must match the quality expectations we've set for ourselves in the past. And almost everyone in the group is keenly aware of that.

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...