Covalent's Version of Apache 2.0 To Drop Monday
kilaasi points out this CNET story about the planned release on Monday of Apache 2.0, "or at least the version that has proprietary extensions. Covalent sells the core of Apache plus its own extensions, which make it easier to adapt for specific areas and simpler to administer. Covalent is confident that the next-generation Apache is mature and ready for prime time. Covalent employs some of the core members of the Apache development team." XRayX adds a link to Covalent's press release, writing: "It's not clear when the Open Source Edition (or whatever) will come out, and I didn't find anything at the official Apache site." Update: 11/10 16:37 GMT by T: Note that the product name is Covalent Enterprise Ready Server; though it's based on Apache software, this is not Apache 2.0 per se. Thanks to Sascha Schumann of the ASF for the pointer.
Static PHP + scripts running as users (Score:5, Informative)
There's no way to have PHP scripts run as different users (the way suexec does when spawning external CGI programs).
Sure, PHP has a so-called "safe mode", but it's still not that secure, especially when it comes to creating files or accessing shared memory pages.
I was told that Apache 2.0 has a mechanism that could make user switching for PHP scripts possible. Has anyone experimented with it?
Re: Static PHP + scripts running as users (Score:3, Informative)
Actually, there is. You have to use PHP in CGI mode, where it ISN'T compiled into Apache as a module. I've never used it in that mode myself (I only have one simple PHP script on my entire server); however, a search on google for php+suexec [google.com] turns up some info. Apparently, CGI mode does work, but not quite as well as module mode (some people seem to indicate that it runs like a dog).
Re:Static PHP + scripts running as users (Score:1)
If you are interested in this, email me.
Re:Static PHP + scripts running as users (Score:1)
But we run PHP scripts the same way we run CGI written in C, Python, Perl, etc.
Re:Static PHP + scripts running as users (Score:4, Informative)
You may also be able to compile PHP as a FastCGI program; you could then run several external FastCGI processes as different users and configure Apache to route a particular script to a particular FastCGI process. I have no idea how to do this with Apache, as I use Zeus [zeus.com] myself.
If Apache 2 does have a way to switch users for PHP scripts, it will be hard to make secure. Under UNIX, once you have dropped your privileges you can never regain them. The workaround is the distinction between 'real' and 'effective' users: as long as you only change your effective user, you can regain privileges later, but then anything running in the process can regain them too. You can also only switch to arbitrary users while you are root. This would be a big security hole, in that if there were a buffer overflow attack, root could trivially be obtained by anyone.
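A minimal C sketch of the real/effective UID mechanics described above, assuming the process starts as root (UID 1000 is an arbitrary unprivileged user; error handling trimmed for brevity):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Temporarily become UID 1000: only the effective UID changes,
     * so the real UID (0) is kept and privileges can be regained. */
    seteuid(1000);
    printf("euid=%d ruid=%d\n", (int)geteuid(), (int)getuid());

    /* Anything running in this process can do the same -- which is
     * exactly the hole described above. */
    seteuid(0);

    /* setuid() as root drops privileges permanently: after this,
     * seteuid(0) fails and root cannot be regained. */
    setuid(1000);
    if (seteuid(0) == -1)
        perror("seteuid(0) after setuid(1000)");
    return 0;
}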
security, performance, configurability - pick 2
Re:Static PHP + scripts running as users (Score:1)
You can see more about MPMs here [apache.org]
Re:Static PHP + scripts running as users (Score:2)
When testing different adapters for an application server I was playing with, there were persistent versions written in Python, for use with mod_python/mod_snake -- the adapters were essentially small scripts that contacted the application server. Those persistent Python versions were actually slower than an equivalent C CGI program. Of course, the C version built as an Apache module was somewhat faster, but both were at the point where neither was a significant bottleneck. So CGI can be pretty fast.
You can actually do what is essentially CGI through PHP too -- if you have something that needs to be run suid, then run it through system() (which loads up a shell, which is annoying and slow) or some other way (I don't know of a way to call a program directly in PHP...?)
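(At the C level, "calling a program directly" means fork() plus one of the exec functions, with no shell in between -- a minimal sketch, with a made-up mailer path and arguments:)

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: replace the process image directly; no /bin/sh
         * is involved, unlike system("mailer -q"). */
        char *argv[] = { "mailer", "-q", NULL };
        execv("/usr/local/bin/mailer", argv);
        perror("execv");   /* only reached if exec fails */
        _exit(127);
    }
    waitpid(pid, NULL, 0); /* parent waits for the child */
    return 0;
}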
Or you can go the FastCGI (or FastCGI-like) direction, where you have a sub-server that handles certain requests. I don't know how easy that is to do in PHP -- it's very useful to have object serialization at that point, and I don't think PHP has that (?)
At $1495 per CPU (Score:4, Funny)
Re:At $1495 per CPU (Score:4, Informative)
Re:At $1495 per CPU (Score:1, Troll)
Apache has released 2.0 betas (Score:1, Informative)
Here is the Apache 2.0 documentation [apache.org], and you can download [apache.org] 2.0.16 (public beta) or 2.0.18 (an alpha). But what do you want them to open-source? The 2.0 core (it is) or the proprietary enhancements (yeah, right)?
Kenny
at least slashdot didn't change my urls into http://slashdot.org/httpd.apache.org this time.
Re:Apache has released 2.0 betas (Score:5, Informative)
Apache Week has more information [apacheweek.com] on this:
Re:next generation Apache ready for prime time?? (Score:2)
Re:next generation Apache ready for prime time?? (Score:2)
*cough*Netscape*cough* Though I use Mozilla as my primary browser and love it, NS 6.00 off M1x was still a bonehead move IMHO.
Time warp? (Score:5, Funny)
SAN FRANCISCO -- November 12, 2001 -- In conjunction with the launch of Enterprise Ready Server, Covalent Technologies today announced a coalition of support for its new enterprise solution for the Apache Web server.
Is this a little bit confusing, or what? I mean, I had a meeting on Monday the 12th... well... which I don't recall yet.
Re:Linux is evil! (Score:1)
Can threads really beat fork(2)? (Score:3, Interesting)
Re:Can threads really beat fork(2)? (Score:1)
However, a module for Apache 2.0 probably would want to be thread-aware to avoid requiring that the admin use the 1.3-style processing model.
On some platforms threads won't beat fork for speed, but the total virtual memory usage for a threaded Apache deployment should certainly be less than for a non-threaded deployment on any platform. For most people this is a non-issue, but in some environments Apache 1.3 is a big problem because of the memory footprint of the zillions of processes it requires.
Re:Can threads really beat fork(2)? (Score:1)
Care to substantiate this claim? fork() generally dupes the current process in memory, an expensive operation. Threads incur no such cost, instead relying upon a simple, lightweight thread object to manage execution and, in the case of servers and servlets, utilizing already-instantiated server objects to execute.
Re:Can threads really beat fork(2)? (Score:1)
Re:Can threads really beat fork(2)? (Score:1)
But forked or pre-forked, each process, which will handle only one "hit" at a time, carries the same memory burden as a full Apache process (coz that's what it is).
Now compare this to the threaded version, where threads are objects, minuscule next to an Apache process, and where many of the other objects used by a thread are reused, not regenerated.
My experience in running Apache servers is that memory is consumed before bandwidth or processor... with threads it'll be CPU first, coz you'll be able to handle a much higher number of concurrent requests.
The earlier point about thread-based Apache being more vulnerable to a dying process than process-based Apache *is* true, so maybe a mix of processes and threads will give some margin of fail-safety. Don't run all server threads under just one process; have multiple processes, if that's possible.
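A minimal C sketch of that mix, under POSIX (compile with -pthread): a few processes for crash isolation, each running a few threads for cheap concurrency. The counts and the empty worker are placeholders:

#include <pthread.h>
#include <sys/wait.h>
#include <unistd.h>

#define PROCS   4   /* crash isolation between these */
#define THREADS 8   /* cheap concurrency within each */

static void *serve(void *arg)
{
    /* handle one connection at a time here */
    return NULL;
}

int main(void)
{
    for (int p = 0; p < PROCS; p++) {
        if (fork() == 0) {
            /* Child process: a crash here kills only its own threads. */
            pthread_t t[THREADS];
            for (int i = 0; i < THREADS; i++)
                pthread_create(&t[i], NULL, serve, NULL);
            for (int i = 0; i < THREADS; i++)
                pthread_join(t[i], NULL);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)  /* restart logic would go here */
        ;
    return 0;
}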
mod_perl (Score:2, Informative)
AFAIK Apache's API has changed, and indeed all its modules will need to be rewritten for the new Apache.
I don't know about all modules, but here's some info about mod_perl. There already exists a rewrite [apache.org] of mod_perl for Apache 2.0 with thread support. It has many tasty features. Check [apache.org] for yourself.
Re:Can threads really beat fork(2)? (Score:5, Insightful)
However, process handling is potentially more expensive, since processes have separate address spaces and require special mechanisms for communication between those address spaces. From the point of view of system resources and scalability you are better off with threads than with processes. Typically the number of threads an OS can handle is much larger than the number of processes it can handle. With multi-processor systems becoming more prevalent, multithreaded systems are required to use all the processors effectively and distribute the load evenly.
The primary reason you would want to use processes anyway is stability. When the mother process holding a bunch of threads dies, all its threads die too. If your application consists of 1 process and 1000 threads, a single thread can bring down the entire application. At the process level, the OS shields each process's address space from the other processes, which gives you some level of protection against misbehaving processes. Running Apache as multiple processes therefore gives you some protection: if one of the httpd processes dies, the other processes can take over and continue to handle requests.
The use of high-level languages & APIs (e.g. Java and its threading facilities) addresses these stability issues and makes it safer (not perfectly safe) to use threads. Java, for instance, offers memory management facilities that basically prevent such things as buffer overflows or illegal memory access. This largely removes the need for the kind of memory protection an OS offers for processes.
Apache 2.0 is specifically designed to be more scalable than the 1.3.x series, and threading is a key architectural change in this respect. Sadly, it is not written in Java, which, contrary to what some people on Slashdot believe, is very capable of competing with lower-level languages in this type of server application. Presumably the Apache developers are using a few well-developed C APIs to provide some protection against stability issues.
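A minimal C sketch of the difference, under POSIX (compile with -pthread): threads see one shared address space, while a forked child only gets its own copy. The worker logic is made up for illustration.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKERS 4

static int results[WORKERS];    /* lives in the one shared address space */

static void *thread_worker(void *arg)
{
    int idx = (int)(long)arg;
    results[idx] = idx * idx;   /* each thread writes its own slot */
    return NULL;
}

int main(void)
{
    pthread_t tids[WORKERS];
    int i;

    /* Thread model: one process, WORKERS threads, shared memory.
     * pthread_join() also synchronizes, so reading results[] is safe. */
    for (i = 0; i < WORKERS; i++)
        pthread_create(&tids[i], NULL, thread_worker, (void *)(long)i);
    for (i = 0; i < WORKERS; i++)
        pthread_join(tids[i], NULL);
    for (i = 0; i < WORKERS; i++)
        printf("thread %d wrote %d\n", i, results[i]);

    /* Process model: a forked child writes to its *own copy*, so the
     * parent sees nothing without explicit IPC (pipes, shm, etc.). */
    if (fork() == 0) {
        results[0] = 9999;
        _exit(0);
    }
    wait(NULL);
    printf("after child wrote 9999, parent still sees %d\n", results[0]);
    return 0;
}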
Re:Can threads really beat fork(2)? (Score:2, Funny)
try {
    If your application consists of 1 process and 1000 threads, a single thread can bring down the entire application
}
catch (IllegalFUDOperation excep) {
    Only if you're not on top of your exception handling!
}
Re:Can threads really beat fork(2)? (Score:2, Informative)
This makes it sound as if the two models have equivalent obstacles, and that neither is easier than the other. It's true that separate processes are used for stability reasons, but that stability isn't gained only because one process can crash without taking all other processes with it. The main problem with threads that doesn't exist with processes is with shared memory. All variables on the heap can potentially be accessed by two threads at any given time, and access to them must be synchronized. Bugs related to these race conditions can be very hard to track down, and many people would rather forego the problem entirely and just use processes.
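A minimal C sketch of such a race and the mutex that serializes it (compile with -pthread); the counter and iteration counts are arbitrary:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *unsafe_worker(void *arg)
{
    for (int i = 0; i < 100000; i++)
        counter++;   /* read-modify-write race: updates can be lost */
    return NULL;
}

static void *safe_worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;   /* serialized: no lost updates */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, unsafe_worker, NULL);
    pthread_create(&b, NULL, unsafe_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("unsynchronized: %ld (expected 200000, often less)\n", counter);

    counter = 0;
    pthread_create(&a, NULL, safe_worker, NULL);
    pthread_create(&b, NULL, safe_worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("with mutex:     %ld\n", counter);
    return 0;
}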
Re:Can threads really beat fork(2)? (Score:3, Insightful)
My whole point is that with high-level languages, such as Java, the language encapsulates most of the complexity of dealing with synchronization. Java does not have a process concept other than the (typically single) JVM process that hosts all the threads.
Strong typing and OO further enhance stability and consistency. Emulating such mechanisms in a language like C is hard and requires intimate knowledge of parallel programming and discipline from the programmers.
That is why multithreading wasn't very popular until very recently. Only since the 2.2 and 2.4 Linux kernels were introduced has threading become feasible in terms of performance. Using the new threading features requires that you think beyond the heap as a central storage facility for data. In Java the heap is something the JVM uses to store and manage objects. At the programming level you only have objects. Objects are referred to by other objects (which may be threads) and may refer to or create objects themselves. Access to the data in the objects is done through accessor methods, and where applicable you make those methods synchronized (i.e. you include the synchronized keyword in the method signature or employ a synchronized code block somewhere) to ensure no other objects interfere.
Each time you employ (or should employ) a synchronization mechanism, you would have had to employ a similar mechanism if you had been using processes. The only difference is that that mechanism would probably be much more expensive to use, since you are accessing data across address-space boundaries.
With this in mind, the use of processes is limited to situations where there is little or no communication between the processes. Implementing such software using threads should be dead simple, since you will have only a few situations where threads access each other's data, so there is little real risk of race conditions. Those situations you can deal with using well-designed APIs and by avoiding dirty pointer arithmetic. A company I have worked with that writes large embedded software systems for an OS without memory protection between processes has successfully built a rock-solid system this way in C++. By their own account they have encountered very few race conditions in their system. My guess is that the Apache people have employed similar techniques and coding guidelines to avoid the kind of bugs you are talking about.
So if you are encountering race conditions in your code, using processes rather than threads won't solve your problem, because you still need to synchronize access to data. You can do that more cheaply with threads than with processes.
Re:Can threads really beat fork(2)? (Score:1)
Threads do have their place--whenever you need concurrency and a large amount of data needs to be shared, go with them. But saying that you should use them when you have largely independent tasks which don't share data is silly. That's exactly what processes are for, and you eliminate any risk of threads stomping on each other. If you need to have thousands of them, maybe you should look into threads, but it would probably be best to check your algorithm. Any time you think you need huge numbers of processes or threads, you'd best think again. Context switches are going to kill whether you're using threads or processes.
Re:Can threads really beat fork(2)? (Score:2)
If you have a lot of independent tasks which don't share data, you use threads because that will give you a more scalable system. Of course your system will be riddled with bugs if you start doing all sorts of pointer arithmetic, which in general is a bad idea (even on non-distributed systems). If two threads are accessing the same data, they are sharing it; if they shouldn't be, it's a bug. The only reason processes are useful is that they force you to employ methods other than raw pointers to access shared data (so if you create a bug by doing funky pointer arithmetic, it will only affect one process).
Multithreaded applications are known to scale to several thousands of threads on relatively modest hardware. Context switches typically occur when different threads/processes on the same processor are accessing different data. Context switching for processes is more expensive than for threads on modern operating systems.
You are calling me silly for recommending threads as a good alternative to processes in situations that require scalability. Yet, IMHO, this is exactly the reason why Apache 2.0 is using threads.
Re:Can threads really beat fork(2)? (Score:1)
Everything you're saying makes sense on a system where processes really are heavyweight monsters. On Linux, processes and threads are much more similar. The difference is copy-on-write semantics for memory pages. Up until you actually modify a page, it is shared by child and parent. This means that using processes instead of threads doesn't automatically mean that you're grossly increasing memory needs.
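A minimal C sketch of those semantics; the 64 MB figure is arbitrary. The kernel copies only the pages the child actually writes, not the whole buffer:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BIG (64 * 1024 * 1024)

int main(void)
{
    /* Allocate and touch 64 MB so the pages really exist. */
    char *buf = malloc(BIG);
    memset(buf, 'x', BIG);

    /* fork() does not copy these 64 MB up front: parent and child
     * share the pages read-only until one of them writes. */
    pid_t pid = fork();
    if (pid == 0) {
        /* Reading costs nothing extra... */
        char c = buf[0];
        /* ...but this write faults in a private copy of just the
         * page actually touched, not the whole 64 MB. */
        buf[0] = c + 1;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees '%c' at buf[0]\n", buf[0]);
    free(buf);
    return 0;
}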
Re:"Sadly Apache is not written in Java" ??? (Score:2)
And then of course there are servlets and servlet engines which are used to run complex, large websites. So it is possible.
Re:Can threads really beat fork(2)? (Score:1)
Threads programming is made hard when you are communicating between threads, or when a thread goes haywire and overwrites another thread's memory regions. The former is not a large issue for most C or (especially) mod_perl Apache modules, since they don't try to share state. These should port rather easily to a multithreaded environment.
The real issue is for C modules that get a little funky with the 1.3 (or older) API: there's a *lot* new under the hood in Apache 2.0, and such modules may require a complete rewrite. Many will only require minor rewrites, though complete rewrites to leverage Apache 2.0's input and output filters will be quite beneficial. Imagine writing a filter module that can alter content retrieved by the new mod_proxy, with the content optionally cached locally before or after the filter alters it!
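For the curious, a minimal sketch of what a 2.0 output filter looks like in C. The filter and module names here are made up, and the registration API was still settling during the betas, so take the exact signatures with a grain of salt:

#include "httpd.h"
#include "http_config.h"
#include "util_filter.h"

/* A do-nothing output filter: a real one would walk the buckets in
 * bb and transform the content before handing it on. */
static apr_status_t passthru_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    return ap_pass_brigade(f->next, bb);
}

static void register_hooks(apr_pool_t *p)
{
    ap_register_output_filter("PASSTHRU", passthru_filter, NULL,
                              AP_FTYPE_RESOURCE);
}

module AP_MODULE_DECLARE_DATA passthru_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,  /* no per-dir/server config, no cmds */
    register_hooks
};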
Debugging is often more difficult with threads, but there are command-line options to make it easier, and there's always the option of compiling with the prefork MPM.
Yes, many modules and C libraries are not thread-safe; this will be a source of painful problems for advanced modules for years to come. But most modules should port relatively painlessly, and many people don't go beyond the modules that ship with Apache; those, of course, are already being ported and debugged.
The prefork MPM is likely to be safer in the face of memory bugs and deadlock issues, due to the isolation the OS imposes between processes, but is likely to be slower than the threaded MPMs on many platforms.
FWIW, mod_perl 2.0 is shaping up very nicely and perl5 seems to be resolving most of the major obstacles to full, safe multithreading in a manner that will prevent unexpected variable sharing problems (all variables are thread-local unless specified otherwise). mod_perl 2.0 boots and runs multithreaded now, and as soon as the core Perl interpreter becomes more threadsafe, it should be ready for trial use.
At least one mod_perl production site has been tested on mod_perl 2.0 (though not in production).
Life's looking good for Apache 2.0 and mod_perl 2.0.
Re:Can threads really beat fork(2)? (Score:2)
As far as rewritten modules go, some of them will need to be, as modules will now also need to be usable as filters. With Apache 2.0, it's possible to use the output of one module as the input to another module, such as running the output from mod_php through mod_include and then through mod_rewrite. Really cool stuff!
The major modules have already been rewritten. The API is changed as well, to give it more power, such as a filename to filename hook. (Finally!)
I believe he said something about 1.3 modules still being usable, but only in the old way, not as filters. But I am not completely sure that is what he said. (He talks insanely fast! Even sitting next to him I sometimes had trouble keeping up with his accent. Not his fault; I just haven't talked to a lot of people from the Netherlands, so I'm not used to it.)
Re:Socialism doesn't work! (Score:1)
BSD-licensed projects rely on the goodwill of people contributing patches back to the free version, rather than profiting themselves by keeping their changes proprietary. That is more naive than the GPL. In fact, it is contrary to economic logic.
It's naive open-source zealots who say that giving away software is supposed to be profitable. I agree that these people are insane. Obviously it's not profitable, but that doesn't mean free software is any less viable than, say, public radio.
Re:BSD License (Score:1)
Strange wording? (Score:2)
Is it just me, or does this "or whatever" kind of attitude strike you as strange? Granted, Apache has been seriously draggin' ass on 2.0 and I can see folks getting a little anxious to have it out already...
Re:Strange wording? (Score:2, Insightful)
The 1.3 tree is getting very long in the tooth, and patches are pretty much rejected because "work is now in the 2.0 tree." The way the ASF is playing it, they will cause the Open Source community to lose the web server biz.
The silly politics alone that keep SSL and EAPI out of the core, plus the three different methods of compiling Apache, are enough to make sure it is doomed. Why has IIS taken over the SSL market? Because it ships with EAPI.
It's really sad.
-1 FUD (Score:2, Informative)
The article [apacheweek.com] in question says nothing of the sort. It notes that the development processes of Apache have changed over the years, with associated wins and losses.
Why has IIS taken over the SSL market? Because it ships with EAPI.
Thanks for the laugh.
Apache 2.0 is *not* out on Monday (Score:4, Interesting)
Mark Cox, Red Hat
Powered by NSPR! (Score:1)
Re:Powered by NSPR! (Score:1)
http://apr.apache.org/ [apache.org]
Is Apache 2.0 ready ?? (Score:2)
Re:Is Apache 2.0 ready ?? (Score:3, Insightful)
Yes, we're way overdue releasing Apache 2.0 as a GA (we started thinking about 2.x in 1997), but that is a testament to our quality - we will NOT release Apache 2.0 as a general availability release until we are all satisfied that it meets our expectations. "It's ready when it's ready."
We have a very good stable product in Apache 1.3. We must match the quality expectations we've set for ourselves in the past. And, almost everyone in the group is keenly aware of that.
Sounds to me.. (Score:1)
(Security through obscurity does not work, so I'm trying humor through obscurity.)
I'll admit I'm not versed in marketroid speak, but this caught my attention:
"Covalent has taken a great web server -- Apache -- and added key functionality that enhances enterprise customers' experience."
What this says to me is "Apache kicks ass; now any idio^H^H^H^Henterprise customer can use it with our new point-and-click GUI!"
(shaking head)
A few minutes on freshmeat.net, dudes, would probably solve most of your problems if you're looking for a GUI to configure this stuff.
If that's not the case, well, my programming days are over, and the comments on the trade-offs in what Covalent is doing just leave me hoping it doesn't reflect badly on Apache.
Re:does anyone know if the newest beta of apache2 (Score:2, Informative)
Even the prefork (non-threaded) MPM with a thread-safe APR doesn't work right on FreeBSD... if I recall correctly, the parent process was eating lots of CPU in some sort of signal code...
crippled free versions -- Covalent and VA (Score:2, Insightful)
Covalent funds a great deal of Apache development directly, as well as contributing board members and other members to the Apache Software Foundation. It's clearly not doing this primarily to help the open source version of Apache along, but to advance its own proprietary version of Apache. Eventually Apache 2.0 may come out in an open source version, but it doesn't seem to be a priority of the main contributor to Apache to make that happen. A conspiracy-theory approach might even suggest that they are deliberately applying a flawed, destabilizing model to the open source tree (commit then review, no feature freeze) while presumably they use a tighter and more controlled process to get the proprietary version out.
People have suggested that the internal versions of GNAT distributed in a semi-proprietary way by ACT may be better than the open source versions, while ACT says the opposite -- that their private versions are less tested, require technical support, and would only hinder those who don't have support contracts. I don't know the truth of the matter there, and this is not meant to point the finger at ACT, but this phased-release strategy by Covalent raises some of the same questions.
VA's proprietary SourceForge conjures a similar spectre. There will still be a free SourceForge, but improvements are going primarily into the proprietary version. Perhaps outside engineers will start playing catch-up and adding clones of the proprietary features to an open source branch of SourceForge, but at best the open source version will still lag behind, and it may happen that it will always be so far behind as to be relatively crippled compared with the proprietary version.
Is open source heading toward a model where some of its dominant programs are available for free only in crippled versions lagging behind the proprietary releases? And if so, what does that say about unpaid volunteer contributions? Are they really for the public benefit, or for the benefit of a proprietary developer? If the latter, why volunteer?
Other problems with crippled free versions have been noted here before, such as having to pay for documentation on ostensibly free software, or needing a proprietary installer to effectively install a supposedly free system. This week's events involving VA and Covalent show that this may be becoming a trend with significant impact on the whole open source and free software movement.
Tim
Re:crippled free versions -- Covalent and VA (Score:2)
I think that's far from certain. One of the premises of the BSD license is that even if someone does take the code and release a proprietary fork, the Open Source model has enough advantages that the community should be able to keep up and even surpass them.
That seems likely to happen at some point.
Re:crippled free versions -- Covalent and VA (Score:2)
I don't think there's any historical evidence for the popular idea that open source software improves faster than proprietary software. As this post [slashdot.org] from an IBM open source developer points out, there are serious management overheads and inefficiencies associated with the model.
If managing engineers under normal conditions is like herding cats, open source development is like harnessing a swarm of bees.
Tim
Re:crippled free versions -- Covalent and VA (Score:2)
I doubt that. As an active Apache developer who doesn't really have any ties to a company with a vested interest in Apache, I work with the Covalent people every day. And, I doubt that the open-source version of Apache HTTPD will lag behind any version that Covalent or IBM has. In fact, I bet that the version that Covalent will release on Monday will include some bugs that have already been fixed in the open-source version.
Where I think companies like Covalent come in is to support corporations that *require* support. Their price ($1495/CPU or something like that) isn't targeted towards people who would be interested in the open-source version, but for corporations that can't ever afford to have their web server go down.
Covalent also offers some freebies (such as mod_ftp). I think under Apache 2.0, it is sufficiently easy for someone to come in and write a module that handles FTP. It's just that no one has had the inclination to write one. And, I bet if someone did, it just might eventually be better than the one Covalent wrote.
VA is a little different from Covalent as, IIRC, they are the sole owners of Sourceforge, but Covalent is just a part of the Apache community (an active one though).
I work on what I want to work on. People who work at Covalent have a "direction" on things to work on. As an unpaid volunteer, I get to work on whatever I feel like at the moment. I'll take that any day of the week. But, there is a definite value to getting paid to work solely on Apache.
FWIW, I believe this is definitely not the case with Apache. The docs are freely available and the Win32 installer is one donated by IBM (I think, I forget - someone donated it).
Covalent sucks with UCE! (Score:1)
Subject: Buy Covalent's Apache Web Server and Get a FREE Entrust Certificate
I can tell because I use unique email addresses for everyone.
Re:Covalent sucks with UCE! (Score:1)
--Will
"drops" monday? (Score:1)