Apache Software

Apache Server Nears 2.0

An Anonymous Coward writes: "The Apache httpd project has released a new beta of its Apache 2.0 server (2.0.32)." For those who have not been following 2.0 development, this is the third beta that has been produced. The new version of Apache sports the new APR API and a new method for filtered I/O, and has been rewritten to use a hybrid thread/process model. With Covalent already selling a commercial version of 2.0, hopefully we will see a full release of the open source version in the near future.
  • Covalent (Score:1, Redundant)

    by reaper20 ( 23396 )
    Isn't Covalent selling an 'Apache 2.0' product? Does anyone have any experience with it?

    I'd like to know the changes between their version and the 'official' version. It'd be interesting to note which features/bugfixes the Apache Foundation felt were worth waiting for.
    • Haha, not only did you not bother reading the article (links), you didn't even finish reading the post!!

      "With Covalent already selling a commercial version of 2.0, hopefully we will see a full release of the open source version in the near future." -krow

      well done :)
  • Apache 2.0 Threads (Score:3, Interesting)

    by TurboRoot ( 249163 ) on Tuesday February 19, 2002 @09:47PM (#3035793)
    The main benefit of Apache in the first place is the stability that comes from its fork() model.

    Apache 2.0 brings some nice and interesting new features that only a multithreaded server can offer, but these are all features already available in tons of other web servers.

    Unfortunately, the programmers working on Apache 2.0 don't know how to write thread-safe code. Don't believe me? Go get the source yourself, cuddle up to a POSIX threading book and pull out a 100% correct threading library. (Like the FreeBSD one.) :)

    Example: don't use sleep(3) in a multithreaded application!.. but whatever :)

    What I am basically saying is: I wouldn't use Apache 2.0 in production _yet_. Someday Apache 2.0 will be the model for how a stable multithreaded, multi-protocol server can be written.

    By the way, I normally don't take the time to post, but my moderation and metamoderation privileges were removed because I moderated a post I found interesting... as interesting. (The great Slashdot troll investigation.) About 500 people lost their moderation ability at that time. What a nice brave new world.

    The upside is that I can now say what I truly feel and not care about karma... because this place is a joke. :)
    • by Anonymous Coward on Tuesday February 19, 2002 @10:18PM (#3035889)
      POSIX.1 specifies sleep(3) be both thread-safe and cancellation-safe.
      • by BusterB ( 10791 ) on Wednesday February 20, 2002 @01:42AM (#3036549)
        > POSIX.1 specifies sleep(3) be both thread-safe and cancellation-safe.

        I don't think he's talking about sleep being thread-safe. I think he's talking about using sleep rather than a condition variable and a while loop to wait for access to a shared resource. The problem with using sleep is that it's entirely dependent on system load/speed/alignment of the moon. Code like that assumes that if it waits a certain amount of time, the resource will be free.

        Imagine checking to see if a pool is dry, noticing that it is, coming back later and jumping in without looking. It might be full later, but it's much better to keep looking and not jump until the pool actually has water.

        This type of thing is especially hard to debug when you upgrade your hardware and your software mysteriously fails. Suddenly, you're not sleeping long enough to get an exclusive lock on a shared resource.
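
        To make the point concrete, here's a minimal sketch of the two approaches -- not taken from Apache's source, just an illustration with made-up names -- contrasting sleep-based polling with a condition variable checked in a while loop:

        #include <pthread.h>
        #include <stdbool.h>
        #include <unistd.h>

        /* Hypothetical shared resource guarded by a mutex and a condition variable. */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t freed = PTHREAD_COND_INITIALIZER;
        static bool resource_busy = false;

        /* Fragile: assumes a fixed nap is always long enough for the resource to free up,
         * and the unsynchronized check-then-set is a classic race. */
        void acquire_by_sleeping(void)
        {
            while (resource_busy)
                sleep(1);
            resource_busy = true;
        }

        /* Robust: re-check the predicate in a loop, under the mutex, until it really is free. */
        void acquire_by_waiting(void)
        {
            pthread_mutex_lock(&lock);
            while (resource_busy)
                pthread_cond_wait(&freed, &lock);   /* atomically drops the mutex while blocked */
            resource_busy = true;
            pthread_mutex_unlock(&lock);
        }

        void release_resource(void)
        {
            pthread_mutex_lock(&lock);
            resource_busy = false;
            pthread_cond_signal(&freed);
            pthread_mutex_unlock(&lock);
        }

        The second form never depends on load, speed or the alignment of the moon: the waiter wakes only when signalled, and still re-checks the condition before proceeding.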
    • by ink ( 4325 ) on Tuesday February 19, 2002 @10:47PM (#3035983) Homepage
      Go get the source yourself, cuddle up to a posix threading book and pull out a 100% correct threading library. (Like the FreeBSD one.)

      When did FreeBSD get 100% compliance?

      In addition, NGPT [ibm.com] has been accepted by Linus, and it is very close to 100% compliant as well as providing M:N mapping to scale on multiple processors and giving programmers the choice of kernel or userland threads with standard calls. BSD is great and all, but you guys do way too much chest-pounding.

    • What do you mean about sleep(3) not being thread-safe? I'd like to see more explanation of how you see Apache's code as not thread-safe. You are likely talking out of your arse, and you don't deserve to get moderated to '5'.
      • Simple: a lot of platforms use lightweight userland threads that all reside within one process. When you sleep the process, all the threads sleep.

        Now, those thread libraries are supposed to redirect the sleep calls to a thread-safe way of accomplishing the same thing... but not all of them do.

        Linux never had this problem, because of the clone() system call, which makes threads separate processes that share memory. That isn't necessarily a bad thing, but it makes Linux a risky place to develop threaded applications, because non-thread-safe code will work better on Linux than elsewhere.
    • by Fweeky ( 41046 )
      > a 100% correct threading library. (Like the FreeBSD one.)

      FreeBSD's threading is actually supposed to be rather smelly - just ask on freebsd-hackers or so.

      This is why Apache 2 on FreeBSD is best off sticking with the prefork MPM. The introduction of KSEs in -current will alleviate this, but that's still heavily in development.
      • by Fweeky ( 41046 )
        Should probably have included a link to http://people.freebsd.org/~jasone/kse/ for those who can't be bothered Googling; there is some good stuff about the current threading implementations there too.
    • Of course, the only time threading is going to benefit you in Apache's model is on Windows (where processes are so damn heavy) or on large SMP systems under heavy load.

      The real improvements are things like the output filter chain, which lets you build simple modules that, say, parse an XML file with embedded PHP, pass it off to SSI, and then run it through XSLT.
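
      For reference, that chaining is Apache 2.0's output filter mechanism. A rough httpd.conf sketch of the idea (INCLUDES is the stock mod_include filter; an XSLT filter is assumed to come from a third-party module, so treat that name as illustrative):

      # Run .xml responses through server-side includes, then a (hypothetical) XSLT filter
      AddOutputFilter INCLUDES;XSLT .xml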
  • by Dysan2k ( 126022 ) on Tuesday February 19, 2002 @09:53PM (#3035821) Homepage
    Personally, I don't mind waiting for the Apache project to take its time and do it right. I believe 2.0 isn't bloatware, but a far more modular and extensible version of the world's favorite web server. Personally, I've been waiting for a WHILE to start using it. I'm not sure if PHP4 will compile against it yet. Maybe out of CVS it will.

    With the new threading, it should manage to push out pages a lot faster under load and make better use of the processors. Might have to go download it today. Here's a project for those of you bleeding-edgers out there. I've yet to manage this one myself:

    Apache 2.0 + mod_perl + php4 (with support for MySQL 4.x) + mod_ssl.

    I don't think non-CVS PHP4 will handle MySQL 4.x, but perhaps there are others who know how.

    Back to topic, way to go guys!!
    • -flame- -flame- -flame- OK, I was just kidding. I love PostgreSQL, but even I realize that when you don't need stability, speed, good SQL compliance or ... what was I saying again? -flame- -flame- -flame- Alright, back on topic, I'm pretty sure that you've been able to compile PHP4 for Apache 2.0 for quite a while now (at least the option has been there - maybe it's been broken?).
      • by Fweeky ( 41046 ) on Wednesday February 20, 2002 @12:35AM (#3036378) Homepage
        The APIs are not yet fixed, so they tend to break. You can probably compile CVS PHP against the current beta of Apache 2, but the next time they change something, PHP will most likely track the CVS change, leaving the beta out in the cold again.

        I managed to get mod_php + Apache 2b28 coexisting, but it liked to segfault a lot (even when idle) and always ended up eating 100% CPU. I even managed to add Zend 2 (next-gen PHP engine) to the mix, but, well, I haven't seen Apache fall over so much since I got PHP 4.0.0 to generate 50,000 internal errors on a single script.
    • You can't do that yet. Both mod_perl and mod_php have to be ported to the new Apache 2 API first.
    • Well, I just got CVS php4.2.0-dev to compile against Apache 2, with some tweaking of the php4/configure file that buildconf generates, so the support is there. I tried using PHP 4.1.1 but it was having nothing of the sort; I modified the configure file the same way, but make gave me some error from outer space halfway through the compile, which unfortunately is long gone off my rxvt buffer. :/

      Apache 2 with PHP4 was up and running for a little bit. I hit one PHP page and it worked fine, and I think Apache 2 segfaulted sometime after that; I might have hit the status page first. Now Apache 2 won't even start with PHP4 enabled: no error messages, nothing, and even turning on debug in the error_log gives no messages. It simply doesn't start. If I disable the PHP module it starts fine. Oh well, guess I'll stick with Apache 1.3.23 for the time being.

      Here's a piece of the error_log for anyone interested:
      [Wed Feb 20 13:06:22 2002] [notice] Apache/2.0.32 (Unix) PHP/4.2.0-dev configured -- resuming normal operations
      [Wed Feb 20 13:06:45 2002] [error] [client 216.4.165.11] Invalid method in request ***binary junk here /. hates, this message 4 times, around 4 seconds apart***
      [Wed Feb 20 13:10:59 2002] [notice] Graceful restart requested, doing restart
      [Wed Feb 20 13:11:08 2002] [notice] seg fault or similar nasty error detected in the parent process
      [Wed Feb 20 13:27:52 2002] [notice] Apache/2.0.32 (Unix) configured -- resuming normal operations
  • by Eryq ( 313869 ) on Tuesday February 19, 2002 @09:54PM (#3035828) Homepage
    Many sites use Apache as an application server or to serve dynamic content; e.g., by using mod_perl (to deliver blazingly fast dynamic content generated by Perl scripts), or as a flexible and solid front-end to Java servlet engines like JServ and Tomcat.

    And far from being bloatware, Apache has (at least during 1.*) gotten more modularized over time, making it easier to fine-tune logging, access control, URL rewriting, etc, etc. I don't know squat about 2.x, but I expect good things.

    Just the $0.02 of a Perl/Java hacker who uses it extensively...

  • by blackmateria ( 255470 ) on Tuesday February 19, 2002 @09:56PM (#3035834)

    I've been using Apache 2 on Linux and FreeBSD for about 2 months now (got into it while playing around with Subversion [tigris.org], another project that seems to be making excellent progress), and IMHO it is really going to rock the server world. Some major plusses:

    • ./configure; make; make install (almost). No more APACI, thankfully.
    • APR. It's already starting to be used by other projects (a small usage sketch follows below).
    • Totally rewritten mod_cache, mod_proxy, etc. Works much better now!
    • Will actually work on Windows (well, some may not see this as a benefit, but whatever).

    People have been complaining that Apache 2 is slow to come out, but from what I've seen lurking on the mailing list, it's because they want to ensure the quality of this release. They've also been talking about how they want a lot of beta testers, because (<rumor mode on>) they want to release soon, maybe even from 2.0.32. So get out there and beta test it!
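
    For anyone who hasn't poked at APR yet, here's a tiny sketch of its flavor -- portable pools plus file I/O -- assuming an installed libapr; it's illustrative rather than exhaustive:

    #include <apr_general.h>
    #include <apr_pools.h>
    #include <apr_file_io.h>

    int main(void)
    {
        apr_pool_t *pool;
        apr_file_t *file;

        apr_initialize();
        apr_pool_create(&pool, NULL);

        /* Same code on Unix and Windows: APR hides the platform file API. */
        if (apr_file_open(&file, "hello.txt",
                          APR_WRITE | APR_CREATE | APR_TRUNCATE,
                          APR_OS_DEFAULT, pool) == APR_SUCCESS) {
            apr_file_puts("hello from APR\n", file);
            apr_file_close(file);
        }

        apr_pool_destroy(pool);
        apr_terminate();
        return 0;
    }

    Everything allocated from the pool is freed in one shot when the pool is destroyed, which is the memory model Apache 2.0 itself is built on.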


    • What exactly was wrong with APACI? I haven't been following Apache 2, so this is the first mention I've heard of it. The docs just say that Apache 2 now looks more like other open source projects in the install process. Is that the sole benefit?
    • I've been using Apache 2 on Linux and FreeBSD for about 2 months now (...), and IMHO it is really going to rock the server world.

      This isn't meant to be a flame, but a genuine complaint about the Apache web server that I haven't seen adequately addressed anywhere. How can Apache claim to be a modern web server if it continues to use an outdated request model? Having a separate process or thread for each request is completely unnecessary. Even for a site with dynamic content, the majority of the requests will be for static content (images). So why use up system resources when it isn't necessary?

      A request for static content is essentially just moving data from one file descriptor to a socket, something that sendfile(2) can be used for on operating systems that implement it. If a single system call combined with a select(2) loop can handle the majority of the requests, then why is each request tying up a process or a thread? When reading the Apache mailing lists, you get answers such as "it's too difficult for other programmers to extend the server", "processes or threads don't have to be expensive depending on how the operating system implements them", "everyone is happy with how it works now", and "Apache is meant to be correct first and fast second". None of these address the issue that Apache's request model is flawed, and it will never be high performance until it is corrected.
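
      For the curious, a minimal sketch of the model being described -- one process, sendfile(2) pushing a static file down an already-accepted connection -- is below. It is illustrative only (Linux-flavored sendfile; a real server needs non-blocking sockets, header parsing and a select()/poll() loop driving the retries):

      #include <sys/sendfile.h>
      #include <sys/stat.h>
      #include <fcntl.h>
      #include <unistd.h>

      /* Serve one static file on client_fd without dedicating a process or thread to it. */
      static int send_static(int client_fd, const char *path)
      {
          int fd = open(path, O_RDONLY);
          if (fd < 0)
              return -1;

          struct stat st;
          if (fstat(fd, &st) < 0) {
              close(fd);
              return -1;
          }

          off_t offset = 0;
          /* In a real event loop this would resume whenever select() reports
           * client_fd writable, until offset == st.st_size. */
          while (offset < st.st_size) {
              ssize_t sent = sendfile(client_fd, fd, &offset, st.st_size - offset);
              if (sent <= 0)
                  break;
          }
          close(fd);
          return (offset == st.st_size) ? 0 : -1;
      }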

      Additionally, the Zeus Web Server [zeus.com] is well implemented and doesn't suffer from any of the problems that seem to keep Apache from being implemented correctly. It's also better than Apache in every way, ranging from performance to configuration (with the exception of not being open source). Zeus did everything right and built a great web server. Years later, Apache is just now getting their next version into beta, and it seems to be just as fundamentally flawed as the first version. If there is ever an open source web server as high quality as Zeus, then it more than likely won't be Apache.

      • If serving huge amounts (>1 GB/hour) of static content from a single-CPU computer is what your server does, Apache is not for you. The Apache model will never do that as fast as Tux, Zeus or Boa.


        But if you would stop to think for a while, you would see that no one does that. Nowadays, it's all about dynamic content. And in that case the overhead of using multiple threads is tiny compared to the added benefits of scalability and stability.


        It is actually possible to use a kernel-based server like Tux for static content and let Apache take care of the dynamic bits.

        • by Electrum ( 94638 ) <david@acz.org> on Wednesday February 20, 2002 @10:28AM (#3037612) Homepage

          If serving huge amounts (>1 GB/hour) of static content from a single-CPU computer is what your server does, Apache is not for you.

          A well designed non-blocking server can run in multiple processes to take advantage of multiple CPUs. Zeus does this.

          But if you would stop to think for a while, you would see that no one does that. Nowadays, it's all about dynamic content. And in that case the overhead of using multiple threads is tiny compared to the added benefits of scalability and stability.

          That's wrong. As I said, most of your requests will be for static content. Take Slashdot, for example. This comment posting page is one Perl page and six images. Do you really need six extra processes for those images? Especially large Apache processes that have mod_perl and who knows what else compiled into them. Sure, the code pages should be shared, but it's still poor design.

          It is actually possible to use a kernel-based server like Tux for static content and let Apache take care of the dynamic bits.

          Sure, you can do that, but wouldn't it be better to use a well designed server in the first place, and not have to kludge around design flaws in the web server? Your web server should not be your application server. Your web server should be serving web pages. Your application server should be running applications. The Apache model of "build everything conceivable into the web server process" is a bad idea, and is not consistent with the Unix philosophy of doing one thing and doing it well.

          Everyone knows CGIs are bad for performance, because each request forks a separate CGI process. Turning the CGIs into Apache modules solves this problem, but not in an optimal way. Applications do not belong in the web server. A model such as FastCGI is a much better approach. It is similar to CGI, especially in the sense that it is easy to program for. But instead of running the process and using stdin/stdout as with a CGI, the web server connects to the FastCGI application via a socket. Thus the application stays running, and there is no process creation overhead. It keeps any necessary load balancing on the application end where it belongs, and out of the web server.

          Additionally, the application doesn't even need to be on the same box. You can have one or several application servers and a single web server. A web server only needs to handle data. A single box should be able to fill your outbound pipe, or at least around 100 Mbit/s of it. If an application is slowing it down, then you need another application server, not another web server. It is unfortunate that the two are not seen as the separate entities that they should be.
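
          To illustrate the FastCGI model mentioned above, the classic minimal responder from the FastCGI developer's kit looks roughly like this (assumes libfcgi is installed; it is not tied to any particular web server):

          #include "fcgi_stdio.h"   /* FastCGI dev kit wrapper: printf goes back to the web server */
          #include <unistd.h>

          int main(void)
          {
              int count = 0;

              /* The process stays resident; each FCGI_Accept() hands it one request
               * over the socket the web server opened to it. */
              while (FCGI_Accept() >= 0) {
                  printf("Content-Type: text/plain\r\n\r\n");
                  printf("Request #%d served by long-lived process %d\n", ++count, (int)getpid());
              }
              return 0;
          }

          The counter surviving across requests is the whole point: the application process persists, so there is no per-request fork, and it can live on a different box from the web server.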

          • Take Slashdot, for example. This comment posting page is one perl page, and six images. Do you really need six extra processes for those images? Especially large Apache processes that have mod_perl and who knows what else compiled into them.


            I doubt that any mod_perl based site is set up in such a way. At a bare minimum, mod_perl sites have two Apache binaries serving pages: one for the static pages, one for the dynamic pages. The static binary is obviously as lightweight as possible. If you're really interested in mod_perl tuning, check out the mod_perl guide at perl.apache.org.

              I doubt that any mod_perl based site is set up in such a way. At a bare minimum, mod_perl sites have two Apache binaries serving pages: one for the static pages, one for the dynamic pages. The static binary is obviously as lightweight as possible. If you're really interested in mod_perl tuning, check out the mod_perl guide at perl.apache.org.

              Why should you go through all that extra hassle to make up for a design flaw in the web server? Wouldn't it make more sense to use a non-blocking web server with a single process per CPU, and have a Perl FastCGI application handle the Perl code?

            • You do realise that this means the Perl processes have no idea of the remote IP, or of the SSL connection information?

              A second apache also requires a second set of configuration files and virtual servers which have to be maintained and provisioned. It's just a waste of time, although it does reduce the stupid memory requirements somewhat...
  • by Anonymous Coward
    Sorry, but I will be sticking with IIS for serving web pages. I mean, if not for recovering from crashes and constantly applying patches, what work would I have? People might think my job is redundant. ;-)
  • Performance results (Score:5, Informative)

    by augustz ( 18082 ) on Tuesday February 19, 2002 @10:23PM (#3035903)
    I've been following performance results for 2.0, and wanted to let folks know that it doesn't seem clear to me that there is a huge performance gain waiting to happen.

    http://webperf.org/a2/v29/Apache2_26-Nov-2001.html [webperf.org] has some 2.x v. 1.x results.

    Love to hear the lowdown on performance advantages of the new Apache from someone in the know or someone who has done some actual testing.

    Also, PHP/Apache and Perl/Apache integration are probably very high on many folks' lists; what is the status of those two vis-a-vis Apache 2?

    • Configurability is also very important. If 2.0 can be configured better than 1.x ever could, then for me it will be faster, giving me more performance.
      • Couldn't agree more; stability and a low bug count matter a lot for those with a bunch of servers to maintain, and I don't want to discount that.

        But 1.3 seems to bear up great for me, at least in those respects, and higher performance meaning fewer servers is always appealing.

    • What compiler did they use for these results? I'm assuming GCC, but would they have gotten better results using Intel's C++ compiler (or Sun's on a SPARC system) with the new Apache 2.0 code? I've heard you get much better SMP performance from Apache 2.0 using the compilers from the chip designers, but I was wondering if anyone has tried this out and knows for sure.
    • Uh, those are old and there have been many improvements since then (mostly optimizing away mallocs). This report was used to press that issue and get everyone behind a faster httpd.

      It is true it isn't a huge performance win, but it is better than 1.3.
    • by Anonymous Coward on Wednesday February 20, 2002 @12:14AM (#3036313)
      Apache 2.0 has several performance advantages:

      1. Lower memory footprint.
      You can run a server that normally took 4 GB of memory in 512 MB.

      2. Speed.
      http://webperf.org/a2/v31/2002-02-11-v29/
      http://webperf.org/a2/v31/2002-02-12-v31/
      The page is similar to the 'NEWS-STORY-NORMAL' column in the old one.

      Check out the response time in the graphs. Can v1.3 get a 1-1.5 second response time as CPU increases like that? Doubt it.

      3. mtmalloc.
      We found that using mtmalloc with Apache 2.0 gave us a performance increase of 30% (yes, 30%) by preloading the library.

      4. v31 has got a different pool allocator, which reduces the mutex contention considerably.

      Nice to see someone is referencing my benchmarks ;-)

      BTW: Solaris 8, 8 CPUs, GCC 2.95.

      While you're surfing webperf.org, why not download the agent and run it for a while?
  • What's with all the griping about how bloated and bad Apache is, how great IIS is, and how a web server should just read and write?

    Is this item being taken over by Microsoft?

    Everyone, download it and try it for yourself. It's really cool.

  • by justin_w_hall ( 188568 ) on Tuesday February 19, 2002 @11:05PM (#3036043) Homepage
    First off, I have to rant about how much I love their precompiled MSI builds. Convincing my boss that installing a web server to replace IIS would be easy was about 3 million times easier with that... run it, click through the wizard, once-over the config file and you're up. Now you, too, can escape the IIS headaches in less than five minutes!

    With that said, has anyone tried the MSI for this latest beta? It didn't create the service for me automatically, and I wasn't sure if it was just my crackpipe or if it was an actual problem. Bug report's been filed already, just wanted to see if anyone else had any input...
    • Yeah, it's strange, because the previous beta does install the service. It's easy though, just run apache -i.

      I've just switched to 2.0 a few days ago on win32... so far it's been about the same as 1.3 for me, the only thing I had a problem with is that it doesn't substitute paths for shebang lines in cgi-bin files, so I have to write out full paths (1.3 did). And the conf file from 1.3 doesn't exactly work right away with 2 (which would be nice), I had to tweak it. Otherwise it's great.
    • "Convincing my boss that installing a webserver to replace IIS would be easy was about 3 million times earlier with that"

      I'm a CS student graduating soon; why is it so hard to make bosses see the beauty and lower hassle of projects like Linux/Apache/etc. compared to the MSWin/IIS choice? I mean, who with the smallest notion of what is good would put up a fight to choose IIS over Apache!? Will I have the same wonderful challenge?

      • I'm a CS student graduating soon; why is it so hard to make bosses see the beauty and lower hassle of projects like Linux/Apache/etc. compared to the MSWin/IIS choice? I mean, who with the smallest notion of what is good would put up a fight to choose IIS over Apache!? Will I have the same wonderful challenge?

        Before you graduate, be sure to catch up on the industry literature [dilbert.com] for valuable insights into how the real world works.

        -- MarkusQ

        P.S. Pay special attention to what happens to Asok [dilbert.com], and learn how to duck.

      • Being able to plug in your domain SAM, with ACLs on the site, is one reason. Domain authentication with "web folders" (DAV) is another. Note: I will be happy to be corrected with a HOWTO that tells you how to point DAV at your PDC or Samba box here... (without running a separate accounts database)
      • Well, it's especially hard because my company's a Microsoft Certified Partner. When I came on board we were relying on Microsoft products for everything, and I don't think anyone realized that there were a few better ways of doing stuff - proxy, for example, as Squid and IPFilter on a ghetto Pentium box smoked MS Proxy 2.0 (on a box twice as fast).

        So I'm starting to get away with using Linux and *BSD for things that they're better for, and as a result I'm slowly chipping away at the MS-dominant infrastructure we have piece by piece. YMMV, but it seems that the 'notion on what is good' doesn't always click with management.
  • Apache 2.0 is quite a bit like Linux 1.0 and, to a lesser degree, Linux 2.4.
    It keeps getting closer and closer--so amazingly close--but it never seems to actually be final. It gets tweaked and patched and asymptotically approaches 2.0, but doesn't seem to get there.
    I'm not bashing the Apache developers; quite the opposite, as I am very happy that they are absolutely not releasing it until it is ready--and we all know (I hope) that Linux 1.0 was eventually released. And 2.4. If only some other server apps in use were put under such intense scrutiny before release.
    • Why are you waiting for their OK? If this were MS, it would be at version 3 already. The beta of open source is more reliable than most MS releases. The true final of an MS version comes after SP2.

  • Threading is good (Score:2, Insightful)

    by Anonymous Coward
    I haven't used a webserver for just static pages in a long time, so it's good that Apache will support multithreading. Having complex database processes with Apache 1.3.x could hinder its scalability. Doing complex transactions, like making calls to multiple databases, should scale better in a threaded environment. Now some people will say, "why in the world would you want to make calls to multiple databases?"

    The answer to that question is: dynamic transactions often access existing databases, which often have screwed-up data models and require inserts/updates in multiple tables. Some will run and scream "horror, horror, horror," but now that the .bomb blew up, more and more web developers are finding they have to work with bad, inefficient, poorly documented data models. Having multithreading in Apache will improve its scalability.

  • Reading through the changes from 1.3 to 2.0, I'd say they've put quite a bit of effort into improving win32 performance (multiprocessing, finally! among others).

    kudos.
    • I hope you're right. On their 1.3 notes for Windows they say the following:

      "Apache for Windows version 1.3 series is implemented in synchronous calls. This poses an enormous problem for CGI authors, who won't see unbuffered results sent immediately to the browser. This is not the behavior described for CGI in Apache, but it is a side-effect of the Windows port. Apache 2.0 is making progress to implement the expected asynchronous behavior, and we hope to discover that the NT/2000 implementation allows CGI's to behave as documented."

      The phrase "we hope discover" bothers me. Are they designing it to work correctly under Win32 or not?
      • Yes, clearly using synchronous calls on Windows was the result of a bunch of Unix coders not knowing how to program for Windows at all. I was hoping that the promise for 2.0 meant that they had actually hired some real Windows coders. I guess not. A well written asynchronous and/or properly threaded application on Windows can easily match the performance of the best written UNIX apps. But no fork-and-block Unix coder is ever going to be able to do the Windows "port" justice (as we've seen). Now I guess we'll have to "hope to discover" if they got a clue or not. :(
  • From IBM... Apache V2.0 is the newly rearchitected open source Apache Web server that offers several significant enhancements, including a new "Thread-per-Request" model on UNIX and Linux operating systems. This new model offers increased performance and a significant reduction in the memory footprint of the server. On the Windows operating systems, it offers increased performance, along with capabilities and functionality that closely match those on the UNIX platform. The full information can be found here [ibm.com]
    • Re:IBM has it too (Score:2, Informative)

      by ||| ( 160483 )
      Why is everyone so excited about the thread-per-request model? Instead, many high-performance servers use non-blocking (or asynchronous) I/O models to scale. For instance, look at the SEDA project (http://www.cs.berkeley.edu/~mdw/proj/seda/), where a Java implementation of a web server using non-blocking I/O outperforms both the Apache and Flash web servers on SPECweb99.
  • perchild MPM (Score:5, Interesting)

    by slamb ( 119285 ) on Wednesday February 20, 2002 @12:56AM (#3036434) Homepage
    I'm a little disappointed by Apache 2.0 so far.

    I've been looking forward to the perchild MPM. It can run different server processes under different UIDs/GIDs. This is important because mod_{perl,php,python,snake} run in-process with the Apache server. It's the only way to run them securely for different people, other than a completely separate webserver for each person (with its own IP address, configuration file, memory footprint, etc.)

    But perchild doesn't really work:

    • It's not portable to non-Linux platforms. (There was talk on the mailing list of marking it experimental because of this.)
    • It hasn't compiled (even on Linux) out of the box in several releases. In 2.0.29, easy to fix [apache.org] but still doesn't work right. (Not compiling is a sure sign it hasn't been maintained.) Not quite as easy on 2.0.32. There's a patch [theaimsgroup.com], but it doesn't look right to me.
    • It's easy to misconfigure it into running virtual hosts as root. (Bug report [apache.org])

    So, Apache 2.0 may be promising in the future...but when a feature I've been looking forward to for a long time is broken, I'm kind of disappointed.
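
    For context, this is roughly the kind of configuration perchild was meant to enable. The directive names below are recalled from the experimental perchild documentation of the time and may be wrong or have since changed, so treat the whole snippet as an assumption-laden sketch rather than working config:

    # perchild MPM (experimental): dedicate children to a uid/gid
    NumServers     5
    ChildPerUserID webuser webgroup 1

    <VirtualHost *>
        ServerName example.org
        # Requests for this vhost are handed to the child running as webuser/webgroup
        AssignUserID webuser webgroup
    </VirtualHost>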

  • In early December 2001, I sent an email to Ken Coar, one of the lead Apache developers, regarding Apache 2.0. Here is his telling reply.

    To: Paul Bain
    Subject: Re: Apache modules book

    Paul Bain wrote:
    >
    > Will your book on writing Apache modules cover the Apache
    > 2.0 API as well as the 1.3 API?

    No. It was originally [meant to cover 2.0], but it had to be scaled back [to cover just 1.3].

    > Is there much difference between the two API's, so much
    > so that rewriting existing 1.3 modules will be inordinately
    > time-consuming (and modules for 2.0 should instead be
    > written from scratch)?

    It depends on your definition of 'inordinately'. Unless it's something like mod_php, a few hours should probably suffice to convert pretty much anything. For best results, a complete rework of any content routines would be best, but much of the 1.3 API is still available -- but not as efficient nor as featureful.

    It's still going to be months (IMHO) before the 2.0 API is stable and the server released. [emphasis added]
    --
    #ken    P-)}

    Ken Coar, Sanagendamgagwedweinini    http://Golux.Com/coar/
    Author, developer, opinionist        http://Apache-Server.Com/

    IOW, don't hold your breath waiting for the non-beta release of 2.0.

  • I still wonder why Apache 2.0 was designed to use a strange hybrid model instead of being a non-forking server like thttpd, webfs or Zeus, whose performance will probably still beat Apache's.

    And Apache still doesn't have any integrated web administration front-end like Zeus.


  • The new version of Apache sports the new APR API

    The website for the APR says this:

    The mission of the Apache Portable Runtime (APR) is to provide a free library of C data structures and routines, forming a system portability layer to as many operating systems as possible

    What is the difference between this and the glib library which the GNOME programs use? This seems like the same kind of thing. Granted, it does seem to include some extra stuff which glib doesn't have, but still...

    • The Apache license is BSD-ish, and glib is LGPL.
    • It's a very sensible thing to make since it's cheap and eliminates much of the nasty #ifdef 'portability' one sees in programs.

      You can see an example of a multithreaded web server using a similar portability library at xitami.com [xitami.com].

      I remember showing this web server and its multithreaded / portability model to the IBM Apache team in December 1999, during the Bazaar in New York. Maybe they got some inspiration from it.

    • APR deals more with processes, threads, interprocess communication and networking while glib is more of a useful toolbox with trees, stacks and types, etc.

      That being said, there's definitely an overlap.
  • Wouldn't you rather see the thing actually improve, than just see it get a release label?
  • OK, so maybe this is not the place for this, but I can't seem to get any answers out of the developers about it: ./configure still doesn't work.

    I downloaded 2.0.28 in December and tried to ./configure --enable-layout=opt. No dice - it still throws everything in /usr/local/apache2.
    I posted to the apache-users mailing list in December, and no one responded. I tried again yesterday, with 2.0.32, and it still doesn't work.

    Looking through the bug tracking list, I can see that this bug has been filed since November 2001.

    How can Apache 2 be nearing release if you still can't get it to install where you want it to?
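
    For what it's worth, --enable-layout is supposed to read a named entry from the config.layout file shipped in the source tree. A stripped-down sketch of what such an entry looks like (the paths here are hypothetical; the actual "opt" entry, if present, lives in config.layout):

    <Layout opt>
        prefix:        /opt/apache2
        exec_prefix:   ${prefix}
        bindir:        ${exec_prefix}/bin
        sysconfdir:    ${prefix}/conf
        htdocsdir:     ${prefix}/htdocs
        logfiledir:    ${prefix}/logs
    </Layout>

    Whether the 2.0 configure script actually honors it is exactly the bug being described.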

"The four building blocks of the universe are fire, water, gravel and vinyl." -- Dave Barry

Working...