World's "Fastest" Small Web Server Released, Based On LISP 502

Cougem writes "John Fremlin has released what he believes to be the worlds fastest webserver for small dynamic content, teepeedee2. It is written entirely in LISP, the world's second oldest high-level programming language. He gave a talk at the Tokyo Linux Users Group last year, with benchmarks, which he says demonstrate that 'functional programming languages can beat C.' Imagine a small alternative to Ruby on rails, supporting the development of any web application, but much faster."
  • by omar.sahal ( 687649 ) on Monday May 25, 2009 @01:43PM (#28085497) Homepage Journal

    LISP, the world's second oldest high-level programming language.

    Sorry, it's the third oldest; this [wikipedia.org] is the oldest.
    Designed by Konrad Zuse [wikipedia.org], who also built the first program-controlled Turing-complete computer. Fortran is the second oldest programming language.

  • by PaulBu ( 473180 ) on Monday May 25, 2009 @01:51PM (#28085595) Homepage

    It can be, but any decent production implementation is compiled to native machine code -- it just includes a compiler (and usually a pretty fancy optimizing one!) built into the image and always available.

    Try running, say, SBCL one day before spreading misunderstandings...
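
    A minimal sketch, assuming SBCL (DISASSEMBLE itself is standard Common Lisp; the native output shown is implementation-specific):

        ;; Define a function with type declarations, then ask the
        ;; compiler for the machine code it produced -- no C anywhere.
        (defun add-squares (a b)
          (declare (type fixnum a b)
                   (optimize (speed 3)))
          (+ (* a a) (* b b)))

        ;; SBCL compiles to native code at DEFUN time;
        ;; this prints the generated assembly.
        (disassemble #'add-squares)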

    Paul B.

  • Re:Speed is not all (Score:2, Informative)

    by Anonymous Coward on Monday May 25, 2009 @01:52PM (#28085601)

    the LISP language itself is written in C.

    Not at all. Lisp implementations are usually written in Lisp.

  • by julesh ( 229690 ) on Monday May 25, 2009 @02:02PM (#28085695)

    Today is Memorial Day in the United States of America. We would appreciate you folks taking some time to reflect on our servicemen who gave their lives saving your asses in WW I and II.

    We do that on Nov 11, thanks. I don't see why we need to adopt your dates for the purpose.

  • by vivaoporto ( 1064484 ) on Monday May 25, 2009 @02:05PM (#28085737)
    Parent post is inflammatory, but not a troll. He has a point: this implementation is a minimal test case built to prove a point. A skilled C programmer could implement the same test case so that it performed better than the LISP one, if the task were worth the effort.
  • by pmc ( 40532 ) on Monday May 25, 2009 @02:07PM (#28085763) Homepage

    I think there is an implied "still in use" in the statement - otherwise, this list - http://en.wikipedia.org/wiki/Timeline_of_programming_languages [wikipedia.org] - suggests there are older ones still, and Lisp wasn't even third by any stretch.

  • by Whalou ( 721698 ) on Monday May 25, 2009 @02:47PM (#28086251)
    From http://www.kvacanada.com/stories_rskap'yong.htm [kvacanada.com]

    About 1 a.m. April 25, a Dog Company platoon was attacked from three sides by large numbers of enemy troops. Two Patricias manning a Vickers machine-gun were killed. Waves of Chinese spilled into the company area. It was hand-to-hand-fight-for-your-life combat. Dog Company was on the verge of being overrun. The company commander, Capt. Wally Mills, requested that artillery be fired on his own positions. The New Zealand gunners obliged. The defenders hugged the bottom of their trenches while artillery shells roared in overhead. The shells scoured everything above ground level, driving off the Chinese. But they returned. More artillery fire followed. 2300 rounds hammered Dog Company positions.

    This web site was cited in the Wikipedia article posted by the parent.

  • Re:Speed is not all (Score:3, Informative)

    by K. S. Kyosuke ( 729550 ) on Monday May 25, 2009 @02:49PM (#28086271)

    In terms of readability and maintenance it might be a nightmare (looking at the LISP code). The benchmark seems biased anyway, you can't beat C/C++, really, and the LISP language itself is written in C.

    Very few implementations of Lisp are written in C. Usually there is only a small kernel, and on top of that kernel sit the standard library and the compiler. The kernel often provides only the memory model, the garbage collector, and the interface to the OS - only the things that can't be written in Common Lisp are in C. Mind you, they *could* be written in some "lower-level Lisp", but since the demise of Lisp Machines, which ran Lisp assembly code, nobody seems to bother: portable C compilers are ubiquitous, and the core is portable this way. This means that a compiled Lisp program runs essentially very little C, unless it is collecting garbage.

    The reason for Lisp being written in Lisp is precisely what you're claiming there (only the other way round): a Lisp written in Lisp is much more maintainable than a Lisp written in C. Besides, the compiler often serves as a good test suite for itself.

    And as far as "beating C" is concerned, you might want to take a look at Stalin Scheme.

  • by __aasqbs9791 ( 1402899 ) on Monday May 25, 2009 @03:12PM (#28086527)

    I dunno. This is what I saw when I tried to read it:

    "Service Temporarily Unavailable

    The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
    Apache/2.2.4 (Ubuntu) mod_fastcgi/2.4.2 PHP/5.2.3-1ubuntu6.5 Server at john.freml.in Port 80"

  • by Anonymous Coward on Monday May 25, 2009 @03:31PM (#28086769)

    The challenge is admittedly silly, but you can be faster than C.

    Reason 1: any language (which implements it) can make kernel calls. Many Lisp implementations, for example, have a way to make system API calls -- and this glue is usually implemented without any C (see the sketch at the end of this comment). All you need to do is pass arguments the way a compiled C program would. No language has a monopoly on syscall 186.

    Reason 2: kernel functions aren't the ultimate in performance. There is plenty of functionality which can't be accessed from (vanilla) C, and in fact the kernel you praise so highly has many pieces written in assembler for just this reason. (Quick: how do you make an SSE3 call in ANSI C?) Fortress, for example, is built around the idea of parallelizing everything, including loops, which would be difficult to do in portable ANSI C, and such C would probably not beat a language designed around parallelism as its core feature.

    Reason 3: the Linux kernel (and probably others) tends to get faster over time. 2.6 is much more efficient than 1.0. ISTR that if your Lisp compiler is pretty fast (but a bit slower than C), a Lisp program using the algorithms Linux 2.6 uses will beat the Linux 1.0 algorithms written in C. I doubt the Linux 2.6 algorithms are all (provably) optimal, so all you need is a better algorithm than what your kernel uses. Cheating? I don't see it that way: a lot of algorithms I can write are pretty simple in Lisp, but mind-bogglingly complex in C. It would not surprise me in the least if somebody came up with a much more efficient algorithm in Lisp that a C programmer would have to be a supergenius to match. I don't consider that cheating in this particular contest.
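
    A minimal sketch of that kind of C-free glue, assuming SBCL's implementation-specific SB-ALIEN interface (GETPID here is the POSIX call):

        ;; Declare the foreign routine: name, return type, no arguments.
        ;; SBCL generates the calling-convention glue itself -- no C stubs.
        (sb-alien:define-alien-routine "getpid" sb-alien:int)

        ;; GETPID is now an ordinary Lisp function.
        (getpid)   ; => the current process id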

  • As much as Lisp people want to say that Lisp lost because of the price of Lisp machines and Lisp compilers, it actually lost because it isn't a particularly practical language; that's why it hasn't had a resurgence while all these people move to Haskell, Erlang, Clojure, et cetera.

    Lisp is a beautiful language. So is Smalltalk. Neither one of them was ever ready to compete with practical languages.

    The idea that LISP hasn't had a resurgence is wrong. Take a look at the books published on Common Lisp recently; you'll see several from about 2004 to 2009. The SBCL project revived the CMUCL compiler in a cross-platform and easier-to-improve way, which resulted in a large number of improvements. And places like common-lisp.net, clocc.sourceforge.net and cliki.net are the repositories for shared code in the free software community.

    There are several web servers written in Common Lisp; this is not the first by a long shot. And in case you didn't know, the technology inside Orbitz is written in Common Lisp.

    The reason Common Lisp is not dominating the world is mainly that it takes a fair amount of sophistication to "get" the LISP way of doing things, plus the huge availability of C-based libraries elsewhere.

    The popularity of Python is essentially about having a LISP that has a more familiar syntax and interfaces well with C programs. Python isn't LISP but it's not very far off.

  • by debatem1 ( 1087307 ) on Monday May 25, 2009 @03:48PM (#28086903)
    1) The question is about functional programming languages versus imperative programming languages - not high-level versus low-level.

    2) Can we agree on a platform? If I get to name it, it's going to be the Xerox 1109, and you're toast.

    3) The computer language shootout [debian.org] has some numbers that don't look so good for C. Maybe you'd care to re-implement the thread-ring test? 'Cause right now it's taking C 164+ seconds to do it, and Haskell 9. Same thing with the k-nucleotide test.
  • Re:He's also right (Score:4, Informative)

    by Anonymous Coward on Monday May 25, 2009 @04:16PM (#28087169)

    C is not how a modern processor thinks, with super-scalar instruction issue, cache and pre-fetch memory controls, and speculative branch prediction. In the end, even the C community splits into camps that let the optimizer do everything, versus embed some hand-written assembly or equivalent "machine intrinsics" routines in the middle of their normal C code. In both cases, non-trivial amounts of profiling and reverse-engineering are often needed to coax an appropriate machine code stream into existence, and this machine code is decidedly not how the developers usually think.

    The choice of language is not so significant really. You can find Lisp dialects that efficiently use native machine types and have little runtime cost, due to having weak type systems (just like C) where casting is easy and the responsibility for crazy results lies with the programmer and the limited ability of the compiler to check some static cases. These dialects will run imperative code quite well; e.g., if you transliterated typical C procedures into equivalent Lisp procedures, you'd get similar performance (see the sketch at the end of this comment). Ironically, these systems aren't as fast when you write very high-level or functional Lisp, because those sorts of programs rely on a more elaborate optimization and runtime support layer, e.g. to optimize away recursive function call overheads or frequent allocation and destruction of temporary data like lists. This kind of code also doesn't work well in C, so the programmer has to perform these optimizations at the source level, by writing loops instead of recursion and making use of stack variables and static data structures instead of making many malloc/free calls in inner loops, etc.

    The main difference is the presumed runtime system for the language, the compilation goals, and the core language libraries. This includes things like whether you have garbage collection or explicit memory management, how you compile (whole program versus treating every function/procedure as an ABI symbol), high-level IO abstractions or low-level register (or memory-mapped) IO and interrupt events, etc.

    If you're interested in this stuff, you might learn something from reading about PreScheme, a Lisp dialect designed so that the runtime system for a full Scheme (a Lisp dialect) could be written in a more limited Scheme-like language. This is much like how the core bits of an OS kernel like Linux are written in carefully controlled subsets of C that do not presume the existence of an underlying runtime environment or the standard C library.

    In reality, many of the compiler and runtime techniques applied to a simple language like Lisp could be applied to a C implementation as well. It's really a cultural rather than a technical issue that prevents there being C environments that skip the traditional, naive compile-and-link strategy used in POSIX and UNIX ABIs.
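
    A minimal sketch of that transliteration idea, assuming SBCL (the declarations play the role of C's static types):

        ;; C version: for (i = 0; i < n; i++) sum += v[i];
        ;; With these declarations, a compiler like SBCL emits a tight
        ;; native loop over unboxed doubles, much as a C compiler would.
        (defun sum-array (v)
          (declare (type (simple-array double-float (*)) v)
                   (optimize (speed 3) (safety 0)))
          (let ((sum 0d0))
            (declare (type double-float sum))
            (dotimes (i (length v) sum)
              (incf sum (aref v i)))))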

  • Re:He's also right (Score:5, Informative)

    by The_Wilschon ( 782534 ) on Monday May 25, 2009 @04:25PM (#28087249) Homepage
    You forget about compilers. LISP gets compiled (by most implementations), too. All the "nifty high level programming shit" can, and sometimes does, if you have a good enough compiler, get compiled away. Furthermore, the "nifty high level programming shit" provides a whole lot more information to the compiler, allowing it to do much more aggressive optimizations because it can prove that they are safe.

    If somebody comes up with a slick new optimization technique, I don't have to rewrite my LISP code, I just implement it in the compiler. You'd have to go back through every line of C code you've ever written in order to implement it. If somebody gives you a radically different CPU architecture, the C code that is so wonderfully optimized for one CPU will run dog slow. You can reoptimize it for the new arch, but then it will run slow on the old one. With a good LISP compiler, the same code gets optimizations that are appropriate for each arch.

    Check out Stalin, SBCL, and http://www.cs.indiana.edu/~jsobel/c455-c511.updated.txt [indiana.edu]. You might be surprised at what you find.
  • Re:He's also right (Score:3, Informative)

    by Skinkie ( 815924 ) on Monday May 25, 2009 @04:34PM (#28087331) Homepage
    And that is the point the LISP guy wants to make: iff I have a LISP compiler that in general optimises better than the code structure a C programmer would choose, it will be faster, because you can heavily optimise the LISP compiler. In the same ballpark are Haskell (and maybe, in the future, Python): iff their compilers generate better structures because the task is better formally defined, they could generate the optimal structure for the problem - maybe more optimal than a human would design. Today: no.
  • by laddiebuck ( 868690 ) on Monday May 25, 2009 @04:57PM (#28087573)

    What's more, without a cache! That is, for every request, the PHP web page is being recompiled. I hope he doesn't call that a fair comparison, as anyone even remotely interested in high throughput takes ten minutes to install a caching system like xcache or one of five other alternatives. I bet you anything that lighty, FastCGI and xcache would serve 1.5-2 times as many requests per second as his homebrew code.

  • by shutdown -p now ( 807394 ) on Monday May 25, 2009 @07:08PM (#28088713) Journal

    The debian language shootout has a few examples of Functional languages being faster than people's best efforts in C, especially when it comes to parallelisation. I suggest you try and write a regex-dna example that's faster than the Haskell implementation for example.

    First of all, you should be careful about using results from the Language Shootout in general, because they often don't know what they're measuring. For example, for quite a while, Haskell scored much higher on the benchmark because the tests were written in such a way that results were computed but then never used; and the Haskell compiler is surprisingly good at figuring that out, so it discarded the whole computation part as well. In other words, the Haskell tests didn't actually do the same work that the C tests did. They've fixed it since then, AFAIK, but many articles on the Web that reference "Haskell beating C in the Shootout" are from before the fix. Here's [ffconsultancy.com] a much more realistic and interesting benchmark, in which Haskell didn't exactly do well (so far the best achievement is being 3 times slower than OCaml), despite the problem - ray tracing - being very well suited for functional languages.

    The other problem with Haskell is that it is very hard to tell how efficient your code is, both time- and memory-wise, from just looking at it. Because of the pervasive lazy evaluation, "neat" and innocent-looking code can be a performance deathtrap (that classic quicksort implementation in Haskell is one such example). It's extremely easy to get quadratic performance out of something that is defined in the simplest and most concise manner possible, and really shouldn't be more than linear.

    What matters is how well Haskell performs in real-world projects. Judging by how Darcs performs, it's pretty meh (how many projects have migrated off Darcs already for performance reasons? Heck, the GHC team itself has dropped Darcs in favor of Git!).

  • by shutdown -p now ( 807394 ) on Monday May 25, 2009 @07:16PM (#28088793) Journal

    Today is Memorial Day in the United States of America. We would appreciate you folks taking some time to reflect on our servicemen who gave their lives saving your asses in WW I and II.

    Hi, I'm from the country formerly called the Soviet Union. We would appreciate you folks learning your history, so that you'd know that the USA wasn't "the country that won WW2". You might, for example, want to remember that over 3/4 of all German manpower losses were on the Eastern front (5.5 million KIA total, 4.3 million of which were on the Eastern front), and that over 10 million Soviet soldiers, and twice as many civilians, paid with their lives for that achievement.

  • by countach ( 534280 ) on Monday May 25, 2009 @08:10PM (#28089281)

    How many C programmers use "restrict" everywhere that it could be used? Zero point zero.

    Or how many use it anywhere at all for that matter? Probably like zero point one percent of programmers.

  • Maybe this is overly pedantic, but I've seen it mentioned several times in various posts that "Orbitz is powered by LISP".

    That's very true, but only one component of their back-end is actually written in LISP - the lowest-fare search engine.

    Also, Orbitz did not write that component, called QPX - it was actually written by a company called ITA Software, which licenses it to dozens of other air-fare cross-shopping services.

    Despite the other issues with Orbitz, QPX is an excellent example of what can be accomplished by highly skilled LISP programmers - an exceedingly fast, flexible, and successful search algorithm that they have been able to maintain as the industry leader since its invention over twelve years ago.

    As for your assessment that "Orbitz is ridiculously slow for the amount of data it processes", I beg to differ. Having worked for ITA in the past, let me tell you the amount of data searched through is staggering, especially when you consider that that data set is updated continuously, in nearly real time (I could claim real time, but I like being accurate).

    Combine that data source with the fact that the queries sent can have dozens (and in some cases hundreds) of parameters, and various results can be filtered and modified arbitrarily based on rules imposed by the airlines and their sales partners (e.g. Orbitz's negotiated fares for Airline X vs Airline Y, per flight/date/time/passengers/booking class, etc.) *and* that without a highly sophisticated approach to finding the best solutions the result set can have *billions* of possibilities....

    Yeah... Orbitz' fare searching is pretty damned fast, considering.
  • by Anonymous Coward on Wednesday May 27, 2009 @04:37AM (#28106543)

    When Google Maps chooses a route, the length of road A doesn't change depending on whether or not you will be coming back via road B on a weeknight. Airlines use such elaborate and constantly changing rules to create price discrimination that two people on the same flight almost never pay the same fare.

  • by Ikari Gendo ( 202183 ) on Wednesday May 27, 2009 @11:39AM (#28110363) Homepage
    Paul Graham has commentary from an ITA insider [paulgraham.com].

    6. If you want to do a simple round-trip from BOS to LAX in two weeks, coming back in three, willing to entertain a 24 hour departure window for both parts, then limiting to "reasonable" routes (at most 3 flights and at most 10 hours or so) you have about 5,000 ways to get there and 5,000 ways to get back. Listing them is a mostly trivial graph-search (there are a few minor complications, but not many), that anybody could do in a fraction of a second.

    7. The real challenge is that a single fixed itinerary (a fixed set of flights from BOS to LAX and a fixed set back) with only two flights in each direction may have more than 10,000 possible combinations of applicable "fares", each fare with complex restrictions that must be checked against the flights and the other fares. That means that the search space for this simple trip is of the order 5000 x 5000 x 10000, and a naive program would need to do a _lot_ of computation just to validate each of these possibilities. Suitably formalized, it's not even clear that the problem of finding the cheapest flight is NP-complete, since it is difficult to put a bound on the size of the solution that will result in the cheapest price. If you're willing to dispense with restrictions on the energy in the universe, then it is actually possible to formalize the cheapest-price problem in a not-too-unreasonable way that leads to a proof of undecidability by reduction to the Post correspondence problem :-).

    So it seems that your assumption that "Fares just aren't that complex. It's a straightforward directed graph." is in error. Remember that this work used to require dedicated intelligence (i.e. a travel agent) who was at a serious disadvantage in terms of fare data.
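
    To put the quoted numbers in perspective, a quick back-of-the-envelope check at the REPL (the figures are the insider's estimates, not mine):

        ;; 5000 outbound routes x 5000 return routes x ~10000 fare combos
        (* 5000 5000 10000)   ; => 250000000000 candidate combinations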
