
Hello,

As we all know, the true measure of performance for a web server is the classic PONG test. And so, the Happstack team is pleased to announce the release of the new acme-http server!

hackage: http://hackage.haskell.org/package/acme-http
source:  http://patch-tag.com/r/stepcut/acme-http

When testing on my laptop with +RTS -N4 using the classic PONG test:

$ httperf --hog -v --server 127.0.0.1 --port 8000 --uri / --num-conns=1000 --num-calls=1000 --burst-length=20 --rate=1000

acme-http delivered 221,693.0 req/s, making it the fastest Haskell web server on the planet. By comparison, warp delivered 51,346.6 req/s on the same machine.

The secret to acme-http's success is that it largely avoids doing anything not required to win the PONG benchmark. It does not support timeouts, it does not check quotas, it assumes the client speaks HTTP 1.1, it does not catch exceptions, and it responds to every single request with PONG.

The goal of acme-http is twofold:

 1. determine the upper bound on Haskell web-server performance
 2. push that upper bound even higher

In regard to #1, we have now established the current upper limit at 221,693.0 req/s.

In regard to #2, I believe acme-http will be useful as a place to investigate performance bottlenecks. It is very small, only around 250 lines of code, and many of those lines deal with pretty-printing and other non-performance-related tasks. Additionally, it works in the plain IO monad: it does not use conduits, enumerators, pipes, or even lazy IO. As a result, it should be very easy to understand, profile, and benchmark.

By providing such a simple environment and avoiding as much extra work as possible, we should be able to answer questions like "Why is so much RAM required?" and "What is limiting the number of connections per second?" more easily. As we address these issues in acme-http, we can hopefully bring the solutions back to practical frameworks, or to the underlying GHC implementation itself.

If performance tuning is your thing, I invite you to check out acme-http and see if you can raise the limit even higher!

- jeremy
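
P.S. For anyone who wants a feel for what a PONG-only server in plain IO can look like, here is a minimal sketch of the general idea, assuming the network and bytestring packages. It is not the acme-http source (that lives at the patch-tag link above), the helper name pongLoop is invented for the example, and like acme-http it deliberately skips timeouts, request parsing, and exception handling:

module Main where

-- A minimal, illustrative PONG-only server in plain IO.
-- Not the acme-http source; just a sketch of the idea of accepting
-- connections and answering every request with the same canned response,
-- with no timeouts, no quotas, and no exception handling.

import Control.Concurrent (forkIO)
import Control.Monad (forever, void)
import Network.Socket
import Network.Socket.ByteString (recv, sendAll)
import qualified Data.ByteString.Char8 as C

main :: IO ()
main = withSocketsDo $ do
    addr <- head <$> getAddrInfo
                       (Just defaultHints { addrFlags = [AI_PASSIVE]
                                          , addrSocketType = Stream })
                       Nothing (Just "8000")
    sock <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
    setSocketOption sock ReuseAddr 1
    bind sock (addrAddress addr)
    listen sock 1024
    forever $ do
        (conn, _peer) <- accept sock
        void $ forkIO (pongLoop conn)

-- Read whatever the client sent and always answer PONG.  A real server
-- would parse the request, handle keep-alive properly, enforce timeouts, etc.
pongLoop :: Socket -> IO ()
pongLoop conn = do
    bytes <- recv conn 4096
    if C.null bytes
        then close conn
        else do
            sendAll conn response
            pongLoop conn

response :: C.ByteString
response = C.pack "HTTP/1.1 200 OK\r\nContent-Length: 4\r\nContent-Type: text/plain\r\n\r\nPONG"

You can point the same httperf invocation shown above at a sketch like this to get a rough baseline of your own.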