Hi Miro,

As Gregory pointed out, you should use an existing web-benchmark tool (e.g., weighttp) rather than rolling your own.

If you intend to run benchmarks and play with many parameters, I'd recommend using a framework to handle the experiments (I'm selling my magic potion here :P). I've wrapped the weighttp client to benchmark the mighty web server in these Laborantin experiments:
 - https://github.com/lucasdicioccio/laborantin-bench-web
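
In case you want to see what the wrapping boils down to, here is a minimal sketch (not the actual Laborantin wrapper; the URL, request count, and concurrency level are placeholder values) that simply shells out to weighttp from Haskell:

    -- Minimal sketch of driving weighttp from Haskell (placeholder values).
    import System.Process (readProcess)

    -- | Run one weighttp measurement: 'reqs' total requests over 'conc'
    -- concurrent keep-alive connections (weighttp's -n, -c and -k flags).
    runWeighttp :: String -> Int -> Int -> IO String
    runWeighttp url reqs conc =
      readProcess "weighttp" ["-n", show reqs, "-c", show conc, "-k", url] ""

    main :: IO ()
    main = do
      out <- runWeighttp "http://localhost:3000/" 10000 100
      putStrLn out  -- weighttp prints its req/s summary on stdout

A real experiment framework would then parse that summary and record it alongside the parameters of the run.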

From the results I got on my server, mighty handles anywhere from ~8K req/s to ~50K req/s depending on the parameters of both the server and the measuring client. I'm not saying this to brag about my server being beefy; I report these numbers to show how much results vary with the methodology. Hence, take care and explore many operating points =).

Feel free to contribute a Scotty / Warp wrapper (or wait until I find time to make these myself).
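
For reference, the server side of such a wrapper is tiny. A minimal Warp responder looks something like this (just a sketch, assuming a recent wai/warp with the wai-3 Application type, and a plain-text body instead of Miro's JSON payloads):

    {-# LANGUAGE OverloadedStrings #-}
    -- Minimal Warp "hello" server to point weighttp at.
    -- Build with:  ghc -O2 -threaded -rtsopts Server.hs
    -- Run with a larger allocation area, per Gregory's tip below:
    --   ./Server +RTS -A4M -N -RTS
    import Network.HTTP.Types (status200)
    import Network.Wai (Application, responseLBS)
    import Network.Wai.Handler.Warp (run)

    app :: Application
    app _req respond =
      respond (responseLBS status200 [("Content-Type", "text/plain")] "hello")

    main :: IO ()
    main = run 3000 app

The Scotty version is just as short, so the wrappers are mostly a matter of plumbing parameters into the experiment framework.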

Gregory, thanks for the -A4M tip; I wasn't aware of it. I'll patch my experiments with an extra parameter too =).

Best,
--Lucas


2014-04-13 11:38 GMT+02:00 Gregory Collins <greg@gregorycollins.net>:
On Sat, Apr 12, 2014 at 11:22 PM, Miro Karpis <miroslav.karpis@gmail.com> wrote:
Hi,
I'm trying to put together a small benchmark for Warp and Scotty (later with a JSON/no-JSON text performance test). My client is a Qt C++ application. I wrote minimal code in both Haskell and C++. The problem is the numbers I'm getting.

If you're not running your Haskell program with "+RTS -A4M" (or even larger for a newer chip; the "4M" should correspond to the size of your L3 cache), please do so. The default of 512k is really too small for most processors in use and will force the runtime into garbage collection before the L3 cache is even consumed. In my benchmarks this flag alone can give you a remarkable improvement.

Also, a more fundamental issue: those other tests you mentioned are measuring something different from what you are. They use a large number of simultaneous client connections to simulate a busy server, i.e. they measure throughput. Your test makes 10,000 connections serially: you're measuring the server's latency.

G
-- 
Gregory Collins <greg@gregorycollins.net>

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe