
Hello web-devel,

(Sorry if you get this twice, I originally sent this from an unsubscribed email address.)

Even though I already announced this on reddit[1], one of the comments reminded me that this special-interest-group list exists, so I'd like to point your attention to

  http://hackage.haskell.org/package/uhttpc

which (as one reddit commenter pointed out) can be regarded as the client-side counterpart of acme-http.

I've been able to get measurements comparable to what weighttp and ab report against HTTP servers such as nginx, but I've noticed some sub-optimal results when using only a single kept-alive connection.

For instance, on my i7-3770 Linux desktop against an nginx server I get:

,----
| $ uhttpc-bench -n 200000 -t1 -c1 -k http://localhost/
| uhttpc-bench - a Haskell-based ab/weighttp-style webserver benchmarking tool
|
| starting benchmark...
| finished in 15.024069 seconds, 200000 reqs (1 conns), 13312.0 req/s received
| status codes: 200000 HTTP-200
| data received: 11153.977 KiB/s, 171600000 bytes total (49200000 bytes http + 122400000 bytes content)
| rtt min/avg/max = 0.038/0.074/9.928 ms
`----

vs.

,----
| $ uhttpc-bench -n 200000 -t1 -c2 -k http://localhost/
| uhttpc-bench - a Haskell-based ab/weighttp-style webserver benchmarking tool
|
| starting benchmark...
| finished in 4.849609 seconds, 200000 reqs (2 conns), 41240.4 req/s received
| status codes: 200000 HTTP-200
| data received: 34554.976 KiB/s, 171600000 bytes total (49200000 bytes http + 122400000 bytes content)
| rtt min/avg/max = 0.031/0.048/7.207 ms
`----

Running `weighttp` with -c1 vs. -c2, by contrast, scales linearly from 20k req/s to 40k req/s, i.e. its per-connection throughput stays constant at 20k req/s (whereas uhttpc-bench's per-connection throughput jumps by a ~1.5x factor between -c1 and -c2, from 13.3k to ~20.6k req/s per connection).

In other words, uhttpc-bench is significantly worse than weighttp when using only a single connection, and I can't explain that yet.

[1]: http://www.reddit.com/r/haskell/comments/23yuvs/%C2%B5http_lowlevel_http_cli...
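
P.S.: In case anyone wants to poke at the single-connection case independently of uhttpc-bench, below is a minimal sketch of what a `-c1 -k` style loop boils down to: sequential GET requests over one kept-alive socket, written against the plain network/bytestring API rather than uhttpc's internals. The host, port, request count and the one-recv-per-response handling are placeholder simplifications, not uhttpc's actual code.

,----
| module Main where
|
| import Control.Monad (forM_)
| import qualified Data.ByteString.Char8 as B
| import Network.Socket hiding (recv)
| import Network.Socket.ByteString (recv, sendAll)
|
| -- Issue n sequential GET requests over a single kept-alive connection.
| main :: IO ()
| main = do
|   addr:_ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
|                         (Just "localhost") (Just "80")
|   sock <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
|   connect sock (addrAddress addr)
|   let req = B.pack "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n"
|   forM_ [1 .. (1000 :: Int)] $ \_ -> do
|     sendAll sock req
|     _ <- recv sock 16384   -- naive: assumes each response fits in one read
|     return ()
|   close sock
`----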