
Hi all,

I'm continuing work on the HTTP proxy I'm writing. The absolute bare basics are working with Warp, Wai and http-enumerator as long as the upstream web server doesn't send gzipped or chunked data. For these latter two cases, http-enumerator helpfully gunzips/unchunks the data. That, however, causes a problem: if my proxy simply takes the HTTP headers and data it gets from http-enumerator and passes them on to the client, the client barfs, because the headers claim the data is gzipped or chunked when the body actually isn't.

There are a number of possible solutions to this:

 a) Strip the Content-Encoding/Transfer-Encoding headers and add a
    Content-Length header instead (see the sketch after this list).
    I think this is probably possible with the API as it is, but I
    haven't figured out how yet.

 b) Rechunk or re-gzip the data. This seems rather wasteful of CPU
    resources.

 c) Modify the Network.HTTP.Enumerator.http function so that
    de-chunking/gunzipping is optional.

 d) Expose the iterHeaders function that is internal to the
    http-enumerator package so that client code can grab the headers
    before deciding how to handle the body.

Are there any other options I haven't thought of yet?
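For what it's worth, here is roughly what I have in mind for option a). This is only a sketch under my own assumptions: the decoded body has already been read into a strict ByteString, the headers are http-types style (CI ByteString, ByteString) pairs, and fixHeaders/staleHeaders are names I made up, not anything exported by http-enumerator or Warp:

    {-# LANGUAGE OverloadedStrings #-}
    -- Hypothetical sketch of option a): since http-enumerator has already
    -- decoded the body, drop the now-inaccurate encoding headers and add a
    -- Content-Length that matches what we actually send to the client.
    module FixHeaders (fixHeaders) where

    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as BC
    import qualified Data.CaseInsensitive as CI
    import Network.HTTP.Types (Header)

    -- Headers that no longer describe the body once it has been decoded.
    staleHeaders :: [CI.CI B.ByteString]
    staleHeaders = ["content-encoding", "transfer-encoding", "content-length"]

    -- Remove the stale headers and prepend a correct Content-Length.
    fixHeaders :: B.ByteString -> [Header] -> [Header]
    fixHeaders body hdrs =
        ("Content-Length", BC.pack (show (B.length body)))
            : filter (\(name, _) -> name `notElem` staleHeaders) hdrs

The obvious downside is that it means buffering the whole response body in memory just to compute the Content-Length.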
Of the options I have, I actually think d) makes the most sense. Would a patch exposing iterHeaders be accepted?
Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/