
Hi haskellers,

I'm implementing a form of intelligent streaming among nodes, so that a process can tell when there are bottlenecks in sending or receiving data and, for example, increase or decrease its number of worker threads accordingly. For this purpose I planned to use hPutBufNonBlocking and hGetBufNonBlocking to detect when the buffer is full, using block buffering. I wrote hPutStrLn', which tries to write the entire string into the buffer; if it does not fit, the process would be notified through a state variable, the buffer would be flushed, and the rest of the string would be sent with hPutBuf. After some tinkering here and there it came out like this:

    import Control.Monad (when)
    import qualified Data.ByteString.Char8 as BS
    import Data.ByteString.Internal (ByteString (PS))
    import Foreign.ForeignPtr (withForeignPtr)
    import Foreign.Ptr (plusPtr)
    import System.IO (hFlush, hPutBuf, hPutBufNonBlocking)

    hPutStrLn' h str = do
      let (PS ps s l) = BS.pack $ str ++ "\n"
      -- try to write the whole line without blocking; n = bytes actually accepted
      n <- withForeignPtr ps $ \p -> hPutBufNonBlocking h (p `plusPtr` s) l
      when (n < l) $ do
        error "BUFFER FULL"   -- in the real code this would set a state variable
        hFlush h
        withForeignPtr ps $ \p -> hPutBuf h (p `plusPtr` (s + n)) (l - n)
      return ()

The error call is the condition I expected to hit in my tests. In the real code that line would just set a state variable, which the process reads somewhere else (I sketch what I mean at the end of this message).

The problem is that this routine behaves identically to hPutStrLn: the error condition never happens, and hPutBufNonBlocking writes the complete string every time. I created a test program:

https://gist.github.com/agocorona/6568bd61d71ab921ad0c

The example prints the receiver's output, "hello", continuously. Note that in the complete running code in the gist the receiver has a threadDelay of one second, so the send and receive buffers should fill up. The same thing happens on Windows and on Linux, within a single process (as in the example) or between two processes on the same machine.

How can the processes detect the congestion?

-- Alberto.
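
P.S. To make the "state variable" part concrete, this is roughly the wiring I have in mind; the names (CongestionFlag, reportCongestion, pollCongestion) are placeholders, not code from the gist:

    import Data.IORef (IORef, atomicModifyIORef', atomicWriteIORef, newIORef)

    -- True means: the last write did not fit in the send buffer
    type CongestionFlag = IORef Bool

    newCongestionFlag :: IO CongestionFlag
    newCongestionFlag = newIORef False

    -- called from hPutStrLn' where the error call is now
    reportCongestion :: CongestionFlag -> IO ()
    reportCongestion flag = atomicWriteIORef flag True

    -- polled by whatever code scales the worker threads up or down; returns
    -- the current value and clears the flag for the next measurement interval
    pollCongestion :: CongestionFlag -> IO Bool
    pollCongestion flag = atomicModifyIORef' flag (\c -> (False, c))

In hPutStrLn' the error line would become reportCongestion flag, and the code that manages the worker threads would call pollCongestion periodically.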
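On the receiving side I would rely on hGetBufNonBlocking in the same way: it returns whatever is already buffered without blocking, so getting 0 bytes back would suggest this end is starved, while a completely full buffer would suggest data is arriving faster than it is consumed. Again only a sketch, with an arbitrary buffer size:

    import qualified Data.ByteString as B
    import Foreign.Marshal.Alloc (allocaBytes)
    import System.IO (Handle, hGetBufNonBlocking)

    bufSize :: Int
    bufSize = 4096            -- arbitrary size, just for the sketch

    -- Read whatever is already buffered on the handle, without blocking, and
    -- return it together with the number of bytes obtained: 0 would mean this
    -- side is starved, bufSize would mean input is piling up faster than we read.
    recvAvailable :: Handle -> IO (B.ByteString, Int)
    recvAvailable h =
      allocaBytes bufSize $ \p -> do
        n <- hGetBufNonBlocking h p bufSize
        chunk <- B.packCStringLen (p, n)
        return (chunk, n)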
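And the single-process test is essentially of this shape (simplified here, not literally the gist code; createPipe stands in for the real transport, and hPutStrLn' is the function from above): a fast sender and a receiver slowed down by threadDelay, so the buffers in between should fill up.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (forever)
    import System.IO (BufferMode (BlockBuffering), hGetLine, hSetBuffering)
    import System.Process (createPipe)

    main :: IO ()
    main = do
      (readEnd, writeEnd) <- createPipe
      hSetBuffering writeEnd (BlockBuffering (Just 4096))
      -- fast sender: writes "hello" as quickly as it can
      _ <- forkIO $ forever $ hPutStrLn' writeEnd "hello"
      -- slow receiver: one line per second, so the buffers should fill up
      forever $ do
        threadDelay 1000000
        hGetLine readEnd >>= putStrLn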