
Duncan Coutts wrote:
> Hi,
> In practice I expect that most programs that deal with file IO strictly
> do not handle the file disappearing under them very well either. At best
> they probably throw an exception and let something else clean up.
And at least in the Unix world, files just don't disappear. Normally, deleting a file only deletes its directory entry. If something still holds an open handle to it, e.g. your program, the corresponding "inode" (that's basically the file itself, minus its name or names) still happily exists for your seeking, reading and writing. Only when your program closes the file, and there really is no remaining directory entry and no other process accessing it, is the inode removed as well. A classic trick for temporary files on Unix is to open a new file, immediately delete it, and keep using the handle to write and read data. So no problem here.

But what happens when two processes use the same file, and one process is writing into it using lazy IO which hasn't happened yet? The other process wouldn't see its changes yet. I'm not sure it matters, however, since sooner or later that IO will happen. And I believe that lazy IO still means that before one operation actually takes place, all prior operations take place in the right order beforehand as well, no?

As for two processes writing to the same file at the same time, very bad things may happen anyway. Sure, lazy IO prevents doing communication between running processes using plain files, but why would you do something like that?

Regards,
Julien
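The unlink trick mentioned above can be sketched in Haskell. This is a minimal sketch: the file name "scratch.tmp" is arbitrary, and it relies on Unix unlink semantics, so it won't behave this way on Windows (where removing an open file fails).

```haskell
import System.Directory (removeFile)
import System.IO

-- Create a scratch file, unlink it immediately, and keep using the
-- open handle: the inode survives until the handle is closed.
withAnonymousTemp :: IO String
withAnonymousTemp = do
  h <- openFile "scratch.tmp" ReadWriteMode
  removeFile "scratch.tmp"      -- the file no longer has a name...
  hPutStr h "temporary data"    -- ...but writes and reads still work
  hSeek h AbsoluteSeek 0
  s <- hGetLine h               -- read the data back from the unlinked file
  hClose h                      -- only now is the inode actually freed
  pure s

main :: IO ()
main = withAnonymousTemp >>= putStrLn   -- prints "temporary data"
```

No cleanup code is needed: once the handle is closed (or the process dies), the kernel reclaims the storage, since no directory entry points at the inode any more.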