
Nick Name wrote:
Does this mechanism allow the toolkit to determine whether an event is being listened for? Or would it force the toolkit to always generate all events just in case it's being listened for? The latter is unacceptable on X.
We can have "automatically activating" streams; it would be useful to have the "mouse positions" streams, but it's unacceptable to always get mouse position, for example. So the idea is simple: just provide a "Var" (event and ability to write to it) constructor wich looks like
mkAutomaticVar :: IO () -> IO () -> a -> Var a
where the first two arguments are the actions to execute when the first listener is added and when the last listener is garbage collected. The question is how to detect the garbage collection, perhaps with weak pointers. I guess you know more about this than I do, so tell me what you think.
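For what it's worth, here is a minimal sketch of how such a constructor might be built. It assumes (purely for illustration) that a Var is represented with IORefs, that the constructor has to live in IO, and that "the listener was garbage collected" is detected with a finalizer attached to some client-side key object; listenKeyedOn is a made-up registration function, not part of the proposal.

import Control.Monad (when)
import Data.IORef (IORef, newIORef, atomicModifyIORef')
import System.Mem.Weak (addFinalizer)

-- Hypothetical Var: a current value plus a count of live listeners,
-- together with the two actions from mkAutomaticVar's first arguments.
data Var a = Var
  { varValue     :: IORef a
  , varListeners :: IORef Int
  , varOnFirst   :: IO ()     -- run when the first listener is added
  , varOnLast    :: IO ()     -- run when the last listener goes away
  }

mkAutomaticVar :: IO () -> IO () -> a -> IO (Var a)
mkAutomaticVar onFirst onLast initial = do
  value <- newIORef initial
  count <- newIORef 0
  return (Var value count onFirst onLast)

-- Register a listener keyed on some client-side object; when that object
-- is garbage collected, its finalizer decrements the count and runs the
-- "last listener gone" action if nobody is left.
listenKeyedOn :: Var a -> key -> IO ()
listenKeyedOn var key = do
  before <- atomicModifyIORef' (varListeners var) (\n -> (n + 1, n))
  when (before == 0) (varOnFirst var)
  addFinalizer key $ do
    left <- atomicModifyIORef' (varListeners var) (\n -> (n - 1, n - 1))
    when (left == 0) (varOnLast var)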
I don't think that I understood any of that, so I'll just elaborate upon my main concern. Under X, you have to tell the X server which events it should actually send to the client (as opposed to Windows, which reports every event, and the code just ignores the ones it isn't interested in). Generally, you don't ask the X server to send you any events you don't actually need, as the bandwidth between the client and the X server may be limited. In particular, you don't ask it to send MotionNotify (mouse movement) events unless you actually want them. Consequently, you don't want the abstraction layer to be registering a mouse motion callback which actually ends up doing nothing.
Again, since I have no strong preference either way, I would like to provide both events and callbacks. With multithreading, callbacks are easily derived from events.
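A sketch of that direction, assuming only that an event occurrence can be waited on with some blocking action (such as the sync proposed below); the names are illustrative, not part of any real toolkit:

import Control.Concurrent (ThreadId, forkIO)
import Control.Monad (forever)

-- Given a blocking wait on an event occurrence, a callback is just a
-- forked thread that waits and invokes the handler, forever.
callbackFromEvent :: IO a -> (a -> IO ()) -> IO ThreadId
callbackFromEvent waitForOccurrence handler =
  forkIO (forever (waitForOccurrence >>= handler))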
But what about the reverse? I.e. if the toolkit only provides callbacks, how do you get events? Presumably by registering callbacks which simply add an event to a queue. But does the event mechanism allow you to only register those callbacks which are actually required at any given time? It's important that a callback is de-registered as soon as it is no longer required. And what about inter-related callbacks? E.g. for composite widgets (e.g. a scrolled window), both the composite widget and all of its individual components may provide callbacks. Sometimes, a single event may invoke multiple callbacks. Basically, it seems that this is moving far enough away from the toolkit's native behaviour that it may be making life harder for the application programmer.
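For concreteness, one way such queuing callbacks might look, with a hypothetical register argument standing in for whatever registration call the toolkit actually provides; note that it does nothing about de-registration, which is exactly the concern:

import Control.Concurrent.Chan (newChan, readChan, writeChan)

-- The registered callback enqueues each occurrence; consumers get a
-- blocking action that takes occurrences off the queue.
eventsFromCallback :: ((a -> IO ()) -> IO ()) -> IO (IO a)
eventsFromCallback register = do
  queue <- newChan
  register (writeChan queue)
  return (readChan queue)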
To "listen" for events, two functions would be provided:
sync :: Event a -> IO a
which blocks until the event happens (or returns immediately if a queued occurrence of this event has already happened).
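Assuming, purely for illustration, that an Event is backed by a channel of queued occurrences, sync could behave like this:

import Control.Concurrent.Chan (Chan, readChan)

-- readChan returns a queued occurrence immediately if one is pending,
-- or blocks until the next one arrives.
newtype Event a = Event (Chan a)

sync :: Event a -> IO a
sync (Event ch) = readChan ch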
This implies modality, which is usually a bad thing. Also, it sounds like a recipe for non-portability; what happens in response to all the other events which turn up while you're waiting for this specific event?
What is modality?
Where the application switches between distinct modes, behaving differently in each mode.
However, I think that when you are waiting for an event, you are implicitly discarding others.
Do you mean "discarding" (i.e. nothing will ever happen in response to those events) or "ignoring" (i.e. the code performing the listen doesn't respond to those events, but something else might)? The former doesn't make sense; it's seldom the case that there is only one type of event which is meaningful at a given time.
If you want to listen to two streams, either you merge them, or you use a separate thread for each stream.
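A sketch of the merging option, again over hypothetical Chan-backed streams (note that both streams must carry the same occurrence type here, which is itself a design question); the other option is simply one forked reader loop per stream:

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever)

-- Fork one reader per input stream; each copies occurrences onto a
-- single merged queue.
mergeStreams :: Chan a -> Chan a -> IO (Chan a)
mergeStreams a b = do
  merged <- newChan
  mapM_ (\src -> forkIO (forever (readChan src >>= writeChan merged))) [a, b]
  return merged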
That doesn't sound like the way most toolkits work.
Either I've completely misunderstood what you're proposing, or it's so far removed from typical GUI idioms that it's bound to be a mistake. I hope that it's the former.
If you have a specific architecture in mind, one which could be implemented in Haskell on top of backends which only provide callbacks, then none of this is an issue. OTOH, if you're relying upon having the backend provide an event stream, then that's a problem.
--
Glynn Clements