
Axel Simon wrote:
So, to remain on-topic: what do you people think about a GUI builder written in CGA, for the CGA? [..] Also, Motif interface builders tend to rely upon the fact that Xt widget classes are self-describing. Given a WidgetClass value (pointer to a WidgetClassRec structure), you can enumerate the attributes of the widget class. Consequently, Motif interface builders normally allow you to use widget classes from extension libraries (i.e. widget classes which were unknown at the time the interface builder was written and compiled).
I get the impression that the models of Motif and Gtk are very similar...
No; they're far from similar.
There's also the fact that different toolkits have substantially different models for geometry management. E.g. Win32 uses absolute positions and sizes, specified in fractions of the font size. OTOH, Xt normally uses attachments; the actual positions and sizes are computed at run-time based upon factors such as the label text and font of each of the widgets, the size of the window (and whether the window size can be changed to accommodate the layout or whether the layout must adapt to the window size).
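For concreteness (this example is mine, not from the original mail), the Win32 model might look roughly like this: each control is created with an explicit position and size, so the layout doesn't adapt to the label text by itself. Plain pixels are used here; dialog templates express the same geometry in dialog units derived from the font, and the control names and coordinates are illustrative only.

#include <windows.h>

void create_row(HWND parent, HINSTANCE inst)
{
    /* "STATIC" and "EDIT" are the predefined Win32 control classes;
       every control gets a fixed position and size up front. */
    CreateWindow("STATIC", "Name:", WS_CHILD | WS_VISIBLE,
                 10, 10, 60, 20, parent, NULL, inst, NULL);
    CreateWindow("EDIT", "", WS_CHILD | WS_VISIBLE | WS_BORDER,
                 80, 10, 200, 20, parent, NULL, inst, NULL);
}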
This is part of the reason why I would prefer an API which encouraged the code to stick to the core application logic (i.e. perform a given action in response to a given event), and have everything else determined "by other means".
So you're saying we should not supply any layout mechanism for arranging widgets?
Not necessarily. I'm saying that different GUI APIs have radically different approaches to layout, so providing a common interface isn't just a matter of matching up equivalent functions in each native API.

Win32 takes the simplistic approach: each widget has a specified position and size. The Xt approach is more involved. It has an open-ended set of container classes (subclasses of Composite), each of which is responsible for laying out its children. Most container classes are subclasses of Constraint, which allows the container to add attributes to each of its children. The most common Motif container classes are XmRowColumn (which lays out its children in rows/columns; menu bars and menus use this) and XmForm (a more generic class, which allows each widget to specify its position relative to its siblings or to the parent).

The advantage of the Xt approach is that the layout can be (and usually is) computed dynamically, based upon the preferred sizes of the individual widgets (which depend upon factors such as the label text and font). This simplifies I18N (you only have to change the strings; the layout will adjust itself), and allows dialogs to be resized sensibly without having to write any code.
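For comparison (again, an illustration rather than anything from the original mail), the attachment-based XmForm layout described above might look roughly like this; the widget names and resource choices are only a sketch:

#include <Xm/Form.h>
#include <Xm/Label.h>
#include <Xm/TextF.h>

Widget build_row(Widget parent)
{
    Widget form, label, field;

    form = XmCreateForm(parent, "row", NULL, 0);

    /* The label is attached to the form's edges; its width comes from
       its text and font, not from a hard-coded size. */
    label = XtVaCreateManagedWidget("label", xmLabelWidgetClass, form,
        XmNtopAttachment,    XmATTACH_FORM,
        XmNleftAttachment,   XmATTACH_FORM,
        XmNbottomAttachment, XmATTACH_FORM,
        NULL);

    /* The text field is attached to the right edge of the label, so it
       moves automatically if a translated label string is wider or
       narrower, and stretches with the window. */
    field = XtVaCreateManagedWidget("field", xmTextFieldWidgetClass, form,
        XmNtopAttachment,    XmATTACH_FORM,
        XmNleftAttachment,   XmATTACH_WIDGET,
        XmNleftWidget,       label,
        XmNrightAttachment,  XmATTACH_FORM,
        XmNbottomAttachment, XmATTACH_FORM,
        NULL);

    XtManageChild(form);
    return form;
}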
Does that mean we supply tools that convert MS Visual Studio .rc files, Glade's XML files and Motif's UIL files into backend-specific Haskell code that builds a widget tree?
I wouldn't advocate that approach. Instead, I would advocate having the individual backends provide bindings for the native interface (e.g. MrmOpenHierarchy etc for Motif). Even if the functions themselves aren't portable, the data that they return would be.
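A hedged sketch of that native Motif/Mrm route (the UID file name "app.uid" and the widget index "main_window" are made up for illustration): the layout lives in the UIL-compiled file, and the code merely fetches it.

#include <Mrm/MrmPublic.h>

static MrmHierarchy hierarchy;
static MrmType class_code;

Widget load_ui(Display *display, Widget app_shell)
{
    static String uid_files[] = { "app.uid" };
    Widget main_window;

    MrmInitialize();

    /* Open the compiled UIL hierarchy for this display. */
    if (MrmOpenHierarchyPerDisplay(display, 1, uid_files, NULL, &hierarchy)
            != MrmSUCCESS)
        return NULL;

    /* Fetch the named widget tree; the layout itself is described in the
       UID file, not in the code. */
    if (MrmFetchWidget(hierarchy, "main_window", app_shell, &main_window,
                       &class_code) != MrmSUCCESS)
        return NULL;

    XtManageChild(main_window);
    return main_window;
}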
The advantage is that some higher-level bindings (namely Fudgets) use combinators both to arrange widgets and to describe how information flow is handled. If we force the user to use external tools for the layout, Fudgets couldn't be implemented on top of CGA. But I do not dislike the idea. It could save us a lot of trouble.
Ultimately we still need to allow interfaces to be constructed
programmatically.
Basically, I'm suggesting to focus initially on the aspects which
absolutely have to be dealt with by code, and which are relatively
portable.
Code has to be able to get/set widgets' state (toggle state, text
field contents, slider positions, list selections etc), and there is
substantial commonality between the different platforms. This code
tends to be closely linked to the application logic.
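A small illustration of the kind of get/set operation that is common across platforms (not from the original mail; the USE_WIN32 switch exists only for the example): reading a toggle button's state under Motif and under Win32. A portable layer largely has to map one such call onto the other.

#ifdef USE_WIN32
#include <windows.h>

int toggle_is_set(HWND button)
{
    /* Win32: ask the button control for its check state. */
    return SendMessage(button, BM_GETCHECK, 0, 0) == BST_CHECKED;
}

#else  /* Motif */
#include <Xm/ToggleB.h>

int toggle_is_set(Widget button)
{
    /* Motif: the toggle button widget reports its own state. */
    return XmToggleButtonGetState(button);
}
#endif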
OTOH, the actual UI creation tends to be logically separate from the
application logic. It could involve calling a few high-level functions
which read an entire widget hierarchy from an external source, or it
could be a chunk of fairly formulaic code which doesn't really
interact with the rest of the code except for storing a lot of
"handles" for later reference. It's also the area which has to deal
with many of the more significant portability issues; e.g. different
layout models, differences in the set of available widgets (e.g. are
toggle buttons and radio buttons different classes?).
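As a sketch of that "formulaic" creation code (the names are illustrative only): it builds a fixed widget tree and stores the handles in a struct for the application logic to refer to later.

#include <Xm/RowColumn.h>
#include <Xm/PushB.h>
#include <Xm/TextF.h>

typedef struct {
    Widget row;
    Widget name_field;
    Widget ok_button;
} AppWidgets;

void create_ui(Widget parent, AppWidgets *w)
{
    /* Formulaic creation code: build the widgets, keep the handles. */
    w->row = XtVaCreateManagedWidget("row", xmRowColumnWidgetClass, parent,
                                     XmNorientation, XmHORIZONTAL,
                                     NULL);
    w->name_field = XtVaCreateManagedWidget("name", xmTextFieldWidgetClass,
                                            w->row, NULL);
    w->ok_button = XtVaCreateManagedWidget("ok", xmPushButtonWidgetClass,
                                           w->row, NULL);
}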
There is an analogue to be found in the way in which OpenGL deals with
initialisation: it doesn't. OpenGL itself simply assumes that you
already have a context in which rendering operations are meaningful,
and sticks to detailing what occurs within that context.
The actual initialisation is performed by "other means". At the lowest
level, you have the glX* functions on X11, wgl* on Win32, agl* on Mac
etc, but it's more common to just use a pre-built "OpenGL canvas"
widget (or GLUT).
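A minimal GLUT example of that division of labour (mine, not from the mail): GLUT performs all of the platform-specific window and context creation, and the application code only says what happens once a context exists.

#include <GL/glut.h>

static void display(void)
{
    /* Core application code: what to draw within the existing context. */
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                      /* platform-specific setup */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("demo");                   /* creates the GL context  */
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}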
This approach allows the core application code to remain unconcerned
with such details.
--
Glynn Clements