GUI systems seem to use two different positioning systems: coordinate-based and nested.
The coordinate approach seems to lend itself to better visual positioning (WYSIWYG), while nesting seems to be more conducive to resized screens. I think coordinate-based approaches better fit corporate culture because you can make the result very close to how the customer or boss wants it. Plus, nesting often creates surprises when things don't wrap as anticipated.
Examples of coordinate-based GUI: Visual Studio (Visual Basic, Visual Cee), Delphi
Examples of nested GUI: Java Awt and Java Swing using Layout Managers, Mozilla Xul, HTML tables with no height or width attribute, Tcl Tk, GTK+, Qt.
Meldings of the two: Apple's Interface Builder, Springs And Struts, Java Eclipse Ide Visual Editor.
CSS doesn't really fall into either category, does it? Most of its work is done without explicit coordinates... of course I refer only to pure HTML with CSS here; Javascript muddies things.
CSS falls into the melding category IMO. (Is that what you said?) Normally it's nested, but specifying position:relative or position:fixed turns it into a coordinate-based system for that element, with its descendants being nested.
Be aware that this title seems to present a False Dichotomy. Examples of non-coordinate-based, non-nested GUIs include declarative relative-layout (with statements such as 'above', 'below', 'left of', 'right of', 'behind', 'in front of', 'within', conditional appearance, etc.). These are, admittedly, less common than some other systems, but they are also relational (no 'hierarchy' of rules as per nested GUI) and delightfully composable (in the sense that the 'automatic' positioning rules may be combined for Automatic Vs Manual Placement). I don't know of any frameworks for this that I haven't written myself, though.
Perhaps the title should be "absolute versus relative positioning GUI". However, it's the sort of thing where, if you think about it long enough, you'd probably have to add a gajillion caveats and disclaimers to the title.
If you need to add caveats, that's a smell that you have the wrong title. So is a False Dichotomy. Something like 'Automatic Gui Layout Conventions' would be more accurate.
I disagree. Brevity often trumps accuracy for titles. They do not need to carry the full weight of all possibilities. That's why they have content. (I've had this debate somewhere already.) Plus, the common 2 are the main attractions. We can't keep toggling the title every time a small side alternative is mentioned. That's not economical.
Only very rarely does one need to sacrifice brevity for accuracy or correctness. You are too willing to make unnecessary sacrifices, and to claim that you did so for "brevity" is usually to lie to yourself and your audience. I think you're simply being lazy and short-sighted. Do you also dump your liquid waste on the floor of your home because it is slightly closer than the lavatory?
You didn't address my points, but I don't want to argue about titles. The damned title is fine how it is.
The title as it is is a False Dichotomy, is not particularly 'brief', and your defense of it has been totally irrational. For example, you wouldn't need to "toggle the title every time" if you had applied your brain for the few brief seconds it would have taken to realize that "Nested" isn't really all that common and there would be many alternatives to mention. Over 50% of this page is dedicated to 'alternatives', so I consider your "the common 2 are the main attractions" to be a misrepresentation at best and a lie at worst. Why do you even bother with such total Straw Man arguments? Lazy. Short-sighted. That's why. You seem to get upset over page titles that you see as 'inflammatory' (such as Brain Damage). If you swear to never argue about those, then I'll not bother you with your illogical False Dichotomy titles. Deal?
Dude, you are being anal again. Go obsess on something else.
Dude, you're oozing and seeping nasty irrational shit all over the place again. Learn to keep it shut.
Why "automatic" in your suggestion?
There are reasonable debates on Automatic Vs Manual Placement. If there was no such debate, or there was some implication in the opening that you wished to include discussion of manual layout issues (such as persistence, per-user views, etc.), then I would not have included the word "automatic".
Fully automatic to me means that the designer gives no info whatsoever about placement. This excludes below, above, right, left, etc. "Automatic" is relative.
Automatic, I'll agree, is relative (and staged/layered). Automatic Vs Manual Placement still makes it clear that in "manual" placement the user is responsible for placement, not the designer. Sometimes user is designer, so there is overlap, but the dichotomy in the roles is one that can commonly be cleanly applied in the real world. What framework actively supports users in moving buttons, text-boxes, other arbitrary display elements, etc. around? Would you like to discuss such frameworks here?
There is also coordinate-relative, such as '5 pixels from the upper-left corner'. This isn't as composable in the sense that a composition cannot simply insert something 'between' two other display items without first explicitly moving the items around. A claim is made below that 'Nested GUIs' are often coordinate-relative in this sense.
Summary of Arguments
Advantages of Nested GUIs
Generally adjust more automatically to screen size/resolution
Allows for modularity of groups of GUI elements (e.g., reuse of sub-dialogs); disputed by some
Easier to internationalize, as GUI components can scale to text and reorganize for character-order localization.
Disadvantages
Visual design tools for nested GUIs are not as easy to create.
Can be difficult to fine-tune placement (disputed by some below).
I personally think that each has its strengths and weaknesses and each is better for different situations. Thus, ideally a GUI kit should make both available. Some GUI kits have "layout managers" that include a coordinate-based placement as one of the layout techniques for the given "panel". Generally they include at least:
Relative placement with directions (up, down, left, right)
Grid-based where the grid elements are dynamic
Coordinate
The panels can be nested so that one can mix-and-match. HTML does this to some extent by mixing tables (grid) with relative placement. Each "cell" in the table is like an independent panel where you can use the (default) relative placement or define yet another table within it.
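To make the mix-and-match concrete, here is a minimal Swing sketch (the widget labels and class name are invented for the example): one nested panel lets a grid manager decide positions, while a sibling panel opts out of layout management and places its widgets by pixel coordinates.

import javax.swing.*;
import java.awt.*;

public class MixedLayoutDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Mixed layout");

        // Nested/flow-style panel: the layout manager decides positions.
        JPanel flowPanel = new JPanel(new GridLayout(2, 2, 5, 5));
        flowPanel.add(new JLabel("Name:"));
        flowPanel.add(new JTextField(10));
        flowPanel.add(new JLabel("Phone:"));
        flowPanel.add(new JTextField(10));

        // Coordinate-style panel: null layout, explicit pixel bounds.
        JPanel coordPanel = new JPanel(null);
        coordPanel.setPreferredSize(new Dimension(300, 60));
        JButton ok = new JButton("OK");
        ok.setBounds(10, 15, 80, 25);        // x, y, width, height
        JButton cancel = new JButton("Cancel");
        cancel.setBounds(100, 15, 90, 25);
        coordPanel.add(ok);
        coordPanel.add(cancel);

        frame.add(flowPanel, BorderLayout.CENTER);   // content pane defaults to BorderLayout
        frame.add(coordPanel, BorderLayout.SOUTH);
        frame.pack();
        frame.setVisible(true);
    }
}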
-- top
Limitations of absolute positioning are very apparent if you try to view a site like cnn.com on a screen with high resolution and small physical size (1400x1050 on my 15 in. notebook screen): everything looks small and is difficult to read and use. Sites like java.sun.com use relative positioning, fit nicely on my screen, and are very useful when I use Mozilla's CTRL(+) zooming.
Coordinate-based does not necessarily preclude zooming. It may just be bad design. A good design would perhaps make it so that only ads or menus go out of direct view when zooming, not specific articles.
One of the alleged drawbacks of the coordinate approach is that things are not "grouped". In practice, I think I would rather have set-based grouping anyhow rather than nested. For example, the group that may belong to a given entity may be different from or orthogonal to the group that you want to put a square around. Hierarchies are too limited in my opinion with regard to grouping options. See Limits Of Hierarchies.
[What do you mean by this? Obviously, a logical grouping of controls doesn't necessarily (but probably should) have anything to do with how they're displayed on your screen. But when you're talking about 'layout', then of course controls have to be grouped with the box around them. Logical/programmatic grouping and layout grouping don't have to have anything to do with each other.]
That is true, but generally tools tend to assume they are the same. But if they allow/implement orthogonal grouping for non-visual stuff, then why not also allow it for visual stuff? I suspect because it requires more training to work effectively with sets than trees.
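A sketch of what such orthogonal, set-based grouping might look like in code (purely hypothetical; the class and method names are invented and no shipping toolkit is being described): a widget may be tagged into any number of overlapping groups, with no hierarchy, and a whole group can be moved as a unit in a coordinate-based panel.

import javax.swing.*;
import java.util.*;

// Hypothetical named sets of widgets: a widget may belong to many
// overlapping groups (no hierarchy), and a group can be moved as a unit.
public class WidgetSets {
    private final Map<String, List<JComponent>> sets = new HashMap<>();

    public void tag(String setName, JComponent widget) {
        sets.computeIfAbsent(setName, k -> new ArrayList<>()).add(widget);
    }

    public void moveSet(String setName, int dx, int dy) {
        for (JComponent w : sets.getOrDefault(setName, Collections.emptyList())) {
            w.setLocation(w.getX() + dx, w.getY() + dy);
        }
    }
}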
IME, the coordinate approach has great inertia against change. Want to add a new field? You shift everything over manually; the position of every control depends on the size and position of every other control. With nested, you just add the field and you are done.
True, but it is usually easier to find the add location to begin with using coordinates, and a good IDE usually makes it simple to select a group of widgets and move them down as a group. Then again, it depends on the particular form arrangement.
(I don't know if "tab order" comes into play. If tab order allows decimals, then inserting something in between is usually not a problem. It is the integer-based tab orders which are a headache IMO.)
I assume you really mean it is easier to find the add location in the visual tool you are using, because I find that, in the source, "add(control, 123, 234)" is just as tough to find as "add(control)". If we are comparing visual tools, it is just as easy, if not easier, to find the location to add your control, at least in VAJ the last time I used it (it lists a tree view of all the controls starting from your main window, making the nesting of the controls very clear).
For complex GUIs, trees often stink IMO - Limits Of Hierarchies. For example, I often want to group, filter, or move by rather orthogonal criteria. Trees don't handle orthogonality very well.
[You're talking about your convenience at design time vs. functionality at run time again. Regardless, it's a functionality of a specific visual tool, not any sort of consequence of coordinate-based layout.]
I am not sure what you mean.
Or Structured Graphics versus Structured Documents. The former appeals to graphical artists, those with a visual orientation; the latter to textual artists, those with a language orientation.
I didn't understand the term "nested" until the examples were added. How about "flowed" instead? -- Kris Johnson
I think we're actually talking about static vs dynamic... i.e., hardcoding the relationships numerically between componenents/sizes vs dynamically assigning them through constraints of some form.
The fact that a dynamic layout is 'flowed' as opposed to 'nested' (widgets in containers in containers etc) isn't really the issue
The "HTML tables with no height or width attribute" example is what led me to believe that "flowing" is the issue rather than "nesting", as does the "when things don't wrap as anticipated" phrase in the prologue. -- kj
But if it does have height or width attributes in parts of it, is it something in-between? I am okay with "flow" if you want to change the title, but perhaps there is a non-hierarchical flow approach possible, although I cannot envision it right now.
[delete this section when forces are resolved]
"Flowing" is a consequence of the second way, but not its defining characteristic. When you're discussing HTML, this tension is characterized as "visual vs. semantic." Not sure if it applies as well here. -- francis
How about
Flow-based:
hierarchical nesting without absolute coordinates
Coordinate-based:
uses absolute coordinates
Mixed:
uses some nesting and some absolute coordinates
It seems that there are several distinct "axes" being considered here:
Flat organization (all components are siblings) vs. hierarchical organization (containment)
Absolute positioning (components share a common reference point) vs. relative positioning (components use other components as reference points)
Numeric vs. non-numeric position specifications
Screen pixel-based coordinates vs. logical coordinate systems
Explicit layout by developer vs. automated constraint-based layout
Edit/compile-time layout vs. run-time layout
Visual vs. semantic (which I understand to mean "Based on what looks good vs. based on the logical structure of the data")
Most real-world GUI systems provide some level of support for each extreme on each axis. A developer can usually make any system work any place along any of the axes, so this is not an "either-or" situation, but a "how easy is it to do what I want" situation.
While certain combinations are more useful than others, I don't think "coordinate" vs "nested" makes it very clear exactly what we are talking about. Neither does "flow" or "layout". Maybe this page needs to be split up, or maybe somebody needs to clarify what the topic is.
-- Kris Johnson
Associative layouts (where most widgets' positions are determined by a relationship to a different widget), as opposed to independent layouts (where most widgets' positions are determined by a relationship to a top-level container)
I think that is the same as "absolute positioning vs. relative positioning" in the bullet list above.
What are examples of "non-numeric" and "logical coordinates"?
"Non-numeric position specifications" would be something like "to the left of object X", or "between objects X and Y", or "at the top of the window" or "half the width of the screen" (you might claim that "half" is numeric). A "logical coordinate system" is one that is not directly tied to hardware pixels; you might use percentages of window size, for example.
What about auto-generation of GUIs, such as a rule-based system that generates GUIs from corresponding models? Coordinate systems can't handle that. It would then be more of Esthetic vs Model Based. Esthetic would imply the layout is based on visual appearance (a VB application where coordinates are hardcoded), and Model Based would imply that the GUI is generated from a model. If your application does not auto-generate GUIs, then flow-based layout does not offer much other than resizability.
Auto-generation is usually used for preliminary arrangement in my experience. One then makes custom adjustments.
I do not mean auto-generation of code. I mean generation of the GUI at run-time. -- Anonymous Donor
Users are Picky Down to the Pixel Level
I am suggesting that auto-generated GUIs are fine for prototypes, but users or customers often want to tweak stuff in ways that flow-based layout cannot easily handle. There are certain esthetic "rules" that a flow engine just has no clue about, perhaps because it is a subjective thing. But The Customer Is Always Right even if they want stupid things. In other words, coordinate-based layout allows you to do stupid or illogical things with the layout more easily.
For example, some things might tend to almost line up out of pure coincidence. The customer looks at it and says, "could you line these up better, please?" not knowing or caring that it is just coincidence that they are almost lined up. The flow engine has no idea that a bunch of things happen to coincide in length or position. With coordinate-based, you simply shuffle things around a bit in a WYSIWYG tool to make them line up all the way. No fuss, no hassle, no NBSP-like filler things. Suppose we have some input fields with labels right-justified against them:
...Sdf asdf sflksjdf: [____]
....Eosjslk blkjasdl: [____]
............Grog sdj: [____]
..........Asdsdf sdf: [____]
............Sld bsok: [____]
...Ybwslkj blso apon: [____]
...Ublkjs mogk robno: [____]
The user might look at the middle three fields as a visual group and say, "Those are almost aligned on the left side. Can you make them fully aligned?" Our framework only knows left, middle, and right aligned. It is blind to this coincidental alignment of the left-most characters in this case. (I would personally disagree with such a layout, but The Customer Is Always Right. They don't appreciate lectures on "proper" layouts, they just want it as they envision it without "flak".) In a coordinate-based system you simply shuffle the text around to make it fit how the customer wants it to be:
...Sdf asdf sflksjdf: [____]
....Eosjslk blkjasdl: [____]
..........Grog sdj..: [____]
..........Asdsdf sdf: [____]
..........Sld bsok..: [____]
...Ybwslkj blso apon: [____]
...Ublkjs mogk robno: [____]
(Dots used to avoid Tab Munging. Unfortunately, the dots make it hard to see the alignment issue.)
I realize that we can make a split sub-container in a nested model to left-justify the middle three, but it is a lot more work, and it is hard to tell from the code what is going on until it is actually rendered. I would rather just click-and-drag than introduce more UI fiddle-faddle into the software code. The less we have to go back and nit with the code for esthetic purposes, the better. Adding yet more nestedness to "solve" things like this reminds me of Adding Epicycles. -- top
A non-broken layout management system can do that without blinking. Furthermore, unlike the coordinate-based approach, it will continue to lay things out as specified even when the labels are translated into another language, or when the user selects a font with different kerning, or when it's run under a theme engine where the controls on the right are a different size. Yes, it involves nesting. No, that doesn't necessarily involve munging around with your code. Besides, what do you think all your pixel-twiddling is? That's code too.
I am skeptical it can do it conveniently. The computer can't "know" it looks odd to humans, especially if it only "bothers" a specific customer. Besides, a change in language may create new coincidental oddities that need to be tweaked in or out just the same. The words that were short may be long in other languages, so the middle-block issue may become irrelevant or different altogether in a different language. If you can provide some pseudo-code to illustrate how it can be handled via nested blocks, I would like to see it. I can make such tweaks more easily with mouse dragging than by adding new text-block layers to the GUI code. It is simple: See, Grab, Drag, Save, Done. -t
You can do that in a flow layout system if it supports grids (like HTML tables). Having seen that the things almost line up, nail them to a grid.
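As a rough sketch of the "nail them to a grid" suggestion (offered as an illustration, not as a settlement of the convenience dispute), a Swing GridBagLayout could pin the middle three labels to the label column's left edge while the others stay right-justified against their fields; the label strings are just the placeholders from the mockup above.

import javax.swing.*;
import java.awt.*;

public class GridAlignSketch {
    // One label/field row; the anchor decides which edge of the label
    // column the label sticks to.
    static void addRow(JPanel p, String text, int row, int labelAnchor) {
        GridBagConstraints c = new GridBagConstraints();
        c.gridy = row;
        c.insets = new Insets(2, 4, 2, 4);

        c.gridx = 0;
        c.anchor = labelAnchor;   // EAST = right-justified, WEST = shared left edge
        p.add(new JLabel(text), c);

        c.gridx = 1;
        c.anchor = GridBagConstraints.WEST;
        p.add(new JTextField(12), c);
    }

    public static void main(String[] args) {
        JPanel form = new JPanel(new GridBagLayout());
        String[] labels = {"Sdf asdf sflksjdf:", "Eosjslk blkjasdl:", "Grog sdj:",
                           "Asdsdf sdf:", "Sld bsok:", "Ybwslkj blso apon:", "Ublkjs mogk robno:"};
        for (int i = 0; i < labels.length; i++) {
            // Rows 2..4 are the "middle three": pin them to the column's left edge.
            int anchor = (i >= 2 && i <= 4) ? GridBagConstraints.WEST : GridBagConstraints.EAST;
            addRow(form, labels[i], i, anchor);
        }
        JFrame f = new JFrame("Grid alignment sketch");
        f.add(form);
        f.pack();
        f.setVisible(true);
    }
}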
Why have the middle-man? Just use coordinates. Besides, grids don't handle the "almost" problem described above unless each cell is pixel-sized, in which case you have a coordinate system anyhow. I suggest you try the above exercise using HTML tables and record your changes. Note that at a minimum you are going to have to split one column into two and put "colspan=2" attributes on all the non-participant cells. Then you have to worry about the middle-most row lining up properly on the right, because it will be left-justified if we want it to match its two neighbors on the new left edge. Our first try will probably look like this:
...Sdf asdf sflksjdf: [____]
....Eosjslk blkjasdl: [____]
........Grog sdj....: [____]
........Asdsdf sdf..: [____]
........Sld bsok....: [____]
...Ybwslkj blso apon: [____]
...Ublkjs mogk robno: [____]
There is no simple HTML command to tell it to line up on the right side and match its two neighbors at the same time. That is an abstraction that neither HTML nor any other known tool was built to handle natively in a straightforward way. In HTML it would likely require fiddling around with absolute cell widths or NBSPs. Something that would be a 1-minute drag-and-move exercise under coordinates is now a 45-minute exercise in type-text-and-re-render, or as I like to call it: "nudge-N-fudge". Plus, it still might not render the same on another browser even after all that. Further, MS-IE has known layout bugs that they don't seem to be in a hurry to fix.
Note that this issue applies to the nested approach also, not just auto-generated GUIs.
No it doesn't. Relative coordinates can be specified at any precision required.
Relative coordinates? That is kind of a hybrid between the choices considered.
No it isn't. What you refer to as "nested GUIs" are just GUIs that use relative coordinates. Swing and HTML both support relative and absolute positioning of components, and those positions can be expressed in pixels.
I think we have some serious definition issues to work out.
OK, let's start with the top of the page:
"Examples of coordinate-based GUI: VB, VC++"
These use absolute positions for components.
"Examples of nested GUI: Java AWT and Swing using LayoutManagers, HTML tables with no height or width attribute, Tk."
These support relative positions for components. Swing and AWT have a "null" layout manager that allows absolute positioning. HTML's divs can be relative or absolute.
What other issues are there?
I would not call HTML and Tk style "relative". Relative is "5 pixels from the corner of the containing table cell". Here is a working taxonomy:
Absolute coordinates
Relative coordinates
Flow-based positioning
(BTW, Tk has an optional coordinate-based geometry manager.)
I've never used Tk, but HTML/CSS definitely supports relative positioning. (How else would you describe "
"Components"? I admit I don't yet have a satisfactory definition of "flow based" to offer as an alternative, but I don't like "relative" because there is relative positioning like your example above.
That CSS clause is an example of relative coordinates, but not flow-based. Working together, perhaps we can come up with sounder definitions. The best I can say at this point is that "flow based" gives positions relative to other components using non-distance references. Relative coordinates use numeric distances, as do absolute positions. In practice, flow-based may specify borders/buffers/margins (visible or invisible) using distances, which is a slight violation of the rule, but perhaps forgivable.
Different reference techniques:
Absolute coordinates from corner of window
Relative (offset) coordinates from other widgets
Container references (such as which table a given item is "in")[1]
Compass directions relative to other items (including "up", "right", etc.)
Compass directions relative to container (also known as "alignment")
Flow-based generally doesn't use the first two although it is possible to use hybrids, which I have done before to fine-tune (tweak) layouts that the flow-based approach could not by itself get quite right.
[1] HTML, for example, does this by context, not by explicit reference. If one wanted to use an explicit reference for web stuff, they could use a DOM tree path.
-- top
Another force is the need to generate the GUI at runtime (i.e., the number/type of controls cannot be known at compile time). You have to be more careful when generating the GUI in a coordinate-based system.
It seems to me this would probably greatly depend on the GUI API/framework and perhaps to a lesser extent the language.
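For what it's worth, here is a hedged Swing sketch of the difference (class and method names invented): when the field list is known only at run time, a layout manager absorbs the varying count, while a null (coordinate) layout forces the generator to do its own position bookkeeping.

import javax.swing.*;
import java.awt.*;

public class RuntimeFormSketch {
    // Layout-manager version: the count just falls out of the loop.
    static JPanel managed(String[] fields) {
        JPanel p = new JPanel(new GridLayout(fields.length, 2, 4, 4));
        for (String f : fields) {
            p.add(new JLabel(f + ":"));
            p.add(new JTextField(12));
        }
        return p;
    }

    // Coordinate version: the generator must compute y per row itself.
    static JPanel byCoordinates(String[] fields) {
        JPanel p = new JPanel(null);
        int y = 10;
        for (String f : fields) {
            JLabel label = new JLabel(f + ":");
            label.setBounds(10, y, 120, 22);
            JTextField box = new JTextField();
            box.setBounds(140, y, 160, 22);
            p.add(label);
            p.add(box);
            y += 28;                        // bookkeeping a manager would do for us
        }
        p.setPreferredSize(new Dimension(320, y + 10));
        return p;
    }
}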
We are looking for an abstract way to describe the layout of the components. (A coordinate system is not comfortable enough anyway. For example, the Java Swing Layout Managers are very good.) The underlying implementation should be hidden. We came up with this:
For arranging components, we only need to know whether they should be laid out vertically or horizontally, and optionally which component should be placed in the center. To get a complex layout, these components are grouped and the groups themselves are laid out.
I disagree. Abstraction does not work with GUIs because the customer wants them to fit their own mental vision (see above), not idealistic notions of grouping and placement. Abstraction works best for things hidden from the user.
Please see also the discussion on the topic "Swing: Handling Complex Layouts" at jinx.swiki.net, on the Jinx Wiki on the Swiki Farm.
I don't see any visual examples there. Anyhow, nobody is arguing that it cannot be done with enough nesting of different kinds of layout managers and iterative futzing around, but rather that it can get awkward and non-visual. It also risks Discontinuity Spikes if you pick the wrong one. Whenever people have to ponder for a long time over which approach to take, it is probably a symptom of an inflexible approach.
Coordinates can handle any screen arrangement you have in mind, unlike pure flow layouts. But flow-based layouts can be a kind of shell around coordinate-based layouts since everything must eventually be translated to coordinates anyhow. Thus, it seems logical to me to have a base protocol based on coordinates, and then put a nested (flow) layout engine on top of that if desired. You cannot do it the other way around.
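A small sketch of that "shell" point, assuming Swing (the class below is a toy written for this page, not a library class): even a trivial flow-style layout manager ultimately resolves its rules into the same setBounds() coordinate calls that a coordinate-based design would make directly.

import java.awt.*;

// Toy vertical "flow" manager: stacks children top to bottom. Its flow
// rules end up as plain x/y/width/height coordinates on each child.
public class VerticalFlow implements LayoutManager {
    private final int gap;
    public VerticalFlow(int gap) { this.gap = gap; }

    @Override public void addLayoutComponent(String name, Component comp) {}
    @Override public void removeLayoutComponent(Component comp) {}

    @Override public Dimension preferredLayoutSize(Container parent) {
        int w = 0, h = gap;
        for (Component c : parent.getComponents()) {
            Dimension d = c.getPreferredSize();
            w = Math.max(w, d.width);
            h += d.height + gap;
        }
        Insets in = parent.getInsets();
        return new Dimension(w + in.left + in.right + 2 * gap, h + in.top + in.bottom);
    }

    @Override public Dimension minimumLayoutSize(Container parent) {
        return preferredLayoutSize(parent);
    }

    @Override public void layoutContainer(Container parent) {
        Insets in = parent.getInsets();
        int y = in.top + gap;
        int width = parent.getWidth() - in.left - in.right - 2 * gap;
        for (Component c : parent.getComponents()) {
            int h = c.getPreferredSize().height;
            c.setBounds(in.left + gap, y, width, h);   // the coordinate step
            y += h + gap;
        }
    }
}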
With an appropriate nesting layout engine, you could go either way. The points of a flow layout are all that need be nested, not the entire components themselves. You can therefore define a grid of points by successive division, and name those points (1,1)(1,2)(1,3)...(2,1)(2,2)...(10,9)(10,10) and so forth. They are equivalent, and so the choice between them should be decided by whichever works best for the platform in mind. -- William Underwood
If I understand you correctly, that leads to the Adding Epicycles complaint given above.
That complaint is ill-founded: epicycles are bad because they add complexity to a model in order to make it match a reality for which it is ill-suited. Adding a new definition of a point to a UI model actually models the UI exactly, because the model _is_ the reality.
In other words, changing the coordinates by hand is the coincidental method of aligning components. It is like making columns line up in a word processor by using a bunch of spaces as opposed to setting a couple tab stops. It is like (nay, it is exactly) making eyeflow in a layout by nudging objects till they look right as opposed to setting a couple guides and having the objects snap to _exactly_ the right spot. (This is well known in publishing circles)
-- William Underwood (who has spent more than his fair share of time doing print layout design in a previous life)
Perhaps 90% of the time, but about 10% of the time it does not have fine enough control. Human minds don't always process images or esthetics with the same rules that the computer uses.
[I disagree with this - I've never seen a layout I couldn't implement in my layout manager of choice.]
Nobody is suggesting it is a can-or-can't situation, but rather an issue of convenience and productivity. Just like Adding Epicycles, with enough layers anything can probably eventually be achieved that way, but there is often a point where it would just be easier to click-and-point and be done with it. You pull out yer gun and just shoot du rabbit rather than build elaborate rabbit traps.
I was forced to make a website that had to make use of extensive absolute CSS positioning. The pain. There seems to be an overall forgetfulness of separating what the GUI creator needs from what the user wants. Which, as was already stated, tends to be illogical - I cherish every possibility of achieving this or that requested cheesy, flashy, pointless eye candy without sacrificing all too much design logic. I have wet dreams of being able to create application GUIs with something as elegant and structured as, let's say, HTML/CSS, without having to wade through pixel-wise mess and jungles of identifiers for widgets that only contain other, more semantically important ones.
What the user might want to do with what comes out of my design should not be my concern in the GUI code, but rather the concern of whatever system renders the GUI. A GUI defined with a flow layout is much easier to maintain in case someone has to hack at it and doesn't have your layout drawings, and I seriously doubt allowing the user to change widget position offsets is a horror to implement. As a matter of fact, it is very well implemented on most platforms, though I concede embedded and proprietary systems might pose a problem - but then again, these usually have a limited group of users who know better than to complain to their employer about unneat, pointless details of how their work tool looks. Serialize said offsets in a configuration file, and presto.
With a coordinate-based system, you're actually robbing yourself of the opportunity for easier change at runtime that comes with having elements divided into small collections independent of each other (should you really feel such workspace chaos is a benefit), whose actual coordinates at temporal index foo need not interest anyone, much less the already overstrained coder. As far as set-based grouping goes for the more esoteric GUI organizations, no one's restraining you from defining your own containers and groupings that reference the elements you want, except hardware, but I'm not digressing that far. But not having to do so for the simpler GUIs, since you already have a structure you can manipulate, will win out in the case statistics.
To conclude with a small thought-provoker as to whether the user really wants that much control over the GUI: how many of you who use Windows have done something as trivial as relocating the taskbar from the bottom, where it has sat since the install?
How about something even simpler? How many of you have ever resized a window? Coordinate-based systems don't support resizing. If they do support resizing, they do it by implementing layout algorithms - which are the same thing as "flow"-based layout. If your app will only ever run on machines with the same resolution and the same size fonts, and you don't permit the user to ever resize the window, then coordinate-based layout is fine.
I agree that such is a weakness of coordinate-based layout. But many managers want fine control over the layout more than they want screen-size flexibility. I agree it is not always rational, but we live in a Dilbertian world and cannot fight the inevitable. In short, coordinates better fit the Dilbertian world. We are not on Vulcan anymore, Dorothy.
[Honestly, I have trouble seeing how this is even an argument - the claim seems to be that since it's easier for you to drag stuff around in a form designer, coordinate-based layout is somehow superior. Note that HTML isn't the only model here. Positioning in HTML is primitive at best; in a "real" UI there's a lot more control and you have a lot more options.]
Some people relish the ability to futz around with their environment. Some don't care. People vary widely.
Also, there is a reason why OOP won out over structural programming.
Better hype.
Said futzing around is implementable, and implemented, without having to be present in the GUI logic. And no way am I taking the better-hype argument. True, anything you can do with OOP can be emulated procedurally with strict nomenclature, but I see no point in emulation when I can use a more natural way of expressing it. Well, that should prolly go onto another page, so I'll stop here.
"Natural" is probably very personal and subjective, based on past long arguments. Anyhow, I lean toward declarative GUI approaches, not so much procedural. Note that coordinate-based approaches do not rule out moving stuff as a group. Some tools make it easier than others. Perhaps that area can be improved by being able to have named sets (not hierarchies) such that one can simply type the name of the group/set and then move them all at the same time. "Snap-to" grids are also nice for such (as long as you can override them on a widget-by-widget basis). Also, I agree that both styles have their place. It would be nice to be able to convert an existing nested-based layout to a coordinate one when you eventually need finer control. That way you can start out with the cleaner ideal approach, but tweak it at the pixel level if the customer is picky and does not want excuses. I am not sure this is an OOP-versus-non-OOP issue anyhow. Does OOP naturally lead to flow-based layouts? How do you reckon that? -- top
[I'd say this has zero to do with OOP. GUI layout isn't programming anyway, any more than writing HTML is. I don't see your need for "finer control", either - if you don't have the control you need with a nested layout manager, then your tools are broken. Sharpen The Saw. -- Chris Mellon]
They are often interrelated. What's possible/practical with programming may greatly affect the GUI design choices. Ideally they'd be separate interests, but in practice they are often intertwined for non-trivial apps. And auto-layout is often just "dumb", as illustrated nearby. It does not understand human aesthetics and psychology, and putting that knowledge into the tool, if technically possible, would make it bloated and unpredictable (just like the users :-)
From what I have seen, people like GUI builders. They appear to like clicking and clicking and dropping and dropping, and clicking and clicking and dropping and dropping. I am buggered if I know why they enjoy it. From experience using auto generation of screens over twenty years in one environment, I am able to assess the usefulness of the generation process as opposed to the screen painting process. And my conclusion is absolutely in the generation camp.
Of the 18 screens in the current application I am developing, only 2 require any manual intervention, where a field may hog more screen space than required or where the layout of a table can be improved. This means I had to do no work, zero, none, for 16 of the screens. They just magically appeared when I was testing.
Because the screens are generated, the definition applies to any screen, not just the environment's GUI builder. So conversion of the screens to HTML is a no-brainer. Which means I have to do no work to spit data at a browser. Zero, none.
But one generally cannot translate one-to-one between regular GUIs and web GUIs, at least not without targeting a specific browser vendor or going with a Lowest Common Denominator Interface. Browsers simply lack things like Combo Lists, editable grid controls, tab controls, and outliners (tree browsers).
That is not true - Cache CSP for example (www.intersystems.com) or Design Bais (www.designbais.com)
Sure, but they are not standardized and often require money and installation.
Yes, they are commercial products, but I think the Cache licence will cost you nothing until you deploy - so if you never deploy, it is free. It sounds like they are making a concerted effort to capture the developer market. (I don't use it myself, yet)
Maybe in this case you just had flexible and understanding managers who let designers stay within the practical boundaries of the layout tool. But this is not always the case.
There is that question of user finickiness. Very difficult to overcome. But after overcoming that, the user is in the long term happier, because the generation process creates some consistency within the system. -- Peter Lynch
Generally I agree, but the social/political environment often doesn't reward a longer-term view. Sometimes we just have to cater to silly whims if we want the rewards the game offers. Related: Choosing Satisfaction Over Money. You can say, "I will risk my career to make the GUI right instead of please fools." But that decision will not scale to all readers or workers.
I think a lot of things on this page could be summed up as "HTML positioning sucks", which I have no argument whatsoever with, and "Nested layout is too complicated", which is very different from being inferior to coordinate.
I think "tedious" or "code-intensive" is a better description than "complicated". GUIs are visual, so it is natural to expect the design to be visual also.
It's natural to expect GUIs to work at different scales, different orientations, different aspect ratios, etc. Until you can show a coordinate approach that satisfies those expectations without being tedious or code-intensive, there's nothing to discuss. Relative-position layout managers, while more complicated internally, aren't more "tedious" and can be configured without writing any code at all via visual tools.
Nested layouts provide a strict superset of capability, at the expense of complexity. Obviously, the need is to reduce complexity, which you can achieve by using form designers that support layout magnets. Qt Designer has a nice feature which lets you drag and drop your controls, which is nice for prototyping and brainstorming, and then to add those controls to layout managers. -- Chris Mellon
Relative position GUIs are much easier to internationalize. Fixed position GUIs don't automatically adjust to different character set sizes, and when you invert the text direction for Arabic or Hebrew the coordinates become meaningless.
To end this useless thesis-antithesis brawl, I will try to show that the synthesis of both is already widely in place. Examples:
historically relative placement (table layout with implicitly relative positions of the enclosed elements).
layers and stylesheets for placement independent of the contains-relation.
everything is a box. Boxes in boxes in boxes. Spacing relative to the surrounding box.
But it is easy to put any box into a zero-sized box (by using negative width/height). With these boxes, layers and absolute positioning can be realized (as is done in the graphics packages).
I guess there are many more examples. The remaining question is not which is better, but When To Apply Which or Finding The Middle Ways.
Coordinates with Stretch-Zones
I thought of a possible way to allow coordinate-based screens to expand without having to abandon coordinates. Have a vertical and horizontal "stretch bar". In design mode the stretch bar would just be two lines, a cross in the middle whose end-points reach to the edge of the form. When the window is expanded, the line(s) forms a gap and widget points (corners) will stay on each side of the panels. Imagine the cartoons where somebody is on top of and between two train cars with one foot on each train. When the trains separate, the character's legs stretch. The location of the stretch bar would be designer-determined. It probably would not cover 100% of all situations, but may cover most. Also, in some cases some widgets may want a designation of "non-stretch" such that they keep the same proportion, stuck to one side or the other (or top and bottom) as the primary reference.
Are you thinking of Springs And Struts?
No, but if somebody else already tried the idea, that is great. It looks a bit more complicated than what I have in mind, though.
To set how stretching happens across the zones, each edge of the bounding rectangle of the widget needs to be assigned to one of the 2 possible stretch directions.
widget.topStretch = "top";        // or "bottom"
widget.rightStretch = "right";    // or "left"
widget.leftStretch = "left";      // or "right"
widget.bottomStretch = "bottom";  // or "top"
(Perhaps there should also be a "middle".)
These would be the defaults, so normally we wouldn't need to specify these specific ones. Note that buttons do not normally stretch, so bunches of buttons in a group may be best riding on a panel that you create for them, which itself uses the stretch settings.
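A rough sketch of how the stretch-zone idea might be wired up in Swing (this is only an illustration of the proposal above, not an existing framework feature; all names are invented): each widget records which window edges its own edges ride, and a resize listener shifts the riding edges by the size delta.

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.util.ArrayList;
import java.util.List;

public class StretchZoneSketch {
    // Per-widget stretch spec: which of the widget's edges ride the far
    // (right/bottom) side of the window when it grows.
    record Spec(JComponent w, Rectangle design,
                boolean leftRides, boolean rightRides,
                boolean topRides, boolean bottomRides) {}

    public static void main(String[] args) {
        JPanel panel = new JPanel(null);                 // coordinate-based panel
        Dimension designSize = new Dimension(400, 200);
        panel.setPreferredSize(designSize);
        List<Spec> specs = new ArrayList<>();

        JTextField field = new JTextField();
        field.setBounds(10, 10, 380, 25);
        panel.add(field);                                // stretches horizontally
        specs.add(new Spec(field, field.getBounds(), false, true, false, false));

        JButton ok = new JButton("OK");
        ok.setBounds(310, 160, 80, 25);
        panel.add(ok);                                   // rides the bottom-right corner, keeps its size
        specs.add(new Spec(ok, ok.getBounds(), true, true, true, true));

        panel.addComponentListener(new ComponentAdapter() {
            @Override public void componentResized(ComponentEvent e) {
                int dx = panel.getWidth() - designSize.width;
                int dy = panel.getHeight() - designSize.height;
                for (Spec s : specs) {
                    int left   = s.design().x + (s.leftRides() ? dx : 0);
                    int right  = s.design().x + s.design().width + (s.rightRides() ? dx : 0);
                    int top    = s.design().y + (s.topRides() ? dy : 0);
                    int bottom = s.design().y + s.design().height + (s.bottomRides() ? dy : 0);
                    s.w().setBounds(left, top, right - left, bottom - top);
                }
            }
        });

        JFrame frame = new JFrame("Stretch zones");
        frame.add(panel);
        frame.pack();
        frame.setVisible(true);
    }
}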
Another feature would be scaling of widgets such that the user could magnify or shrink the screen, but the relative sizes of everything would stay the same. This would require vector-based fonts, however, which sometimes don't map well to raster-based conventions (at least until pixel sizes get sufficiently small via the technology curve).
This is a "real world" experience that reminds me of the limitations of auto-placement as described above (which I am not outright against if it's optional). I was hanging a picture on the wall the other day and was careful to center it properly between the available wall borders. It was a fairly narrow area. When done, I stood back and was dismayed about how off-center it looked. I checked the centering again, and technically it was correct, but it still looked off-center. It's just the "lay of the land" in that spot that tricks the eye. I'm still kicking around the idea of fudging it to make it look centered, but am too lazy to futz with it further. Many managers and users do complain, or at least notice such things on screen, and it thus affects a developer's score. -t
See also: Gui Coordinate Notation Discussion