Apparently it is the season to chime in with opinions on client side decorations (CSDs) versus server side decorations (SSDs) in the context of Wayland. I normally ignore these kinds of quarrels, but this is a case where I consider the dominating solution to be close to the worst possible one. The two pieces of note so far are this post from camp Gnome and this one from camp KDE.
For starters, Martin claims:
Nothing in Wayland enforces CSD. Wayland itself is as ignorant about this as X11.
Well, it is the compositor’s job to enforce it (i.e. do nothing, really) and he doesn’t want to – it is a case of bad engineering hygiene for any implementor of a protocol to skew its interpretation this blatantly, as it counteracts the point of having one. You don’t have to dig deep to find the intention behind the protocol – even the first commit to the project has:
Window management is largely pushed to the clients, they draw their own decorations and move and resize themselves, typically implemented in a toolkit library.
This has been debated again and again, and if the ‘me too’-ers had taken even a cursory Google glance at the mailing list rather than skewing things to fit their narrative (or just not doing the basic research, because why would you), they would immediately run into Kristian’s recurring opinion on the matter – and he designed much of the protocol. If that does not align with your wants and needs, perhaps this is not the protocol you should use, much less dig up worse extensions where the client can simply say “no, I want to decorate” and the server has to comply or break protocol, increasing the development cost on both sides, as even compliant clients would have to deal with the absence of that particular extension. It also does not do anything about the related mouse cursor issue.
Furthermore, a ton of stuff in Wayland makes very little sense without CSDs – particularly wl_subsurface and wl_region, but also, to some extent, the xdg_shell parts about window state (maximized, movable, fullscreen, resizing, context menu).
Then we have a long standing topic from the #wayland IRC channel itself:
Please do not argue about server-side vs. client side decorations. It's settled and won't change.
That said, I also think that the agreed-upon approach to CSDs is technically inadequate, and more a counter-reaction to the state of things in X than a weighing of the other possibilities.
Let’s first look at what the decorations chatter is all about. In the screenshot below you can see the top bar, the shadow region and the window border. What is not visible, yet relevant, is the mouse cursor. All of these are possibly client-defined UI components.
The options for conveying this segmentation in Wayland are, practically, either a single unit (a surface) or a complex aggregate that is one of the biggest pains to actually implement: a tree of “subsurfaces”.
Reading through comments on the matter, most seem to focus on the value of the top bar and how much stuff you can shove into it to save vertical space on a floating window manager with widescreen monitors.
Had that been the only thing, I would have cared less, because well, the strategy that will be used to counter this in the various Arcan related projects is a riff on this:
I’ll automatically detect where the top bar is, crop it out and hide it inside a titlebar toggle in the server side defined titlebar, where I have the stuff that I want to be able to do to the client. There is practically nothing GTK, or anyone else for that matter, can do to counter that. In durden, the /target/window/crop and /target/window/titlebar/impostor features allow for that, per window.
UPDATE: This has recently been added in its first, quite rough, inception.
Protocol-wise, it would have been nice if the bar that GTK/GNOME (and, to be fair, Chrome, Firefox, …) prefer in their design language had been added as another possible surface role to xdg-surface, so the segmentation could go a bit smoother and the border and shadow could be left entirely to the compositor – but it is, all in all, a minor thing.
There are, however, other areas where I argue that the choice matters a lot more, so let’s look at those.
If you look at projects such as QubesOS (see also: border color spoofing) and Prio, the border part of the decorations can be used to signal a “compartment” or “security domain”. The server side annotates each client surface with its knowledge about the contents, domain and trust level that the user should associate with that particular client. This part of the equation is invisible to the client, and the client has no way of removing it and substituting its own.
With client side decorations there is, by definition, no contribution from the compositor side of the equation. This means that any client can proxy the look, feel and behaviour of any other.
Even if I have a trusted path of execution where the compositor spawns the specific binary (like Arcan can with a sandboxing chainloader and a policy database), keeping the chain of trust intact, without a visual way to indicate this there is no difference from a compromised process spawning and proxying an instance of the same binary as a man in the middle in order to snoop on keyboard and clipboard input and client output. The problem is not insurmountable, but another design language would need to be invented and trained.
Not that there are many serious attempts at allowing networking for single Wayland clients at the moment, but for networked clients, normal window operations like move and drag-resize now require a full round-trip and a buffer swap to indicate that the operation is possible, since the mouse cursor needs to be changed to reflect the current valid state. Having such operations jitter in their responsiveness is really awkward to work with.
This applies locally as well, should the client’s input processing start to stall, and now the user is faced with the problem of having to wait in order to move a window, or having to adapt to/discover a possible compositor-defined override (meta key + drag).
That said, there are other parts of the protocol that have similar problems, particularly the client being responsible for maintaining the keyboard layout state machine and thus for implementing key repeats.
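To make the key-repeat point concrete: after the compositor sends the repeat rate and initial delay (the wl_keyboard repeat_info event), the client alone has to synthesise every repeat from raw press/release events. A minimal sketch of that state machine – class and method names are illustrative, not any toolkit’s API:

```python
class KeyRepeat:
    """Synthesise key repeats from press/release events, as a Wayland
    client must do itself after receiving wl_keyboard.repeat_info."""

    def __init__(self, rate_hz, delay_ms):
        self.rate_hz = rate_hz    # repeats per second once repeating
        self.delay_ms = delay_ms  # initial delay before the first repeat
        self.held = None          # (keycode, press_time_ms) or None

    def press(self, keycode, time_ms):
        self.held = (keycode, time_ms)

    def release(self, keycode):
        if self.held and self.held[0] == keycode:
            self.held = None

    def repeats_until(self, now_ms):
        """Keycodes for every repeat due between press and now_ms."""
        if not self.held or self.rate_hz == 0:  # rate 0 disables repeat
            return []
        keycode, t0 = self.held
        elapsed = now_ms - t0 - self.delay_ms
        if elapsed < 0:
            return []
        interval = 1000.0 / self.rate_hz
        return [keycode] * (int(elapsed // interval) + 1)
```

Every client (or toolkit) carries a private copy of this logic, and each copy must also cope with focus loss, layout changes and clock skew – state the server already has.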
Performance and Memory Consumption
The obvious part is that all buffers need to be larger than necessary in order to account for the shadow and border regions. This implies that more data needs to be transferred to, and cached on, the GPU side. About the scarcest resource I have on my systems is GPU memory, and when it thrashes, it hurts. You can do borders, and even really fancy shadows, as shaders during composition at a much lower systemic cost.
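A back-of-envelope sketch of that overhead, assuming an ARGB8888 buffer (4 bytes per pixel) and an illustrative 32-pixel shadow/border margin – neither figure is mandated by the protocol:

```python
def csd_overhead_bytes(w, h, margin, bpp=4):
    """Extra bytes paid for padding a w x h content buffer with
    `margin` pixels of shadow/border on every side, double buffered."""
    content = w * h * bpp
    padded = (w + 2 * margin) * (h + 2 * margin) * bpp
    return 2 * (padded - content)  # x2 for double buffering

# A 1280x720 window with a 32px margin costs roughly an extra MiB
# of buffer memory per window:
extra = csd_overhead_bytes(1280, 720, 32)  # 1056768 bytes
```

Multiply that by every mapped window, and it is memory that a compositor-side shader pass would never have allocated at all.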
Everything the client draws has to be transferred in some way, the absolute fastest being one well-aligned buffer that gets sent to the GPU with a handle (descriptor) being returned in kind. This handle gets forwarded to the compositor, which maps it to a texture and uses that texture to draw. The crux here is that there are many ways to draw primitives that use this surface, and the one you choose will have an impact on how expensive it will be for the GPU.
We can see with some toolkits, notably GTK, that fade-in/fade-out effects on state changes, such as the creation of a new surface or a focus shift, now also need to update the entire surface multiple times. While no big deal locally, when using zero-copy handles this stuff will need to go over the network. This makes such clients inherently much less network friendly, when it would make much more sense to have such effects implemented on the remote (server!) side – but that requires control over the decorations.
The shadow region needs to be drawn with alpha blending enabled, or it will be ugly. The vast majority of UI contents can be drawn without any kind of blending, but with client side decorations the compositor is left with no choice. To combat this, there is the wl_region part of the Wayland protocol. Basically, you annotate the regions of a surface that are to be “opaque” so the compositor can perform the lovely dance of slicing the drawing up into degenerate quads and toggling the blend state on and off. My memory may be a bit fuzzy, but I do not recall any compositor that actually bothers with this.
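A sketch of what honouring the opaque region actually implies for a compositor: split the surface into the opaque interior (blending off) and the leftover strips (blending on). Rectangles are (x, y, w, h), and a single opaque rectangle is a simplifying assumption – wl_region allows arbitrary unions, which only makes the dance worse:

```python
def split_for_blending(surface_w, surface_h, opaque):
    """Return (opaque_rect, blended_strips) partitioning the surface."""
    ox, oy, ow, oh = opaque
    blended = []
    if oy > 0:                              # strip above the opaque rect
        blended.append((0, 0, surface_w, oy))
    if oy + oh < surface_h:                 # strip below
        blended.append((0, oy + oh, surface_w, surface_h - (oy + oh)))
    if ox > 0:                              # strip to the left
        blended.append((0, oy, ox, oh))
    if ox + ow < surface_w:                 # strip to the right
        blended.append((ox + ow, oy, surface_w - (ox + ow), oh))
    return opaque, blended

# A 640x480 surface whose central 600x440 content is opaque leaves four
# thin shadow/border strips that still need blending:
opaque, strips = split_for_blending(640, 480, (20, 20, 600, 440))
```

Four extra draw calls and two blend-state toggles per window, per frame – which is presumably why compositors skip it and just blend everything.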
The position here is that the shadow and border parts of CSDs increase the total surface area that has to be considered for all drawing, buffering and layout operations. Consequently, they increase the amount of data that has to be updated, transferred and synched. The main contents, furthermore, get pushed to offsets where they do not align with efficient transfers, making the most expensive operation we have (texture uploads) worse. They also mutate the contents of the buffer to have a mixed origin and type. Because of this, they also adversely affect compression and other quality parameters like filtering, mipmapping and antialiasing.
This is the part many discussion threads on the matter seem to boil down to, so I will not provide much annotation on it. The stance that gets repeated by GTK people is a version of “users do not care, Chrome and friends do this anyhow”. Here you have GTK, Qt, EFL, SDL and Weston in loving harmony, everyone with their own take on colours, icons, buttons and mouse cursors. “Beautiful”.
Suggestions on solving these bring the palm that much closer to the face – things like “oh, but you can have a shared library that implements the rendering”. One should rightfully ask what the point of a protocol is if the response to signs of design flaws is to immediately reach for a sideband implementation, defeating the point of having a protocol in the first place.
Client and Protocol Complexity
It is considerably more complex to write a conforming Wayland client than it is to write an Xorg client. A simple “hello world” – press a key, flip an output colour – is in the range of thousands of lines of code, and the problem of actually knowing how to draw client decorations is open-ended, as you get no hints as to how the user wants things to look or behave.
This means that some clients just ignore it – to some conflict and chagrin. The Wayland SDL and GLFW backends, or clients like Retroarch, do not, at the time of writing this, implement them at all, for instance. The inflammatory thread in this MPV issue also highlights the problem.
The “I need to draw decorations?” problem becomes worse when the contents the client wants to present are in a format that is hard or costly to draw decorations in, because of the rendering pipeline or restrictions on which colour formats can be used – incidentally, the case for games and video playback. This is where the wl_subsurface protocol is needed, and it maps back into the performance perspective.
The tactic is that you split the surface into the “main” area with one buffer format, and draw the decorations in slices using some other. There can be arbitrarily many of these, and they impose a requirement to “wait” for subsurface updates to synch. They are a fun way of making virtually all compositors crash or deadlock, if you know what you are doing.
For operations like drag-resize and transforms, these become painfully complicated to deal with, as you may need to wait for all surfaces in the tree to be “ready” before you can use them again. They can easily consume around 12 file descriptors (also recall: double-buffered state) even if the client is being nice, and with a few tricks it can become much worse.
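Rough accounting of where that figure comes from, assuming one fd per buffer and a tree of one main surface, four decoration slices and a cursor surface – the layout and the one-fd-per-buffer mapping are simplifying assumptions, not something the protocol fixes:

```python
def buffer_fds(surfaces, buffers_per_surface=2):
    """File descriptors held for buffers across a surface tree,
    with double buffering (two buffers per surface) by default."""
    return surfaces * buffers_per_surface

# main content + 4 decoration slices + cursor surface, double buffered:
fds = buffer_fds(1 + 4 + 1)  # 12 fds, before wl_callback et al.
```

And that is the well-behaved case: nothing stops a client from attaching more slices, deeper trees, or extra buffers per surface.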
It is no surprise that GTK, EFL and others agree on the current situation, as they have already done the work – this empowers them at the cost of everyone else. Pundits also typically chime in with something like “everyone is using a toolkit anyway”, to which I say: step out of your filter bubble and widen your sample set; there are plenty of “raw” clients if you know where to look. The natural follow-up from the same crowd is something to the effect of “you shouldn’t write a Wayland client manually yourself, use a toolkit” – which is nothing but a terrible excuse that undermines the point of agreeing on a protocol. When the details get masked inside library code, the complexity is moved around and made less visible, rather than actually reduced.
Adding a dependency to a million-line codebase is a really weird way of making something “simple”.
If the point of Wayland is, in fact, the often claimed “make it simpler”, and the suggested solution turns out demonstrably worse than the dominating one in that regard, a band-aid suggestion of “hide the complexity in a library” is farcical at best.
Real simplicity is observable and permeates all components in a solution, and the reality of Wayland is anything but simple.