This article presents an interpretation of the history surrounding the ability of X clients to interact with X servers running on other machines over a network; of recent arguments that this ability is defunct and broken; and of problems with the feature itself: what it was, what happened along the way, and where things seem to be heading.
The high-level summary of the argument herein is that there is validity to the claims that, to this very day, there is such a thing as network transparency in X. It exists at a higher level than streaming pixel buffers, but has a diminishing degree of practical usability and interest. Its technical underpinnings are fundamentally flawed, dated and criminally inefficient. Alas, similarly dated (VNC/RFB) or perversely complex (RDP) solutions are far from reasonable alternatives.
What are the network features of X?
If you play things strictly, all of X is. That should be the very point of having a client / server protocol and not an API/ABI.
Protocol vs. API/ABI tangent: Communication that travels across hard system barriers needs to consider things like differences in endianness, loss in transit, remote addressing and so on, while the abstract state machine(s) need to account for parameters that are practically invisible locally. Some examples of such parameters would be the big sporadic delays caused by packet corruption and retransmission, a constantly high base latency (100+ms), and buffer back-pressure (clients keep sending new frames and commands exceeding the available bandwidth of the communication channel, accumulating in local buffers, like stepping on a garden hose and watching the bubble grow). The interplay between versions and revisions also tends to matter more in protocol design than in API design, unless you go cheap and reject any client – server version mismatch.
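To make the endianness point concrete, here is a minimal C sketch of the bookkeeping every request crossing the wire needs; the 12-byte message layout is invented for the example and does not correspond to any real X request:

#include <arpa/inet.h> /* htonl, htons */
#include <stdint.h>
#include <string.h>

/* Pack a hypothetical "draw" request as: u16 opcode, u16 pad, u32 x, u32 y.
 * Fields go out in network byte order so a big-endian server and a
 * little-endian client agree on the bits; padding is zeroed explicitly so
 * no uninitialised memory leaks onto the wire. */
static size_t pack_draw_req(uint8_t *dst, uint16_t opcode, uint32_t x, uint32_t y)
{
	uint16_t op = htons(opcode);
	uint32_t nx = htonl(x), ny = htonl(y);
	memcpy(dst + 0, &op, sizeof op);
	memset(dst + 2, 0, 2);
	memcpy(dst + 4, &nx, sizeof nx);
	memcpy(dst + 8, &ny, sizeof ny);
	return 12; /* bytes to hand to the transport */
}

None of this exists for a plain API/ABI call within one machine, where caller and callee trivially share byte order and address space.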
Back to X: The real (and only) deal for X networking is its practical nature: the way things work from a user standpoint. In the days of yore, one could simply chant the following incantation:
DISPLAY=some.ip:port xeyes
Should the gods be willing, its very soul would stare back at you through heavily aliased portals. The only difference from the local version would be a change to the “DISPLAY=:0” form; other than that, the rest was all transparent to the user.
Now, the some.ip:port form assumed you were OK with anyone between you and the endpoint being able to listen in “on the wire”, possibly doing all kinds of nasty stuff with the information in transit. To add insult to injury, pixel buffers were not compressed, so when they became too numerous or too large, the network was anything but happy. The feature was really only ever ‘good’ through the rose-tinted glasses of nostalgia on a local area network; your home, school, or business; certainly not across the internet.
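For a sense of scale: a single uncompressed 1024×768 window at 32 bits per pixel is roughly 3 MB per full update, so a mere ten full updates per second works out to around 250 Mbit/s; enough to saturate the 10 and 100 Mbit Ethernet of the era several times over, and that is before adding a second window.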
The form above also assumes that the X server itself had not been started with the “-nolisten tcp” argument, or that you were using the better option of letting an SSH client configure forwarding, introduce compression and provide otherwise preferential treatment, like disabling Nagle’s algorithm. Even then, you had to be practically fine with the idea that some of your communication could be deduced from side-channel analysis (hint: even your keypresses look very distinct on a packet-size over time plot) and so on. Details like this also put a bit of a dent in the ‘transparent to the user’ idea.
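For reference, the “disabling Nagle’s algorithm” part of that preferential treatment boils down to a single socket option; a minimal sketch, error handling elided:

#include <netinet/in.h>  /* IPPROTO_TCP */
#include <netinet/tcp.h> /* TCP_NODELAY */
#include <sys/socket.h>  /* setsockopt */

/* Send small writes immediately instead of letting the kernel coalesce
 * them (Nagle); trades some bandwidth efficiency for lower latency,
 * the right trade for chatty interactive traffic like X requests. */
static int disable_nagle(int fd)
{
	int one = 1;
	return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}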
Those details notwithstanding, this was a workable scenario for a long time, even for relatively complex clients like the venerable Quake 3. The reason was that even GLX, the X extension for OpenGL, only had local ‘direct rendering’ as an optional thing. But that was about the tipping point on the OpenGL timeline where the distance between locally optimal rendering and remotely optimal rendering became much too great, and the large swath of developers and users in charge largely favoured the locally optimal case for desktop-like workloads.
The big advantage non-local X had over other remote desktop solutions, of which there are far too many, is exactly this part. As far as the pragmatic user was concerned, the idea of transparency (or should it be translucency?) was simply to be able to say “hey you, this program, and only this program on this remote machine, get over here!”.
The principal quality was the relative seamlessness of the entire set of features on a per-window basis, and that, sadly, goes unmatched to this very day. But with every ‘integrated desktop environment’ advancement, the feature grows weaker, and the likelihood of applications being usable like this, partially or even at all, decreases drastically.
What Happened?
An unusably short answer would be: the convergence of many things happened. A slightly longer answer can be found here: X’s network transparency has wound up mostly being a failure. My condensed take is this:
Evolution of accelerated graphics happened, or the ‘Direct Rendering Infrastructure’ (DRI) as it is generationally referenced in the Xorg and Linux ecosystems. Applications started to depend heavily on network-unfriendly IPC systems that were used as a sideband to X rather than in cooperation with it. You wanted sound to go with your application? Sorry. Notification popups going to the wrong machine? Oops, now you need D-Bus! And so on.
This technical development is what one side of the argument is poking fun at when they go ‘X is not network transparent!’, while the other side is quick to retort that they are, in fact, running emacs over X on the network to this very day. The easy answer is to try it for yourself; the mechanisms have not suddenly disappeared, and it should be a short exercise to gain some practical experience. From my own experiments just prior to writing this article, the results varied wildly, from pleasant to painful, depending on how the application and its toolkit were written.
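Assuming you have SSH access to the remote machine, the incantation would be along the lines of:

ssh -X -C user@remote.machine emacs

with -X enabling X11 forwarding and -C enabling compression; the hostname and client are, of course, placeholders.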
Thus far, I have mostly painted a grim portrait, yet there are more interesting sides to this, namely XPRA and X2go. X2go addresses some of the shortcomings in ways that still leverage parts of X, without falling back to the lowest “no way out” common denominator of sending an already composited framebuffer across the wire. It does so by using a custom X server with a different line protocol for external communication, and a carrier for adding in sound, among other things. Try it out! It is pretty neat.
Alas, this approach also falls flat when it comes to accelerated composition past a specific feature set, which can be seen in the compatibility documentation notes. That aside, X2go is still both actively developed and actively used. The activity on mailing lists, IRC and gatherings all act as testament to the relevance of the feature in its current form, from both a user and a developer perspective.
What does the future hold?
So outside of succumbing to the web browser, and possibly bastardised versions like ‘Electron’, as the other springboard, what options are there?
Let’s start with the ‘design by committee’ exercise that is Wayland, and use it as an indicator of things that might become a twisted reality.
From what I could find, there is a total of one good blog post/PoC that, in stark contrast to the rambling fever dreams of most forum threads on the subject, experiments technically with the possibility of being transparent in the sense of “a client connecting/bridged to a remote server”, and not opaque in the sense of “a server compositing and translating n clients to a different protocol”. Particularly note the issues around keyboard handling and descriptor passing. Those are significant, yet still only the tip of a very unpleasant iceberg.
The post itself does a fair job providing notes on some of the problems, and you can discover a few more for yourself if you patch or proxy the Wayland client library implementation to simulate various latencies in the buffer dispatch routine; sprinkle a few “timesleeps” in there and enjoy troubleshooting why clients get disconnected or crash sporadically. It turns out that testing asynchronous, event-driven implementations reliably is really hard, and not enough effort is being put into the Wayland toolkit backends; too bad, since most of the responsibilities have been pushed to exactly those backends in order to claim that the server side is so darn simple.
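As a sketch of the “sprinkle a few timesleeps” experiment: an LD_PRELOAD shim that wraps libwayland-client’s dispatch entry point and injects an arbitrary 100 ms delay (only one of several entry points you would want to cover, but enough to watch clients misbehave):

/* latency.c: gcc -shared -fPIC latency.c -o latency.so -ldl
 * run a client as: LD_PRELOAD=./latency.so some-wayland-client */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>

struct wl_display;

int wl_display_dispatch(struct wl_display *display)
{
	static int (*real)(struct wl_display *);
	if (!real) /* resolve the real libwayland-client symbol once */
		real = (int (*)(struct wl_display *))dlsym(RTLD_NEXT, "wl_display_dispatch");
	usleep(100 * 1000); /* pretend the round-trip just got expensive */
	return real(display);
}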
That is not to say that it cannot be done, of course – the linked blog post showed as much. The issue is that crossing the chasm between (a) the “basic” proxy-server/patched support libraries writing over a socket, even with some video compression, and (b) getting to even the level of X2go, with the aforementioned problems, is a daunting task. Then you would still have to fight the sharp corners around queueing and back-pressure, so that data-device (clipboard) actions do not stall everything; the usability problems from D-Bus dependent features breaking; audio not being paired, synced and resampled to the video it is tied to; and so on.
The reason I bring this up is that what will eventually happen is alluded to in the Wayland FAQ:
This doesn’t mean that remote rendering won’t be possible with Wayland, it just means that you will have to put a remote rendering server on top of Wayland. One such server could be the X.org server, but other options include an RDP server, a VNC server or somebody could even invent their own new remote rendering model.
The dumbest thing that can happen is that people take this for the marketing gospel it is, and actually embed VNC on the compositor side. I tried this out of sheer folly back in ~2013, and the experience was most unpleasant.
RFB, the underlying protocol in ‘VNC’, is seriously terrible, even if you factor in the many extensions, proprietary as well as public. Making fun of X for having a dated view on graphics and in the next breath considering VNC has quite an air of irony to it. RFB’s qualities are the inertia of clients being available on nearly every platform, and that the public part of the protocol (RFC 6143) is documented in such a coherent and beautiful way that it puts the soup of scattered XML files and TODO-sprinkled PDFs that is “modern” Wayland forever in the corner.
The counterpoint to the inertia quality is that RFB implementations have subtle incompatibilities with each other, so you do not know which features can be relied on, when they can be relied on, or to what extent; that is assuming the connection does not simply terminate at the handshake, which was, for example, the case for many years when connecting to Apple’s VNC server with a client not written by Apple.
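The part everyone does agree on is tiny: per RFC 6143, the exchange opens with a fixed 12-byte ProtocolVersion message. A minimal client side of just that first step could look like the sketch below; everything after it, security types and encodings, is where implementations start to drift apart:

#include <string.h>
#include <unistd.h>

/* RFC 6143, section 7.1.1: the server sends "RFB 003.008\n" (12 bytes),
 * the client answers with the highest version it supports. */
static int rfb_version_handshake(int fd)
{
	char server_ver[13] = {0};
	const char client_ver[] = "RFB 003.008\n";

	if (read(fd, server_ver, 12) != 12)
		return -1;
	if (strncmp(server_ver, "RFB ", 4) != 0)
		return -1; /* endpoint is not speaking RFB at all */
	return write(fd, client_ver, 12) == 12 ? 0 : -1;
}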
The second dumbest thing is to use RDP. It has features. Lots of them. Even a printer server, a USB server and file system mount translation. Heck, all the things that Xorg was made fun of for having are in there, and then some. The reverse-engineered implementation of this proprietary Microsoft monstrosity, FreeRDP, is about the code size of the actually used parts of Xorg, give or take some dependencies. In C. In network-facing code. See where this is heading? Embed that straight into your privileged Wayland compositor process, and I will just sit here in bitter silence and be annoyed by the fireworks.
The least bad available technology to try and get in there would be the somewhat forgotten SPICE project, which is currently ‘wasted’ as a way of integrating and interacting with KVM/Qemu. In many ways, with the local buffer passing modifications, it makes a reasonably apt local display server API as well.
Rounding things off: the abstract point of the ‘VNC’ argument is, of course, the core concept of treating client buffers solely as opaque texture bitmaps in relation to an ordered stream of input and display events; not the underlying protocol as such.
The core of that argument is that networked ‘vector’ drawing is defunct, dead or dying. The problem with the argument is that it is trivially shown to be false, well illustrated by the web browser, which shows some of the potential and public interest; we are not just streaming pixel buffers, and for good reason. The argument is only partially right in the X case, as X2go shows that there is validity to proper segmentation of the buffers, so that the networking layer can optimise and choose compression, caching and other transfer parameters based on the actual non-composited contents.
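A toy illustration of what that buys you: with access to the individual, non-composited buffers, the transport can pick parameters per content class rather than one codec for the final composited frame. The classes and codec names here are invented for the example:

/* Hypothetical content classification on the sending side of the link. */
enum content_class { CLASS_TEXT_UI, CLASS_PHOTO, CLASS_VIDEO };

static const char *pick_codec(enum content_class c)
{
	switch (c) {
	case CLASS_TEXT_UI: return "lossless-rle"; /* sharp edges, few colours */
	case CLASS_PHOTO:   return "lossy-dct";    /* smooth gradients tolerate loss */
	case CLASS_VIDEO:   return "h264";         /* exploit temporal redundancy */
	}
	return "raw"; /* unknown contents: do no harm */
}

Composite first, and every window collapses into the same undifferentiated pixels, taking the choice away.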
If you made it this far and want to punish yourself extra – visit or revisit this forum thread and contrast it with this article.
Is there any good set of specifications for a remote transparent graphics (multi-media, really) terminal protocol? Don’t complain, write it down!
Not really, no. In terms of the quality and accuracy of the documentation, the referenced SPICE is probably as good as it gets, and it is still quite limited. What is being worked on in the scope of Arcan (roadmap target for 0.6, teasers in the net-056 branch) is a bit more ambitious, but documenting those details is the final quality assurance stage of that target, not something that will happen in the near future.
Thank you for the interesting article.
Would you share your thoughts on how things will have to be?
Maybe you’ll shed some light on the following issues:
1. The Arcan core doesn’t care about network transparency. There may be different implementations of frameservers(?) which will do the network transfers and finally put the picture (raster) into an Arcan client buffer via shmif.
2. What about a chain of frameservers? For example, I have a direct native Arcan library/toolkit (== frameserver?) DrawKit which provides drawing of two lines in the form of an X via one function, drawX(). What if I want to provide network transparency for this library? I must create some specific program, DrawKitNet, linked to DrawKit, which will listen to the network, decode commands and call drawX(). Does Arcan care about the existence of DrawKitNet?
3. About VR. I didn’t see in your VR experiments that you introduced a new type of Arcan client buffer; maybe I’ve missed something. As I see it, there must be a new type of client buffer, because things go from raster to volume. If there is a client buffer of a Volume type in Arcan, then we can say that in the case of VR/3D scenes the logic remains the same, and Arcan will handle network transparency for VR through additional frameservers(?) which will do the network transfers and finally put things into an Arcan client buffer of type ‘Volume’ via shmif.
1/2. There is a branch (net-056) where a semi-workable netpipe prototype is evolving; see src/tools/netproxy there. It works by exposing itself as a headless, simplified shmif-server. It can currently do video and (most) events for one segment.
Recall that, from crash recovery, we can tell a client where to go if its current server disappears, and it will rebuild itself there. So dynamic handover works, with the WM instructing the client (in durden: target/video/advanced/migrate) to leave and go somewhere else, but to come back if something goes wrong.
Then there’s more at https://github.com/letoram/arcan/wiki/Networking
3. VR from a client perspective is so far focused on input, but all the other parts, except for the data format(s) themselves, are prepared for, and it is the same code path that is used to allow some clients to share and modify display color LUTs (see shmif_sub and the vrbridge tool source). The attack plan is roughly to refactor the current mesh handling out of the core (arcan_3dbase.c) and move it into afsrv_decode, where added dependencies and parsing do much less damage and will be sandboxed. Then use a standard mesh packing format for models (like the one needed by glTF anyhow), and some voxel representation. When the current test cases work and so on, there is little left to let other clients provide 3D output. The ‘pixel buffer’ part of the segment will then be used for textures.
It seems that I got it wrong again.
Given “It works by exposing itself as a headless simplified shmif-server” and “..gets rendered off-screen and forwarded to the ‘encode’ frameserver which implements VNC-style compression and serving”, my assumption was wrong. Arcan will ‘embed’ network transparency. That slightly disappoints me.
I still think you will have to cover Arcan with an API (a content creation API) for application developers, and from that point of view things may look different. I also think that issues like security constraints and network transparency must be external to the Arcan core.
I think it is just language that confuses us. What do you mean by ‘embed’ here? It is a separate, replaceable, optional tool. It acts as both a shmif-api client and a shmif-api server, but one that doesn’t need GPU or input device access and so on, as it will just proxy the “server” side. From the perspective of application developers, nothing is needed; this is entirely transparent to them.
I fully agree that our non-native English is a bit/very poor.
But “exposing itself as a headless simplified shmif-server” gave no chance to interpret this in a wrong way. You are going to transfer the various Arcan buffer types from shmif-local to shmif-remote. But I really think that this is not the point where network transparency should be introduced.
As you saw, I’m looking for an Arcan application API layer (toolkits). I also believe that shmif will be fully covered by this API (toolkits). In that case there is no need to require every native Arcan application to connect to shmif; they do not need ARCAN_CONNPATH. They link to a toolkit, and it is up to the toolkit to provide network transparency in the best way.
After the review, it is surprising to see that you announce “ARCAN_CONNPATH (comparable to X DISPLAY=:number)” for network transparency. My thought after your article is that network transparency is a complex problem, and that a design with a universal solution for network transparency built in is a mistake. You are going to provide a universal solution for network transfer of all Arcan buffer types, which will ‘automatically’ make all native Arcan applications network transparent. That is strange at this stage, as there is no ‘heavy’ Arcan application to prove the concept on, at least.
Again, I think that designing the architecture and infrastructure apart from the real applications and application APIs can lead to non-optimal decisions.
arcan-shmif and arcan-shmif-tui are the API; more abstract / advanced rendering (say, Qt or GTK style) would render using that as a ‘backend’. The shmif client connects to arcan (the shmif server), found via CONNPATH. How else would they do it?
There are certainly ‘heavy’ clients, enough to cover the gamut (Xarcan, afsrv_decode, afsrv_terminal, afsrv_game, qemu-arcan, arcan-wayland, arcan-lwa): full screen VM-style “browser-class” applications (complex input patterns), 2D/3D games (via the libretro backend; high bandwidth, latency sensitive) and video playback (high bandwidth, good buffer case) – what category would you say is missing? All except the arcan-wayland bridge work fine (better than VNC or X-fwd) networked on my LAN right now.
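To make the “nothing is needed from the application developers” point concrete, a minimal shmif client (sketched from memory of the API, so treat exact names and signatures as approximate) is oblivious to whether the connection point resolved through ARCAN_CONNPATH is the real server or the netproxy tool:

#include <arcan_shmif.h>

int main(void)
{
	/* connection point comes from ARCAN_CONNPATH; local server or
	 * network proxy, the client neither knows nor cares */
	struct arg_arr *args;
	struct arcan_shmif_cont cont =
		arcan_shmif_open(SEGID_APPLICATION, SHMIF_ACQUIRE_FATALFAIL, &args);

	for (size_t y = 0; y < cont.h; y++) /* fill the segment with green */
		for (size_t x = 0; x < cont.w; x++)
			cont.vidp[y * cont.pitch + x] = SHMIF_RGBA(0, 255, 0, 255);

	arcan_shmif_signal(&cont, SHMIF_SIGVID); /* hand the frame over */
	arcan_shmif_drop(&cont);
	return 0;
}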
“How else would they do it?” That is the really important question, and it should be carefully analyzed. It comes up when the issue of an API/toolkit layer arises.
Is Arcan here to drop the megabytes of shims and helpers over a poor/outdated API? It seems to me that the Arcan design can suggest some flexible and efficient answer.
Again, “How else would they do it?”. What follows is something of a brainstorm.
First of all, let’s consider some cases:
1. An Excel-like application built with a toolkit, ArcanGTK.
2. ProteinViewer: an application that renders a protein molecule and allows the user to examine it (rotate/zoom).
Is there one ‘network transparency solution’ that is the most efficient for both? ProteinViewer will do heavy rendering of a 3D image, by some logic, based on a relatively small dataset of protein characteristics. A VNC style is bad because the server which calculates the characteristics must also do the heavy rendering for 1, 2, … 20 users. The most efficient way is to put the network border at an early stage and just transfer the protein characteristics to the ProteinRenderer.
Do you think the GTK and Qt people would be unable to add efficient network transparency if they did it from scratch? Just by profiling the application and uniting the most common function sequences into one network command.
I think that the Arcan core must be bounded by buffer transfer via shmif. But in the Arcan infrastructure there must be renderers: libraries that do hardware accelerated rendering. So there are two stages:
– buffer production using renderers;
– outputting the buffer to arcan via shmif.
So there would be two toolkits, ArcanGTK and ProteinRenderer. The Excel-like application will use ArcanGTK, and ProteinViewer will use ProteinRenderer. If ProteinRenderer (like any Arcan toolkit) wants to be ‘network transparent’, it must… (here, carefully analyzed requirements)
I think the Arcan infrastructure must have some library with helpers for typical marshalling/command producing/parsing, and a shell application, ‘onremote’ (a sketch of the marshalling part follows after the list below).
So putting protein ABCD4533 on the user’s screen on station12 may be achieved by the command:
onremote station12 proteinviewer ABCD4533
onremote will (these are some fast, crazy thoughts):
– establish a connection with station12 via SCTP;
– prepare environment;
– link proteinviewer with the network version of the ProteinRenderer library;
– execute proteinviewer with argument ABCD4533.
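A sketch of what the marshalling helper from such a library could look like, reusing my hypothetical drawX()/DrawKitNet names from the earlier comment; one high-level command costs a dozen bytes on the wire instead of a rendered frame:

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

enum { OP_DRAW_X = 1 }; /* hypothetical DrawKitNet opcode */

/* Encode drawX(x, y, size) as: u8 opcode, then three u32s in network
 * byte order. The receiving DrawKitNet decodes and calls the local drawX(). */
static size_t marshal_drawX(uint8_t *buf, uint32_t x, uint32_t y, uint32_t size)
{
	uint32_t args[3] = { htonl(x), htonl(y), htonl(size) };
	buf[0] = OP_DRAW_X;
	memcpy(buf + 1, args, sizeof args);
	return 1 + sizeof args;
}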
OK, I think I know how to structure an explanation that can take this forward – but the wordpress/browser comment section format is a bit hard to work with. Please send me an e-mail if that is OK with you.
Really great work!
As it matures, with some preset application “use templates”, I can see it enabling people to focus on productivity tasks without being frustrated by mundane interface limitations.
For that, though, I think a cultural shift in expectations about what makes a user-friendly desktop has to occur. It is hard to explain the benefits of tearing up everything we *think* we know about how to write and format a document – blind to the fact that we shouldn’t have to concern ourselves with the format beyond specifying the use case. That requires giving up some of our individual quirks / choices to gain something that is more efficient and no doubt more tunable for people with disabilities / accessibility constraints.
You may be familiar with the approach taken in the print industry with the use of TeX / LaTeX templates. The trade-off of losing some custom options (like text flowing around a path) is worthwhile when you are assured that pagination and orphaned text can be managed without introducing any more human effort or errors.
Thanks for the praise. There are many details to unpack here, so keep in mind that I am working from a principled system design of sorts that will be explained soon enough. The high-level article is mostly finished, but the timing is not right.
Looking at the strategies for encouraging a cultural shift, I see three relevant ones:
1. The “soapbox”: Go to the speaker’s corner and claw at the public mentality with flaming passion in hopes of starting an avalanche, vive la revolution!
2. The “boiled frog”: This is one you can find elsewhere in large numbers. Spit-polish “the old”, sell it as “the new”. Attach to established brands and trends and market yourself as better, to try and attract enthusiasts. Live off the sunk-cost and choice-supportive-bias cocktail and grow on appeal to popularity. Now you can adjust the temperature and shift the narrative towards the new testament.
3. The “trail of breadcrumbs”: Realise there is a niche for everything, and that the side effects of operating within such a niche can be more rewarding at less exposure.
I don’t exactly thrive on attention, so 1 is out. I find 2 appalling; the ends do not come close to justifying the means. Alas, a world of keeping scores of +1s and “likes” biases towards those two. They are also limiting in other ways: you can’t stray too far from the initial pitch or there’ll be forks, and established territories need to be defended, so more conflict.
So it comes as no surprise that it’s door number 3 that I am going with here; “discover”, not “search”, to be even more vague. The long-winded thought on how a cultural shift can be encouraged is that the shift has already happened in the minds of some people, who merely lack the ability to act on the idea. Appeal to that demographic and err on the side of caution.
As to how a different interaction model should look more precisely, I think of the ‘app’ as a single prepackaged data transform (this is the pipes and filters of yore, but that structure turned out much too simple and branched on complex input arguments), with the DE linking the transforms together using individual interaction presets (disabilities, muscle memory, …). There are other dimensions, like observability and aggregation models (HTML became a popular one, but there are, imo, more interesting angles here).
Resisting feature creep must take a lot of discipline.
– the possibilities / directions you could take the base project in must at times seem overwhelming. Glad to see you are prioritizing security and stability issues.
My only hope is that, by the time it reaches 1.0, you can somehow provide points of parity with the ordinary GUI desktop paradigm.
– it would be a pity for your innovation to only get used by a minority of boffins.
Beyond coping with brief twitter-like markup commands
– regular users are bound to be put off by terminal configuration / interaction.
I look forward to seeing a youtube demo of 0.6 🙂
Artificial constraints help. Durden has a lot of feature creep, but in a way where features can be removed by deleting files, almost at random, and the project should still work; many of the features are there to test the design, not in expectation of much real use.
One trick to avoid creep in this context is the whole “scripting layer first” approach, with engine developments mostly constrained by the roadmap. If an interesting feature can be solved at the scripting layer, it gets solved there. If the solution gets clean enough, the script is added as a builtin for the next iteration.
Actually making changes to the exposed script functions is a slow process: writing documentation, tests, deploying a new release so it can be used, thinking about backward compatibility, … Over time there have been fewer and fewer changes to the API, mainly for this reason.
As for points of parity with the ordinary GUI desktop: it’s, like, 95% there, just not presented / packaged to be available and accessible as such by default. It’s a pressure valve that can be released, but that would increase the attention and lower the signal-to-noise ratio.
Development ethos.
You mentioned choosing between three common doors.
But what if you could create your own door
– where you were able to pick and choose the best aspects from each option
– what would your door look like?
Would you inevitably be pulled towards the middle door – typified by the cathedral –
since sustainable development needs a balance of stability and innovation?
I think your door explanation is in essence a sociopolitical preference.
Extending the concept to 5 doors, outlooks…
Door A – politically left-wing / Marxist ethos characteristic of “the hermitage”
– niche development, often nostalgic, typically stagnates / gets abandoned,
– (e.g. GNU / FSF / Copyleft / GPL, Free ‘Libre’ software)
– is this like your door1 “soapbox”?
Door B – politically center-left / social-democrat ethos characteristic of “the market bazaar”
– typically free (as in beer) or shareware “unpolished but innovative”
– (e.g. Linux Distributions / LGPL Open software)
– innovation created here often gets ported to Door C and
exploited by capitalists in Door D
– is this like your description of door 3?
Door C – politically centered ethos characteristic of “the cathedral pulpit”
– often free as in beer but non-libre
– (e.g. BSD / X.org / MIT & BSD License), center/independent
– innovation created here in Door C often gets shared / ported / adapted / extended.
Door D – center-right / republican, conservative ethos characteristic of ‘the super-market’
– usually very polished, non-free / patent encumbered
– (e.g. Unix / Microsoft / Amazon / Google / Apple / DRM )
Door E – politically right-wing ‘guarded / establishment’ ethos characteristic of ‘the castle’
– (e.g. IBM / Proprietary, Closed-source)
I personally feel conflicted. I’d traditionally go through Door B, but with growing disillusion about the mainstream hijacking of GNU/Linux development (systemd etc.), I am finding a safe haven in the stability provided by Door C.
I am no BSD-fanboy, but I can appreciate the structured approach even if I don’t necessarily like strict hierarchy.
So the choice was framed more for marketing/exposure/publicity, the larger community, project direction, business etc. sides were omitted — it’s a very big topic.
One tactic I didn’t bring up, for instance, was the TempleOS “shock and awe” one. Perhaps more coincidence or nature than deliberate in that case, but the project gathered much more attention than any of mine, or others that deserve to be considered, like Genode (particularly its display system). But was the attention useful or harmful?
I think the market powers have figured out how to take advantage of open source efforts without contributing, regardless of what legal dance we do, so focusing on that feels about as futile as trying to DRM something against piracy. You buy some time, perhaps it’s enough to give you an edge, perhaps not.
I am more in favour of a BSD approach, and then having some business sense as to how collaborations are set up, who gets to influence project direction and why, configuring for synergies, etc.
Hello! I just learned of Arcan & thought I ought to see if its goals are similar to mine, but as a longtime fan of remote operation, preferably graphical, I had to check this post out first. I got on well enough with remote X in the first few years of the century, but it did get a bit slow when I used a heavily pixmapped Gtk+ theme. (This was on a 10/100 LAN only.) Looking up web pages was also “pushing it”. 🙂 But after a year or two, confusing authentication requirements killed it for me.
But remote X was never my ideal system. I wanted to keep the same stuff running as I moved from my desk to my kitchen to my bed, the latter especially when I wasn’t feeling too well, which was all the time. Thus, VNC was my choice, and yes, I’ve had lots of compatibility problems and some performance problems. Once, when I ran it on IP/IEEE 1394, it even got keys out of order. This never happened on Ethernet. (That reminds me, a few years ago, I heard Wayland over X got keys out of order even over Ethernet, and it was considered unfixable because Wayland leaves the details of input up to the apps. I’m so not interested in Wayland.) My compatibility “solution” was generally to run a TightVNC or X11VNC server. Most clients can cope with those, sometimes with settings tweaks of course. I hated configuring them though. I didn’t have those options on Plan 9 which has its own server and client, but again they were compatible enough for my limited needs at the time: Plan 9 ↔ Plan 9 ↔ Linux/X11, and all on a LAN.
Plan 9 has its own remote display. In fact, it does nearly everything over virtual files with the expectation that files will be exported over the network. (Its kernel can even be described as a multiplexer for its remote filesystem protocol.) I rejected it for the same reasons I rejected X11: lack of display persistence and confusing security. In Plan 9’s case, the security documentation is intentionally wrong and you have to read the code, which I found impenetrable. (Other people love reading Plan 9’s code. *shrug*) I may be wrong about the persistence issue. The aan command (always available network) allows reconnection of filesystems. My trouble was, by the time aan was fixed, I was overwhelmingly busy and didn’t have time to set it up. Eventually, I stopped using Plan 9 for other reasons.
There’s a curious thing about programs which worked well over remote-X, cached VNC, and likely Plan 9’s remote display too. They don’t work smoothly with modern display technology. From my perspective, I sometimes think display technology has regressed rather than progressed. 😉 It’s not just 2D; some 3D games from ’06 almost to the present used a particularly cranky and dull way of rendering water, in contrast to the beautiful water one of those very games had in ’05. But that’s enough grumbling.
I forgot to mention SPICE. It’s good to know it’s a better option. If my OS plans ever get off the ground, I should look into it.