Arcan “Monthly”, September Edition

Revising the approach to dissemination slightly, we will try out a monthly (or bi-monthly, if there are not enough relevant changes) update on the project and sub-projects.

For this round, there’s a new tagged Arcan (i.e. the Display Server) version (0.5.1) and a new tagged Durden (i.e. the example “Desktop Environment”) version (0.2). Although some new features can’t be recorded with the setup I have here, the following demo video covers some of the major changes:

I did not have the opportunity to record voice-overs this time around, but here are some rough notes on what’s happening.

1: Autolayouter

The autolayouter is an example of a complex drop-in-able tool script that adds additional, optional features to Durden. It can be activated per workspace and takes control over the tiling layout mode, with the idea of removing the need for manual resizing, reassignment and so on. It divides the screen into three distinct columns, with a configurable ratio between the focus area in the middle and the two side columns. New windows spawn defocused in a side column, spaced and sized evenly, and you either click a window or use the new target/window/swap menu path to swap it with the focus area.
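A minimal sketch of the column arithmetic, assuming a hypothetical column_widths() helper (the names are illustrative, not the tool’s actual internals):

local function column_widths(ws_width, focus_ratio)
  -- center focus column plus two evenly sized side columns
  local focus_w = math.floor(ws_width * focus_ratio)
  local side_w = math.floor((ws_width - focus_w) * 0.5)
  return side_w, focus_w, side_w
end

-- e.g. a 1920 px wide workspace with the focus area taking 50%:
-- prints 480, 960, 480
print(column_widths(1920, 0.5))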

It can operate in two different modes, non-scaled and scaled. The non-scaled version acts like any normal tiling resize. The scaled version ‘lies’ to all the clients, saying that they have the properties of the focus area. This means the side columns get ‘live previews’ that can be swapped in instantly, without any resize negotiation taking place, reducing the number of costly resize operations.
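A sketch of how the ‘lie’ could look in a Durden-style script, where target_displayhint (see the Lua API changes below) carries the fake dimensions — wnd.external, wnd.canvas and the size variables are assumed names:

for _, wnd in ipairs(side_column_windows) do
  -- hint focus-area dimensions so the client keeps rendering at full size
  target_displayhint(wnd.external, focus_w, focus_h)
  -- only scale the composited result down for the side-column preview
  resize_image(wnd.canvas, side_w, side_h)
end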

You also see a ‘quake style’ drop-down terminal being used. This is another drop-in tool script, best bound to a keybinding. Its primary use is when you need a persistent terminal with a more restricted input path (keybindings etc. are disabled and there is no activated scripting path to inject input) that works outside the normal desktop environment. In some ways safer than having a sudo terminal around somewhere…

2: Model Window

This is another example drop-in tool script, ported from the old AWB demo video (the Amiga-desktop-meets-BeOS demo from ~2013). It simply loads a 3D model, binds it to a window and allows you to map the contents of another window onto the display part of the 3D model.

There’s clearly not much work put into the actual rendering here, and the model format itself is dated and not particularly well thought out, but it serves to illustrate a few codepaths that are prerequisites for more serious 3D- and VR-related user interfaces – offscreen render-to-texture of a user-controlled view- and perspective-transform with content from a third party process, and working I/O routing from the model space back to that process.
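The core of that mapping path is small; a rough sketch with the Arcan Lua 3D calls (the model file name, mesh layout and client_vid are placeholders):

local model = new_3dmodel()
add_3dmesh(model, "display.ctm", 1)    -- the ‘display’ part of the model
image_sharestorage(client_vid, model)  -- map the client’s contents as its texture
show_image(model)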

3: Region-OCR to Clipboard

This is an addition to the encode frameserver (assuming the tesseract libraries are present) and re-uses the same code paths as display region monitoring, recording and sharing. The selected region gets snapshotted and sent as a new input segment to the encode frameserver, which runs it through the OCR engine and sends any results back as a clipboard-style message.
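A conceptual sketch of the scripting side — the "protocol=ocr" argument string and the clipboard hook are assumptions, not verified against the 0.5.1 tree:

-- hand a region vid to the encode frameserver with OCR requested,
-- results come back as message events on the callback
define_recordtarget(alloc_surface(region_w, region_h), "", "protocol=ocr",
  {region_vid}, {}, RENDERTARGET_DETACH, RENDERTARGET_SCALE, -1,
  function(source, status)
    if status.kind == "message" then
      clipboard_add(source, status.message) -- hypothetical clipboard hook
    end
  end)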

4: Display Server Crash Recovery

We can already recover from errors in the scripts by having fallback applications that adopt external connections and continue from where they left off. A crash in the arcan process itself, however, would still mean that sessions were lost.

The new addition is that if the connection is terminated due to a parent process crash, external connections keep their state and try to migrate to a new connection point. This can be the same one they used before, or a different one. This feature is thus an important step toward letting connections switch display servers, in order to migrate between local and networked operation, or as a means of load balancing.
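On the Lua side, the new (experimental) target_devicehint function listed in the changelog below is the entry point for forcing such a migration; the argument form here is an assumption:

-- point an external client at a fallback connection point that it should
-- migrate to if (or when) the current connection dies
target_devicehint(client_vid, "durden_fallback")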

5: Path- Activated Cheatsheets

The menu-path activated widgets attached to the global and target menu screens were already in place in the last version, but as a primer for the new feature, we’ll show them again quickly. The idea is to have pluggable, but optional, dynamic information or configuration tools integrated into the normal workflow.

What is new this time is support for target window identity activation. Any external process has a fixed archetype, a static identifier, a dynamic identifier and a user-definable tag. The dynamic identifier was previously just used to update titlebar text, but can now also be used as an activation path for a widget.

To exemplify this, a cheatsheet widget was created that shows a different cheatsheet based on target identity. The actual sheets are simply text files with a regex on the first line and empty lines between groups. The widget is set to activate on the root level of the target menu.
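A minimal parser sketch for that sheet format (plain Lua, hypothetical helper name):

local function load_sheet(path)
  local file = io.open(path, "r")
  if not file then return end
  local pattern = file:read("*l") -- first line: the matching regex
  local groups, cur = {}, {}
  for line in file:lines() do
    if line == "" then -- an empty line closes the current group
      if #cur > 0 then table.insert(groups, cur); cur = {} end
    else
      table.insert(cur, line)
    end
  end
  if #cur > 0 then table.insert(groups, cur) end
  file:close()
  return pattern, groups
end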

The normal OSC command for updating the window title is used to update the target identity that selects the sheet. Vim can be set to update it with the filename of the current file, and the shell can be set up to change the identity to the last executed command, as shown in the video when triggering the lldb cheatsheet.
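For reference, the sequence itself is tiny; a Lua one-liner equivalent to what a shell prompt hook would emit (ESC ] 0 ; text BEL):

io.write("\027]0;lldb\007") -- sets the title, and thereby the identity, to ‘lldb’
io.flush()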

6: Connection- and Spawn-Rate Limiting

This is another safety feature, letting you recover from the possible denial-of-service that a ‘connection bomb’ or ‘subwindow-spawn-bomb’ can inflict on your desktop session. In short, it’s a way to recover from something bad like:

while true; do connect_terminal & done

which has a tendency to crash, live-lock at 100% or just stall some desktop environments. Here we add the option to cap the number of external connections, or to only allow a certain number of connections within a specified time slice.
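An illustrative sliding-window limiter in Lua — not Durden’s actual code, just the shape of the idea:

local stamps, cap, window = {}, 10, 5 -- at most 10 connections per 5 seconds

local function allow_connection(now)
  local live = {}
  for _, t in ipairs(stamps) do -- drop timestamps outside the window
    if now - t < window then table.insert(live, t) end
  end
  stamps = live
  if #stamps >= cap then return false end -- reject: budget exhausted
  table.insert(stamps, now)
  return true
end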

7: Dedicated Fullscreen

This feature is still slightly immature; it looks like normal fullscreen but comes with a few caveats. One is that we circumvent normal composition, so post-processing effects, shaders etc. stop working.

The benefit is that we reduce the amount of bandwidth required. The more important part is what this feature will be used for in the near future: prioritizing bandwidth, latency and throughput for a specified target.

8: QEmu/SDL1.2/SDL2

As part of slowly starting to accommodate 3rd party producers/consumers, there is now an Arcan QEmu display driver (maintained in a separate git repository) that is at the point where single-display video and keyboard/mouse input are working.

The hacky ‘SDL1.2’ preload library has been updated somewhat to work better on systems with no X server available (and there’s an xlib preload library to work around the parasitic dependencies many applications have on glX-related functions, but it’s more a cute thing than a serious feature).

There is also an SDL2 driver (maintained in a separate git repository) that supports audio/video/input right now, but with quite a lot of stability work and quirk-features (clipboard, file drag-and-drop, multi-window management) still missing.

Condensed Changelog:

Arcan – 0.5.1 Tagged

In addition to the normal round of bug fixes, this version introduces the following major changes:

  • Encode frameserver: OCR support added (if built with tesseract support)
  • Free/DragonflyBSD input layer [Experimental]: If the stars align, your hardware combination works and you have a very recent version of Free- or DragonflyBSD (10.3+, 4.4+), it should now be possible to run durden etc. using the egl-dri backend from the console. Some notes on setup and use can be found in the wiki, as there are a few caveats to sort out.
  • Terminal: added support for some mouse protocols, OSC title command, bracket paste and individual palette overrides.
  • Shmif [Experimental]: Migration support – a shmif connection can now migrate to a different connection point or server, based on an external request or a monitored event (connection dropped due to a server crash). This complements the previous safety feature of appl adoption on Lua-VM script errors: external connections can transparently reconnect or migrate to another server, either upon request or through adoption when the connection drops in a server crash. Combined with an upcoming networking proxy, this will also be used for re-attachable network transparency.
  • Evdev input: (multi) touch- fixes
  • Shmif-ext: Shmif now builds two libraries (if your build configuration enables ARCAN_LWA with the egl-dri VIDEO_PLATFORM), where the second library contains the helper code for setting up accelerated buffer passing that was previously part of the main platform. This will eventually swallow some of the text-based UI code from the terminal. The patched SDL2 build mentioned above requires this lib, and arcan_lwa and the game frameserver (with 3D enabled) have been refactored to use it.

Lua API Changes:

  • target_displayhint : added PRIMARY flag to specify synch-group membership
  • rendertarget_forceupdate : can now change the update rate after creation
  • new function: rendertarget_vids – use to enumerate primary attached vids
  • set_context_attachment : can now be used to query default attachment
  • system_collapse : added optional argument to disable frameserver-vid adoption
  • new function: target_devicehint – (experimental) can be used to force connection migration, send render-node descriptor or inform of lost GPU access
  • new function: video_displaygamma – get or set the gamma ramps for a display
  • target_seek : added argument to specify seek domain
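
A hedged usage sketch for two of the additions (the exact argument forms may differ from the final documentation):

-- enumerate the vids attached to the primary rendertarget,
-- then change the update rate of an offscreen rendertarget
for _, vid in ipairs(rendertarget_vids()) do
  print("attached:", vid)
end
rendertarget_forceupdate(offscreen_rtgt, 10) -- assumed: update every 10 ticks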


Durden – 0.2 Tagged

  • New Tools: 3D Model Viewer, Autolayouter, Drop-Down Terminal
  • Dedicated fullscreen mode where a consumer is directly mapped to the output device without going through compositing. More engine work is needed for this to be minimal overhead/minimal latency though (part of 0.5.2 work).
  • Double-tap meta1 or meta2 to toggle “raw” window input lock/release.
  • Added display-region selection to clipboard OCR.
  • [Accessibility] Added support for sticky meta keys.
  • Consolidated most device profiles into the devmaps folder and its subfolders.
  • Added a ‘slotted grab’ that always forwards game-device input management to a separate window, meaning that you can have other windows focused and still play games.
  • Multiple resize performance issues squashed.
  • Locked input routing for mouse devices should work better now.
  • Basic trackpad/touch display/tablet input classifiers, see devmaps/touch.
  • Format-string controlled titlebar contents
  • External connection- and window-rate limiting
  • Statusbar is now movable top/bottom and the default is top so that those trying things out using the SDL backend won’t be frightened when they are met with a black screen.
  • Target-identity triggered cheatsheets
  • Button release can now be bound to a menu path


Senseye

The Senseye subproject is mainly undergoing refactoring (in a separate branch), changing all the UI code to use a subset of the Durden codebase, but with a somewhat more rigid window management model.

This UI refactoring, along with Keystone-based assembly code generation and live injection, will comprise the next release, although that is not a strong priority at the moment.

Upcoming Development

In addition to further refining the 3rd party compatibility targets, the following (bigger) changes are expected for the next (1-2) releases:

  • LED driver backend rework (LED controllers, backlight, normal status LEDs and more advanced keyboards)
  • Text-to-Speech support
  • LWA bind subsegment to rendertarget
  • GPU(1) <-> GPU(2) migration, Multi-GPU support
  • Vulkan Graphics Backend
  • On-Drag bindable Mouse cursor regions
  • More UI tools: On-Screen Keyboard, Dock, Desktop Icons

Some Questions & Answers

A few days have gone by since the project was presented, and while I am not very active on the forums and other places where the project has been discussed, I have seen some questions and received some directed ones that I think should be answered in public view.

1. If I build and install Arcan, what can I do with it?
To just try things out and play with it, you can for starters build it with SDL as the video platform and run it from X or OSX. It won’t be as fast or have as many features as a more native platform like egl-dri, but it is enough to try it out and play around. A few brave souls have started packaging, so that will also help soon. The main application you want to try with it is probably the desktop environment, durden. With it, you have access to the terminal emulator, libretro cores for games, a video player and a VNC client. There is a work-in-progress QEmu integration git and soon an SDL2 backend. If you are adventurous, it is also possible to build with -DDISABLE_HIJACK=OFF, run with LD_PRELOAD=/path/to/ /my/sdl1.2/program, and you should be able to run many (most?) SDL-1.2 based games and applications.

2. Will this replace X?
That depends on your needs. For me, it replaced X quite a while ago; I can run my terminal sessions, connect over VNC, run my QEMU virtual machines natively, and the emulators I like to play around with all work thanks to libretro. The default video decoder does its job ‘poorly but ok enough’ for my desktop viewing, and my multi-monitor setup works better now than it has ever done in my 20+ years of trying to stand XFree86/Xorg. For others, that’s not enough, so that might be a reason to wait or simply stay away. It is not like you lack options.

3. How does this all relate to Wayland?
I tried to answer that in the presentation, but it was at the end and perhaps I did not express myself clearly. I intend to support Wayland both as a server and as a client. I’ve had a good look at the protocol (and at Quartz, SurfaceFlinger, DWM, for that matter…), and there’s nothing a Wayland implementation needs that isn’t already in place – in terms of features – but the API design and the number of ‘X’ behaviors Wayland would introduce mean that it will be an optional thing. There is nothing in Wayland that I have any use for, but there are many things I need in terms of better integration with virtual machine guests, and the recent developments in QEmu 2.5/2.6 regarding dma-buf/render-nodes are highly interesting, so it comes down to priorities, or waiting for pull requests 😉

4. Is the Lua scripting necessary?
No. It should take little more effort than removing a compilation unit and about 50 lines of code for the scripting interface to disappear, making the engine run C-only – but then it is a lot more work telling it what to do, with less support code for you to re-use. A lot of scripts in Durden, for instance, were written so that you could cut and paste them into other projects. That’s how Senseye will be made usable for people other than myself 🙂

The engine will get a library build for such purposes further down the road, but right now there are no guarantees about the stability of internal interfaces. The same applies to the shared memory interface, even though that already has a library form. I have a few unresolved problems that may require larger changes to these interfaces, and I want to be able to make them without considering how any change would affect other people.

5. Will this run faster / better with games?
I have no data to support such a claim, so that’s a maybe. A big point, however, is that you can (if you know your Lua, which isn’t very hard) take very good control over what “actually happens” in order to optimize for your needs. For gaming, that would be things like mapping the game output directly to the selected display, without the insanity of the game trying to understand resolution switching and whatever ‘fullscreen’ means. Another possibility would be switching to a simpler set of scripts, or a mode of operation that suspends and ignores windows that don’t contribute to what you want to do.

6. Is the database- application whitelisting necessary?
No, you can connect to the server using another set of primitives (ARCAN_CONNPATH=…), if the set of scripts you are using allows you to. This is what is meant by the “non-authoritative” connection mode, and the database can be entirely in-memory (:memory:) if you don’t want any settings to be stored. The whitelisting will come into better use later, when you can establish your own “chain of trust”.
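As a small illustration of the scripting side, target_alloc is the Lua function that opens such a connection point (the event handling is simplified here):

-- open a named, non-authoritative connection point; a client started with
-- ARCAN_CONNPATH=demo in its environment can then attach to it
target_alloc("demo", function(source, status)
  if status.kind == "connected" then
    -- decide here whether to keep, restrict or delete the connection
  end
end)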

7. Is there a way to contribute? 

There are many ways, besides ‘spreading the word’ (and I could use a Vive ;-)). See the wiki page here:

8. The ‘Amiga UI’ is not working?

That’s the reason it was marked as abandoned (and it has been, practically, since the end of 2013). It was just a thing I did to get a feel for how much code it would take to do something like ‘Amiga meets BeOS’, and to find the places where the API had gone wrong. Afterwards, I changed those parts but never updated the related scripts. That said, it would not be a big effort to get it up and running again, so maybe…

9. Where does this fit in the Linux/BSD ecosystem?

Where do awk, sed and grep fit? Arcan is a versatile tool that you can use for many kinds of graphics processing, and the desktop case illustrated by Durden is just one. I use a minimal init and boot straight into Durden, using a handful of preset mount and networking settings that render current state and controls into small widgets. No service manager, display manager, boot animation, login manager or message-passing IPC.

One of the many problems with interactive graphics in a ‘pipes and filters’-like, ‘user freedom UNIX-way’ model is that performance and latency break down, and you are much more sensitive to those things thanks to the wonders of human cognition. I know some people still think in terms of ‘a framebuffer with pixels’, but the days of Mode 13 are gone. The process now is highly asynchronous and triggered by events far more complicated than a VBLANK interrupt. The design behind Arcan comes about as close to ‘pipes and filters’ as I think I can get without becoming slow or esoteric.

10. Why is there no X support?
This is a big question and ties in with answer 3. A small part is the cost and pain of implementing such a complete mess, which would mean less time for more interesting things. This is a completely self-financed project, fueled mostly by dissent, cocktails and electronic music, with no strong commercial ambitions — all in the tradition of dumb idealism.

A bigger part of committing to a protocol, or saying ‘I should be compatible with, or replace, project XYZ’, is that you limit yourself to thinking in terms of how those projects work and how you should be better than them or outcompete them in some way, rather than in terms of ‘how can I do something interesting with this problem in a way that is different from how others have approached it’.

Collectively speaking, we don’t need yet another project or implementation that takes on X, and if X already feeds your needs, why change? Some of us, however, need something different.



I wrote a Lua programmable display-server++ [arcan], a desktop environment [durden] and a nifty debugging/reversing tool [senseye]

This post is to consolidate some information about the project and sub-projects in an attempt to disseminate what I have been spending way too much time on.

  • Arcan – When a Game Engine meets a Display Server meets a Multimedia Framework
  • Durden – “Keyboard centric” Tiling Desktop Environment
  • Senseye – Visualization for Debugging and Reverse Engineering

Elevator pitch: Many years ago, I grew tired of the unnecessarily large codebases, crazy dependencies, vast attack surfaces and general Rube-Goldbergness of the software tools I had to use on a day-to-day basis. This is my attempt at [cue Futurama Bender voice] ‘building my own themepark, with blackjack and …’ – in order to get some peace and quiet.

This video gives a high-level presentation of the project, development and goals. Here are the slides, and others regarding design, and here is a ton of documentation. If you know your code, dive into the main github repository, check out the other demo videos, or look us up on #arcan @Freenode IRC.

Everything is free, open sourced and shared with the slightest of hope that it will be useful and relevant to others out there. In the spirit and dedication of the ever so relevant +Fravia, I might have hidden some other fun stuff out there in the world for those that still remember what it means to search.



Meet Durden

Following the Arcan 0.5 release, here is Durden – the desktop environment I prefer to use for most computing endeavors these days.

While there is no voice-over presentation, there is at least a longer playlist that shows off most currently available features (display-related features are hard to record without a camera setup), and the normal round of presentation slides.


Arcan 0.5

This took a while, but for good reason. In a sense, the project has reached its halfway point, and most of the scaffolding is now being removed. The major changes are so numerous that I will not try to elaborate on them here, but a few posts will be coming in rapid succession that show off what has been extended and some of what the overarching goals actually are.

The sad part of this release is that we deprecate a few things, e.g. Windows support and the AWB/Gridle applications, as these parts have well outlived their respective usefulness. The video below is a first attempt at explaining some of what this project is about.

For those that dislike slow videos, here are a few links to slides:

  1. High-Level description
  2. Design
  3. Developer Introduction


And for more detailed descriptions, there’s still the wiki:

For the sake of form, here is the condensed list of changes:



Senseye 0.3

Finally tagged a new release of the very-much-in-progress experimental mixup between reverse engineering, visualization and debugging.

Overview presentation slides can be found at: and the code can, as usual, be found at:


Make sure to build and run against an arcan repo version >= 26cbe43 (master usually works :-))


  • Support for overlays added. This feature is connected to some translators that, in addition to the higher-level representation they provide, also add a floating overlay to the main data-window to show additional metadata.
  • Support for injecting data corruption: using the zoom tool and pressing tab while dragging will switch to a red square that indicates the area in which to inject temporary (most sensors) or permanent (memsense, if enabled) corruption.
  • Translators now automatically reconnect on a crash.
  • File sensor preview window can now be set to highlight rows with statistical changes above/below a certain threshold.
  • Multi-file translator now supports individual tile-offset control and single/multiple lockstepping, along with a new 3D view and the ability to use meta-tiles (tile1^tile2, for instance).
  • Compressed image capable translator built using stb_image as default encoder.
  • Memory sensor now has OS X support (courtesy of p0sixninja)
  • Packing mode improvements; split/added bigram/tuple packing with normal/accumulated modes

Digging For Pixels

A few months back, there was this buzz on Twitter and Reddit regarding the possibilities of automatically extracting raw images from memory dumps or live memory.

I was a bit indisposed at the time, but thought that now — with GPU malware and similar nastiness appearing on the horizon — was as good a time as any to contribute a tiny bit on the subject.

This post is the first in what will hopefully (the whole ‘if time permits’ thing) be a series that will double as a scratch pad for when I take the time to work on features for Senseye.

The initial Q from @angealbertini, the neighbourly master of hiding things inside things that are themselves hiding inside other things, went exactly like this:

any script/tool worth checking to automagically identify raw pictures (and their width) in memory dumps?

The discussion that followed is summarized rather well in this blogpost by @gynvael.

Let’s massage this little problem a bit and break it down:

Classification → Format detection → Pitch tuning → Edge detection.


Classification

The first part of the problem is distinguishing the bytes that correspond to what we want from the bytes that are irrelevant or, in other words, grouping based on some property or pattern and identifying each group as either what we are looking for (our pixel buffer) or as something to discard — finding the outline of the image in our virtual pile.

The problem lies in the beholder’s eyes; consider the following images:

[three example images: case1, kitchen-5, manul]


Their respective statistical profiles are all quite different and appear distinct here, but rip them out from the context of a formatted and rendered webpage, hide them in some 4 GB of VRAM (where they will probably be hiding on your computer as you read this) and then try to recover them.

Which ones are the most interesting to you will, of course, depend on some context, as will the criteria that would make them stand out from other bits in a bytestream, so we will need some way of specifying what we want.

The options that immediately come to mind:

  • “Human Vision” – This means cheating and letting a human help us. That means we already (slightly) fail the »automatically« part, and corresponds to the solutions from the discussion link.
  • Various degrees and levels of statistics and signal processing hurt – Histogram matching against databases and auto-correlation are possibilities, albeit rather expensive ones.
  • Magical machine learning algorithms – These require proper training, regular massage and a hefty portion of luck, and can still mistake a penguin for a polar bear.

Add to these the traditionally ‘easy’ ones – context hints in the case of pointers, metadata leftovers, intercepting execution and so on. These have been done to death by forensics tools already, and I would consider them outside the scope here.

Format Detection

»Assuming that we manage to actually classify parts of a byte stream as belonging to a pixel buffer«, we then have the matter of determining the underlying storage format. While it may be tempting to think that this would just be a matter of some reds, greens and blues — that is hopelessly naive and quite far from the harsher reality: depending on the display context (graphics adapter, output signal connector, state in the graphics processing chain etc.) there is a large space of possible raw (uncompressed) pixel storage formats.

To name a few parameters that ought to at least be considered:

  • layout – interlaced, progressive, planar, tiled (tons of GPU- and driver-specific formats)
  • orientation – horizontal, vertical, bi-/quad-directional, polar(?)
  • numerical format – floating point, integral
  • channel count/depth – [1,3,4] channels * [8, 10, 15, 16, 24, 30, 32, …] bits per channel
  • color space – [monochromatic, RGB, YUV, CMYK, HSV, HSL] * [linear, non-linear] * [indexed (palette) or direct]
  • row-padding – pixel buffers may at times need to be padded to fit power-of-two or perhaps 16-byte vector instruction alignments, see also: image stride.

Add to that the possibility of interleaving additional buffers in the padding byte areas, and … I’ve probably missed quite a few options here. Even compressed vs. uncompressed images can ‘look’ surprisingly similar:

One of the two images above is decodable without considering compression; the other is not.
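To make the channel-depth and row-padding parameters concrete, here is a toy Lua calculation of how much the raw size of the very same 1024x768 image can vary (the numbers are illustrative):

local function raw_size(w, h, bpp, align)
  -- rows are padded up to the alignment boundary (‘stride’)
  local stride = math.ceil((w * bpp) / align) * align
  return stride * h
end

print(raw_size(1024, 768, 3, 1))  -- tightly packed r8g8b8: 2359296 bytes
print(raw_size(1024, 768, 4, 16)) -- rgbx, 16-byte row alignment: 3145728 bytes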

Pitch Tuning and Edge Detection

After finding a possible pixel buffer and detecting or forcing a specific format, the next step is finding out what pitch (width) it has, and my guess is that we will need some heuristic (fancy word for evaluation function) to move forward.

This heuristic will, similarly to whatever search strategy is used in step 1, have its fair share of false positives, false negatives and, hopefully, true positives (matches). The images below are from three different automatic pitch detection runs with different testing heuristics.

[three pitch-detection screenshots: stage1, stage2, stage3]

With the width figured out, all that remains is the matter of the beginning, the end (end – beginning = height) and an offset (because chances are that our search window landed a few bytes in, as is the case with the rightmost image above).

For the most part, this last step should be reducible to applying an edge detection filter (like a Sobel operator) and then looking for horizontal and vertical lines. Note: this will be something of a problem for multiple similar images (or gradients) stacked tightly after each other.

[image: detected edges]

The image above shows two copies of a case that is horizontally easy and vertically moderately difficult, with an edge detection filter applied to one of them.

Another alternative (or complement) for detecting the edges would be histogram comparison on a per-row basis, as the row-to-row changes are usually rather small.
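A sketch of that per-row idea in plain Lua — neighbouring rows in a real image tend to have similar byte distributions, so a sharp jump in distance hints at an edge:

local function row_histogram(row) -- row is a string of raw bytes
  local h = {}
  for i = 1, #row do
    local b = row:byte(i)
    h[b] = (h[b] or 0) + 1
  end
  return h
end

local function row_distance(a, b) -- L1 distance between two histograms
  local d = 0
  for v = 0, 255 do
    d = d + math.abs((a[v] or 0) - (b[v] or 0))
  end
  return d
end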


OK, so developing, selecting, and evaluating solutions for all of the above is within the reasonable scope of a Ph.D. thesis, lovely. Massage the regular ‘needle in a haystack’ cash cows, such as forensics for finding traces of child pornography or drawings of airport security checkpoints (while secretly polishing your master plan of scraping credit card numbers, passwords and extortion-able webcam sessions from GPU memory dumps – just sayin’), and the next few years are all pretty much lined up. Let’s take a look at what Senseye has to offer on the subject.

Note: Senseye is not intended as an automated push-button tool but rather as a long list of instruments and measuring techniques for the informed explorer, so there are a lot of trade-offs to consider that remove the possibility of an optimal solution here — but the approach could of course be lifted into a dedicated tool.

Which parts of Senseye would be useful for experimenting with this? Well, for classification, we have histogram matching and pattern matching. Both of them require some kind of reference picture that is related to what you are looking for; both are also rather young features. At the time of writing, they can’t load presets from external files (they have to be part of the sensor data stream), and the comparison functions included in the pattern matching feature all work on the form:

Reference image + per-pixel comparison shader = 1-bit output. Sum the output and compare against a threshold value. Collect underpants. Profit.


With the changes planned for 0.3, the ‘automatic search’ part can probably be fulfilled, so we’ll wait with a classification discussion until then. The screenshots above were taken by just bigram-mapping (first byte X, second byte Y) the same sample images used in the training grounds section further below. But judging from them, it seems like a lot of pictures could probably be classified by modelling bigrams as a distance function from the XY diagonal (or something actually clever; my computer graphics background is pretty much a mix of high-school level math, curiosity and insomnia).
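The bigram mapping itself is trivial; a plain Lua version of what produced those screenshots (the counts would then be mapped to pixel intensity):

local function bigram(data) -- data is a string of raw bytes
  local counts = {}
  for i = 1, #data - 1 do
    local x, y = data:byte(i), data:byte(i + 1) -- byte pair -> (x, y)
    local key = y * 256 + x
    counts[key] = (counts[key] or 0) + 1
  end
  return counts
end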

For tuning, we have the Picture Tuner, which has some limited capability for automatic tuning along with manual adjustments of the starting offset. Underlying design problems and coding issues that relate to working with a seriously outdated OpenGL version (2.1, because portability- and driver- support are shit) limit this in a few ways, the big one being that the image size must be larger than the sample window width; the maximum detectable width is also a function of the sample window size. This is usually not a problem unless you are looking for single icons.

The automatic tuner works roughly like this:

Unpack Shader (input buffer being RGBx) → Tuning Shader → Sample Tiles → CPU readback → Scoring Function.

This is just repeated as a brute-force linear search through the range of useful widths, keeping the one with the highest score. The purpose of the tile-sampling stage is to cut down on memory bandwidth requirements and the cost per evaluation (conservative numbers are some 256x256x4 bytes per buffer, 4-5 intermediate copy steps and some 2000-3000 evaluations per image).

The included scoring function looks for vertical continuity, meaning run lengths of similar vertical lines, discarding single-coloured block results. As can be seen in the picture below, the wrong pitch will give sharp breaks in vertical continuity (and a lower score). This also happens to favour borders and other common desktop-screenshot features.
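A simplified, CPU-side take on that scoring idea — the real thing runs as shaders with tile sampling, and img here is a hypothetical 2D table of pixel values:

local function vertical_score(img, w, h)
  local score = 0
  for x = 1, w do
    local runs, constant = 0, true
    for y = 2, h do
      if img[y][x] == img[y - 1][x] then
        runs = runs + 1 -- pixel continues the vertical run
      else
        constant = false
      end
    end
    if not constant then -- discard single-coloured columns
      score = score + runs
    end
  end
  return score
end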


Training Grounds

We will just pick a common enough combination and see where the rabbit hole takes us: RGB color format, progressive, 8 bits per channel, vertically oriented. We strip the alpha channel just because having it there would be cheating (hmm, every 4th byte suddenly turns into 0xff or has very low entropy, what could it ever be?).

Before going out into the harsh real world, let’s take a few needles:

[three needle images: SONY DSC, snake, sala centrifughe]

Note that the bunny snake (wouldn’t that be a terrifying cross-breed?) has some black padding added to simulate the “padding for alignment” case. Run them through imagemagick’s ‘convert’ utility to get raw r8g8b8 out:

convert snake.png snake.rgb

and sandwich them between some nice slices of CSPRNG output (because uniforms are hot), like this:

# a function, so the slice size is re-randomized on every call
noisecmd() {
  /bin/dd if=/dev/urandom of=pile conv=notrunc \
    bs=1024 count=$(expr $RANDOM % 512 + 64) oflag=append
}

noisecmd
for fn in ./*.rgb ; do
  /bin/dd if=$fn of=pile conv=notrunc oflag=append
  noisecmd
done

Loading it (from the build directory of a senseye checkout):

arcan -p ../res ../senseye &
sense_file ./pile

And, like all good cooking shows, this video here is a realtime recording of how it would go down:

There are a lot of corner cases that are not covered, as you can see in the first attempt to autotune Mr. Bunny-snake; the SCADA image usually takes more tries due to the high number of continuous regions that cause the evaluation tiles to be ignored. Running through the list of everything that is flawed or too immature with this process would make this already too lengthy post even worse, but a good starting point would be the tile management: placement needs to be smarter, evaluation quicker, and readback transfers should pack multiple evaluation widths into the same buffer (the readback synchronization cost is insane).

Among the many things left as an exercise for the reader: try to open some pictures in a program of your liking, dump process memory with your favourite tool (I am slightly biased towards ECFS because of all the other cool features in the format) and try to dig out the pixels. When that is over and done with, look into something a lot more hardcore (which won’t work by just clicking around in the tools provided):

2015 DFRWS Forensic Challenge
