Some Questions & Answers

A few days have gone by since the project was presented, and while I am not very active on the forums and other places where the project has been discussed, I have seen some questions and received some directed at me that I think should be answered in public view.

1. If I build and install Arcan, what can I do with it?
To just try things out and play with it, you can start by building it with SDL as the video platform and run it from X or OS X. It won’t be as fast or have as many features as a more native platform like egl-dri, but it is enough to try it out and play around. A few brave souls have started packaging, so that will also help soon. The main application you will want to try with it is probably the desktop environment, Durden. With it, you have access to the terminal emulator, libretro cores for games, a video player and a VNC client. There is a work-in-progress QEmu integration repository and soon an SDL2 backend. If you are adventurous, it is also possible to build with -DDISABLE_HIJACK=OFF to get a libahijack_sdl12.so. Run with LD_PRELOAD=/path/to/libahijack_sdl12.so /my/sdl1.2/program and you should be able to run many (most?) SDL 1.2-based games and applications.

2. Will this replace X.org?
That depends on your needs. For me, it replaced X quite a while ago; I can run my terminal sessions, connect over VNC, run my QEMU virtual machines natively, and the emulators I like to play around with all work thanks to libretro. The default video decoder does its job ‘poorly but ok enough’ for my desktop viewing, and my multi-monitor setup works better now than it ever has in my 20+ years of trying to stand XFree86/X.org. For others, that is not enough, and that might be reason to wait or simply stay away. It is not as if you lack options.

3. How does this all relate to Wayland?
I tried to answer that in the presentation, but it was at the end and perhaps I did not express myself clearly. I intend to support Wayland both as a server and as a client. I’ve had a good look at the protocol (and at Quartz, SurfaceFlinger and DWM, for that matter…), and there’s nothing a Wayland implementation needs that isn’t already in place – in terms of features – but the API design and the amount of ‘X’ behaviors Wayland would introduce mean that it will be an optional thing. There is nothing in Wayland that I have any use for, but there are many things I need in terms of better integration with virtual machine guests, and the recent developments in QEmu 2.5/2.6 in regards to dma-buf/render-nodes are highly interesting, so it comes down to priorities or waiting for pull requests 😉

4. Is the Lua scripting necessary?
No. It should take little more than removing a compilation unit and about 50 lines of code for the scripting interface to disappear and the engine to run as C only – but then it is a lot more work telling it what to do, and there is less support code for you to re-use. A lot of the scripts in Durden, for instance, were written so that you could cut and paste them into other projects. That is how Senseye will be made usable for people other than myself 🙂
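For a feel of what the Lua side looks like, here is a minimal sketch of an appl – the engine loads a directory whose main script carries the same name, and entry points are prefixed with that name. The calls below are written from memory as an illustration; check the API documentation rather than trusting this sketch.

-- hello/hello.lua, started with: arcan /path/to/hello
function hello()
    -- entry point: create a green rectangle and put it on screen
    local box = fill_surface(64, 64, 0, 255, 0)
    show_image(box)
    move_image(box, 100, 100)
end

-- called for every input sample (keyboard, mouse, game devices, ...)
function hello_input(iotbl)
    if iotbl.translated and iotbl.active then
        shutdown()
    end
end

The support scripts mentioned above are just Lua files built on the same primitives, which is what makes cutting and pasting them between Durden, Senseye and other projects practical.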

The engine will get a library build for such purposes further down the road, but right now there is no guarantee about the stability of the internal interfaces. The same applies to the shared memory interface, even though that already comes in library form. I have a few unresolved problems that may require larger changes to these interfaces, and I want to be free to make those changes without considering how they would affect other people.

5. Will this run faster / better with games?
I have no data to support such a claim, so that is a maybe. A big point, however, is that you can (if you know your Lua, which is not very hard) have very good control over what “actually happens” in order to optimize for your needs. For gaming, that would be things like mapping the game output directly to the selected display, without the insanity of the game trying to understand resolution switching and whatever ‘fullscreen’ means. Another possibility would be switching to a simpler set of scripts, or a mode of operation that suspends and ignores windows that do not contribute to what you want to do.
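As a rough illustration of the ‘map the game output directly’ point, the kind of script fragment I have in mind looks something like the sketch below – treat the exact function names and the display index as assumptions about the current API rather than facts.

-- sketch: hand a game frameserver the whole output, bypassing composition
-- 'game_vid' would be the video object of a running libretro core
local function dedicate_display(game_vid)
    resize_image(game_vid, VRESW, VRESH) -- VRESW/VRESH: engine globals for output size
    map_video_display(game_vid, 0)       -- 0 assumed to be the primary display id
end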

6. Is the database-backed application whitelisting necessary?
No, you can connect to the server using another set of primitives (ARCAN_CONNPATH=…), if the set of scripts you are using allows you to. This is what is meant by the “non-authoritative” connection mode, and the database can live entirely in :memory: if you don’t want any settings to be stored. The whitelisting will come into better use later, when you can establish your own “chain of trust”.
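To make the ‘if the set of scripts allows you to’ part concrete, here is a hedged sketch of what a script has to do to open such a connection point; the event kinds and fields are written from memory and may differ from the current API.

-- open a non-authoritative connection point; a client reaches it by
-- setting ARCAN_CONNPATH=demo in its environment before connecting
target_alloc("demo", function(source, status)
    if status.kind == "resized" then
        -- first (or new) buffer from the client, show it
        resize_image(source, status.width, status.height)
        show_image(source)
    elseif status.kind == "terminated" then
        delete_image(source)
    end
end)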

7. Is there a way to contribute? 

There are many ways, besides ‘spreading the word’ (and I could use a Vive ;-)). See the wiki page here: https://github.com/letoram/arcan/wiki/contrib

8. The ‘Amiga UI’ is not working?

That is why it was marked as abandoned (and it has practically been so since the end of 2013). It was just a thing I did to get a feel for how much code it would take to do something like ‘Amiga meets BeOS’, and to find some places where the API had gone wrong. Afterwards, I changed those parts but never updated the related scripts. That said, it is not a big effort to get it up and running again, so maybe…

9. Where does this fit in the Linux/BSD ecosystem?

Where do awk, sed and grep fit? Arcan is a versatile tool that you can use for many kinds of graphics processing, and the desktop case illustrated by Durden is just one of them. I use a minimal init and boot straight into Durden, with a handful of preset mount and networking settings whose current state and controls are rendered as small widgets. No service manager, display manager, boot animation, login manager or message passing IPC.

One of the many problems with doing interactive graphics in a ‘pipes and filters’, ‘user-freedom UNIX-way’ model is that performance and latency break down, and you are much more sensitive to those things thanks to the wonders of human cognition. I know some people still think in terms of ‘a framebuffer with pixels’, but the days of Mode 13 are gone. The process now is highly asynchronous and triggered by events far more complicated than a VBLANK interrupt. The design behind Arcan is about as close to ‘pipes and filters’ as I think I can come without becoming slow or esoteric.

10. Why is there no X support?
This is a big question and ties in with answer 3.  A small part is the cost and pain of implementing such a complete mess, which would mean less time for more interesting things. This is a completely self-financed project, fueled mostly by dissent, cocktails and electronic music, with no strong commercial ambitions — all in the tradition of dumb idealism.

A bigger part is that, by committing to a protocol or saying ‘I should be compatible with, or replace, project XYZ’, you limit yourself to thinking in terms of how those projects work and how you should be better than or outcompete them in some way, rather than in terms of ‘how can I do something interesting with this problem in a way that is different from how others have approached it’.

Collectively speaking, we don’t need yet another project or implementation that takes on X, and if X already serves your needs, why change? Some of us, however, need something different.

 


I wrote a Lua-programmable display-server++ [arcan], a desktop environment [durden] and a nifty debugging/reversing tool [senseye]

This post is to consolidate some information about the project and sub-projects in an attempt to disseminate what I have been spending way too much time on.

  • Arcan – When a Game Engine meets a Display Server meets a Multimedia Framework
  • Durden – “Keyboard centric” Tiling Desktop Environment
  • Senseye – Visualization for Debugging and Reverse Engineering

Elevator pitch: Many years ago, I grew tired of the unnecessarily large codebases, crazy dependencies, vast attack surfaces and general Rube Goldberg-ness of the software tools I had to use on a day-to-day basis. This is my attempt at [cue Futurama: Bender voice] ‘building my own theme park, with blackjack and …’ – in order to get some peace and quiet.

This video gives a high-level presentation of the project, its development and goals. Here are the slides, and others regarding design, and here is a ton of documentation. If you know your code, dive into the main github repository, check out the other demo videos, or look us up in #arcan on Freenode IRC.

Everything is free, open sourced and shared with the slightest hope that it will be useful and relevant to others out there. In the spirit and dedication of the ever so relevant +Fravia, I might have hidden some other fun stuff out there in the world for those that still remember what it means to search.



Meet Durden

Following the Arcan 0.5 release, here is Durden – the desktop environment I prefer to use for most computing endeavors these days.

While there is no voice-over presentation, there is at least a longer playlist that shows off most of the currently available features (display-related features are hard to record without a camera setup), along with the normal round of presentation slides.


Arcan 0.5

This took a while, but for good reason. In a sense, the project has reached its halfway point and most of the scaffolding is now being removed. The major changes are so numerous that I will not try to elaborate on them here, but a few posts will be coming in rapid succession that show off what has been extended and some of what the overarching goals actually are.

The sad part with this release is that we deprecate a few things, e.g. Windows support and the AWB/Gridle applications, as these parts have well outlived their respective usefulness. The video below is the first attempt at explaining some of what this project is about.

For those who dislike slow videos, here are a few links to slides:

  1. High-Level description
  2. Design
  3. Developer Introduction

And for more detailed descriptions, there’s still the wiki:

https://github.com/letoram/arcan/wiki

For the sake of form, here is the condensed list of changes:

Continue reading


Senseye 0.3

Finally tagged a new release of the very-much-in-progress experimental mixup between reverse engineering, visualization and debugging.

Overview presentation slides can be found at: https://speakerdeck.com/letoram/senseye and the code can, as usual, be found at: https://github.com/letoram/senseye


Make sure to build and run against an Arcan repository version >= 26cbe43 (master usually works :-)).

Changelog:

  • Support for overlays added. This feature connects to some translators that, in addition to the higher-level representation they provide, also add a floating overlay to the main data window showing additional metadata.
  • Support for injecting data corruption: using the zoom tool and pressing tab while dragging switches to a red square that indicates the area in which to inject temporary (most sensors) or permanent (memsense, if enabled) corruption.
  • Translators now automatically reconnect on crash.
  • File sensor preview window can now be set to highlight rows with statistical changes above/below a certain threshold.
  • Multi-file translator now supports individual tile-offset control and single/multiple lockstepping, along with a new 3D view and the ability to use meta-tiles (tile1^tile2, for instance).
  • Translator capable of handling compressed images, built using stb_image as the default decoder.
  • Memory sensor now has OS X support (courtesy of p0sixninja).
  • Packing mode improvements: split/added bigram/tuple to have normal/accumulated modes.

Digging For Pixels

A few months back, there was a buzz on Twitter and Reddit regarding the possibility of automatically extracting raw images from memory dumps or live memory.

I was a bit indisposed at the time, but thought that now — with GPU malware and similar nastiness appearing on the horizon — was as good a time as any to contribute a tiny bit on the subject.

This post is the first in what will hopefully (the whole ‘if time permits’ thing) be a series that will double as a scratch pad for when I take the time to work on features for Senseye.

The initial question from the neighbourly master of hiding things inside things that are themselves hiding inside other things, @angealbertini, went exactly like this:

any script/tool worth checking to automagically identify raw pictures (and their width) in memory dumps?

The discussion that followed is summarized rather well in this blogpost by @gynvael.

Let’s massage this little problem a bit and break it down:

Classification → Format detection → Pitch tuning → Edge detection.

Classification

The first part of the problem is distinguishing the bytes that correspond to what we want from the bytes that are irrelevant — in other words, grouping bytes based on some property or pattern and identifying each group as either what we are looking for (our pixel buffer) or something to discard: finding the outline of the image in our virtual pile.

The problem lies in the eye of the beholder; consider the following images:

[images: case1, kitchen-5, manul]

 

Their respective statistical profiles are all quite different and appear distinct here, but rip them out from the context of a formatted and rendered webpage, hide them in some 4 GB of VRAM (where they are probably hiding on your computer as you read this) and then try to recover them.

Which ones are the most interesting to you will, of course, depend on context, as will the criteria that make them stand out from other bits in a byte stream, so we will need some way of specifying what we want.

The options that immediately come to mind:

  • “Human Vision” – This means cheating and letting a human help us. That already means we slightly fail the »automatically« part, and it corresponds to the solutions from the discussion link.
  • Various degrees and levels of statistics and signal processing hurt – Histogram matching against databases and auto-correlation are possibilities, albeit rather expensive ones.
  • Magical machine learning algorithms – These require proper training, regular massage and a hefty portion of luck, and can still mistake a penguin for a polar bear.

Add to those the traditionally ‘easy’ ones – context hints in the case of pointers, metadata leftovers, intercepting execution and so on. These have been done to death by forensics tools already, and I would consider them outside the scope here.

Format Detection

Assuming that we manage to actually classify parts of a byte stream as belonging to a pixel buffer, we then have the matter of determining the underlying storage format. While it may be tempting to think that this would just be a matter of some reds, greens and blues — that is hopelessly naive and quite far from the harsher reality: depending on the display context (graphics adapter, output signal connector, state in the graphics processing chain, etc.) there is a large space of possible raw (uncompressed) pixel storage formats.

To name a few parameters that ought to at least be considered:

  • layout – interlaced, progressive, planar, tiled (tons of GPU- and driver-specific formats)
  • orientation – horizontal, vertical, bi-/quad-directional, polar(?)
  • numerical format – floating point, integral
  • channel count/depth – [1, 3, 4] channels * [8, 10, 15, 16, 24, 30, 32, …] bits per channel
  • color space – [monochromatic, RGB, YUV, CMYK, HSV, HSL] * [linear, non-linear] * [indexed (palette) or direct]
  • row padding – pixel buffers may at times need to be padded to fit power-of-two or perhaps 16-byte vector instruction alignments; see also: image stride

Add to that the possibility of interleaving additional buffers in the padding byte areas, and … I have probably missed quite a few options here. Even compressed vs. uncompressed images can ‘look’ surprisingly similar:

[image: aorb]
One of the two images above is decodable without considering compression; the other is not.
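To get a feel for how quickly the combinations add up, here is a small Lua sketch that enumerates plausible strides (bytes per row) for a candidate width under a few of the parameters above – the set of channel depths and alignments is illustrative, not exhaustive.

-- sketch: candidate strides for a given pixel width
local channel_bytes = { 1, 3, 4 }      -- mono, RGB, RGBx/RGBA at 8 bits per channel
local alignments    = { 1, 4, 16, 64 } -- none, word, vector, tile-ish padding

local function candidate_strides(width)
    local out = {}
    for _, bpp in ipairs(channel_bytes) do
        for _, align in ipairs(alignments) do
            local stride = math.ceil((width * bpp) / align) * align
            out[stride] = true
        end
    end
    return out
end

-- e.g. a 509 pixel wide RGB row padded to 16 bytes becomes 1536 bytes, not 1527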

Pitch Tuning and Edge Detection

After finding a possible pixel buffer and detecting or forcing a specific format, the next step should be finding out what pitch (width) it has, and my guess is that we will need some heuristic (fancy word for evaluation function) to move forward.

This heuristic will, similarly to whatever search strategy is used in the classification step, have its fair share of false positives, false negatives and, hopefully, true positives (matches). The images below are from three different automatic pitch detection runs with different testing heuristics.

[images: stage1, stage2, stage3]

With the width figured out, all that remains is the matter of the beginning, the end (height = (end – beginning) / pitch) and an offset (because chances are that our search window landed a few bytes in, as is the case with the rightmost image above).

For the most part, this last step should be reducible to applying an edge detection filter (like a Sobel operator) and then looking for horizontal and vertical lines. Note: this will be something of a problem for multiple similar images (or gradients) stacked tightly after each other.

[image: detected edges]

The image above shows two copies of a horizontally easy and vertically moderately difficult case, with an edge detection filter applied to one of them.

Another alternative (or complement) for detecting the edge would be to do a histogram comparison on a per-row basis, as the row-to-row changes are usually rather small.
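A CPU-side Lua sketch of that per-row idea – Senseye would do this kind of work in shaders, and the buffer layout and threshold here are assumptions for illustration only.

-- per-row intensity histogram; 'buf' is a 1-indexed array of byte values
local function row_histogram(buf, offset, pitch)
    local h = {}
    for i = 0, 255 do h[i] = 0 end
    for i = 1, pitch do
        local v = buf[offset + i]
        h[v] = h[v] + 1
    end
    return h
end

local function histogram_distance(a, b)
    local d = 0
    for i = 0, 255 do d = d + math.abs(a[i] - b[i]) end
    return d
end

-- rows where the distance to the previous row spikes are boundary candidates
local function find_boundaries(buf, pitch, rows, threshold)
    local boundaries = {}
    local prev = row_histogram(buf, 0, pitch)
    for row = 1, rows - 1 do
        local cur = row_histogram(buf, row * pitch, pitch)
        if histogram_distance(prev, cur) > threshold then
            table.insert(boundaries, row)
        end
        prev = cur
    end
    return boundaries
end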

Approach

OK, so developing, selecting and evaluating solutions for all of the above is within the reasonable scope of a Ph.D. thesis, lovely. Massage the regular ‘needle in a haystack‘ cash cows such as forensics for finding traces of child pornography or drawings of airport security checkpoints (while secretly polishing your master plan of scraping credit card numbers, passwords and extortion-able webcam sessions from GPU memory dumps – just sayin’) and the next few years are pretty much all lined up. Let’s take a look at what Senseye has to offer on the subject.

Note: Senseye is not intended as an automated push-button tool but rather as a long list of instruments and measuring techniques for the informed explorer, so there are a lot of trade-offs to consider that remove the possibility of an optimal solution here — but the approach could, of course, be lifted into a dedicated tool.

Which parts of Senseye would be useful for experimenting with this? Well, for classification we have histogram matching and pattern matching. Both of them require some kind of reference picture that is related to what you are looking for; both are also rather young features. At the time of writing, they can’t load presets from external files (they have to be part of the sensor data stream), and the comparison functions included in the pattern matching feature all work on the form:

Reference image + per-pixel comparison shader = 1-bit output. Sum the output and compare against a threshold value. Collect underpants. Profit.

[screenshot: tuple]

With the changes planned for 0.3, the ‘automatic search’ part can probably be fulfilled, so we will save the classification discussion until then. The screenshots above were taken by just bigram-mapping (first byte X, second byte Y) the same sample images used in the training grounds section further below. But judging from them, it seems like a lot of pictures could probably be classified by modelling bigrams as a distance function from the XY diagonal (or something actually clever; my computer graphics background is pretty much a mix of high-school level math, curiosity and insomnia).
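For what it is worth, the naive diagonal-distance scoring could look like the sketch below – this illustrates the idea only, it is not something Senseye currently implements.

-- score a byte window by how far its bigrams fall from the X = Y diagonal;
-- text and structured data tend to cluster, noise spreads out
local function bigram_diagonal_score(buf)
    local sum, n = 0, 0
    for i = 1, #buf - 1 do
        -- |x - y| is proportional to the distance from the diagonal
        sum = sum + math.abs(buf[i] - buf[i + 1])
        n = n + 1
    end
    return n > 0 and (sum / n) or 0
end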

For tuning, we have the Picture Tuner, which has some limited capability for automatic tuning along with manual adjustments for the starting offset. Underlying design problems and coding issues related to working with a seriously outdated OpenGL version (2.1, because portability and drivers are shit) limit this in a few ways, the big one being that the image size must be > the sample window width, and the maximum detectable width is also a function of the sample window size. This is usually not a problem unless you are looking for single icons.

The automatic tuner works roughly like this:

Unpack Shader (input buffer being RGBx) → Tuning Shader → Sample Tiles → CPU readback → Scoring Function.

This is just repeated as a brute-force linear search through the range of useful widths; keep the one with the highest score. The purpose of the tile-sampling stage is to cut down on memory bandwidth requirements and cost per evaluation (conservative numbers are some 256x256x4 bytes per buffer, 4-5 intermediate copy steps and some 2000-3000 evaluations per image).

The included scoring function looks for vertical continuity, meaning run lengths of similar vertical lines, discarding single-coloured block results. As can be seen in the picture below, the wrong pitch will give sharp breaks in vertical continuity (a lower score). This also happens to favour borders and other common desktop-screenshot features.

[image: stage1]
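Stripped of the shader machinery and tile sampling, the search itself boils down to something like this sketch; the similarity threshold is arbitrary, and the single-coloured block rejection of the real scoring function is left out.

-- 'buf' is a 1-indexed array of grayscale byte values, already unpacked
local function vertical_continuity_score(buf, width, height)
    local score = 0
    for col = 0, width - 1 do
        local run = 0
        for row = 1, height - 1 do
            local above = buf[(row - 1) * width + col + 1]
            local here  = buf[row * width + col + 1]
            if math.abs(here - above) < 8 then -- 'similar enough' (assumption)
                run = run + 1
            else
                score = score + run * run -- reward long unbroken runs
                run = 0
            end
        end
        score = score + run * run
    end
    return score
end

-- brute-force linear search: keep the width with the highest score
local function find_pitch(buf, min_w, max_w)
    local best_w, best_score = min_w, -1
    for w = min_w, max_w do
        local s = vertical_continuity_score(buf, w, math.floor(#buf / w))
        if s > best_score then
            best_score, best_w = s, w
        end
    end
    return best_w
end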

Training Grounds

We will just pick a common enough combination and see where the rabbit hole takes us: RGB color format, progressive, 8 bits per channel, vertically oriented. We strip the alpha channel just because having it there would be cheating (hmm, every 4th byte suddenly turns into 0xff or has very low entropy, what could it ever be?).

Before going out into the harsh real world, let’s take a few needles:

[sample images: SONY DSC, snake, sala centrifughe]

Note that the bunny snake (wouldn’t that be a terrifying cross-breed?) has some black padding added to simulate the “padding for alignment” case. Run them through ImageMagick’s ‘convert’ utility to get raw r8g8b8 out:

convert snake.png snake.rgb

and sandwich them between some nice slices of CSPRNG (because uniforms are hot) like this:


#!/bin/bash
# append a randomly sized (64-575 KiB) slice of urandom noise to 'pile'
noise() {
  /bin/dd if=/dev/urandom of=pile conv=notrunc \
    bs=1024 count=$(expr $RANDOM % 512 + 64) oflag=append
}

noise

# sandwich every raw image between slices of noise
for fn in ./*.rgb ; do
  /bin/dd if="$fn" of=pile conv=notrunc oflag=append
  noise
done

Loading it (from the build directory of a senseye checkout):

arcan -p ../res ../senseye &
sense_file ./pile

And, like all good cooking shows, this video here is a realtime recording of how it would go down:

There are a lot of corner cases that are not covered, as you can see in the first attempt to autotune Mr. Bunny-snake; usually the SCADA image takes more tries due to the high number of continuous regions that cause the evaluation tiles to be ignored. Running through the list of everything that is flawed or too immature in this process would make this already too lengthy post even worse, but a good starting point would be the tile management: placement needs to be smarter, evaluation quicker, and readback transfers should pack multiple evaluation widths into the same buffer (the readback synchronization cost is insane).

Among the many things left as an exercise for the reader: try opening some pictures in a program of your liking, dump the process memory with your favourite tool (I am slightly biased towards ECFS because of all the other cool features in the format) and try to dig out the pixels. When that is over and done with, look into something a lot more hardcore (which won’t work by just clicking around in the tools provided):

2015 DFRWS Forensic Challenge


Senseye 0.2

Senseye has received quite a lot of attention, fixes and enhancements over the last few months and is well overdue for a new tagged release. Highlights (not including tweaks, performance boosts, UI work and bugfixes) since last time include:

  • Translators: it is now possible to connect high-level decoders that track the selected cursor position in the data window to give more abstract views, e.g. hex, ASCII, disassembly (using Capstone).
  • New sensor: MFile – takes multiple input files of (preferably) the same format and shows them side by side in a tile-style layout, along with a support window that highlights
  • New measuring tool: Byte Distance – this view looks a lot like the normal histogram, but each bin shows the number of bytes from a set marker position to the next time each possible value is found.
  • New visual tool: Picture Tuner – used for manually and/or automatically finding raw image parameters (stride, color format and so on).
  • Pattern matching: pattern finding using a visual pattern reference (like n-gram based mapping) and/or histogram matching.
  • Improved seeking and playback control: multiple stepping sizes to choose from, along with the option to align to a specific value.
  • File sensor now updates the preview window progressively and works a lot better with larger (multi-gigabyte) input sources.

True to form, the “quick” demo video below tries to show off all the new features. Be sure to watch with annotations enabled. In addition, a more detailed write-up on the picture tuner will be posted in a day or two. Make sure that you sync and rebuild Arcan before trying this out, as a lot of core engine changes have been made.

  • 0:00 – 1:20, MFile sensor.
  • 1:20 – 2:50, File enhancements, Coloring, Histogram Updates, Byte Distance.
  • 2:50 – 5:40, Memsense updates, Translator feature.
  • 5:40 – 7:00, Picture Tuner.