I wrote a Lua-programmable display-server++ [arcan], a desktop environment [durden] and a nifty debugging/reversing tool [senseye]

This post is to consolidate some information about the project and sub-projects in an attempt to disseminate what I have been spending way too much time on.

  • Arcan – When a Game Engine meets a Display Server meets a Multimedia Framework
  • Durden – “Keyboard centric” Tiling Desktop Environment
  • Senseye – Visualization for Debugging and Reverse Engineering

Elevator pitch: Many years ago, I grew tired of the unnecessarily large codebases, crazy dependencies, vast attack surfaces and general Rube-Goldbergness of the software tools I had to use on a day to day basis. This is my attempt at [cue futurama:bender voice] ‘building my own theme park, with blackjack and …’ – in order to get some peace and quiet.

This video gives a high-level presentation of the project, development and goals. Here are the slides, and others regarding design, and here is a ton of documentation. If you know your code, dive into the main github repository, check out the other demo videos, or look us up on #arcan @Freenode IRC.

Everything is free, open sourced and shared with the slightest of hope that it will be useful and relevant to others out there. In the spirit and dedication of the ever so relevant +Fravia, I might have hidden some other fun stuff out there in the world for those that still remember what it means to search.


Meet Durden

Following the Arcan 0.5 release, here is Durden – the desktop environment I prefer to use for most computing endeavors these days.

While there is no voice-over presentation, there is at least a longer playlist that shows off most currently available features (display-related features are hard to record without a camera setup), along with the normal round of presentation slides.


Arcan 0.5

This took a while, but for good reason. In a sense, the project has reached its halfway point and most of the scaffolding is now being removed. The major changes are so numerous that I will not try to elaborate on them here, but a few posts will be coming in rapid succession that show off what has been extended and some of what the overarching goals actually are.

The sad part with this release is that we deprecate a few things, e.g. Windows support and the AWB/Gridle applications, as these parts have well outlived their respective usefulness. The video below is a first attempt at explaining some of what this project is about.

For those that dislike slow videos, here are a few links to slides:

  1. High-Level description
  2. Design
  3. Developer Introduction

And for more detailed descriptions, there’s still the wiki:

https://github.com/letoram/arcan/wiki

For the sake of form, here is the condensed list of changes:

Continue reading


Senseye 0.3

Finally tagged a new release of the very-much-in-progress experimental mixup between reverse engineering, visualization and debugging.

Overview presentation slides can be found at: https://speakerdeck.com/letoram/senseye and the code can, as usual, be found at: https://github.com/letoram/senseye


Make sure to build and run against an arcan repo version >= 26cbe43 (master usually works :-))

Changelog:

  • Support for Overlays added: a feature connected to some translators that, in addition to the higher-level representation provided, also adds a floating overlay to the main data window showing additional metadata.
  • Support for injecting data corruption: using the zoom tool and pressing tab while dragging will switch to a red square that indicates the area in which to inject temporary (most sensors) or permanent (memsense, if enabled) corruption.
  • Translators now automatically reconnect on crash.
  • The file sensor preview window can now be set to highlight rows with statistical changes above/below a certain threshold.
  • The multi-file translator now supports individual tile-offset control and single/multiple lockstepping, along with a new 3d view and the ability to use meta-tiles (tile1^tile2 for instance).
  • Compressed-image capable translator built using stb_image as the default decoder.
  • The memory sensor now has OS X support (courtesy of p0sixninja).
  • Packing mode improvements: split/added bigram/tuple to have normal/accumulated modes.

Digging For Pixels

A few months back, there was this buzz on Twitter and Reddit regarding the possibilities of automatically extracting raw images from memory dumps or live memory.

I was a bit indisposed at the time, but thought that now — with GPU malware and similar nastiness appearing on the horizon — was as good a time as any to contribute a tiny bit on the subject.

This post is the first in what will hopefully (the whole ‘if time permits’ thing) be a series that will double as a scratch pad for when I take the time to work on features for Senseye.

The initial Q from the neighbourly master of hiding things inside things that are themselves hiding inside of other things, @angealbertini, went exactly like this:

“any script/tool worth checking to automagically identify raw pictures (and their width) in memory dumps?”

The discussion that followed is summarized rather well in this blogpost by @gynvael.

Let’s massage this little problem a bit and break it down:

Classification → Format detection → Pitch tuning → Edge detection.

Classification

The first part of the problem is distinguishing the bytes that correspond to what we want from the bytes that are irrelevant; in other words, grouping based on some property or pattern and identifying each group as either what we are looking for (our pixel buffer) or as something to discard — finding the outline of the image in our virtual pile.

The problem lies in the eyes of the beholder; consider the following images:

[Images: case1, kitchen-5, manul]

Their respective statistical profiles are all quite different and appear distinct here, but rip them out of the context of a formatted and rendered webpage, hide them in some 4 GB of VRAM (where they will probably be hiding on your computer as you read this) and then try to recover them.

Which ones are the most interesting to you will, of course, depend on context, as will the criteria that make them stand out from other bits in a bytestream, so we will need some way of specifying what we want.

The options that immediately come to mind:

  • “Human Vision” – This means cheating and letting a human help us. That already means we somewhat fail the »automatically« part, and corresponds to the solutions from the discussion link.
  • Various degrees and levels of statistics and signal processing hurt – Histogram matching against databases and auto-correlation are possibilities, albeit rather expensive ones.
  • Magical machine learning algorithms – These require proper training, regular massage and a hefty portion of luck, and can still mistake a penguin for a polar bear.

Add to these the traditionally ‘easy’ ones – context hints in the case of pointers, metadata leftovers, intercepting execution and so on. These have been done to death by forensics tools already, and I would consider them outside the scope here.

Format Detection

»Assuming that we manage to actually classify parts of a byte stream as belonging to a pixel buffer«, we then have the matter of determining the underlying storage format. While it may be tempting to think that this would just be a matter of some reds, greens and blues — that is hopelessly naive and quite far from the harsher reality: depending on the display context (graphics adapter, output signal connector, state in the graphics processing chain etc.) there is a large space of possible raw (uncompressed) pixel storage formats.

To name a few parameters that ought to at least be considered:

  • layout – interlaced, progressive, planar, tiled (tons of GPU and driver specific formats)
  • orientation – horizontal, vertical, bi- / quad- directional, polar(?)
  • numerical format – floating point, integral
  • channel count/depth – [1,3,4] channels * [8, 10, 15, 16, 24, 30, 32, …] bits per channel
  • color space – [monochromatic, RGB, YUV, CMYK, HSV, HSL] * [linear, non-linear] * [indexed (palette) or direct]
  • row-padding – pixel buffers may at times need to be padded to fit power-of-two or perhaps 16-byte vector instruction alignments, see also: Image stride (a short sketch follows below).

Along with the possibility of interleaving additional buffers in padding byte areas, and … I’ve probably missed quite a few options here. Even compressed vs. uncompressed images can ‘look’ surprisingly similar:
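
To make the row-padding point concrete, here is a minimal sketch (plain Python, every number and name my own, nothing Senseye-specific) of how alignment alone changes the byte footprint of an otherwise identical image:

# Minimal sketch of the row-padding ("stride") parameter: how the same
# 641x480, 3 bytes-per-pixel image grows as rows are padded to different
# alignments. Purely illustrative; not Senseye code.

def row_stride(width, bytes_per_pixel, align):
    """Bytes per row once padded up to the next 'align'-byte boundary."""
    row = width * bytes_per_pixel
    return ((row + align - 1) // align) * align

if __name__ == "__main__":
    w, h, bpp = 641, 480, 3            # odd width to force padding
    for align in (1, 4, 16, 64):       # packed, word, vector, cache-line style
        stride = row_stride(w, bpp, align)
        pad = stride - w * bpp
        print(f"align={align:3d}: {stride} bytes/row "
              f"({pad} padding bytes, {stride * h} bytes total)")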

[Image: aorb] One of the two images above is decodable without considering compression, the other is not.

Pitch Tuning and Edge Detection

After finding a possible pixel buffer and detecting or forcing a specific format, the next step should be finding out what pitch (width) it has, and my guess is that we will need some heuristic (fancy word for evaluation function) to move forward.

This heuristic will, similarly to whatever search strategy is used in step 1, have its fair share of false positives, false negatives and, hopefully, true positives (matches). The images below are from three different automatic pitch detection runs with different testing heuristics.

[Images: stage1, stage2, stage3]

With the width figured out, all that remains is the matter of the beginning, the end (end – beginning = height) and an offset (because chances are that our search window landed a few bytes in, as is the case with the rightmost image above).

For the most part, this last step should be reducible to applying an edge detection filter (like a Sobel operator) and then looking for horizontal and vertical lines. Note: this will be something of a problem for multiple similar images (or gradients) stacked tightly after each other.

[Image: detected edges]

The image above shows two copies of a horizontally easy and vertically moderately difficult case, with an edge detection filter applied to one of them.
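
As a rough illustration of that last step, here is a small pure-Python toy (my own sketch, not the filter Senseye actually uses) that applies the two Sobel kernels to a grayscale buffer and sums the absolute responses per row and per column; strong, isolated peaks in those profiles are candidate image borders:

# Toy Sobel pass over a grayscale image stored as a list of rows (0-255 ints).
# Illustrative only; a real implementation would use a proper image library.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_profiles(img):
    h, w = len(img), len(img[0])
    row_energy = [0] * h
    col_energy = [0] * w
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for ky in range(3):
                for kx in range(3):
                    px = img[y + ky - 1][x + kx - 1]
                    gx += SOBEL_X[ky][kx] * px
                    gy += SOBEL_Y[ky][kx] * px
            mag = abs(gx) + abs(gy)
            row_energy[y] += mag
            col_energy[x] += mag
    return row_energy, col_energy   # peaks hint at horizontal/vertical borders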

Another alternative (or complement) for detecting the edges would be to do histogram comparison on a per-row basis, as the row-to-row changes are usually rather small.
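
The same idea in sketch form (illustrative Python again; the function names and the plain L1 distance are my own choices, not what Senseye ships): histogram every row and flag the rows where the distance to the previous one jumps.

# Illustrative sketch: per-row byte histograms over a raw pixel buffer.
# A sudden jump in row-to-row histogram distance hints at an image boundary.

def row_histograms(data, stride):
    """Byte histogram for every full 'stride'-sized row in a raw buffer."""
    hists = []
    for off in range(0, len(data) - stride + 1, stride):
        h = [0] * 256
        for b in data[off:off + stride]:
            h[b] += 1
        hists.append(h)
    return hists

def boundary_candidates(hists, threshold):
    """Row indices where the histogram jumps relative to the previous row."""
    out = []
    for i in range(1, len(hists)):
        dist = sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i]))
        if dist > threshold:
            out.append(i)
    return out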

Approach

OK, so developing, selecting and evaluating solutions for all of the above is within the reasonable scope of a Ph.D. thesis, lovely. Massage the regular ‘needle in a haystack’ cash cows such as forensics for finding traces of child pornography or drawings of airport security checkpoints (while secretly polishing your master plan of scraping credit card numbers, passwords and extortion-able webcam sessions from GPU memory dumps – just sayin’) and the next few years are pretty much all lined up. Let’s take a look at what Senseye has to offer on the subject.

Note: Senseye is not intended as an automated push-button tool but rather as a long list of instruments and measuring techniques for the informed explorer, so there are a lot of trade-offs to consider that remove the possibility of an optimal solution here — but the approach could of course be lifted into a dedicated tool.

Which parts of Senseye would be useful for experimenting with this? Well, for classification we have histogram matching and pattern matching. Both of them require some kind of reference picture that is related to what you are looking for; both are also rather young features. At the time of writing, they can’t load presets from external files (these have to be part of the sensor data stream), and the comparison functions included in the pattern matching feature all work on the form:

Reference image + per pixel comparison shader = 1-bit output. Sum output and compare against threshold value. Collect Underpants. Profit.
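
A CPU analogue of that pipeline, as a hedged sketch (the real comparison runs as a shader inside Senseye; this is just plain Python to show the shape of the computation, and the tolerance/threshold values are made up):

# CPU sketch of "reference image + per-pixel comparison -> 1-bit output,
# sum, compare against threshold". Names and constants are illustrative.

def matches_reference(candidate, reference, tolerance=16, threshold=0.9):
    """candidate/reference: equally sized lists of (r, g, b) tuples."""
    hits = 0
    for (cr, cg, cb), (rr, rg, rb) in zip(candidate, reference):
        close = (abs(cr - rr) <= tolerance and
                 abs(cg - rg) <= tolerance and
                 abs(cb - rb) <= tolerance)
        hits += 1 if close else 0       # the "1-bit output" per pixel
    return hits / len(reference) >= threshold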

[Screenshots: bigram (tuple) mappings of the sample images]

With the changes planned for 0.3 the ‘automatic search’ part can probably be fulfilled, so we’ll wait with a classification discussion until then. The screenshots above were taken by just bigram-mapping (first byte X, second byte Y) the same sample images used in the training grounds section further below. But judging from them, it seems like a lot of pictures could probably be classified by modelling bigrams as a distance function from the XY diagonal (or something actually clever; my computer graphics background is pretty much all a mix of high-school level math, curiosity and insomnia).
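
For reference, the bigram mapping and the diagonal-distance idea can be sketched in a few lines (illustrative Python; ‘diagonal_score’ is my own naming, not a Senseye feature):

# Sketch: map consecutive byte pairs (bigrams) to XY coordinates and measure
# how tightly they hug the X = Y diagonal, following the observation above
# that picture data tends to cluster near it.

def bigram_points(data):
    """(first byte, second byte) pairs over a raw byte buffer."""
    return [(data[i], data[i + 1]) for i in range(len(data) - 1)]

def diagonal_score(data):
    """1.0 = every bigram on the diagonal, 0.0 = maximally far away."""
    pts = bigram_points(data)
    if not pts:
        return 0.0
    mean_dist = sum(abs(x - y) for x, y in pts) / len(pts)
    return 1.0 - mean_dist / 255.0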

For tuning, we have the Picture Tuner, which has some limited capability for automatic tuning along with manual adjustments for the starting offset. Underlying design problems and coding issues related to working with a seriously outdated OpenGL version (2.1, because portability and drivers are shit) limit this in a few ways, with the big one being that the image size must be > the sample window width, and the maximum detectable width is also a function of the sample window size. This is usually not a problem unless you are looking for single icons.

The automatic tuner works roughly like this:

Unpack Shader (input buffer being RGBx) → Tuning Shader → Sample Tiles → CPU readback → Scoring Function.

This is then just repeated as a brute-force linear search through the range of useful widths, keeping the one with the highest score. The purpose of the tile-sampling stage is to cut down on memory bandwidth requirements and cost per evaluation (conservative numbers are some 256x256x4 bytes per buffer, 4-5 intermediate copy steps and some 2000-3000 evaluations per image).

The included scoring function looks for vertical continuity, meaning run lengths of similar vertical lines, discarding single-coloured block results. As can be seen in the picture below, the wrong pitch will give sharp breaks in vertical continuity (lower score). This also happens to favour borders and other common desktop-screenshot features.
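
A rough CPU-side approximation of that loop (the real implementation lives in shaders and samples tiles to keep readback costs down; this sketch brute-forces over a whole small buffer and skips the single-colour rejection):

# Sketch: brute-force the pitch by scoring vertical continuity for each
# candidate row stride (in bytes). With the correct stride, vertically
# adjacent samples stay similar; a wrong stride shears the image apart.

def continuity_score(data, stride, tolerance=8):
    score = 0
    for i in range(len(data) - stride):
        if abs(data[i] - data[i + stride]) <= tolerance:
            score += 1
    return score

def guess_pitch(data, candidate_strides):
    return max(candidate_strides, key=lambda s: continuity_score(data, s))

# e.g. guess_pitch(raw, range(64 * 3, 2048 * 3, 3)) for packed r8g8b8 data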

[Image: stage1]

Training Grounds

We will just pick a common enough combination and see where the rabbit hole takes us: RGB color format, progressive, 8 bits per channel, vertically oriented. We strip the alpha channel just because having it there would be cheating (hmm, every 4th byte suddenly turns into 0xff or has very low entropy, what could it ever be?).
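
That alpha giveaway is simple enough to show in a short sketch (illustrative Python; offsets and ratio are my own choices): scan the four possible byte offsets of a stride-4 buffer and see whether one of them is near-constant 0xff.

# Sketch of the "every 4th byte is 0xff" observation above: in an RGBA/RGBx
# buffer the fourth channel is often constant, which makes spotting it trivial.

def alpha_like_channel(data, ratio=0.95):
    """Return the byte offset (0-3) of a stride-4 channel that is almost
    always 0xff, or None if no such channel exists."""
    for offset in range(4):
        channel = data[offset::4]
        if channel and sum(1 for b in channel if b == 0xff) / len(channel) >= ratio:
            return offset
    return None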

Before going out into the harsh real world, let’s take a few needles:

[Sample images: SONY DSC, snake, sala centrifughe]

Note that the bunny snake (wouldn’t that be a terrifying cross-breed?) has some black padding added to simulate the “padding for alignment” case. Run them through imagemagick’s ‘convert’ utility to get a raw r8g8b8 out:

convert snake.png snake.rgb

and sandwich them between some nice slices of CSPRNG (because uniforms are hot) like this:


#!/bin/bash
# append a randomly sized (64-575 KiB) slice of CSPRNG output to the pile;
# a function, so that $RANDOM is re-rolled for every slice
noise() {
	/bin/dd if=/dev/urandom of=pile conv=notrunc \
		bs=1024 count=$(( RANDOM % 512 + 64 )) oflag=append
}

noise
for fn in ./*.rgb ; do
	# sandwich each raw image between noise slices
	/bin/dd if="$fn" of=pile conv=notrunc oflag=append
	noise
done

Loading it (from the build directory of a senseye checkout):

arcan -p ../res ../senseye &
sense_file ./pile

And, like all good cooking shows, this video here is a realtime recording of how it would go down:

There are a lot of corner cases that are not covered, as you can see in the first attempt to autotune Mr. Bunny-snake; usually the SCADA image takes more tries due to the high number of continuous regions that cause the evaluation tiles to be ignored. Running through the list of everything that is flawed or too immature with this process would make this already too lengthy post even worse, but good starting points would be the tile management: placement needs to be smarter, evaluation quicker, and readback transfers should pack multiple evaluation widths into the same buffer (the readback synchronization cost is insane).

Among the many things left as an exercise for the reader: try to open some pictures in a program of your liking, dump process memory with your favourite tool (I am slightly biased towards ECFS because of all the other cool features in the format) and try to dig out the pixels. When that is over and done with, look into something a lot more hardcore (which won’t work by just clicking around in the tools provided):

2015 DFRWS Forensic Challenge


Senseye 0.2

Senseye has received quite a lot of attention, fixes and enhancements over the last few months and is well overdue for a new tagged release. Highlights (not including tweaks, performance boosts, UI work and bugfixes) since last time include:

  • Translators: It is now possible to connect high-level decoders that track the selected cursor position in the data window to give more abstract views, e.g. Hex, ASCII, Disassembly (using Capstone)
  • New sensor: MFile – This sensor takes multiple input files of (preferably) the same format and shows them side by side in a tile- style layout, along with a support window that highlights
  • New measuring tool: Byte Distance – This view looks a lot like the normal histogram, but each bin shows the number of bytes that pass from a set marker position until the next time each possible value is found.
  • New visual tool: Picture Tuner – This tool is used for manually and/or automatically finding raw image parameters (stride, color format and so on)
  • Pattern Matching: Pattern finding using a visual pattern reference (like n-gram based mapping) and/or histogram matching.
  • Improved seeking and playback control: multiple stepping sizes to choose from, along with the option to align to a specific value.
  • The file sensor now updates its preview window progressively, and works a lot better with larger (multiple gigabyte) input sources.

True to form, the “quick” demo video below tries to show off all the new features. Be sure to watch with annotations enabled. In addition, a more detailed write-up on the picture tuner will be posted in a day or two. Make sure that you sync and rebuild Arcan before trying this out, as a lot of core engine changes have been made.

  • 0:00 – 1:20, MFile sensor.
  • 1:20 – 2:50, File enchantments, Coloring, Histogram Updates, Byte Distance.
  • 2:50 – 5:40, Memsense updates, Translator feature.
  • 5:40 – 7:00, Picture Tuner.

Next Experiment, Senseye

The development strategy behind Arcan has always been to work with experimental proofs of concept doing ‘traditional’ tasks in odd ways, and to use that as feedback to refactor and improve the engine, API, testing and documentation.

For instance — the video decoding, encoding and tagging experiments added process separation and greatly helped shape the shared memory interface. The arcade frontend experiments à la Gridle improved support for odd input combinations (2 mice, 3 keyboards and 5 gamepads? not a problem) and for synchronization with time-sensitive processes (the libretro frameserver) where buffering and other common solutions were not available. The AWB experiments helped define controlled and segmented data sharing, along with performance considerations in tricky UI situations (hierarchies of windows where size and position relied on dynamic sources: drag+resize and watch hell break loose).

The end goal, a portable graphics-focused backend for putting together embedded, mobile and desktop system interfaces while balancing security, performance, stability and no-nonsense ease of use, is still out of reach, but great strides have been made. The last couple of months have mostly been spent documenting, testing and working through the corners that dynamic multiscreen entails.

The next experiment in this regard is Senseye, which is targeted towards the more rugged of computing travellers: the reverse engineers, the security ‘enthusiasts’ and the system analysts. It is a tool for navigating and controlling non-native representations of large, unknown, binary data streams and blocks: statically in terms of files and dumps, and dynamically through live acquisition of memory from running processes.

The video above shows Senseye being used to navigate a suspicious binary and to poke around the memory pages allocated by pid 1 (still init, though in its twilight…).
