A few months back, there was a buzz on Twitter and Reddit about the possibility of automatically extracting raw images from memory dumps or live memory.
I was a bit indisposed at the time, but thought that now — with GPU malware and similar nastiness appearing on the horizon — was as good a time as any to contribute a tiny bit on the subject.
This post is the first in what will hopefully (the whole ‘if time permits’ thing) be a series that will double as a scratch pad for when I take the time to work on features for Senseye.
The initial Q from the neighbourly master of hiding things inside things that are themselves hiding inside of other things, @angealbertini went exactly like this:
any script/tool worth checking to automagically identify raw pictures (and their width) in memory dumps?
The discussion that followed is summarized rather well in this blogpost by @gynvael.
Let's massage this little problem a bit and break it down:
Classification → Format detection → Pitch tuning → Edge detection.
Classification
The first part of the problem is distinguishing the bytes that correspond to what we want from the bytes that are irrelevant; in other words, grouping bytes based on some property or pattern and identifying each group as either what we are looking for (our pixel buffer) or as something to discard — finding the outline of the image in our virtual pile.
The problem lies in the beholder's eyes; consider the following images:
Their respective statistical profiles are all quite different and appear distinct here, but rip them out of the context of a formatted and rendered webpage, hide them in some 4 GB of VRAM (where they are probably hiding on your computer as you read this) and then try to recover them.
Which ones are the most interesting to you will, of course, depend on context, as will the criteria that make them stand out from other bits in a bytestream, so we will need some way of specifying what we want.
The options that immediately come to mind:
- “Human Vision” – This means cheating and letting a human help us. That already means we slightly fail the »automatically« part, and corresponds to the solutions from the discussion link.
- Various degrees and levels of statistics and signal processing hurt – Histogram matching against databases and auto-correlation are possibilities, albeit rather expensive ones (a rough sketch follows below).
- Magical machine learning algorithms – These require proper training, regular massage and a hefty portion of luck, and can still mistake a penguin for a polar bear (no cigar).
Add to that the traditionally ‘easy’ ones – context hints in the case of pointers; metadata leftovers; intercepting execution and so on. These have been done to death by forensics tools already, and I would consider them outside the scope here.
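To make the second option a bit more concrete, here is a minimal sketch (Python and numpy, with made-up window size, step and threshold values; nothing here corresponds to any existing tool) of sliding a window over a dump and comparing its byte-value histogram against that of a reference image:

import numpy as np

# Sketch: slide a window over a dump and compare its byte-value histogram
# against a reference image. Window, step and threshold values are made up.
def byte_histogram(buf):
    hist = np.bincount(np.frombuffer(buf, dtype=np.uint8), minlength=256)
    return hist / max(hist.sum(), 1)

def chi_square(a, b, eps=1e-9):
    return np.sum((a - b) ** 2 / (a + b + eps))

def find_candidates(dump, reference, win=65536, step=4096, threshold=0.2):
    ref_hist = byte_histogram(reference)
    for ofs in range(0, len(dump) - win, step):
        score = chi_square(byte_histogram(dump[ofs:ofs + win]), ref_hist)
        if score < threshold:      # low distance, similar byte distribution
            yield ofs, score

Expensive is the operative word: every window position costs a full pass over the window, and a 4 GB dump at a 4 KB step already gives around a million evaluations.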
Format Detection
»Assuming that we manage to actually classify parts of a byte stream as belonging to a pixel buffer«, we then have the matter of determining the underlying storage format. While it may be tempting to think that this would just be a matter of some reds, greens and blues — that is hopelessly naive and quite far from the harsher reality: depending on the display context (graphics adapter, output signal connector, state in the graphics processing chain and so on) there is a large space of possible raw (uncompressed) pixel storage formats.
To name a few parameters that ought to at least be considered:
- layout – interlaced, progressive, planar, tiled (tons of GPU and driver specific formats)
- orientation – horizontal, vertical, bi- / quad- directional, polar(?).
- numerical format – floating point, integral
- channel count/depth – [1,3,4] channels * [8, 10, 15, 16, 24, 30, 32, …] bits per channel
- color space – [monochromatic, RGB, YUV, CMYK, HSV, HSL] * [linear, non-linear] * [indexed (palette) or direct]
- row padding – pixel buffers may at times need to be padded to fit power-of-two or perhaps 16-byte vector instruction alignments, see also Image stride.
Along with the possibility of interleaving additional buffers in padding byte areas, and … I’ve probably missed quite a few options here. Even compressed vs. uncompressed images can ‘look’ surprisingly similar:
One of the two images above is decodable without considering compression, the other is not.
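Just to illustrate how quickly the combinations multiply, here is a small sketch (Python and numpy; the file name, width and parameter lists are arbitrary picks, not an inventory of real adapter formats) that reinterprets the same chunk of raw bytes under a handful of assumed packings:

import numpy as np

# Sketch: reinterpret one raw buffer under a few assumed pixel packings.
# The channel counts, depths and width are illustrative picks only.
def reinterpret(raw, width, channels, dtype):
    itemsize = np.dtype(dtype).itemsize
    usable = (len(raw) // itemsize) * itemsize
    px = np.frombuffer(raw[:usable], dtype=dtype)
    stride = width * channels
    rows = len(px) // stride
    return px[:rows * stride].reshape(rows, width, channels)

raw = open("dump.bin", "rb").read()      # hypothetical dump file
for channels in (1, 3, 4):
    for dtype in (np.uint8, np.uint16, np.float32):
        img = reinterpret(raw, width=256, channels=channels, dtype=dtype)
        print(channels, np.dtype(dtype).name, img.shape)

That is already nine interpretations from just two of the parameters above, before even touching layout, orientation, color space or padding.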
Pitch Tuning and Edge Detection
After finding a possible pixel buffer and detecting or forcing a specific format, the next step should be finding out what pitch (width) it has, and my guess is that we will need some heuristic (fancy word for evaluation function) to move forward.
This heuristic will, much like whatever search strategy was used for classification, have its fair share of false positives, false negatives and, hopefully, true positives (matches). The images below are from three different automatic pitch detection runs with different testing heuristics.
With the width figured out, all that remains is the matter of the beginning, the end (end minus beginning, divided by the pitch in bytes, gives the height) and an offset (because chances are that our search window landed a few bytes in, as is the case with the rightmost image above).
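For completeness, the bookkeeping once the edges and the pitch are known is trivial (a sketch; the numbers plugged in at the end are invented, and bytes_per_pixel assumes a packed r8g8b8 buffer):

# Sketch: derive width/height once the top edge, bottom edge and pitch
# (bytes per row) are known. bytes_per_pixel assumes a packed r8g8b8 buffer.
def geometry(begin, end, pitch_bytes, bytes_per_pixel=3):
    height = (end - begin) // pitch_bytes
    width = pitch_bytes // bytes_per_pixel
    return width, height

print(geometry(begin=4096, end=200704, pitch_bytes=768))   # (256, 256)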
For the most part, this last step should be reducible to applying an edge detection filter (like a Sobel operator) and then looking for horizontal and vertical lines. Note: this will be something of a problem for multiple similar images (or gradients) stacked tightly after each other.
The image above shows two copies of a horizontally easy and vertically moderately difficult case, with an edge detection filter applied to one of them.
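If you want to play along at home, a Sobel pass is only a few lines with numpy and scipy at hand (a sketch of the general idea applied to an already decoded grayscale buffer, not what Senseye does internally; the factor used to pick out candidate edges is arbitrary):

import numpy as np
from scipy import ndimage

# Sketch: Sobel edge magnitude over a decoded grayscale candidate buffer,
# then pick rows/columns whose mean magnitude clearly exceeds the global mean.
def edge_magnitude(img):
    f = img.astype(np.float32)
    gx = ndimage.sobel(f, axis=1)     # horizontal gradient
    gy = ndimage.sobel(f, axis=0)     # vertical gradient
    return np.hypot(gx, gy)

def candidate_edges(img, factor=2.0):
    mag = edge_magnitude(img)
    rows = np.where(mag.mean(axis=1) > factor * mag.mean())[0]   # horizontal lines
    cols = np.where(mag.mean(axis=0) > factor * mag.mean())[0]   # vertical lines
    return rows, cols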
Another alternative (or complement) for detecting the edges would be to do a histogram comparison on a per-row basis, as the row-to-row changes are usually rather small.
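A sketch of that per-row variant as well (same hedging applies; the bin count and cut-off are arbitrary, and the input is assumed to be an already decoded buffer):

import numpy as np

# Sketch: flag rows whose histogram differs sharply from the previous row,
# as candidate top/bottom edges. Bin count and cut-off are arbitrary.
def row_histograms(img, bins=32):
    return np.stack([np.histogram(row, bins=bins, range=(0, 255))[0]
                     for row in img]).astype(np.float32)

def histogram_breaks(img, cutoff=0.5):
    h = row_histograms(img)
    h /= np.maximum(h.sum(axis=1, keepdims=True), 1)    # normalize each row
    diff = np.abs(h[1:] - h[:-1]).sum(axis=1)            # L1 distance row-to-row
    return np.where(diff > cutoff)[0] + 1                 # indices of sharp breaks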
Approach
OK, so developing, selecting, and evaluating solutions for all of the above is within the reasonable scope of a Ph.D. thesis, lovely. Massage the regular ‘needle in a haystack’ cash cows such as forensics for finding traces of child pornography or drawings of airport security checkpoints (while secretly polishing your master plan of scraping credit card numbers, passwords and extortion-able webcam sessions from GPU memory dumps – just sayin’) and the next few years are pretty much all lined up. Let's take a look at what Senseye has to offer on the subject.
Note: Senseye is not intended as an automated push-button tool but rather as a long list of instruments and measuring techniques for the informed explorer, so there are a lot of trade-offs to consider that remove the possibility of an optimal solution here — but the approach could of course be lifted into a dedicated tool.
Which parts of Senseye would be useful for experimenting with this? Well, for classification we have histogram matching and pattern matching. Both of them require some kind of reference picture related to what you are looking for; both are also rather young features. At the time of writing, they can't load presets from external files (they have to be part of the sensor data stream) and the comparison functions included in the pattern matching feature all work in the following form:
Reference image + per pixel comparison shader = 1-bit output. Sum output and compare against threshold value. Collect Underpants. Profit.
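Stripped of the GPU plumbing, that boils down to something like the following (a CPU-side sketch in Python; the tolerance and threshold values are invented, and the actual comparison shaders of course differ):

import numpy as np

# Sketch: per-pixel comparison, 1-bit output per pixel, sum, threshold.
# Tolerance and threshold are invented values.
def matches(reference, candidate, tolerance=8, threshold=0.9):
    diff = np.abs(reference.astype(np.int16) - candidate.astype(np.int16))
    bits = np.all(diff <= tolerance, axis=-1)        # 1-bit output per pixel
    return bits.mean() >= threshold                   # enough matching pixels?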
With the changes planned for 0.3, the ‘automatic search’ part can probably be fulfilled, so we'll hold off on the classification discussion until then. The screenshots above were taken by simply bigram-mapping (first byte X, second byte Y) the same sample images used in the training grounds section further below. Judging from them, it seems like a lot of pictures could probably be classified by modelling bigrams as a distance function from the XY diagonal (or something actually clever; my computer graphics background is pretty much all a mix of high-school level math, curiosity and insomnia).
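If you want to poke at the bigram idea yourself, the mapping and the naive diagonal-distance score are a few lines (a sketch; the score is exactly the naive diagonal distance mentioned above, nothing cleverer):

import numpy as np

# Sketch: bigram map (first byte on X, second byte on Y) of a buffer, plus a
# naive score for how much of the mass sits near the X=Y diagonal.
def bigram_map(buf):
    b = np.frombuffer(buf, dtype=np.uint8)
    counts = np.zeros((256, 256), dtype=np.uint64)
    np.add.at(counts, (b[:-1], b[1:]), 1)             # accumulate (x, y) byte pairs
    return counts

def diagonal_distance(counts):
    x, y = np.indices(counts.shape)
    dist = np.abs(x - y)                               # distance from the diagonal
    return (counts * dist).sum() / max(counts.sum(), 1)   # low value: smooth data

Smooth natural images tend to have neighbouring bytes of similar value, so their mass hugs the diagonal, while noise and compressed data spread out over the whole plane.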
For tuning, we have the Picture Tuner, which has some limited capability for automatic tuning along with manual adjustments for the starting offset. Underlying design problems and coding issues related to working with a seriously outdated OpenGL version (2.1, because portability and drivers are shit) limit this in a few ways, the big one being that the image size must be greater than the sample window width, and the maximum detectable width is also a function of the sample window size. This is usually not a problem unless you are looking for single icons.
The automatic tuner works roughly like this:
Unpack Shader (input buffer being RGBx) → Tuning Shader → Sample Tiles → CPU readback → Scoring Function.
This is simply repeated as a brute-force linear search through the range of useful widths, keeping the one with the highest score. The purpose of the tile-sampling stage is to cut down on memory bandwidth requirements and cost per evaluation (conservative numbers are some 256x256x4 bytes per buffer, 4-5 intermediate copy steps and some 2000-3000 evaluations per image).
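In Python-flavoured pseudo code, the outer loop is nothing more exotic than this (score() stands in for whatever evaluation heuristic gets plugged in; one such candidate is sketched after the next paragraph, and the width range and bytes-per-pixel are arbitrary):

import numpy as np

# Sketch: brute-force linear search over candidate widths, keep the best score.
# score(img) is whatever evaluation heuristic gets plugged in.
def tune_pitch(raw, score, bytes_per_pixel=3, min_w=32, max_w=2048):
    best = (None, float("-inf"))
    for width in range(min_w, max_w + 1):
        stride = width * bytes_per_pixel
        rows = len(raw) // stride
        if rows < 2:
            break
        img = np.frombuffer(raw[:rows * stride], dtype=np.uint8)
        s = score(img.reshape(rows, width, bytes_per_pixel))
        if s > best[1]:
            best = (width, s)
    return best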
The included scoring function looks for vertical continuity, meaning run lengths of similar vertical lines, while discarding single-coloured block results. As can be seen in the picture below, the wrong pitch will give sharp breaks in vertical continuity (and thus a lower score). This also happens to favour borders and other common desktop-screenshot features.
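A crude CPU stand-in for that idea, rewarding adjacent rows that stay similar while ignoring flat single-colour stretches (thresholds invented, and far simpler than the shader/tile version described above):

import numpy as np

# Sketch: vertical continuity score. Similar adjacent rows are rewarded,
# flat (single colour) row pairs are ignored so empty regions do not dominate.
def vertical_continuity(img, tolerance=12):
    f = img.astype(np.int16)
    flat = (f.max(axis=(1, 2)) - f.min(axis=(1, 2))) < tolerance     # boring rows
    similar = np.abs(f[1:] - f[:-1]).mean(axis=(1, 2)) < tolerance   # row vs. next
    useful = ~(flat[1:] & flat[:-1])                                  # skip flat pairs
    return (similar & useful).sum() / max(useful.sum(), 1)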
Training Grounds
We will just pick a common enough combination and see where the rabbit hole takes us: RGB color format, progressive, 8 bits per channel, vertically oriented. We strip the alpha channel just because having it there would be cheating (hmm, every 4th byte suddenly turns into 0xff or has very low entropy, what could it ever be?).
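That parenthesis is, by the way, a usable detector on its own; a tiny sketch that checks whether any of the four byte phases is suspiciously constant, which would give a packed RGBA buffer away (the threshold is invented):

import numpy as np

# Sketch: does every 4th byte look like an alpha channel? Check each of the
# four possible phases for a near-constant byte stream.
def alpha_phase(buf, threshold=0.95):
    b = np.frombuffer(buf, dtype=np.uint8)
    for phase in range(4):
        channel = b[phase::4]
        if len(channel) == 0:
            continue
        dominant = np.bincount(channel, minlength=256).max()
        if dominant / len(channel) >= threshold:
            return phase          # this byte position is (almost) constant
    return None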
Before going out into the harsh real world, let's take a few needles:
Note that the bunny snake (wouldn't that be a terrifying cross-breed?) has some black padding added to simulate the “padding for alignment” case. Run them through ImageMagick's ‘convert’ utility to get raw r8g8b8 out:
convert snake.png snake.rgb
and sandwich them between some nice slices of CSPRNG (because uniforms are hot) like this:
#!/bin/bash
noisecmd="/bin/dd if=/dev/urandom of=pile conv=notrunc \
  bs=1024 count=$(expr $RANDOM % 512 + 64) oflag=append"

$noisecmd

for fn in ./*.rgb ; do
    /bin/dd if=$fn of=pile conv=notrunc oflag=append
    $noisecmd
done
Loading it (from the build directory of a senseye checkout):
arcan -p ../res ../senseye & sense_file ./pile
And, like all good cooking shows, this video here is a realtime recording of how it would go down:
There are a lot of corner cases that are not covered, as you can see in the first attempt to autotune Mr. Bunny-snake; the SCADA image usually takes more tries due to the high number of continuous regions that cause the evaluation tiles to be ignored. Running through the list of everything that is flawed or too immature in this process would make this already too lengthy post even worse, but a good starting point would be the tile management: placement needs to be smarter, evaluation quicker, and readback transfers should pack multiple evaluation widths into the same buffer (the readback synchronization cost is insane).
Among the many things left as an exercise for the reader: open some pictures in a program of your liking, dump process memory with your favourite tool (I am slightly biased towards ECFS because of all the other cool features in the format) and try to dig out the pixels. When that is over and done with, look into something a lot more hardcore (which won't work by just clicking around in the tools provided):