Just wondering if anyone has toyed with the idea of exporting the entire contents of a disk image into a SQL database (be it MySQL, Postgres, etc.)?
Sure, this would increase storage overhead, but it would increase speed phenomenally when processing items within the image. That said, to what extent does pyFLAG go when it creates its db?
To what end would you store the image in the database? What schema would you use? What would you do with the results?
Pyflag stores things like file/inode properties, analysed results, etc. We generally don't store the contents of files in the db because that would be much slower than just using a real image file.
Correct me if I am wrong, but to work on an image in the conventional manner, we have an image file (essentially one big non-relational database) mounted via loopback, with a filesystem driver interpreting the contents of that file. The files are spread throughout that image, potentially across numerous sectors, and non-consecutive at that.
Seems like a fair amount of overhead and disk seeking to me.
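To make that seek overhead concrete, here's a minimal Python sketch (function and variable names are mine, purely illustrative) of what a filesystem driver effectively does for a fragmented file: reassemble it from non-consecutive runs inside a flat image file, one seek per fragment.

```python
def reassemble(image_path, runs):
    """Rebuild a file's contents from non-contiguous (offset, length)
    runs inside a flat image file -- roughly what a filesystem driver
    has to do for a fragmented file."""
    chunks = []
    with open(image_path, "rb") as img:
        for offset, length in runs:
            img.seek(offset)          # each fragment costs a seek
            chunks.append(img.read(length))
    return b"".join(chunks)
```

Every fragmented file read becomes a series of seeks across the image, which is the overhead being complained about above.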
**It also means that I can only examine the data with the tools that can interact with the data in this manner.**
The idea behind munging the entire image into a db is that it then becomes accessible to any tool for the examiner to interrogate it with.
If we were to store the contents of each file as a BLOB in a SQL-compliant database, surely this would reduce overhead and increase processing throughput.
I haven't devoted much (i.e. any) time to designing a good schema for it yet, but I imagine you could create a table for the MFT (or equivalent), another for file properties (MAC times, etc.), one for the actual files, a separate table for the unallocated clusters, and perhaps one for file slack. What are your thoughts?
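As a rough starting point, that layout could be sketched like this with Python's sqlite3 (in-memory here for illustration; a real case would use MySQL/Postgres/etc., and all table and column names below are my guesses, not anything PyFLAG actually uses):

```python
import sqlite3

# In-memory database for illustration only; the schema is a rough
# guess at the table-per-category layout described above.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE mft_entries (      -- MFT (or equivalent) records
    inode INTEGER PRIMARY KEY,
    name  TEXT
);
CREATE TABLE file_properties (  -- MAC times and friends
    inode INTEGER REFERENCES mft_entries(inode),
    mtime INTEGER, atime INTEGER, ctime INTEGER
);
CREATE TABLE file_contents (    -- actual file data as BLOBs
    inode INTEGER REFERENCES mft_entries(inode),
    data  BLOB
);
CREATE TABLE unallocated_clusters (
    cluster INTEGER PRIMARY KEY,
    data    BLOB
);
CREATE TABLE file_slack (
    inode INTEGER REFERENCES mft_entries(inode),
    data  BLOB
);
""")

# Any SQL-capable tool could then interrogate the evidence directly:
db.execute("INSERT INTO mft_entries VALUES (5, 'evidence.txt')")
db.execute("INSERT INTO file_contents VALUES (5, ?)", (b"hello",))
row = db.execute("""SELECT m.name, c.data
                    FROM mft_entries m
                    JOIN file_contents c USING (inode)""").fetchone()
```

The appeal is that once the data is in tables like these, any SQL client becomes an examination tool; the cost is duplicating the image's contents into BLOBs.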