sleuthkit-users Mailing List for The Sleuth Kit (Page 45)
Brought to you by:
carrier
From: Alex N. <ajn...@cs...> - 2014-03-15 22:38:38
|
If you would like to show multiple time attributes using DFXML and Fiwalk, we would need to first design what the XML would look like. A Python API for reading the multiple timestamps wouldn't be hard to write after the design's done. I just took a stab at designing some extension attributes, and would appreciate feedback: https://github.com/dfxml-working-group/dfxml_schema/issues/16

By the way, to my knowledge, there are this many timestamp sets (four timestamps per set) available per file:

* 1 from $STANDARD_INFORMATION.
* 1 from each $FILE_NAME.
* 1 from each directory entry that references the file.

So that's typically 12-16 timestamps per file (noting that NTFS does make use of multiple hard links). Unfortunately, I think the multiplicities make extending the bodyfile format impractical. I welcome corrections to this reasoning if you have them; I did that design from memory of the File System Forensic Analysis book and some NTFS docs I googled.

--Alex

On Mar 14, 2014, at 10:10 , RB <ao...@gm...> wrote:
> On Fri, Mar 14, 2014 at 7:27 AM, Brian Carrier <ca...@sl...> wrote:
>> tsk_gettimes will display two lines for each file. One with times from STD_INFO and one from $FILE_NAME. It has the limitation that if there are multiple $FILE_NAME attributes, it shows only one.
>
> I see that, and that's pretty cool - regardless of the limitation that's extremely useful. Looks like "fls -arpm/" may be permanently replaced for me now.
>
> ------------------------------------------------------------------------------
> Learn Graph Databases - Download FREE O'Reilly Book
> "Graph Databases" is the definitive new guide to graph databases and their applications. Written by three acclaimed leaders in the field, this first edition is now available. Download your free book today!
> http://p.sf.net/sfu/13534_NeoTech
> _______________________________________________
> sleuthkit-users mailing list
> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users
> http://www.sleuthkit.org
|
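The two-rows-per-file output described in this thread can be regrouped programmatically. A minimal sketch, assuming the TSK 3.x body format (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime) and assuming $FILE_NAME rows are tagged with a "($FILE_NAME)" suffix on the name; the sample rows below are hand-made, not real tool output:

```python
# Sketch: group the two bodyfile rows tsk_gettimes emits per file.
# Assumes TSK 3.x body format and a "($FILE_NAME)" name suffix on $FILE_NAME rows.
from collections import defaultdict

def parse_bodyfile(lines):
    by_name = defaultdict(dict)
    for line in lines:
        fields = line.rstrip("\n").split("|")
        if len(fields) != 11:
            continue  # skip malformed rows
        name = fields[1]
        atime, mtime, ctime, crtime = (int(x) for x in fields[7:11])
        key = name.replace(" ($FILE_NAME)", "")
        which = "FILE_NAME" if "($FILE_NAME)" in name else "STD_INFO"
        by_name[key][which] = {"atime": atime, "mtime": mtime,
                               "ctime": ctime, "crtime": crtime}
    return dict(by_name)

rows = [
    "0|/secret.txt|64|r/rrwxrwxrwx|0|0|12|1394828858|1394828858|1394828858|1394828800",
    "0|/secret.txt ($FILE_NAME)|64|r/rrwxrwxrwx|0|0|12|1394828800|1394828800|1394828800|1394828800",
]
parsed = parse_bodyfile(rows)
print(parsed["/secret.txt"]["STD_INFO"]["mtime"])  # 1394828858
```

Comparing the STD_INFO and FILE_NAME dictionaries per file is then a matter of a simple diff, which is often the point of collecting both sets.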
From: Alex N. <ajn...@cs...> - 2014-03-15 21:58:17
|
Short answer: Nothing's immediately available. Something would not be hard to code up, though.

I think the quickest answer you'd get---person-quick, not necessarily machine-quick---would be just summing the size of all allocated files and directories, identified by inode/MFT entry to account for multiple hard links. Subtract that sum from the image size. You could use tsk_loaddb or fiwalk to do the walk for sizes.

Do note that this will be a rough estimate of allocated space usage. You'd get allocated regular files, likely the dominant space consumers; but you might not get directories (sometimes noted as 0 bytes long), or hidden or otherwise irregular metadata (e.g. alternate data streams and indices in NTFS). I think Windows' Volume Shadow Copies will also be missed with The Sleuth Kit's current tooling (anyone else, please correct me if I've missed something recent). But in most cases, the allocated files would be good enough for eyeballing.

--Alex

On Mar 14, 2014, at 09:38 , Brian McHughs <br...@in...> wrote:
> Is there a command that will give me the Allocated and Unallocated data sizes of an E01 image?
>
> I would like to be able to quickly look at an E01 image file and see how much of its contents are allocated vs unallocated.
>
> Thanks!
> Brian McHughs
>
> br...@in... (email)
> www.indexed.io (web)
> 888.840.0709 x101 (office)
> 303.900.3364 (cell)
|
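The estimate described above can be sketched in a few lines. This is a toy illustration of the dedup-by-inode arithmetic only, not a fiwalk/tsk_loaddb integration; the records are hand-made stand-ins for what a real file-system walk would return:

```python
# Sketch: sum allocated file sizes, deduplicated by inode so hard links
# count once, then subtract from the image size for the unallocated estimate.

def allocated_bytes(records):
    seen = {}
    for rec in records:
        if not rec["alloc"]:
            continue
        seen[rec["inode"]] = rec["size"]  # hard links share an inode: count once
    return sum(seen.values())

records = [
    {"inode": 5, "alloc": True,  "size": 4096},
    {"inode": 6, "alloc": True,  "size": 1024},
    {"inode": 6, "alloc": True,  "size": 1024},  # hard link to inode 6
    {"inode": 7, "alloc": False, "size": 8192},  # deleted: not counted
]
image_size = 16384
alloc = allocated_bytes(records)
print(alloc, image_size - alloc)  # 5120 11264
```

As noted above, the result undercounts directories, alternate data streams, and other irregular metadata, so treat it as an eyeball figure.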
From: Emmanuelle D. <emm...@gm...> - 2014-03-15 18:41:31
|
Hi,

A question for the Scalpel developer team: Scalpel runs natively on Mac OS X, which uses two filesystems: AFS ("Apple File System", a legacy from Mac OS 9) and UFS ("Unix File System", from BSD - where it's called FFS, for "Fast File System"). Obviously, Scalpel has to be able to scan/carve both filesystems. FreeBSD, as a member of the BSD UNIX family, uses FFS (UFS) too; so, logically, Scalpel - say, the Linux version of it - ought to be able to properly scan/carve disk partitions formatted in FFS by FreeBSD. Am I correct?

Thanks for your prompt reply (and if I'm missing something, please let me know what).

Emmanuelle delouvée
|
From: RB <ao...@gm...> - 2014-03-14 14:10:38
|
On Fri, Mar 14, 2014 at 7:27 AM, Brian Carrier <ca...@sl...> wrote:
> tsk_gettimes will display two lines for each file. One with times from STD_INFO and one from $FILE_NAME. It has the limitation that if there are multiple $FILE_NAME attributes, it shows only one.

I see that, and that's pretty cool - regardless of the limitation that's extremely useful. Looks like "fls -arpm/" may be permanently replaced for me now.
|
From: Brian C. <ca...@sl...> - 2014-03-14 13:44:12
|
tsk_gettimes will display two lines for each file. One with times from STD_INFO and one from $FILE_NAME. It has the limitation that if there are multiple $FILE_NAME attributes, it shows only one.

On Mar 13, 2014, at 6:48 PM, RB <ao...@gm...> wrote:
> On Thu, Mar 13, 2014 at 4:10 PM, Brian Carrier <ca...@sl...> wrote:
>> tsk_loaddb is fixed in the develop branch.
>
> Excellent, thank you - tsk_gettimes -m is sufficing for what I need right now.
>
> I know we discussed the 8-timestamp thing a while ago (Simson included), and the initial decision was to put that out as more columns. Has any progress been made on that? I'd love to see either (or both) TSK and fiwalk produce that kind of data with a runtime flag.
|
From: Brian M. <br...@in...> - 2014-03-14 13:38:57
|
Is there a command that will give me the Allocated and Unallocated data sizes of an E01 image?

I would like to be able to quickly look at an E01 image file and see how much of its contents are allocated vs unallocated.

Thanks!
Brian McHughs

br...@in... (email)
www.indexed.io (web)
888.840.0709 x101 (office)
303.900.3364 (cell)
|
From: Grundy B. J T. <Bar...@ti...> - 2014-03-14 12:10:39
|
Have you run mmls on the image to determine if there is a file system offset?

/*******************************************
Barry J. Grundy
Assistant Special Agent in Charge
Digital Forensic Support Group
Electronic Crimes and Intelligence Division
Treasury Inspector General for Tax Administration
(301) 210-8741 (w)
(202) 527-5778 (c)
Bar...@ti...
********************************************\

From: Brian McHughs [mailto:br...@in...]
Sent: Thursday, March 13, 2014 6:10 PM
To: sle...@li...
Subject: [sleuthkit-users] tsk_recover E01 extraction issues

I have an image file (split King.E01, King.E02) that I'm trying to utilize the commandline tsk_recover to extract all allocated files into a specified output directory.

The command I'm running:

tsk_recover ./King.E01 ./Output

I get:

Cannot determine file system type (Sector offset: 0)
Files Recovered: 0

So I updated my command to:

tsk_recover -f fat ./King.E01 ./Output

I get:

Invalid magic value (Not a FATFS file system (magic)) (Sector offset: 0)
Files Recovered: 0

My goal is to simply extract everything in the E01 image files out into the Output directory. Can anyone please tell me what I'm missing?

Environment
MAC: OS X 10.9.1

thanks,
Brian McHughs

br...@in... (email)
www.indexed.io (web)
888.840.0709 x101 (office)
303.900.3364 (cell)
|
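The suggestion above can be automated: read mmls output to find each partition's starting sector, then pass that sector to tsk_recover's -o flag. A sketch only; the sample output below is hand-written in the usual mmls layout, and a real run may differ slightly:

```python
# Sketch: pick partition start sectors out of mmls output, for use with
# `tsk_recover -o <start>`. The sample text is hand-made, not captured output.
import re

MMLS_ROW = re.compile(r"^\s*\d+:\s+\S+\s+(\d+)\s+(\d+)\s+(\d+)\s+(.*\S)\s*$")

def partition_offsets(mmls_output):
    parts = []
    for line in mmls_output.splitlines():
        m = MMLS_ROW.match(line)
        if m:
            start, _end, _length, desc = m.groups()
            parts.append((int(start), desc))
    return parts

sample = """\
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors

      Slot      Start        End          Length       Description
000:  Meta      0000000000   0000000000   0000000001   Primary Table (#0)
001:  -------   0000000000   0000000062   0000000063   Unallocated
002:  000:000   0000000063   0000096389   0000096327   NTFS (0x07)
"""
for start, desc in partition_offsets(sample):
    print(start, desc)
# The filesystem partition here starts at sector 63, so the recovery
# command would be, e.g.:  tsk_recover -o 63 ./King.E01 ./Output
```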
From: Simson G. <si...@ac...> - 2014-03-14 02:26:52
|
Hi, Stuart. I recall from your presentation before that you were doing something along these lines. You may wish to look at the DFXML toolkit, which provides a lot of this functionality.

As far as Java JNI bindings, my understanding is that these are already part of SleuthKit if you want them.

Regards,
Simson

On Mar 13, 2014, at 8:27 PM, Stuart Maclean <st...@ap...> wrote:
> By checksum do you mean the md5 hash field? If yes, I have some Java code, somewhat dormant but functional, that binds via JNI to the Sleuthkit library. And one sample program actually walks the filesystem and runs md5sum over each file's main attribute. You are welcome to it. Build instructions are a bit sketchy still; it's Maven based with a bit of make to do the C parts.
>
> On a related note, I have some bodyfile object comparison logic that might prove useful for comparing filesystems.
>
> Stuart
|
From: Stuart M. <st...@ap...> - 2014-03-14 00:47:51
|
By checksum do you mean the md5 hash field? If yes, I have some Java code, somewhat dormant but functional, that binds via JNI to the Sleuthkit library. And one sample program actually walks the filesystem and runs md5sum over each file's main attribute. You are welcome to it. Build instructions are a bit sketchy still; it's Maven based with a bit of make to do the C parts.

On a related note, I have some bodyfile object comparison logic that might prove useful for comparing filesystems.

Stuart
|
From: Gqman10 <gq...@gm...> - 2014-03-14 00:08:08
|
Brian,

Try using affuse to make it look like one big file.

Sent from my iPhone

> On Mar 13, 2014, at 6:09 PM, Brian McHughs <br...@in...> wrote:
>
> I have an image file (split King.E01, King.E02) that I'm trying to utilize the commandline tsk_recover to extract all allocated files into a specified output directory.
>
> command I'm running:
>
> tsk_recover ./King.E01 ./Output
>
> I get:
> Cannot determine file system type (Sector offset: 0)
> Files Recovered: 0
>
> So I updated my command to:
>
> tsk_recover -f fat ./King.E01 ./Output
>
> I get:
> Invalid magic value (Not a FATFS file system (magic)) (Sector offset: 0)
> Files Recovered: 0
>
> My goal is to simply extract everything in the E01 image files out into the Output directory. Can anyone please tell me what I'm missing?
>
> Environment
> MAC: OS X 10.9.1
>
> thanks,
> Brian McHughs
>
> br...@in... (email)
> www.indexed.io (web)
> 888.840.0709 x101 (office)
> 303.900.3364 (cell)
|
From: Alex N. <ajn...@cs...> - 2014-03-13 23:05:02
|
Is libewf built into your binaries? That is, does tsk_recover show ewf images as an available format? For example, here's my '-i list' output:

$ tsk_recover -i list
Supported image format types:
        raw (Single or split raw file (dd))
        aff (Advanced Forensic Format)
        afd (AFF Multiple File)
        afm (AFF with external metadata)
        afflib (All AFFLIB image formats (including beta ones))
        ewf (Expert Witness format (encase))

--Alex

On Mar 13, 2014, at 18:09 , Brian McHughs <br...@in...> wrote:
> I have an image file (split King.E01, King.E02) that I'm trying to utilize the commandline tsk_recover to extract all allocated files into a specified output directory.
>
> command I'm running:
>
> tsk_recover ./King.E01 ./Output
>
> I get:
> Cannot determine file system type (Sector offset: 0)
> Files Recovered: 0
>
> So I updated my command to:
>
> tsk_recover -f fat ./King.E01 ./Output
>
> I get:
> Invalid magic value (Not a FATFS file system (magic)) (Sector offset: 0)
> Files Recovered: 0
>
> My goal is to simply extract everything in the E01 image files out into the Output directory. Can anyone please tell me what I'm missing?
>
> Environment
> MAC: OS X 10.9.1
>
> thanks,
> Brian McHughs
>
> br...@in... (email)
> www.indexed.io (web)
> 888.840.0709 x101 (office)
> 303.900.3364 (cell)
|
From: RB <ao...@gm...> - 2014-03-13 22:48:34
|
On Thu, Mar 13, 2014 at 4:10 PM, Brian Carrier <ca...@sl...> wrote:
> tsk_loaddb is fixed in the develop branch.

Excellent, thank you - tsk_gettimes -m is sufficing for what I need right now.

I know we discussed the 8-timestamp thing a while ago (Simson included), and the initial decision was to put that out as more columns. Has any progress been made on that? I'd love to see either (or both) TSK and fiwalk produce that kind of data with a runtime flag.
|
From: Brian C. <ca...@sl...> - 2014-03-13 22:10:46
|
tsk_loaddb is fixed in the develop branch.

On Mar 13, 2014, at 5:13 PM, Brian Carrier <ca...@sl...> wrote:
> Hmm, yea we need to fix that error in tsk_loaddb. Not sure when that was introduced.
>
> The easier answer though is to use 'tsk_gettimes -m'.
>
> On Mar 13, 2014, at 4:43 PM, RB <ao...@gm...> wrote:
>> Continuing the practice of responding to myself, it would appear that "tsk_loaddb -h" is broken. It seems to be issuing bad SQL statements to insert the MD5s of the files; it gets a syntax error per file: 'unrecognized token: "<obvious md5>"'.
|
From: Brian M. <br...@in...> - 2014-03-13 22:09:52
|
I have an image file (split King.E01, King.E02) that I'm trying to utilize the commandline tsk_recover to extract all allocated files into a specified output directory.

The command I'm running:

tsk_recover ./King.E01 ./Output

I get:

Cannot determine file system type (Sector offset: 0)
Files Recovered: 0

So I updated my command to:

tsk_recover -f fat ./King.E01 ./Output

I get:

Invalid magic value (Not a FATFS file system (magic)) (Sector offset: 0)
Files Recovered: 0

My goal is to simply extract everything in the E01 image files out into the Output directory. Can anyone please tell me what I'm missing?

Environment
MAC: OS X 10.9.1

thanks,
Brian McHughs

br...@in... (email)
www.indexed.io (web)
888.840.0709 x101 (office)
303.900.3364 (cell)
|
From: Simson G. <si...@ac...> - 2014-03-13 22:01:46
|
fiwalk should do this, but it won't give you all 8 timestamps. It could be modified to do so, however.

On Mar 13, 2014, at 4:10 PM, RB <ao...@gm...> wrote:
> I guess tsk_loaddb would be the appropriate answer here, but is there anything that, say, would do dfxml output of all 8 timestamps from NTFS?
>
> On Thu, Mar 13, 2014 at 1:52 PM, RB <ao...@gm...> wrote:
>> Does anyone know of a tool that actually fills out the checksum field of the current body file format? I'm trying to find more efficient ways to snag the checksums than mounting the partition and running md5deep over it. Thoughts?
|
From: Brian C. <ca...@sl...> - 2014-03-13 21:13:14
|
Hmm, yea we need to fix that error in tsk_loaddb. Not sure when that was introduced.

The easier answer though is to use 'tsk_gettimes -m'.

On Mar 13, 2014, at 4:43 PM, RB <ao...@gm...> wrote:
> Continuing the practice of responding to myself, it would appear that "tsk_loaddb -h" is broken. It seems to be issuing bad SQL statements to insert the MD5s of the files; it gets a syntax error per file: 'unrecognized token: "<obvious md5>"'.
|
From: RB <ao...@gm...> - 2014-03-13 20:43:16
|
Continuing the practice of responding to myself, it would appear that "tsk_loaddb -h" is broken. It seems to be issuing bad SQL statements to insert the MD5s of the files; it gets a syntax error per file: 'unrecognized token: "<obvious md5>"'.
|
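The "unrecognized token" message is the classic SQLite symptom of splicing an unquoted value directly into a statement. The following is an illustration of that failure mode and of the usual parameter-binding fix; it is not the actual tsk_loaddb code, and the table here is a hand-made stand-in:

```python
# Illustration (not tsk_loaddb itself): an unquoted hex digest spliced into
# SQL is a malformed token; binding it as a parameter avoids the problem.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tsk_files (name TEXT, md5 TEXT)")

md5 = "900150983cd24fb0d6963f7d28e17f72"

err = None
try:
    # Broken: the digest lands in the SQL text without quotes.
    conn.execute("INSERT INTO tsk_files VALUES ('a.txt', %s)" % md5)
except sqlite3.OperationalError as e:
    err = e
print(err)  # e.g. unrecognized token: "900150983cd24fb0d6963f7d28e17f72"

# Fixed: bind the value as a parameter instead of interpolating it.
conn.execute("INSERT INTO tsk_files VALUES (?, ?)", ("a.txt", md5))
print(conn.execute("SELECT md5 FROM tsk_files").fetchone()[0])
```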
From: RB <ao...@gm...> - 2014-03-13 20:10:45
|
I guess tsk_loaddb would be the appropriate answer here, but is there anything that, say, would do dfxml output of all 8 timestamps from NTFS? On Thu, Mar 13, 2014 at 1:52 PM, RB <ao...@gm...> wrote: > Does anyone know of a tool that actually fills out the checksum field > of the current body file format? I'm trying to find more efficient > ways to snag the checksums than mounting the partition and runnning > md5deep over it. Thoughts? |
From: RB <ao...@gm...> - 2014-03-13 19:53:02
|
Does anyone know of a tool that actually fills out the checksum field of the current body file format? I'm trying to find more efficient ways to snag the checksums than mounting the partition and running md5deep over it. Thoughts?
|
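Whatever tool ends up filling the checksum column, the underlying work is hashing each file's content. A minimal chunked-read sketch, using an in-memory stream as a stand-in for a real file handle so large files never need to fit in memory:

```python
# Sketch: compute the MD5 that would fill the bodyfile checksum column,
# reading in fixed-size chunks.
import hashlib
import io

def md5_of_stream(fobj, chunk_size=1 << 20):
    h = hashlib.md5()
    for chunk in iter(lambda: fobj.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

print(md5_of_stream(io.BytesIO(b"abc")))  # 900150983cd24fb0d6963f7d28e17f72
```

The same loop works over a file object opened from a mounted image or over content streamed out of a forensic library, since it only assumes a read() method.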
From: Okti <wma...@gm...> - 2014-03-13 11:57:41
|
Hi,

Pardon if this may seem a "stupid" or "newbie" question, but I'm self-learning and don't really have anybody to ask, and since I'm using TSK tools I think it would be OK to post.

So I'm learning about ext2/3 filesystems and was trying to recover some files "manually" (which was easy at first, but not so easy when recovering large files). I was trying to look at direct and indirect block pointers to see how this data actually looks (both for deleted and existing files), but it seems I cannot really find these data structures.

First of all, I should say that I have a bit of an unusual setup. Since I have only a 500GB disk on my PC and my Linux partition is 350GB, I cannot just make an image of this partition. So instead, I copied 10GB from my Windows partition (with dd), then zeroed out all remaining data (by copying bits from /dev/zero, again with dd), and made an ext3 fs with mkfs. All looks good and I can mount this "disk" as if it were a valid ext3 partition. Then for some reason I thought it would be OK to create a partition table for this "disk", to make it look more "real", but then decided it was not such a good idea so I removed the partition table. I don't know if this could be relevant to the problem I'm having, or maybe the data in my image is simply corrupt now.

TSK tools recognize this as a valid ext3 partition.

$ dd if=data.dd bs=512 skip=2 count=1 | xxd
0000000: 80fa 0a00 b0e3 2b00 c831 0200 b495 2a00 ......+..1....*.
0000010: 67fa 0a00 0000 0000 0200 0000 0200 0000 g...............
0000020: 0080 0000 0080 0000 f01f 0000 e23e 2053 .............> S
0000030: 473f 2053 0900 ffff 53ef 0100 0100 0000 G? S....S.......
0000040: 108c 1853 0000 0000 0000 0000 0100 0000 ...S............
0000050: 0000 0000 0b00 0000 0001 0000 3c00 0000 ............<...
0000060: 0200 0000 0300 0000 5c71 eed4 4d97 4dd8 ........\q..M.M.
0000070: 843d 6ab0 7677 a6e5 506f 7765 7253 6f75 .=j.vw..PowerSou
0000080: 7263 6500 0000 0000 0000 0000 0000 0000 rce.............
0000090: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000c0: 0000 0000 0000 0000 0000 0000 0000 be02 ................
00000d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000e0: 0800 0000 0000 0000 0000 0000 f571 8b86 .............q..
00000f0: 6b6b 48fe 8921 c500 33ab 6667 0101 0000 kkH..!..3.fg....
0000100: 0c00 0000 0000 0000 108c 1853 0102 1500 ...........S....
0000110: 0202 1500 0302 1500 0402 1500 0502 1500 ................
0000120: 0602 1500 0702 1500 0802 1500 0902 1500 ................
0000130: 0a02 1500 0b02 1500 0c02 1500 0d02 1500 ................
0000140: 0e06 1500 0000 0000 0000 0000 0000 0008 ................
0000150: 0000 0000 0000 0000 0000 0000 1c00 1c00 ................
0000160: 0100 0000 0000 0000 0000 0000 0000 0000 ................
0000170: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000180: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000190: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00001a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00001b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00001c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00001d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00001e0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00001f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................

So we can see the volume label and superblock signatures, so all looks good (or correct me if I'm wrong).

Now let's say I want to view some inode data structures. First we need to view the primary group descriptor table, which is located one block after the superblock - or is supposed to be:

$ blkcat -f ext3 data.dd 1 | xxd
0000000: c002 0000 c102 0000 c202 0000 397b e51f ............9{..
0000010: 0200 0000 0000 0000 0000 0000 0000 0000 ................
0000020: c082 0000 c182 0000 c282 0000 3f7b f01f ............?{..
0000030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000040: 0000 0100 0100 0100 0200 0100 ff7d f01f .............}..
0000050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000060: c082 0100 c182 0100 c282 0100 3f7b f01f ............?{..
0000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000080: 0000 0200 0100 0200 0200 0200 ff7d f01f .............}..
0000090: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000a0: c082 0200 c182 0200 c282 0200 3f7b f01f ............?{..
00000b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000c0: 0000 0300 0100 0300 0200 0300 ff7d f01f .............}..
00000d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000e0: c082 0300 c182 0300 c282 0300 3f7b f01f ............?{..
00000f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000100: 0000 0400 0100 0400 0200 0400 fe7d ef1f .............}..

So bytes 8-11 should show the starting block address of the inode table, which in my case is c202. I'm not really sure how to interpret this value. If this is hexadecimal and I want to convert it to decimal, minding endian ordering, I would get 8236 - which seems a rather stupid block for the inode table to be located in. Again, please correct me if I'm wrong. So my first question would be: why do I have output like this? It isn't as simple as the example displayed in the book, where the inode table is at block address 4 (0400).

My second question: let's say I want to view the contents of an inode, specifically the direct and indirect block pointers. I have inode 237114 (which is an existing .jpg file). First I need to locate which group this inode belongs to, which is easy - I can calculate this myself or just look at istat's output. Next I need to find the group descriptor table for this group, and find the inode table block address.
I'm not sure how to do that. Assuming this group (group 29) has no superblock backup in its first block, I could take the starting block address of this group plus one, and I should be looking at a group descriptor, right? Well, if I try to do that, all I get is 0's and f's:

$ dd if=data.dd bs=4096 skip=950273 count=1 | xxd
0000000: ff01 0000 0000 0000 0000 0000 0000 0000 ................
0000010: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000080: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000090: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000e0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000100: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000110: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000120: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000130: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000140: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000150: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000160: 0000 0000 0000 0000 0000 0000 0000 0000 ................

Although this is not necessary.
According to ext kernel wiki page, I can calculate the location of this inode in the inode table by: (inode - 1) % inodes/per group. There is also an example how to get byte address within the inode table, but I’m not sure if I need this one. $ echo "(237114-1)%8176" | bc 9 This is in fact correct, if I would use debugfs utility to check this information all looks good: $ debugfs data.dd debugfs 1.42 (29-Nov-2011) debugfs: imap <237114> Inode 237114 is part of block group 29 located at block 950274, offset 0x0900 debugfs: I don't exactly understand what this offset means? Offset within the inode table? If so, shouldn't user created files start from inode 16?. So we got block address of the inode table in this group (which is also shown in fsstat's output). Also apparently my inode size is 256 bytes, not standard 128. $ dd if=data.dd bs=4096 skip=950274 | dd bs=256 skip=9 count=1 | xxd 1+0 records in 1+0 records out 256 bytes (256 B) copied, 0.000525543 s, 487 kB/s 0000000: a481 0000 e1af 0000 2e3f 2053 2e3f 2053 .........? S.? S 0000010: 2e3f 2053 0000 0000 0000 0100 5800 0000 .? S........X... 0000020: 0000 0000 0000 0000 6590 0e00 6690 0e00 ........e...f... 0000030: 6790 0e00 6890 0e00 6990 0e00 6a90 0e00 g...h...i...j... 0000040: 6b90 0e00 6c90 0e00 6d90 0e00 6e90 0e00 k...l...m...n... 0000050: 6f90 0e00 0000 0000 0000 0000 0000 0000 o............... 0000060: 0000 0000 0f5e c3a7 0000 0000 0000 0000 .....^.......... 0000070: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 0000080: 0400 0000 0000 0000 0000 0000 0000 0000 ................ 0000090: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 00000e0: 0000 0000 0000 0000 0000 0000 0000 0000 ................ 
00000f0: 0000 0000 0000 0000 0000 0000 0000 0000  ................

This looks like a valid inode data structure: the first two bytes should hold the file mode (0x81a4 here, read little-endian: a regular file), and the rest of it looks good. However, when I look at the first direct block pointer, bytes 40-43, I get 6590 0e00, which converted to decimal (minding the little-endian ordering) is 954469 (if I converted it correctly). That looks plausible: I checked the block range for this group, and that block address falls within group 29. But when I look at the contents of that block, all I get is 0x00 and 0xff bytes (I piped the output through "less" so as not to mess up my terminal, but I can assure you there was nothing there). So why is that? For all I know, I could be looking at some random inode data structure totally unrelated to my file. When I removed the file and looked again, everything seemed fine; the block pointers had been wiped, as they should be for a deleted file:

$ dd if=data.dd bs=4096 skip=950274 | dd bs=256 skip=9 count=1 | xxd
1+0 records in
1+0 records out
256 bytes (256 B) copied, 0.000143114 s, 1.8 MB/s
0000000: a481 0000 0000 0000 2e3f 2053 3387 2153  .........? S3.!S
0000010: 3387 2153 3387 2153 0000 0000 0000 0000  3.!S3.!S........
0000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000060: 0000 0000 0f5e c3a7 0000 0000 0000 0000  .....^..........
0000070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0000080: 0400 0000 0000 0000 0000 0000 0000 0000  ................
0000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000b0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000c0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000e0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000f0: 0000 0000 0000 0000 0000 0000 0000 0000  ................

So am I really looking at the correct inode data structure? Thanks. |
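The inode-locator arithmetic and the pointer byte order from the walk-through above can be sketched in the shell, using the values from this thread (inode 237114, 8176 inodes per group, 256-byte inodes):

```shell
inode=237114
inodes_per_group=8176
inode_size=256

group=$(( (inode - 1) / inodes_per_group ))   # block group holding the inode
index=$(( (inode - 1) % inodes_per_group ))   # index into that group's inode table
printf 'group=%d index=%d offset=0x%04x\n' "$group" "$index" $(( index * inode_size ))
# -> group=29 index=9 offset=0x0900, matching debugfs' imap output

# ext stores block pointers little-endian, so the bytes "65 90 0e 00" at
# offsets 40..43 of the inode are reversed before converting:
printf 'first block pointer=%d\n' $(( 16#000e9065 ))
```

Running this gives 954469 for the pointer, which indeed falls inside group 29 (blocks 950272 and up).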
From: Atila <ati...@dp...> - 2014-03-12 12:40:02
|
OK, I just got curious whether there is some technique a forensic examiner could apply to recover data in this situation.

On 12-03-2014 09:30, Simson Garfinkel wrote:
> The dd command may not finish. You don't know unless you check.
> The drive controller may not write NULLs, but instead just indicate that the sector is blanked. You need to write a pseudorandom sequence and verify it by reading it back.
> You asked for 'secure' erase, not erase.
>
> On Mar 12, 2014, at 8:05 AM, Atila <ati...@dp...> wrote:
>
>> On 12-03-2014 08:39, Simson Garfinkel wrote:
>>> That will erase, but not securely.
>> But, given that someone erased a disk with dd, how would a forensic
>> examiner recover data from it?
>>
>> ------------------------------------------------------------------------------
>> Learn Graph Databases - Download FREE O'Reilly Book
>> "Graph Databases" is the definitive new guide to graph databases and their
>> applications. Written by three acclaimed leaders in the field,
>> this first edition is now available. Download your free book today!
>> http://p.sf.net/sfu/13534_NeoTech
>> _______________________________________________
>> sleuthkit-users mailing list
>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users
>> http://www.sleuthkit.org |
From: Simson G. <si...@ac...> - 2014-03-12 12:30:21
|
The dd command may not finish. You don't know unless you check.

The drive controller may not write NULLs, but instead just indicate that the sector is blanked. You need to write a pseudorandom sequence and verify it by reading it back.

You asked for 'secure' erase, not erase.

On Mar 12, 2014, at 8:05 AM, Atila <ati...@dp...> wrote:

> On 12-03-2014 08:39, Simson Garfinkel wrote:
>> That will erase, but not securely.
> But, given that someone erased a disk with dd, how would a forensic
> examiner recover data from it? |
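The write-pseudorandom-and-verify approach described above can be sketched as follows. A temp file stands in for the real device (substitute e.g. /dev/sdX), and the openssl CTR keystream plus the "wipe-demo" passphrase are illustrative choices, not part of the thread: with -nosalt and a fixed passphrase the stream is reproducible, so the same command regenerates it for the comparison pass.

```shell
dev=$(mktemp)
dd if=/dev/zero of="$dev" bs=1M count=4 2>/dev/null    # stand-in "disk"

# Deterministic pseudorandom stream: same passphrase -> same bytes every run.
genstream() {
    openssl enc -aes-256-ctr -pass pass:wipe-demo -nosalt </dev/zero 2>/dev/null
}

size=$(stat -c %s "$dev")
genstream | head -c "$size" > "$dev"                   # overwrite pass
genstream | head -c "$size" | cmp -s - "$dev" \
    && echo "verified: device holds the expected stream" \
    || echo "verification FAILED"
rm -f "$dev"
```

A read-back that matches the regenerated stream rules out a controller that merely flagged the sectors as blanked instead of writing them.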
From: Atila <ati...@dp...> - 2014-03-12 12:05:43
|
On 12-03-2014 08:39, Simson Garfinkel wrote:
> That will erase, but not securely.

But, given that someone erased a disk with dd, how would a forensic examiner recover data from it? |
From: Simson G. <si...@ac...> - 2014-03-12 11:39:45
|
It really depends on how the RAID 5 is created and whether or not there are any spare drives. In my research I once bought a used RAID system that was supposed to have been securely erased, but it turned out that two of the drives had had minor failures and the "spare" drives had been swapped in. The previous owner had tried to erase it by wiping the RAID without disassembling it, and the failed drives had never been erased. In general, it's best to disassemble the RAID and securely erase each device.

Another poster recommended using 'dd if=/dev/zero of=<DEV>' for secure erasing. That will erase, but not securely. You should do a secure erase with a bootable disk such as DBAN.

Simson

On Mar 11, 2014, at 10:47 PM, maría elena darahuge <dar...@gm...> wrote:

> Hi:
>
> I would like to know if it is possible with dd to secure erase a RAID 5, or if the first step is to undo the RAID and then do the secure erase, and how to verify it?
>
> --
> Prof. Ing. María Elena Darahuge |
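The disassemble-then-erase advice above can be sketched as a dry run for a Linux md array. This only prints the commands it would run; /dev/md0 and the member names are example values, which on a real system you would take from `mdadm --detail` output, remembering to include any spare drives that were ever swapped in. The per-member overwrite shown is a plain dd zero pass; per the advice above, a verified pseudorandom pass or a DBAN boot is the more rigorous option.

```shell
# Print (do not execute) the teardown-and-wipe steps for an md RAID.
wipe_plan() {
    array=$1; shift
    echo "mdadm --stop $array"                 # take the array offline
    for dev in "$@"; do
        echo "mdadm --zero-superblock $dev"    # drop the md metadata
        echo "dd if=/dev/zero of=$dev bs=1M"   # then overwrite the member
    done
}
wipe_plan /dev/md0 /dev/sdb /dev/sdc /dev/sdd
```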
From: Atila <ati...@dp...> - 2014-03-12 11:18:59
|
On 11-03-2014 23:47, maría elena darahuge wrote:
> I would like to know if it is possible with dd to secure erase a raid 5

Sure: dd if=/dev/zero of=/dev/md0

> or the first step is to undo the raid and then the secure erase

That works too: dd if=/dev/zero of=/dev/sdX

> and how to verify it?

You really don't need to; with either approach your data is gone for good. |
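For anyone who does want to verify a zero wipe, comparing the device against /dev/zero reports whether any non-zero byte survived. An ordinary temp file stands in for /dev/sdX here so the sketch is safe to run:

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=4096 count=16 2>/dev/null    # "wiped" stand-in

# cmp against an equal-length zero stream: exit 0 means every byte is zero.
size=$(stat -c %s "$img")
if head -c "$size" /dev/zero | cmp -s - "$img"; then
    echo "all zero: wipe verified"
else
    echo "non-zero data present"
fi
rm -f "$img"
```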