sleuthkit-users Mailing List for The Sleuth Kit (Page 198)
Brought to you by: carrier
From: samwun <sa...@hg...> - 2004-02-13 18:40:25
Dear all,

Is there any detailed documentation about how to use sleuthkit?

Thanks,
Sam
From: Brian C. <ca...@sl...> - 2004-02-04 15:06:12
> I have a problem with md5sum.
> I have created an image from /dev/sdb1 with dd_rescue (dd_rescue -v
> /dev/sdb1 /mnt//hda1/sdb1.dd).
> ...
> So my question is: Why creates md5sum at once the output:
> "/mnt/hda1/sdb1.dd: Erfolg" and other time "hkhkjfizo8965kgi778 test"

Does the md5sum tool support large files?

brian
From: Guido b. yahoo.d. <gui...@ya...> - 2004-02-04 07:51:06
Hello,

First, I apologize for my bumpy English. This question is not SleuthKit/Autopsy related, but I think many people here deal with forensics.

I have a problem with md5sum. I created an image of /dev/sdb1 with dd_rescue (dd_rescue -v /dev/sdb1 /mnt//hda1/sdb1.dd). After that I wanted to know the MD5 hash value of the source and the target file, so I ran "md5sum /dev/sdb1" and "md5sum /mnt/hda1/sdb1.dd", but the output was not what I expected:

1. "/dev/sdb1: Erfolg" (I use the German language pack; in English it would be something like "/dev/sdb1: success")
2. "/mnt/hda1/sdb1.dd: Erfolg"

Earlier I had made an image of /dev/sdb and got a correct output: "980710470hjhj09820701 /dev/sdb" (the hash is not real; for this mail I wrote random letters and numbers because I have forgotten the real hash value ;-) ). If I create a file on /dev/hda1 with "touch test" and then run "md5sum test", it works fine and gives me the hash value.

So my question is: why does md5sum sometimes print "/mnt/hda1/sdb1.dd: Erfolg" and other times "hkhkjfizo8965kgi778 test"?

Greetings from Berlin,
Guido Metzner
www.guframe.de
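For the acquisition-and-verification workflow Guido describes, a minimal sketch using the device and image paths from his mail (the single-pass dd|tee variant is just one common alternative, not something suggested in the thread):

# hash the source partition and the acquired image separately, then compare
md5sum /dev/sdb1
md5sum /mnt/hda1/sdb1.dd

# or hash while imaging, so the source is only read once
dd if=/dev/sdb1 bs=64k | tee /mnt/hda1/sdb1.dd | md5sum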
From: Brian C. <ca...@ce...> - 2004-02-03 23:57:08
On Feb 3, 2004, at 1:22 PM, Thanh Tran wrote:

> Hi Brian,
> According to dls man page:
> "By default, dls copies unallocated data blocks only"
>
> Does this mean that dls copy all unallocated data
> blocks whether or not the block was once allocated
> (i.e deleted)?

Yes. dls only knows the current allocation status of the block. It doesn't know if it was allocated before or not.

> Is there a way for dls to copy only
> unallocated blocks that associate with deleted stuff
> (I don't mean for a particular deleted file but any
> deleted file)?

No, dls can't do that. But, the bigger question is how can you tell that a block has deleted stuff in it? Maybe a block of all zeros is really part of a file and then the carving tool would not find it and the resulting file would be corrupt.

> The reason I ask because for large
> drive with not much data, most of the unallocated
> blocks really are empty.

What would you do with the data even w/out the empty blocks? Some of the carving and searching tools would run faster (if they weren't indexed), but then you are faced with the problem of tying the data back to the original sector location on the disk.

brian
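A sketch of the workflow Brian alludes to: pull the unallocated blocks out with dls, search or carve in that extract, then map a hit back to the original image with dcalc. The filesystem type and block number are placeholders, and newer Sleuth Kit releases rename these tools blkls and blkcalc:

# copy only the unallocated data blocks out of the image
dls -f linux-ext2 image.dd > image.unalloc

# suppose a keyword hit lands in block 1234 of the dls output;
# dcalc -u translates that back to the block address in the original image
dcalc -f linux-ext2 -u 1234 image.dd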
From: Thanh T. <ttr...@ya...> - 2004-02-03 19:34:15
Hi Brian,

According to the dls man page: "By default, dls copies unallocated data blocks only."

Does this mean that dls copies all unallocated data blocks, whether or not a block was once allocated (i.e. deleted)? Is there a way for dls to copy only unallocated blocks that are associated with deleted material (I don't mean for a particular deleted file, but any deleted file)? The reason I ask is that for a large drive without much data, most of the unallocated blocks really are empty.

Thanks.
From: Brian C. <ca...@sl...> - 2004-02-01 14:36:55
>> There is a header entry on top of all of the hash files. It looks like
>> [...]
>> Those are considered 'invalid' and the NSRL site says there are over
>> 2600 sets, so they are probably from that. Check out how many zip
>> files you downloaded.
>
> Cool... Although, I have 2,603 files, and 2,602 errors. Does hfind
> ignore the first error by default because it expects at least one at
> the top of the file?

Yea, it reads the first entry to get the database format version and then ignores it. I just updated the code to add that entry to the final number of ignored entries.

brian
From: Paul S. <pa...@vn...> - 2004-01-31 23:46:22
On Saturday 31 January 2004 11:16, Brian Carrier wrote:

> There is a header entry on top of all of the hash files. It looks like
> [...]
> Those are considered 'invalid' and the NSRL site says there are over
> 2600 sets, so they are probably from that. Check out how many zip
> files you downloaded.

Cool... Although, I have 2,603 files, and 2,602 errors. Does hfind ignore the first error by default because it expects at least one at the top of the file?

Paul
From: Brian C. <ca...@sl...> - 2004-01-31 16:16:42
On Jan 29, 2004, at 10:38 PM, Paul Stillwell wrote:

> Hi all,
>
> I just wrote a little script that does this.

Great.

> Autopsy's make uses it with the following output. Any info
> on what the 2,602 errors might be and how I could eliminate them
> would be appreciated.
> [...]
> -------------- begin hfind output --------------
> Extracting Data from Database (/Forensics/NSRL/NSRLFile.txt)
> Valid Database Entries: 17292990
> Invalid Database Entries (headers or errors): 2602
> Index File Entries (optimized): 16572711
> Sorting Index (/Forensics/NSRL/NSRLFile.txt-md5.idx)
> --------------- end hfind output ---------------

There is a header entry on top of all of the hash files. It looks like (or similar, depending on the format version):

"SHA-1","FileName","FileSize","ProductCode","OpSystemCode","MD4","MD5","CRC32","SpecialCode"

Those are considered 'invalid' and the NSRL site says there are over 2600 sets, so they are probably from that. Check out how many zip files you downloaded.

brian
From: Paul S. <pa...@vn...> - 2004-01-31 06:56:29
Hi all,

I just wrote a little script that does this. I d/l'ed the NSRL 2.3 in format-15 (1.5?) and wrote a little script that takes the NSRLFile.txt directly from the zip file and sticks it in an output file NSRLFile.txt. Is format-15 the correct one to use? I couldn't find any doc on that.

Here's the script I used:

#!/bin/bash
# remove NSRLfile.txt to eliminate possibility of duplicating the database
rm NSRLFile.txt
for x in `ls *.zip`
do
unzip -qq -c $x NSRLFile.txt >> NSRLFile.txt
done

Run it from the directory where you downloaded all of the .zip, .sha, and .md5 files. It'll give you the big database file as described below. Beware... it is BIG - 2.6G big :-) Caveat - I did absolutely no error checking but it seems to work. Autopsy's make uses it with the following output. Any info on what the 2,602 errors might be and how I could eliminate them would be appreciated.

Enter the directory where you installed it: /Forensics/NSRL
NSRL database was found (NSRLFile.txt)
NSRL Index file not found, do you want it created? (y/n) [n]: y

-------------- begin hfind output --------------
Extracting Data from Database (/Forensics/NSRL/NSRLFile.txt)
Valid Database Entries: 17292990
Invalid Database Entries (headers or errors): 2602
Index File Entries (optimized): 16572711
Sorting Index (/Forensics/NSRL/NSRLFile.txt-md5.idx)
--------------- end hfind output ---------------

Paul

On Friday 02 January 2004 00:33, Brian Carrier wrote:
> Mike,
>
> You want to have one big NSRLFile.txt file. I'm not sure of the
> details of all of the NSRL versions, but some of the distributions have
> multiple NSRLFile.txt files because they don't all fit on once CD.
> Concatenate the NSRLFile.txt files together into one file and give that
> location to Autopsy. So, it would be something like:
>
> cat NSRLFile-1.txt NSRLFile-2.txt > NSRLFile.txt
>
> That database will need to be indexed by Autopsy / Sleuth Kit and then
> it can be used.
>
> brian
>
> On Wednesday, December 31, 2003, at 04:38 PM, Michael Dundas wrote:
> > I've been using autopsy for some time now, but not with the NSRL
> > database. I've downloaded the entire database, format-15. It is in
> > many ZIP files. How does one make this work with autopsy? I know you
> > indicate the location of the NSRL DB during installation, but what is
> > one to do with all these zip files? Do you unzip them all, then
> > append the data in the files in the zip files to one big file? If
> > there are scripts written to do this, I'd like to know where one can
> > get a copy? If not, happy to write them, but don't understand the end
> > goal? Maybe I'm making this too complicated. Any help appreciated.
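One possible refinement of the script above -- an untested sketch, not something from the thread -- keeps the header row from the first zip (hfind reads it for the database format version) and strips the duplicate header from every later zip, which should account for almost all of the "invalid" entries:

#!/bin/bash
# concatenate NSRLFile.txt from every zip, keeping only one header line
rm -f NSRLFile.txt
first=1
for x in *.zip
do
    if [ "$first" -eq 1 ]; then
        # keep the header so hfind can detect the format version
        unzip -qq -c "$x" NSRLFile.txt >> NSRLFile.txt
        first=0
    else
        # drop the duplicate "SHA-1","FileName",... header line
        unzip -qq -c "$x" NSRLFile.txt | tail -n +2 >> NSRLFile.txt
    fi
done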
From: Nathan C. <ns...@qs...> - 2004-01-28 08:12:03
> Hi there,
>
> I have noticed a minor quirk using istat, that only effects small
> files on ntfs volumes, as far as I am aware:
>
> When istating an inode of a small file (under one block, not sure if
> this is disk sectors or fs blocks) no blocks are displayed in the
> output. I am using v1.65.

If a file is small enough that all of its attributes can fit within the MFT record for the file, it is stored entirely within the MFT, i.e. no allocated blocks. If you notice, the data attribute in the first file is marked as "Resident" in the MFT:

Type: $DATA (128-4)   Name: $Data   Resident   size: 192

whereas in the second it is "Non-Resident" in the MFT:

Type: $DATA (128-4)   Name: $Data   Non-Resident   size: 214416

regards,
Nathan.
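To see the two cases side by side, with a made-up image name and inode numbers (only the Resident/Non-Resident flag and the block list differ, and icat extracts the resident content just the same):

istat -f ntfs image.dd 31              # small file: $DATA is resident, no block list printed
istat -f ntfs image.dd 64              # larger file: $DATA is non-resident, clusters are listed
icat -f ntfs image.dd 31 > small.bin   # the resident content is still recoverable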
From: Brian C. <ca...@sl...> - 2004-01-27 23:20:10
On Tuesday, January 27, 2004, at 01:58 PM, nighty wrote:

> today I found an interesting article titled "Defeating Forensic
> Analysis on Unix" in the phrack magazine #59 dealing with several
> anti-forensic strategies, as well as with flaws of forensic tools,
> "The Coroner's Toolkit"
> [...]
> It would be interesting to know, whether the technical insufficiencies
> presented in the article have also any validity for the Sleuth Kit's
> capabilities of forensic analysis.

I haven't read that in a while, but it dealt with not being able to view inode #1. When The Sleuth Kit was developed, that limitation was removed and it was able to view the contents of inode #1. TCT fixed the bug at some point, but I'm not sure which version.

brian
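For example, the metadata entry at address 1 can be listed and dumped like any other with The Sleuth Kit (the image name and filesystem type here are placeholders):

istat -f linux-ext2 image.dd 1                 # show the metadata for inode 1
icat -f linux-ext2 image.dd 1 | xxd | head     # dump whatever content it points to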
From: nighty <nig...@gm...> - 2004-01-27 18:55:44
Hi everyone,

Today I found an interesting article titled "Defeating Forensic Analysis on Unix" in Phrack magazine #59, dealing with several anti-forensic strategies as well as with flaws of forensic tools, "The Coroner's Toolkit" in particular. The article can be found at: www.phrack.org/phrack/59/p59-0x06.txt

It was published on 2002-07-28 (as stated in the magazine). I found it useful, or at least interesting, and hope that you will have a look at it if you haven't already done so.

It would be interesting to know whether the technical insufficiencies presented in the article also have any validity for the Sleuth Kit's capabilities of forensic analysis.

Regards,
Harald
From: chris <ch...@cc...> - 2004-01-26 18:52:16
Hi there,

I have noticed a minor quirk using istat that only affects small files on NTFS volumes, as far as I am aware: when istating an inode of a small file (under one block; not sure if this is disk sectors or fs blocks), no blocks are displayed in the output. I am using v1.65.

I have attached the output of istat for two files in the root directory of an NTFS volume; istat-a is the problematic small file and istat-b is a larger file which works fine.

Regards,
Chris Barbour
From: Guido b. yahoo.d. <gui...@ya...> - 2004-01-23 14:44:06
Hello,

I'm new to using Sleuth Kit and Autopsy, and I have two problems with them:

1. I have installed the latest version (today I downloaded the source files), but before that I used an older version from the Debian unstable tree and got the same error: I acquired images from floppies and wanted to examine them in Sleuth Kit. So I created a case and a host, and Autopsy created the folders in my evidence locker. Then I brought the floppy image into the host over a symlink, which also works. But if I then click on the "file analysis" button, I get a list with the undeleted and deleted files, and over every entry it shows me a line: "Error Parsing File (Invalid Charakter?)". The entries seem to be right, it shows me the dates (written, accessed and created) and the size of each file, but it is difficult to recognize whether a file is deleted or not. Here is a link to a screenshot: http://www.guframe.de/ftp/error_pasing_file.jpg.

2. How can I detect the filesystem of an unmounted floppy? I need it in order to include it in Autopsy. For disks I use fdisk -l, but for floppies?

Thank you for your help.

Kind regards,
Guido Metzner
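On the second question, two quick checks that usually work; the image name is a placeholder, and fsstat behaviour may differ between Sleuth Kit versions:

file floppy.img            # file(1) guesses the filesystem from the boot sector / superblock
fsstat -f fat floppy.img   # detailed filesystem data if the FAT guess is right, an error otherwise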
From: Matthias H. <mat...@mh...> - 2004-01-22 23:26:20
Brian Carrier said:
> [I was hoping you would be interested in this topic in light of your
> new database :)]

Yup, I am interested ;-)

[...]

> I'm assuming that you are referring to a global database in the local
> sense. That each person has their own "global" database that they
> create and can add and remove hashes from. Not a global database in
> the Solaris Fingerprint DB sense.

Yes, I meant "local sense". Each user has other needs/requirements, so a global database for everybody should be out of the question.

>> The interface to autopsy and sleuthkit should allow to query only
>> certain categories, only known bads, a certain category as known bad
>> or not (-> e.g. remote management tools). The biggest problem here is
>> to manage the category mapping table for all the different tools.
>
> I agree. Especially when you start merging the home made hashes with
> those from the NSRL and hashkeeper. I guess we could have a generic
> category of 'Always Good' or 'Always Bad'.
>
>> The technical problem is to manage such a huge amount of raw data.
>> With NSRL alone, we have millions of hash sets. This requires a new
>> query mechanism. With a RDBMS, we need persistent connections and the
>> possibility to bulk query large data sets very fast. With the current
>> sorter|hfind design, sorter calls hfind one time per hash analyzed.
>> This is definitely a big bottleneck.
>
> Yea, I have no problem if the end solution requires a redesign of
> hfind and sorter.
>
> I'm just not sure what the end solution should be. Some open questions:
> - what application categories are needed? Are the NSRL ones sufficient
>   or are there too many / too few of them?
> - How do you specify in the query which cat are bad and which are good?
> - How do you specify to 'sorter' which cat are bad and which are good?
> - Do we want to require a real database (i.e. SQL) or should there
>   also be an ASCII file version?

I think the NSRL segmentation in products/operating systems/manufacturers is a good idea. Yet the NSRL-provided categories are partially duplicate and partially too finely segmented. There is no simple solution for a query like "check only against Linux system hashes". I think we should define a basic set of operating systems and other classification data and maintain a mapping table for imports of NSRL and other hash sets.

In my opinion an SQL database should be the base (easier structure, multi-index ...). On the other hand, there is no reason not to provide an export utility for ASCII exports in a defined format. This should handle both requirements.

Matthias
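As one illustration of the design being discussed -- a single local database with hash categories, per-investigation good/bad verdicts, and bulk lookups over one connection -- a minimal sqlite3 sketch; every table, column, and case name here is invented for the example and is not part of any Sleuth Kit tool:

#!/bin/bash
# build a toy hash database with categories and per-investigation verdicts
sqlite3 hashes.db <<'EOF'
CREATE TABLE IF NOT EXISTS hashes (
    md5      TEXT NOT NULL,
    filename TEXT,
    category TEXT                -- e.g. 'linux-os', 'remote-admin', 'malware'
);
CREATE INDEX IF NOT EXISTS idx_hashes_md5 ON hashes(md5);
CREATE TABLE IF NOT EXISTS verdicts (
    investigation TEXT NOT NULL,
    category      TEXT NOT NULL,
    verdict       TEXT NOT NULL   -- 'good' or 'bad' for this investigation
);
EOF

# bulk lookup over one connection instead of one hfind invocation per hash
sqlite3 hashes.db "
  SELECT h.md5, h.filename, v.verdict
  FROM hashes h JOIN verdicts v ON v.category = h.category
  WHERE v.investigation = 'case42'
    AND h.md5 IN ('d41d8cd98f00b204e9800998ecf8427e',
                  '0123456789abcdef0123456789abcdef');"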
From: Brian C. <ca...@sl...> - 2004-01-22 22:52:26
On Thursday, January 22, 2004, at 04:02 PM, Angus Marshall wrote:

> Just a quick note to let the list know that I gave evidence in English
> Crown Court yesterday in a murder trial. Testimony was based on
> timelines established and files recovered through sleuthkit & autopsy.
> No challenge of the validity of this evidence was made during
> cross-examination. (For those really interested - R. vs. Coutts, Lewes
> Crown Court)

Great to hear, and congratulations!

thanks,
brian
From: Brian C. <ca...@sl...> - 2004-01-22 22:38:36
[I was hoping you would be interested in this topic in light of your new database :)]

On Thursday, January 22, 2004, at 01:33 PM, Matthias Hofherr wrote:

> Logical we need to maintain a potential huge amount of data and
> categorize every single hash entry. Furthermore, we have to decide for
> each entry if it is a known-bad or a known-good. I think a useful
> solution is to maintain a global database with both freely available
> hashsums like NSRL, KnownGoods combined with selfmade hash set
> (md5sum/graverobber ...).

I'm assuming that you are referring to a global database in the local sense. That each person has their own "global" database that they create and can add and remove hashes from. Not a global database in the Solaris Fingerprint DB sense.

> The interface to autopsy and sleuthkit should allow to query only
> certain categories, only known bads, a certain category as known bad
> or not (-> e.g. remote management tools). The biggest problem here is
> to manage the category mapping table for all the different tools.

I agree. Especially when you start merging the home made hashes with those from the NSRL and hashkeeper. I guess we could have a generic category of 'Always Good' or 'Always Bad'.

> The technical problem is to manage such a huge amount of raw data.
> With NSRL alone, we have millions of hash sets. This requires a new
> query mechanism. With a RDBMS, we need persistent connections and the
> possibility to bulk query large data sets very fast. With the current
> sorter|hfind design, sorter calls hfind one time per hash analyzed.
> This is definitely a big bottleneck.

Yea, I have no problem if the end solution requires a redesign of hfind and sorter.

I'm just not sure what the end solution should be. Some open questions:
- what application categories are needed? Are the NSRL ones sufficient or are there too many / too few of them?
- How do you specify in the query which cat are bad and which are good?
- How do you specify to 'sorter' which cat are bad and which are good?
- Do we want to require a real database (i.e. SQL) or should there also be an ASCII file version?

thanks,
brian
From: Angus M. <an...@n-...> - 2004-01-22 21:03:32
Just a quick note to let the list know that I gave evidence in English Crown Court yesterday in a murder trial. Testimony was based on timelines established and files recovered through sleuthkit & autopsy. No challenge of the validity of this evidence was made during cross-examination. (For those really interested - R. vs. Coutts, Lewes Crown Court)

I therefore assume that, under English law, these tools have been accepted as an acceptable method of recovering evidence.

Angus Marshall
(programme chair FIDES04 - Forensic Institute Digital Evidence Symposium 2004 - http://fides04.n-gate.net/ )
From: Matthias H. <mat...@mh...> - 2004-01-22 18:33:08
Brian,

I think we have technical and logical issues here. Logically we need to maintain a potentially huge amount of data and categorize every single hash entry. Furthermore, we have to decide for each entry if it is a known-bad or a known-good. I think a useful solution is to maintain a global database with both freely available hash sums like NSRL and KnownGoods combined with self-made hash sets (md5sum/graverobber ...). The interface to autopsy and sleuthkit should allow to query only certain categories, only known bads, a certain category as known bad or not (-> e.g. remote management tools). The biggest problem here is to manage the category mapping table for all the different tools.

The technical problem is to manage such a huge amount of raw data. With NSRL alone, we have millions of hash sets. This requires a new query mechanism. With a RDBMS, we need persistent connections and the possibility to bulk query large data sets very fast. With the current sorter|hfind design, sorter calls hfind one time per hash analyzed. This is definitely a big bottleneck.

Best regards,
Matthias

--
Matthias Hofherr
mail: mat...@mh...
web: http://www.forinsect.de
gpg: http://www.forinsect.de/pubkey.asc

Brian Carrier said:
> Is anyone interested in looking into the best way to manage hashes? The
> definition of "good" versus "bad" is relative to the current
> investigation and I don't know the best way to handle this in The
> Sleuth Kit and Autopsy. There could be a single database with
> categories of hashes and you choose which are good and which are bad
> for that investigation (similar to the new Forensic Hash Database that
> was announced and NSRL). Or, you could import tens of hash databases
> and identify them as bad or good (like hashkeeper).
>
> I think hashkeeper is LE-only, so I would rather focus on using NSRL and
> custom hashes made by md5sum. If anyone is interested in working on a
> workable solution to this, let me know.
From: McMillon, M. <Mat...@qw...> - 2004-01-21 22:23:11
> That still leaves the problem of organizing what is "good" though. Is
> pcAnywhere a good or bad hash? Depends on the investigation.

I suppose this is why NSRL took the approach of simply categorizing all the hashes as "known" and anything that wasn't in the DB as "unknown."

One simple way to approach this would be to have the option to import individual hashes or hash sets based on some category tree structure, and then select the option to 1) display all files that match the imported hashes, 2) display all files that don't, 3) display files whose hashes match but file names don't, etc. Kind of an "autopsy reports, you decide" tack. <--- hoping I don't get sued by Fox News.

> There are Application types in the schema, but I'm not sure how they
> were chosen or how many there are. You can see a list here:

Seems to map somewhat to the members of the Software Business Alliance, but since NIST is a "neutral" organization I doubt there is any connection there :)
From: Brian C. <ca...@sl...> - 2004-01-21 21:11:24
> However, I am beginning to wonder how effective hash sets of
> "known-bad" are going to be moving into the future--I think they have
> shown some benefit to LEA and others investigating child porn,
> malware, etc. but as the perps get wise to this technique, you'll
> probably start seeing more things like polymorphic archives, encrypted
> executables, and other files types that may change based on context or
> just randomly when accessed. Manually modifying files with a hex
> editor would be a simple way to change the sums of any file--which is
> much more of a current reality. We've seen this somewhat in the
> anti-virus industry which makes me wonder how some sort of heuristics
> system may be more effective for this area.

That is a good point. And one trojan source file can generate many different execs with different hashes based on what compiler flags were used.

That still leaves the problem of organizing what is "good" though. Is pcAnywhere a good or bad hash? Depends on the investigation.

> The other big issue is categorizing the large number of hashes, I
> think the reference data set of NSRL is 17.9 million hashes. Manually
> categorizing them would not be possible--would have to look closer at
> the NSRL "schema" to see if an automated process could be developed
> once categories were determined.

There are Application types in the schema, but I'm not sure how they were chosen or how many there are. You can see a list here:

http://www.nsrl.nist.gov/index/apptype.index.txt

The reason that I am asking this is because it is an important issue, but I already have too many things on my plate. So, if people are interested in finding a solution to this, then please do. I won't get to it for several months.

thanks,
brian
From: McMillon, M. <Mat...@qw...> - 2004-01-21 19:40:27
Just some random thoughts on hashes:

I think managing a collection of baseline OS and application hashes would be pretty straightforward as long as you limited scope to vendor "gold master" releases. Version skew from subsequent patches may cause some issues, but this would allow you to load the hash sets for the OS you are examining and quickly identify what is off of the baseline, which is pretty much what NSRL is designed for but with a much broader brush.

However, I am beginning to wonder how effective hash sets of "known-bad" are going to be moving into the future--I think they have shown some benefit to LEA and others investigating child porn, malware, etc. but as the perps get wise to this technique, you'll probably start seeing more things like polymorphic archives, encrypted executables, and other file types that may change based on context or just randomly when accessed. Manually modifying files with a hex editor would be a simple way to change the sums of any file--which is much more of a current reality. We've seen this somewhat in the anti-virus industry, which makes me wonder how some sort of heuristics system may be more effective for this area.

The other big issue is categorizing the large number of hashes; I think the reference data set of NSRL is 17.9 million hashes. Manually categorizing them would not be possible--would have to look closer at the NSRL "schema" to see if an automated process could be developed once categories were determined.

Matt

-----Original Message-----
From: sle...@li... [mailto:sle...@li...] On Behalf Of Brian Carrier
Sent: Wednesday, January 21, 2004 11:15 AM
To: sle...@li...
Cc: sle...@li...
Subject: [sleuthkit-users] Good vs. Bad Hashes

Is anyone interested in looking into the best way to manage hashes? The definition of "good" versus "bad" is relative to the current investigation and I don't know the best way to handle this in The Sleuth Kit and Autopsy. There could be a single database with categories of hashes and you choose which are good and which are bad for that investigation (similar to the new Forensic Hash Database that was announced and NSRL). Or, you could import tens of hash databases and identify them as bad or good (like hashkeeper).

I think hashkeeper is LE-only, so I would rather focus on using NSRL and custom hashes made by md5sum. If anyone is interested in working on a workable solution to this, let me know.

brian
From: Brian C. <ca...@sl...> - 2004-01-21 18:15:15
Is anyone interested in looking into the best way to manage hashes? The definition of "good" versus "bad" is relative to the current investigation and I don't know the best way to handle this in The Sleuth Kit and Autopsy. There could be a single database with categories of hashes and you choose which are good and which are bad for that investigation (similar to the new Forensic Hash Database that was announced and NSRL). Or, you could import tens of hash databases and identify them as bad or good (like hashkeeper).

I think hashkeeper is LE-only, so I would rather focus on using NSRL and custom hashes made by md5sum. If anyone is interested in working on a workable solution to this, let me know.

brian
From: Brian C. <ca...@ce...> - 2004-01-21 03:02:10
Ok, problem fixed in The Sleuth Kit (but read on). The solution is more elegant and uses the 'mktime()' function instead of a manual calculation. The new fatfs.c file can be found here:

http://sleuthkit.sourceforge.net/sleuthkit/fatfs.c

Replace the one in src/fstools and recompile. The fix will be in the next release.

But, the code that I was using was based on the OpenBSD kernel code, which is basically the same as the Linux code:

http://lxr.linux.no/source/fs/fat/misc.c?v=2.6.0#L215

That function does not take daylight savings into account. Therefore, I wanted to mount my test image in Linux to see what it gave for results. When I did this on a redhat 8 system, the times were 5 hours slow ... (not fast like is expected if it were set to GMT). Is there a flag I need to set somewhere, or is this normal? The timezone of my system is set to EST and I haven't done much with FAT in loopback on this system, so I'm not sure if it is my system configuration or not.

So, based on the FAT kernel code, it seems that Linux has the same problem unless daylight savings is taken into account somewhere else in the kernel. Can anyone else verify this by comparing the M- or C-time of a file made in the summer in Windows and in Linux?

thanks,
brian

On Tuesday, January 20, 2004, at 10:03 AM, Brian Carrier wrote:

> Randall,
>
> Don't be so confident yet :)
>
> I just tested something on a random file and image before I replied to
> this and I think I may have identified a problem. Is the date of the
> file in daylight savings time?
>
> FAT does not care about timezones, but The Sleuth Kit makes it care
> about timezones by converting the FAT time into the UNIX time (which is
> timezone relative). The conversion may break down with daylight
> savings. I'll fix it later today.
>
> I added a bug report for it:
> https://sourceforge.net/tracker/index.php?func=detail&aid=880606&group_id=55685&atid=477889
>
> How ironic. EnCase v3 used to ignore daylight savings and now I'm
> using it when it is not needed.
>
> brian
>
> On Tuesday, January 20, 2004, at 09:20 AM, Randall Shane wrote:
>
>> To Our Collective Group,
>>
>> I am analyzing a Windows ME machine and there seems to be some
>> discrepancy in file dates. I was hoping for some input. My analysis
>> through Autopsy reveals a deletion time in the 9 o'clock range. I had
>> a peer review my work and, utilizing two separate utilities,
>> X-ways-trace and the EnCase info record finder script, both read the
>> Info2 file recycler 'log' as the deletion occurring in the 8 o'clock
>> range.
>>
>> Here's my question - The system time was set to CST (confirmed
>> through registry settings), and this is consistent with what was
>> plugged into Autopsy. I am confident that the file deletion times
>> from Autopsy are accurate but how can I validate this?
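The daylight-saving effect can be seen with GNU date: the same local wall-clock time maps to different UTC offsets in winter and summer, which is exactly what a fixed-offset FAT conversion misses and an mktime()-based conversion handles. The dates below are arbitrary examples:

TZ=America/New_York date -d "2004-01-20 10:00:00" +%s   # winter: EST, UTC-5
TZ=America/New_York date -d "2004-07-20 10:00:00" +%s   # summer: EDT, UTC-4
# the offsets differ by an hour, so a conversion that always assumes the
# winter offset reports summer FAT timestamps an hour off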
From: Brian C. <ca...@sl...> - 2004-01-18 02:36:49
On Friday, January 16, 2004, at 09:53 AM, Horner, Jonathan J (JH8) wrote:

> Where can I find a good intro to using these packages?

The website has the most documents, including the Sleuth Kit Informer. There are a couple of books in production that include the basics as well, but they are not available yet.

> I've got them installed, installed the NSRL hash sets, and I've got an
> image loaded to examine. I can generally find most things, but I am
> unsure how to exclude from my file listings any file that appears to
> be normal (a.k.a. matches the hash for a known good OS file).

That option doesn't exist. There is the file type sorting feature which ignores the known-good files and organizes the rest by type (not directory).

The NSRL is actually not used much in Autopsy anymore. The NSRL contains hashes of both good and bad files and there is no easy way to make an index of what hashes are good and what hashes are bad. So until such a system exists, the NSRL is just there as a database that can be used for lookups in the "Meta Data" mode.

If anyone wants to volunteer to maintain such an NSRL index, let me know :)

brian
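For reference, the command-line side of what Brian describes looks roughly like this; the paths, sample hash, and output directory are placeholders, and the flags are from the Sleuth Kit manual pages of that era, so check them against your version:

hfind -i nsrl-md5 /Forensics/NSRL/NSRLFile.txt                       # build the MD5 index once
hfind /Forensics/NSRL/NSRLFile.txt 0123456789abcdef0123456789abcdef  # look up a single hash
sorter -f ntfs -n /Forensics/NSRL/NSRLFile.txt -d sorted/ image.dd   # sort files by type, consulting the NSRL database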