sleuthkit-users Mailing List for The Sleuth Kit (Page 47)
From: MG F. <mgf...@gm...> - 2014-02-24 17:57:20
I've had a chance to use Autopsy 3.0.9. Does anyone know how to export a CSV list of what you see when viewing "Recent Documents" within the Results tree? For example, a CSV that shows the source file, path, date/type, and data source.

Hope someone can help. Thank you
From: ewaldo s. <ewa...@gm...> - 2014-02-24 10:44:47
James,

I've read the foremost audit log, but after trying to find the inodes of the files that foremost carved, none of them is pointed to by the FAT entries of the "tagihan.xls" files I showed before. I've also checked the metadata of the Excel files that foremost carved, and there is no information there either; to read the metadata I opened them in LibreOffice and also used right-click Properties in Explorer.

I guess that's it then; unless there is another suggestion, which I would be more than happy to try. Anyway, thank you for all your help. It helped me a lot, and I learned a great deal.

On Mon, Feb 24, 2014 at 11:01 AM, James Haughom <ja...@ne...> wrote:
> Ewaldo
>
> Foremost produces an audit file which will give you the offset from
> which the file was recovered. You might be able to correlate the offset
> with data that you are extracting with istat.
>
> Good luck

--
Regards,
Ewaldo Simon
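A sketch of the offset-to-name correlation James describes, with hedged assumptions: foremost's audit file records byte offsets from the start of the image it was run against, the sector size is 512 bytes, and the filesystem starts at sector 63 (matching the -o 63 used throughout this thread). The offset value below is illustrative only; 156282 is the metadata address that ifind reported above.

Code:
OFFSET_BYTES=10698752                      # hypothetical offset from foremost's audit.txt
FS_SECTOR=$(( OFFSET_BYTES / 512 - 63 ))   # convert to a filesystem-relative sector
ifind -o 63 -d "$FS_SECTOR" imagename.001  # sector -> metadata address (directory entry)
ffind -o 63 imagename.001 156282           # metadata address -> file name(s)

If ffind names one of the $OrphanFiles entries for the sector a carved file came from, that carved file and that directory entry plausibly describe the same data.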
From: Brian C. <ca...@sl...> - 2014-02-24 03:13:36
Hi Ewaldo,

When you say that 'icat' didn't work, does that mean that you got an error message, or that it created a file but it could not be opened in Excel? If there was an error, what was it?

thanks,
brian

On Feb 23, 2014, at 7:22 AM, ewaldo simon <ewa...@gm...> wrote:
> First of all thank you for your suggestion, I've tried your suggestion and changed
> the blocksize; it didn't work out either. I've also tried using icat on that inode
> (which doesn't use a blocksize) and it didn't work either.
From: ewaldo s. <ewa...@gm...> - 2014-02-24 02:07:51
Jason,

Thank you very much, you're right. Running:

Code:
ifind -o 63 -d 20896 imagename.001

(and on the other data units in the same FAT chain) returns a different inode/FAT entry, 156282.

The thing is, I have 6 Excel files carved by foremost that I suspect are related to those entries (because of the content of the files). FYI, "tagihan" means invoice, and the 6 files that I carved are invoices. So the question is this: do you have any other means to connect the files that I carved (with foremost) to the names that I found using fls? Or how can I tell which data units foremost used to carve these files?

Thank you in advance.

ps. I posted this also on forensicfocus.com, and if it is OK with you, I'll post the result of this discussion there too.

On Sun, Feb 23, 2014 at 10:34 PM, Jason Wright <jwr...@gm...> wrote:
> Ewaldo,
>
> It is possible the blocks have been reused since these are all deleted
> references. The metadata can still reference the file, but the blocks can
> be reused by an allocated file. If the header of the file dumped by icat
> isn't for an xls file then that is likely the case. Use ifind to see what
> inode is associated with the specific block (20896 for example).
>
> R/
>
> Jason

--
Regards,
Ewaldo Simon
From: Jason W. <jwr...@gm...> - 2014-02-23 15:34:57
Ewaldo,

It is possible the blocks have been reused, since these are all deleted references. The metadata can still reference the file, but the blocks can be reused by an allocated file. If the header of the file dumped by icat isn't for an xls file, then that is likely the case. Use ifind to see what inode is associated with the specific block (20896, for example).

R/

Jason
From: ewaldo s. <ewa...@gm...> - 2014-02-23 12:23:02
First of all, thank you for your suggestion. I've tried it and changed the blocksize; it didn't work out either. I've also tried using icat on that inode (which doesn't use a blocksize) and it didn't work either. Is it possible that the data units pointed to by the metadata (FAT entries) are already used by another file/folder?

Actually, I managed to save some of the needed files (if those are indeed the files) by using foremost, but I need to find the names of the files; that's why I tried using dd and/or icat.

So the bottom line is: I need a way to extract those Excel files, or find out the names of the files foremost carved, or make sure that some of the files shown by fls are actually the ones that foremost recovered.

Thank you in advance, and sorry for my bad English.

On Sun, Feb 23, 2014 at 1:13 AM, Alex Nelson <ajn...@cs...> wrote:
> Hi Ewaldo,
>
> Your dd should work given the sector offsets. However, you passed a sector
> size of 4096 (bs=4096). I'm guessing because -o63 worked in istat, you
> should have passed bs=512 in dd.
>
> --Alex

--
Regards,
Ewaldo Simon
From: Alex N. <ajn...@cs...> - 2014-02-22 18:13:22
Hi Ewaldo,

Your dd should work given the sector offsets. However, you passed a sector size of 4096 (bs=4096). I'm guessing because -o63 worked in istat, you should have passed bs=512 in dd.

--Alex

On Feb 22, 2014, at 01:44 , ewaldo simon <ewa...@gm...> wrote:
> Dear sleuthkit user mailing list, can anyone help me with this, I am
> trying to recover some orphan files.
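One more wrinkle worth spelling out as a hedged sketch: istat's sector list for this file (20896-20943, in the original message below) is relative to the start of the filesystem, which itself begins at sector 63 of the image, so a dd against the raw image needs that offset added to skip. blkcat sidesteps the arithmetic because it takes filesystem-relative addresses. The file is 24064 bytes spread over 48 allocated sectors, so the final sector carries slack.

Code:
# raw dd: add the partition offset to the filesystem-relative sector
dd if=imagename.001 of=TAGIHA~1.XLS bs=512 skip=$((63 + 20896)) count=48
# or blkcat, which takes filesystem-relative sector addresses directly
blkcat -o 63 imagename.001 20896 48 > TAGIHA~1.XLS
# either way, trim the trailing slack back to the recorded file size
truncate -s 24064 TAGIHA~1.XLS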
From: ewaldo s. <ewa...@gm...> - 2014-02-22 09:44:36
Dear sleuthkit user mailing list, can anyone help me with this? I am trying to recover some orphan files.

1. First, using fls, I found some orphaned files:

Code:
fls -o 63 -r F imagename.001 | grep -i file_name

-/r * 649873: $OrphanFiles/TAGIHAN.xls
r/r * 122506: $OrphanFiles/PT8D15~1.NUG/REKAP TAGIHAN MAR'11.xls
-/r * 1212051: $OrphanFiles/TAGIHA~1.XLS
-/r * 1282702: $OrphanFiles/TAGIHA~1.XLS
-/r * 1374865: $OrphanFiles/TAGIHA~1.XLS
-/r * 1472145: $OrphanFiles/TAGIHA~1.XLS
-/r * 1519249: $OrphanFiles/TAGIHA~1.XLS
-/r * 1571469: $OrphanFiles/TAGIHA~1.XLS

2. Then I used istat to see the metadata of the last file listed above (this is the part that I got wrong the last time):

Code:
istat -o 63 imagename 1571469

Directory Entry: 1571469
Not Allocated
File Attributes: File, Archive
Size: 24064
Name: TAGIHA~1.XLS

Directory Entry Times:
Written: Mon Aug 24 14:26:16 2009
Accessed: Tue Aug 7 00:00:00 2012
Created: Tue Aug 7 09:40:58 2012

Sectors:
20896 20897 20898 20899 20900 20901 20902 20903
20904 20905 20906 20907 20908 20909 20910 20911
20912 20913 20914 20915 20916 20917 20918 20919
20920 20921 20922 20923 20924 20925 20926 20927
20928 20929 20930 20931 20932 20933 20934 20935
20936 20937 20938 20939 20940 20941 20942 20943

This means the directory entry still points to the FAT entries, which in the end point to the sectors used by that file.

3. Now I don't get how to recover TAGIHA~1.XLS.

I've tried using dd:

Code:
dd if=imagefile of=outputfile bs=4096 skip=20896 count=6

and also icat:

icat -o 63 imagename.001 1571496 > TAGI~1.xls

again to no avail.

I've tried recovering with foremost, and it does recover some files, but I need the names of the files; that's why I'm trying to use this method. Please correct me if I'm wrong, and give me a hint where to go from here. I really appreciate your help, thank you.

--
Regards,
Ewaldo Simon
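Two details in the commands above are worth flagging. The icat invocation targets metadata address 1571496, a transposition of the 1571469 that fls and istat report, so it reads a different entry entirely. Also, for an unallocated FAT entry, icat's -r flag asks TSK to apply its deleted-file recovery logic. A corrected attempt, hedged since the clusters may since have been reused:

Code:
icat -r -o 63 imagename.001 1571469 > TAGIHA~1.XLS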
From: Alex N. <ajn...@cs...> - 2014-02-19 17:39:22
Hi Jason,

This is a weird coincidence. I just talked with Dave Ferguson yesterday at a conference, about a paper he had authored years ago on a couple of different ways NTFS reports file content length. There is apparently a "Valid data length" field that is separate from the space actually allocated for file content. From discussion with Dave: at the time the FSFA book was published, the book acknowledged the "Valid data length" field, but said it had never been observed to be different from the other file content length recorded in the MFT.

I haven't scoured TSK's code for the field yet, but I suspect it's part of your problem.

So, sorry to nitpick, but when you say "actual size," what do you mean? What is your source for that number, a field of the standard information attribute? (I'd look through FSFA, but I'm still at the conference.)

--Alex

PS Dave's article is here, paywalled: http://www.tandfonline.com/doi/abs/10.1080/15567280802587965#.UwTnWkJdWK8
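A hedged way to triage one of the affected files from the command line, assuming istat's output for a non-resident NTFS $DATA attribute includes an init_size value alongside the size, and with -o 63 and MFT entry 12345 as placeholders:

Code:
istat -o 63 drive.E01 12345          # compare "size:" and "init_size:" on the $DATA attribute
icat -o 63 drive.E01 12345 > out.bin
md5sum out.bin                       # hash by hand where fiwalk skipped the file

If icat zero-fills the uninitialized tail, the manual hash is at least reproducible regardless of what fiwalk decides to do.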
From: Simson G. <si...@ac...> - 2014-02-19 15:58:04
Do you see the same behavior with other tools? What happens when you try to get the file content via icat?

On Feb 19, 2014, at 9:46 AM, Jason Wright <jwr...@gm...> wrote:
> All,
>
> Recently when examining a number of drives and running fiwalk (sleuthkit 4.1.2
> and libewf 20130416) on each of them, I've noticed that there are a few files
> that don't get hashed by fiwalk.
From: Brian C. <ca...@sl...> - 2014-02-19 15:36:54
That's good news. It shouldn't be required, but it at least means that the core problem is that the MS DLLs being packaged with Autopsy are not being found.

Thanks!

On Feb 19, 2014, at 10:20 AM, Mauro Silva <mau...@gm...> wrote:
> Hi all,
>
> I've just tried to install the 64 bit version of Autopsy 3.0.9 in a clean
> Windows 8 machine and it was having problems. I figured out that it was because
> I didn't have the Microsoft Visual C++ Redistributable Package installed.
From: Mauro S. <mau...@gm...> - 2014-02-19 15:21:54
Hi Brian, I just posted another message explaining that the problem was that Visual C++ was missing. But thanks ;)

On Wed, Feb 19, 2014 at 3:20 PM, Brian Carrier <ca...@sl...> wrote:
> Are you using the 64-bit version?
>
> We've had one report on this that 64-bit failed, but the 32-bit version
> worked, and then the 64-bit version worked. We've also heard of one case
> where the 64-bit never worked even though the 32-bit did.
From: Mauro S. <mau...@gm...> - 2014-02-19 15:20:59
Hi all,

I've just tried to install the 64-bit version of Autopsy 3.0.9 on a clean Windows 8 machine and it was having problems. I figured out that it was because I didn't have the Microsoft Visual C++ Redistributable Package installed.

After installing Microsoft Visual C++ (http://www.microsoft.com/en-us/download/details.aspx?id=13523) and reinstalling Autopsy, everything started working.

Thought it might help someone.
Best regards,
Mauro Silva
From: Brian C. <ca...@sl...> - 2014-02-19 15:20:41
Are you using the 64-bit version?

We've had one report on this that 64-bit failed, but the 32-bit version worked, and then the 64-bit version worked. We've also heard of one case where the 64-bit never worked even though the 32-bit did.

Another test that could be useful to track this down is to run listdlls from a command prompt while Autopsy is running and send me the output. I'm not sure if it is trying to load an older version of the library, since we have moved its location around.

http://download.sysinternals.com/files/ListDlls.zip

listdlls autopsy64.exe > autopsy_dlls.txt

thanks,
brian

On Feb 19, 2014, at 9:45 AM, Mauro Silva <mau...@gm...> wrote:
> Hi,
>
> I've run that and it tells me that it can't find the following dependencies:
> libewf.dll
> msvcp100.dll
> msvcr100.dll
> zlib.dll
From: Jason W. <jwr...@gm...> - 2014-02-19 14:46:34
All,

Recently, when examining a number of drives and running fiwalk (sleuthkit 4.1.2 and libewf 20130416) on each of them, I've noticed that there are a few files that don't get hashed by fiwalk. The files are allocated and do have a size and a file signature, but no hash. I've also noticed that the files that are not getting hashed are similar in that the initialized size from the data attribute is smaller than the actual size. Has anyone come across this situation, and is there a way to tell fiwalk to hash them anyway?

As a footnote, the istat output for these files shows the clusters at the end to be zero. In other words, they don't have a cluster assignment. The clusters assigned and listed for these files are as many as needed for the initialized size. The remaining clusters that would be needed above the initialized size, up to the actual size, have no assignments. This makes sense, since when parsing the data attribute, the first byte run is described as expected and followed by 0x00, which would indicate an end of the cluster contents of the file. I am wondering if this has some effect on how fiwalk hashes, and results in these files not getting hashed.

R/

Jason Wright
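A hedged way to spot the affected files at scale, assuming fiwalk's DFXML output carries one fileobject element per file with a hashdigest child when hashing succeeded (drive.E01 is a placeholder):

Code:
fiwalk -X drive.xml drive.E01
grep -c '<fileobject>' drive.xml            # total file objects
grep -c "hashdigest type='md5'" drive.xml   # how many actually received a hash

A large gap between the two counts would confirm fiwalk is silently skipping these files rather than failing loudly.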
From: Mauro S. <mau...@gm...> - 2014-02-19 14:45:15
Hi,

I've run that and it tells me that it can't find the following dependencies:
libewf.dll
msvcp100.dll
msvcr100.dll
zlib.dll

Best Regards,
Mauro Silva

> Thanks Joe. Can you download depends.exe (http://www.dependencywalker.com/depends22_x64.zip
> for the 64-bit version), open the C:\Users\joachim\AppData\Local\Temp\libtsk_jni.dll file,
> and let me know what it says? Or send me a screen shot offline?
>
> For background on this issue, these are The Sleuth Kit libraries that we are loading behind
> the scenes. We've done various approaches with where to put the libraries that libtsk depends
> on, such as libewf. Not sure why it is failing on your system though...
>
> Thanks!
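For anyone reproducing this, a hedged quick check from a Windows command prompt: msvcp100.dll and msvcr100.dll ship with the Visual C++ 2010 redistributable and normally sit on the PATH in System32, so where should locate them on a healthy system (libewf.dll and zlib.dll are bundled with Autopsy itself):

Code:
where msvcr100.dll msvcp100.dll

If where cannot find them, installing the VC++ 2010 redistributable, as Mauro reports elsewhere in this thread, is the likely fix.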
From: Alex N. <ajn...@cs...> - 2014-02-19 01:00:50
To my knowledge, this hasn't been addressed on-list before. If you write a quick shell session example noting its use (cat'ing into a config file), it might be easy to get a patch accepted. (I'm just guessing, though.)

I've recently found myself wanting a clamscan run in tandem with Fiwalk. It'd be nice to see this shipped.

--Alex

On Feb 13, 2014, at 09:32 , Kam Woods <kam...@gm...> wrote:
> There's a script and a config file (ficlam.sh and clamconfig.txt) that are in the
> TSK git repo (https://github.com/sleuthkit/sleuthkit/tree/master/tools/fiwalk/plugins)
> but not in the 4.1.3 .tar.gz that's distributed on the TSK site.
>
> They seem to work fine. Is there a reason they're excluded from the .tar.gz?
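The kind of session Alex asks for might look like the following, hedged since the exact pattern/method line depends on what the repo's clamconfig.txt actually contains; the dgi method and the -c flag are fiwalk's plugin mechanism, and image.raw is a placeholder:

Code:
cat > clamconfig.txt <<'EOF'
*  dgi  sh ficlam.sh
EOF
fiwalk -c clamconfig.txt -X out.xml image.raw

Each file fiwalk walks would then be handed to ficlam.sh, with the clamscan result folded into that file's DFXML record.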
From: Kam W. <kam...@gm...> - 2014-02-13 17:32:13
There's a script and a config file (ficlam.sh and clamconfig.txt) that are in the TSK git repo (https://github.com/sleuthkit/sleuthkit/tree/master/tools/fiwalk/plugins) but not in the 4.1.3 .tar.gz that's distributed on the TSK site.

They seem to work fine. Is there a reason they're excluded from the .tar.gz? (Apologies if this has been addressed previously on-list.)

Kam
From: Jon S. <jo...@li...> - 2014-02-13 15:31:50
TSK_FS_NAME now has the par_addr field, which contains the meta address (inode number/file ID) of the parent. This is a welcome addition, so, first, thanks for adding it.

I know that if you do a full walk of the filesystem you may get some TSK_FS_FILE structs which only have a TSK_FS_NAME struct and no TSK_FS_META struct: remembrances of files past. In such cases TSK was not able to associate the directory entry with an inode, and meta_addr is more of a historical curiosity than anything (mmmmaybe you can find a trace of the old file in that inode's slack). So far, so good.

The question is: if I get a TSK_FS_FILE struct that only has a TSK_FS_NAME struct, is there a guarantee that par_addr will point to a valid, correct inode, or is that suspect, too? Put another way (I think this is isomorphic): can you have one of these no-meta TSK_FS_FILE structs whose parent directory is also a no-meta TSK_FS_FILE?

TIA,

Jon

--
Jon Stewart, Principal
(646) 719-0317 | jo...@li... | Arlington, VA
From: Brian C. <ca...@sl...> - 2014-02-07 17:10:18
Has anyone else seen this problem? We haven't been able to recreate it, and the problem went away for Joe somewhere in between installing the 32-bit version and the 64-bit version again.

thanks,
brian

On Feb 5, 2014, at 6:56 AM, HC (Info) <in...@ha...> wrote:
> Hi there,
>
> After installing Autopsy all installed plugins (including Autopsy-Core) are not activated.
> If I activate Autopsy-Core manually, there will be an error:
>
> Activation failed: StandardModule:org.sleuthkit.autopsy.core jarFile: C:\Program Files\Autopsy-3.0.9\autopsy\modules\org-sleuthkit-autopsy-core.jar: java.lang.UnsatisfiedLinkError: C:\Users\joachim\AppData\Local\Temp\libtsk_jni.dll: Can't find dependent libraries
From: Brian C. <ca...@sl...> - 2014-02-05 12:29:07
|
Thanks Joe. Can you download depends.exe (http://www.dependencywalker.com/depends22_x64.zip for the 64-bit version), open the C:\Users\joachim\AppData\Local\Temp\libtsk_jni.dll file, and let me know what it says? Or send me a screenshot offline?

For background on this issue: these are The Sleuth Kit libraries that we load behind the scenes. We've tried various approaches to where we put the libraries that libtsk depends on, such as libewf. Not sure why it is failing on your system though...

Thanks!

On Feb 5, 2014, at 6:56 AM, HC (Info) <in...@ha...> wrote:
> Hi there,
>
> After installing Autopsy, none of the installed plugins (including Autopsy-Core) are activated.
> If I activate Autopsy-Core manually, I get an error:
>
> Activation failed: StandardModule:org.sleuthkit.autopsy.core jarFile: C:\Program Files\Autopsy-3.0.9\autopsy\modules\org-sleuthkit-autopsy-core.jar: java.lang.UnsatisfiedLinkError: C:\Users\joachim\AppData\Local\Temp\libtsk_jni.dll: Can't find dependent libraries
>
> Here is my configuration:
>
> Product Version: Autopsy 3.0.9
> Java: 1.7.0_25; Java HotSpot(TM) 64-Bit Server VM 23.25-b01
> Runtime: Java(TM) SE Runtime Environment 1.7.0_25-b17
> System: Windows 7 version 6.1 running on amd64; Cp1252; de_DE (autopsy)
> User directory: C:\Users\joachim\AppData\Roaming\.autopsy\dev
> Cache directory: C:\Users\joachim\AppData\Roaming\.autopsy\dev\var\cache
>
> Regards
> Joe
|
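For anyone without Dependency Walker handy, the Visual Studio toolchain offers a text-only alternative (a substitute for, not part of, Brian's suggestion):

C:\> dumpbin /dependents C:\Users\joachim\AppData\Local\Temp\libtsk_jni.dll

With either tool, note that the DLL named in a "Can't find dependent libraries" error is itself present; it is one of the libraries it imports (such as libewf) that fails to resolve.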
From: HC (Info) <in...@ha...> - 2014-02-05 12:10:10
|
Hi there,

After installing Autopsy, none of the installed plugins (including Autopsy-Core) are activated. If I activate Autopsy-Core manually, I get an error:

Activation failed: StandardModule:org.sleuthkit.autopsy.core jarFile: C:\Program Files\Autopsy-3.0.9\autopsy\modules\org-sleuthkit-autopsy-core.jar: java.lang.UnsatisfiedLinkError: C:\Users\joachim\AppData\Local\Temp\libtsk_jni.dll: Can't find dependent libraries

Here is my configuration:

Product Version: Autopsy 3.0.9
Java: 1.7.0_25; Java HotSpot(TM) 64-Bit Server VM 23.25-b01
Runtime: Java(TM) SE Runtime Environment 1.7.0_25-b17
System: Windows 7 version 6.1 running on amd64; Cp1252; de_DE (autopsy)
User directory: C:\Users\joachim\AppData\Roaming\.autopsy\dev
Cache directory: C:\Users\joachim\AppData\Roaming\.autopsy\dev\var\cache

Regards
Joe
|
From: Brian C. <ca...@sl...> - 2014-02-04 15:31:13
|
Autopsy 3.0.9 is out. It has lots of minor cleanup and a few new features; the list is below.

http://sleuthkit.org/autopsy/download.php

Other things to note:
- The next Autopsy training class is Mar 19-20 in Herndon, VA. The early-bird price ends Feb 19 (http://www.basistech.com/digital-forensics/training/).
- We're shooting for a 3.1 release in March.
- The Student Autopsy Module Development challenge is open (http://www.basistech.com/digital-forensics/autopsy-student-development-contest/).

Changes in 3.0.9:
• New "EnCase-style" report that lists files and metadata in a tab-delimited file
• Removed xdock definitions; some claim this helps with memory problems
• Regular-expression keyword search works on file names
• Fixed the Thunderbird parser for subjects and dates
• Fixed errors in the hex viewer
• HTML text is better formatted
• More lazy loading to help performance with big folders and sets of files
• Times can be displayed in local time or GMT
• Changed the report wizard to make one report at a time
• Enhanced reporting of keyword search module errors
|
From: Alex N. <ajn...@cs...> - 2014-02-02 17:44:13
|
On Feb 2, 2014, at 12:15 , Simson Garfinkel <si...@ac...> wrote:

> Thanks for the explanation. I think that this is an aspect of the SleuthKit API that fiwalk is not properly handling. It seems to be sending those attributes when the primary data is not available. Is there a flag being set that says "these data are different from what you are used to getting"?

(Somebody else should chime in here; I'm not sure offhand.)

> Failing that, we can make fiwalk simply suppress hashing and generation of the byte runs on 0-length files.

I think that's the most sensible.

> As for your proposal below, would anybody use it?

It would benefit anybody who wants to automate analysis of those non-standard streams. It's useful to know that those streams are present, as sometimes Windows uses a named data stream on a file to flag that it was downloaded. I think the XML format below will induce the most straightforward API for running a presence check. I know a small number of people whose past work would have directly benefited from having this around. I'd like to use it too in some future work.

--Alex

> On Feb 1, 2014, at 3:11 PM, Alex Nelson <ajn...@cs...> wrote:
>
>> Hi Simson,
>>
>> The NTFS $Secure file is a weird one. Its primary data is stored in the $DATA attribute with the name $SDS, and $SDH and $SII are two $INDEX attributes for the same file. I think there are a couple of other NTFS special files that have multiple indices like this; there are definitely files with non-standard indices.
>>
>> Normally, the default, unnamed data attribute of a file would supply the content that Fiwalk hashes. In the case of $Secure, that attribute is in fact 0-length. (Absent, even, according to your istat.) The real data of $Secure is in the named data stream "$Secure:$SDS". Fiwalk and DFXML don't presently have a way to express that aside from making a whole different fileobject; so, $Secure would be a 0-length file, and $Secure:$SDS would be a 1.9 MB (for you) file. Right now, Fiwalk is just quietly hashing the content of $Secure:$SDS; I forget if there's an explicit check for that or it's a side effect of something.
>>
>> There is further expression weirdness if you want to express $Secure:$SDH or $Secure:$SII, since those aren't technically content; they're indices.
>>
>> I've been thinking about how to express those in DFXML for a while, because the problem also arises for named data streams in general. What's the hash of a file with multiple data streams? I hope you'll agree that there should be one hash per stream, instead of one hash per file.
>>
>> I think the best way to approach this problem is to define a new child element of a <fileobject>: a named data stream (I think it's abbreviated NDS in the Carrier book; it's an "alternate" data stream elsewhere). So, in $Secure's case:
>>
>> <fileobject>
>>   <filename>$Secure</filename>
>>   <ntfs:nds>
>>     <tsk:icat_id>9-128-11</tsk:icat_id>
>>     <parent_object>
>>       <inode>9</inode>
>>     </parent_object>
>>     <filename>$SDS</filename>
>>     <byte_runs><!--As expected...--></byte_runs>
>>     <hashdigest type="sha1">1234abcd...</hashdigest>
>>   </ntfs:nds>
>> </fileobject>
>>
>> The named data stream elements would be a subset of the <fileobject> elements.
>>
>> Similarly, there can be elements for the NTFS index root and index allocation attributes, which would also be children of a fileobject.
>>
>> <ntfs:index_root>
>>   <tsk:icat_id>9-144-12</tsk:icat_id>
>>   <parent_object>
>>     <inode>9</inode>
>>   </parent_object>
>>   <filename>$SII</filename>
>>   <byte_runs><!--The resident data in the MFT entry's attribute; will take some engineering to get this right, I think--></byte_runs>
>>   <hashdigest type="sha1">5678...</hashdigest>
>> </ntfs:index_root>
>>
>> <ntfs:index_allocation>
>>   <tsk:icat_id>9-160-13</tsk:icat_id>
>>   <parent_object>
>>     <inode>9</inode>
>>   </parent_object>
>>   <filename>$SII</filename>
>>   <byte_runs><!--Of the index clusters--></byte_runs>
>>   <hashdigest type="sha1">9abc...</hashdigest>
>> </ntfs:index_allocation>
>>
>> This approach wouldn't require changes to the DFXML schema.
>>
>> Do you think this solves the problem of extra indices and data streams for NTFS?
>>
>> --Alex
>>
>> On Fri, Jan 31, 2014 at 9:38 PM, Simson Garfinkel <si...@ac...> wrote:
>>
>> I have an NTFS disk image. There is a file on it that the SleuthKit reports has 0 length, but fiwalk reports that it has several byte runs. Currently fiwalk is computing the hash of those byte runs and reporting it as the file hash, which is the wrong behavior.
>>
>> Below is the relevant fls output, followed by the istat output and the XML that fiwalk dumps. It looks to me that there are several attributes; one of them, the $SDS attribute, is 1.9 MB in length.
>>
>> Clearly the attributes should not be hashed to determine the file's hash, so there is a bug in fiwalk that I need to fix. From the API, how do I determine that the data callback is being given an attribute that shouldn't be hashed?
>>
>> Here is the relevant part of the directory list with fls:
>>
>> r/r * 9-144-16(realloc): title_ctr[1].gif:$SDH
>> r/r * 9-144-18(realloc): title_ctr[1].gif:$SII
>> r/r * 9-128-19(realloc): title_ctr[1].gif:$SDS
>>
>> Here is the istat:
>>
>> $ istat -o 63 SG1-1064.E01 9-144-16
>> MFT Entry Header Values:
>> Entry: 9        Sequence: 9
>> $LogFile Sequence Number: 586416701932
>> Allocated File
>> Links: 1
>>
>> $STANDARD_INFORMATION Attribute Values:
>> Flags: Hidden, System
>> Owner ID: 0
>> Security ID: 257 (S-1-5-32-544)
>> Created: 2004-07-12 16:58:51 (EDT)
>> File Modified: 2004-07-12 16:58:51 (EDT)
>> MFT Modified: 2004-07-12 16:58:51 (EDT)
>> Accessed: 2004-07-12 16:58:51 (EDT)
>>
>> $FILE_NAME Attribute Values:
>> Flags:
>> Name: $Secure
>> Parent MFT Entry: 5    Sequence: 5
>> Allocated Size: 0      Actual Size: 0
>> Created: 2076-11-29 03:54:34 (EST)
>> File Modified: 2076-11-29 03:54:34 (EST)
>> MFT Modified: 2076-11-29 03:54:34 (EST)
>> Accessed: 2076-11-29 03:54:34 (EST)
>>
>> $ATTRIBUTE_LIST Attribute Values:
>> Type: 16-0 MFT Entry: 9 VCN: 0
>> Type: 48-7 MFT Entry: 9 VCN: 0
>> Type: 128-0 MFT Entry: 178770 VCN: 0
>> Type: 144-16 MFT Entry: 9 VCN: 0
>> Type: 144-18 MFT Entry: 9 VCN: 0
>> Type: 160-2 MFT Entry: 6781 VCN: 0
>> Type: 160-3 MFT Entry: 6781 VCN: 0
>> Type: 176-4 MFT Entry: 6781 VCN: 0
>> Type: 176-5 MFT Entry: 6781 VCN: 0
>>
>> Attributes:
>> Type: $STANDARD_INFORMATION (16-0) Name: N/A Resident size: 72
>> Type: $ATTRIBUTE_LIST (32-17) Name: N/A Non-Resident size: 344 init_size: 344
>> 549297
>> Type: $FILE_NAME (48-7) Name: N/A Resident size: 80
>> Type: $INDEX_ROOT (144-16) Name: $SDH Resident size: 56
>> Type: $INDEX_ROOT (144-18) Name: $SII Resident size: 56
>> Type: $DATA (128-19) Name: $SDS Non-Resident size: 1960040 init_size: 1960040
>> 390176 390177 390178 390179 390180 390181 390182 390183
>> 390184 390185 390186 390187 390188 390189 390190 390191
>> 390192 390193 390194 390195 390196 390197 390198 390199
>> 390200 390201 390202 390203 390204 390205 390206 390207
>> 390208 390209 390210 390211 390212 390213 390214 390215
>> 390216 390217 390218 390219 390220 390221 390222 390223
>> 390224 390225 390226 390227 390228 390229 390230 390231
>> 390232 390233 390234 390235 390236 390237 390238 390239
>> 390240 487278 487279 663659 638527 481604 79306 706903
>> 785371 9883 610353 610355 610371 610270 600412 619380
>> 596219 569580 699395 717528 368206 370944 482186 489621
>> 531746 532353 6076591 6060120 7432272 7403576 7402093 7400205
>> 6386639 6424782 6425853 6308043 6542545 6496155 6556126 6624373
>> 6899574 6900049 7130039 7125434 7125816 7142050 7140859 7137126
>> 7133695 7133209 7132301 7131597 7131365 7131351 7126255 7125530
>> 7124837 7124653 7123341 7132208 8324618 8611460 9711057 3778491
>> 7299721 7299722 7299723 7299724 7299725 7299726 7299727 7299728
>> 7299729 7299730 7299731 7299732 7299733 7299734 7299735 7299736
>> 7299737 7299738 7299739 7299740 7299741 7299742 7299743 7299744
>> 7299745 7299746 7299747 7299748 7299749 7299750 7299751 7299752
>> 7299753 7299754 7299755 7299756 7299757 7299758 7299759 7299760
>> 7299761 7299762 7299763 7299764 7299765 7299766 7299767 7299768
>> 7299769 7299770 7299771 7299772 7299773 7299774 7299775 7299776
>> 7299777 7299778 7299779 7299780 7299781 7299782 7299783 7299784
>> 7299785 6678550 4855356 3758831 3758828 3758502 3758435 3757936
>> 3757899 3757896 3758437 3764874 3764875 3772231 3381010 3373829
>> 3354308 3373799 3066264 3050316 3050323 3639068 3579136 5890982
>> 5005380 5391161 744788 744567 742412 2521794 2544838 2544980
>> 2545877 2547574 2547572 5537845 5427693 5390572 5281512 5274421
>> 5274391 5005604 5005540 4996110 4996004 4919745 4810409 4810410
>> 4810411 4806319 4818401 4835951 7299630 7410654 7380435 6777589
>> 7518578 3611101 3765721 3765626 3765627 3768851 3744748 3732304
>> 3465268 3465269 3465270 3465271 3465272 3465273 3465274 3465275
>> 3465276 3465277 3465278 3465279 3465280 3465281 3465282 3465283
>> 3465284 3465285 3465286 3465287 3465288 3465289 3465290 3465291
>> 3465292 3465293 3465294 3465295 3465296 3465297 3465298 3465299
>> 3465300 3465301 3465302 3465303 3465304 3465305 3465306 3465307
>> 3465308 3465309 3465310 3465311 3465312 3465313 3465314 3465315
>> 3465316 3465317 3465318 3465319 3465320 3465321 3465322 3465323
>> 3465324 3465325 3465326 3465327 3465328 3465329 3465330 3465331
>> 3465332 3466493 4851942 296156 3420790 3421259 3421441 3421592
>> 3421723 3400696 3400697 3400698 3400699 3400700 3400701 3400702
>> 3400703 3400704 3400705 3400665 3403006 3403025 3403181 3404423
>> 3406406 3406408 3393559 3385169 3379651 3370783 3368782 3368665
>> 3366669 3350989 3350453 3350833 3353678 3342048 3341341 3333236
>> 3333234 3333000 3331552 3331254 3331241 3330349 3328982 3328912
>> 3328910 3328053 3327416 3327413 3327387 3322469 3321112 3311241
>> 3304448 3302498 3300538 3300466 3294442 3294089 3291273 3286158
>> 3734663 3734664 3734665 3734666 3734667 3734668 3734669 3734670
>> 3734671 3734672 3734673 3734674 3734675 3734676 3734677 3734678
>> 3734679 3734680 3734681 3734682 3734683 3734684 3734685 3734686
>> 3734687 3734688 3734689 3734690 3734691 3734692 3734693 3734694
>> 3734695 3734696 3734697 3734698 3734699 3734700 3734701 3734702
>> 3734703 3734704 3734705 3734706 3734707 3734708 3734709 3734710
>> 3734711 3734712 3734713 3734714 3734715 3734716 3734717 3734718
>> 3734719 3734720 3734721 3734722 3734723 3734724 3734725 3734726
>> 3734727 3732144 3730026 3730023 3729973 3727747 3727641 3727639
>> 3727631 3727124 3727103 3726877 3726430 3726011 3720147 3720217
>> 3720248 3722010 3722064 3722169 3725802 3748438 3756656 798823
>> 780465 520279 148378 378527 355345 346371 346370
>> Type: $INDEX_ALLOCATION (160-20) Name: $SDH Non-Resident size: 262144 init_size: 262144
>> 78369 610316 610317 610318 610319 700617 700640 695953
>> 690523 692402 1262355 1262344 4855163 4855576 4855596 4853877
>> 4858975 3784815 3762045 3764806 3757945 3757507 366474 7299002
>> 7299012 7298974 3293690 5912759 5915587 5916360 5917039 3758551
>> 3778787 3778785 4850977 4851160 4850782 4851841 4852120 4849070
>> 4847515 4845527 4845314 4844785 4844745 4842047 4841786 4841724
>> 4837114 4837045 3772243 3761602 378528 616442 618862 756370
>> 756371 756372 756373 756374 756375 756376 756377 756378
>> Type: $INDEX_ALLOCATION (160-21) Name: $SII Non-Resident size: 249856 init_size: 249856
>> 511627 478499 1175609 610352 663398 570363 164501 312115
>> 6076594 616643 752222 306845 548567 549279 549339 687886
>> 797375 798538 798352 799153 799352 799355 799361 787025
>> 755996 1589868 1589999 792974 8310299 8306866 8306894 8305736
>> 1583227 1592148 1592149 3532863 3532864 3533327 4017458 4017459
>> 4017460 4017461 4017462 4017437 4017505 4017509 4017511 4017272
>> 4017261 4017457 4016963 4016617 4016615 4016372 4016962 4017433
>> 4017435 4015948 3772658 1245269 1010788
>> Type: $BITMAP (176-22) Name: $SDH Resident size: 16
>> Type: $BITMAP (176-23) Name: $SII Resident size: 8
>>
>> Here is the XML that fiwalk dumps:
>>
>> <fileobject>
>> <filename>Documents and Settings/*******/Local Settings/Temporary Internet Files/Content.IE5/89MRS52V/title_ctr[1].gif</filename>
>> <partition>1</partition>
>> <id>162982</id>
>> <name_type>r</name_type>
>> <filesize>0</filesize>
>> <alloc>1</alloc>
>> <used>1</used>
>> <inode>9</inode>
>> <meta_type>1</meta_type>
>> <mode>365</mode>
>> <nlink>1</nlink>
>> <uid>0</uid>
>> <gid>0</gid>
>> <mtime>2004-07-12T20:58:51Z</mtime>
>> <ctime>2004-07-12T20:58:51Z</ctime>
>> <atime>2004-07-12T20:58:51Z</atime>
>> <crtime>2004-07-12T20:58:51Z</crtime>
>> <seq>9</seq>
>> <byte_runs>
>> <byte_run file_offset='0' fs_offset='1598160896' img_offset='1598193152' len='266240'/>
>> <byte_run file_offset='266240' fs_offset='1995890688' img_offset='1995922944' len='8192'/>
>> <byte_run file_offset='274432' fs_offset='2718347264' img_offset='2718379520' len='4096'/>
>> <byte_run file_offset='278528' fs_offset='2615406592' img_offset='2615438848' len='4096'/>
>> <byte_run file_offset='282624' fs_offset='1972649984' img_offset='1972682240' len='4096'/>
>> <byte_run file_offset='286720' fs_offset='324837376' img_offset='324869632' len='4096'/>
>> ...
>> <byte_run file_offset='1953792' fs_offset='1418735616' img_offset='1418767872' len='4096'/>
>> <byte_run file_offset='1957888' fs_offset='1418731520' img_offset='1418763776' len='2152'/>
>> </byte_runs>
>> <hashdigest type='md5'>14e29e689be66747926c29e7b6d8da1c</hashdigest>
>> <hashdigest type='sha1'>4755f96f4cc83ab7bf8827d361e2d66d1086f0cf</hashdigest>
>> </fileobject>
|
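On the API question Simson raises above, one hedged possibility (a sketch against the TSK 4.x attribute API; this is not fiwalk's actual code, and fiwalk's callback plumbing is not shown): hash only the default attribute that tsk_fs_file_attr_get() returns, report named $DATA streams separately, and skip indices entirely.

/* Sketch under stated assumptions: classify a TSK_FS_ATTR before hashing. */
#include <tsk/libtsk.h>

/* A file's hash should cover only its default (unnamed) attribute. */
static int
attr_is_default(TSK_FS_FILE *file, const TSK_FS_ATTR *attr)
{
    const TSK_FS_ATTR *def = tsk_fs_file_attr_get(file);
    /* def == NULL means no default data attribute; that is the 0-length
     * case istat shows for $Secure, so nothing gets hashed. */
    return def != NULL && attr->type == def->type && attr->id == def->id;
}

/* Named $DATA streams (e.g. $Secure:$SDS) are content, but belong in
 * their own element (the proposed <ntfs:nds>), with their own hash. */
static int
attr_is_named_data_stream(const TSK_FS_ATTR *attr)
{
    return attr->type == TSK_FS_ATTR_TYPE_NTFS_DATA &&
           attr->name != NULL && attr->name[0] != '\0';
}

/* $INDEX_ROOT / $INDEX_ALLOCATION attributes are indices, not content;
 * they map to the proposed <ntfs:index_root>/<ntfs:index_allocation>. */
static int
attr_is_index(const TSK_FS_ATTR *attr)
{
    return attr->type == TSK_FS_ATTR_TYPE_NTFS_IDXROOT ||
           attr->type == TSK_FS_ATTR_TYPE_NTFS_IDXALLOC;
}

Iterating a file's attributes with tsk_fs_file_attr_getsize() and tsk_fs_file_attr_get_idx() and applying these checks would let fiwalk hash $Secure as a 0-length file while still reporting $SDS, $SDH, and $SII under the elements proposed above.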