sleuthkit-users Mailing List for The Sleuth Kit (Page 20)
Brought to you by:
carrier
From: Lloyd <llo...@gm...> - 2016-01-15 04:31:09
|
Thanks Brian. Yes, the drive is mounted. It is mounted at "F:", so I tried TSK_IMG_INFO *tsk_img = tsk_img_open_sing(_T("\\\\.\\F:"), TSK_IMG_TYPE_RAW, 512); and it gives the correct result. Why could this ("\\?\usbstor#...") be failing? Autopsy also correctly loads this as a "local disk". Isn't Autopsy also using the "\\?\usbstor" name to open the device? I tried to check the code of Autopsy, but as I am not familiar with Java, I couldn't locate the calls to "tsk_img_open". Any help, hints, or tips would be greatly appreciated. Thanks, Lloyd On Thu, Jan 14, 2016 at 10:11 PM, Brian Carrier <ca...@sl...> wrote: > Is the drive mounted? What happens if you use something like \\.\G:? > > > On Jan 14, 2016, at 5:54 AM, Lloyd <llo...@gm...> wrote: > > > > Hi, > > > > I am using libtsk (sleuthkit 4.2) to open and find files in a "live usb > disk (4gb)". For that I have used tsk_img_open_sing with TSK_IMG_TYPE_RAW. > The device name starts with "\\?\usbstor#..." > > > > The files listed in this are incomplete and wrong. > > > > So I took a raw image of the disk and again fed to tsk the same way, > this time it shows the result correctly. > > > > Am I doing something wrong? When I checked the source of > "tsk_img_open_sing " it shows that opening "winobj" is supported. > > > > Any guidance is greatly appreciated. > > > > Thanks, > > Lloyd > > > ------------------------------------------------------------------------------ > > Site24x7 APM Insight: Get Deep Visibility into Application Performance > > APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month > > Monitor end-to-end web transactions and take corrective actions now > > Troubleshoot faster and improve end-user experience. Signup Now! > > > http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140_______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > |
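A side note on the path that worked: the C literal _T("\\\\.\\F:") contains doubled backslashes only because of C string escaping; the runtime string is the six-character path \\.\F:. A small Python sketch of building such volume paths (illustrative only — win32_volume_path is a hypothetical helper, not a TSK or Win32 API):

```python
# The C literal "\\\\.\\F:" denotes the runtime string \\.\F:
# (each \\ in C source is a single backslash at runtime).
volume_path = r"\\.\F:"            # raw string: 6 characters
assert len(volume_path) == 6
assert volume_path == "\\\\.\\F:"  # the escaped spelling, as in C source

def win32_volume_path(drive_letter: str) -> str:
    """Build a \\.\X: device path for a mounted drive letter.
    (Hypothetical helper for illustration.)"""
    letter = drive_letter.rstrip(":").upper()
    if len(letter) != 1 or not letter.isalpha():
        raise ValueError(f"not a drive letter: {drive_letter!r}")
    return rf"\\.\{letter}:"

print(win32_volume_path("f"))   # \\.\F:
```

Note that a \\?\usbstor#... device-interface path names the USB storage device itself, while \\.\F: names the mounted volume; the two are not interchangeable openings of the same byte range, which may be part of why the results differ.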
From: Ketil F. <ke...@fr...> - 2016-01-14 21:14:10
|
5 days sounds excessive. Autopsy parses the file system(s), traversing all files and folders it can find, and stores info about this in an SQLite database (unless you've set up a PostgreSQL environment). Where is the disk image stored — is it on network storage, a USB drive, etc.? Where is your Autopsy case directory stored, and can you see how big the file autopsy.db is? What is the filesystem on the disk image? Cheers, Ketil On 14 Jan 2016 20:57, "K Murphy" <km...@ci...> wrote: > > Hello, > > How long should the Add Data Source Wizard (Step 3 of 3) take to run? > > I got a 3 TB drive that has been running for 5 days now. I see in the > progress bar in the pop window it changes directories every now an then. > > Also what is Autopsy doing during this time frame? I ask because the I > turned all of the ingest modules off except for keyword searches. I've seen > that kick off after Wizard is complete. > > Thanks, > K Murphy > > > > ------------------------------------------------------------------------------ > Site24x7 APM Insight: Get Deep Visibility into Application Performance > APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month > Monitor end-to-end web transactions and take corrective actions now > Troubleshoot faster and improve end-user experience. Signup Now! > http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140 > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org > > |
From: K M. <km...@ci...> - 2016-01-14 19:56:55
|
Hello, How long should the Add Data Source Wizard (Step 3 of 3) take to run? I have a 3 TB drive that has been running for 5 days now. I see in the progress bar in the pop-up window that it changes directories every now and then. Also, what is Autopsy doing during this time frame? I ask because I turned all of the ingest modules off except for keyword searches, and I've seen that kick off after the Wizard is complete. Thanks, K Murphy |
From: Brian C. <ca...@sl...> - 2016-01-14 16:42:13
|
Is the drive mounted? What happens if you use something like \\.\G:? > On Jan 14, 2016, at 5:54 AM, Lloyd <llo...@gm...> wrote: > > Hi, > > I am using libtsk (sleuthkit 4.2) to open and find files in a "live usb disk (4gb)". For that I have used tsk_img_open_sing with TSK_IMG_TYPE_RAW. The device name starts with "\\?\usbstor#..." > > The files listed in this are incomplete and wrong. > > So I took a raw image of the disk and again fed to tsk the same way, this time it shows the result correctly. > > Am I doing something wrong? When I checked the source of "tsk_img_open_sing " it shows that opening "winobj" is supported. > > Any guidance is greatly appreciated. > > Thanks, > Lloyd > ------------------------------------------------------------------------------ > Site24x7 APM Insight: Get Deep Visibility into Application Performance > APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month > Monitor end-to-end web transactions and take corrective actions now > Troubleshoot faster and improve end-user experience. Signup Now! > http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140_______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Lloyd <llo...@gm...> - 2016-01-14 10:54:24
|
Hi, I am using libtsk (sleuthkit 4.2) to open and find files on a live USB disk (4 GB). For that I have used tsk_img_open_sing with TSK_IMG_TYPE_RAW. The device name starts with "\\?\usbstor#..." The files listed this way are incomplete and wrong. So I took a raw image of the disk and fed it to TSK the same way; this time it shows the results correctly. Am I doing something wrong? When I checked the source of "tsk_img_open_sing", it shows that opening "winobj" is supported. Any guidance is greatly appreciated. Thanks, Lloyd |
From: Brian C. <ca...@sl...> - 2016-01-07 02:43:48
|
This should be fixed. Thanks. It looks like a regression from an October change. > On Jan 6, 2016, at 2:05 PM, Luís Filipe Nassif <lfc...@gm...> wrote: > > Hi, > > We discover configure is not enabling multithreading by default, neither with --enable-multithreading. To fix the problem, change line 295 of configure.ac from > > ax_pthread_ok=$ax_pthread_ok]) > to > ax_multithread=$ax_multithread]) > > Regards, > Luis > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Luís F. N. <lfc...@gm...> - 2016-01-06 19:05:20
|
Hi, We discovered that configure is not enabling multithreading by default, not even with --enable-multithreading. To fix the problem, change line 295 of configure.ac from ax_pthread_ok=$ax_pthread_ok]) to ax_multithread=$ax_multithread]) Regards, Luis |
From: scottxwalker . <sco...@gm...> - 2015-12-24 18:17:33
|
When I run the ingest modules on a Windows 7 hard drive, it usually takes about 5 to 6 hours for Autopsy 4.0 to complete the ingest. The target hard drive is 500 GB and the utilized space is about 60 GB. I am pulling the data across USB 3.0. My forensics machine has plenty of CPU and memory, and uses solid-state drives. I cannot figure out how to increase the ingest speed. I tried altering the threads-for-ingest setting, but that did not help at all. Is 5 to 6 hours for an ingest normal, or is this running slow for Autopsy 4.0? Thank you, Scott |
From: Darryl Vo <ro...@gm...> - 2015-12-22 23:04:13
|
Oh my mistake, I didn't realize I was giving it a relative path to the Eclipse working directory. Doing that and adding in more dependencies to the class path fixed my problem, thank you! On Tue, Dec 22, 2015 at 2:45 PM, Eamonn Saunders <ea...@ya...> wrote: > > Is that your actual code or are you giving newCase the path to where the > SQLite database should be created (which is what the documentation calls > for)? > > On Tue, Dec 22, 2015 at 5:27 PM, Darryl Vo > <ro...@gm...> wrote: > Hello all, > > I currently am trying to create a simple java program with the java > bindings. Everything is installed correctly. > I am currently using eclipse on Ubuntu. I have added the jar from dist to > the class path. > > When I simply try to call SleuthkitCase sk = > SleuthkitCase.newCase("database"); > I get this as the error message when catching TskCoreException : > > Failed to create case database at database > > It does however create a file. When I try to open it with a > SleuthKitCase.openCase(), it also fail to open it. > What's wrong? > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org > > |
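The pitfall above is worth spelling out: a bare relative name like "database" resolves against the process's current working directory, which under an IDE such as Eclipse is usually the project directory rather than where you expect. A hedged Python sketch of the same resolution rule (resolve_case_db_path is a hypothetical helper for illustration, not a TSK API; the Java bindings presumably apply the same relative-vs-absolute logic to the path handed to SleuthkitCase.newCase()):

```python
import os

def resolve_case_db_path(path: str) -> str:
    """Show where a case-database path actually lands: relative paths
    resolve against the current working directory of the process."""
    return path if os.path.isabs(path) else os.path.abspath(path)

# A bare name like "database" lands in the current working directory:
resolved = resolve_case_db_path("database")
assert resolved == os.path.join(os.getcwd(), "database")

# An absolute path is unambiguous, which is the safer thing to pass:
explicit = resolve_case_db_path(os.path.join(os.getcwd(), "case.db"))
assert os.path.isabs(explicit)
```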
From: Eamonn S. <ea...@ya...> - 2015-12-22 22:46:06
|
Is that your actual code or are you giving newCase the path to where the SQLite database should be created (which is what the documentation calls for)? On Tue, Dec 22, 2015 at 5:27 PM, Darryl Vo<ro...@gm...> wrote: Hello all, I currently am trying to create a simple java program with the java bindings. Everything is installed correctly. I am currently using eclipse on Ubuntu. I have added the jar from dist to the class path. When I simply try to call SleuthkitCase sk = SleuthkitCase.newCase("database"); I get this as the error message when catching TskCoreException : Failed to create case database at database It does however create a file. When I try to open it with a SleuthKitCase.openCase(), it also fail to open it. What's wrong? ------------------------------------------------------------------------------ _______________________________________________ sleuthkit-users mailing list https://lists.sourceforge.net/lists/listinfo/sleuthkit-users http://www.sleuthkit.org |
From: Luís F. N. <lfc...@gm...> - 2015-12-22 22:37:52
|
Did you include the xerial sqlite dependency jar in classpath too? 2015-12-22 20:27 GMT-02:00 Darryl Vo <ro...@gm...>: > Hello all, > > I currently am trying to create a simple java program with the java > bindings. Everything is installed correctly. > I am currently using eclipse on Ubuntu. I have added the jar from dist to > the class path. > > When I simply try to call SleuthkitCase sk = > SleuthkitCase.newCase("database"); > I get this as the error message when catching TskCoreException : > > Failed to create case database at database > > It does however create a file. When I try to open it with a > SleuthKitCase.openCase(), it also fail to open it. > What's wrong? > > > ------------------------------------------------------------------------------ > > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org > > |
From: Darryl Vo <ro...@gm...> - 2015-12-22 22:27:49
|
Hello all, I am currently trying to create a simple Java program with the Java bindings. Everything is installed correctly. I am currently using Eclipse on Ubuntu. I have added the jar from dist to the class path. When I simply try to call SleuthkitCase sk = SleuthkitCase.newCase("database"); I get this as the error message when catching TskCoreException: Failed to create case database at database It does, however, create a file. When I try to open it with SleuthkitCase.openCase(), it also fails to open. What's wrong? |
From: Luís F. N. <lfc...@gm...> - 2015-12-17 18:35:54
|
Hi Brian, Thank you very much for your attention. I am building from source. The fix resolved the issue like a charm, tested with 2 problematic thumbdrives I still have with me. Thank you again, Luis 2015-12-17 14:06 GMT-02:00 Brian Carrier <ca...@sl...>: > Hi Luis, > > It looks like it can’t differentiate between it having a DOS or Mac > partition table. Though, it should be able to because the only Mac > partition it found had a size of 0 sectors. Code in github has a fix. > > Are you building from source or an installed version? It unfortunately > looks like tsk_loaddb doesn’t allow you to specify the partition type when > this kind of situation occurs. > > thanks, > brian > > > > > > > On Dec 17, 2015, at 6:38 AM, Luís Filipe Nassif <lfc...@gm...> > wrote: > > > > Hi, > > > > Tsk_Loaddb (4.2 and 4.1.3) is reporting that error message with some > FAT32 thumbdrive images. We can open and browse the file system content > with other forensic tools. > > > > One mmls -v output is below: > > > > tsk_img_open: Type: 0 NumImg: 1 Img1: > /media/sf_F_DRIVE/Imagens/pendrive/unknownFS/Item06ItemArrecadacao08.E01 > > ewf_open: found 1 segment files via libewf_glob > > dos_load_prim: Table Sector: 0 > > ewf_image_read: byte offset: 0 len: 65536 > > dos_load_prim_table: Testing FAT/NTFS conditions > > load_pri:0:0 Start: 63 Size: 7830081 Type: 11 > > load_pri:0:1 Start: 0 Size: 0 Type: 0 > > load_pri:0:2 Start: 0 Size: 0 Type: 0 > > load_pri:0:3 Start: 0 Size: 0 Type: 0 > > bsd_load_table: Table Sector: 1 > > gpt_load_table: Sector: 0 > > gpt_open: Trying other sector sizes > > gpt_open: Trying sector size: 512 > > gpt_load_table: Sector: 0 > > gpt_open: Trying sector size: 1024 > > gpt_load_table: Sector: 0 > > gpt_open: Trying sector size: 2048 > > gpt_load_table: Sector: 0 > > gpt_open: Trying sector size: 4096 > > gpt_load_table: Sector: 0 > > gpt_open: Trying sector size: 8192 > > gpt_load_table: Sector: 0 > > sun_load_table: Trying sector: 0 > > sun_load_table: Trying sector: 1 > > mac_load_table: Sector: 1 > > mac_load: 0 Starting Sector: 0 Size: 0 Type: Status: 0 > > Cannot determine partition type (Mac or DOS at 0) > > > > I will gladly provide any other information requested to help identify > the problem. I can also share (the whole or part of) one of those images in > ewf format. > > > > Any help will be very much appreciated. > > > > Thank you, > > Luis Nassif > > ------------------------------------------------------------------------------ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > |
From: Justin G. <jus...@gm...> - 2015-12-17 16:27:25
|
Brian and Mark-- Thanks for the responses. Disregard my original post... I just opened the case back up in Autopsy and now the hash was present. Go figure... Maybe I just needed to close/re-open it for it to display properly. Or maybe the module wasn't finished processing when I looked. Who knows! It's there now :) -Justin On Mon, Dec 14, 2015 at 10:40 PM, Brian Carrier <ca...@sl...> wrote: > Does the file’s row in the table have the hash (far right column)? > > If you go to the Hex view, can you page through the entire file? > Sometimes the file can’t be read and therefore the hash is not calculated. > That is much more common with deleted files though. > > > > > On Dec 14, 2015, at 3:43 PM, Justin Grover <jus...@gm...> > wrote: > > > > Currently using Autopsy 4.0.0 on Windows 7. > > > > I'm trying to get Autopsy to display the hash of a file. When checking > each file's "File Metadata" tab, it seems like the MD5 hash of some files > display, but not others. I've already run the Hash Lookup ingest module on > my data source (no hash database used). > > > > Is there some way to get Autopsy to calculate and display the hash of > all files? > > > > Attached is a screenshot from one of my File Metadata tabs. > > > > -Justin > > > <file_hashes.png>------------------------------------------------------------------------------ > > _______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > |
From: Brian C. <ca...@sl...> - 2015-12-17 16:07:12
|
Hi Luis, It looks like it can’t differentiate between it having a DOS or Mac partition table. Though, it should be able to because the only Mac partition it found had a size of 0 sectors. Code in github has a fix. Are you building from source or an installed version? It unfortunately looks like tsk_loaddb doesn’t allow you to specify the partition type when this kind of situation occurs. thanks, brian > On Dec 17, 2015, at 6:38 AM, Luís Filipe Nassif <lfc...@gm...> wrote: > > Hi, > > Tsk_Loaddb (4.2 and 4.1.3) is reporting that error message with some FAT32 thumbdrive images. We can open and browse the file system content with other forensic tools. > > One mmls -v output is below: > > tsk_img_open: Type: 0 NumImg: 1 Img1: /media/sf_F_DRIVE/Imagens/pendrive/unknownFS/Item06ItemArrecadacao08.E01 > ewf_open: found 1 segment files via libewf_glob > dos_load_prim: Table Sector: 0 > ewf_image_read: byte offset: 0 len: 65536 > dos_load_prim_table: Testing FAT/NTFS conditions > load_pri:0:0 Start: 63 Size: 7830081 Type: 11 > load_pri:0:1 Start: 0 Size: 0 Type: 0 > load_pri:0:2 Start: 0 Size: 0 Type: 0 > load_pri:0:3 Start: 0 Size: 0 Type: 0 > bsd_load_table: Table Sector: 1 > gpt_load_table: Sector: 0 > gpt_open: Trying other sector sizes > gpt_open: Trying sector size: 512 > gpt_load_table: Sector: 0 > gpt_open: Trying sector size: 1024 > gpt_load_table: Sector: 0 > gpt_open: Trying sector size: 2048 > gpt_load_table: Sector: 0 > gpt_open: Trying sector size: 4096 > gpt_load_table: Sector: 0 > gpt_open: Trying sector size: 8192 > gpt_load_table: Sector: 0 > sun_load_table: Trying sector: 0 > sun_load_table: Trying sector: 1 > mac_load_table: Sector: 1 > mac_load: 0 Starting Sector: 0 Size: 0 Type: Status: 0 > Cannot determine partition type (Mac or DOS at 0) > > I will gladly provide any other information requested to help identify the problem. I can also share (the whole or part of) one of those images in ewf format. > > Any help will be very much appreciated. > > Thank you, > Luis Nassif > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Luís F. N. <lfc...@gm...> - 2015-12-17 11:38:44
|
Hi,

Tsk_Loaddb (4.2 and 4.1.3) is reporting that error message with some FAT32 thumbdrive images. We can open and browse the file system content with other forensic tools.

One mmls -v output is below:

tsk_img_open: Type: 0 NumImg: 1 Img1: /media/sf_F_DRIVE/Imagens/pendrive/unknownFS/Item06ItemArrecadacao08.E01
ewf_open: found 1 segment files via libewf_glob
dos_load_prim: Table Sector: 0
ewf_image_read: byte offset: 0 len: 65536
dos_load_prim_table: Testing FAT/NTFS conditions
load_pri:0:0 Start: 63 Size: 7830081 Type: 11
load_pri:0:1 Start: 0 Size: 0 Type: 0
load_pri:0:2 Start: 0 Size: 0 Type: 0
load_pri:0:3 Start: 0 Size: 0 Type: 0
bsd_load_table: Table Sector: 1
gpt_load_table: Sector: 0
gpt_open: Trying other sector sizes
gpt_open: Trying sector size: 512
gpt_load_table: Sector: 0
gpt_open: Trying sector size: 1024
gpt_load_table: Sector: 0
gpt_open: Trying sector size: 2048
gpt_load_table: Sector: 0
gpt_open: Trying sector size: 4096
gpt_load_table: Sector: 0
gpt_open: Trying sector size: 8192
gpt_load_table: Sector: 0
sun_load_table: Trying sector: 0
sun_load_table: Trying sector: 1
mac_load_table: Sector: 1
mac_load: 0 Starting Sector: 0 Size: 0 Type: Status: 0
Cannot determine partition type (Mac or DOS at 0)

I will gladly provide any other information requested to help identify the problem. I can also share (the whole or part of) one of those images in EWF format.

Any help will be very much appreciated.

Thank you, Luis Nassif |
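For context on the load_pri lines in the trace above: the classic DOS/MBR layout that dos_load_prim_table reads is four 16-byte partition entries at byte offset 446 of sector 0, with a 0x55AA signature at offset 510. Each entry holds a 1-byte type and little-endian 32-bit start LBA and sector count. A simplified Python sketch (not TSK's actual logic, which adds FAT/NTFS heuristics and the sector-size probing visible in the trace) reconstructing the reported entry — start 63, size 7830081, type 11 (0x0B, FAT32 CHS):

```python
import struct

def parse_mbr_partitions(sector0: bytes):
    """Parse the four primary partition entries from an MBR boot sector."""
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    parts = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                       # partition type byte
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        parts.append({"type": ptype, "start": lba_start, "size": num_sectors})
    return parts

# Reconstruct a minimal MBR matching the trace: one entry with
# Start: 63, Size: 7830081, Type: 11 (0x0B), other slots empty.
mbr = bytearray(512)
mbr[446 + 4] = 0x0B
mbr[446 + 8 : 446 + 16] = struct.pack("<II", 63, 7830081)
mbr[510:512] = b"\x55\xaa"

parts = parse_mbr_partitions(bytes(mbr))
assert (parts[0]["start"], parts[0]["size"], parts[0]["type"]) == (63, 7830081, 11)
assert all(p["size"] == 0 for p in parts[1:])   # the three empty slots
```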
From: Brian C. <ca...@sl...> - 2015-12-15 03:40:25
|
Does the file’s row in the table have the hash (far right column)? If you go to the Hex view, can you page through the entire file? Sometimes the file can’t be read and therefore the hash is not calculated. That is much more common with deleted files though. > On Dec 14, 2015, at 3:43 PM, Justin Grover <jus...@gm...> wrote: > > Currently using Autopsy 4.0.0 on Windows 7. > > I'm trying to get Autopsy to display the hash of a file. When checking each file's "File Metadata" tab, it seems like the MD5 hash of some files display, but not others. I've already run the Hash Lookup ingest module on my data source (no hash database used). > > Is there some way to get Autopsy to calculate and display the hash of all files? > > Attached is a screenshot from one of my File Metadata tabs. > > -Justin > <file_hashes.png>------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Justin G. <jus...@gm...> - 2015-12-14 20:43:24
|
Currently using Autopsy 4.0.0 on Windows 7. I'm trying to get Autopsy to display the hash of a file. When checking each file's "File Metadata" tab, it seems like the MD5 hash displays for some files but not others. I've already run the Hash Lookup ingest module on my data source (no hash database used). Is there some way to get Autopsy to calculate and display the hash of all files? Attached is a screenshot from one of my File Metadata tabs. -Justin |
From: Brian C. <ca...@sl...> - 2015-12-10 03:45:41
|
We could change the default behavior and add a command line argument to change it. Any objections to grouping unallocated space? As a side note (and we recently ran into this with some images in Autopsy), the current algorithm only breaks the groups of unallocated space at a sector boundary with an allocated sector. If there is 100 GB of contiguous unallocated space, the resulting file will be 100 GB. We will probably change the algorithm to something like: try to break at 500 MB at a natural boundary, but definitely stop 10% later if there isn’t one. > On Dec 8, 2015, at 6:58 AM, Luís Filipe Nassif <lfc...@gm...> wrote: > > Hi, > > Those 2 commands are populating the sqlite database with different number of unallocated entries. Reading the code, the java command was configured to group non adjacent unallocated clusters up to 500 MB, while tsk_loaddb groups only adjacent blocks. Tsk_loaddb produced a sqlite with 26 million unallocated entries in a specific case. I think those 2 commands should return the same output, and I prefer the java one, because it makes carving fragmented files easier. > > Regards, > Luis Nassif > ------------------------------------------------------------------------------ > Go from Idea to Many App Stores Faster with Intel(R) XDK > Give your users amazing mobile app experiences with Intel(R) XDK. > Use one codebase in this all-in-one HTML5 development environment. > Design, debug & build mobile apps & 2D/3D high-impact games for multiple OSs. > http://pubads.g.doubleclick.net/gampad/clk?id=254741911&iu=/4140_______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
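The policy sketched above — cut a chunk at a natural (run) boundary once it passes roughly 500 MB, and force a cut no more than 10% past that — can be prototyped as follows. This is a sketch of the proposed behavior under stated assumptions, not TSK's implementation; chunk_unalloc_runs and its (start, length) tuple format are made up for illustration:

```python
# Target chunk size and hard cap (10% overshoot), per the scheme above.
CHUNK = 500 * 1024 * 1024
HARD_CAP = CHUNK + CHUNK // 10

def chunk_unalloc_runs(runs, chunk=CHUNK, hard_cap=HARD_CAP):
    """Group (start_byte, length) unallocated runs into chunks of roughly
    `chunk` bytes. Prefers to cut at a run boundary once the target is
    reached; only splits mid-run when the hard cap would be exceeded."""
    chunks, current, size = [], [], 0
    for start, length in runs:
        while length > 0:
            take = min(length, hard_cap - size)   # never exceed the cap
            current.append((start, take))
            size += take
            start += take
            length -= take
            if size >= chunk:                     # past target: cut here
                chunks.append(current)
                current, size = [], 0
    if current:
        chunks.append(current)
    return chunks
```

With small numbers (target 100, cap 110) and runs [(0, 50), (100, 40), (200, 300)], the first chunk absorbs both small runs and is forced to split the big run at the cap, yielding chunk sizes [110, 110, 110, 60] — every byte assigned exactly once.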
From: Brian C. <ca...@sl...> - 2015-12-10 03:45:41
|
Is tsk_fs_file_hash_calc() returning an error code? It should return 1 on error and you can get the error with tsk_error_print(). > On Dec 9, 2015, at 9:28 AM, sle...@fa... wrote: > > Hey, > I started to modify tsk_recover to my need by adding some basic triage functionality directly into the code. Basically what I did was to exted the TskRecover::writeFile function with the following snippet (direcly at the beginning of the function). > > int8_t hashFound = 0; > TSK_FS_HASH_RESULTS fileHash = {}; > > tsk_fs_file_hash_calc (a_fs_file, &fileHash, TSK_BASE_HASH_MD5); > hashFound = tsk_hdb_lookup_raw (m_hdbInfo, fileHash.md5_digest, 16, TSK_HDB_FLAG_QUICK, NULL, NULL); > > if (hashFound == 1) > return 0; > else if (hashFound == -1) > fprintf(stderr, "Error hash lookup."); > > m_hdbInfo is an added member varaibale of type TSK_HDB_INFO* which I set in the constructor to an NSRL database, everything else should be self explanatory. > > The problem is that the hashes are not calculated correctly. I made two oservations: 1) The hashes change in every test run. 2) The calculated hashes repeat, quite often but without a pattern (at least I couldn't see one) > I checked with md5sum and the hashes are definitely wrong. > So is there something I missed? For example I thought of a missing init function call, but tsk_fs_file_hash_calc does that already. > > Kind regards > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Luís F. N. <lfc...@gm...> - 2015-12-09 21:31:06
|
I am doing exactly that in a forensic app I developed: breaking big virtual unallocated files into smaller ones to do multithreaded carving and to do fast indexed searches and highlighting. Regards, Luís Nassif On 09/12/2015 13:40, "Brian Carrier" <ca...@sl...> wrote: > We could change the default behavior and add a command line argument to > change it. > > Any objections to grouping unallocated space? > > As a side note (and we recently ran into this with some images in Autopsy) > is that the current algorithm will break the groups of unallocated space at > a sector boundary with an allocated sector. If there is 100GB of contiguous > unallocated space, the resulting file will be 100GB. We will probably > change the algorithm to do something like try to break it at 500MB at a > natural boundary, but definitely stop 10% later if there isn’t one. > > > > > > > On Dec 8, 2015, at 6:58 AM, Luís Filipe Nassif <lfc...@gm...> > wrote: > > > > Hi, > > > > Those 2 commands are populating the sqlite database with different > number of unallocated entries. Reading the code, the java command was > configured to group non adjacent unallocated clusters up to 500 MB, while > tsk_loaddb groups only adjacent blocks. Tsk_loaddb produced a sqlite with > 26 millions of unallocated entries in a specific case. I think those 2 > commands should return the same output, and I prefer the java one, because > it makes carving fragmented files easier. > > > > Regards, > > Luis Nassif > > > ------------------------------------------------------------------------------ > > Go from Idea to Many App Stores Faster with Intel(R) XDK > > Give your users amazing mobile app experiences with Intel(R) XDK. > > Use one codebase in this all-in-one HTML5 development environment. > > Design, debug & build mobile apps & 2D/3D high-impact games for multiple > OSs. > > http://pubads.g.doubleclick.net/gampad/clk?id=254741911&iu=/4140_______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > |
From: Simson G. <si...@ac...> - 2015-12-09 15:23:34
|
Hi. All of this functionality is in fiwalk. Have you looked at that program? Regards, Simson Garfinkel > On Dec 9, 2015, at 9:28 AM, sle...@fa... wrote: > > Hey, > I started to modify tsk_recover to my need by adding some basic triage functionality directly into the code. Basically what I did was to exted the TskRecover::writeFile function with the following snippet (direcly at the beginning of the function). > > int8_t hashFound = 0; > TSK_FS_HASH_RESULTS fileHash = {}; > > tsk_fs_file_hash_calc (a_fs_file, &fileHash, TSK_BASE_HASH_MD5); > hashFound = tsk_hdb_lookup_raw (m_hdbInfo, fileHash.md5_digest, 16, TSK_HDB_FLAG_QUICK, NULL, NULL); > > if (hashFound == 1) > return 0; > else if (hashFound == -1) > fprintf(stderr, "Error hash lookup."); > > m_hdbInfo is an added member varaibale of type TSK_HDB_INFO* which I set in the constructor to an NSRL database, everything else should be self explanatory. > > The problem is that the hashes are not calculated correctly. I made two oservations: 1) The hashes change in every test run. 2) The calculated hashes repeat, quite often but without a pattern (at least I couldn't see one) > I checked with md5sum and the hashes are definitely wrong. > So is there something I missed? For example I thought of a missing init function call, but tsk_fs_file_hash_calc does that already. > > Kind regards > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: <sle...@fa...> - 2015-12-09 14:28:26
|
Hey, I started to modify tsk_recover to my needs by adding some basic triage functionality directly into the code. Basically what I did was to extend the TskRecover::writeFile function with the following snippet (directly at the beginning of the function):

int8_t hashFound = 0;
TSK_FS_HASH_RESULTS fileHash = {};

tsk_fs_file_hash_calc(a_fs_file, &fileHash, TSK_BASE_HASH_MD5);
hashFound = tsk_hdb_lookup_raw(m_hdbInfo, fileHash.md5_digest, 16, TSK_HDB_FLAG_QUICK, NULL, NULL);

if (hashFound == 1)
    return 0;
else if (hashFound == -1)
    fprintf(stderr, "Error hash lookup.");

m_hdbInfo is an added member variable of type TSK_HDB_INFO* which I set in the constructor to an NSRL database; everything else should be self-explanatory.

The problem is that the hashes are not calculated correctly. I made two observations: 1) The hashes change in every test run. 2) The calculated hashes repeat quite often, but without a pattern (at least I couldn't see one). I checked with md5sum and the hashes are definitely wrong.

So is there something I missed? For example, I thought of a missing init function call, but tsk_fs_file_hash_calc does that already.

Kind regards |
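Without the full program it is hard to say where the C code goes wrong (checking the return value of tsk_fs_file_hash_calc, as suggested downthread, is the first step), but hashes that change between runs on the same input usually mean the digest is being fed uninitialized or stale buffer bytes rather than exactly the bytes each read returned. The invariant is the same in any language; a Python sketch of the correct chunked-hash pattern, checked against a one-shot hash (illustrative of the pattern only, not of TSK internals):

```python
import hashlib
import io

def md5_of_stream(stream, chunk_size=64 * 1024) -> str:
    """Chunked MD5: update the digest with exactly the bytes each read
    returned — never the whole (possibly stale) buffer."""
    md5 = hashlib.md5()
    while True:
        buf = stream.read(chunk_size)
        if not buf:
            break
        md5.update(buf)       # only the bytes actually read
    return md5.hexdigest()

data = b"sleuthkit" * 10_000
# Chunked result must equal the one-shot hash, regardless of chunk size:
assert md5_of_stream(io.BytesIO(data)) == hashlib.md5(data).hexdigest()
```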
From: Luís F. N. <lfc...@gm...> - 2015-12-08 11:58:56
|
Hi, Those 2 commands are populating the sqlite database with different numbers of unallocated entries. Reading the code, the java command was configured to group non-adjacent unallocated clusters up to 500 MB, while tsk_loaddb groups only adjacent blocks. Tsk_loaddb produced a sqlite database with 26 million unallocated entries in a specific case. I think those 2 commands should return the same output, and I prefer the java one, because it makes carving fragmented files easier. Regards, Luis Nassif |
From: Lloyd <llo...@gm...> - 2015-12-08 06:12:31
|
Hi, Using the function 'tsk_fs_file_open_meta()', the contents of existing and deleted files can be read. Consider the following cases:

1. We created a text file (1 KB) and added some content to it, later deleted it, and created a new text file (1 KB) with the same file name and different content (fewer bytes). Now when we try to read the files, the deleted file displays the content of the new file followed by zeros. The size of the deleted file is shown correctly.

2. We created a text file (3 KB) and added some content to it, later deleted it, and created a new text file (1 KB) with the same file name and different content. Now when we try to read the files, the deleted file displays the content of the new file followed by zeros in the first 1 KB portion, and the actual content in the remaining 2 KB portion.

So the deleted file is being overwritten somehow. Using the 'TSK_FS_META_FLAG_ALLOC' and 'TSK_FS_META_FLAG_UNALLOC' flags, it can be determined whether a file is existing or deleted, respectively, but I could not find any information about whether the file has been overwritten or not. Is there any mechanism available for detecting overwritten files? Thanks, Lloyd |
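The behavior described above is consistent with ordinary cluster reuse rather than a tool bug: deleting a file frees its clusters, the new file reuses the first of them, and reading the deleted file's stale cluster chain then returns the new content (zero-padded to the cluster boundary) followed by whatever old clusters were not reused. A toy model reproduces both cases — ToyFS is entirely hypothetical (1 KB clusters, first-fit allocation), a conceptual illustration rather than how any real file system or TSK is implemented:

```python
CLUSTER = 1024  # toy cluster size

class ToyFS:
    """Minimal cluster-allocation model showing why a recovered deleted
    file contains the *new* file's bytes in reused clusters."""
    def __init__(self, n_clusters=8):
        self.clusters = [bytes(CLUSTER) for _ in range(n_clusters)]
        self.free = list(range(n_clusters))
        self.files = {}   # name -> (cluster chain, size)

    def write(self, name, data):
        need = -(-len(data) // CLUSTER)           # ceil division
        chain = [self.free.pop(0) for _ in range(need)]
        for i, c in enumerate(chain):
            self.clusters[c] = data[i*CLUSTER:(i+1)*CLUSTER].ljust(CLUSTER, b"\0")
        self.files[name] = (chain, len(data))

    def delete(self, name):
        chain, size = self.files.pop(name)
        self.free = chain + self.free             # clusters reusable at once
        return chain, size                        # stale metadata, as a recovery tool sees it

    def read_chain(self, chain, size):
        return b"".join(self.clusters[c] for c in chain)[:size]

fs = ToyFS()
fs.write("a.txt", b"OLD!" * 768)                  # 3 KB -> 3 clusters
stale_chain, stale_size = fs.delete("a.txt")
fs.write("a.txt", b"NEW!" * 225)                  # 900 B -> reuses 1st cluster
recovered = fs.read_chain(stale_chain, stale_size)
assert recovered[:900] == b"NEW!" * 225           # new content first
assert recovered[900:1024] == bytes(124)          # zeros to cluster boundary
assert recovered[1024:1028] == b"OLD!"            # later clusters intact
```

This matches case 2 exactly: new content, then zeros, then the old remainder. As for detection, the metadata flags cannot tell you this; comparing the deleted file's cluster chain against the allocation status of each cluster (allocated clusters in a deleted file's chain imply overwriting) is one plausible approach.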