sleuthkit-developers Mailing List for The Sleuth Kit (Page 18)
From: SourceForge.net <no...@so...> - 2011-01-10 22:12:06
|
Feature Requests item #3154655, was opened at 2011-01-10 23:12 Message generated for change (Tracker Item Submitted) made by jbmetz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477892&aid=3154655&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Installer Group: None Status: Open Priority: 5 Private: No Submitted By: Joachim Metz (jbmetz) Assigned to: Nobody/Anonymous (nobody) Summary: autoconf/make m4 directory Initial Comment: As indicated by recent versions of autoconf/make please use a sub-directory for M4 scripts. The attached patch makes the necessary changes for this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477892&aid=3154655&group_id=55685 |
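For readers unfamiliar with the convention this request targets, here is a minimal sketch of the kind of change involved; the file names and the autoreconf step are assumptions, and the patch attached to the tracker item remains the authoritative version.

    # Move bundled .m4 macro files into an m4/ subdirectory (file names assumed).
    mkdir -p m4
    mv *.m4 m4/
    # configure.ac then declares the macro directory:   AC_CONFIG_MACRO_DIR([m4])
    # and the top-level Makefile.am passes it to aclocal: ACLOCAL_AMFLAGS = -I m4
    autoreconf -fi    # regenerate aclocal.m4 and configure with the new layout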
From: SourceForge.net <no...@so...> - 2011-01-06 14:57:34
|
Feature Requests item #3152442, was opened at 2011-01-06 14:57 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477892&aid=3152442&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Priority: 5 Private: No Submitted By: https://www.google.com/accounts () Assigned to: Nobody/Anonymous (nobody) Summary: sigfind "I'm feeling lucky" option Initial Comment: sigfind is useful in locating partition tables in disk images, virtual machine files etc. In the vast majority of cases if there is a partition table it is normally the first match that sigfind finds, especially when using a template with the -t switch. However, sigfind will continue to search the file for more signatures. This can be a minor annoyance when using sigfind in a script/batch file. Would it be possible to have an option that will stop sigfind searching as soon as it makes a match and outputs the result to STDOUT? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477892&aid=3152442&group_id=55685 |
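Until such an option exists, a small shell workaround in the spirit of the request, assuming each hit is printed on a line beginning with "Block:"; the template name and image file are examples.

    # Print only the first signature hit and stop reading; sigfind itself
    # typically terminates soon afterwards via SIGPIPE once the pipe closes.
    sigfind -t dospart image.dd | awk '/^Block:/ { print; exit }'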
From: SourceForge.net <no...@so...> - 2010-12-31 03:00:39
|
Bugs item #3143117, was opened at 2010-12-23 14:56 Message generated for change (Settings changed) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3143117&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: John Lehr (jlehr) Assigned to: Nobody/Anonymous (nobody) Summary: fls -rd doesn't recurse into deleted directories Initial Comment: Platform: Sleuthkit 3.2.0, Debian Testing. Caveat:This report is written with the assumption that the inverse behavior of fls -u and fls -d in TSK 3.2.0 is corrected. Bug: "fls -rd" does not recurse into deleted FAT32 directory, where inverting the flags to "-dr" does. # fls -r image.dd d/d 3: DCIM + d/d 1029: 100DSCIM ++ r/r * 2053: _ICT0001.AVI ++ r/r * 2054: _ICT0002.AVI r/r * 6: New Text Document.txt r/r * 7: _ime.txt ... # fls -rd r/r * 6: New Text Document.txt r/r * 7: _ime.txt ... # fls -dr image.dd r/r * 2053: DCIM/100DSCIM/_ICT0001.AVI r/r * 2054: DCIM/100DSCIM/_ICT0002.AVI r/r * 6: New Text Document.txt r/r * 7: _ime.txt ... ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-12-30 22:00 Message: This was fixed with the previous fix for 3108272 (wrong flags). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3143117&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-12-31 02:24:53
|
Bugs item #3105539, was opened at 2010-11-08 16:28 Message generated for change (Comment added) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Dragonlord (drag0nl0rd) Assigned to: Nobody/Anonymous (nobody) Summary: Build error for sleuthkit 3.2.0 Initial Comment: Building the package on Arch Linux i686 with up-to-date packages gives following error: ... make[2]: Entering directory `/build/src/sleuthkit-3.2.0/tools/autotools' g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_recover.o -MD -MP -MF .deps/tsk_recover.Tpo -c -o tsk_recover.o tsk_recover.cpp mv -f .deps/tsk_recover.Tpo .deps/tsk_recover.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_recover tsk_recover.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_recover tsk_recover.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_loaddb.o -MD -MP -MF .deps/tsk_loaddb.Tpo -c -o tsk_loaddb.o tsk_loaddb.cpp mv -f .deps/tsk_loaddb.Tpo .deps/tsk_loaddb.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': sqlite3.c:(.text+0x2380): undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': sqlite3.c:(.text+0xa1d9): undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': sqlite3.c:(.text+0xa2e4): undefined reference to `pthread_mutexattr_init' sqlite3.c:(.text+0xa2f4): undefined reference to `pthread_mutexattr_settype' sqlite3.c:(.text+0xa308): undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': sqlite3.c:(.text+0xb149): undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': sqlite3.c:(.text+0xb181): undefined reference to `dlopen' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': sqlite3.c:(.text+0x19a56): undefined reference to `dlerror' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: Leaving directory `/build/src/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/build/src/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 Aborting... ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-12-30 21:24 Message: Sending branches/sleuthkit-3.2/NEWS.txt Sending branches/sleuthkit-3.2/configure.ac Sending trunk/NEWS.txt Sending trunk/configure.ac Transmitting file data .... Committed revision 309. 
---------------------------------------------------------------------- Comment By: David Hollis (slacker775) Date: 2010-11-11 11:04 Message: I was able to get it to compile by adding '-pthread' to the LDFLAGS prior to configure. In my case, I took the latest sleuthkit 3.1.1 SRPM out of Fedora Koji, upped to it 3.2.0, added: export LDFLAGS=-pthread prior to the %configure ... call and it built just fine. ---------------------------------------------------------------------- Comment By: https://www.google.com/accounts () Date: 2010-11-11 09:33 Message: Same here, Debian Squeeze amd64. Tested with sid libs. /bin/bash ../../libtool --tag=CXX --mode=link g++ -g -O2 -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -g -O2 -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16683: undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16551: undefined reference to `pthread_mutexattr_init' /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16552: undefined reference to `pthread_mutexattr_settype' /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16554: undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27464: undefined reference to `dlerror' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27491: undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27495: undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27450: undefined reference to `dlopen' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: se sale del directorio `/home/breakersinc/Descargas/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: se sale del directorio `/home/breakersinc/Descargas/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 .... WTF? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 |
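For anyone hitting the same link failure, a condensed sketch of the workaround reported in the comments above; adding -ldl for the dl* symbols is an assumption, as the reports only needed -pthread.

    # Make the thread and dynamic-loader symbols needed by the bundled SQLite
    # visible when linking the static tools, then rebuild.
    export LDFLAGS="-pthread -ldl"
    ./configure
    make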
From: SourceForge.net <no...@so...> - 2010-12-23 19:56:39
|
Bugs item #3143117, was opened at 2010-12-23 11:56 Message generated for change (Tracker Item Submitted) made by jlehr You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3143117&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: John Lehr (jlehr) Assigned to: Nobody/Anonymous (nobody) Summary: fls -rd doesn't recurse into deleted directories Initial Comment: Platform: Sleuthkit 3.2.0, Debian Testing. Caveat:This report is written with the assumption that the inverse behavior of fls -u and fls -d in TSK 3.2.0 is corrected. Bug: "fls -rd" does not recurse into deleted FAT32 directory, where inverting the flags to "-dr" does. # fls -r image.dd d/d 3: DCIM + d/d 1029: 100DSCIM ++ r/r * 2053: _ICT0001.AVI ++ r/r * 2054: _ICT0002.AVI r/r * 6: New Text Document.txt r/r * 7: _ime.txt ... # fls -rd r/r * 6: New Text Document.txt r/r * 7: _ime.txt ... # fls -dr image.dd r/r * 2053: DCIM/100DSCIM/_ICT0001.AVI r/r * 2054: DCIM/100DSCIM/_ICT0002.AVI r/r * 6: New Text Document.txt r/r * 7: _ime.txt ... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3143117&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-11-29 05:11:30
|
Bugs item #3121998, was opened at 2010-11-29 05:04
Message generated for change (Comment added) made by
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3121998&group_id=55685

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: File System Tools
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: atcuno ()
Assigned to: Nobody/Anonymous (nobody)
Summary: jcat segfault in ext2fs_jblk_walk

Initial Comment:
steps to reproduce:
1) compile latest release of sleuthkit (with debugging symbols)
2) create an ext3 partition (I tested on loop and on a vmware virtual disk)
3) run jcat against the image with an argument of a number greater than 0

result:

usenixatc:~/newtsk# gdb ./sleuthkit-3.2.0/tools/fstools/jcat
(gdb) r /dev/loop0 2
Starting program: /root/newtsk/sleuthkit-3.2.0/tools/fstools/jcat /dev/loop0 2
[Thread debugging using libthread_db enabled]
[New Thread 0xb74b06c0 (LWP 8221)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb74b06c0 (LWP 8221)]
0x080abb6e in ext2fs_jblk_walk (fs=0x9161828, start=2, end=2, flags=0, action=0, ptr=0x0) at ext2fs_journal.c:521
521         if (big_tsk_getu32(head->magic) != EXT2_JMAGIC)
Current language: auto; currently c
(gdb) x/x head
0x9160d08: Cannot access memory at address 0x9160d08

not really sure what other info to give. it should be reproducible by just creating an ext3 image on a file made from /dev/zero. I can send other info as needed to fix the bug.

----------------------------------------------------------------------

>Comment By: atcuno ()
Date: 2010-11-29 05:11

Message:
more info I forgot to include: running the latest debian, 32 bit, with gcc 4.3.2 (distro package)

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3121998&group_id=55685 |
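The reporter's steps, condensed into a runnable sketch; the image name, size, and tool path are illustrative assumptions.

    # Build a small, empty ext3 image from /dev/zero and ask jcat for a
    # journal block greater than 0, which reportedly triggers the crash.
    dd if=/dev/zero of=ext3.img bs=1M count=64
    mkfs.ext3 -F ext3.img
    ./tools/fstools/jcat ext3.img 2   # segfaults in ext2fs_jblk_walk per the report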
From: SourceForge.net <no...@so...> - 2010-11-29 05:04:44
|
Bugs item #3121998, was opened at 2010-11-29 05:04 Message generated for change (Tracker Item Submitted) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3121998&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: atcuno () Assigned to: Nobody/Anonymous (nobody) Summary: jcat segfault in ext2fs_jblk_walk Initial Comment: steps to reproduce: 1) compile latest release of sleuthkit (with debugging symbols) 2) create an ext3 partition (I tested on loop and on a vmware virtual disk) 3) run jcat against the image with an argument of a number greater than 0 result: usenixatc:~/newtsk# gdb ./sleuthkit-3.2.0/tools/fstools/jcat (gdb) r /dev/loop0 2 Starting program: /root/newtsk/sleuthkit-3.2.0/tools/fstools/jcat /dev/loop0 2 [Thread debugging using libthread_db enabled] [New Thread 0xb74b06c0 (LWP 8221)] Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xb74b06c0 (LWP 8221)] 0x080abb6e in ext2fs_jblk_walk (fs=0x9161828, start=2, end=2, flags=0, action=0, ptr=0x0) at ext2fs_journal.c:521 521 if (big_tsk_getu32(head->magic) != EXT2_JMAGIC) Current language: auto; currently c (gdb) x/x head 0x9160d08: Cannot access memory at address 0x9160d08 not really sure what other info to give. it should be reproducible by just creating an ext3 image on a file made from /dev/zero. I can send other info as needed to fix the bug. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3121998&group_id=55685 |
From: Stefan K. <sk...@bf...> - 2010-11-18 11:49:00
|
Brian,

> Are these the same bugs? If yes, I'd be happy to check the
> latest trunk.

Which I just did - no more problems.

Cheers,

Stefan.

--
Stefan Kelm <sk...@bf...>
BFK edv-consulting GmbH        http://www.bfk.de/
Kriegsstrasse 100              Tel: +49-721-96201-1
D-76133 Karlsruhe              Fax: +49-721-96201-99 |
From: Stefan K. <sk...@bf...> - 2010-11-17 17:04:48
|
Hi Brian,

> Initial Comment:
> 3.2 introduced some incorrect flag behavior in fls. Reported by John Lehr.

I just ran across a similar (or the same?) issue with fls 3.2.0 in that the order of the options as passed to 'fls' is relevant to the output:

# fls -p -r -d -o 63 image.dd |wc -l
32
# fls -d -r -p -o 63 image.dd |wc -l
74194

Also, the '-d' option shows undeleted entries as well. Are these the same bugs? If yes, I'd be happy to check the latest trunk.

Cheers,

Stefan.

--
Stefan Kelm <sk...@bf...>
BFK edv-consulting GmbH        http://www.bfk.de/
Kriegsstrasse 100              Tel: +49-721-96201-1
D-76133 Karlsruhe              Fax: +49-721-96201-99 |
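A sketch of the trunk re-test Stefan offers here (and, in the reply above, reports succeeding); the SourceForge SVN URL follows the project's usual layout and, like the image path and offset, is an assumption.

    svn checkout https://sleuthkit.svn.sourceforge.net/svnroot/sleuthkit/trunk sleuthkit-trunk
    cd sleuthkit-trunk && autoreconf -fi && ./configure && make
    # Both option orderings should now report the same number of deleted entries.
    ./tools/fstools/fls -p -r -d -o 63 image.dd | wc -l
    ./tools/fstools/fls -d -r -p -o 63 image.dd | wc -l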
From: SourceForge.net <no...@so...> - 2010-11-17 13:53:04
|
Feature Requests item #3110732, was opened at 2010-11-17 14:53 Message generated for change (Tracker Item Submitted) made by skelm You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477900&aid=3110732&group_id=55687 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Interface Group: None Status: Open Priority: 5 Private: No Submitted By: skelm (skelm) Assigned to: Nobody/Anonymous (nobody) Summary: Sort cases and hosts Initial Comment: Both the "case gallery" and the "host gallery" should provide for a sorted output in order to reflect a regular directory listing. In Caseman.pm the line foreach my $c (readdir CASES) should be replaced by foreach my $c (sort readdir CASES) (similar to 'readdir HOSTS') ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477900&aid=3110732&group_id=55687 |
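One way to apply the suggested one-word change in place; the location of Caseman.pm depends on the Autopsy installation and is an assumption.

    # Sort both the case and host directory listings, as proposed above.
    sed -i -e 's/readdir CASES/sort readdir CASES/' \
           -e 's/readdir HOSTS/sort readdir HOSTS/' Caseman.pm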
From: SourceForge.net <no...@so...> - 2010-11-13 04:02:08
|
Bugs item #3108272, was opened at 2010-11-12 22:04 Message generated for change (Tracker Item Submitted) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3108272&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Brian Carrier (carrier) Summary: wrong flag behavior in fls Initial Comment: 3.2 introduced some incorrect flag behavior in fls. Reported by John Lehr. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3108272&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-11-13 03:38:36
|
Bugs item #3108272, was opened at 2010-11-12 22:04 Message generated for change (Comment added) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3108272&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Brian Carrier (carrier) Summary: wrong flag behavior in fls Initial Comment: 3.2 introduced some incorrect flag behavior in fls. Reported by John Lehr. ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-11-12 22:05 Message: Sending branches/sleuthkit-3.2/NEWS.txt Sending branches/sleuthkit-3.2/tools/fstools/fls.cpp Sending trunk/NEWS.txt Sending trunk/tools/fstools/fls.cpp Transmitting file data .... Committed revision 305. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3108272&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-11-13 02:53:42
|
Bugs item #3108270, was opened at 2010-11-12 21:53 Message generated for change (Tracker Item Submitted) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3108270&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Nobody/Anonymous (nobody) Summary: Do a lazy load on $Secure Initial Comment: Eduardo Aguiar observed that TSK 3.1+ is slower on some images. the verbose trace seems to show that the $Secure file on this image is really fragmented and takes a long time to load each time. This could be fixed by doing a lazy load on the $Secure data. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3108270&group_id=55685 |
From: Oscar V. <osc...@gm...> - 2010-11-11 16:16:57
|
I think this all boils down to the question whether it is possible to get an offset (relative to the start of the filesystem) to the $DATA attribute of a resident file using the current TSK api. Looking at the TSK_FS_ATTR struct, there is a size field. So, a possible solution might be to iterate through all attributes of a file (MFT record) adding the sizes as you go (assuming the size includes the header of the attribute and the attributes are not alligned on some boundary other than 1) until you hit the $DATA attribute. The offset of the MFT record relative to the start of the filesystem could be calculated from its "filenumber" and the MFT record size and the starting address of the MFT itself. This would probably give the info needed to construct a valid carvpath. The question is whether there is a more direct way to get this from the TSK api. regards Oscar On Thu, Nov 11, 2010 at 12:52 PM, Rob Meijer <pi...@gm...> wrote: > Basically a carvpath is a simple file-system-path compatible > annotation of the offset/size and/or sparse fragments that make up a > file, partition or block within a larger entity (for example a disk > image). > The CarvPath library is used both by the forensic tool (in this case > the tskfs OCFA module that uses libtsk) and the CarvFS pseudo > filesystem. > > For example a CarvPath may that may look something like: > "40960+512_S1024_41472+512/256+512" representing a single fragment > entity within a second 3 fragment entity (that consists of an > ofset/size fragment, a sparse fragment and a second offset/size > fragment). LibCarvPath could flatten this carvpath to a shorter two > fragment one level carvpath "41216+256_S256" (consisting of an > offset/size fragment and a sparse fragment. > > Basically what now happens in OCFA is that: > > 1) A kicktree module appends a sparse version of a disk image to a > large growing archive that is mounted using CarvFS and submits a > carvpath representing that image into OCFA. > 2) The image may get router to the carvpath aware (libtsk based) mmls > module, that would extract the carvpaths of the partitions. > 3) One of the partitions may, having an NTFS filesystem on it, as a > carvpath get routed to the carvpath aware (libtsk based) tskfs module, > that would extract carvpaths for all suitable (that is : > non-compressed, non-encrypted, but currently also only non-resident) > files. > 4) One of the files may be an ISO image that as a carvpath would also > get routed to the carvpath aware (libtsk based) tskfs module, that > would extract the files also as carvpaths. > 5) The files from the ISO and other files from the NTFS will get > routed to other ocfa modules for data and/or meta-data extraction. > 6) The partition may after getting back from tskfs get routed to the > carvpath aware (libtsk based) blkls module that will extract > non-interrupted blocks of unallocated data from the NTFS partition and > represent them as carvpaths. > 7) The unallocated blocks may get routed to the carvpath aware > (scalpel based) carver module that will extract carved files as > carvpaths. > > You get things like "mnt/0/CarvFS/<partition > annotation>/<iso-image-file-annotation>/<file-from-iso-annotation>.crv" > or "/mnt/0/CarvFS/<partition > annotation>/<unallocated-block-annotation>/<carved-file-annotation>.crv" > anailable as a pseudo file to data/meta-data extraction tools like > exiftags or antiword, without ever any tool needing to copy-out any > data to other files on a real filesystem. > > I hope the above explanation is clear. 
> > The point now is that the above works fine for the most part and the > libtsk library works great for most of the process. The only minor > issue we have is with our tskfs module when it processes NTFS. In NTFS > some of the files are resident, and I havn't been able to find my way > in the libtsk API to get the tskfs module to gather information needed > to represent these resident files as offset/size and/or sparse > fragments needed to produce a carvpath for them. > > The code for tskfs is under OcfaModules/tree/tskfs in the OCFA source > distribution on sourceforge ( > http://sourceforge.net/projects/ocfa/files/ ), with the relevant > classes being TskFsInode and TskFsResidentDataAttribute and possibly > TskFsCarvPathDataAttribute (with their code in equally named hpp and > cpp files). > > Could you possibly have a quick look at the code and maybe propose > changes to the code using the libtsk API that would allow me to > implement TskFsResidentDataAttribute in a way not unlike > TskFsCarvPathDataAttribute as to allow resident files to use the same > libcarvpath based zero storage techniques with resident files that it > uses with non resident files? I looked at the API over and over and > can't figure out how I could make this work. Possibly the API simply > doesn't allow this (yet?), but more likely I simply don't understand > how to use the existing in order to gain access to the information I > need. > > Rob > > > > > 010/11/10 Brian Carrier <ca...@sl...>: >> What is a "carvpath"? >> >> On Nov 4, 2010, at 1:39 AM, Rob Meijer wrote: >> >>> Yesterday the first release candidate for ocfa (open computer forensics architecture) was released. Jn this version of ocfa the main new thing is that the perl script calling sleuthkit tools was completely replaced with (carvfs aware) treegraph ocfa modules that use libtsk. This setup allows ocfa to use zero storage techniques for most of the extracted slethkit data. There are basically two reasons why copy out would still be needed at the moment. Compression and residentness. I believe howaver that resident files must be expressable as carvpath. If anyone on this list would want to have a quick look at the OcfaModules/tree/tskfs ocfa code for resident and non resident files, maybe we could find a way for resident files to be also represented as carvpaths. Tia, >>> >>> Rob >>> >>> ------------------------------------------------------------------------------ >>> The Next 800 Companies to Lead America's Growth: New Video Whitepaper >>> David G. Thomson, author of the best-selling book "Blueprint to a >>> Billion" shares his insights and actions to help propel your >>> business during the next growth cycle. Listen Now! >>> http://p.sf.net/sfu/SAP-dev2dev_______________________________________________ >>> sleuthkit-developers mailing list >>> sle...@li... >>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers >> >> > > ------------------------------------------------------------------------------ > Centralized Desktop Delivery: Dell and VMware Reference Architecture > Simplifying enterprise desktop deployment and management using > Dell EqualLogic storage and VMware View: A highly scalable, end-to-end > client virtualization framework. Read more! > http://p.sf.net/sfu/dell-eql-dev2dev > _______________________________________________ > sleuthkit-developers mailing list > sle...@li... > https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers > |
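A rough command-line sketch of the offset arithmetic Oscar outlines at the start of this message, using TSK's own tools to obtain the inputs; the fsstat field wording, the 1024-byte MFT record size, and the image/offset values are assumptions to verify against the actual volume.

    # 1. Read the cluster size and the MFT's starting cluster from fsstat.
    fsstat -o 63 image.dd | grep -iE 'cluster size|mft'
    # 2. Byte offset of MFT entry N within the file system (record size assumed 1024):
    #      entry_offset = mft_start_cluster * cluster_size + N * 1024
    # 3. A resident $DATA attribute then lies at entry_offset plus the attribute's
    #    header offset within that record; istat shows the attribute layout.
    istat -o 63 image.dd 5    # entry 5 is the NTFS root directory, as an example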
From: SourceForge.net <no...@so...> - 2010-11-11 16:04:41
|
Bugs item #3105539, was opened at 2010-11-08 16:28 Message generated for change (Comment added) made by slacker775 You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Dragonlord (drag0nl0rd) Assigned to: Nobody/Anonymous (nobody) Summary: Build error for sleuthkit 3.2.0 Initial Comment: Building the package on Arch Linux i686 with up-to-date packages gives following error: ... make[2]: Entering directory `/build/src/sleuthkit-3.2.0/tools/autotools' g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_recover.o -MD -MP -MF .deps/tsk_recover.Tpo -c -o tsk_recover.o tsk_recover.cpp mv -f .deps/tsk_recover.Tpo .deps/tsk_recover.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_recover tsk_recover.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_recover tsk_recover.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_loaddb.o -MD -MP -MF .deps/tsk_loaddb.Tpo -c -o tsk_loaddb.o tsk_loaddb.cpp mv -f .deps/tsk_loaddb.Tpo .deps/tsk_loaddb.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': sqlite3.c:(.text+0x2380): undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': sqlite3.c:(.text+0xa1d9): undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': sqlite3.c:(.text+0xa2e4): undefined reference to `pthread_mutexattr_init' sqlite3.c:(.text+0xa2f4): undefined reference to `pthread_mutexattr_settype' sqlite3.c:(.text+0xa308): undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': sqlite3.c:(.text+0xb149): undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': sqlite3.c:(.text+0xb181): undefined reference to `dlopen' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': sqlite3.c:(.text+0x19a56): undefined reference to `dlerror' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: Leaving directory `/build/src/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/build/src/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 Aborting... ---------------------------------------------------------------------- Comment By: David Hollis (slacker775) Date: 2010-11-11 11:04 Message: I was able to get it to compile by adding '-pthread' to the LDFLAGS prior to configure. In my case, I took the latest sleuthkit 3.1.1 SRPM out of Fedora Koji, upped to it 3.2.0, added: export LDFLAGS=-pthread prior to the %configure ... 
call and it built just fine. ---------------------------------------------------------------------- Comment By: https://www.google.com/accounts () Date: 2010-11-11 09:33 Message: Same here, Debian Squeeze amd64. Tested with sid libs. /bin/bash ../../libtool --tag=CXX --mode=link g++ -g -O2 -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -g -O2 -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16683: undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16551: undefined reference to `pthread_mutexattr_init' /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16552: undefined reference to `pthread_mutexattr_settype' /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16554: undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27464: undefined reference to `dlerror' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27491: undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27495: undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27450: undefined reference to `dlopen' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: se sale del directorio `/home/breakersinc/Descargas/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: se sale del directorio `/home/breakersinc/Descargas/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 .... WTF? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-11-11 14:33:20
|
Bugs item #3105539, was opened at 2010-11-08 21:28 Message generated for change (Comment added) made by You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Dragonlord (drag0nl0rd) Assigned to: Nobody/Anonymous (nobody) Summary: Build error for sleuthkit 3.2.0 Initial Comment: Building the package on Arch Linux i686 with up-to-date packages gives following error: ... make[2]: Entering directory `/build/src/sleuthkit-3.2.0/tools/autotools' g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_recover.o -MD -MP -MF .deps/tsk_recover.Tpo -c -o tsk_recover.o tsk_recover.cpp mv -f .deps/tsk_recover.Tpo .deps/tsk_recover.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_recover tsk_recover.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_recover tsk_recover.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_loaddb.o -MD -MP -MF .deps/tsk_loaddb.Tpo -c -o tsk_loaddb.o tsk_loaddb.cpp mv -f .deps/tsk_loaddb.Tpo .deps/tsk_loaddb.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': sqlite3.c:(.text+0x2380): undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': sqlite3.c:(.text+0xa1d9): undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': sqlite3.c:(.text+0xa2e4): undefined reference to `pthread_mutexattr_init' sqlite3.c:(.text+0xa2f4): undefined reference to `pthread_mutexattr_settype' sqlite3.c:(.text+0xa308): undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': sqlite3.c:(.text+0xb149): undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': sqlite3.c:(.text+0xb181): undefined reference to `dlopen' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': sqlite3.c:(.text+0x19a56): undefined reference to `dlerror' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: Leaving directory `/build/src/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/build/src/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 Aborting... ---------------------------------------------------------------------- Comment By: https://www.google.com/accounts () Date: 2010-11-11 14:33 Message: Same here, Debian Squeeze amd64. Tested with sid libs. 
/bin/bash ../../libtool --tag=CXX --mode=link g++ -g -O2 -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -g -O2 -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16683: undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16551: undefined reference to `pthread_mutexattr_init' /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16552: undefined reference to `pthread_mutexattr_settype' /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:16554: undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27464: undefined reference to `dlerror' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27491: undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27495: undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': /home/breakersinc/Descargas/sleuthkit-3.2.0/tsk3/auto/sqlite3.c:27450: undefined reference to `dlopen' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: se sale del directorio `/home/breakersinc/Descargas/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: se sale del directorio `/home/breakersinc/Descargas/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 .... WTF? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 |
From: Rob M. <pi...@gm...> - 2010-11-11 11:52:24
|
Basically a carvpath is a simple file-system-path compatible annotation of the offset/size and/or sparse fragments that make up a file, partition or block within a larger entity (for example a disk image). The CarvPath library is used both by the forensic tool (in this case the tskfs OCFA module that uses libtsk) and the CarvFS pseudo filesystem. For example a CarvPath may that may look something like: "40960+512_S1024_41472+512/256+512" representing a single fragment entity within a second 3 fragment entity (that consists of an ofset/size fragment, a sparse fragment and a second offset/size fragment). LibCarvPath could flatten this carvpath to a shorter two fragment one level carvpath "41216+256_S256" (consisting of an offset/size fragment and a sparse fragment. Basically what now happens in OCFA is that: 1) A kicktree module appends a sparse version of a disk image to a large growing archive that is mounted using CarvFS and submits a carvpath representing that image into OCFA. 2) The image may get router to the carvpath aware (libtsk based) mmls module, that would extract the carvpaths of the partitions. 3) One of the partitions may, having an NTFS filesystem on it, as a carvpath get routed to the carvpath aware (libtsk based) tskfs module, that would extract carvpaths for all suitable (that is : non-compressed, non-encrypted, but currently also only non-resident) files. 4) One of the files may be an ISO image that as a carvpath would also get routed to the carvpath aware (libtsk based) tskfs module, that would extract the files also as carvpaths. 5) The files from the ISO and other files from the NTFS will get routed to other ocfa modules for data and/or meta-data extraction. 6) The partition may after getting back from tskfs get routed to the carvpath aware (libtsk based) blkls module that will extract non-interrupted blocks of unallocated data from the NTFS partition and represent them as carvpaths. 7) The unallocated blocks may get routed to the carvpath aware (scalpel based) carver module that will extract carved files as carvpaths. You get things like "mnt/0/CarvFS/<partition annotation>/<iso-image-file-annotation>/<file-from-iso-annotation>.crv" or "/mnt/0/CarvFS/<partition annotation>/<unallocated-block-annotation>/<carved-file-annotation>.crv" anailable as a pseudo file to data/meta-data extraction tools like exiftags or antiword, without ever any tool needing to copy-out any data to other files on a real filesystem. I hope the above explanation is clear. The point now is that the above works fine for the most part and the libtsk library works great for most of the process. The only minor issue we have is with our tskfs module when it processes NTFS. In NTFS some of the files are resident, and I havn't been able to find my way in the libtsk API to get the tskfs module to gather information needed to represent these resident files as offset/size and/or sparse fragments needed to produce a carvpath for them. The code for tskfs is under OcfaModules/tree/tskfs in the OCFA source distribution on sourceforge ( http://sourceforge.net/projects/ocfa/files/ ), with the relevant classes being TskFsInode and TskFsResidentDataAttribute and possibly TskFsCarvPathDataAttribute (with their code in equally named hpp and cpp files). 
Could you possibly have a quick look at the code and maybe propose changes to the code using the libtsk API that would allow me to implement TskFsResidentDataAttribute in a way not unlike TskFsCarvPathDataAttribute as to allow resident files to use the same libcarvpath based zero storage techniques with resident files that it uses with non resident files? I looked at the API over and over and can't figure out how I could make this work. Possibly the API simply doesn't allow this (yet?), but more likely I simply don't understand how to use the existing in order to gain access to the information I need. Rob 010/11/10 Brian Carrier <ca...@sl...>: > What is a "carvpath"? > > On Nov 4, 2010, at 1:39 AM, Rob Meijer wrote: > >> Yesterday the first release candidate for ocfa (open computer forensics architecture) was released. Jn this version of ocfa the main new thing is that the perl script calling sleuthkit tools was completely replaced with (carvfs aware) treegraph ocfa modules that use libtsk. This setup allows ocfa to use zero storage techniques for most of the extracted slethkit data. There are basically two reasons why copy out would still be needed at the moment. Compression and residentness. I believe howaver that resident files must be expressable as carvpath. If anyone on this list would want to have a quick look at the OcfaModules/tree/tskfs ocfa code for resident and non resident files, maybe we could find a way for resident files to be also represented as carvpaths. Tia, >> >> Rob >> >> ------------------------------------------------------------------------------ >> The Next 800 Companies to Lead America's Growth: New Video Whitepaper >> David G. Thomson, author of the best-selling book "Blueprint to a >> Billion" shares his insights and actions to help propel your >> business during the next growth cycle. Listen Now! >> http://p.sf.net/sfu/SAP-dev2dev_______________________________________________ >> sleuthkit-developers mailing list >> sle...@li... >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers > > |
From: Brian C. <ca...@sl...> - 2010-11-10 01:40:48
|
What is a "carvpath"? On Nov 4, 2010, at 1:39 AM, Rob Meijer wrote: > Yesterday the first release candidate for ocfa (open computer forensics architecture) was released. Jn this version of ocfa the main new thing is that the perl script calling sleuthkit tools was completely replaced with (carvfs aware) treegraph ocfa modules that use libtsk. This setup allows ocfa to use zero storage techniques for most of the extracted slethkit data. There are basically two reasons why copy out would still be needed at the moment. Compression and residentness. I believe howaver that resident files must be expressable as carvpath. If anyone on this list would want to have a quick look at the OcfaModules/tree/tskfs ocfa code for resident and non resident files, maybe we could find a way for resident files to be also represented as carvpaths. Tia, > > Rob > > ------------------------------------------------------------------------------ > The Next 800 Companies to Lead America's Growth: New Video Whitepaper > David G. Thomson, author of the best-selling book "Blueprint to a > Billion" shares his insights and actions to help propel your > business during the next growth cycle. Listen Now! > http://p.sf.net/sfu/SAP-dev2dev_______________________________________________ > sleuthkit-developers mailing list > sle...@li... > https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers |
From: SourceForge.net <no...@so...> - 2010-11-08 21:28:46
|
Bugs item #3105539, was opened at 2010-11-08 22:28 Message generated for change (Tracker Item Submitted) made by drag0nl0rd You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Dragonlord (drag0nl0rd) Assigned to: Nobody/Anonymous (nobody) Summary: Build error for sleuthkit 3.2.0 Initial Comment: Building the package on Arch Linux i686 with up-to-date packages gives following error: ... make[2]: Entering directory `/build/src/sleuthkit-3.2.0/tools/autotools' g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_recover.o -MD -MP -MF .deps/tsk_recover.Tpo -c -o tsk_recover.o tsk_recover.cpp mv -f .deps/tsk_recover.Tpo .deps/tsk_recover.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_recover tsk_recover.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_recover tsk_recover.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a g++ -DHAVE_CONFIG_H -I. -I../../tsk3 -I../.. -Wall -march=i686 -mtune=generic -O2 -pipe -MT tsk_loaddb.o -MD -MP -MF .deps/tsk_loaddb.Tpo -c -o tsk_loaddb.o tsk_loaddb.cpp mv -f .deps/tsk_loaddb.Tpo .deps/tsk_loaddb.Po /bin/sh ../../libtool --tag=CXX --mode=link g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -L/usr/local/lib -static -o tsk_loaddb tsk_loaddb.o ../../tsk3/libtsk3.la libtool: link: g++ -march=i686 -mtune=generic -O2 -pipe -Wl,--hash-style=gnu -Wl,--as-needed -o tsk_loaddb tsk_loaddb.o -L/usr/local/lib ../../tsk3/.libs/libtsk3.a ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlSym': sqlite3.c:(.text+0x2380): undefined reference to `dlsym' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexTry': sqlite3.c:(.text+0xa1d9): undefined reference to `pthread_mutex_trylock' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `pthreadMutexAlloc': sqlite3.c:(.text+0xa2e4): undefined reference to `pthread_mutexattr_init' sqlite3.c:(.text+0xa2f4): undefined reference to `pthread_mutexattr_settype' sqlite3.c:(.text+0xa308): undefined reference to `pthread_mutexattr_destroy' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlClose': sqlite3.c:(.text+0xb149): undefined reference to `dlclose' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlOpen': sqlite3.c:(.text+0xb181): undefined reference to `dlopen' ../../tsk3/.libs/libtsk3.a(sqlite3.o): In function `unixDlError': sqlite3.c:(.text+0x19a56): undefined reference to `dlerror' collect2: ld returned 1 exit status make[2]: *** [tsk_loaddb] Error 1 make[2]: Leaving directory `/build/src/sleuthkit-3.2.0/tools/autotools' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/build/src/sleuthkit-3.2.0/tools' make: *** [all-recursive] Error 1 Aborting... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3105539&group_id=55685 |
From: Rob M. <pi...@gm...> - 2010-11-04 05:39:16
|
Yesterday the first release candidate for OCFA (Open Computer Forensics Architecture) was released. In this version of OCFA the main new thing is that the Perl script calling Sleuth Kit tools was completely replaced with (CarvFS-aware) treegraph OCFA modules that use libtsk. This setup allows OCFA to use zero-storage techniques for most of the extracted Sleuth Kit data. There are basically two reasons why copy-out would still be needed at the moment: compression and residentness. I believe, however, that resident files should be expressible as carvpaths. If anyone on this list would want to have a quick look at the OcfaModules/tree/tskfs OCFA code for resident and non-resident files, maybe we could find a way for resident files to also be represented as carvpaths.

Tia,

Rob |
From: SourceForge.net <no...@so...> - 2010-10-28 02:46:39
|
Bugs item #2950687, was opened at 2010-02-12 11:38 Message generated for change (Comment added) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=2950687&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Other Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Brian Carrier (carrier) Summary: Windows binaries not working. Initial Comment: >From Gregg Gunsch: Per the instruction on the bug tracker page, I'm sending this to the sleuthkit-users list first. Does anybody else see this problem or know of a simple solution? sleuth-win32-3.1.0.zip: On some machines in our relatively homogeneous computer lab, attempting to run TSK tools yields the following error message: "The system cannot execute the specified program" My limited research seems to indicate that an incorrect version of a system DLL could be the culprit (e.g., older kernel32.dll) but I haven't been able to pin down a difference between working and non-working machines, even with Dependency Walker. The files were extracted from the .zip archive and placed into a directory in "C:\Program Files", preserving the hierarchy found in the archive. The path was added to the environment variable, and the commands are being found (e.g., "which istat" locates it). They just aren't successfully being run. I even tried copying the DLLs that came with TSK into the system32 folder, but no help. We are running WinXP Pro, SP2 and SP3. Some SP2 machines run TSK just fine, as do the SP3 versions (and yes, I'm in the process of updating them all). I've also hashed the TSK files on a working system and compared to those on a non-working machine - they are identical. Is there a way to produce a more portable collection of executables that are less target-system dependent? Is there something I should be doing with the manifest so that the dependencies are satisfied? Should I be compiling the source myself instead of using the build in the .zip file? It's been years since I've done development, and a lot seems to have changed, so there may be some simple steps that I'm just overlooking right now. Thanks for your assistance, [[ other offline e-mails exist. Other versions of visual studio are on the machine ]] ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-10-27 21:46 Message: The redist dlls installer seemed to help. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-09-10 11:07 Message: Dan Jerger reported that installing the VS 2008 redist dlls helped: http://www.microsoft.com/downloads/details.aspx?FamilyID=A5C84275-3B97-4AB7-A40D-3802B2AF5FC2&displaylang=en It's still not clear why this is needed though because these dlls are included with TSK... But, they are not officially installed. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=2950687&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-10-27 03:09:10
|
Bugs item #3088447, was opened at 2010-10-15 19:31 Message generated for change (Comment added) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3088447&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Brian Carrier (carrier) Summary: NTFS error about adding attribute sequence Initial Comment: Adam Dershowitz reported: General file system error (fs_attr_add_run: error adding additional run (9): No filler entry for 0. Final 176) ( - proc_attrseq: put run- proc_attrlist) ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-10-26 22:09 Message: I think this is fixed now. Solved by assigning unique IDs to attributes. Previously, TSK assumed that each attribute had a unique ID, but that isn't the case when a file has multiple MFT entries. Sending trunk/NEWS.txt Sending trunk/tsk3/fs/ntfs.c Transmitting file data ... Committed revision 292. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-25 22:41 Message: ntfs_proc_attrlist() should process the list and make a map from (ExtMftEntry,Type,ID) to NewID. That structure needs to be passed to ntfs_proc_attrseq(). ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-25 22:19 Message: The problem seems to come from a TSK design problem. TSK assumes that the attribute ID will be unique for the file, but in fact it is unique only within the MFT entry. For files that have multiple MFT entries because they are fragmented, it is possible to have two attributes with the same type and ID value. In this scenario, this error is generated. One solution is to specify the name in addition to the type. This would dramatically change the addressing scheme in TSK. Another solution is to assign a unique ID to each attribute that is not located in the base MFT entry. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-20 22:06 Message: Looked at the attribute list cluster and the IDX_ALLOC entries have an ID of 0. The BITMAP entries have an ID value set, but the DATA attributes don't either. But, DATA is defined in the base MFT entry. IDX_ALLOC aren't. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-20 21:55 Message: Note that Stefan Kelm reported this same error back in Sept. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-19 22:49 Message: The base MFT entry didn't have any details about the IDX_ALLOC entries. Requested the ATTRLIST block for more details to see if the id is in there. The verbose log shows that all entries in ATTRLIST have an ID of 0. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-18 23:01 Message: Issue seems to be because the $Secure file is VERY fragmented and has lots of attrlists. There are two NTFS_ATYPE_IDXALLOC attributes and they both have an ID of 0 in the attrlist entries. 
So, we get a collision when the second one is added to the runs of the first one (because they have the same type and id). Sent a request to get MFT entry 9 to see if there is ID info that we are dropping.... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3088447&group_id=55685 |
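The fix described above boils down to a small remapping: an NTFS attribute ID is only unique within one MFT entry, so for a file that spans several MFT entries each attribute is keyed on (extension MFT entry, attribute type, original ID) and handed an ID that is unique for the whole file. The sketch below illustrates that idea in Python; it is not the C code committed in revision 292 (ntfs_proc_attrlist/ntfs_proc_attrseq are internal TSK functions), and the MFT entry numbers in the example are made up.

# Minimal illustration (not the actual TSK ntfs.c change) of the remapping
# described in the comments above: attribute IDs are only unique within a
# single MFT entry, so attributes are keyed on
# (extension MFT entry, attribute type, original ID) and given a new ID that
# is unique for the whole file.

NTFS_ATYPE_IDXALLOC = 0xA0  # $INDEX_ALLOCATION
NTFS_ATYPE_BITMAP = 0xB0    # $BITMAP

def build_attr_id_map(attrlist_entries):
    """Map (ext_mft_entry, attr_type, orig_id) -> file-unique new ID."""
    id_map = {}
    for key in attrlist_entries:
        if key not in id_map:      # the same attribute can be listed more than once
            id_map[key] = len(id_map)
    return id_map

# Two IDX_ALLOC attributes in different extension entries, both with ID 0
# (the collision from this bug report), no longer clash because the extension
# entry number is part of the key.  The entry numbers 101 and 202 are made up.
entries = [
    (101, NTFS_ATYPE_IDXALLOC, 0),
    (202, NTFS_ATYPE_IDXALLOC, 0),
    (101, NTFS_ATYPE_BITMAP, 1),
]
print(build_attr_id_map(entries))
# {(101, 160, 0): 0, (202, 160, 0): 1, (101, 176, 1): 2}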
From: SourceForge.net <no...@so...> - 2010-10-26 03:41:56
|
Bugs item #3088447, was opened at 2010-10-15 19:31 Message generated for change (Comment added) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3088447&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Brian Carrier (carrier) Summary: NTFS error about adding attribute sequence Initial Comment: Adam Dershowitz reported: General file system error (fs_attr_add_run: error adding additional run (9): No filler entry for 0. Final 176) ( - proc_attrseq: put run- proc_attrlist) ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-10-25 22:41 Message: ntfs_proc_attrlist() should process the list and make a map from (ExtMftEntry,Type,ID) to NewID. That structure needs to be passed to ntfs_proc_attrseq(). ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-25 22:19 Message: The problem seems to come from a TSK design problem. TSK assumes that the attribute ID will be unique for the file, but in fact it is unique only within the MFT entry. For files that have multiple MFT entries because they are fragmented, it is possible to have two attributes with the same type and ID value. In this scenario, this error is generated. One solution is to specify the name in addition to the type. This would dramatically change the addressing scheme in TSK. Another solution is to assign a unique ID to each attribute that is not located in the base MFT entry. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-20 22:06 Message: Looked at the attribute list cluster and the IDX_ALLOC entries have an ID of 0. The BITMAP entries have an ID value set, but the DATA attributes don't either. But, DATA is defined in the base MFT entry. IDX_ALLOC aren't. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-20 21:55 Message: Note that Stefan Kelm reported this same error back in Sept. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-19 22:49 Message: The base MFT entry didn't have any details about the IDX_ALLOC entries. Requested the ATTRLIST block for more details to see if the id is in there. The verbose log shows that all entries in ATTRLIST have an ID of 0. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-18 23:01 Message: Issue seems to be because the $Secure file is VERY fragmented and has lots of attrlists. There are two NTFS_ATYPE_IDXALLOC attributes and they both have an ID of 0 in the attrlist entries. So, we get a collision when the second one is added to the runs of the first one (because they have the same type and id). Sent a request to get MFT entry 9 to see if there is ID info that we are dropping.... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3088447&group_id=55685 |
From: SourceForge.net <no...@so...> - 2010-10-26 03:19:31
|
Bugs item #3088447, was opened at 2010-10-15 19:31 Message generated for change (Comment added) made by carrier You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3088447&group_id=55685 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: File System Tools Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Carrier (carrier) Assigned to: Brian Carrier (carrier) Summary: NTFS error about adding attribute sequence Initial Comment: Adam Dershowitz reported: General file system error (fs_attr_add_run: error adding additional run (9): No filler entry for 0. Final 176) ( - proc_attrseq: put run- proc_attrlist) ---------------------------------------------------------------------- >Comment By: Brian Carrier (carrier) Date: 2010-10-25 22:19 Message: The problem seems to come from a TSK design problem. TSK assumes that the attribute ID will be unique for the file, but in fact it is unique only within the MFT entry. For files that have multiple MFT entries because they are fragmented, it is possible to have two attributes with the same type and ID value. In this scenario, this error is generated. One solution is to specify the name in addition to the type. This would dramatically change the addressing scheme in TSK. Another solution is to assign a unique ID to each attribute that is not located in the base MFT entry. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-20 22:06 Message: Looked at the attribute list cluster and the IDX_ALLOC entries have an ID of 0. The BITMAP entries have an ID value set, but the DATA attributes don't either. But, DATA is defined in the base MFT entry. IDX_ALLOC aren't. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-20 21:55 Message: Note that Stefan Kelm reported this same error back in Sept. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-19 22:49 Message: The base MFT entry didn't have any details about the IDX_ALLOC entries. Requested the ATTRLIST block for more details to see if the id is in there. The verbose log shows that all entries in ATTRLIST have an ID of 0. ---------------------------------------------------------------------- Comment By: Brian Carrier (carrier) Date: 2010-10-18 23:01 Message: Issue seems to be because the $Secure file is VERY fragmented and has lots of attrlists. There are two NTFS_ATYPE_IDXALLOC attributes and they both have an ID of 0 in the attrlist entries. So, we get a collision when the second one is added to the runs of the first one (because they have the same type and id). Sent a request to get MFT entry 9 to see if there is ID info that we are dropping.... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=477889&aid=3088447&group_id=55685 |
From: Michael C. <scu...@gm...> - 2010-10-24 20:16:09
|
Hi Lists, I have just uploaded a new experimental Python binding for TSK 3.1 at http://code.google.com/p/pytsk/ To build it, unpack the source, adjust the location of the TSK headers at the top of setup.py (the default is /usr/local/include/tsk), and then run python setup.py install. There are some sample scripts there which emulate fls, ils, istat, etc., just to give an idea of how to use the library from a Python script. The bindings have been tested on Ubuntu 10.04; they are currently known not to work under Windows. If there is interest in Windows support, I can burn some cycles on it at a later date. Of particular interest is a Python FUSE binding which allows users to mount the file system onto a directory; any deleted files discoverable by TSK will be visible there, and you may even be able to read them. Please test and provide feedback. Thanks, Michael. |
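As a rough idea of what a script built on these bindings looks like, the sketch below walks the root directory of an image, fls-style. The class and method names (Img_Info, FS_Info, open_dir) and the image filename are assumptions borrowed from later pytsk3 releases rather than taken from the sample scripts above, so the API of this early experimental binding may differ.

# Rough sketch of an fls-style listing.  The names Img_Info / FS_Info /
# open_dir come from later pytsk3 releases and are an assumption here; the
# early experimental binding announced above may use a different API.
import pytsk3

img = pytsk3.Img_Info("disk.dd")        # hypothetical raw disk image
fs = pytsk3.FS_Info(img)                # file system at offset 0 of the image
for entry in fs.open_dir(path="/"):     # iterate the root directory
    name = entry.info.name.name         # bytes in current pytsk3 releases
    if isinstance(name, bytes):
        name = name.decode("utf-8", "replace")
    meta = entry.info.meta              # may be None for some directory entries
    size = meta.size if meta is not None else 0
    print(name, size)

The FUSE binding mentioned above would presumably sit on top of the same kind of calls, exposing each directory entry (including deleted files that TSK can still recover) through a mounted path instead of printing it.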