sleuthkit-users Mailing List for The Sleuth Kit (Page 14)
From: Richard C. <rco...@ba...> - 2016-09-14 23:32:13
|
It looks like you are overtaxing your system with too many file ingest threads. The error notifications you are getting should not be considered typical; they look symptomatic of a system struggling with out-of-memory errors. Based on your ingest progress snapshot, all file-level analysis has been completed except keyword search, which is the most memory-intensive aspect of Autopsy data source ingest.

>> The system has dual Xeon x5550 2.66 quad core processors...I've set number of threads to 12 as suggested by the Options dialog.

We generally recommend at most one file ingest thread per core, and the code for the Autopsy options dialog is even more conservative. For the system you describe, I would expect the Java library we use to detect the available processors (java.lang.Runtime.getRuntime().availableProcessors()) to return eight (http://ark.intel.com/products/37106/Intel-Xeon-Processor-X5550-8M-Cache-2_66-GHz-6_40-GTs-Intel-QPI), which would give you a recommendation of only six file ingest threads! However, it appears that the Java library is reporting the "hyperthreads" (sixteen) rather than the cores, which would indeed result in a recommendation of twelve threads. We will need to look into this further! In the meantime, the best advice I can give you is to back way off on file ingest threads.

Sincerely,
Richard Cordovano
Basis Technology

On Wed, Sep 14, 2016 at 6:04 PM, MATT PIERCE <mat...@ad...> wrote:
> I'm working a case and again have issues with performance using Autopsy.
> I have set up a dedicated server for running Autopsy. In two days of ingest
> I'm at 45%. In 8 hours it has only progressed 7%. I was hoping someone could
> spot where my bottleneck is?
>
> The system has dual Xeon x5550 2.66 quad core processors. 24 GB RAM.
> Windows 2012 R2 x64. The case drive is an OCZ Revo 350 PCIe SSD. Autopsy
> is loaded on a RAID 0 15k SAS volume.
>
> Autopsy Load.
>
> Product Version: Autopsy 4.1.1 (RELEASE) Sleuth Kit Version: 4.2.0
> Netbeans RCP Build: 201510222201 Java: 1.8.0_92; Java HotSpot(TM) 64-Bit
> Server VM 25.92-b14 System: Windows Server 2012 R2 version 6.3 running on
> amd64; Cp1252; en_US (autopsy)
>
> The image is from a Windows 7 workstation. FTK Imager took the disk image
> in E01 format. I have the NSRL known-good hash database loaded. I've set
> the number of threads to 12 as suggested by the Options dialog. I'm running
> the default ingest process with no 3rd-party modules.
>
> Performance Diagnostics
>
> Ingest Progress Snapshot
>
> 1   IDLE   Wed Sep 14 01:10:44 CDT 2016   15:49:08.024   0
> 2   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:25.759   2
> 3   IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.758   0
> 4   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:25.798   2
> 5   IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.764   0
> 6   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:25.759   2
> 7   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.155   2
> 8   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.132   2
> 9   IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.742   0
> 10  IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.812   0
> 11  Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.120   2
> 12  IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.811   0
> 13  Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.155   2
>
> 2   2016-08-31-1-1.E01   15:23:00   193034   2.0938940654524942   84   2   36   34   0
>
> Keyword Search                 155:52:59.626 (36%)
> File Type Identification       139:55:17.562 (32%)
> Hash Lookup                     48:49:44.445 (11%)
> Embedded File Extractor         46:01:20.323 (10%)
> Email Parser                    28:37:58.461 (6%)
> Extension Mismatch Detector      4:51:51.763 (1%)
> Exif Parser                      1:41:39.597 (0%)
> PhotoRec Carver                  0:00:04.762 (0%)
> Interesting Files Identifier     0:00:00.105 (0%)
>
> I get a bunch of errors but that is pretty typical.
|
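A quick way to confirm the core/thread mismatch Richard describes is to ask Windows itself for physical cores versus logical processors. This is only a sketch using the stock wmic tool from a Windows command prompt on the ingest server; the expected counts are an assumption based on the dual X5550 configuration in the original post.

:: Each CPU row should report 4 cores and 8 logical processors for a Xeon
:: X5550, i.e. 8 physical cores but 16 hyperthreads across the two sockets --
:: the larger number is what the thread recommendation appears to be based on.
wmic cpu get NumberOfCores,NumberOfLogicalProcessors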
From: MATT P. <mat...@ad...> - 2016-09-14 22:19:50
|
I'm working a case and again have issues with performance using Autopsy. I have set up a dedicated server for running Autopsy. In two days of ingest I'm at 45%. In 8 hours it has only progressed 7%. I was hoping someone could spot where my bottleneck is?

The system has dual Xeon x5550 2.66 quad core processors. 24 GB RAM. Windows 2012 R2 x64. The case drive is an OCZ Revo 350 PCIe SSD. Autopsy is loaded on a RAID 0 15k SAS volume.

Autopsy Load.

Product Version: Autopsy 4.1.1 (RELEASE) Sleuth Kit Version: 4.2.0 Netbeans RCP Build: 201510222201 Java: 1.8.0_92; Java HotSpot(TM) 64-Bit Server VM 25.92-b14 System: Windows Server 2012 R2 version 6.3 running on amd64; Cp1252; en_US (autopsy)

The image is from a Windows 7 workstation. FTK Imager took the disk image in E01 format. I have the NSRL known-good hash database loaded. I've set the number of threads to 12 as suggested by the Options dialog. I'm running the default ingest process with no 3rd-party modules.

Performance Diagnostics [inline screenshot]

Ingest Progress Snapshot

1   IDLE   Wed Sep 14 01:10:44 CDT 2016   15:49:08.024   0
2   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:25.759   2
3   IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.758   0
4   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:25.798   2
5   IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.764   0
6   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:25.759   2
7   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.155   2
8   Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.132   2
9   IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.742   0
10  IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.812   0
11  Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.120   2
12  IDLE   Wed Sep 14 16:59:26 CDT 2016   0:00:25.811   0
13  Keyword Search   2016-08-31-1-1.E01   image1.emf   Wed Sep 14 16:59:26 CDT 2016   0:00:26.155   2

2   2016-08-31-1-1.E01   15:23:00   193034   2.0938940654524942   84   2   36   34   0

Keyword Search                 155:52:59.626 (36%)
File Type Identification       139:55:17.562 (32%)
Hash Lookup                     48:49:44.445 (11%)
Embedded File Extractor         46:01:20.323 (10%)
Email Parser                    28:37:58.461 (6%)
Extension Mismatch Detector     4:51:51.763 (1%)
Exif Parser                     1:41:39.597 (0%)
PhotoRec Carver                 0:00:04.762 (0%)
Interesting Files Identifier    0:00:00.105 (0%)

I get a bunch of errors but that is pretty typical. [inline screenshot]
|
From: Edward D. <eld...@tr...> - 2016-09-14 01:47:02
|
I have an EnCase image of a Mac drive. Under either TSK 4.2.0 or TSK 4.3.0 I can run mmls against the image to see all of the volumes:

MAC Partition Map
Offset Sector: 0
Units are in 512-byte sectors

      Slot      Start        End          Length       Description
000:  -------   0000000000   0000000000   0000000001   Unallocated
001:  000       0000000001   0000000063   0000000063   Apple_partition_map
002:  Meta      0000000001   0000000010   0000000010   Table
003:  001       0000000064   0000033015   0000032952   Apple_HFS
004:  002       0000033016   0000065967   0000032952   Apple_HFS
005:  003       0000065968   0000080871   0000014904   Apple_Free
006:  004       0000080872   0000115391   0000034520   Apple_HFS
007:  005       0000115392   0000131079   0000015688   Apple_Free
008:  006       0000131080   0000166383   0000035304   Apple_HFS
009:  007       0000166384   0000183791   0000017408   Apple_Boot
010:  008       0000183792   0000201583   0000017792   Apple_UFS
011:  009       0000201584   0000201599   0000000016   Apple_Free

This looks very much like the Mac partition map shown in the documentation at http://wiki.sleuthkit.org/index.php?title=Mmls#Mac_Partitions, although of course the partition types are slightly different. But when I try to recover file information from these partitions, using either tsk_loaddb or tsk_recover in 4.2.0 or 4.3.0, I instead get the errors:

Error: Cannot determine file system type (Sector offset: 1, Partition Type: Apple_partition_map)
Error: Cannot determine file system type (Sector offset: 65968, Partition Type: Apple_Free)
Error: Cannot determine file system type (Sector offset: 115392, Partition Type: Apple_Free)
Error: Cannot determine file system type (Sector offset: 131080, Partition Type: Apple_HFS)

Since the image at the link I gave previously also contains Apple_partition_map, Apple_Free, and Apple_HFS partitions, are SleuthKit 4.2.0 and 4.3.0 unable to work with Mac disks as far as discovering the data in the individual partitions? If so, is there any timetable for fixing SleuthKit so that it works with Mac disks and Mac partition types? If not, is there any way using SleuthKit to discover why tsk_loaddb or tsk_recover cannot determine the file system while mmls easily can?
|
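One way to narrow this down is to probe each Apple_HFS slot from the mmls output individually rather than letting the tools walk the whole partition map. The sketch below is only illustrative: the image name is a placeholder, the sector offsets are taken from the listing above, and the Apple_partition_map/Apple_Free errors are expected since those slots contain no file system.

# Ask TSK whether it can identify a file system at each Apple_HFS offset.
fsstat -o 64     mac-image.E01      # slot 003
fsstat -o 33016  mac-image.E01      # slot 004
fsstat -o 80872  mac-image.E01      # slot 006
fsstat -o 131080 mac-image.E01      # slot 008, the HFS partition the error mentions
fsstat -o 131080 -f hfs mac-image.E01   # force the HFS driver to get a more specific error

# If a partition does identify, recover just that partition.
tsk_recover -e -o 64 mac-image.E01 recovered-slot003/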
From: Brian C. <ca...@sl...> - 2016-09-13 14:16:37
|
Hello, We're hosting a 1-day Autopsy training after OSDFCon on October 27. You can attend in person if you will be in Herndon, VA for OSDFCon, or online. The course covers the basics of using the tool and configuring the modules. You can register using the forms on this page: http://www.autopsy.com/training/ I know the hours are difficult for online attendees who are not on the East Coast, and we are looking at options later in the year to offer it more frequently at different times. thanks, brian |
From: Joel G. <joe...@gm...> - 2016-09-13 08:52:41
|
Hi everyone, I'm a new user of Autopsy (Windows GUI), and I have a question: when I add an E01 image in Autopsy, I can't see the file system type (NTFS, HFS, etc.) in the right frame. [image: 2..JPG] Do you think it's possible to add this info to this frame? Joel, French Gendarmerie |
From: <lar...@ea...> - 2016-09-12 20:35:13
|
Hey sleuthkit-users, I was so very lucky to receive the answers I needed to solve my problem. Way to go, sleuthkit-users... thanks, Larry Gilson |
From: Derrick K. <dk...@gm...> - 2016-09-12 18:07:27
|
Hi Larry. The package you installed is the old Autopsy 2.x package, which is no longer maintained. Are you certain that is the version you want to run, or were you trying to run Autopsy 4.x, which is the latest Java-based Autopsy?

If you are trying to run Autopsy 2.x, you need to start Autopsy as root via 'sudo autopsy' or run it as your root account, and then it will bind correctly to tcp:9999. i.e.:

$ sudo autopsy
$ sudo netstat -pan | grep 9999
tcp   0   0 127.0.0.1:9999   0.0.0.0:*   LISTEN   11923/perl
$ pgrep autopsy
11923

After it is bound correctly, you should be able to hit it with your web browser: http://localhost:9999/autopsy

Derrick

On Mon, Sep 12, 2016 at 11:51 AM, <lar...@ea...> wrote:
> Hello fellow Autopsy users,
>
> How I wish I were a fellow also, but I can't get Autopsy to work. I'm running
> Linux Mint 18 on an ASUS laptop with plenty of memory and hard disk space. I
> downloaded Sleuthkit and Autopsy with apt-get and everything seemed fine. The
> binaries are there and seem to work fine. However, when I start Autopsy I get a
> message inside a terminal window to send my browser to localhost:9999/autopsy,
> but my browser says that it cannot make a connection. Do I have to set up my
> local interface like I do for a wireless connection? Also, looking at my running
> processes after the terminal window with the message in it is still up, there is
> no Autopsy process.
>
> Have read till I'm blue in the face and don't know what to do now. This is the
> first time I have asked for help on a users list, so I hope that I'm doing it
> right. I mostly struggle through myself but this one's got me. Any help would be
> greatly appreciated.
>
> P.S. I tried the archives but didn't know how to search for a specific topic...
>
> thank you in advance.
> Larry W. Gilson
|
From: <lar...@ea...> - 2016-09-12 17:52:11
|
Hello fellow Autopsy users,

How I wish I were a fellow also, but I can't get Autopsy to work. I'm running Linux Mint 18 on an ASUS laptop with plenty of memory and hard disk space. I downloaded Sleuthkit and Autopsy with apt-get and everything seemed fine. The binaries are there and seem to work fine. However, when I start Autopsy I get a message inside a terminal window to send my browser to localhost:9999/autopsy, but my browser says that it cannot make a connection. Do I have to set up my local interface like I do for a wireless connection? Also, looking at my running processes after the terminal window with the message in it is still up, there is no Autopsy process.

Have read till I'm blue in the face and don't know what to do now. This is the first time I have asked for help on a users list, so I hope that I'm doing it right. I mostly struggle through myself but this one's got me. Any help would be greatly appreciated.

P.S. I tried the archives but didn't know how to search for a specific topic...

thank you in advance.
Larry W. Gilson
|
From: <slo...@gm...> - 2016-08-30 01:09:42
|
I found the source file using the tsk_file_layout table in the SleuthKit database created by tsk_loaddb. When I perform an istat lookup on the metadata structure, I find the file has the following blocks identified:

Type: $DATA (128-1)   Name: N/A   Non-Resident   size: 1442836480   init_size: 0
6710176 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0

I know Brian has explained the zeros in the block list, but I no longer recall why they occur. Is this the reason ifind fails to identify the metadata address?

Are there any command-line workarounds for building a file layout without invoking and relying on tsk_loaddb? I'm willing to work with the db, but the complexity is more than I need and the flag definitions are hard to find (i.e., attribute types and IDs, etc.). The links from the wiki are broken, too.

Thanks, and suggestions appreciated.
John

On Mon, Aug 29, 2016 at 5:38 PM, slo...@gm... <slo...@gm...> wrote:
> I have an artifact of interest in block 6713248 of a partition (here
> represented with the variable $p4). The blkstat tool and the results of
> blkls -el both indicate the block is allocated.
>
> $ blkstat $p4 6713248
> Cluster: 6713248
> Allocated
>
> However, when I use ifind to determine the associated metadata structure,
> no inode can be found:
>
> $ ifind $p4 -d 6713248
> Inode not found
>
> Is this a bug? How do I determine the associated file/metadata structure
> without ifind?
>
> Thanks,
> John
>
> Platform: Linux
> TSK v. 4.3.0 and 4.2.0
|
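Since a tsk_loaddb database already exists for this image, one workaround is to map the block of interest back to a file through the tsk_file_layout table directly, instead of going through ifind. This is only a sketch: the database name, partition start, and cluster size are placeholders, and it assumes the stock tsk_loaddb schema in which tsk_file_layout.byte_start/byte_len are byte ranges relative to the start of the image and join to tsk_files on obj_id.

# Placeholder values -- take the partition start from mmls and the cluster
# size from fsstat, then convert the cluster number to an absolute byte offset.
PART_START_SECTORS=2048
CLUSTER_SIZE=4096
CLUSTER=6713248
OFFSET=$(( PART_START_SECTORS * 512 + CLUSTER * CLUSTER_SIZE ))

sqlite3 image.db "
  SELECT f.obj_id, f.name, f.meta_addr, l.byte_start, l.byte_len
  FROM tsk_file_layout l
  JOIN tsk_files f ON f.obj_id = l.obj_id
  WHERE $OFFSET >= l.byte_start
    AND $OFFSET <  l.byte_start + l.byte_len;"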
From: <slo...@gm...> - 2016-08-30 00:38:49
|
I have an artifact of interest in block 6713248 of a partition (here represented with the variable $p4). The blkstat tool and the results of blkls -el both indicate the block is allocated:

$ blkstat $p4 6713248
Cluster: 6713248
Allocated

However, when I use ifind to determine the associated metadata structure, no inode can be found:

$ ifind $p4 -d 6713248
Inode not found

Is this a bug? How do I determine the associated file/metadata structure without ifind?

Thanks,
John

Platform: Linux
TSK v. 4.3.0 and 4.2.0
|
From: Simson G. <si...@ac...> - 2016-08-27 22:16:59
|
Luis, You are correct: — NTFS stores in UTC — -z is not relevant for NTFS. When testing this, you need to consider not just if DST was in effect when the files were written, but also if it is in effect when the files are viewed. Earlier versions of EnCase would change the file timestamps depending on whether you viewed the file in winter or summer. I believe that this was fixed in later versions. Simson > On Aug 26, 2016, at 10:03 AM, Luís Filipe Nassif <lfc...@gm...> wrote: > > Hi Simson, > > I've tested on NTFS images too, and the timestamps are stored ok in the database as UTC, regardless of being daylight savings time or not, and with or without the ported -z option. As I understood, I think -z option is to specify the original timezone of the evidence being ingested, it is not relevant to NTFS, but is to FAT. That is different from ajusting the timestamps to a specific timezone used by the investigator during an analisys. > > So I think tsk stores timestamps as UTC, and I think tsk_loaddb needs -z option to work with FAT images. That option exists in JavaBindings api because it is needed. > > Another problem is that I've observed the difference between the timestamps shown by Windows and the ones stored by tsk_loadb is -3 hours (because of UTC-3 timezone, that's ok) for most files but is only -1 hour when it should be -2 hours for timestamps in daylight saving time. I think tsk assumes (without -z option) FAT timestamps are stored with daylight saving time conversion, but they may be not. > > I agree with you that different systems (Windows, Linux, Cameras and Cell phones) may store the timestamps on FAT by different ways, with or without daylight saving time, as UTC or not, so I think the -z option is important to let the user specify. > > Luis > > 2016-08-25 22:56 GMT-03:00 Simson Garfinkel <si...@ac... <mailto:si...@ac...>>: > Testing on FAT32 images is not enough. My belief is that different operating systems have different behavior. > > MSDOS, for example, only had one concept of time — local time. When daylight savings came around it was the responsibility of the operator to change the clock accordingly. Back in the 1980s this was one of the big advantages of Unix over DOS. > > Windows 3.1 did include timezone support. A program called TZEDIT allowed editing the time zone information. Apparently TZEDIT was available in 1995 (https://www.reasoncoresecurity.com/tzedit.exe-ff22959d3c2875308b1a47481964b44b95177128.aspx <https://www.reasoncoresecurity.com/tzedit.exe-ff22959d3c2875308b1a47481964b44b95177128.aspx>); I think that it was available earlier. > > There was always problems with times being wrong when the timezone flipped. I don't remember how DOS file timestamps changed after a daylight savings time change. The times might have changed, but they might not. Some forensic tools allow you to specify the timezone and whether or not daylight savings time is in effect for looking at DOS partitions. That is what the "-z" option does. It's not clear to me if tsk_loaddb requires the -z option or not; it depends on if local time or UTC time is being put in the database. > > Simson > > > >> On Aug 25, 2016, at 4:56 PM, Luís Filipe Nassif <lfc...@gm... <mailto:lfc...@gm...>> wrote: >> >> I've done some tests with two real life FAT32 images and I think the timestamps are being incorreclty decoded by tsk_loaddb. It seems to use local timezone, UTC-3 here at São Paulo, and to store UTC timestamps in the generated sqlite for most files. 
The problem is with timestamps in daylight saving periods. For example, Windows shows 07:00 and tsk_loaddb stores 10:00 for a file not in daylight saving time, it is ok. But for a file in daylight saving time, Windows shows 08:00 and tsk_loaddb stores 09:00 when it should store 10:00 UTC. FTKImager shows 07:00 for that file. >> >> I think timestamps are stored in FAT FS without daylight saving conversions, as I decoded with a hex viewer. I think Windows does the conversion on demand (+1 hour) when it displays the files in Explorer. But I think tsk is considering that timestamps are stored in FAT FS with daylight saving calculations, what I think is not the case. That explains the difference of only 1hour between Windows and loaddb, when it should be 2hours for a file in daylight saving period on a system at UTC-3 (UTC-2 with daylight saving). >> >> I observed tsk_loaddb lacks -z option to configure timezone, but that option is present in fls, ifind, istat, tsk_gettimes.... I've ported -z option to tsk_loaddb and run it with -z BRT3 (without daylight saving) to force it to not do daylight saving calculations, that produced the expected output in sqlite. >> >> Luis >> >> 2016-08-25 15:33 GMT-03:00 Ketil Froyn <ke...@fr... <mailto:ke...@fr...>>: >> I think it uses local timezone. I reported this issue in Autopsy where >> FAT timestamps are interpreted as a local timezone, and I'll assume >> that tsk_loaddb would have done the same: >> >> https://github.com/sleuthkit/autopsy/issues/1687 <https://github.com/sleuthkit/autopsy/issues/1687> >> >> It should be fairly straightforward to test, though. Create a small >> sample image with dd and mkfs -t fat, mount it loopback, create some >> files, and finally run tsk_loaddb on the resulting image file to >> compare tsk's file dates with the ones you set. >> >> On 24 August 2016 at 21:38, Luís Filipe Nassif <lfc...@gm... <mailto:lfc...@gm...>> wrote: >> > Hi, >> > >> > Anyone knows the answer? Does tsk_loaddb use UTC ou local timezone to >> > interpret FAT FS dates? >> > >> > Thanks, >> > Luis >> > >> > >> > 2016-06-17 13:11 GMT-03:00 Luís Filipe Nassif <lfc...@gm... <mailto:lfc...@gm...>>: >> >> >> >> Hi, >> >> >> >> What timezone does tskloaddb use for fat file systems? It is possible to >> >> configure it like Java bindings addimageprocess? >> >> >> >> Thank you, >> >> Luis >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > >> > _______________________________________________ >> > sleuthkit-users mailing list >> > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> >> > http://www.sleuthkit.org <http://www.sleuthkit.org/> >> > >> >> >> >> -- >> -Ketil >> >> ------------------------------------------------------------------------------ >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> >> http://www.sleuthkit.org <http://www.sleuthkit.org/> > > |
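For anyone who wants to see the behavior Luís describes without patching tsk_loaddb, the TSK tools that already accept -z can render the same FAT image under different timezone assumptions. This is only a sketch: the partition offset and image name are placeholders, BRT3 is the fixed UTC-3 string Luís mentions, and whether a region name such as America/Sao_Paulo (which carries DST rules) is accepted depends on your platform's TZ handling.

# List the same FAT directory under three timezone assumptions and compare
# the timestamps of files written inside and outside the DST window.
fls -o 63 -l -z UTC               fat-image.dd > fls-utc.txt
fls -o 63 -l -z BRT3              fat-image.dd > fls-brt3.txt
fls -o 63 -l -z America/Sao_Paulo fat-image.dd > fls-saopaulo.txt
diff fls-utc.txt fls-brt3.txt
diff fls-brt3.txt fls-saopaulo.txt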
From: Luís F. N. <lfc...@gm...> - 2016-08-26 14:04:03
|
Hi Simson, I've tested on NTFS images too, and the timestamps are stored ok in the database as UTC, regardless of being daylight savings time or not, and with or without the ported -z option. As I understood, I think -z option is to specify the original timezone of the evidence being ingested, it is not relevant to NTFS, but is to FAT. That is different from ajusting the timestamps to a specific timezone used by the investigator during an analisys. So I think tsk stores timestamps as UTC, and I think tsk_loaddb needs -z option to work with FAT images. That option exists in JavaBindings api because it is needed. Another problem is that I've observed the difference between the timestamps shown by Windows and the ones stored by tsk_loadb is -3 hours (because of UTC-3 timezone, that's ok) for most files but is only -1 hour when it should be -2 hours for timestamps in daylight saving time. I think tsk assumes (without -z option) FAT timestamps are stored with daylight saving time conversion, but they may be not. I agree with you that different systems (Windows, Linux, Cameras and Cell phones) may store the timestamps on FAT by different ways, with or without daylight saving time, as UTC or not, so I think the -z option is important to let the user specify. Luis 2016-08-25 22:56 GMT-03:00 Simson Garfinkel <si...@ac...>: > Testing on FAT32 images is not enough. My belief is that different > operating systems have different behavior. > > MSDOS, for example, only had one concept of time — local time. When > daylight savings came around it was the responsibility of the operator to > change the clock accordingly. Back in the 1980s this was one of the big > advantages of Unix over DOS. > > Windows 3.1 did include timezone support. A program called TZEDIT allowed > editing the time zone information. Apparently TZEDIT was available in 1995 > (https://www.reasoncoresecurity.com/tzedit.exe- > ff22959d3c2875308b1a47481964b44b95177128.aspx); I think that it was > available earlier. > > There was always problems with times being wrong when the timezone > flipped. I don't remember how DOS file timestamps changed after a daylight > savings time change. The times might have changed, but they might not. > Some forensic tools allow you to specify the timezone and whether or not > daylight savings time is in effect for looking at DOS partitions. That is > what the "-z" option does. It's not clear to me if tsk_loaddb requires the > -z option or not; it depends on if local time or UTC time is being put in > the database. > > Simson > > > > On Aug 25, 2016, at 4:56 PM, Luís Filipe Nassif <lfc...@gm...> > wrote: > > I've done some tests with two real life FAT32 images and I think the > timestamps are being incorreclty decoded by tsk_loaddb. It seems to use > local timezone, UTC-3 here at São Paulo, and to store UTC timestamps in the > generated sqlite for most files. The problem is with timestamps in daylight > saving periods. For example, Windows shows 07:00 and tsk_loaddb stores > 10:00 for a file not in daylight saving time, it is ok. But for a file in > daylight saving time, Windows shows 08:00 and tsk_loaddb stores 09:00 when > it should store 10:00 UTC. FTKImager shows 07:00 for that file. > > I think timestamps are stored in FAT FS without daylight saving > conversions, as I decoded with a hex viewer. I think Windows does the > conversion on demand (+1 hour) when it displays the files in Explorer. 
But > I think tsk is considering that timestamps are stored in FAT FS with > daylight saving calculations, what I think is not the case. That explains > the difference of only 1hour between Windows and loaddb, when it should be > 2hours for a file in daylight saving period on a system at UTC-3 (UTC-2 > with daylight saving). > > I observed tsk_loaddb lacks -z option to configure timezone, but that > option is present in fls, ifind, istat, tsk_gettimes.... I've ported -z > option to tsk_loaddb and run it with -z BRT3 (without daylight saving) to > force it to not do daylight saving calculations, that produced the expected > output in sqlite. > > Luis > > 2016-08-25 15:33 GMT-03:00 Ketil Froyn <ke...@fr...>: > >> I think it uses local timezone. I reported this issue in Autopsy where >> FAT timestamps are interpreted as a local timezone, and I'll assume >> that tsk_loaddb would have done the same: >> >> https://github.com/sleuthkit/autopsy/issues/1687 >> >> It should be fairly straightforward to test, though. Create a small >> sample image with dd and mkfs -t fat, mount it loopback, create some >> files, and finally run tsk_loaddb on the resulting image file to >> compare tsk's file dates with the ones you set. >> >> On 24 August 2016 at 21:38, Luís Filipe Nassif <lfc...@gm...> >> wrote: >> > Hi, >> > >> > Anyone knows the answer? Does tsk_loaddb use UTC ou local timezone to >> > interpret FAT FS dates? >> > >> > Thanks, >> > Luis >> > >> > >> > 2016-06-17 13:11 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >> >> >> >> Hi, >> >> >> >> What timezone does tskloaddb use for fat file systems? It is possible >> to >> >> configure it like Java bindings addimageprocess? >> >> >> >> Thank you, >> >> Luis >> > >> > >> > >> > ------------------------------------------------------------ >> ------------------ >> > >> > _______________________________________________ >> > sleuthkit-users mailing list >> > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> > http://www.sleuthkit.org >> > >> >> >> >> -- >> -Ketil >> > > ------------------------------------------------------------ > ------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org > > > |
From: Simson G. <si...@ac...> - 2016-08-26 03:15:40
|
Testing on FAT32 images is not enough. My belief is that different operating systems have different behavior. MSDOS, for example, only had one concept of time — local time. When daylight savings came around it was the responsibility of the operator to change the clock accordingly. Back in the 1980s this was one of the big advantages of Unix over DOS. Windows 3.1 did include timezone support. A program called TZEDIT allowed editing the time zone information. Apparently TZEDIT was available in 1995 (https://www.reasoncoresecurity.com/tzedit.exe-ff22959d3c2875308b1a47481964b44b95177128.aspx <https://www.reasoncoresecurity.com/tzedit.exe-ff22959d3c2875308b1a47481964b44b95177128.aspx>); I think that it was available earlier. There was always problems with times being wrong when the timezone flipped. I don't remember how DOS file timestamps changed after a daylight savings time change. The times might have changed, but they might not. Some forensic tools allow you to specify the timezone and whether or not daylight savings time is in effect for looking at DOS partitions. That is what the "-z" option does. It's not clear to me if tsk_loaddb requires the -z option or not; it depends on if local time or UTC time is being put in the database. Simson > On Aug 25, 2016, at 4:56 PM, Luís Filipe Nassif <lfc...@gm...> wrote: > > I've done some tests with two real life FAT32 images and I think the timestamps are being incorreclty decoded by tsk_loaddb. It seems to use local timezone, UTC-3 here at São Paulo, and to store UTC timestamps in the generated sqlite for most files. The problem is with timestamps in daylight saving periods. For example, Windows shows 07:00 and tsk_loaddb stores 10:00 for a file not in daylight saving time, it is ok. But for a file in daylight saving time, Windows shows 08:00 and tsk_loaddb stores 09:00 when it should store 10:00 UTC. FTKImager shows 07:00 for that file. > > I think timestamps are stored in FAT FS without daylight saving conversions, as I decoded with a hex viewer. I think Windows does the conversion on demand (+1 hour) when it displays the files in Explorer. But I think tsk is considering that timestamps are stored in FAT FS with daylight saving calculations, what I think is not the case. That explains the difference of only 1hour between Windows and loaddb, when it should be 2hours for a file in daylight saving period on a system at UTC-3 (UTC-2 with daylight saving). > > I observed tsk_loaddb lacks -z option to configure timezone, but that option is present in fls, ifind, istat, tsk_gettimes.... I've ported -z option to tsk_loaddb and run it with -z BRT3 (without daylight saving) to force it to not do daylight saving calculations, that produced the expected output in sqlite. > > Luis > > 2016-08-25 15:33 GMT-03:00 Ketil Froyn <ke...@fr... <mailto:ke...@fr...>>: > I think it uses local timezone. I reported this issue in Autopsy where > FAT timestamps are interpreted as a local timezone, and I'll assume > that tsk_loaddb would have done the same: > > https://github.com/sleuthkit/autopsy/issues/1687 <https://github.com/sleuthkit/autopsy/issues/1687> > > It should be fairly straightforward to test, though. Create a small > sample image with dd and mkfs -t fat, mount it loopback, create some > files, and finally run tsk_loaddb on the resulting image file to > compare tsk's file dates with the ones you set. > > On 24 August 2016 at 21:38, Luís Filipe Nassif <lfc...@gm... <mailto:lfc...@gm...>> wrote: > > Hi, > > > > Anyone knows the answer? 
Does tsk_loaddb use UTC ou local timezone to > > interpret FAT FS dates? > > > > Thanks, > > Luis > > > > > > 2016-06-17 13:11 GMT-03:00 Luís Filipe Nassif <lfc...@gm... <mailto:lfc...@gm...>>: > >> > >> Hi, > >> > >> What timezone does tskloaddb use for fat file systems? It is possible to > >> configure it like Java bindings addimageprocess? > >> > >> Thank you, > >> Luis > > > > > > > > ------------------------------------------------------------------------------ > > > > _______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> > > http://www.sleuthkit.org <http://www.sleuthkit.org/> > > > > > > -- > -Ketil > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Luís F. N. <lfc...@gm...> - 2016-08-25 20:56:12
|
I've done some tests with two real life FAT32 images and I think the timestamps are being incorreclty decoded by tsk_loaddb. It seems to use local timezone, UTC-3 here at São Paulo, and to store UTC timestamps in the generated sqlite for most files. The problem is with timestamps in daylight saving periods. For example, Windows shows 07:00 and tsk_loaddb stores 10:00 for a file not in daylight saving time, it is ok. But for a file in daylight saving time, Windows shows 08:00 and tsk_loaddb stores 09:00 when it should store 10:00 UTC. FTKImager shows 07:00 for that file. I think timestamps are stored in FAT FS without daylight saving conversions, as I decoded with a hex viewer. I think Windows does the conversion on demand (+1 hour) when it displays the files in Explorer. But I think tsk is considering that timestamps are stored in FAT FS with daylight saving calculations, what I think is not the case. That explains the difference of only 1hour between Windows and loaddb, when it should be 2hours for a file in daylight saving period on a system at UTC-3 (UTC-2 with daylight saving). I observed tsk_loaddb lacks -z option to configure timezone, but that option is present in fls, ifind, istat, tsk_gettimes.... I've ported -z option to tsk_loaddb and run it with -z BRT3 (without daylight saving) to force it to not do daylight saving calculations, that produced the expected output in sqlite. Luis 2016-08-25 15:33 GMT-03:00 Ketil Froyn <ke...@fr...>: > I think it uses local timezone. I reported this issue in Autopsy where > FAT timestamps are interpreted as a local timezone, and I'll assume > that tsk_loaddb would have done the same: > > https://github.com/sleuthkit/autopsy/issues/1687 > > It should be fairly straightforward to test, though. Create a small > sample image with dd and mkfs -t fat, mount it loopback, create some > files, and finally run tsk_loaddb on the resulting image file to > compare tsk's file dates with the ones you set. > > On 24 August 2016 at 21:38, Luís Filipe Nassif <lfc...@gm...> > wrote: > > Hi, > > > > Anyone knows the answer? Does tsk_loaddb use UTC ou local timezone to > > interpret FAT FS dates? > > > > Thanks, > > Luis > > > > > > 2016-06-17 13:11 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: > >> > >> Hi, > >> > >> What timezone does tskloaddb use for fat file systems? It is possible to > >> configure it like Java bindings addimageprocess? > >> > >> Thank you, > >> Luis > > > > > > > > ------------------------------------------------------------ > ------------------ > > > > _______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > > > > > -- > -Ketil > |
From: Ketil F. <ke...@fr...> - 2016-08-25 18:59:42
|
I think it uses local timezone. I reported this issue in Autopsy where FAT timestamps are interpreted as a local timezone, and I'll assume that tsk_loaddb would have done the same: https://github.com/sleuthkit/autopsy/issues/1687 It should be fairly straightforward to test, though. Create a small sample image with dd and mkfs -t fat, mount it loopback, create some files, and finally run tsk_loaddb on the resulting image file to compare tsk's file dates with the ones you set. On 24 August 2016 at 21:38, Luís Filipe Nassif <lfc...@gm...> wrote: > Hi, > > Anyone knows the answer? Does tsk_loaddb use UTC ou local timezone to > interpret FAT FS dates? > > Thanks, > Luis > > > 2016-06-17 13:11 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >> >> Hi, >> >> What timezone does tskloaddb use for fat file systems? It is possible to >> configure it like Java bindings addimageprocess? >> >> Thank you, >> Luis > > > > ------------------------------------------------------------------------------ > > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org > -- -Ketil |
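Roughly what Ketil's suggested test could look like on a Linux box. This is a sketch only: the image size, mount point, file dates, and the database filename are assumptions (check tsk_loaddb's help output for where your build writes its SQLite database), and the two dates are chosen so that one falls inside the local daylight saving window.

# Build a small FAT image, set known timestamps, then compare what
# tsk_loaddb records against the values you set.
dd if=/dev/zero of=fat-test.img bs=1M count=64
mkfs.vfat fat-test.img
mkdir -p /tmp/fat-test
sudo mount -o loop fat-test.img /tmp/fat-test
sudo touch -d '2016-01-15 12:00:00' /tmp/fat-test/january.txt
sudo touch -d '2016-07-15 12:00:00' /tmp/fat-test/july.txt
sudo umount /tmp/fat-test

tsk_loaddb fat-test.img      # by default the .db is written next to the image
sqlite3 fat-test.img.db \
  "SELECT name, datetime(mtime, 'unixepoch') FROM tsk_files WHERE name LIKE '%.txt';"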
From: Luís F. N. <lfc...@gm...> - 2016-08-24 19:38:45
|
Hi, Anyone knows the answer? Does tsk_loaddb use UTC ou local timezone to interpret FAT FS dates? Thanks, Luis 2016-06-17 13:11 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: > Hi, > > What timezone does tskloaddb use for fat file systems? It is possible to > configure it like Java bindings addimageprocess? > > Thank you, > Luis > |
From: Brian C. <ca...@sl...> - 2016-08-22 18:27:03
|
Autopsy 4.1.1 was released with a single bug fix for Python modules. The 4.1.0 release included some extra files that caused some Python modules to not work. 4.1.1 fixes that. http://sleuthkit.org/autopsy/download.php thanks, brian |
From: Brian C. <ca...@sl...> - 2016-08-22 13:47:17
|
The 7th Annual OSDFCon program has been finalized. Learn about the latest in memory forensics, incident response, cloud computing, and more at the 1-day event in Herndon, VA. This year's event also has 4 hands-on workshops before the conference and a 1-day Autopsy training after it. Register now to learn about software that you can add to your toolkit. As always, it is free for government employees. Early bird registration ends October 1st.

Conference Program

This year's program includes presentations from Facebook, Google, Mozilla, and Basis Technology. Topics include:
• Enterprise-scale incident response
• Memory forensics
• Timeline analysis
• Updates to Autopsy
• Parsing Windows artifacts
• Using and analyzing the cloud
• and more!

Workshops

The workshops before the conference provide hands-on experience and are often taught by the developers. Topics include:
• Memory forensics using Volatility
• Writing Python modules for Autopsy
• Parsing Windows artifacts using Eric Zimmerman's tools

Autopsy Training

There is a 1-day Autopsy training course after the conference that covers how to most effectively use the free software. Autopsy has the features needed to fully conduct an investigation and was built to be easy to use and extensible. Learn how to use it in your cases. Register now for the conference, workshops, or training.

Sponsorships

We have a small number of sponsorship opportunities available this year if you are interested in supporting this effort. We can provide exhibit tables or recognition for specific events.

Autopsy Module Contest

Basis Technology is again sponsoring an Autopsy module contest. Win cash prizes with a Python or Java module. The more submissions we get, the bigger the prizes get. Autopsy provides the infrastructure needed to build modules. All you need to do is write some cool analytics. We have several tutorials and code samples to start from. Submit your module code for the chance to win a cash prize! Submissions are due October 10, 2016.
|
From: Hoyt H. <hoy...@gm...> - 2016-08-06 21:42:15
|
If any of you guys get a chance to look, I've posted a couple of errors I'm getting building Autopsy with NetBeans here: http://forum.sleuthkit.org/viewtopic.php?f=8&t=2795 I'm hoping these two can fix cleanly, but I fully expect they'll give birth to five more each. That's how it usually goes for me. Thanks in advance! -- Hoyt ----------------- There are 11 kinds of people - those who think binary jokes are funny, those who don't, ...and those who don't know binary. |
From: Richard C. <rco...@ba...> - 2016-07-28 14:30:56
|
Autopsy 4.1.0 was built using SleuthKit 4.3.0. If you are building from source, you can check out the sleuthkit-4.3.0 tag if you want to see exactly what TSK code is used by Autopsy 4.1.0, or you can look at the code for that tag in the sleuthkit/sleuthkit repository on GitHub.

SleuthKit 4.3.0 incorporates the libewf code checked into the sleuthkit/libewf_64bit repository on GitHub. This code is not current with the code checked into the libyal/libewf repository on GitHub. When we last contacted libewf author Joachim Metz, he advised us that the available tags in that repository were still experimental, so we have not updated sleuthkit/libewf_64bit yet.

On Thu, Jul 28, 2016 at 9:04 AM, Hoyt Harness <hoy...@gm...> wrote:
> Last appeal... where can I find release notes for Autopsy 4.1.0?
> Specifically, I'm looking for version numbers of the various subcomponents,
> especially TSK and libewf. If that's listed in the source code or
> elsewhere, please point me there.
>
> Also, if this listserv isn't the most appropriate place to ask this
> question, is there one more specific to Autopsy proper?
>
> Thanks in advance!
>
> Hoyt
|
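For reference, a sketch of pulling exactly that code from the repositories Richard names (standard git commands; the tag and repository names come from his message):

# TSK code used by Autopsy 4.1.0
git clone https://github.com/sleuthkit/sleuthkit.git
cd sleuthkit
git checkout sleuthkit-4.3.0

# libewf snapshot that SleuthKit 4.3.0 incorporates
git clone https://github.com/sleuthkit/libewf_64bit.git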
From: Hoyt H. <hoy...@gm...> - 2016-07-28 14:29:56
|
Exactly what I needed... thank you, Richard! On Thu, Jul 28, 2016 at 9:02 AM, Richard Cordovano <rco...@ba... > wrote: > Autopsy 4.1.0 was built using SleuthKit 4.3.0. If you are building from > source, you can checkout the sleuthkit-4.3.0 tag if you want to see exactly > what TSK code is used by Autopsy 4.1.0, or you can look at the code for > that tag in the sleuthkit/sleuthkit repository on github. > > SleuthKit 4.3.0 incorporates the libewf code checked into the > sleuthkit/libewf_64bit repository on github. This code is not current with > the code checked into the the libyal/libewf repository on github. When we > last contacted libewf author Joachim Metz, he advised us that the available > tags in that repository were still experimental, so we have not updated > sleuthkit/libewf_64bit yet. > > On Thu, Jul 28, 2016 at 9:04 AM, Hoyt Harness <hoy...@gm...> > wrote: > >> Last appeal... where can I find release notes for Autopsy 4.1.0? >> Specifically, I'm looking for version numbers of the various subcomponents, >> especially TSK and libewf. If that's listed in the source code or >> elsewhere, please point me there. >> >> Also, if this listserv isn't the most appropriate place to ask this >> question, is there one more specific to Autopsy proper? >> >> Thanks in advance! >> >> Hoyt >> >> >> ------------------------------------------------------------------------------ >> >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> http://www.sleuthkit.org >> >> > -- Hoyt ----------------- There are 11 kinds of people - those who think binary jokes are funny, those who don't, ...and those who don't know binary. |
From: Hoyt H. <hoy...@gm...> - 2016-07-28 13:04:22
|
Last appeal... where can I find release notes for Autopsy 4.1.0? Specifically, I'm looking for version numbers of the various subcomponents, especially TSK and libewf. If that's listed in the source code or elsewhere, please point me there. Also, if this listserv isn't the most appropriate place to ask this question, is there one more specific to Autopsy proper? Thanks in advance! Hoyt |
From: Luís F. N. <lfc...@gm...> - 2016-07-26 17:59:20
|
Hi, I provided a test ISO that reproduces the (probably the same) problem at https://github.com/sleuthkit/sleuthkit/issues/393 and it seems like a fix was proposed. Any chance to apply it in the near future? Regards, Luis |
From: Brian C. <ca...@sl...> - 2016-07-26 14:49:00
|
This message occurs when TSK can't decompress an NTFS compressed file (NTFS can store files compressed). In this case, the clue is "Status: Deleted", which means that the sector that the file used to occupy has been re-used by another file. TSK is trying to interpret that new data as though it were the compressed data and is having problems. You should expect some errors every now and then with deleted files.

> On Jul 25, 2016, at 3:46 AM, expert66 <exp...@ya...> wrote:
>
> Hi, I am using icat in my script and on some image files I get errors that
> look like: "Error extracting file from image (ntfs_uncompress_compunit:
> Block length longer than buffer length: 27056) (208010 - type: 128 id: 4
> Status: Deleted)"
> Can you help me: what does it mean and how can I fix it? Thanks.
|
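If the script needs to keep running over deleted entries like this one, a simple approach is to check the entry's status with istat before extracting, and to send icat's error output to a log so failed decompressions are recorded without stopping the run. A sketch only: the partition offset is a placeholder, while the metadata address and attribute type/id come from the error message above.

# Confirm the entry is deleted and inspect its $DATA attribute.
istat -o 2048 image.dd 208010 | head -n 25

# Extract the specific attribute (inode 208010, type 128, id 4); log errors
# from deleted NTFS-compressed entries instead of letting them kill the script.
icat -o 2048 image.dd 208010-128-4 > 208010.bin 2>> icat-errors.log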
From: expert66 <exp...@ya...> - 2016-07-25 07:47:01
|
Hi, I am using icat in my script and on some image files I get errors that look like: "Error extracting file from image (ntfs_uncompress_compunit: Block length longer than buffer length: 27056) (208010 - type: 128 id: 4 Status: Deleted)"

Can you help me: what does it mean and how can I fix it? Thanks.
|