sleuthkit-developers Mailing List for The Sleuth Kit (Page 2)
Brought to you by: carrier
Messages by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2003 |     |     |     |     |     |     |     | 10  | 2   |     | 1   |     |
| 2004 | 22  | 39  | 8   | 17  | 10  | 2   | 6   | 4   | 1   | 3   |     |     |
| 2005 | 2   | 6   | 2   | 2   | 13  | 2   |     |     | 5   |     | 2   |     |
| 2006 |     |     | 1   |     | 2   | 9   | 4   | 2   |     | 1   | 9   | 4   |
| 2007 | 1   | 2   |     | 3   |     |     | 6   |     | 4   |     |     | 2   |
| 2008 | 4   |     |     | 1   |     | 9   | 14  |     | 5   | 10  | 4   | 7   |
| 2009 | 7   | 10  | 10  | 19  | 16  | 3   | 9   | 5   | 5   | 16  | 35  | 30  |
| 2010 | 4   | 24  | 25  | 31  | 11  | 9   | 11  | 31  | 11  | 10  | 15  | 3   |
| 2011 | 8   | 17  | 14  | 2   | 4   | 4   | 3   | 7   | 18  | 8   | 16  | 1   |
| 2012 | 9   | 2   | 3   | 13  | 10  | 7   | 1   | 5   |     | 3   | 19  | 3   |
| 2013 | 16  | 3   | 2   | 4   |     | 3   | 2   | 17  | 6   | 1   |     | 4   |
| 2014 | 2   |     | 3   | 7   | 6   | 1   | 18  |     | 3   | 1   | 26  | 7   |
| 2015 | 5   | 1   | 2   |     | 1   | 1   | 5   | 7   | 4   | 1   | 1   |     |
| 2016 | 3   |     | 1   |     | 1   | 13  | 23  | 2   | 11  |     | 1   |     |
| 2017 | 4   |     |     | 2   |     |     |     |     |     |     | 2   |     |
| 2018 |     |     | 2   |     | 1   | 3   |     |     | 2   |     | 2   |     |
| 2019 |     |     |     |     |     |     |     | 2   |     |     |     |     |
| 2020 | 4   |     |     |     |     | 3   | 5   | 1   |     |     |     |     |
| 2021 |     |     |     |     |     |     |     |     |     |     | 1   |     |
| 2024 |     | 1   |     |     |     |     |     |     |     |     |     | 1   |
From: Brian C. <ca...@sl...> - 2018-03-19 14:44:40
|
We have tried to maintain the lowest possible Java level to enable the most widespread usage. Currently, it is set to 1.6, but we would like to move to the more modern 1.8. If we change to 1.8, is this going to break anyone's Java projects that use the JAR? brian |
From: Christopher W. <ch...@cw...> - 2018-03-17 13:25:04
|
Hello, I'm currently creating a Report Module for Autopsy, which is nearly finished now. However, for some reason I can't get the progressPanel to update a message which lets the user know what is happening, when it is happening. I have used the exact same method as shown in the add hashes module: http://www.sleuthkit.org/autopsy/docs/api-docs/4.5.0/_add_tagged_hashes_to_hash_db_8java_source.html progressPanel.updateStatusLabel <http://www.sleuthkit.org/autopsy/docs/api-docs/4.5.0/classorg_1_1sleuthkit_1_1autopsy_1_1report_1_1_report_progress_panel.html#a2162ed8086014100811ef5b130fa13f8> ("Adding hashes..."); However - their writing shows up on the progress panel page, whereas mine does not. These are the sum of my progressPanel statements in my generateReport method: progressPanel.setIndeterminate(false); progressPanel.start(); progressPanel.updateStatusLabel("Adding files..."); if (progressPanel.getStatus() == ReportProgressPanel.ReportStatus.CANCELED) { break; } progressPanel.setMaximumProgress(tags.size()); progressPanel.updateStatusLabel("Adding \"" + tagName.getDisplayName() + "\" files to " + configPanel.getSelectedDocumentName() + "..."); progressPanel.updateStatusLabel("Adding " + tag.getContent().getName() + " from \"" + tagName.getDisplayName() + "\" to " + configPanel.getSelectedDocumentName() + "..."); // Increment the progressPanel every time a file is processed progressPanel.increment(); progressPanel.complete(ReportProgressPanel.ReportStatus.COMPLETE); However, not a single updateStatusLabel is showing when I generate a report. Note that the bar itself is progressing as it should; everything compiles, runs without errors, and increments itself to completion after every file is processed. I have imported ReportProgressPanel from org.sleuthkit.autopsy.report.ReportProgressPanel - exactly the same as the add tagged hashes to hash db report module. I have also tried compiling the module itself into an NBM, and then installing it through Autopsy - however still no updated status labels. Could I get your help? Cheers |
From: Brian C. <ca...@sl...> - 2017-11-10 23:16:25
|
If you use one of the DATETIME attributes, the date should be stored in epoch (number of seconds since jan 1, 1970 UTC). Are you converting the format that Skype stores in to an integer that represents it as epoch? On Thu, Nov 9, 2017 at 6:07 PM, Chandapiwa Monica Mpuchane-Molefha < cmp...@un...> wrote: > I am new to python and Autopsy Sleuthkit, and i am having a proplem with > converting a timestamp to local time such that it would work in Autopsy. Im > developing an extension module using python for a .db source file. > > I have attached the my .py file and word document with the screenshot > showing my output result on Autopsy. I have included my sample database as > well. > > > The problem is on the Date Time and Start Date Time. The TSK_COMMENT AND > TSK_DECRIPTION field show the right info, but since its data type does not > match with the datetime, I get an error message if i use it to set my > TSK_DATETIME. > > Your help will be really appreciated, > > > Kind Regards > > Chandapiwa Monica Mpuchane-Molefha > Graduate Student > University of Nebraska at Omaha > MS in Management Information Systems > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > sleuthkit-developers mailing list > sle...@li... > https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers > > |
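Brian's point is that the DATETIME attributes expect plain epoch seconds in UTC. The module in this thread is written in Python, but the conversion is the same idea in any language. Below is a minimal C sketch; the "YYYY-MM-DD HH:MM:SS" input format is an assumption (adapt it to whatever the source database actually stores), and strptime()/timegm() are POSIX/glibc functions rather than standard C.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

/* Convert a "YYYY-MM-DD HH:MM:SS" UTC timestamp string to epoch seconds,
 * the representation the TSK DATETIME attributes expect.
 * Assumption: the source stores this textual format; adjust as needed.
 * strptime() is POSIX and timegm() is a glibc/BSD extension. */
long long to_epoch_utc(const char *text)
{
    struct tm tm = {0};
    if (strptime(text, "%Y-%m-%d %H:%M:%S", &tm) == NULL)
        return -1;                  /* could not parse the timestamp */
    return (long long)timegm(&tm);  /* interpret the broken-down time as UTC */
}

int main(void)
{
    /* 2017-11-09 18:07:00 UTC -> 1510250820 */
    printf("%lld\n", to_epoch_utc("2017-11-09 18:07:00"));
    return 0;
}
```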
From: Chandapiwa M. Mpuchane-M. <cmp...@un...> - 2017-11-09 23:49:09
|
I am new to Python and Autopsy/Sleuthkit, and I am having a problem with converting a timestamp to local time such that it would work in Autopsy. I'm developing an extension module using Python for a .db source file. I have attached my .py file and a Word document with a screenshot showing my output result in Autopsy. I have included my sample database as well. The problem is with the Date Time and Start Date Time. The TSK_COMMENT and TSK_DESCRIPTION fields show the right info, but since their data type does not match the datetime, I get an error message if I use it to set my TSK_DATETIME. Your help will be really appreciated. Kind Regards, Chandapiwa Monica Mpuchane-Molefha Graduate Student University of Nebraska at Omaha MS in Management Information Systems |
From: Luís F. N. <lfc...@gm...> - 2017-04-28 20:03:08
|
Opened https://github.com/sleuthkit/sleuthkit/issues/821 I will provide any other info needed to help fix the issue, now that I have at hand an image to reproduce the problem. Thanks, Luis 2017-04-28 16:46 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: > Hi folks, > > We are still having this issue with tsk-4.4. I have received one report > yesterday and one today about the processing hanging with thousands of > orphan files with ~4GB of size, which together result in 140TB of data in > one image! > > Fortunately I have access to the image of the other report. Looking at it, > the orphans together resulted in ~500GB of data, they were recovered from a > FAT32 file system of only ~32GB of size! That 32GB FAT32 FS was recovered > from a 92,5 KB volume/partition! This 92,5KB partition recognized by > TSK-4.4 is shown by FTKImager as an unpartitioned space, but I think this > is another issue. > > I have trimmed the image down from the original 320GB to 10GB (5GB in ewf > format) without the user data. Because of that, the orphan files together > now result in ~45GB (not the original 500GB). I will open a ticket at > github and post the link to the trimmed image there for reproducing the > problem. > > PS: In the past I have received similar reports with thumbdrives. So looks > like the issue is with FAT and not with NTFS like I originally reported > below. Probably those ntfs images had fat32 partitions side by side. This > explains the upper limit of 4GB of size of those orphans. > > Regards, > Luis Nassif > > > 2015-09-07 12:12 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: > >> Sorry for the long delay. I do not have the image with me, I will ask my >> colleague if trimming the image is possible... We worked around the problem >> by filtering out orphans with logical size greater than 10 MB before >> sending them to the processing engine. >> >> Thank you, >> Luis >> >> 2015-08-13 14:13 GMT-03:00 Stefan Petrea <ste...@gm...>: >> >>> Hi Luis, >>> >>> Could the NTFS image you're looking at be trimmed down and provided as >>> sample input to reproduce the problem ? >>> >>> Best Regards, >>> Stefan >>> >>> On Thu, Aug 13, 2015 at 8:05 PM, Luís Filipe Nassif <lfc...@gm... >>> > wrote: >>> >>>> This error have happened again with a colleague's NTFS image, using the >>>> develop branch compiled about 1 month ago. Thousands of huge corrupted >>>> orphans were added by loaddb, which caused our processing application (and >>>> probably Autopsy too) to process indefinitely the evidence. >>>> >>>> Any help will be appreciated. >>>> >>>> Regards, >>>> Luis Nassif >>>> >>>> >>>> 2014-09-30 21:00 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >>>> >>>>> This problem still happens with 4.2.0 branch. If I can help with some >>>>> more information, please let me know. >>>>> >>>>> Thanks >>>>> Luis >>>>> >>>>> 2014-07-24 9:21 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >>>>> >>>>>> Another information: the sum of the millions of file sizes resulted >>>>>> in 1,1 petabyte, while the image has only 250 GB. >>>>>> >>>>>> >>>>>> 2014-07-23 22:21 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >>>>>> >>>>>>> We tested loaddb of both the released 4.1.3 version and the develop >>>>>>> branch of sleuthkit on a NTFS image of a hard disk with a lot of bad >>>>>>> blocks, many of them at the beginning of the disk. >>>>>>> >>>>>>> The 4.1.3 version found ~400.000 allocated files more ~100.000 >>>>>>> orphan files, about the same found by other forensic tools. 
The develop >>>>>>> branch found the same ~400.000 allocated files more ~2.500.000 orphan >>>>>>> files! Most of these millions of orphans have corrupted names or the name >>>>>>> OrphanFile-xxxxxxx and have lengths ranging from 0 to 4.294.967.296 bytes. >>>>>>> We think the recent changes to NTFS code are causing this large number of >>>>>>> corrupted orphans to be added to the case. Maybe it should be investigated >>>>>>> before the final 4.2 release. >>>>>>> >>>>>>> Luis >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>>> ------------------------------------------------------------ >>>> ------------------ >>>> >>>> _______________________________________________ >>>> sleuthkit-developers mailing list >>>> sle...@li... >>>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers >>>> >>>> >>> >> > |
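For readers hitting the same hang, the workaround Luís mentions (ignoring orphans above a size threshold) can be applied while walking the file system. The sketch below is illustrative only, not the code from his processing engine; the 10 MB threshold and the skip logic are assumptions based on the description above.

```c
/* Illustrative sketch (not the actual workaround code from this thread):
 * skip unallocated/orphan entries whose reported size exceeds a threshold
 * while walking a file system with tsk_fs_dir_walk(). */
#include <tsk/libtsk.h>

#define MAX_ORPHAN_SIZE (10LL * 1024 * 1024)   /* 10 MB, as in the thread */

static TSK_WALK_RET_ENUM
filter_cb(TSK_FS_FILE *fs_file, const char *path, void *ptr)
{
    (void)path; (void)ptr;
    if (fs_file->name && (fs_file->name->flags & TSK_FS_NAME_FLAG_UNALLOC)
        && fs_file->meta && fs_file->meta->size > MAX_ORPHAN_SIZE) {
        return TSK_WALK_CONT;   /* note the entry but do not process it */
    }
    /* ... hand the file to the processing engine here ... */
    return TSK_WALK_CONT;
}

int walk_filtered(TSK_FS_INFO *fs)
{
    /* TSK_FS_DIR_WALK_FLAG_NOORPHAN would skip orphan recovery entirely,
     * if dropping orphans altogether is acceptable. */
    return tsk_fs_dir_walk(fs, fs->root_inum,
        (TSK_FS_DIR_WALK_FLAG_ENUM)(TSK_FS_DIR_WALK_FLAG_RECURSE |
                                    TSK_FS_DIR_WALK_FLAG_ALLOC |
                                    TSK_FS_DIR_WALK_FLAG_UNALLOC),
        filter_cb, NULL);
}
```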
From: Luís F. N. <lfc...@gm...> - 2017-04-28 19:47:07
|
Hi folks, We are still having this issue with tsk-4.4. I have received one report yesterday and one today about the processing hanging with thousands of orphan files with ~4GB of size, which together result in 140TB of data in one image! Fortunately I have access to the image of the other report. Looking at it, the orphans together resulted in ~500GB of data, they were recovered from a FAT32 file system of only ~32GB of size! That 32GB FAT32 FS was recovered from a 92,5 KB volume/partition! This 92,5KB partition recognized by TSK-4.4 is shown by FTKImager as an unpartitioned space, but I think this is another issue. I have trimmed the image down from the original 320GB to 10GB (5GB in ewf format) without the user data. Because of that, the orphan files together now result in ~45GB (not the original 500GB). I will open a ticket at github and post the link to the trimmed image there for reproducing the problem. PS: In the past I have received similar reports with thumbdrives. So looks like the issue is with FAT and not with NTFS like I originally reported below. Probably those ntfs images had fat32 partitions side by side. This explains the upper limit of 4GB of size of those orphans. Regards, Luis Nassif 2015-09-07 12:12 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: > Sorry for the long delay. I do not have the image with me, I will ask my > colleague if trimming the image is possible... We worked around the problem > by filtering out orphans with logical size greater than 10 MB before > sending them to the processing engine. > > Thank you, > Luis > > 2015-08-13 14:13 GMT-03:00 Stefan Petrea <ste...@gm...>: > >> Hi Luis, >> >> Could the NTFS image you're looking at be trimmed down and provided as >> sample input to reproduce the problem ? >> >> Best Regards, >> Stefan >> >> On Thu, Aug 13, 2015 at 8:05 PM, Luís Filipe Nassif <lfc...@gm...> >> wrote: >> >>> This error have happened again with a colleague's NTFS image, using the >>> develop branch compiled about 1 month ago. Thousands of huge corrupted >>> orphans were added by loaddb, which caused our processing application (and >>> probably Autopsy too) to process indefinitely the evidence. >>> >>> Any help will be appreciated. >>> >>> Regards, >>> Luis Nassif >>> >>> >>> 2014-09-30 21:00 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >>> >>>> This problem still happens with 4.2.0 branch. If I can help with some >>>> more information, please let me know. >>>> >>>> Thanks >>>> Luis >>>> >>>> 2014-07-24 9:21 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >>>> >>>>> Another information: the sum of the millions of file sizes resulted in >>>>> 1,1 petabyte, while the image has only 250 GB. >>>>> >>>>> >>>>> 2014-07-23 22:21 GMT-03:00 Luís Filipe Nassif <lfc...@gm...>: >>>>> >>>>>> We tested loaddb of both the released 4.1.3 version and the develop >>>>>> branch of sleuthkit on a NTFS image of a hard disk with a lot of bad >>>>>> blocks, many of them at the beginning of the disk. >>>>>> >>>>>> The 4.1.3 version found ~400.000 allocated files more ~100.000 orphan >>>>>> files, about the same found by other forensic tools. The develop branch >>>>>> found the same ~400.000 allocated files more ~2.500.000 orphan files! Most >>>>>> of these millions of orphans have corrupted names or the name >>>>>> OrphanFile-xxxxxxx and have lengths ranging from 0 to 4.294.967.296 bytes. >>>>>> We think the recent changes to NTFS code are causing this large number of >>>>>> corrupted orphans to be added to the case. 
Maybe it should be investigated >>>>>> before the final 4.2 release. >>>>>> >>>>>> Luis >>>>>> >>>>> >>>>> >>>> >>> >>> ------------------------------------------------------------ >>> ------------------ >>> >>> _______________________________________________ >>> sleuthkit-developers mailing list >>> sle...@li... >>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers >>> >>> >> > |
From: Joachim M. <joa...@gm...> - 2017-01-31 05:46:48
|
We are playing with the idea of organizing an Open Source digital forensics and incident response tooling developer exchange. The idea is to have an exchange for developers to discuss development, coding, maintenance, and testing, and to exchange ideas. With this survey we are gauging interest. If you develop and/or maintain Open Source DFIR projects and are interested, please leave information about your projects at "I develop/maintain Open Source DFIR projects". https://docs.google.com/forms/d/e/1FAIpQLScgKBpr8_X8lzuxoV7HaB16kyr4vVuZdpzjnNJnV1WdFDhFlA/viewform?c=0&w=1 Thanks in advance, Joachim |
From: Jon S. <JSt...@St...> - 2017-01-18 23:19:35
|
That's excellent news. In particular, the NTFS issue looks like it may be ungood. We're continuing to investigate it. Jon > -----Original Message----- > From: Brian Carrier [mailto:ca...@sl...] > Sent: Wednesday, January 18, 2017 5:39 PM > To: Jon Stewart <JSt...@St...> > Cc: Joel Uckelman <juc...@st...>; sleuthkit- > us...@li...; sle...@li... > Subject: Re: [sleuthkit-users] 4.3.1 release > > Hey Jon, > > > I'll make sure we get those in for the next release. We're going to get > into a 2-month release rhythm. > > > thanks, > > brian > > > > On Tue, Jan 17, 2017 at 12:39 PM, Jon Stewart > <JSt...@st... <mailto:JSt...@st...> > > wrote: > > > The issue here: > https://github.com/sleuthkit/sleuthkit/issues/748 > <https://github.com/sleuthkit/sleuthkit/issues/748> > appears to be a regression in NTFS between 4.2 and 4.3. We'll add > more detail to it, but the issue seems due to an early exit from a loop. > We may even be able to share the image, if someone can contact me off- > list. It would be great to get this fixed in 4.4. > > This open PR fixes some memory leaks: > https://github.com/sleuthkit/sleuthkit/pull/684 > <https://github.com/sleuthkit/sleuthkit/pull/684> > > There's also this exFAT issue; we're trying to make up a tiny test > evidence file to determine whether encoding exFAT uses is UTF-16 or UCS- > 2: > https://github.com/sleuthkit/sleuthkit/pull/747 > <https://github.com/sleuthkit/sleuthkit/pull/747> > > > Jon > > > > -----Original Message----- > > From: Brian Carrier [mailto:ca...@sl... > <mailto:ca...@sl...> ] > > Sent: Tuesday, January 17, 2017 10:21 AM > > To: Joel Uckelman <juc...@st... > <mailto:juc...@st...> > > > Cc: sle...@li... <mailto:sleuthkit- > us...@li...> > > Subject: Re: [sleuthkit-users] 4.3.1 release > > > > Hey Joel, > > > > > > We made the tag along with the Autopsy 4.2 release but never made > the > > full release in the craziness before OSDFCon. We're about to > make an > > Autopsy 4.3 release along with a TSK 4.4.0 release. Major > difference > > between 4.4 and 4.3 is the change in Visual Studio compiler. > > > > > > > > > > On Wed, Jan 11, 2017 at 11:01 AM, Joel Uckelman > > <juc...@st... > <mailto:juc...@st...> > <mailto:juc...@st... > <mailto:juc...@st...> > > > > wrote: > > > > > > I see a sleuthkit-4.3.1 tag on GitHub with commits that > make it > > look like a 4.3.1 release, but no mention of an actual release > anywhere. > > Is that going to be an official release at some point? > > > > > > Joel Uckelman | Senior Developer > > > > Stroz Friedberg, an Aon company > > Capital House, 85 King William Street | London, EC4N 7BL > > > > T: +44 20.7061.2239 | M: +44 79.6478.7296 | F: +44 20 7061 > 2201 > > juc...@st... > <mailto:juc...@st...> > > <mailto:juc...@st... > <mailto:juc...@st...> > | www.strozfriedberg.com > <http://www.strozfriedberg.com> > > <http://www.strozfriedberg.com> > > > > This message and/or its attachments may contain information > that is > > confidential and/or protected by privilege from disclosure. If > you have > > reason to believe you are not the intended recipient, please > immediately > > notify the sender by reply e-mail or by telephone, then delete > this > > message (and any attachments), as well as all copies, including > any > > printed copies. Thank you. > > > > Stroz Friedberg Limited is a company registered in England > and > > Wales No: 4150500, Registered office: Capital House, 85 King > William > > Street, London EC4N 7BL, VAT registration No: 783 1701 29. 
> > > > > > > > ----------------------------------------------------------- > -------- > > ----------- > > Developer Access Program for Intel Xeon Phi Processors > > Access to Intel Xeon Phi processor-based developer > platforms. > > With one year of Intel Parallel Studio XE. > > Training and support from Colfax. > > Order your platform today. http://sdm.link/xeonphi > > _______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit- > users <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> > > <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> > > > http://www.sleuthkit.org > > > > > > > |
From: Brian C. <ca...@sl...> - 2017-01-18 22:39:32
|
Hey Jon, I'll make sure we get those in for the next release. We're going to get into a 2-month release rhythm. thanks, brian On Tue, Jan 17, 2017 at 12:39 PM, Jon Stewart <JSt...@st...> wrote: > The issue here: > https://github.com/sleuthkit/sleuthkit/issues/748 > appears to be a regression in NTFS between 4.2 and 4.3. We'll add more > detail to it, but the issue seems due to an early exit from a loop. We may > even be able to share the image, if someone can contact me off-list. It > would be great to get this fixed in 4.4. > > This open PR fixes some memory leaks: > https://github.com/sleuthkit/sleuthkit/pull/684 > > There's also this exFAT issue; we're trying to make up a tiny test > evidence file to determine whether encoding exFAT uses is UTF-16 or UCS-2: > https://github.com/sleuthkit/sleuthkit/pull/747 > > > Jon > > > > -----Original Message----- > > From: Brian Carrier [mailto:ca...@sl...] > > Sent: Tuesday, January 17, 2017 10:21 AM > > To: Joel Uckelman <juc...@st...> > > Cc: sle...@li... > > Subject: Re: [sleuthkit-users] 4.3.1 release > > > > Hey Joel, > > > > > > We made the tag along with the Autopsy 4.2 release but never made the > > full release in the craziness before OSDFCon. We're about to make an > > Autopsy 4.3 release along with a TSK 4.4.0 release. Major difference > > between 4.4 and 4.3 is the change in Visual Studio compiler. > > > > > > > > > > On Wed, Jan 11, 2017 at 11:01 AM, Joel Uckelman > > <juc...@st... <mailto:juc...@st...> > > > wrote: > > > > > > I see a sleuthkit-4.3.1 tag on GitHub with commits that make it > > look like a 4.3.1 release, but no mention of an actual release anywhere. > > Is that going to be an official release at some point? > > > > > > Joel Uckelman | Senior Developer > > > > Stroz Friedberg, an Aon company > > Capital House, 85 King William Street | London, EC4N 7BL > > > > T: +44 20.7061.2239 | M: +44 79.6478.7296 | F: +44 20 7061 2201 > > juc...@st... > > <mailto:juc...@st...> | www.strozfriedberg.com > > <http://www.strozfriedberg.com> > > > > This message and/or its attachments may contain information that is > > confidential and/or protected by privilege from disclosure. If you have > > reason to believe you are not the intended recipient, please immediately > > notify the sender by reply e-mail or by telephone, then delete this > > message (and any attachments), as well as all copies, including any > > printed copies. Thank you. > > > > Stroz Friedberg Limited is a company registered in England and > > Wales No: 4150500, Registered office: Capital House, 85 King William > > Street, London EC4N 7BL, VAT registration No: 783 1701 29. > > > > > > > > ------------------------------------------------------------ > ------- > > ----------- > > Developer Access Program for Intel Xeon Phi Processors > > Access to Intel Xeon Phi processor-based developer platforms. > > With one year of Intel Parallel Studio XE. > > Training and support from Colfax. > > Order your platform today. http://sdm.link/xeonphi > > _______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> > > http://www.sleuthkit.org > > > > > > |
From: Jon S. <JSt...@St...> - 2017-01-17 17:59:20
|
The issue here: https://github.com/sleuthkit/sleuthkit/issues/748 appears to be a regression in NTFS between 4.2 and 4.3. We'll add more detail to it, but the issue seems due to an early exit from a loop. We may even be able to share the image, if someone can contact me off-list. It would be great to get this fixed in 4.4. This open PR fixes some memory leaks: https://github.com/sleuthkit/sleuthkit/pull/684 There's also this exFAT issue; we're trying to make up a tiny test evidence file to determine whether encoding exFAT uses is UTF-16 or UCS-2: https://github.com/sleuthkit/sleuthkit/pull/747 Jon > -----Original Message----- > From: Brian Carrier [mailto:ca...@sl...] > Sent: Tuesday, January 17, 2017 10:21 AM > To: Joel Uckelman <juc...@st...> > Cc: sle...@li... > Subject: Re: [sleuthkit-users] 4.3.1 release > > Hey Joel, > > > We made the tag along with the Autopsy 4.2 release but never made the > full release in the craziness before OSDFCon. We're about to make an > Autopsy 4.3 release along with a TSK 4.4.0 release. Major difference > between 4.4 and 4.3 is the change in Visual Studio compiler. > > > > > On Wed, Jan 11, 2017 at 11:01 AM, Joel Uckelman > <juc...@st... <mailto:juc...@st...> > > wrote: > > > I see a sleuthkit-4.3.1 tag on GitHub with commits that make it > look like a 4.3.1 release, but no mention of an actual release anywhere. > Is that going to be an official release at some point? > > > Joel Uckelman | Senior Developer > > Stroz Friedberg, an Aon company > Capital House, 85 King William Street | London, EC4N 7BL > > T: +44 20.7061.2239 | M: +44 79.6478.7296 | F: +44 20 7061 2201 > juc...@st... > <mailto:juc...@st...> | www.strozfriedberg.com > <http://www.strozfriedberg.com> > > This message and/or its attachments may contain information that is > confidential and/or protected by privilege from disclosure. If you have > reason to believe you are not the intended recipient, please immediately > notify the sender by reply e-mail or by telephone, then delete this > message (and any attachments), as well as all copies, including any > printed copies. Thank you. > > Stroz Friedberg Limited is a company registered in England and > Wales No: 4150500, Registered office: Capital House, 85 King William > Street, London EC4N 7BL, VAT registration No: 783 1701 29. > > > > ------------------------------------------------------------------- > ----------- > Developer Access Program for Intel Xeon Phi Processors > Access to Intel Xeon Phi processor-based developer platforms. > With one year of Intel Parallel Studio XE. > Training and support from Colfax. > Order your platform today. http://sdm.link/xeonphi > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > <https://lists.sourceforge.net/lists/listinfo/sleuthkit-users> > http://www.sleuthkit.org > > |
From: Makoto S. <sh...@st...> - 2016-11-03 18:29:11
|
The Windows version of blkls (4.3.0) has a problem when using "-s", the print-slack-space-only option. As far as I have examined, it seems that blkls converts every "\x0a" (LF) in the data to "\x0d\x0a" (CR/LF) when extracting slack space. I suppose _setmode(_fileno(stdout), _O_BINARY) is missing before fwrite() in dls_lib.c or somewhere else. Hope this helps, Makoto Shiotsuki |
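For context, a minimal sketch of the fix Makoto suggests: on Windows the C runtime opens stdout in text mode and rewrites 0x0a as 0x0d 0x0a, so raw output has to be switched to binary mode before fwrite(). Where exactly the call belongs inside TSK (dls_lib.c or elsewhere) is for the maintainers to decide; this standalone program just demonstrates the behavior.

```c
/* On Windows, stdout defaults to text mode, so the CRT rewrites "\n" (0x0a)
 * as "\r\n" (0x0d 0x0a) on output. Putting the stream into binary mode
 * before writing raw slack data avoids the corruption. */
#include <stdio.h>
#ifdef _WIN32
#include <io.h>
#include <fcntl.h>
#endif

int main(void)
{
#ifdef _WIN32
    _setmode(_fileno(stdout), _O_BINARY);   /* stop LF -> CR/LF translation */
#endif
    const unsigned char slack[] = { 'a', '\n', 'b' };   /* sample raw bytes */
    fwrite(slack, 1, sizeof(slack), stdout);            /* written verbatim */
    return 0;
}
```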
From: Brian C. <ca...@sl...> - 2016-09-27 16:27:16
|
Reminder that submissions for the Autopsy Module Competition are due Oct 10. Modules can be in either Python or Java and the attendees of OSDFCon will vote on the winner. Great news that there are over 12 submissions this year, which means the prizes are doubled! Details on writing modules and how to submit can be found here: http://www.osdfcon.org/2016-event/2016-module-development-contest/ thanks, brian |
From: Jon S. <jo...@li...> - 2016-09-17 15:02:27
|
The system partition filesystem types are unknown and TSK can't process them (probably not a big loss as they're just boot partitions). I don't know about the behavior of tsk_recover and tsk_loaddb, but there's no inherent reason why the HFS+ partitions can't be accessed and processed. Jon > On Sep 17, 2016, at 9:35 AM, Edward Diener <eld...@tr...> wrote: > > The code in tsk/vs/mac.c in the mac_load_table function marks every > partition in an Apple Partition Map as an allocated partition unless the > entry for the status partition field of the partition map entry is 0. > Since the description for the Apple Partition Map at Wikipedia ( > https://en.wikipedia.org/wiki/Apple_Partition_Map#Partition_status ) > basically implies that this field is never 0, all partitions in the > Apple Partition Map are marked as allocated. > > Then when walking through the entries an attempt to find files in these > volumes, failure occurs unless the partition map entry is an HFS/HFS+ > partition. For the attempt to find files on the Macintosh disk failure > always occurs because of this reason. Therefore tsk_loaddb and > tsk_recover always fail on images of Macintosh disks. > > Is this a known problem of SleuthKit ? Is essentially trying to find > files on a Macintosh disk broken because of this failure, so that > finding files for the Macintosh only works when the image is a single > logical HFS/HFS+ partition instead of an entire Macintosh disk ? > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-developers mailing list > sle...@li... > https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers |
From: Edward D. <eld...@tr...> - 2016-09-17 13:35:30
|
The code in tsk/vs/mac.c, in the mac_load_table function, marks every partition in an Apple Partition Map as an allocated partition unless the status field of the partition map entry is 0. Since the description of the Apple Partition Map at Wikipedia ( https://en.wikipedia.org/wiki/Apple_Partition_Map#Partition_status ) basically implies that this field is never 0, all partitions in the Apple Partition Map are marked as allocated. Then, when walking through the entries in an attempt to find files in these volumes, failure occurs unless the partition map entry is an HFS/HFS+ partition. The attempt to find files on the Macintosh disk therefore always fails for this reason, so tsk_loaddb and tsk_recover always fail on images of Macintosh disks. Is this a known problem in SleuthKit? Is trying to find files on a Macintosh disk essentially broken because of this failure, so that finding files for the Macintosh only works when the image is a single logical HFS/HFS+ partition instead of an entire Macintosh disk? |
From: noxdafox <nox...@gm...> - 2016-09-14 18:34:25
|
Hello, I've been working on the feature for a while and I'd say it's ready for review. https://github.com/sleuthkit/sleuthkit/pull/689 The PR comments explain the feature and the reason behind the implementation choices. I'd postpone the support to journal records version 3 and 4: * Hard to find examples out there, it's an early-stage feature. * They are not enabled by default. https://msdn.microsoft.com/en-us/library/windows/desktop/dn302075%28v=vs.85%29.aspx * In case of v3 and v4 records, the logic will skip them warning the user. On 09/07/16 06:47, Brian Carrier wrote: > Hello, > > Sorry for the very late replies. I certainly think it would be of interest to the TSK users. My 2 cents would be to take a look at the existing journal infrastructure in TSK that was not designed with the knowledge of NTFS structures (so it maybe too limited). But, it would be good to try to enhance that versus adding in a parallel journal infrastructure. Examples of it can be found in ext2fs_journal.c and hfs_journal.c and it provides callbacks for each entry. > > thanks, > brian > >> On Jun 28, 2016, at 12:49 PM, noxdafox <nox...@gm...> wrote: >> >> Greetings, >> >> recently I've been playing around with NTFS Update Sequence Number >> Journals which I find a fairly good instrument for extracting timelines >> from NTFS drives. >> >> I have been writing few parsers for it, the last one been written in C. >> >> I was thinking about porting it to sleuthkit. Do you think it would be >> beneficial for the library? >> >> The idea would be to expose a visitor API (in similar fashion as for >> tsk_fs_dir_walk) and then a command line tool built on top of it. >> >> More info about UsnJrnl files: >> https://msdn.microsoft.com/en-us/library/windows/desktop/aa365722%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396# >> >> >> ------------------------------------------------------------------------------ >> Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San >> Francisco, CA to explore cutting-edge tech and listen to tech luminaries >> present their vision of the future. This family event has something for >> everyone, including kids. Get more information and register today. >> http://sdm.link/attshape >> _______________________________________________ >> sleuthkit-developers mailing list >> sle...@li... >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers |
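The names below are purely hypothetical and are not the API of PR #689; they only sketch the kind of tsk_fs_dir_walk-style visitor interface the message describes, where a callback is invoked once per USN journal record.

```c
/* Hypothetical sketch only: illustrates a visitor-style walk over USN
 * journal records, with one callback invocation per record. The real
 * interface is in the pull request linked above. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t file_ref;     /* MFT reference of the affected file */
    uint64_t parent_ref;   /* MFT reference of its parent directory */
    uint64_t usn;          /* update sequence number */
    uint32_t reason;       /* USN_REASON_* bit mask */
    const char *name;      /* file name carried in the record */
} usn_record_t;

/* Return 0 to continue the walk, non-zero to stop early. */
typedef int (*usn_walk_cb)(const usn_record_t *rec, void *ctx);

/* A real implementation would parse $Extend/$UsnJrnl:$J here; this stub
 * just feeds one fabricated record to the callback to show the flow. */
static int usn_walk(usn_walk_cb cb, void *ctx)
{
    usn_record_t rec = { 1234, 5, 4096, 0x00000100 /* file create */, "example.txt" };
    return cb(&rec, ctx);
}

static int print_record(const usn_record_t *rec, void *ctx)
{
    (void)ctx;
    printf("usn=%llu name=%s reason=0x%08x\n",
           (unsigned long long)rec->usn, rec->name, rec->reason);
    return 0;
}

int main(void)
{
    return usn_walk(print_record, NULL);
}
```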
From: Edward D. <eld...@tr...> - 2016-09-09 20:40:22
|
On 9/9/2016 1:54 PM, Brian Carrier wrote: > The docs are correct and that is the way that Autopsy also populates that table when it adds entries for carved files. If you are seeing something different from the C++ code when it adds the layout for files when it creates the DB, then it could be a bug in the C++ code. Thanks for finding the issue. Can you supply a pull request? I did not look at the tsk_loaddb internal code to see what it is doing. I only know that when I treat the 'byte_start' field of 'tsk_file_layout' table as an offset from the start of the partition and not from the start of the image I am able to successfully access the data content of a file. Whereas treating the 'byte_start' field of 'tsk_file_layout' table as an offset from the start of the image gives me garbage when accessing the content of a file. My results correspond to the earlier message I cited, which was also corroborated at the time by someone else. I do not work with Autopsy but did use tsk_loaddb to populate the table. The tsk_loaddb.exe did work in version 4.2.0 to create a database from an ewf image but trying tsk_loaddb.exe in version 4.3.0 to create a database from the exact same ewf image crashes with an exception. I can therefore assert that tsk_loaddb.exe in 4.2.0 works as I explain above, but have no idea what tsk_loaddb.exe 4.3.0 is doing because of the crash. Eddie Diener > On Sep 7, 2016, at 9:33 PM, Edward Diener <eld...@tr...> wrote: > In the documentation, at http://wiki.sleuthkit.org/index.php?title=SQLite_Database_v3_Schema, for the V3 schema of the sqlite database it says in the description of the tsk_file_layout table: "byte_start - Byte offset of fragment relative to the start of the image file" This is not the case. The 'byte_start' offset is relative to the start of the file system in which the file resides, not to the image itself. To get the actual byte_start relative to the start of the image file you need to add this value to the tsk_fs_info img_offset value for the appropriate tsk_fs_info row. In a message dated 11/26/2014 at https://sourceforge.net/p/sleuthkit/mailman/message/33084547/ the same correction to the documentation was offered. Since this is some 22 months ago can the documentation be corrected accordingly now ? Eddie Diener |
From: Edward D. <eld...@tr...> - 2016-09-09 19:48:43
|
On 9/9/2016 1:55 PM, Brian Carrier wrote: > The C/C++ code doesn’t have any methods that use the DB info. If you want to open a file mentioned in the DB though, you can use tsk_fs_file_open_meta. If you want to open a given block, you can use tsk_fs_block…. to get it. > > The Java side has code that uses that table. Actually the C/C++ code does have the tsk_img_open... functions for opening an image and the tsk_img_read function for reading data from anyplace in the image. When I realized that the tsk_file_layout table's 'byte_start' field is, in 4.2.0, the start from the partition and not from the image itself I was able to use the tsk_file_layout table to directly read a file without having to use the tsk_fs_... functionality. I was completely wrong about the fact that the "tsk_file_layout information for a file contains blocks of data which has both the file data and non-file data filling out the blocks". Instead the information is purely the file's data. Eddie Diener > > > > > >> On Sep 6, 2016, at 7:28 PM, Edward Diener <eld...@tr...> wrote: >> >> How do I read the contents of a file from an image using the database's >> tsk_file_layout information for a file ? >> >> Is there a way to use some tsk file functionality to read the file's >> data using the tsk_file_layout information ? >> >> I have found out that the tsk_file_layout information for a file >> contains blocks of data which has both the file data and non-file data >> filling out the blocks, so a naive attempt at just reading the >> information directly from the image using tsk_image_open and >> tsk_image_read does not work. >> >> Any help with this would be appreciated. |
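A minimal sketch of the direct-read approach described here, assuming a TSK 4.2.0 database where tsk_file_layout.byte_start is relative to the file system: add the file system's img_offset and read the run with tsk_img_read(). The image name, offsets, and length are placeholders; a fragmented file would need one read per tsk_file_layout row.

```c
/* Sketch (paths and offsets are placeholders): read one contiguous run of a
 * file straight from the image, treating tsk_file_layout.byte_start as an
 * offset within the file system and adding the file system's img_offset. */
#include <stdio.h>
#include <stdlib.h>
#include <tsk/libtsk.h>

int main(void)
{
    TSK_IMG_INFO *img = tsk_img_open_sing(_TSK_T("image.E01"),
                                          TSK_IMG_TYPE_DETECT, 0);
    if (img == NULL) {
        tsk_error_print(stderr);
        return 1;
    }

    TSK_OFF_T fs_img_offset = 1048576;   /* tsk_fs_info.img_offset (example) */
    TSK_OFF_T byte_start    = 32768;     /* tsk_file_layout.byte_start (example) */
    size_t    byte_len      = 4096;      /* tsk_file_layout.byte_len (example) */

    char *buf = malloc(byte_len);
    if (buf == NULL) { tsk_img_close(img); return 1; }

    ssize_t cnt = tsk_img_read(img, fs_img_offset + byte_start, buf, byte_len);
    if (cnt < 0)
        tsk_error_print(stderr);
    else
        fprintf(stderr, "read %zd bytes of file content\n", cnt);

    free(buf);
    tsk_img_close(img);
    return 0;
}
```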
From: Brian C. <ca...@sl...> - 2016-09-09 17:56:05
|
The C/C++ code doesn’t have any methods that use the DB info. If you want to open a file mentioned in the DB though, you can use tsk_fs_file_open_meta. If you want to open a given block, you can use tsk_fs_block…. to get it. The Java side has code that uses that table. > On Sep 6, 2016, at 7:28 PM, Edward Diener <eld...@tr...> wrote: > > How do I read the contents of a file from an image using the database's > tsk_file_layout information for a file ? > > Is there a way to use some tsk file functionality to read the file's > data using the tsk_file_layout information ? > > I have found out that the tsk_file_layout information for a file > contains blocks of data which has both the file data and non-file data > filling out the blocks, so a naive attempt at just reading the > information directly from the image using tsk_image_open and > tsk_image_read does not work. > > Any help with this would be appreciated. > > > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-developers mailing list > sle...@li... > https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers |
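A sketch of the approach Brian describes, with placeholder values: open the file system at its partition offset (tsk_fs_info.img_offset in the database), open the file by its metadata address (tsk_files.meta_addr), and read it with tsk_fs_file_read().

```c
/* Sketch of reading a file's content via the file system layer rather than
 * raw image offsets. The image name, offset, and metadata address are
 * placeholders taken from a case database. */
#include <stdio.h>
#include <tsk/libtsk.h>

int main(void)
{
    TSK_IMG_INFO *img = tsk_img_open_sing(_TSK_T("image.E01"),
                                          TSK_IMG_TYPE_DETECT, 0);
    if (img == NULL) { tsk_error_print(stderr); return 1; }

    /* Byte offset of the file system, e.g. tsk_fs_info.img_offset */
    TSK_FS_INFO *fs = tsk_fs_open_img(img, 1048576, TSK_FS_TYPE_DETECT);
    if (fs == NULL) { tsk_error_print(stderr); tsk_img_close(img); return 1; }

    /* Metadata address of the file, e.g. tsk_files.meta_addr */
    TSK_FS_FILE *file = tsk_fs_file_open_meta(fs, NULL, 5);
    if (file != NULL) {
        char buf[4096];
        ssize_t cnt = tsk_fs_file_read(file, 0, buf, sizeof(buf),
                                       TSK_FS_FILE_READ_FLAG_NONE);
        if (cnt >= 0)
            fprintf(stderr, "read %zd bytes\n", cnt);
        tsk_fs_file_close(file);
    }

    tsk_fs_close(fs);
    tsk_img_close(img);
    return 0;
}
```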
From: Brian C. <ca...@sl...> - 2016-09-09 17:54:20
|
The docs are correct and that is the way that Autopsy also populates that table when it adds entries for carved files. If you are seeing something different from the C++ code when it adds the layout for files when it creates the DB, then it could be a bug in the C++ code. Thanks for finding the issue. Can you supply a pull request? > On Sep 7, 2016, at 9:33 PM, Edward Diener <eld...@tr...> wrote: > > In the documentation, at > http://wiki.sleuthkit.org/index.php?title=SQLite_Database_v3_Schema, for > the V3 schema of the sqlite database it says in the description of the > tsk_file_layout table: > > "byte_start - Byte offset of fragment relative to the start of the image > file" > > This is not the case. The 'byte_start' offset is relative to the start > of the file system in which the file resides, not to the image itself. > To get the actual byte_start relative to the start of the image file you > need to add this value to the tsk_fs_info img_offset value for the > appropriate tsk_fs_info row. > > In a message dated 11/26/2014 at > https://sourceforge.net/p/sleuthkit/mailman/message/33084547/ the same > correction to the documentation was offered. Since this is some 22 > months ago can the documentation be corrected accordingly now ? > > Eddie Diener > > ------------------------------------------------------------------------------ > _______________________________________________ > sleuthkit-developers mailing list > sle...@li... > https://lists.sourceforge.net/lists/listinfo/sleuthkit-developers |
From: Edward D. <eld...@tr...> - 2016-09-08 01:33:36
|
In the documentation, at http://wiki.sleuthkit.org/index.php?title=SQLite_Database_v3_Schema, for the V3 schema of the sqlite database it says in the description of the tsk_file_layout table: "byte_start - Byte offset of fragment relative to the start of the image file" This is not the case. The 'byte_start' offset is relative to the start of the file system in which the file resides, not to the image itself. To get the actual byte_start relative to the start of the image file you need to add this value to the tsk_fs_info img_offset value for the appropriate tsk_fs_info row. In a message dated 11/26/2014 at https://sourceforge.net/p/sleuthkit/mailman/message/33084547/ the same correction to the documentation was offered. Since this is some 22 months ago can the documentation be corrected accordingly now ? Eddie Diener |
From: Edward D. <eld...@tr...> - 2016-09-06 23:28:12
|
How do I read the contents of a file from an image using the database's tsk_file_layout information for a file ? Is there a way to use some tsk file functionality to read the file's data using the tsk_file_layout information ? I have found out that the tsk_file_layout information for a file contains blocks of data which has both the file data and non-file data filling out the blocks, so a naive attempt at just reading the information directly from the image using tsk_image_open and tsk_image_read does not work. Any help with this would be appreciated. |
From: Edward D. <eld...@tr...> - 2016-09-06 15:22:47
|
In the explanation for the has_layout column of the tsk_files table in the SQLite Database v3 Schema at http://wiki.sleuthkit.org/index.php?title=SQLite_Database_v3_Schema I see: "has_layout - True if file has an entry in tsk_file_layout". Yet after creating an SQLite database from an image using version 4.2.0 of TSK, I see NULL values in this column for the tsk_files entries of allocated files, even though all of these files have entries in the tsk_file_layout table. In fact, only unallocated file entries (the vast majority of them) have a value of 1 for this column, and no entries have a value of 0. What is the actual explanation for this column? Or does the value of this column only apply to unallocated file areas? If the latter is the case, the explanation in the V3 schema should be corrected to say so. |
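One way to see how has_layout lines up with the layout table in a particular case database is to query it directly. This is a sketch using the SQLite C API against a tsk_loaddb-produced database; the database path is a placeholder, and it only reports the counts, it does not answer what the column is supposed to mean.

```c
/* Sketch: group tsk_files rows by their has_layout value and by whether
 * they actually have rows in tsk_file_layout. */
#include <stdio.h>
#include <sqlite3.h>

static int print_row(void *ctx, int argc, char **argv, char **col)
{
    (void)ctx;
    for (int i = 0; i < argc; i++)
        printf("%s=%s  ", col[i], argv[i] ? argv[i] : "NULL");
    printf("\n");
    return 0;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("case.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    const char *sql =
        "SELECT f.has_layout, "
        "       (l.obj_id IS NOT NULL) AS in_layout_table, "
        "       COUNT(*) AS n "
        "FROM tsk_files f "
        "LEFT JOIN (SELECT DISTINCT obj_id FROM tsk_file_layout) l "
        "       ON f.obj_id = l.obj_id "
        "GROUP BY f.has_layout, in_layout_table;";
    char *err = NULL;
    if (sqlite3_exec(db, sql, print_row, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "query failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```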
From: Edward D. <eld...@tr...> - 2016-08-28 23:30:54
|
In the description of the tsk_files database table it says, "Contains one row for every file found in the images. Has the basic metadata for the file." The table has an attribute type and an attribute id, but certain files can have multiple attributes. How is this represented in the database? Eddie Diener |
From: Roberto A. <g.r...@gm...> - 2016-08-12 16:57:02
|
Hello, I am developing a file ingest module in Python and I would like to check the format of each file. Using the TSK/Autopsy APIs I found the option of using file.getNameExtension() or file.getMIMEType(), but these don't really satisfy my requirements. Is there any other method (one that checks the magic number)? Or is there any way to use python-magic? Thank you, Roberto |
From: Edward D. <eld...@tr...> - 2016-07-23 10:52:10
|
According to the documentation for the fully automated tools: "These tools integrate the volume and file system functionality. Instead of analyzing only a single file system, these tools take a disk image as input and identify the volumes and process the contents." This implies to me that if I have an ewf file sequence image which encompasses a number of different partitions, each partition having its own filesystem (NTFS, FAT32, ext3, ext4, for example), the fully automated tools should process the ewf file sequence correctly. Yet when I tried using tsk_recover against such an image sequence, it failed completely, with both the 4.2.0 and 4.3.0 releases. When I tried running tsk_recover with the '-o sector offset' parameter pointing at a particular filesystem in the image sequence, it succeeded. So are these fully automated tools supposed to work correctly against a multi-partition image sequence, or are they supposed to work correctly only against a single particular partition in a multi-partition image sequence at a time? Eddie Diener |
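As a side note, the per-partition offsets that the '-o' option needs can be listed programmatically the same way mmls does, by opening the volume system and walking its partitions. A minimal sketch (the image name is a placeholder):

```c
/* Sketch: enumerate the partitions in an image to find the starting sector
 * of each file system, which is the value passed to tsk_recover's -o option. */
#include <stdio.h>
#include <tsk/libtsk.h>

static TSK_WALK_RET_ENUM
part_cb(TSK_VS_INFO *vs, const TSK_VS_PART_INFO *part, void *ptr)
{
    (void)vs; (void)ptr;
    printf("start sector %" PRIuDADDR ", length %" PRIuDADDR ": %s\n",
           part->start, part->len, part->desc);
    return TSK_WALK_CONT;
}

int main(void)
{
    TSK_IMG_INFO *img = tsk_img_open_sing(_TSK_T("image.E01"),
                                          TSK_IMG_TYPE_DETECT, 0);
    if (img == NULL) { tsk_error_print(stderr); return 1; }

    TSK_VS_INFO *vs = tsk_vs_open(img, 0, TSK_VS_TYPE_DETECT);
    if (vs == NULL) { tsk_error_print(stderr); tsk_img_close(img); return 1; }

    tsk_vs_part_walk(vs, 0, vs->part_count - 1,
                     TSK_VS_PART_FLAG_ALL, part_cb, NULL);

    tsk_vs_close(vs);
    tsk_img_close(img);
    return 0;
}
```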