sleuthkit-users Mailing List for The Sleuth Kit (Page 37)
From: Greg F. <gre...@gm...> - 2014-09-17 03:08:00
|
Logarithmic for me. Even light usage may be important and that could be missed on a linear scale. One click to go to linear makes easy to do and not having scale marks on the y axis will send even a novice looking at the options for how to get scale marks. Greg On September 15, 2014 4:47:41 PM EDT, Brian Carrier <ca...@sl...> wrote: >As many of you may know, we've been working on a new timeline viewer >for Autopsy as part of a DHS S&T contract. It's got some really cool >features and I'm looking for some feedback on default settings. One >view has bar graphs to show "how many things occurred in a given time >frame". The primary use case was to answer questions about knowing when >and how often the system was used. There is another view that provides >details. > >My question is if linear or logarithmic scale is better as a default. >In the bar chart, there are differently colored sections for file >system activity, web activity, and "other" activity. There will be >more bars as we add more features. Linear allows you to compare the >size of each bar, but it means that many bars are not visible. >Logarithmic is not as intuitive for people, but it allows you to see >more of the bars. Below is an example. The Linear view doesn't show >any of the blue bars. As a reference on the final bar in the log >scale, the red bar has 53,000 events, the green has 3,500, and the blue >has 54. > > >My vote is to have log scale be the default so that you can see that >there is web activity even though there is far less than file system >times, but I wanted to get feedback before we did that. Votes? > > > > >------------------------------------------------------------------------ > >------------------------------------------------------------------------------ >Want excitement? >Manually upgrade your production database. >When you want reliability, choose Perforce >Perforce version control. Predictably reliable. >http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk > >------------------------------------------------------------------------ > >_______________________________________________ >sleuthkit-users mailing list >https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >http://www.sleuthkit.org -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. |
From: RB <ao...@gm...> - 2014-09-17 02:53:03
|
On Mon, Sep 15, 2014 at 2:47 PM, Brian Carrier <ca...@sl...> wrote: > My question is if linear or logarithmic scale is better as a default. > Mine too, so long as they're switchable. Logarithmic tends to provide a more "natural" vision of data, for lack of a better term. I do a lot of audio work and analysis, and I barely ever use linear scales because they fail to accurately depict the relative "weight" of given signals. This is kind of the same situation. |
From: Brian C. <ca...@sl...> - 2014-09-15 20:47:52
|
As many of you may know, we've been working on a new timeline viewer for Autopsy as part of a DHS S&T contract. It's got some really cool features and I'm looking for some feedback on default settings. One view has bar graphs to show "how many things occurred in a given time frame". The primary use case was to answer questions about knowing when and how often the system was used. There is another view that provides details.

My question is if linear or logarithmic scale is better as a default. In the bar chart, there are differently colored sections for file system activity, web activity, and "other" activity. There will be more bars as we add more features. Linear allows you to compare the size of each bar, but it means that many bars are not visible. Logarithmic is not as intuitive for people, but it allows you to see more of the bars.

Below is an example. The Linear view doesn't show any of the blue bars. As a reference on the final bar in the log scale, the red bar has 53,000 events, the green has 3,500, and the blue has 54.

My vote is to have log scale be the default so that you can see that there is web activity even though there is far less than file system times, but I wanted to get feedback before we did that. Votes? |
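For reference, a minimal sketch of the two scalings being compared, using the event counts quoted in this thread (53,000 / 3,500 / 54). The function names and the 200-pixel maximum bar height are illustrative assumptions, not anything taken from Autopsy's code. Build with -lm.

#include <math.h>
#include <stdio.h>

#define MAX_BAR_PX 200.0   /* illustrative maximum bar height */

/* Linear mapping: height proportional to the raw event count. */
static double bar_linear(double count, double max_count) {
    return MAX_BAR_PX * count / max_count;
}

/* Logarithmic mapping: height proportional to log10 of the count,
 * so small categories remain visible next to very large ones. */
static double bar_log(double count, double max_count) {
    if (count < 1.0)
        return 0.0;
    return MAX_BAR_PX * log10(count) / log10(max_count);
}

int main(void) {
    /* Event counts quoted in the message above, largest to smallest. */
    const double counts[] = { 53000.0, 3500.0, 54.0 };
    for (int i = 0; i < 3; i++)
        printf("%7.0f events -> linear %6.2f px, log %6.2f px\n",
               counts[i], bar_linear(counts[i], counts[0]),
               bar_log(counts[i], counts[0]));
    return 0;
}

With these numbers the 54-event bar gets roughly 0.2 pixels on a linear axis but about 73 pixels on a log axis, which is essentially the trade-off being voted on.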
From: anthony s. <ant...@gm...> - 2014-09-12 17:58:19
|
Good afternoon, In the few cases that I’ve done so far, no information has populated under the E-mail messages tab. In my current case I exported the pst file and parsed it with IEF so I know there is info there. I ran an additional ingest using just the email parser with no change. Any reason why it won’t populate in Autopsy? It shows active in the plugins menu. I’m using 3.1 Thank you AJS |
From: Stuart M. <st...@ap...> - 2014-09-11 18:34:30
|
On 09/09/2014 07:13 PM, Brian Carrier wrote: > The FILLER entries are there for basic record keeping because NTFS makes not guarantees that the runs will be stored in consecutive order. TSK adds the FILLER entries when it gets runs out of order and pops them out as it finds them. > > Is the data you are describing below from the same Ext4 image you mentioned before? > > brian > > Hi Brian, yes it is. Does that shed any light on things? I am still confused as to what fillers actually are and whether a file system is suspect should tsk tools announce that they have found some ;) Stuart |
From: Brian C. <ca...@sl...> - 2014-09-10 16:58:34
|
The full agenda for the 5th Annual Open Source Digital Forensics Conference (OSDFCon) is up. http://www.basistech.com/osdfcon-program/

There are talks on memory forensics, cell phones, triage, desktop tools, and more. It's a great opportunity to learn more about open source tools that can help in your investigations. The conference this year will be held in Herndon, VA.

Schedule:
Nov 4: Autopsy and Plaso Workshops. The OMFW will once again be held at the same location.
Nov 5: OSDFCon
Nov 6-7: Autopsy training

Cost:
- Workshops: $99 each
- Conference:
-- Early bird government: $25
-- Early bird non-government: $195
-- Student: $25

Registration form can be found here: http://www.basistech.com/osdfcon/

Submissions for the Autopsy module writing competition are due by Oct 20.

thanks,
brian |
From: Alessandro F. <at...@gm...> - 2014-09-10 16:54:21
|
Hi Brian don't worry for the delay...I'm very grateful for your answers ;) I've started another time Autopsy, new case, same image, file ingestion. In the log I've find these "non critical" errors: ******************* Errors occured while ingesting image 1. Database Error (TskDbSqlite::findParObjId: Error selecting file id by meta_addr: unknown error (result code 101) ) 2. 3. Database Error (TskDbSqlite::findParObjId: Error selecting file id by meta_addr: unknown error (result code 101) ) 4. ) ...... 550485. Database Error (TskDbSqlite::findParObjId: Error selecting file id by meta_addr: unknown error (result code 101) ) 550486. Cannot determine file system type (Sector offset: 235708600, Partition Type: Recovery HD) 550487. Error reading image file (ewf_image_read - offset: 20480 - len: 65536 - Result too large) (TskAutoDb::addFsInfoUnalloc: error opening fs at offset 20480) 550488. Error reading image file (ewf_image_read - offset: 209736704 - len: 65536 - Result too large) (TskAutoDb::addFsInfoUnalloc: error opening fs at offset 209735680) ******************** The first errors are repeated a lot of times. The the error about the image. I assure you that the image is ok, I manage to mount and browse with ftk (in Windows) and ewfmount (in linux). If you think can be useful, I could send in private a cople of screenshot whit Autopsy and FTK. Thanks in advance for your help Alessandro 2014-09-10 3:39 GMT+02:00 Brian Carrier <ca...@sl...>: > Hi Alessandro, > > Sorry for the delayed response. I had a bit of travel going on. > > Can you add the image to a case again and notice if in the final panel of > the "Add Data Source" wizard if there is a button that says that there were > errors ingesting the image? If so, can you click on the button and send the > messages? > > We should review that panel because there have been several cases where > people don't notice that some errors occurred... > > thanks, > brian > > > > On Sep 2, 2014, at 12:14 PM, Alessandro Farina <at...@gm...> wrote: > > > Yes. > > If I select the partition in the partitions tree, nothing is show in the > detail window. > > > > > > 2014-08-21 4:09 GMT+02:00 Brian Carrier <ca...@sl...>: > > So the image has four partitions, but one of them isn't showing any > files? > > > > > > On Aug 14, 2014, at 5:42 AM, Alessandro Farina <at...@gm...> > wrote: > > > > > Hi > > > I'm analysing an image (EWF) extracted from an IMAC. > > > The disk (image) has 4 partition: 2 HFS+ and 2 NTFS (BOOTCAMP). > > > I'm using Autopsy 3.0.10 on Window 7 SP1. > > > From the partition browser I can't access to one of the HFS+ partition. > > > The image file is ok, infact I can mount and browse all the partition > in > > > linux (via ewfmount) without any problem. The same happens if I access > > > the image via ftk mounter on windows. > > > I think there is some sort of problem with Autopsy and I would like to > > > help whith analysis and debug. > > > I can't send to many info on the contents because is part of an ongoing > > > investigation, but I think I can share info on disk and partition > structure. > > > Any help will be very appreciated. 
> > > > > > Thanks in advance > > > Alessandro > > > > > > > ------------------------------------------------------------------------------ > > > _______________________________________________ > > > sleuthkit-users mailing list > > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > > http://www.sleuthkit.org > > > > > > > ------------------------------------------------------------------------------ > > Slashdot TV. > > Video for Nerds. Stuff that matters. > > http://tv.slashdot.org/_______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > |
From: Jon S. <jo...@li...> - 2014-09-10 16:15:28
|
Cool, thanks for clarifying. Jon On Wed, Sep 10, 2014 at 12:05 PM, Brian Carrier <ca...@sl...> wrote: > If all goes well, you'll never see them. The caller to the API never sees the attribute until it has been fully populated and for good files all of the filler entries will have been pushed out. The only times that you will see them are if: > - The file system is corrupt and you don't have all of the run info. This can occur in NTFS if the run list is stored across multiple MFT entries and some of them have been re-used. > - There is a bug in TSK. > > You won't need to wait. There is nothing to wait for. > > > > On Sep 10, 2014, at 8:46 AM, Jon Stewart <jo...@li...> wrote: > >> Sorry to veer off-topic with this thread (stupid gmail won't let me >> change the subject), but I'm now more confused/concerned by this >> explanation regarding FILLER entries. >> >> 1. Under what circumstances can you get a FILLER ATTR_RUN? >> >> 2. What can you do about it? How does one wait on TSK to go find the >> missing run? >> >> Thanks, >> >> Jon >> >> On Tue, Sep 9, 2014 at 10:13 PM, Brian Carrier <ca...@sl...> wrote: >>> The FILLER entries are there for basic record keeping because NTFS makes not guarantees that the runs will be stored in consecutive order. TSK adds the FILLER entries when it gets runs out of order and pops them out as it finds them. >>> >>> Is the data you are describing below from the same Ext4 image you mentioned before? >>> >>> brian >>> >>> >>> >>> On Sep 5, 2014, at 1:46 PM, Stuart Maclean <st...@ap...> wrote: >>> >>>> Hi all, I'm glad to have provoked some conversation on the merits (or >>>> otherwise!) of md5 sums as useful representations of the state of a file >>>> system. >>>> >>>> Can anyone enlighten me as to the meaning of the 'flags' member in a >>>> TSK_FS_ATTR_RUN? Specifically, what does this comment mean? >>>> >>>> TSK_FS_ATTR_RUN_FLAG_FILLER = 0x01, ///< Entry is a filler for a run >>>> that has not been seen yet in the processing (or has been lost) >>>> >>>> In a fs I am walking and inspecting the runs for, I am seeing run >>>> structs with addr 0 and flags 1. I was under the impression that any >>>> run address of 0 represented a 'missing run' i.e. that this part of the >>>> file content is N zeros, where N = run.length * fs.blocksize. I presume >>>> that would be the case were the run flags value 2: >>>> >>>> TSK_FS_ATTR_RUN_FLAG_SPARSE = 0x02 ///< Entry is a sparse run where >>>> all data in the run is zeros >>>> >>>> If I use istat, I can see inodes which have certain 'Direct Blocks' of >>>> value 0, and when I see M consecutive 0 blocks that matches up to a >>>> 'missing run' when inspecting the runs using the tsk lib (actually my >>>> tsk4jJava binding, which is now finally showing its worth since I can do >>>> all data structure manipulation in Java, nicer than in C, for me at least). >>>> >>>> I am worried at being 'filler' and not 'sparse', the partial file >>>> content represented by the run(s) with addr 0 is not necessarily a >>>> sequence of zeros. >>>> >>>> Anyone shed light on this? Brian? >>>> >>>> Thanks >>>> >>>> Stuart >>>> >>>> ------------------------------------------------------------------------------ >>>> Slashdot TV. >>>> Video for Nerds. Stuff that matters. 
>>>> http://tv.slashdot.org/ >>>> _______________________________________________ >>>> sleuthkit-users mailing list >>>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >>>> http://www.sleuthkit.org >>> >>> >>> ------------------------------------------------------------------------------ >>> Want excitement? >>> Manually upgrade your production database. >>> When you want reliability, choose Perforce >>> Perforce version control. Predictably reliable. >>> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk >>> _______________________________________________ >>> sleuthkit-users mailing list >>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >>> http://www.sleuthkit.org >> >> >> >> -- >> Jon Stewart, Principal >> (646) 719-0317 | jo...@li... | Arlington, VA >> >> ------------------------------------------------------------------------------ >> Want excitement? >> Manually upgrade your production database. >> When you want reliability, choose Perforce >> Perforce version control. Predictably reliable. >> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> http://www.sleuthkit.org > -- Jon Stewart, Principal (646) 719-0317 | jo...@li... | Arlington, VA |
From: Simson G. <si...@ac...> - 2014-09-10 16:08:39
|
Perhaps a 2x - 5x speedup. On Sep 10, 2014, at 12:06 PM, Brian Carrier <ca...@sl...> wrote: > What types of performance improvements are we talking about? > > > On Sep 10, 2014, at 7:16 AM, Simson Garfinkel <si...@ac...> wrote: > >> Brian, >> >> You could sector-sort the files in the "\Users" and the "\Documents and Settings" folders for improved performnace. >> >> >> On Sep 9, 2014, at 10:05 PM, Brian Carrier <ca...@sl...> wrote: >> >>> Sorry to join the party late. >>> >>> I'm curious what types of speed ups you see by doing this sorting. >>> >>> In terms of if Autopsy could benefit, I think it depends on what type of investigation you are doing and if you are more interested in fastest overall time or interesting results sooner. Autopsy currently has an assumption that you are more interested in analysis results from user content ASAP more than you are interested in overall run time. I say this because of two "features": >>> - Files inside of "\Documents and Settings" or "\Users" are analyzed before other files. >>> - The keyword search module will commit its index every 5 minutes and do a search of pre-defined keywords. This makes the analysis process take longer, but means that you have keyword results in minutes versus hours or days. >>> >>> So, yes the overall analysis time of Autopsy could benefit from doing this type of sorting, but it could mean that for the first 60 minutes, that Autopsy is just analyzing Windows OS files and the user is patiently waiting for interesting results. >>> >>> brian >>> >>> >>> On Sep 4, 2014, at 7:53 PM, Stuart Maclean <st...@ap...> wrote: >>> >>>> On 09/04/2014 03:46 PM, Simson Garfinkel wrote: >>>>> Hi Stuart. >>>>> >>>>> You are correct — I put this in numerous presentations but never published it. >>>>> >>>>> The MD5 algorithm won't let you combine a partial hash from the middle of the file with one from the beginning. You need to start at the beginning and hash through the end. (That's one of the many problems with MD5 for forensics, BTW.) So I believe that the only approach is sorting the files by the sector number of the first run, and just leaving it at that. >>>>> >>>>> I saw speedup with both HDs and SSDs, strangely enough, but not as much with SSDs. There may be a prefetch thing going on here. >>>>> >>>>> I think that the Autopsy framework should hash this way, but currently it doesn't. On the other hand, it may be more useful to hash based on the "importance" of the files. >>>>> >>>>> Simson >>>>> >>>>> >>>> Hi Simson, currently I have just got as far as noting the 'seek >>>> distances' between consecutive runs, across ALL files. I have yet to >>>> actually read the file content. But I don't think it's that hard. As >>>> you point out, md5 summing must be done with the file content in correct >>>> order. I see an analogy between the 'runs ordered by block address but >>>> not necessarily file offset' and the problem the IP layer has in tcp/ip >>>> as it tries to reassemble the fragments of a datagram that may arrive in >>>> any order. We may have to have some 'pending data' structure for runs >>>> whose content has been read but which cannot yet be offered to the md5 >>>> hasher due to an as yet unread run being needed first. >>>> >>>> I'll let you know if/when I nail this. Pehaps Autopsy could benefit? >>>> Is fiwalk doing it the 'regular way' too, i.e reading all file content >>>> of each file as the walk proceeds? >>> >>> >>> >>> >> >> >> ------------------------------------------------------------------------------ >> Want excitement? 
>> Manually upgrade your production database. >> When you want reliability, choose Perforce >> Perforce version control. Predictably reliable. >> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> http://www.sleuthkit.org > |
From: Brian C. <ca...@sl...> - 2014-09-10 16:06:19
|
What types of performance improvements are we talking about? On Sep 10, 2014, at 7:16 AM, Simson Garfinkel <si...@ac...> wrote: > Brian, > > You could sector-sort the files in the "\Users" and the "\Documents and Settings" folders for improved performnace. > > > On Sep 9, 2014, at 10:05 PM, Brian Carrier <ca...@sl...> wrote: > >> Sorry to join the party late. >> >> I'm curious what types of speed ups you see by doing this sorting. >> >> In terms of if Autopsy could benefit, I think it depends on what type of investigation you are doing and if you are more interested in fastest overall time or interesting results sooner. Autopsy currently has an assumption that you are more interested in analysis results from user content ASAP more than you are interested in overall run time. I say this because of two "features": >> - Files inside of "\Documents and Settings" or "\Users" are analyzed before other files. >> - The keyword search module will commit its index every 5 minutes and do a search of pre-defined keywords. This makes the analysis process take longer, but means that you have keyword results in minutes versus hours or days. >> >> So, yes the overall analysis time of Autopsy could benefit from doing this type of sorting, but it could mean that for the first 60 minutes, that Autopsy is just analyzing Windows OS files and the user is patiently waiting for interesting results. >> >> brian >> >> >> On Sep 4, 2014, at 7:53 PM, Stuart Maclean <st...@ap...> wrote: >> >>> On 09/04/2014 03:46 PM, Simson Garfinkel wrote: >>>> Hi Stuart. >>>> >>>> You are correct — I put this in numerous presentations but never published it. >>>> >>>> The MD5 algorithm won't let you combine a partial hash from the middle of the file with one from the beginning. You need to start at the beginning and hash through the end. (That's one of the many problems with MD5 for forensics, BTW.) So I believe that the only approach is sorting the files by the sector number of the first run, and just leaving it at that. >>>> >>>> I saw speedup with both HDs and SSDs, strangely enough, but not as much with SSDs. There may be a prefetch thing going on here. >>>> >>>> I think that the Autopsy framework should hash this way, but currently it doesn't. On the other hand, it may be more useful to hash based on the "importance" of the files. >>>> >>>> Simson >>>> >>>> >>> Hi Simson, currently I have just got as far as noting the 'seek >>> distances' between consecutive runs, across ALL files. I have yet to >>> actually read the file content. But I don't think it's that hard. As >>> you point out, md5 summing must be done with the file content in correct >>> order. I see an analogy between the 'runs ordered by block address but >>> not necessarily file offset' and the problem the IP layer has in tcp/ip >>> as it tries to reassemble the fragments of a datagram that may arrive in >>> any order. We may have to have some 'pending data' structure for runs >>> whose content has been read but which cannot yet be offered to the md5 >>> hasher due to an as yet unread run being needed first. >>> >>> I'll let you know if/when I nail this. Pehaps Autopsy could benefit? >>> Is fiwalk doing it the 'regular way' too, i.e reading all file content >>> of each file as the walk proceeds? >> >> >> >> > > > ------------------------------------------------------------------------------ > Want excitement? > Manually upgrade your production database. > When you want reliability, choose Perforce > Perforce version control. Predictably reliable. 
> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2014-09-10 16:05:32
|
If all goes well, you'll never see them. The caller to the API never sees the attribute until it has been fully populated and for good files all of the filler entries will have been pushed out. The only times that you will see them are if: - The file system is corrupt and you don't have all of the run info. This can occur in NTFS if the run list is stored across multiple MFT entries and some of them have been re-used. - There is a bug in TSK. You won't need to wait. There is nothing to wait for. On Sep 10, 2014, at 8:46 AM, Jon Stewart <jo...@li...> wrote: > Sorry to veer off-topic with this thread (stupid gmail won't let me > change the subject), but I'm now more confused/concerned by this > explanation regarding FILLER entries. > > 1. Under what circumstances can you get a FILLER ATTR_RUN? > > 2. What can you do about it? How does one wait on TSK to go find the > missing run? > > Thanks, > > Jon > > On Tue, Sep 9, 2014 at 10:13 PM, Brian Carrier <ca...@sl...> wrote: >> The FILLER entries are there for basic record keeping because NTFS makes not guarantees that the runs will be stored in consecutive order. TSK adds the FILLER entries when it gets runs out of order and pops them out as it finds them. >> >> Is the data you are describing below from the same Ext4 image you mentioned before? >> >> brian >> >> >> >> On Sep 5, 2014, at 1:46 PM, Stuart Maclean <st...@ap...> wrote: >> >>> Hi all, I'm glad to have provoked some conversation on the merits (or >>> otherwise!) of md5 sums as useful representations of the state of a file >>> system. >>> >>> Can anyone enlighten me as to the meaning of the 'flags' member in a >>> TSK_FS_ATTR_RUN? Specifically, what does this comment mean? >>> >>> TSK_FS_ATTR_RUN_FLAG_FILLER = 0x01, ///< Entry is a filler for a run >>> that has not been seen yet in the processing (or has been lost) >>> >>> In a fs I am walking and inspecting the runs for, I am seeing run >>> structs with addr 0 and flags 1. I was under the impression that any >>> run address of 0 represented a 'missing run' i.e. that this part of the >>> file content is N zeros, where N = run.length * fs.blocksize. I presume >>> that would be the case were the run flags value 2: >>> >>> TSK_FS_ATTR_RUN_FLAG_SPARSE = 0x02 ///< Entry is a sparse run where >>> all data in the run is zeros >>> >>> If I use istat, I can see inodes which have certain 'Direct Blocks' of >>> value 0, and when I see M consecutive 0 blocks that matches up to a >>> 'missing run' when inspecting the runs using the tsk lib (actually my >>> tsk4jJava binding, which is now finally showing its worth since I can do >>> all data structure manipulation in Java, nicer than in C, for me at least). >>> >>> I am worried at being 'filler' and not 'sparse', the partial file >>> content represented by the run(s) with addr 0 is not necessarily a >>> sequence of zeros. >>> >>> Anyone shed light on this? Brian? >>> >>> Thanks >>> >>> Stuart >>> >>> ------------------------------------------------------------------------------ >>> Slashdot TV. >>> Video for Nerds. Stuff that matters. >>> http://tv.slashdot.org/ >>> _______________________________________________ >>> sleuthkit-users mailing list >>> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >>> http://www.sleuthkit.org >> >> >> ------------------------------------------------------------------------------ >> Want excitement? >> Manually upgrade your production database. >> When you want reliability, choose Perforce >> Perforce version control. Predictably reliable. 
>> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> http://www.sleuthkit.org > > > > -- > Jon Stewart, Principal > (646) 719-0317 | jo...@li... | Arlington, VA > > ------------------------------------------------------------------------------ > Want excitement? > Manually upgrade your production database. > When you want reliability, choose Perforce > Perforce version control. Predictably reliable. > http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
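For anyone who wants to check a suspect file directly, here is a minimal sketch of walking an attribute's run list and reporting FILLER and SPARSE runs, as discussed in this thread. It assumes TSK 4.x, where a non-resident attribute's runs are reachable through attr->nrd.run; the helper name is made up, and the field and flag names should be verified against tsk/fs/tsk_fs.h for your release.

#include <tsk/libtsk.h>
#include <stdio.h>

/* Walk the default attribute's data runs of an already-opened file and
 * report any FILLER or SPARSE entries. */
static void report_runs(TSK_FS_FILE *fs_file) {
    const TSK_FS_ATTR *attr = tsk_fs_file_attr_get(fs_file);
    if (attr == NULL) {
        tsk_error_print(stderr);
        return;
    }
    /* Resident attributes have no run list. */
    if ((attr->flags & TSK_FS_ATTR_NONRES) == 0)
        return;
    for (const TSK_FS_ATTR_RUN *run = attr->nrd.run; run != NULL; run = run->next) {
        if (run->flags & TSK_FS_ATTR_RUN_FLAG_FILLER)
            /* Placeholder for a run not yet seen (or lost): the content of
             * this range is unknown, not guaranteed to be zeros. */
            printf("FILLER run, %llu blocks\n", (unsigned long long) run->len);
        else if (run->flags & TSK_FS_ATTR_RUN_FLAG_SPARSE)
            /* Sparse run: the whole range reads back as zeros. */
            printf("SPARSE run, %llu blocks\n", (unsigned long long) run->len);
        else
            printf("run at block %llu, %llu blocks\n",
                   (unsigned long long) run->addr, (unsigned long long) run->len);
    }
}

In a healthy image the FILLER branch should never fire once the attribute has been fully populated; if it does fire on the Ext4 image in question, that points to either corruption or a TSK bug, exactly as described above.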
From: anthony s. <ant...@gm...> - 2014-09-10 15:30:42
|
Good morning Brian, Let me preface this by saying I’m new to the field and new to Autopsy, so I may just be missing a simple tweak somewhere. For the metadata: The Recent Documents page is laid out a little better in that it shows the path and the created date. An accessed/modified date would, I think, be a good addition. As far as the tagged files/results, the fact that they are all lumped together isn’t a huge problem but it might be easier to have them separated in the explorer pane. Then add the metadata in addition to the file name. The image hash could be added to the “Image Info” on the case summary page. Thanks AJS On Sep 9, 2014, at 9:54 PM, Brian Carrier <ca...@sl...> wrote: > Hi Anthony, > > What exactly are the requests: > > 1) In the file tags page, include the file metadata in addition to just file name? > 2) Where do you want hash? In the file tags page? > > brian > > On Sep 5, 2014, at 3:22 PM, anthony snow <ant...@gm...> wrote: > >> Good afternoon, >> >> I’m starting to use Autopsy as my main suite vs EnCase and since my IEF trial ran out, but it was pointed out to me that file metadata is not included in the html report. Also, there is no hash in the report. I’m finding that the reporting is not very flexible at this point. What good is separating the evidence into separate tags if its just going to report it all under one “tagged files” link? >> >> Any word on a fix for this? I tried using the excel report but it puts in more info into the report than I want it to. >> >> What are others doing to work around this? >> >> >> Thank you >> AJS >> ------------------------------------------------------------------------------ >> Slashdot TV. >> Video for Nerds. Stuff that matters. >> http://tv.slashdot.org/ >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> http://www.sleuthkit.org > |
From: Jon S. <jo...@li...> - 2014-09-10 13:12:54
|
Sorry to veer off-topic with this thread (stupid gmail won't let me change the subject), but I'm now more confused/concerned by this explanation regarding FILLER entries. 1. Under what circumstances can you get a FILLER ATTR_RUN? 2. What can you do about it? How does one wait on TSK to go find the missing run? Thanks, Jon On Tue, Sep 9, 2014 at 10:13 PM, Brian Carrier <ca...@sl...> wrote: > The FILLER entries are there for basic record keeping because NTFS makes not guarantees that the runs will be stored in consecutive order. TSK adds the FILLER entries when it gets runs out of order and pops them out as it finds them. > > Is the data you are describing below from the same Ext4 image you mentioned before? > > brian > > > > On Sep 5, 2014, at 1:46 PM, Stuart Maclean <st...@ap...> wrote: > >> Hi all, I'm glad to have provoked some conversation on the merits (or >> otherwise!) of md5 sums as useful representations of the state of a file >> system. >> >> Can anyone enlighten me as to the meaning of the 'flags' member in a >> TSK_FS_ATTR_RUN? Specifically, what does this comment mean? >> >> TSK_FS_ATTR_RUN_FLAG_FILLER = 0x01, ///< Entry is a filler for a run >> that has not been seen yet in the processing (or has been lost) >> >> In a fs I am walking and inspecting the runs for, I am seeing run >> structs with addr 0 and flags 1. I was under the impression that any >> run address of 0 represented a 'missing run' i.e. that this part of the >> file content is N zeros, where N = run.length * fs.blocksize. I presume >> that would be the case were the run flags value 2: >> >> TSK_FS_ATTR_RUN_FLAG_SPARSE = 0x02 ///< Entry is a sparse run where >> all data in the run is zeros >> >> If I use istat, I can see inodes which have certain 'Direct Blocks' of >> value 0, and when I see M consecutive 0 blocks that matches up to a >> 'missing run' when inspecting the runs using the tsk lib (actually my >> tsk4jJava binding, which is now finally showing its worth since I can do >> all data structure manipulation in Java, nicer than in C, for me at least). >> >> I am worried at being 'filler' and not 'sparse', the partial file >> content represented by the run(s) with addr 0 is not necessarily a >> sequence of zeros. >> >> Anyone shed light on this? Brian? >> >> Thanks >> >> Stuart >> >> ------------------------------------------------------------------------------ >> Slashdot TV. >> Video for Nerds. Stuff that matters. >> http://tv.slashdot.org/ >> _______________________________________________ >> sleuthkit-users mailing list >> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users >> http://www.sleuthkit.org > > > ------------------------------------------------------------------------------ > Want excitement? > Manually upgrade your production database. > When you want reliability, choose Perforce > Perforce version control. Predictably reliable. > http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org -- Jon Stewart, Principal (646) 719-0317 | jo...@li... | Arlington, VA |
From: Brian S. <bhs...@ya...> - 2014-09-10 13:11:19
|
I eventually discovered the hash databases listed on the "Add Data Source" -> "Configure Ingest Modules wizard" -> "Hash Lookup" side-panel, but none were selected by default. In another setup I tested, they must have been automatically selected. Once I manually selected the lists, everything worked well. We'll chalk this one up to user error... Thanks for your time, -Brian On Tuesday, September 9, 2014 9:51 PM, Brian Carrier <ca...@sl...> wrote: Hi Brian, When you added the image, you should have then gotten a list of modules to run and it sounds like the hash database module was enabled (otherwise you would not have seen those messages). Did you see the hash databases listed in the panel on the right when you selected the hash database module? Were they selected? thanks, brian On Sep 3, 2014, at 3:02 PM, Brian S. <bhs...@ya...> wrote: > Using hfind, I created two hash db indexes (1 custom, 1 based on NSRL) and added them to Autopsy 3.1.0. Command line searching using hfind works as expected, so I know the indexes are good. When I go to ingest new media, I get the messages listed below. I'd expect the "No known bad..." entry to appear, but not the second. Is this a result of user error or a bug? I've found the wiki documentation on the hash db "import" process to be pretty weak, so it easily could be user error. Thanks for your time. -Brian > > Module Subject > Hash lookup "No known bad hash database set" > Hash lookup "No known hash database set" > > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/_______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
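For completeness, a hedged sketch of the same lookup done through the C library that hfind wraps. The function and flag names (tsk_hdb_open, tsk_hdb_lookup_str, TSK_HDB_FLAG_QUICK) are recalled from the TSK 4.x hash-database API and should be checked against tsk/hashdb/tsk_hashdb.h; the database path and hash value in any call are placeholders.

#include <tsk/libtsk.h>
#include <stdio.h>

/* Quick yes/no lookup of an MD5 hash against an hfind-indexed database.
 * Returns 1 if found, 0 if not, -1 on error. */
static int8_t lookup_md5(TSK_TCHAR *db_path, const char *md5hex) {
    TSK_HDB_INFO *hdb = tsk_hdb_open(db_path, TSK_HDB_OPEN_NONE);
    if (hdb == NULL) {
        tsk_error_print(stderr);
        return -1;
    }
    /* TSK_HDB_FLAG_QUICK: just report presence, no per-match callback. */
    int8_t found = tsk_hdb_lookup_str(hdb, md5hex, TSK_HDB_FLAG_QUICK, NULL, NULL);
    tsk_hdb_close(hdb);
    return found;
}

This mirrors the command-line hfind lookup used to verify the indexes; Autopsy's Hash Lookup module only consults the databases that are actually selected in the ingest wizard, which is the issue resolved above.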
From: Simson G. <si...@ac...> - 2014-09-10 11:16:52
|
Brian, You could sector-sort the files in the "\Users" and the "\Documents and Settings" folders for improved performnace. On Sep 9, 2014, at 10:05 PM, Brian Carrier <ca...@sl...> wrote: > Sorry to join the party late. > > I'm curious what types of speed ups you see by doing this sorting. > > In terms of if Autopsy could benefit, I think it depends on what type of investigation you are doing and if you are more interested in fastest overall time or interesting results sooner. Autopsy currently has an assumption that you are more interested in analysis results from user content ASAP more than you are interested in overall run time. I say this because of two "features": > - Files inside of "\Documents and Settings" or "\Users" are analyzed before other files. > - The keyword search module will commit its index every 5 minutes and do a search of pre-defined keywords. This makes the analysis process take longer, but means that you have keyword results in minutes versus hours or days. > > So, yes the overall analysis time of Autopsy could benefit from doing this type of sorting, but it could mean that for the first 60 minutes, that Autopsy is just analyzing Windows OS files and the user is patiently waiting for interesting results. > > brian > > > On Sep 4, 2014, at 7:53 PM, Stuart Maclean <st...@ap...> wrote: > >> On 09/04/2014 03:46 PM, Simson Garfinkel wrote: >>> Hi Stuart. >>> >>> You are correct — I put this in numerous presentations but never published it. >>> >>> The MD5 algorithm won't let you combine a partial hash from the middle of the file with one from the beginning. You need to start at the beginning and hash through the end. (That's one of the many problems with MD5 for forensics, BTW.) So I believe that the only approach is sorting the files by the sector number of the first run, and just leaving it at that. >>> >>> I saw speedup with both HDs and SSDs, strangely enough, but not as much with SSDs. There may be a prefetch thing going on here. >>> >>> I think that the Autopsy framework should hash this way, but currently it doesn't. On the other hand, it may be more useful to hash based on the "importance" of the files. >>> >>> Simson >>> >>> >> Hi Simson, currently I have just got as far as noting the 'seek >> distances' between consecutive runs, across ALL files. I have yet to >> actually read the file content. But I don't think it's that hard. As >> you point out, md5 summing must be done with the file content in correct >> order. I see an analogy between the 'runs ordered by block address but >> not necessarily file offset' and the problem the IP layer has in tcp/ip >> as it tries to reassemble the fragments of a datagram that may arrive in >> any order. We may have to have some 'pending data' structure for runs >> whose content has been read but which cannot yet be offered to the md5 >> hasher due to an as yet unread run being needed first. >> >> I'll let you know if/when I nail this. Pehaps Autopsy could benefit? >> Is fiwalk doing it the 'regular way' too, i.e reading all file content >> of each file as the walk proceeds? > > > > |
From: Brian C. <ca...@sl...> - 2014-09-10 02:15:42
|
We have just started an effort to make a STIX / Cybox module in Autopsy as part of a DHS S&T effort. In Autopsy, the hash value is stored in the DB after the hash lookup module runs, so you can also do the Cybox analysis on each file as it is analyzed or after all of the files have been analyzed. On Sep 4, 2014, at 7:04 PM, Stuart Maclean <st...@ap...> wrote: > I am tracking recent efforts in STIX and Cybox and all things Mitre. > One indicator of compromise is an md5 hash of some file. Presumably you > compare the hash with all files on some file system to see if there is a > match. Obviously this requires a walk of the host fs, using e.g. fls or > fiwalk or the tsk library in general. > > Is this a common activity, the hashing of a complete filesystem that > is? If yes, some experiments I have done with minimising total disk > seek time by ordering Runs, reading content from the ordered Runs and > piecing each file's hash back together would show that this is indeed a > worthy optimization since it can decrease the time spent deriving the > full hash table considerably. > > I did see a slide deck by Simson G where he alluded to a similar win > situation when disk reads are ordered so as to minimise seek time, but > wonder if much has been published on the topic, specifically relating to > the digital forensics arena, i.e. when an entire file system contents is > to be read in a single pass, for the purposes of producing an 'md5 -> > file path' map. > > Opinions and comments welcomed. > > Stuart > > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Anthony S. <ant...@gm...> - 2014-09-10 02:13:56
|
Thanks Brian. In this particular case I was searching for credit card fraud related images. I haven't had to work through a CP case yet, but it's only a matter of time, so I will be utilizing that feature at some point.

Thanks again

Sent using CloudMagic

On Tue, Sep 09, 2014 at 9:44 PM, Brian Carrier <ca...@sl...> wrote:

Hi Anthony,

There is no way to adjust it in the current version, BUT 3.1.1 will have an entirely new image gallery viewer that we built to make it easier to go through lots of thumbnails. That being said, you may not like the 3.1.1 feature because it groups thumbnails by folders. It was built to be able to analyze child exploitation images while the image is being ingested and shows you the next folder with the most hash hits when you press the "next" button. We tried to organize all of the images in a big scroll area that got populated as images were hashed and analyzed, but it was really slow and resource intensive.

brian

On Sep 2, 2014, at 11:15 AM, anthony snow <ant...@gm...> wrote:

> Good morning,
>
> Is there any way to increase the number of thumbnails on a page? Or is that perhaps something in the works? Its a bit slow to click through 183 pages of thumbs.
>
> Thanks
>
> ajs
> ------------------------------------------------------------------------------
> Slashdot TV.
> Video for Nerds. Stuff that matters.
> http://tv.slashdot.org/
> _______________________________________________
> sleuthkit-users mailing list
> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users
> http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2014-09-10 02:13:10
|
The FILLER entries are there for basic record keeping because NTFS makes not guarantees that the runs will be stored in consecutive order. TSK adds the FILLER entries when it gets runs out of order and pops them out as it finds them. Is the data you are describing below from the same Ext4 image you mentioned before? brian On Sep 5, 2014, at 1:46 PM, Stuart Maclean <st...@ap...> wrote: > Hi all, I'm glad to have provoked some conversation on the merits (or > otherwise!) of md5 sums as useful representations of the state of a file > system. > > Can anyone enlighten me as to the meaning of the 'flags' member in a > TSK_FS_ATTR_RUN? Specifically, what does this comment mean? > > TSK_FS_ATTR_RUN_FLAG_FILLER = 0x01, ///< Entry is a filler for a run > that has not been seen yet in the processing (or has been lost) > > In a fs I am walking and inspecting the runs for, I am seeing run > structs with addr 0 and flags 1. I was under the impression that any > run address of 0 represented a 'missing run' i.e. that this part of the > file content is N zeros, where N = run.length * fs.blocksize. I presume > that would be the case were the run flags value 2: > > TSK_FS_ATTR_RUN_FLAG_SPARSE = 0x02 ///< Entry is a sparse run where > all data in the run is zeros > > If I use istat, I can see inodes which have certain 'Direct Blocks' of > value 0, and when I see M consecutive 0 blocks that matches up to a > 'missing run' when inspecting the runs using the tsk lib (actually my > tsk4jJava binding, which is now finally showing its worth since I can do > all data structure manipulation in Java, nicer than in C, for me at least). > > I am worried at being 'filler' and not 'sparse', the partial file > content represented by the run(s) with addr 0 is not necessarily a > sequence of zeros. > > Anyone shed light on this? Brian? > > Thanks > > Stuart > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2014-09-10 02:06:05
|
Sorry to join the party late. I'm curious what types of speed ups you see by doing this sorting. In terms of if Autopsy could benefit, I think it depends on what type of investigation you are doing and if you are more interested in fastest overall time or interesting results sooner. Autopsy currently has an assumption that you are more interested in analysis results from user content ASAP more than you are interested in overall run time. I say this because of two "features": - Files inside of "\Documents and Settings" or "\Users" are analyzed before other files. - The keyword search module will commit its index every 5 minutes and do a search of pre-defined keywords. This makes the analysis process take longer, but means that you have keyword results in minutes versus hours or days. So, yes the overall analysis time of Autopsy could benefit from doing this type of sorting, but it could mean that for the first 60 minutes, that Autopsy is just analyzing Windows OS files and the user is patiently waiting for interesting results. brian On Sep 4, 2014, at 7:53 PM, Stuart Maclean <st...@ap...> wrote: > On 09/04/2014 03:46 PM, Simson Garfinkel wrote: >> Hi Stuart. >> >> You are correct — I put this in numerous presentations but never published it. >> >> The MD5 algorithm won't let you combine a partial hash from the middle of the file with one from the beginning. You need to start at the beginning and hash through the end. (That's one of the many problems with MD5 for forensics, BTW.) So I believe that the only approach is sorting the files by the sector number of the first run, and just leaving it at that. >> >> I saw speedup with both HDs and SSDs, strangely enough, but not as much with SSDs. There may be a prefetch thing going on here. >> >> I think that the Autopsy framework should hash this way, but currently it doesn't. On the other hand, it may be more useful to hash based on the "importance" of the files. >> >> Simson >> >> > Hi Simson, currently I have just got as far as noting the 'seek > distances' between consecutive runs, across ALL files. I have yet to > actually read the file content. But I don't think it's that hard. As > you point out, md5 summing must be done with the file content in correct > order. I see an analogy between the 'runs ordered by block address but > not necessarily file offset' and the problem the IP layer has in tcp/ip > as it tries to reassemble the fragments of a datagram that may arrive in > any order. We may have to have some 'pending data' structure for runs > whose content has been read but which cannot yet be offered to the md5 > hasher due to an as yet unread run being needed first. > > I'll let you know if/when I nail this. Pehaps Autopsy could benefit? > Is fiwalk doing it the 'regular way' too, i.e reading all file content > of each file as the walk proceeds? |
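A minimal sketch of the ordering step being discussed here: sort the per-file hash jobs by the block address of their first data run, then hash each file in that order so the disk head sweeps mostly forward. The struct and function names are made up for illustration; filling in first_block would use the same run-list walk shown earlier in the thread, and each file's bytes are still fed to the digest in file order.

#include <stdlib.h>
#include <tsk/libtsk.h>

/* One pending hash job: which inode to hash and where its data starts. */
struct hash_job {
    TSK_INUM_T  inum;        /* metadata address of the file */
    TSK_DADDR_T first_block; /* address of its first non-sparse data run */
};

static int cmp_by_first_block(const void *a, const void *b) {
    const struct hash_job *ja = a, *jb = b;
    if (ja->first_block < jb->first_block) return -1;
    if (ja->first_block > jb->first_block) return  1;
    return 0;
}

/* jobs[] is assumed to have been collected during a directory walk. */
static void order_for_hashing(struct hash_job *jobs, size_t n) {
    qsort(jobs, n, sizeof jobs[0], cmp_by_first_block);
    /* ...then open each inode with tsk_fs_file_open_meta() and stream its
     * contents, front to back, into the MD5/SHA digest of your choice. */
}

This only reorders whole files, the simpler variant Simson describes; interleaving runs from different files and reassembling each digest afterwards, as Stuart proposes, needs the extra "pending data" bookkeeping and is not shown here.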
From: Brian C. <ca...@sl...> - 2014-09-10 01:54:16
|
Hi Anthony, What exactly are the requests: 1) In the file tags page, include the file metadata in addition to just file name? 2) Where do you want hash? In the file tags page? brian On Sep 5, 2014, at 3:22 PM, anthony snow <ant...@gm...> wrote: > Good afternoon, > > I’m starting to use Autopsy as my main suite vs EnCase and since my IEF trial ran out, but it was pointed out to me that file metadata is not included in the html report. Also, there is no hash in the report. I’m finding that the reporting is not very flexible at this point. What good is separating the evidence into separate tags if its just going to report it all under one “tagged files” link? > > Any word on a fix for this? I tried using the excel report but it puts in more info into the report than I want it to. > > What are others doing to work around this? > > > Thank you > AJS > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2014-09-10 01:51:07
|
Hi Brian, When you added the image, you should have then gotten a list of modules to run and it sounds like the hash database module was enabled (otherwise you would not have seen those messages). Did you see the hash databases listed in the panel on the right when you selected the hash database module? Were they selected? thanks, brian On Sep 3, 2014, at 3:02 PM, Brian S. <bhs...@ya...> wrote: > Using hfind, I created two hash db indexes (1 custom, 1 based on NSRL) and added them to Autopsy 3.1.0. Command line searching using hfind works as expected, so I know the indexes are good. When I go to ingest new media, I get the messages listed below. I'd expect the "No known bad..." entry to appear, but not the second. Is this a result of user error or a bug? I've found the wiki documentation on the hash db "import" process to be pretty weak, so it easily could be user error. Thanks for your time. -Brian > > Module Subject > Hash lookup "No known bad hash database set" > Hash lookup "No known hash database set" > > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/_______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2014-09-10 01:49:40
|
Hi Stuart, I'm wondering if the file in question is sparse and Ext4 isn't properly dealing with that. I made an issue for it. Any debugging help would be appreciated though to verify that in the original file. brian On Sep 3, 2014, at 3:21 PM, Stuart Maclean <st...@ap...> wrote: > I am using tsk 4.1.3 on Ubuntu, 64-bit machine. /dev/sda1 is a ext4 > filessytem. > > I have an inode for which istat claims > > allocated > inode: 1322012 > size: 4296704 > direct blocks: 5289177 > > If I dd the file, I do indeed see 4296704 bytes produced. Somewhat > curiously, the first 1876 bytes appear to be 'regular content', in fact > utf-16 text (the file itself is some sort of kde cache file), while the > remainder of the file, over 4MB, are all zeros. According to dd that is. > > Now, if I icat this file (icat also from 4.1.3), the icat produces only > 4096 bytes of content. I presume this number reflects the fact that > istat said there was only a single block, and the fs block size is > 4096. The icat output shows the same 1876 leading bytes as dd did, and > further has all zeros from there up to its 4096 byte length. > > I am not quite sure what is going on. I was under the impression that > icat and dd would give the same result for this file (and would for all > allocated files in general). > > Any help appreciated. > > Stuart > > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
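For debugging reports like this one, a rough sketch that reads a file's full logical size back through the library, so sparse regions should come back as zeros, and compares the byte count against meta->size (istat's "size:" value). Error handling is minimal, the inode number mentioned in the comment is simply the one from the report, and the function and flag names should be checked against the TSK 4.1.x headers.

#include <tsk/libtsk.h>
#include <stdio.h>

/* Read every byte of an inode through TSK and report how far we got.
 * For the case above you would pass inode 1322012 and expect 4296704
 * bytes, with everything past the first block returned as zeros. */
static void check_logical_size(TSK_FS_INFO *fs, TSK_INUM_T inum) {
    TSK_FS_FILE *f = tsk_fs_file_open_meta(fs, NULL, inum);
    if (f == NULL || f->meta == NULL) {
        tsk_error_print(stderr);
        return;
    }
    TSK_OFF_T size = f->meta->size;    /* istat's "size:" value */
    TSK_OFF_T off = 0;
    char buf[65536];
    while (off < size) {
        size_t want = (size - off < (TSK_OFF_T) sizeof(buf))
                          ? (size_t)(size - off) : sizeof(buf);
        ssize_t got = tsk_fs_file_read(f, off, buf, want,
                                       TSK_FS_FILE_READ_FLAG_NONE);
        if (got <= 0) {            /* an icat-style short read would stop here */
            tsk_error_print(stderr);
            break;
        }
        off += got;
    }
    printf("meta size %lld, bytes readable %lld\n",
           (long long) size, (long long) off);
    tsk_fs_file_close(f);
}

If this loop stops at 4096 bytes, the shortfall is in the library's handling of the sparse tail rather than in icat itself, which matches the suspicion above.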
From: Brian C. <ca...@sl...> - 2014-09-10 01:44:30
|
Hi Anthony, There is no way to adjust it in the current version, BUT 3.1.1 will have an entirely new image gallery viewer that we built to make it easier to go through lots of thumbnails. That being said, you may not like the 3.1.1 feature because it groups thumbnails by folders. It was built to be able to analyze child exploitation images while the image is being ingested and shows you the next folder with the most hash hits when you press the "next" button. We tried to organize all of the images in a big scroll area that got populated as images were hashed and analyzed, but it was really slow and resource intensive. brian On Sep 2, 2014, at 11:15 AM, anthony snow <ant...@gm...> wrote: > Good morning, > > Is there any way to increase the number of thumbnails on a page? Or is that perhaps something in the works? Its a bit slow to click through 183 pages of thumbs. > > Thanks > > ajs > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/ > _______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2014-09-10 01:39:17
|
Hi Alessandro, Sorry for the delayed response. I had a bit of travel going on. Can you add the image to a case again and notice if in the final panel of the "Add Data Source" wizard if there is a button that says that there were errors ingesting the image? If so, can you click on the button and send the messages? We should review that panel because there have been several cases where people don't notice that some errors occurred... thanks, brian On Sep 2, 2014, at 12:14 PM, Alessandro Farina <at...@gm...> wrote: > Yes. > If I select the partition in the partitions tree, nothing is show in the detail window. > > > 2014-08-21 4:09 GMT+02:00 Brian Carrier <ca...@sl...>: > So the image has four partitions, but one of them isn't showing any files? > > > On Aug 14, 2014, at 5:42 AM, Alessandro Farina <at...@gm...> wrote: > > > Hi > > I'm analysing an image (EWF) extracted from an IMAC. > > The disk (image) has 4 partition: 2 HFS+ and 2 NTFS (BOOTCAMP). > > I'm using Autopsy 3.0.10 on Window 7 SP1. > > From the partition browser I can't access to one of the HFS+ partition. > > The image file is ok, infact I can mount and browse all the partition in > > linux (via ewfmount) without any problem. The same happens if I access > > the image via ftk mounter on windows. > > I think there is some sort of problem with Autopsy and I would like to > > help whith analysis and debug. > > I can't send to many info on the contents because is part of an ongoing > > investigation, but I think I can share info on disk and partition structure. > > Any help will be very appreciated. > > > > Thanks in advance > > Alessandro > > > > ------------------------------------------------------------------------------ > > _______________________________________________ > > sleuthkit-users mailing list > > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > > http://www.sleuthkit.org > > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/_______________________________________________ > sleuthkit-users mailing list > https://lists.sourceforge.net/lists/listinfo/sleuthkit-users > http://www.sleuthkit.org |
From: Stuart M. <st...@ap...> - 2014-09-09 18:10:29
|
Sometimes when using fls or the tsk library to walk a filesystem, I see these lines printed to stderr:

Attribute Run Dump:
0 to 4095 NotFiller

Can anyone shed any light on these messages? Do they indicate a corrupt filesystem?

Thanks
Stuart |