sleuthkit-users Mailing List for The Sleuth Kit (Page 177)
From: Palmer,Gary L. <PA...@mi...> - 2005-07-15 17:52:33
|
Reminder to anyone planning to attend DFRWS 2005: Hotel reservations must be made TODAY to ensure that you get the reduced rate of $85 per night at the Astor Crowne Plaza Hotel. Please call 888-696-4806 and reference DFRWS 2005. After today, rooms will be booked on an as-available basis only. Thanks, The DFRWS 2005 Organizing Committee P.S. Apologies for any duplication of this message--we're using several mailing lists. |
From: Palmer,Gary L. <PA...@mi...> - 2005-07-12 20:06:39
|
This is a reminder that early registration for the 5th Annual Digital Forensic Research Workshop ends this Friday, July 15, 2005. This year the workshop is in New Orleans, August 17-19. The workshop is designed to provide a forum for researchers and practitioners with interests in digital forensics to share results, knowledge, and experience. This year's program is now available online at: http://www.dfrws.org/2005/program.html . For more information about DFRWS05 and to register, go to: http://www.dfrws.org/2005/index.html See you in New Orleans! DFRWS Organizing Committee. |
From: Brian C. <ca...@sl...> - 2005-07-08 22:48:50
|
Version 2.02 of The Sleuth Kit is now available: http://www.sleuthkit.org/sleuthkit/

* Bug Fixes
  o fls could crash if FAT short name did not exist.
  o Linux header file problem with some distros.
  o Missing UFS / Ext2/3 file names (if deleted file claimed it used that data).
  o Missing FAT directory entries with ils (if initial entries in cluster were invalid).
  o Missing NTFS file if no $DATA or $IDX_* attributes existed (which meant the file had no content).

* Updates
  o Support for OS X Tiger.
  o Internal design improvements and memory leak fix.
  o 'ils -o' was re-added as 'ils -O'.
  o 'mactime -m' was added so that the month is printed as a number instead of a name.

MD5: d8f53a69069369ee20a4ce623eb640b5

brian |
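The MD5 from the announcement can be checked after downloading; a minimal sketch, where the tarball name sleuthkit-2.02.tar.gz is an assumption (the post does not give the file name, only the digest):

```shell
# Hedged sketch: compare a downloaded file's md5sum against an expected
# digest. The tarball name in the usage line is an assumption; the
# digest is the one from the announcement.
verify_md5() {                      # verify_md5 <file> <expected-md5>
    [ "$(md5sum "$1" | awk '{print $1}')" = "$2" ]
}
# verify_md5 sleuthkit-2.02.tar.gz d8f53a69069369ee20a4ce623eb640b5 \
#     && echo "checksum OK"
```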
From: Dennis Borkhus-V. <db...@ya...> - 2005-07-06 17:31:29
|
I'm a bit of a novice with Linux and I have installed Sleuthkit and Autopsy and they both work. What I want to know is: how can I make a program shortcut to start them? I have used some of the Knoppix Live CDs that have sleuthkit and autopsy set up this way, so you click on an icon and the program starts. Now I have to first run the ./autopsy script, then copy the address from that window and paste it into a browser. And I have to leave the window from the script open, otherwise it quits. Any help would be appreciated. Dennis __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com |
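One way to approximate what the Knoppix CDs do is a small wrapper script a desktop icon can point at. This is a hypothetical sketch: the install path, the log file location, and the format of the URL that autopsy prints are all assumptions to adjust for your install.

```shell
#!/bin/sh
# Hypothetical launcher sketch: start autopsy in the background, pull
# the session URL out of the text it prints, and hand it to a browser.
# Path, log file, and output format are assumptions.
launch_autopsy() {
    cd /usr/local/autopsy || return 1
    ./autopsy > /tmp/autopsy.log 2>&1 &
    sleep 3                          # give the server time to print its URL
    url=$(grep -o 'http://[^ ]*' /tmp/autopsy.log | head -n 1)
    [ -n "$url" ] && xdg-open "$url"
    wait                             # keep autopsy alive until you close it
}
# launch_autopsy
```

The grep pulls the first http URL (which carries the session token) out of the script's output, so you do not have to copy and paste it by hand.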
From: <fu...@gm...> - 2005-07-01 08:47:03
|
I did istat on the inode:

MFT Entry Header Values:
Entry: 7084 Sequence: 14
$LogFile Sequence Number: 1081760676
Allocated
File Links: 2

$STANDARD_INFORMATION Attribute Values:
Flags: Archive
Owner ID: 0
Security ID: 552
Created: Tue May 10 19:05:09 2005
File Modified: Mon Jun 6 08:46:06 2005
MFT Modified: Mon Jun 6 08:46:06 2005
Accessed: Mon Jun 6 17:59:50 2005

$FILE_NAME Attribute Values:
Flags: Archive
Name: archive2005.pst
Parent MFT Entry: 2907 Sequence: 1
Allocated Size: 0 Actual Size: 0
Created: Tue May 10 19:05:09 2005
File Modified: Tue May 10 19:05:12 2005
MFT Modified: Tue May 10 19:05:12 2005
Accessed: Tue May 10 19:05:12 2005

$ATTRIBUTE_LIST Attribute Values:
Type: 16-0 MFT Entry: 7084 VCN: 0
Type: 48-2 MFT Entry: 7084 VCN: 0
Type: 48-3 MFT Entry: 7084 VCN: 0

Attributes:
Type: $STANDARD_INFORMATION (16-0) Name: N/A Resident size: 72
Type: $ATTRIBUTE_LIST (32-5) Name: N/A Resident size: 96
Type: $FILE_NAME (48-3) Name: N/A Resident size: 90
Type: $FILE_NAME (48-2) Name: N/A Resident size: 96

So istat sees it. Hm. And yes, it is the same file system. Running fsck was not possible because I don't have a new Windows box here and fsck for NTFS under Linux does not exist, afaik. Btw, the disk is now in the laboratory, the file really has size 0, so the only issue is: why does autopsy not list the file?

To compare, here is the istat output of a file in the same directory which is listed in autopsy:

MFT Entry Header Values:
Entry: 2908 Sequence: 1
$LogFile Sequence Number: 1160480335
Allocated
File Links: 2

$STANDARD_INFORMATION Attribute Values:
Flags: Archive
Owner ID: 0
Security ID: 622
Created: Tue Oct 5 19:24:22 2004
File Modified: Tue May 24 13:21:38 2005
MFT Modified: Wed Jun 15 09:03:42 2005
Accessed: Wed Jun 15 17:47:45 2005

$FILE_NAME Attribute Values:
Flags: Archive
Name: archive2002.pst
Parent MFT Entry: 2907 Sequence: 1
Allocated Size: 0 Actual Size: 0
Created: Tue Oct 5 19:24:22 2004
File Modified: Tue Oct 5 19:24:22 2004
MFT Modified: Tue Oct 5 19:24:22 2004
Accessed: Tue Oct 5 19:24:22 2004

Attributes:
Type: $STANDARD_INFORMATION (16-0) Name: N/A Resident size: 72
Type: $FILE_NAME (48-3) Name: N/A Resident size: 90
Type: $FILE_NAME (48-2) Name: N/A Resident size: 96
Type: $DATA (128-4) Name: $Data Non-Resident size: 693354496

<snip>

thank you and regards
Fuerst

> --- Original Message ---
> From: Brian Carrier <ca...@sl...>
> To: fu...@gm...
> Cc: sle...@li...
> Subject: Re: [sleuthkit-users] NTFS, files with no permissions
> Date: Sun, 26 Jun 2005 13:34:27 -0500
>
> Can you run 'ls -i' on these files to find the "inode" number and then
> run 'istat' on them? You can also run 'istat' by using the Metadata
> mode in Autopsy. Is this the same file system that you previously
> noted that it had some consistency issues? Did you ever run 'fsck' or
> similar to fix the problems?
>
> brian
>
> On Jun 24, 2005, at 8:12 AM, fu...@gm... wrote:
>
> > Hi
> >
> > Once more, I'm looking at a NTFS-Disk. When I mount ro the disk, I can
> > see in a directory the File archive2005.pst with the following permission:
> >
> > -r-------- 1 0 2005-06-06 08:30 archive2004.pst
> >
> > So file size is 0, the rest seems okay. But when I go to the directory
> > in Autopsy, the file does not appear. What did happen here? Any
> > information I can provide you? I use Autopsy 2.05 and sleuthkit 2.01
> > on a Debian Sarge.
> >
> > The other issue is: I have a file which looks the following when I do
> > ls -l in the mounted disk:
> >
> > ?--------- 1 0 2002-04-21 11:46 54I70048.jpg
> >
> > When I look into Autopsy, the file shows not up. If I try to copy the
> > file from the mounted filesystem to somewhere, cp bothers me with
> > "argument is illegal". So there is something wrong with this file but
> > I'm wondering why it's not shown in Autopsy?
>
> -------------------------------------------------------
> SF.Net email is sponsored by: Discover Easy Linux Migration Strategies
> from IBM. Find simple to follow Roadmaps, straightforward articles,
> informative Webcasts and more! Get everything you need to get up to
> speed, fast. http://ads.osdn.com/?ad_id=7477&alloc_id=16492&op=click
> _______________________________________________
> sleuthkit-users mailing list
> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users
> http://www.sleuthkit.org

--
Spread the word: GMX DSL flat rates with speed guarantee! From 4.99 euros/month: http://www.gmx.net/de/go/dsl |
From: Brian C. <ca...@sl...> - 2005-07-01 04:44:08
|
On Jun 30, 2005, at 5:58 PM, Ed wrote:

> I'm trying to restore some files from an ext3 volume group (split into
> 3 image files) using http://www.sleuthkit.org/sleuthkit/docs/ref_fs.html
> as a guide.

That guide uses Ext2 and not Ext3, the behavior changed. I should add a note to it.

> istat returns:
>
> Direct Blocks:
> 0 0
>
> Which doesn't help me much.
>
> Does anyone have any pointers on how to get this working

It is working. The block pointers are wiped with Ext3, so file-system based recovery won't help. brian |
From: Ed <wh...@dm...> - 2005-06-30 22:59:17
|
Hi

I'm trying to restore some files from an ext3 volume group (split into 3 image files) using http://www.sleuthkit.org/sleuthkit/docs/ref_fs.html as a guide. Using fls finds me the directory I'm after:

d/d * 3637251: shared/somedir

but if I perform:

istat -b 2 -f linux-ext3 -i split image1.dd image2.dd image3.dd 367251

istat returns:

Direct Blocks:
0 0

Which doesn't help me much. Does anyone have any pointers on how to get this working. TIA Ed |
From: Brian C. <ca...@sl...> - 2005-06-29 13:41:10
|
On Jun 29, 2005, at 8:41 AM, Angus Marshall wrote:

> On Wed Jun 29 13:29 , 'Conley, Tom' <tom...@us...> sent:
>
>> I am receiving the following error when attempting make of Sleuthkit
>> under Fedora Core 4
>> any ideas??
>>
>> ntfs.c : In function 'ntfs_proc_attseq':
>> ntfs.c:770: error: invalid lvalue in assignment
>
> The compiler is barfing on the use of the cast on the lhs of the
> assignment. I'm working on a way round it now.

You can either remove the '-Wall' from the OPT line in src/fstools/Makefile or replace line 770 of src/fstools/ntfs.c:

(uintptr_t) attr = (uintptr_t) attr + getu32(fs, attr->len)) {

with this:

attr = (ntfs_attr *) ((uintptr_t) attr + getu32(fs, attr->len))) {

I should have a new version out next week that already includes this fix. brian |
From: Angus M. <an...@n-...> - 2005-06-29 13:40:52
|
On Wed Jun 29 13:29 , 'Conley, Tom' <tom...@us...> sent: >I am receiving the following error when attempting make of Sleuthkit >under Fedora Core 4 >any ideas?? > >ntfs.c : In function 'ntfs_proc_attseq': >ntfs.c:770: error: invalid lvalue in assignment OK - here's the story : Fedora Core 4 ships with gcc 4.0.0 gcc 4.0.0 prohibits the use of casts as lvalues in assignments. The only exception to this is Apple's version of gcc 4.0.0 which has an additional switch to modify the behaviour. Current best bet for a solution would be to switch to an older gcc 3.x.x for compilation of sleuthkit (or switch to MacOS ;-) ). --> Brian - this is going to be a heck of a gotcha as more users upgrade to gcc 4.x.x |
From: Angus M. <an...@n-...> - 2005-06-29 13:21:36
|
On Wed Jun 29 13:29 , 'Conley, Tom' <tom...@us...> sent: >I am receiving the following error when attempting make of Sleuthkit >under Fedora Core 4 >any ideas?? > >ntfs.c : In function 'ntfs_proc_attseq': >ntfs.c:770: error: invalid lvalue in assignment The compiler is barfing on the use of the cast on the lhs of the assignment. I'm working on a way round it now. |
From: Conley, T. <tom...@us...> - 2005-06-29 12:29:46
|
I am receiving the following error when attempting a make of Sleuthkit under Fedora Core 4. Any ideas??

ntfs.c : In function 'ntfs_proc_attseq':
ntfs.c:770: error: invalid lvalue in assignment

Thomas G Conley, GSEC
I/T Senior Security Analyst
USAA Information Technology Security Management
(813) 615-5576 |
From: <te...@me...> - 2005-06-29 09:32:19
|
Hello everybody

I'm still trying to find the best way to get my hard drive working again, and I'm wondering if there's a way to recreate the FAT structure with all the recovered data and to write it back either to the disk or to the disk image? It's a bit tedious to have to sort all the files renamed by sorter and to put them on another drive while nearly all my files are still on the drive and recoverable.

--
This message has been checked by MailScanner |
From: Lisa M. <34....@gm...> - 2005-06-28 23:16:29
|
On 6/28/05, Eagle Investigative Services, Inc. <in...@ea...> wrote:

> >> Probably just bad memory in my case!
>
> Not so Lisa. I had that exact experience, where I dd'd out the partition
> with bs=8192 and was not able to mount it afterwards. Changing the bs to
> 512 worked and allowed me to mount the partition.
>
> The argument is kinda moot now, since there's no real reason/need to
> carve out the partitions anymore, as TSK can read an entire drive image.

Yeah, that's the scenario I was looking for, and personally I just don't get why changing bs down to 512 would make a blind bit of difference in that scenario... Lisa. |
From: Brian C. <ca...@ce...> - 2005-06-28 22:46:57
|
[sourceforge seems to be conflicting with the sleuthkit.org spam filter, so I'll try from this account] > From: Brian Carrier <ca...@sl...> > Date: June 28, 2005 11:39:48 AM EST > To: "te...@me..." <te...@me...> > Cc: sle...@li... > Subject: Re: [sleuthkit-users] Could sorter recreate the directory > structure ? > > > > On Jun 28, 2005, at 6:30 AM, te...@me... wrote: > >> Hello evererybody. >> >> I'm trying to recover datas on an ibm hard disk which system files >> have been trashed by maxtor utility maxblast. >> I used autopsy or directly sorter and it seems that nearly all my >> files are recoverable. The problem is that they are >> all renamed and dispatched into categories directories.I would like >> to know if there's a way to recreate automatically >> the directory structure while saving the files since the filelists >> created by sorter contains all the path and names of >> the files. > > > I've never tried it before, but check out Dave Henkewick's recoup: > > http://metawire.org/~henk/recoup > > brian |
From: John T. H. <joh...@gm...> - 2005-06-28 19:10:08
|
On 6/11/05, esrkq yahoo <es...@ya...> wrote: > Hi Guys, > slightly off topic but does anyone know of a utility > that will mount a dd image under windows xp. ...interesting thread so far... http://chitchat.at.infoseek.co.jp/vmware/vdk.html VMWare's Back. I've successfully used this tool with *vmware* images, and it claims to now support DD images. I have *not* tried it yet. But it's a handy tool for VMWare images, if nothing else. |
From: Eagle I. S. Inc. <in...@ea...> - 2005-06-28 12:52:18
|
>> Probably just bad memory in my case!

Not so Lisa. I had that exact experience, where I dd'd out the partition with bs=8192 and was not able to mount it afterwards. Changing the bs to 512 worked and allowed me to mount the partition.

The argument is kinda moot now, since there's no real reason/need to carve out the partitions anymore, as TSK can read an entire drive image.

Niall.

> On 6/27/05, Aaron Stone <aa...@se...> wrote:
>> On Mon, Jun 27, 2005, ""Barry J. Grundy"" <bg...@im...> said:
>> > In short, the blocksize is important because it sets the unit size for
>> > your "count" (or skip, etc.).
>>
>> This is something I've always wondered: if you set the blocksize to, say,
>> 8192, but do not specify a skip or a count, and capture an entire
>> partition, is there a downside to setting the larger blocksize? My
>> understanding is that it can make the capture faster because you're
>> reading and buffering larger blocks. Scaling up to the size of the disk
>> cache should get faster, no?
>
> Yes, optimising the bs size will yield faster imaging times, bs=512 is
> a little slow at the best of times.
>
> The way I had remembered this / some other thread on the list was that
> an NTFS partition was DD'd and wasn't loadable, but by redoing it with
> bs=512 it was.
>
> Probably just bad memory in my case!
>
> Lisa.
>
> -------------------------------------------------------
> SF.Net email is sponsored by: Discover Easy Linux Migration Strategies
> from IBM. Find simple to follow Roadmaps, straightforward articles,
> informative Webcasts and more! Get everything you need to get up to
> speed, fast. http://ads.osdn.com/?ad_idt77&alloc_id492&op=click
> _______________________________________________
> sleuthkit-users mailing list
> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users
> http://www.sleuthkit.org |
From: Barry J. G. <bg...@im...> - 2005-06-28 12:20:19
|
On Mon, 2005-06-27 at 20:21 +0000, Aaron Stone wrote:

> This is something I've always wondered: if you set the blocksize to, say,
> 8192, but do not specify a skip or a count, and capture an entire
> partition, is there a downside to setting the larger blocksize? My
> understanding is that it can make the capture faster because you're
> reading and buffering larger blocks. Scaling up to the size of the disk
> cache should get faster, no?

Yes, it will speed things up, but there is a limit to that (a sweet spot). It's hardware dependent, but for local HDD transfers (there are lots of factors that affect this) we use 4096 (4k). For network transfers, 32k seems to work best.

The problem with willy-nilly blocksize settings comes into play when you use things like "conv=noerror,sync", etc. In those cases, it's best to set a low (like 512) blocksize because entire blocks will be discarded when errors are found. Better to keep them low.

The only point I was making in Lisa's original post was that if you are *carving* partitions and whatnot, you should set the blocksize to 512 (sector size) because in general, the carving is done based on sector units (partition offsets, etc). skip and count are sector locations, so bs must be 512. Heck, for giggles you could use byte offsets and set the bs to one if you want.

--
/***************************************
Special Agent Barry J. Grundy
NASA Office of Inspector General
Computer Crimes Division
Goddard Space Flight Center
Code 190 Greenbelt Rd.
Greenbelt, MD 20771
(301)286-3358
**************************************/ |
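Barry's sector-unit point can be demonstrated end to end on a synthetic image; a minimal sketch, where all file names and the 63-sector offset are just example values:

```shell
# Build a tiny synthetic "disk" of 100 zeroed sectors, then carve 10
# sectors starting at sector 63 (the classic first-partition offset).
# With bs=512, skip and count are measured in sectors, so the carved
# file must be exactly 10 * 512 = 5120 bytes.
dd if=/dev/zero of=disk.dd  bs=512 count=100 2>/dev/null
dd if=disk.dd   of=part1.dd bs=512 skip=63 count=10 2>/dev/null
wc -c < part1.dd
```

Rerunning the second dd with bs=1024 and the same skip/count would start 32 KiB too far in and copy twice as much, which is exactly the mismatch Barry warns about.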
From: Lisa M. <34....@gm...> - 2005-06-28 11:50:02
|
On 6/27/05, Aaron Stone <aa...@se...> wrote:

> On Mon, Jun 27, 2005, ""Barry J. Grundy"" <bg...@im...> said:
> > In short, the blocksize is important because it sets the unit size for
> > your "count" (or skip, etc.).
>
> This is something I've always wondered: if you set the blocksize to, say,
> 8192, but do not specify a skip or a count, and capture an entire
> partition, is there a downside to setting the larger blocksize? My
> understanding is that it can make the capture faster because you're
> reading and buffering larger blocks. Scaling up to the size of the disk
> cache should get faster, no?

Yes, optimising the bs size will yield faster imaging times, bs=512 is a little slow at the best of times.

The way I had remembered this / some other thread on the list was that an NTFS partition was DD'd and wasn't loadable, but by redoing it with bs=512 it was.

Probably just bad memory in my case!

Lisa. |
From: <te...@me...> - 2005-06-28 11:31:45
|
Hello everybody.

I'm trying to recover data from an IBM hard disk whose system files have been trashed by the Maxtor utility MaxBlast. I used autopsy, or directly sorter, and it seems that nearly all my files are recoverable. The problem is that they are all renamed and dispatched into category directories. I would like to know if there's a way to recreate the directory structure automatically while saving the files, since the file lists created by sorter contain all the paths and names of the files.

--
This message has been checked by MailScanner |
From: inconnu <in...@me...> - 2005-06-28 10:31:26
|
Hello everybody.

I'm trying to recover data from an IBM hard disk whose system files have been trashed by the Maxtor utility MaxBlast. I used autopsy, or directly sorter, and it seems that nearly all my files are recoverable. The problem is that they are all renamed and dispatched into category directories. I would like to know if there's a way to recreate the directory structure automatically while saving the files, since the file lists created by sorter contain all the paths and names of the files.

--
This message has been checked by MailScanner |
From: Aaron S. <aa...@se...> - 2005-06-27 20:21:28
|
On Mon, Jun 27, 2005, ""Barry J. Grundy"" <bg...@im...> said:

> On Mon, 2005-06-27 at 19:56 +0100, Lisa Muir wrote:
> >> >> So what I did was try starting at 63 and 62, and tried ending at one
> >> >> more than the end point and none of these options worked.
> >> >
> >> > Also make sure that the 'bs=' value for
> >> > 'dd' is set to 512.
> >>
> >> Was wondering if you could explain why it is important to make the bs
> >> value 512? I know this is the usual disk block size, but why would
> >> that matter as long as DD appends what it writes out sequentially to
> >> what was last written?
>
> Didn't see the original thread here (too lazy to look at the
> archives ;-), but when you are dealing with counting units (in this case
> sectors), for carving partitions/mbr/etc., then it's important to
> explicitly set the block size.
>
> If the units we are counting with are sectors, then we want a bs=512,
> not bs=1024. 1024 would give us TWO sectors per count (therefore the
> chunk of data we end up with is twice as big as we want). Things get
> even worse if you are using skip or seek with the wrong bs, or a bs
> other that what you *think* it is.
>
> In short, the blocksize is important because it sets the unit size for
> your "count" (or skip, etc.).

This is something I've always wondered: if you set the blocksize to, say, 8192, but do not specify a skip or a count, and capture an entire partition, is there a downside to setting the larger blocksize? My understanding is that it can make the capture faster because you're reading and buffering larger blocks. Scaling up to the size of the disk cache should get faster, no?

Aaron |
From: Barry J. G. <bg...@im...> - 2005-06-27 19:36:18
|
On Mon, 2005-06-27 at 19:56 +0100, Lisa Muir wrote:

> >> So what I did was try starting at 63 and 62, and tried ending at one
> >> more than the end point and none of these options worked.
> >
> > Also make sure that the 'bs=' value for
> > 'dd' is set to 512.
>
> Was wondering if you could explain why it is important to make the bs
> value 512? I know this is the usual disk block size, but why would
> that matter as long as DD appends what it writes out sequentially to
> what was last written?

Didn't see the original thread here (too lazy to look at the archives ;-), but when you are dealing with counting units (in this case sectors), for carving partitions/mbr/etc., then it's important to explicitly set the block size.

If the units we are counting with are sectors, then we want a bs=512, not bs=1024. 1024 would give us TWO sectors per count (therefore the chunk of data we end up with is twice as big as we want). Things get even worse if you are using skip or seek with the wrong bs, or a bs other than what you *think* it is.

In short, the blocksize is important because it sets the unit size for your "count" (or skip, etc.).

--
/***************************************
Special Agent Barry J. Grundy
NASA Office of Inspector General
Computer Crimes Division
Goddard Space Flight Center
Code 190 Greenbelt Rd.
Greenbelt, MD 20771
(301)286-3358
**************************************/ |
From: Lisa M. <34....@gm...> - 2005-06-27 18:57:05
|
I was just reviewing some of the list archives, and found this message curious....

On Wednesday, March 31, 2004, Brian Carrier wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On Mar 31, 2004, at 12:26 PM, Eagle Investigative Services, Inc. wrote:
>
>> Brian,
>>
>> The process of dd'ing out the partition worked for only one image file,
>> and not for the three others. All 4 image files were dd'd the same way.
>
> Which one worked? The first one or one of the latter ones?
>
>> So what I did was try starting at 63 and 62, and tried ending at one
>> more than the end point and none of these options worked.
>
> It could be that the file system is corrupt. The first partition of
> almost every disk starts at sector 63, so that shouldn't be a problem.
> Make sure you are using the original disk image and not one of the
> partition images as input. Also make sure that the 'bs=' value for
> 'dd' is set to 512.

Was wondering if you could explain why it is important to make the bs value 512? I know this is the usual disk block size, but why would that matter as long as DD appends what it writes out sequentially to what was last written?

TIA,
Lisa.

> Send an 'xxd' output of the first sector of the partition image if
> nothing else works:
>
> dd if=part-img.dd count=1 | xxd
>
> brian
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.2.4 (Darwin)
>
> iD8DBQFAaxhQOK1gLsdFTIsRAonLAJ9AmeJM39h41j70Tp/d3r+KEDZBXQCgiH1A
> 7B2rcrJZ4CFtlOkX9uD5uyI=
> =Kkb7
> -----END PGP SIGNATURE-----
>
> -------------------------------------------------------
> This SF.Net email is sponsored by: IBM Linux Tutorials
> Free Linux tutorial presented by Daniel Robbins, President and CEO of
> GenToo technologies. Learn everything from fundamentals to system
> administration. http://ads.osdn.com/?ad_id=1470&alloc_id=3638&op=click
> _______________________________________________
> sleuthkit-users mailing list
> https://lists.sourceforge.net/lists/listinfo/sleuthkit-users
> http://www.sleuthkit.org |
From: Brian C. <ca...@sl...> - 2005-06-26 18:34:46
|
Can you run 'ls -i' on these files to find the "inode" number and then run 'istat' on them? You can also run 'istat' by using the Metadata mode in Autopsy. Is this the same file system that you previously noted that it had some consistency issues? Did you ever run 'fsck' or similar to fix the problems? brian On Jun 24, 2005, at 8:12 AM, fu...@gm... wrote: > Hi > > Once more, I'm looking at a NTFS-Disk. When I mount ro the disk, I can > see > in a directory the File archive2005.pst with the following permission: > > -r-------- 1 0 2005-06-06 08:30 archive2004.pst > > So file size is 0, the rest seems okay. But when I go to the directory > in > Autopsy, the file does not appear. What did happen here? Any > information I > can provide you? I use Autopsy 2.05 and sleuthkit 2.01 on a Debian > Sarge. > > The other issue is: I have a file which looks the following when I do > ls -l > in the mounted disk: > > ?--------- 1 0 2002-04-21 11:46 54I70048.jpg > > When I look into Autops, the file shows not up. If I try to copy the > file > from the mounted filesystem to somewhere, cp bothers me with "argument > is > illegal". So there is something wrong with this file but I'm wondering > why > it's not shown in Autopsy? |
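Brian's two-step lookup (ls -i on the mounted, read-only copy, then istat against the image) can be wrapped in a tiny helper; a sketch, where the image name, mount point, and file name in the usage line are placeholders, not values from this thread:

```shell
# Print the inode number (the MFT entry, on NTFS) that 'ls -i' reports
# for a file on the mounted, read-only copy of the disk.
inode_of() { ls -i "$1" | awk '{print $1}'; }

# Usage against the image (all names are placeholder examples):
#   istat disk.dd "$(inode_of /mnt/case/archive2005.pst)"
```

Feeding that number to istat shows what the file system's own metadata says about the file, independently of what the directory listing in Autopsy shows.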
From: Barry J. G. <bg...@im...> - 2005-06-24 13:42:09
|
On Fri, 2005-06-24 at 15:12 +0200, fu...@gm... wrote: > Once more, I'm looking at a NTFS-Disk. When I mount ro the disk, I can see > in a directory the File archive2005.pst with the following permission: > > -r-------- 1 0 2005-06-06 08:30 archive2004.pst > > So file size is 0, the rest seems okay. But when I go to the directory in > Autopsy, the file does not appear. What did happen here? Any information I > can provide you? I use Autopsy 2.05 and sleuthkit 2.01 on a Debian Sarge. I have not seen this sort of thing before, so maybe someone with more experience can give you specific details, but until a better answer comes along, I'm curious: What does the output of "stat" give you on the mounted disk for each of those files? Compare that to the output of istat (maybe with -b ?). I'm wondering if the inode returned by stat will have any info that istat can see from the MFT. Do the $STANDARD_INFORMATION attributes match, and do the $FILE_NAME attributes match? Or does istat return the entry as unallocated? It's possible that none of this will answer your question, and maybe someone else has a direct answer, but until then... -- /*************************************** Special Agent Barry J. Grundy NASA Office of Inspector General Computer Crimes Division Goddard Space Flight Center Code 190 Greenbelt Rd. Greenbelt, MD 20771 (301)286-3358 **************************************/ |