Thread: RE: [sleuthkit-users] Extracting partitions from dd image
From: Eagle I. S. I. <in...@ea...> - 2004-03-31 17:26:28
Brian,

The process of dd'ing out the partition worked for only one image file, and not for the three others. All 4 image files were dd'd the same way. I get the error message that it's not a valid NTFS system. I've tried altering the starting point and ending point by one block to no avail.

When I do fdisk -lu 9g.dd I get:

  9g.dd1    (start) 63    (end) 17789897    HPFS/NTFS

So what I did was try starting at 63 and 62, and tried ending at one more than the end point, and none of these options worked. Am I missing something?

Niall.
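For reference, the carve being attempted here would look roughly like the following, using the start/end sectors from the fdisk -lu listing above (the output name part1.dd is illustrative, and the count is end - start + 1 sectors):

  # carve the first partition out of the full disk image (sector units)
  dd if=9g.dd of=part1.dd bs=512 skip=63 count=17789835

  # sanity-check the result: the first sector should look like an NTFS boot sector
  dd if=part1.dd bs=512 count=1 | xxd | head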
From: Lisa M. <34....@gm...> - 2005-06-27 18:57:05
I was just reviewing some of the list archives, and found this message curious....

On Wednesday, March 31, 2004, Brian Carrier wrote:

> On Mar 31, 2004, at 12:26 PM, Eagle Investigative Services, Inc. wrote:
>
>> Brian,
>>
>> The process of dd'ing out the partition worked for only one image
>> file, and not for the three others. All 4 image files were dd'd the same way.
>
> Which one worked? The first one or one of the latter ones?
>
>> So what I did was try starting at 63 and 62, and tried ending at one
>> more than the end point and none of these options worked.
>
> It could be that the file system is corrupt. The first partition of
> almost every disk starts at sector 63, so that shouldn't be a problem.
> Make sure you are using the original disk image and not one of the
> partition images as input. Also make sure that the 'bs=' value for
> 'dd' is set to 512.

Was wondering if you could explain why it is important to make the bs value 512? I know this is the usual disk block size, but why would that matter as long as dd appends what it writes out sequentially to what was last written?

TIA,
Lisa.

> Send an 'xxd' output of the first sector of the partition image if
> nothing else works:
>
> dd if=part-img.dd count=1 | xxd
>
> brian
From: Barry J. G. <bg...@im...> - 2005-06-27 19:36:18
On Mon, 2005-06-27 at 19:56 +0100, Lisa Muir wrote:

> >> So what I did was try starting at 63 and 62, and tried ending at one
> >> more than the end point and none of these options worked.
> >
> > Also make sure that the 'bs=' value for
> > 'dd' is set to 512.
>
> Was wondering if you could explain why it is important to make the bs
> value 512? I know this is the usual disk block size, but why would
> that matter as long as dd appends what it writes out sequentially to
> what was last written?

Didn't see the original thread here (too lazy to look at the archives ;-), but when you are dealing with counting units (in this case sectors), for carving partitions/mbr/etc., then it's important to explicitly set the block size.

If the units we are counting with are sectors, then we want bs=512, not bs=1024. 1024 would give us TWO sectors per count (therefore the chunk of data we end up with is twice as big as we want). Things get even worse if you are using skip or seek with the wrong bs, or a bs other than what you *think* it is.

In short, the blocksize is important because it sets the unit size for your "count" (or skip, etc.).

--
/***************************************
Special Agent Barry J. Grundy
NASA Office of Inspector General
Computer Crimes Division
Goddard Space Flight Center
Code 190 Greenbelt Rd.
Greenbelt, MD 20771
(301)286-3358
***************************************/
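To make the unit point concrete, here is a small sketch using the MBR case Barry mentions (disk.dd is a placeholder image name):

  # correct: exactly one 512-byte sector -- the MBR
  dd if=disk.dd of=mbr.dd bs=512 count=1

  # with bs=1024 the same count grabs two sectors (1024 bytes),
  # so the output is twice as large as intended
  dd if=disk.dd of=mbr-oversized.dd bs=1024 count=1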
From: Aaron S. <aa...@se...> - 2005-06-27 20:21:28
On Mon, Jun 27, 2005, "Barry J. Grundy" <bg...@im...> said:

> Didn't see the original thread here (too lazy to look at the
> archives ;-), but when you are dealing with counting units (in this case
> sectors), for carving partitions/mbr/etc., then it's important to
> explicitly set the block size.
>
> If the units we are counting with are sectors, then we want bs=512,
> not bs=1024. 1024 would give us TWO sectors per count (therefore the
> chunk of data we end up with is twice as big as we want). Things get
> even worse if you are using skip or seek with the wrong bs, or a bs
> other than what you *think* it is.
>
> In short, the blocksize is important because it sets the unit size for
> your "count" (or skip, etc.).

This is something I've always wondered: if you set the blocksize to, say, 8192, but do not specify a skip or a count, and capture an entire partition, is there a downside to setting the larger blocksize? My understanding is that it can make the capture faster because you're reading and buffering larger blocks. Scaling up to the size of the disk cache should get faster, no?

Aaron
From: Barry J. G. <bg...@im...> - 2005-06-28 12:20:19
On Mon, 2005-06-27 at 20:21 +0000, Aaron Stone wrote:

> This is something I've always wondered: if you set the blocksize to, say,
> 8192, but do not specify a skip or a count, and capture an entire
> partition, is there a downside to setting the larger blocksize? My
> understanding is that it can make the capture faster because you're
> reading and buffering larger blocks. Scaling up to the size of the disk
> cache should get faster, no?

Yes, it will speed things up, but there is a limit to that (a sweet spot). It's hardware dependent, but for local HDD transfers (there are lots of factors that affect this) we use 4096 (4k). For network transfers, 32k seems to work best.

The problem with willy-nilly blocksize settings comes into play when you use things like "conv=noerror,sync", etc. In those cases, it's best to set a low blocksize (like 512) because entire blocks will be discarded when errors are found. Better to keep them low.

The only point I was making in Lisa's original post was that if you are *carving* partitions and whatnot, you should set the blocksize to 512 (sector size) because in general, the carving is done based on sector units (partition offsets, etc.). skip and count are sector locations, so bs must be 512. Heck, for giggles you could use byte offsets and set the bs to one if you want.

--
/***************************************
Special Agent Barry J. Grundy
NASA Office of Inspector General
Computer Crimes Division
Goddard Space Flight Center
Code 190 Greenbelt Rd.
Greenbelt, MD 20771
(301)286-3358
***************************************/
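A rough sketch of the cases described above, with placeholder device and file names: a straight acquisition where block size only affects speed, an error-tolerant run where a small block size limits what one bad read costs, and the byte-offset variant with bs=1 (sector 63 of the earlier example is byte 32256):

  # whole-device acquisition: bs only affects throughput, not what gets copied
  dd if=/dev/sda of=disk.dd bs=4096

  # flaky media: keep bs at the sector size so a read error pads/loses
  # only one 512-byte sector instead of a whole 4k block
  dd if=/dev/sda of=disk.dd bs=512 conv=noerror,sync

  # "for giggles": the same partition carve expressed in byte offsets
  # (63 * 512 = 32256 bytes in); very slow, but shows bs is just the unit
  dd if=disk.dd of=part1.dd bs=1 skip=32256 count=9108395520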
From: Lisa M. <34....@gm...> - 2005-06-28 11:50:02
On 6/27/05, Aaron Stone <aa...@se...> wrote:

> On Mon, Jun 27, 2005, "Barry J. Grundy" <bg...@im...> said:
> > In short, the blocksize is important because it sets the unit size for
> > your "count" (or skip, etc.).
>
> This is something I've always wondered: if you set the blocksize to, say,
> 8192, but do not specify a skip or a count, and capture an entire
> partition, is there a downside to setting the larger blocksize? My
> understanding is that it can make the capture faster because you're
> reading and buffering larger blocks. Scaling up to the size of the disk
> cache should get faster, no?

Yes, optimising the block size will yield faster imaging times; bs=512 is a little slow at the best of times.

The way I had remembered this / some other thread on the list was that an NTFS partition was dd'd and wasn't loadable, but by redoing it with bs=512 it was.

Probably just bad memory in my case!

Lisa.
From: Eagle I. S. Inc. <in...@ea...> - 2005-06-28 12:52:18
>> Probably just bad memory in my case!

Not so, Lisa. I had that exact experience, where I dd'd out the partition with bs=8192 and was not able to mount it afterwards. Changing the bs to 512 worked and allowed me to mount the partition.

The argument is kinda moot now, since there's no real reason/need to carve out the partitions anymore, as TSK can read an entire drive image.

Niall.

> On 6/27/05, Aaron Stone <aa...@se...> wrote:
>> This is something I've always wondered: if you set the blocksize to, say,
>> 8192, but do not specify a skip or a count, and capture an entire
>> partition, is there a downside to setting the larger blocksize? My
>> understanding is that it can make the capture faster because you're
>> reading and buffering larger blocks. Scaling up to the size of the disk
>> cache should get faster, no?
>
> Yes, optimising the block size will yield faster imaging times; bs=512 is
> a little slow at the best of times.
>
> The way I had remembered this / some other thread on the list was that
> an NTFS partition was dd'd and wasn't loadable, but by redoing it with
> bs=512 it was.
>
> Probably just bad memory in my case!
>
> Lisa.
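The "no need to carve" point refers to the sector-offset support in the TSK command-line tools; assuming an image like the one earlier in the thread (9g.dd with an NTFS partition starting at sector 63), the tools can be pointed at the full disk image directly:

  # show the partition layout of the whole image
  mmls -t dos 9g.dd

  # run file-system tools against the partition at sector 63
  # without extracting it first
  fsstat -o 63 9g.dd
  fls -r -o 63 9g.dd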
From: Lisa M. <34....@gm...> - 2005-06-28 23:16:29
On 6/28/05, Eagle Investigative Services, Inc. <in...@ea...> wrote:

> >> Probably just bad memory in my case!
>
> Not so, Lisa. I had that exact experience, where I dd'd out the partition
> with bs=8192 and was not able to mount it afterwards. Changing the bs to
> 512 worked and allowed me to mount the partition.
>
> The argument is kinda moot now, since there's no real reason/need to
> carve out the partitions anymore, as TSK can read an entire drive image.

Yeah, that's the scenario I was looking for, and personally I just don't get why changing bs down to 512 would make a blind bit of difference in that scenario...

Lisa.
From: Geert V. A. <gee...@pa...> - 2005-08-23 10:05:12
Hi list,

Is there a possibility in Autopsy to give a list of keywords to search for on a raw disk? I know it's possible on the command line, just wondering if there is an option somewhere in Autopsy.

Thanks,
Geert VAN ACKER
From: Brian C. <ca...@ce...> - 2005-08-25 13:57:03
On Tue, Aug 23, 2005 at 12:05:03PM +0200, Geert VAN ACKER wrote:

> Hi list,
>
> Is there a possibility in Autopsy to give a list of keywords to search
> for on a raw disk?

Well, you can craft up a regular expression with all of the keywords in one term, but the results will be a pain to go through. You currently need to do them one by one.

brian
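The combined term Brian describes is just an alternation of the keywords (the ones below are invented for illustration); the same kind of search can also be run against a raw image from the shell with grep:

  # one regular-expression term covering several keywords:
  #   (invoice|transfer|password)

  # equivalent command-line search on a raw image, printing byte offsets
  grep -abiE 'invoice|transfer|password' image.dd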
From: Brian C. <ca...@sl...> - 2004-03-31 19:13:35
On Mar 31, 2004, at 12:26 PM, Eagle Investigative Services, Inc. wrote:

> Brian,
>
> The process of dd'ing out the partition worked for only one image
> file, and not for the three others. All 4 image files were dd'd the same way.

Which one worked? The first one or one of the latter ones?

> So what I did was try starting at 63 and 62, and tried ending at one
> more than the end point and none of these options worked.

It could be that the file system is corrupt. The first partition of almost every disk starts at sector 63, so that shouldn't be a problem. Make sure you are using the original disk image and not one of the partition images as input. Also make sure that the 'bs=' value for 'dd' is set to 512.

Send an 'xxd' output of the first sector of the partition image if nothing else works:

  dd if=part-img.dd count=1 | xxd

brian
From: Eagle I. S. I. <in...@ea...> - 2004-03-31 19:20:25
Brian,

I had bs=4096 to help speed things up, so I will retest with bs=512.

Niall.
From: Eagle I. S. I. <in...@ea...> - 2004-04-02 04:58:32
Brian,

Setting the bs to 512 did the trick. Thanks.

Niall.