Re: [sleuthkit-users] Extracting partitions from dd image
From: Aaron S. <aa...@se...> - 2005-06-27 20:21:28
On Mon, Jun 27, 2005, "Barry J. Grundy" <bg...@im...> said:

> On Mon, 2005-06-27 at 19:56 +0100, Lisa Muir wrote:
>> >> So what I did was try starting at 63 and 62, and tried ending at
>> >> one more than the end point, and none of these options worked.
>> >
>> > Also make sure that the 'bs=' value for 'dd' is set to 512.
>>
>> Was wondering if you could explain why it is important to make the bs
>> value 512? I know this is the usual disk block size, but why would
>> that matter as long as dd appends what it writes out sequentially to
>> what was last written?
>
> Didn't see the original thread here (too lazy to look at the
> archives ;-), but when you are dealing with counting units (in this
> case sectors) for carving partitions/MBR/etc., it's important to
> explicitly set the block size.
>
> If the units we are counting in are sectors, then we want bs=512,
> not bs=1024. 1024 would give us TWO sectors per count (so the chunk
> of data we end up with is twice as big as we want). Things get even
> worse if you are using skip or seek with the wrong bs, or a bs other
> than what you *think* it is.
>
> In short, the block size is important because it sets the unit size
> for your "count" (and skip, seek, etc.).

This is something I've always wondered: if you set the block size to,
say, 8192, but do not specify a skip or a count, and capture an entire
partition, is there a downside to setting the larger block size? My
understanding is that it can make the capture faster because you're
reading and buffering larger blocks. Scaling up to the size of the disk
cache should get faster, no?

Aaron
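
[A sketch of both points above. The image name, partition offset (sector
63), and length (100 sectors) are made up for illustration; on a real
image you would take them from the output of mmls or fdisk. The first
commands build a small dummy "disk" so the example is self-contained.]

```shell
# Build a 1000-sector dummy image and plant a marker at sector 63
# (stands in for the start of a real partition):
dd if=/dev/zero of=disk.dd bs=512 count=1000 2>/dev/null
printf 'PARTDATA' | dd of=disk.dd bs=512 seek=63 conv=notrunc 2>/dev/null

# Carving: bs=512 makes skip= and count= operate in SECTORS, so this
# extracts exactly the 100-sector partition starting at sector 63.
# With bs=1024, skip=63 count=100 would grab the wrong 100 KiB chunk.
dd if=disk.dd of=part1.dd bs=512 skip=63 count=100 2>/dev/null

# Whole-image copy: with no skip/count, a larger bs only changes the
# read/write buffer size, not what gets copied -- dd pads nothing and
# handles the final partial block, so the output is byte-identical.
dd if=disk.dd of=copy.dd bs=8192 2>/dev/null

# Sanity checks:
wc -c part1.dd        # 100 sectors * 512 bytes = 51200
head -c 8 part1.dd    # PARTDATA
cmp disk.dd copy.dd   # no output: identical
```

So the larger block size is safe for a straight whole-device capture
and usually faster; it only becomes dangerous once skip, seek, or count
enter the picture, because they are measured in units of bs.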