Re: [sleuthkit-users] Extracting partitions from dd image
From: Barry J. G. <bg...@im...> - 2005-06-28 12:20:19
On Mon, 2005-06-27 at 20:21 +0000, Aaron Stone wrote:
> This is something I've always wondered: if you set the blocksize to,
> say, 8192, but do not specify a skip or a count, and capture an entire
> partition, is there a downside to setting the larger blocksize? My
> understanding is that it can make the capture faster because you're
> reading and buffering larger blocks. Scaling up to the size of the
> disk cache should get faster, no?

Yes, it will speed things up, but there is a limit to that (a sweet
spot). It's hardware dependent, but for local HDD transfers (there are
lots of factors that affect this) we use 4096 (4k). For network
transfers, 32k seems to work best.

The problem with willy-nilly blocksize settings comes into play when
you use things like "conv=noerror,sync", etc. In those cases it's best
to set a small blocksize (like 512), because entire blocks are
discarded when errors are found. Better to keep them small.

The only point I was making in Lisa's original post was that if you
are *carving* partitions and whatnot, you should set the blocksize to
512 (the sector size), because in general the carving is done in
sector units (partition offsets, etc.). skip and count are sector
locations, so bs must be 512. Heck, for giggles you could use byte
offsets and set bs to one if you want.

--
/***************************************
 Special Agent Barry J. Grundy
 NASA Office of Inspector General
 Computer Crimes Division
 Goddard Space Flight Center
 Code 190
 Greenbelt Rd.
 Greenbelt, MD 20771
 (301)286-3358
***************************************/
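
To make the carving point above concrete, here is a rough sketch of the
kind of dd run being described, using The Sleuth Kit's mmls to read the
partition layout from an image. The image name (disk.dd), output name
(part1.dd), device name (/dev/hda), and the offset/length values are
hypothetical stand-ins for whatever mmls actually reports on a real
image:

    # Print the partition table of the image (-t dos for a DOS-style
    # partition table); mmls reports start, end, and length in
    # 512-byte sectors.
    mmls -t dos disk.dd

    # Suppose (hypothetically) mmls shows a partition starting at sector
    # 63 with a length of 2056257 sectors. With bs=512, skip= and count=
    # line up directly with those sector values:
    dd if=disk.dd of=part1.dd bs=512 skip=63 count=2056257

    # When reading a failing drive with conv=noerror,sync, a small
    # blocksize means a read error costs at most one sector of data:
    dd if=/dev/hda of=disk.dd bs=512 conv=noerror,sync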