WARNING! I've never needed the 'ntfsdump -[oO] device file' case and I'm
afraid it's broken; don't use it for now!
Note 2: after an ntfsdump you can check the dump with 'ntfsresize -i file'.
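
For instance, to sanity-check a dump named ntfs.dump (the name is just
illustrative):

  ntfsresize -i ntfs.dump

ntfsresize's -i option only prints information, so the check doesn't touch
the dump.
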
Szaka
On Thu, 10 Apr 2003, Szakacsits Szabolcs wrote:
> Here it is. It's fully functional and works for me. But it's for the 1.8
> devel tree (it should patch cleanly against the 1.7.1 source as well) and
> it's ALPHA code. There are lots of FIXME's in the source, and the internals,
> especially the walk functions, will change dramatically. Right now the
> control flow is an unmaintainable mess; there are kludges and a lot of code
> duplicated from ntfsresize.
>
> Just patch, autogen.sh and make install.
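> Something like this should do it (the source directory and patch filename
> here are hypothetical):
>
>   cd ntfsprogs-1.8              # your devel tree checkout
>   patch -p1 < ntfsdump.patch    # adjust the -p level to your tree layout
>   ./autogen.sh
>   make install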
>
> Usage: ntfsdump [options] device
>
> -o file  --output file   Dump NTFS to file (the file must not already exist)
> -O file                  Dump NTFS to file, overwriting it if it exists
> -m       --metadata      Dump *only* metadata (for NTFS experts)
>
> If the -o argument is '-', the dump goes to stdout; otherwise it goes to a
> sparse file. The sparse file should be packed with 'tar -Sjcf'; that's what
> I found to be the most efficient in terms of both speed and compression [if
> I remember correctly].
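>
> For example (the device and file names are only illustrative):
>
>   ntfsdump -o ntfs.dump /dev/hda1
>   tar -Sjcf ntfs.dump.tar.bz2 ntfs.dump
>
> tar's -S handles the holes in the sparse file efficiently and -j compresses
> with bzip2. Streaming straight into a compressor should also work, e.g.
> 'ntfsdump -o - /dev/hda1 | bzip2 > ntfs.dump.bz2'.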
>
> With the --metadata option user data is not dumped; only NTFS internal data
> is saved. Moreover, for better compression several things are also nullified
> (I plan more, but almost all of it will be optional). As an example, here
> are the stats from a --metadata dump of a volume: it was 4 GB originally,
> the compressed dump is only 1.5 MB, and it's still mountable after
> decompression.
>
> Num of MFT records        =   53353
> Num of used MFT records   =   53151
> Wiped unused MFT data     =  549275
> Wiped deleted MFT data    =   32971
> Wiped resident user data  = 1688107
> Wiped timestamp data      = 3990400
> Wiped totally             = 6260753
> Compressed size           = 1571289
>
> (the wiped and compressed figures are in bytes)
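>
> A dump like that is presumably produced by combining -m with -o, e.g. (the
> names are illustrative):
>
>   ntfsdump -m -o meta.dump /dev/hda1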
>
> In both cases [full or metadata-only] the dumps are mountable, e.g.
>
> ntfsdump -o ntfs.dump device
> mount -o loop ntfs.dump /mnt/ntfs
>
> I've found the compressed metadata-only dumps are usually between 0.3 and
> 4 MB, largely independent of volume size. The reason is that the size mostly
> depends on the number of MFT records in use, and most people have
> 30,000 - 200,000 files.
>
> Questions, comments and ideas are welcome. Note that I consider this mainly
> a development tool, not the NTFS imaging tool Ian is working on.
>
> Cheers,
>
> Szaka
>