restore memory usage

Dan Cox
  • Dan Cox

    Dan Cox - 2005-06-30

    Using the latest dump (0.4b40) I recently dumped an ext3 filesystem with around 1.5 million inodes in use (mostly hardlinks) to a single 36GB file. This worked great. The problem is that the restore operation from this file seems to take around 3GB of system memory. It starts out at a couple of hundred MB after reading in all the metadata, then grows larger and larger as the restore runs. Is this normal? Will it be impossible to restore larger volumes in the future, once they hit the x86 maximum per-process memory limit?

    • Stelian Pop

      Stelian Pop - 2005-07-01

      You assume correctly, restore does cache the entire directory tree in memory. With such a huge number of inodes you will end up using all the memory. The only option is to go buy a 64-bit processor :)

      That being said, the indexed directory feature added in b38 consumes a lot more memory than before.

      Could you test restore 0.4b37 and compare its memory usage with restore 0.4b40 ?

      If it is necessary, I suppose I could add a flag to restore to turn off the 'indexed directory' feature and go back to the old way of doing things...

      You can also recompile restore with a lower DIRHASH_SIZE constant in restore.h but this is only a temporary workaround...
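      To see why DIRHASH_SIZE drives memory usage, here is a minimal sketch of the technique being described. This is not the actual restore code; it assumes a chained hash table keyed by filename, with one fixed bucket array per cached directory, so the fixed overhead scales with DIRHASH_SIZE times the number of directories:

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical bucket count for illustration; the real constant
       * lives in restore.h. Shrinking it lowers the per-directory
       * overhead at the cost of longer collision chains. */
      #define DIRHASH_SIZE 1024

      struct entry {
          char *name;
          unsigned long ino;
          struct entry *next;   /* chain of colliding names */
      };

      /* Every cached directory carries a full bucket array, so memory
       * grows as directories * DIRHASH_SIZE * sizeof(pointer), plus one
       * small entry per file name. */
      struct dirhash {
          struct entry *buckets[DIRHASH_SIZE];
      };

      static unsigned hash(const char *s)
      {
          unsigned h = 5381;
          while (*s)
              h = h * 33 + (unsigned char)*s++;
          return h % DIRHASH_SIZE;
      }

      static void insert(struct dirhash *d, const char *name, unsigned long ino)
      {
          unsigned h = hash(name);
          struct entry *e = malloc(sizeof(*e));
          e->name = strdup(name);
          e->ino = ino;
          e->next = d->buckets[h];
          d->buckets[h] = e;
      }

      static unsigned long lookup(const struct dirhash *d, const char *name)
      {
          for (struct entry *e = d->buckets[hash(name)]; e; e = e->next)
              if (strcmp(e->name, name) == 0)
                  return e->ino;
          return 0;   /* not found */
      }

      int main(void)
      {
          struct dirhash *d = calloc(1, sizeof(*d));
          insert(d, "passwd", 42);
          insert(d, "shadow", 43);
          printf("passwd -> %lu\n", lookup(d, "passwd"));
          printf("fixed overhead per directory: %zu bytes\n", sizeof(*d));
          free(d);   /* entries leaked; this is only a demo */
          return 0;
      }
      ```

      With 1.5 million directories cached this way, even the empty bucket arrays add up, which is why lowering DIRHASH_SIZE (or disabling the index entirely) reduces the footprint.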


    • Dan Cox

      Dan Cox - 2005-07-02

      I tried 0.4b37, but got the following errors when trying the restore:

      restore -T /var/tmp -r -v -f /mnt/temp/srv.dump
      Extract directories from tape
      Mangled directory: reclen not multiple of 4 reclen less than DIRSIZ (1 < 12)
      Mangled directory: reclen less than DIRSIZ (0 < 12)
      Mangled directory: reclen less than DIRSIZ (0 < 12)
      Mangled directory: reclen less than DIRSIZ (0 < 12)
      . is not on the tape
      Root directory is not on tape
      abort? [yn] n

      Calculate extraction list.
      Warning: `.' missing from directory .
      Warning: `..' missing from directory .
      Extract new leaves.
      Check pointing the restore
      expected next file 5873665, got 12
      expected next file 5873665, got 15
      expected next file 5873665, got 16
      expected next file 5873665, got 17
      expected next file 5873665, got 18
      expected next file 5873665, got 19
      ... (more errors of the same form)

      So I recompiled 0.4b40 with DIRHASH_SIZE 128.
      Memory usage is much lower now, around 680MB, but I'm having a new problem: I get a 'maximum file size exceeded' error on the restoresymtable file. It maxes out at 2GB. I've got dump compiled with largefile support and, of course, the system/filesystem supports large files.

    • Stelian Pop

      Stelian Pop - 2005-07-05

      Indeed, the symtab file is not opened in large file mode. Try the patch from: , it should fix your problem.
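      For illustration, the general large-file-support technique on 32-bit Linux (a sketch of the approach, not necessarily the exact patch) is to request 64-bit file offsets before any system header is included, so open/fopen/lseek can address files past 2GB:

      ```c
      /* Standard glibc LFS knob: must come before any #include.
       * Equivalent to compiling with -D_FILE_OFFSET_BITS=64.
       * On 64-bit platforms off_t is already 64 bits. */
      #define _FILE_OFFSET_BITS 64

      #include <stdio.h>
      #include <sys/types.h>

      int main(void)
      {
          /* 8 bytes means file offsets are no longer capped at 2GB. */
          printf("sizeof(off_t) = %zu\n", sizeof(off_t));
          return 0;
      }
      ```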

    • Dan Cox

      Dan Cox - 2005-07-08

      Thanks.. I was finally able to get a complete restore. I cast my vote for the 'low memory usage' flag :)

      • Stelian Pop

        Stelian Pop - 2005-07-08

        Well, you can try the CVS version then :)

        I've turned the hashing option off by default (it's -H in restore, see man page), so it should work with the default params now.
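        As a usage fragment (the dump file path below is from the earlier post, and the exact flag semantics are as described above, hashing off by default and `-H` opting back in):

        ```shell
        # New default: directory hashing off, lower memory usage
        restore -r -v -f /mnt/temp/srv.dump

        # Opt back into hashed directory lookups (faster, more memory)
        restore -r -H -v -f /mnt/temp/srv.dump
        ```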


        • Dan Cox

          Dan Cox - 2005-07-14

          I tried the CVS version and it worked fine. Memory usage was about 550MB. Still higher than I'd prefer, but should be fine for now. Thanks!

