#2 2 GB dump file limit?

Status: closed
Owner: Stelian Pop
Labels: None
Priority: 5
Updated: 2000-05-06
Created: 2000-04-11
Creator: TJ McNeely
Private: No

Discussion

  • Remember me? :)

    I am having another issue... one of our servers has about a 3.5 GB backup. We dump it to a file on the
    backup server, then we run the tape of all servers... however this one seems to get stopped at
    about 2 GB. Is this a limit of ext2fs?

    root@bill:~:# backup

    --- START FULL BACKUP ---

    DUMP: Connection to ted established.
    DUMP: Date of this level 0 dump: Mon Apr 10 22:06:09 2000
    DUMP: Date of last level 0 dump: the epoch
    DUMP: Dumping /dev/hda1 (/) to /backup/bill.dump on host ted
    DUMP: Label: none
    DUMP: mapping (Pass I) [regular files]
    DUMP: mapping (Pass II) [directories]
    DUMP: estimated 3184949 tape blocks.
    DUMP: Volume 1 started at: Mon Apr 10 22:06:54 2000
    DUMP: dumping (Pass III) [directories]
    DUMP: dumping (Pass IV) [regular files]
    DUMP: 10.65% done, finished in 0:41
    DUMP: 24.13% done, finished in 0:31
    DUMP: 39.95% done, finished in 0:22
    DUMP: 54.45% done, finished in 0:16
    DUMP: write: File too large
    DUMP: write error 2097170 blocks into volume 1
    DUMP: Do you want to restart?: ("yes" or "no") no
    DUMP: The ENTIRE dump is aborted.

    --- END ---

    root@bill:~:# cat /root/bin/backup
    #!/bin/bash
    printf "\n\n --- START FULL BACKUP --- \n\n\n"
    /sbin/dump -0au -f ted:/backup/bill.dump /dev/hda1
    printf "\n\n --- END --- \n\n"
    root@bill:~:#

    --

    (ted)

    -rw-r--r-- 1 root root 2147483647 Apr 10 21:22 bill.dump

    ------------

    Thanks in advance :)

    Knop Head

     
  • I am starting to wonder whether there is a special option at boot time to allow files larger than 2 GB... On my Solaris
    boxes the maximum file size is one terabyte... unless you put a special option on to limit the file size to 2 GB.

    Thanks
    Knop
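
    A quick way to check whether the 2 GB ceiling comes from the filesystem/kernel (a hypothetical probe, not
    something suggested in the thread) is to try writing one byte just past the 2 GB mark in a sparse file:

    ```shell
    #!/bin/bash
    # Hypothetical probe: write one byte at byte offset 2^31.
    # On a 2.2 kernel with ext2fs and no large-file support this fails
    # with "File too large" -- the same error dump reported above.
    if dd if=/dev/zero of=/tmp/lfs_probe bs=1 count=1 seek=2147483648 2>/dev/null
    then
        echo "large files OK"
    else
        echo "2 GB limit in effect"
    fi
    rm -f /tmp/lfs_probe
    ```

    The file is sparse, so the probe costs almost no disk space; only the seek past 2^31-1 bytes has to succeed.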

     

    The 2 GB is a limit of ext2fs. You can bypass this limit by enabling large file support (but you will
    surely need to recompile your kernel, have a recent glibc, and maybe recompile some programs... I cannot
    really help you with this; maybe there is a FAQ somewhere?).

    As for this limit seen from dump's point of view, you can use -M (the multi-volume option) coupled
    with -B (to specify the 2 GB size of each volume).

    Hope this helps.

    Stelian.
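
    Adapting the original backup script along these lines might look like the following (an untested sketch:
    the volume size and the behaviour of -f as a name prefix under -M are assumptions to verify against
    your dump version's man page):

    ```shell
    #!/bin/bash
    # Sketch: split the level-0 dump into ~2 GB volumes with -M/-B.
    # -B takes the per-volume size in 1 KB blocks; 2000000 blocks keeps
    # each volume just under the 2^31-1 byte ext2fs ceiling from the log.
    VOLSIZE=2000000
    CMD="/sbin/dump -0au -M -B $VOLSIZE -f ted:/backup/bill.dump /dev/hda1"
    # Print the command instead of running it, so the sketch is safe to test;
    # with -M, dump should treat the -f name as a prefix (bill.dump001, ...).
    echo "$CMD"
    ```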

     
  • Stelian Pop
    2000-05-06

    • assigned_to: nobody --> stelian
    • status: open --> closed