Tristan Rhodes wrote:
> I have been unable to duplicate the error with "dar -t". I will keep you informed if it shows up again.
sure, no problem.
>>dar 2.1.0_pre5 is quite the same as release 2.1.0. If you have the
>>opportunity, try the released version, but that should not change the
>>problem, I guess.
> I have just updated to release 2.1.0 and all test results are from this version. After we resolve this issue, I am going to switch to the 64-bit version of DAR. I just read in LIMITATIONS about the extra memory that is used with the infinint version. Since I usually back up an entire drive, the 64-bit version should use significantly less memory (Dar uses up to 82 MB on full-system backups). Is this correct? I do not think I will be limited by the 18,446,744,073 GB limitation. (Is there any system in the world that uses anything close to this much space?)
that's correct: the 64-bit and 32-bit versions use less memory to store
each file in the catalogue (which is kept in memory while creating the
archive).
18 EB is far beyond the total disk space of any system I have heard of
today, but if you have a look at the "Hall of Fame" on dar's web site,
you will see that we already have 1.4 TB archives... that's exceptional
for today, but in two years the exceptional archives will probably be
around 14 TB, maybe more, and in ten years, maybe even the 18 EB limit
will start to matter for a normal system... it's just a matter of time.
But by then, 128-bit integers will be standard on all systems, and I
will only have to add a new #define in the code to get a 128-bit mode
in libdar... currently I cannot, as I have not found a standard and
portable C/C++ type that defines a 128-bit integer... that too is a
matter of time. In the meantime, for those who want to do exceptional
things, there is still the native dar version (using infinint), which is
a bit more fond of memory (but exceptional guys have exceptional memory
too ;-) ) and which has no such limitation.
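Just to sketch the kind of #define switch I have in mind (this is not
libdar code, and the type name below is made up): some compilers do
provide a 128-bit integer as an extension, for example gcc's __int128,
but an extension is exactly what I want to avoid, since it is neither
standard nor portable:

    // Sketch only, not libdar code: pick the widest unsigned integer
    // type available at compile time.
    #include <cstdint>

    #if defined(__SIZEOF_INT128__)
    // unsigned __int128 is a gcc extension, not standard C++.
    typedef unsigned __int128 archive_size_t;   // hypothetical name
    #else
    typedef std::uint64_t archive_size_t;       // fall back to 64-bit mode
    #endif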
>>large file support is enabled in dar, but the kernel and the standard
>>libraries must also be ready to support large files. What is your
>>Linux kernel version ("uname -a" should tell you)?
> linux:~/sarab/sarab-0.1.0 # uname -a
> Linux linux 2.4.20-4GB #1 Wed Aug 6 18:26:21 UTC 2003 i686 unknown unknown GNU/Linux
that kernel is fine: only kernels 2.0.x and below have a 2 GB file size
limitation.
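For anyone checking this on their own system: besides the kernel and
glibc, the application itself has to be compiled with large file
support (on 32-bit Linux this usually means building with
-D_FILE_OFFSET_BITS=64). A tiny test, as an illustration only and not
part of dar, is to print the size of off_t:

    // Illustration only, not part of dar: prints the size of off_t.
    // Built with -D_FILE_OFFSET_BITS=64 on 32-bit Linux it should print
    // 8 (64-bit file offsets); a plain 32-bit build without it prints 4.
    //
    //   g++ -D_FILE_OFFSET_BITS=64 check_off_t.cpp -o check_off_t
    #include <sys/types.h>
    #include <cstdio>

    int main()
    {
        std::printf("sizeof(off_t) = %lu\n",
                    static_cast<unsigned long>(sizeof(off_t)));
        return 0;
    }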
>>can you try to create a large file with a tool other than dar? (using dd
>>for example: "dd if=/dev/zero of=/tmp/sample.dat bs=1048576 count=3072"
>>should make a 3 GB file in /tmp/sample.dat)
> linux:~/sarab/sarab-0.1.0 # dd if=/dev/zero of=/tmp/sample.dat bs=1048576 count=6144
> 6144+0 records in
> 6144+0 records out
> linux:~/sarab/sarab-0.1.0 # ls -l /tmp/sample.dat
> -rw-r--r-- 1 root root 6442450944 Feb 21 14:12 /tmp/sample.dat
> It doesn't look like a filesystem limitation; I was able to create a 6 GB file.
> Thanks for your great program, Denis.
you are welcome,
> Tristan Rhodes