#22 Large archive add bug in MinGW

Status: closed-fixed
Owner: nobody
Labels: None
Priority: 5
Updated: 2013-02-27
Created: 2008-06-28
Creator: Willus
Private: No

Zip 3.0i (3.0 release candidate) quits abruptly when I try to add about 3 GiB of .mpg files (roughly 30 files) to an archive that is already just over 8 GiB in size (~2800 files). I compiled it with MinGW on Windows XP. Below is the output of the command:

zip -r -sd test1.zip myfolder -i *.mpg

sd: Zipfile name 'test1.zip'
sd: Command line read
sd: Reading archive
sd: Scanning files
sd: Applying filters
sd: Checking dups
sd: Scanning files to update
sd: fcount = 28
sd: Open zip file and create temp file
sd: Creating new zip file
sd: Going through old zip file
sd: Zipping up new entries
adding: file...
(list of new files added)
sd: Get comment if any
sd: Writing central directory

zip error: Interrupted (aborting)

No temp file is left, and the original zip file is not changed. After some printf-style debugging, I've determined that zip aborts in a malloc() call inside append_ulong_to_mem(), which is called from putcentral(). All values passed to the functions look okay (no null pointers), so it's a mystery at this point. It will be a little cryptic without my debugging source code, but here is my latest debugging output:

from inside main():
sd: n=4300869089, t=4293310199, k=1455, z=00AFF740
from inside putcentral():
pc3
1block=00000000, offset=0, blocksize=0
from inside append_ulong_to_mem():
pPtr=0022FCCC
blocksize=0022FCD4
mallocing...
back from malloc, *pPtr=00DD1A30
writing ulong to mem...
adding 4 to offset...
returning.
from inside putcentral():
2block=00DD1A30, offset=4, blocksize=1024
3block=00DD1A30, offset=6, blocksize=1024
4block=00DD1A30, offset=8, blocksize=1024
5block=00DD1A30, offset=10, blocksize=1024
6block=00DD1A30, offset=12, blocksize=1024
from inside append_ulong_to_mem():
pPtr=0022FCCC
writing ulong to mem...
adding 4 to offset...
returning.
from inside putcentral():
7block=00DD1A30, offset=16, blocksize=1024
from inside append_ulong_to_mem():
pPtr=0022FCCC
writing ulong to mem...
adding 4 to offset...
returning.
from inside putcentral():
pc4
block freed.
from main():
sd: n=4304009029, t=4296445245, k=1456, z=00AFFA20
from inside putcentral():
pc3
1block=00000000, offset=0, blocksize=0
from inside append_ulong_to_mem():
pPtr=0022FCCC
blocksize=0022FCD4
mallocing...

zip error: Interrupted (aborting)

I will continue to look into it. Debugging suggestions are welcome.
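
For reference, a wrapper along these lines is one way to do this kind of printf-style malloc tracing; the function name and call site below are hypothetical and not the actual debugging code used here:

#include <stdio.h>
#include <stdlib.h>

/* Print the requested size and the returned pointer for every allocation so
 * the failing call shows up in the trace. */
static void *debug_malloc(size_t size, const char *where)
{
    void *p = malloc(size);
    fprintf(stderr, "sd: malloc(%lu) in %s -> %p\n",
            (unsigned long)size, where, p);
    return p;
}

int main(void)
{
    char *block = debug_malloc(1024, "example call site");
    free(block);
    return 0;
}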

Discussion

  • Willus
    2008-06-30

    I think I found the bug. It's due to a memcpy() that writes outside the bounds of memory that was just allocated with malloc. The fix below eliminated the abort, and the resultant test archive passed the "unzip -t" test.

    Fix: Lines 1192 and 1193 in zipfile.c in the function add_central_zip64_extra_field():

    len = pZipListEntry->cext - oldefsize - len;
    memcpy(pTemp + len, pExtraFieldPtr + oldefsize, len);

    ... should be replaced by...

    // len = pZipListEntry->cext - oldefsize - len; (or just delete this line)
    memcpy(pTemp + len, pExtraFieldPtr + oldefsize, pZipListEntry->cext - oldefsize - len);
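
    To make the failure mode concrete, here is a minimal standalone sketch of the same pattern, with simplified names and made-up sizes rather than the actual zip source. Once len is overwritten it serves as both the destination offset and the copy length of the second memcpy(), so the tail of the old extra-field block lands at the wrong offset and, for some size combinations, past the end of the freshly malloc'd buffer:

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t cext      = 100;  /* size of the old central extra-field block  */
        size_t len       = 4;    /* offset of the old Zip64 field in the block */
        size_t oldefsize = 20;   /* size of the old Zip64 field                */
        size_t efsize    = 28;   /* size of the new Zip64 field                */
        char old_block[100] = {0};                  /* stands in for cextra    */
        char *pExtraFieldPtr = old_block + len;     /* old Zip64 field         */

        char *pTemp = malloc(cext - oldefsize + efsize);      /* 108 bytes */
        assert(pTemp != NULL);

        /* Copy the part of the old block before the Zip64 field (4 bytes). */
        memcpy(pTemp, old_block, len);

        /* Buggy version: len becomes 100 - 20 - 4 = 76, so the copy targets
         * bytes 76..151 of a 108-byte buffer -- a heap overflow:
         *
         *     len = cext - oldefsize - len;
         *     memcpy(pTemp + len, pExtraFieldPtr + oldefsize, len);
         */

        /* Fixed version: keep len as the destination offset and compute the
         * tail length separately; this writes bytes 4..79, leaving the final
         * efsize (28) bytes free for the new Zip64 field. */
        memcpy(pTemp + len, pExtraFieldPtr + oldefsize,
               cext - oldefsize - len);

        free(pTemp);
        return 0;
    }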

     
  • Steven Schweda
    2008-06-30

    "//" is _not_ the most portable C comment delimiter.

     
  • Ed Gordon
    2008-07-01

    I've traced the change through and it looks correct to me; I'd appreciate others confirming this. Making this change may delay the Zip 3.0 release by a couple of days, as a couple of tests need to be done.

    It looks like add_local_zip64_extra_field(), which does the same thing but for the local header, approaches this slightly differently to avoid this problem. Someone might compare the two.

    If some testing with creating and updating Zip64 archives can be done, that will speed getting Zip 3.0 out. This should be the last change before release.

    Here is the updated code in zipfile.c (starting at line 1183):

    else
    {
      /* get the old Zip64 extra field out and add new */
      oldefsize = usTemp + ZIP_EF_HEADER_SIZE;
      if ((pTemp = (char *) malloc(pZipListEntry->cext - oldefsize + efsize)) == NULL) {
        return ZE_MEM;
      }
      len = (extent)(pExtraFieldPtr - pZipListEntry->cextra);
      memcpy(pTemp, pZipListEntry->cextra, len);
      memcpy(pTemp + len, pExtraFieldPtr + oldefsize,
             pZipListEntry->cext - oldefsize - len);
      pZipListEntry->cext -= oldefsize;
      pExtraFieldPtr = pTemp + pZipListEntry->cext;
      pZipListEntry->cext += efsize;
      free(pZipListEntry->cextra);
      pZipListEntry->cextra = pTemp;
    }

    Thanks all.
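
    As a sanity check on the size bookkeeping above, here is a standalone sketch (simplified names and made-up sizes, not the zip source) asserting that the corrected block copies exactly cext - oldefsize bytes into a buffer of cext - oldefsize + efsize bytes, so exactly efsize bytes remain at pTemp + cext for the new Zip64 field:

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Any sizes with len + oldefsize <= cext exercise the same path. */
        size_t cext      = 100;  /* old central extra-field block size */
        size_t len       = 30;   /* offset of the old Zip64 field      */
        size_t oldefsize = 20;   /* old Zip64 field size               */
        size_t efsize    = 28;   /* new Zip64 field size               */

        char cextra[100] = {0};
        char *pExtraFieldPtr = cextra + len;

        size_t alloc = cext - oldefsize + efsize;             /* 108 bytes */
        char *pTemp = malloc(alloc);
        assert(pTemp != NULL);

        memcpy(pTemp, cextra, len);                           /* 30 bytes  */
        memcpy(pTemp + len, pExtraFieldPtr + oldefsize,
               cext - oldefsize - len);                       /* 50 bytes  */

        /* The pointer/size updates from the block above. */
        cext -= oldefsize;               /* 80: bytes copied so far       */
        pExtraFieldPtr = pTemp + cext;   /* where the new field will go   */
        cext += efsize;                  /* 108: final extra-field length */

        assert(cext == alloc);
        assert(pExtraFieldPtr + efsize == pTemp + alloc);

        free(pTemp);
        return 0;
    }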

     
  • Ed Gordon
    2008-07-01

    It seems the impact of this bug is that any extra fields written after the Zip64 extra field end up at a slightly wrong location in the central directory entry's extra-field block. This can cause the crashes that Willus is seeing in specific cases. Note that neither the order of extra fields in an existing extra-field block nor the position of an existing Zip64 extra field within it is determined, so without digging into the structure of the existing archive it's hard to tell when the bug might happen. Given the possibility of archive corruption, it seems this change needs to be made in Zip 3.0 to fix the bug.
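
    For context, here is a sketch (not zip source) of how a central-directory extra-field block is laid out and walked: each field is a 2-byte little-endian header ID, a 2-byte data size, and then the data, so a field written at a shifted offset throws off every (id, size) pair that follows it:

    #include <stdio.h>

    /* Read a little-endian 16-bit value. */
    static unsigned get16(const unsigned char *p)
    {
        return (unsigned)p[0] | ((unsigned)p[1] << 8);
    }

    /* Walk an extra-field block, printing each (header ID, data size) pair.
     * If an earlier field was written at the wrong offset, these pairs no
     * longer line up and the rest of the block is misparsed. */
    static void walk_extra_block(const unsigned char *block, size_t blen)
    {
        size_t off = 0;
        while (off + 4 <= blen) {
            unsigned id   = get16(block + off);
            unsigned size = get16(block + off + 2);
            printf("extra field 0x%04x, %u data bytes\n", id, size);
            off += 4 + size;
        }
    }

    int main(void)
    {
        /* 0x0001 = Zip64 field (8 data bytes here), then a 0x5455 "UT"
         * timestamp field (5 data bytes). */
        const unsigned char block[] = {
            0x01, 0x00, 0x08, 0x00,  0, 0, 0, 0, 0, 0, 0, 0,
            0x55, 0x54, 0x05, 0x00,  0, 0, 0, 0, 0
        };
        walk_extra_block(block, sizeof block);
        return 0;
    }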

     
  • Willus
    2008-07-01

    Thank you for your efforts, Ed. I just wanted to comment that the impact is more than just the extra fields getting written to the wrong location. A memcpy() is done that writes outside the bounds of the malloc'd memory, which is always bad news, and it's in generic code (not specific to MinGW). That's why I wanted to get this into the release. I have done brief testing with the fix; as I said, it resolved my problem. Now when I create a large archive and then add another large set of files to it (8 GiB -> 11 GiB), it runs flawlessly and tests perfectly with "unzip -t".

     
  • Ed Gordon
    2008-09-28

    This should be fixed in Zip 3.0 as released. Please confirm this bug can be closed. Thanks!

     
  • Willus
    2008-09-28

    Confirmed. Okay w/me to close it.

     
  • Steven Schweda
    2013-02-27

    • status: open --> closed-fixed