
#36 structured storage "Insufficient memory" problem

Status: open
Owner: nobody
Labels: None
Priority: 7
Updated: 2003-08-21
Created: 2003-04-24
Creator: Jim Trainor
Private: No

The following problem was reported by David.Mansergh @ Quantel.Com.

I have recently come across a problem writing AAF files.
This problem has occurred about 5 times in the last
month in various bits of our software that write AAF files.
Most recently it has occurred when using edlaaf. I am
hoping that this will allow the problem to be identified
and fixed!

test1.edl contains approximately 500 edits referring to
approximately 500 different tapes. edlaaf appears to
successfully convert this to test1.aaf (using the
command line "edlaaf 30 test1").

When test1.aaf is dumped using infodumper, the
following error is always
reported:
***Caught hresult 0x80030008*** ("There is insufficient
memory available to complete operation")

When test1.aaf is dumped using dump, the following
error is always
reported:
dump.exe: Error : "/Header-2/Content-3b03/Mobs-1901{394}/Slots-4403{4}/" : There is insufficient memory available to complete operation.
dump.exe: Fatal error in routine "dumpStorage".
IStorage::OpenStream() failed.

During the dumping process, Windows Task Manager
indicates that there is plenty of memory available.

I have created 10 AAF files from the same edl and only
about half of these exhibit this problem. Unfortunately, in
our other bits of software we do not have the luxury of
repeating the process until it works. I guess that the
unpredictability of the process is due to differences in
the aaf file (the mobIDs change if nothing else). I don't
know what determines the order of the mobs in the AAF
file but this appears to change between files.

I have tried this with various builds of the AAF SDK,
including aaf_release.Win.1_0_1.504.zip from
SourceForge.

Is anyone else aware of this problem and has it already
been fixed?

I would be grateful for any help that you might be able to
give. The attached zip contains test1.edl, test1.aaf,
edlaaf.log, infodumper.log, dump.log. Please say if you
need any more details.

Thanks in advance, David.

Discussion

  • Jim Trainor

    Jim Trainor - 2003-04-24

    Logged In: YES
    user_id=239292

    The file(s) referred to in David Mansergh's comments were too
    large to attach so I put them in the source forge AAF group
    directory. scp them as follows:

    scp aaf.sourceforge.net:/home/groups/a/aa/aaf/htdocs/bugFiles/726916/files.zip <destination_dir>

    This should work too:

    http://aaf.sourceforge.net/bugFiles/726916/files.zip

     
  • Jim Trainor

    Jim Trainor - 2003-04-24

    Logged In: YES
    user_id=239292

    I was able to dump the file using the most recent version of
    the SDK running on Linux. InfoDumper, axDump, and
    DevUtils/dump all worked. InfoDumper and axDump both grew to
    ~75 MB in size - rather large, considering they aren't doing
    very much. The dump program only grew to ~2.1 MB.

    On my Win2K system, I see the same problem you
    described when using InfoDumper and axDump. The
    DevUtils/dump program, however, completed without
    problem. InfoDumper and axDump both grew to about 64
    MB before failing. The dump program grew to 12 MB.

     
  • Jim Trainor

    Jim Trainor - 2003-04-24

    Logged In: YES
    user_id=239292

    Phil Tudor's comments:

    I have reproduced your bug report when dumping using
    Windows SS - it bombs out with insufficient memory at the
    point you report. Note that, in this case, this file dumps *fine*
    using the current Linux SS binary, which I believe is a
    different piece of MS source code.

     
  • David Mansergh

    David Mansergh - 2003-04-28

    Logged In: YES
    user_id=752351

    > I have recently come across a problem writing AAF files.

    Since the aaf file can be read correctly using the Linux SS
    binary but not the Windows SS binary, it would appear that
    the aaf file itself is fine. This would suggest that the bug is in
    reading the aaf file, not writing it as I originally thought. I
    suppose that it is comforting to know that the data has not
    been lost - it's just not readily available!

     
  • John Emmas

    John Emmas - 2003-05-09

    Logged In: YES
    user_id=567963

    I can confirm what David has found. An application I wrote is
    being used by a customer in Canada. He sent me an AAF
    file which encountered this very same problem during an
    import. It seems as though there is an unreasonably small
    limit somewhere in the structured storage system.

    The file I was sent contained over 13 thousand (yes,
    THOUSAND) mobs - and that was only one half of the full
    edited programme! The "insufficient memory" problem
    cropped up at around 1600 mobs. In other words, only 20%
    of the file could be processed before it fell over.

    If there isn't an immediate fix available, does anyone know of
    a workaround?

     
  • John Emmas

    John Emmas - 2003-05-11

    Logged In: YES
    user_id=567963

    In my earlier comment I remarked about a file which someone
    sent me containing over 13,000 mobs. Needless to say, I
    didn't count them. I was going by the InfoDumper printout.
    However, I've just posted a bug re InfoDumper which is
    causing totals to be displayed as hexadecimal figures when
    in fact they're plain old decimal.

    This means that the file actually contained about 3,000 mobs -
    not 13,000 which in turn, means that roughly 50% of the
    mobs were processed before the structured storage problem
    caused everything to fall over. That's better I suppose but still
    not really acceptable.

     
  • David Mansergh

    David Mansergh - 2003-05-19

    Logged In: YES
    user_id=752351

    At the AAF engineering meetings in New York, Tim Bingham
    suggested that it would be worth checking if this problem
    could be avoided by using ‘eager’ rather than ‘lazy’ loading.
    This can be specified as a mode flag in
    AAFFileOpenExistingRead(). I modified infodumper to specify
    AAF_FILE_MODE_EAGER_LOADING but this caused the
    following error: AAFRESULT_NOT_IN_CURRENT_VERSION.
    The code in ImplAAFFile::OpenExistingRead() has the
    following comment:

    // Save the mode flags for now. They are not currently
    (2/4/1999) used by the OM to open the doc file. Why do we
    return an error if modeFlags != 0? Answer: because none of
    them are implemented yet.

    However, OMFile::openExistingRead()
    allows ‘OMFile::eagerLoad’ to be specified. When a new
    version of AAFCOAPI.dll is built with this mode set in
    ImplAAFFile::OpenExistingRead(), infodumper can
    successfully dump the problematic files. Hurray!

    Since this problem typically occurs in large AAF files, I
    assume that eagerly loading these files is significantly less
    than optimal. i.e. this is not the correct solution to the
    problem. One small step for man, one giant leap for mankind?
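
    For reference, here is a minimal sketch of the open call being
    described (an illustration only: it assumes the rebuilt
    AAFCOAPI.dll mentioned above, since the stock SDK rejects
    non-zero mode flags, and the AAFFileMode.h header name and the
    file path are placeholders from memory, not verified here):

    #include "AAF.h"
    #include "AAFFileMode.h"   // AAF_FILE_MODE_EAGER_LOADING (assumed header name)

    IAAFFile* OpenEagerly(const aafCharacter* path)
    {
        IAAFFile* pFile = 0;
        // With an unmodified SDK this call returns
        // AAFRESULT_NOT_IN_CURRENT_VERSION; with the rebuilt DLL the
        // file is loaded eagerly instead of lazily.
        HRESULT hr = AAFFileOpenExistingRead(path,
                                             AAF_FILE_MODE_EAGER_LOADING,
                                             &pFile);
        return SUCCEEDED(hr) ? pFile : 0;
    }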

     
  • Sydorenko Vladimir

    Logged In: YES
    user_id=788502

    I found that on Win2k it is impossible to create AAF files
    larger than 2GB - the problem is in the
    OMMSSStoredObject::createFile() and
    OMMSSStoredObject::openFile() implementations. They use the
    Win9x-compatible calls StgCreateDocfile() and
    StgOpenStorage(). After changing them to the Win2K-compatible
    calls StgCreateStorageEx() and StgOpenStorageEx() with the
    sector size set to 4096, the problem was fixed. However, in
    the standard codecs source code (CAAFCDCICodec.cpp etc.) you
    must also comment out lines like:

    if ( offset + buflen > (aafPosition_t) 2*1024*1024*1024-1 ) {
        return AAFRESULT_EOF;
    }

    I think a better approach would be to autodetect the OS
    (9x/Win2k/XP) and use the appropriate calls.

    VVS
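
    For reference, a minimal sketch of creating a compound file
    with a 4096-byte sector size via StgCreateStorageEx(), as
    described above (a standalone Win32 illustration only, not the
    actual OMMSSStoredObject code; the function name, path and
    mode flags are placeholders):

    #include <objbase.h>   // StgCreateStorageEx, STGOPTIONS; link with ole32.lib

    IStorage* CreateLargeDocfile(const wchar_t* path)
    {
        // STGOPTIONS version 1 is enough to request a sector size;
        // 4096-byte sectors select the Win2K+ format, which is not
        // subject to the 2GB limit of the 512-byte docfile format.
        STGOPTIONS opts = {0};
        opts.usVersion    = 1;
        opts.ulSectorSize = 4096;

        IStorage* pStorage = 0;
        HRESULT hr = StgCreateStorageEx(
            path,
            STGM_CREATE | STGM_READWRITE | STGM_SHARE_EXCLUSIVE,
            STGFMT_DOCFILE,        // required when STGOPTIONS is supplied
            0,                     // grfAttrs must be 0 for docfiles
            &opts,
            0,                     // security descriptor (reserved, must be NULL)
            IID_IStorage,
            reinterpret_cast<void**>(&pStorage));
        return SUCCEEDED(hr) ? pStorage : 0;
    }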

     
  • Jim Trainor

    Jim Trainor - 2003-08-19

    Logged In: YES
    user_id=239292

    StgCreateStorageEx and StgOpenStorageEx are not
    supported by the reference structured storage implementation
    currently used on UNIX platforms.

    The currently "sanctioned" method is to create AAF files that
    reference essence data in excess of 2GB is to create a
    sequence of source clip objects. Each source clip represents
    a chunk of essence that is less than 2GB in size.

    This is done using the IAAFMasterMob::CreateEssence(),
    writing the essence data until the codec returns EOF, then
    call IAAFMasterMobEx::ExtenEssence() and continue writing
    the essence data. If the essence data is stored in external
    files then this will work on all platforms, including those with
    aging structured storage implementations, 2GB file system
    limitations.

    No extended interface is required to read the essence data.
    The SDK recognizes a sequence of source clips and takes
    appropriate actions to read the data.

    This is demonstrated in the AAF/examples2/axMasterMobEx
    example program.
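
    As a rough illustration of that write loop (a sketch only, with
    header names as I recall them: BeginEssenceChunk(),
    NextEssenceChunk() and WriteNextBuffer() are hypothetical
    wrappers around IAAFMasterMob::CreateEssence(),
    IAAFMasterMobEx::ExtendEssence() and
    IAAFEssenceAccess::WriteSamples(); see AAF.h and the
    axMasterMobEx example for the real parameter lists):

    #include "AAF.h"          // IAAFMasterMob, IAAFMasterMobEx, IAAFEssenceAccess
    #include "AAFResult.h"    // AAFRESULT_EOF

    // Hypothetical wrappers - not part of the SDK.
    IAAFEssenceAccess* BeginEssenceChunk(IAAFMasterMob* mob);    // calls CreateEssence()
    IAAFEssenceAccess* NextEssenceChunk(IAAFMasterMobEx* mobEx); // calls ExtendEssence()
    HRESULT WriteNextBuffer(IAAFEssenceAccess* access);          // calls WriteSamples()

    void WriteLargeEssence(IAAFMasterMob* mob, IAAFMasterMobEx* mobEx,
                           bool (*haveMoreData)())
    {
        IAAFEssenceAccess* access = BeginEssenceChunk(mob);
        while (haveMoreData()) {
            if (WriteNextBuffer(access) == AAFRESULT_EOF) {
                // The codec hit the per-chunk (<2GB) limit: finish this
                // source clip, extend the master mob with a new one,
                // then keep writing.
                access->CompleteWrite();
                access->Release();
                access = NextEssenceChunk(mobEx);
            }
        }
        access->CompleteWrite();
        access->Release();
    }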

     
  • Sydorenko Vladimir

    Logged In: YES
    user_id=788502

    Thanks for your reply.
    In my application I already cut long clips into chunks of less
    than 2GB. But the problem still exists if the essence is
    embedded - when the total AAF file tries to exceed the 2GB
    limit (2047 MB for the whole file, not just the essence) an
    exception occurs. This happens even when I have a number of
    short clips whose total length exceeds 2GB.
    When I create a file using my modified aafcoapi.dll, I can
    create an AAF of any size, and that file can be read by the
    RC1 version of aafcoapi.dll (other applications can read such
    files, so backward compatibility is preserved).
    I need to create AAF files for clips with a duration of more
    than 10 minutes - much more.

    One more question. Does the AAF file specification contain a
    rule that a single essence's size must be less than or equal
    to 2GB? I can't find one, but maybe I am mistaken.

    VVS

     
  • Jim Trainor

    Jim Trainor - 2003-08-19

    Logged In: YES
    user_id=239292

    There are, currently, practical concerns that limit the size of
    a file to 2GB. It is not the result of any specification. The
    problem is due to discord among the various structured
    storage implementations, and the requirement to create AAF
    files that can be read on all platforms.

    For the moment, the solution is to store essence in external
    files.

     
  • Phil Tudor

    Phil Tudor - 2003-08-21

    Logged In: YES
    user_id=162067

    Here is some information on this issue from Skip Pizzi at
    Microsoft...

    Skip Pizzi writes:
    Here are some comments from Henry Lee at MS, which may
    be of value to this discussion:

    "I believe this a well-known limitation with structured
    storage's shared memory heap used for custom marshaling,
    which is set to 4MB per root open. For a single open file using
    normal 512-byte sectors, there is limit of 4MB used to
    allocate opened stream and storage objects. Typically, a
    large number of concurrent open streams or storages will
    trigger this problem.

    Typical workarounds:

    1. Revise the application to close it IStream or IStorage
    objects immediately after using them. If using transacted
    mode, try switching to direct mode. If there are stream or
    storage pointer leaks, they need to be fixed. 2. Revise the
    application to close and reopen its root storage object when
    the memory error hits. 3. Switch to 4k-sector structured
    storage files -- it doesn't have this limitation. 4. Use the
    StgOpenStorageOnILockBytes or
    StgCreateDocfileOnILockBytes APIs -- it doesn't have this
    limitation. 5. Port the app to Win64 -- IA64 or AMD64
    Windows doesn't have this limitation."

    Also, if the reference to 'Linux SS binary' means the MS SS
    Reference Implementation, it will behave differently, as Phil
    reports. Henry Lee continues:

    "The MS SS Reference Implementation shouldn't have this
    problem, since it doesn't support marshaling and all allocations
    are done using task memory.

    The shared memory support is OLE (linking and embedding)
    legacy. The 16-bit implementation used shared memory to
    marshal storage and stream objects between processes very
    efficiently, and this was carried over to 32-bit for
    compatibility. On NT and later, the shared memory heap was
    broken up into individual per-open shared memory heaps for
    security and robustness reasons.

    When the 4k-sector support was added in Win2k, this mode
    used task memory instead of shared memory, so it could
    handle a much greater number of concurrent objects in
    memory. If AAF switches to the newer 4k-sector format, it
    won't be subject to the docfile's internal shared-memory
    limitations."

     
  • Phil Tudor

    Phil Tudor - 2003-08-21
    • priority: 5 --> 7