
Need help: got OutOfMemoryError

Help
Anonymous
2012-01-26
2013-04-30

  • Anonymous
    2012-01-26

    Good day

    I received an error during indexing

    My actions and logs:

    CREATE TABLE TEST AS
    SELECT ROWNUM ID, TO_CHAR(ROWNUM) TEXT FROM DUAL CONNECT BY ROWNUM<=500000;
    CREATE INDEX TEST$LDI$ID ON TEST(ID) 
    INDEXTYPE IS lucene.LuceneIndex 
    PARAMETERS('LogLevel:ALL;Analyzer:org.apache.lucene.analysis.standard.StandardAnalyzer;MergeFactor:500;LobStorageParameters:PCTVERSION 0 ENABLE STORAGE IN ROW CHUNK 32768 CACHE READS FILESYSTEM_LIKE_LOGGING;ExtraCols:TEXT');
    

    Log:

    *** 2012-01-26 08:42:25.357
    *** SESSION ID:(145.83) 2012-01-26 08:42:25.357
    *** CLIENT ID:() 2012-01-26 08:42:25.357
    *** SERVICE NAME:(fms_dev) 2012-01-26 08:42:25.357
    *** MODULE NAME:(PL/SQL Developer) 2012-01-26 08:42:25.357
    *** ACTION NAME:(SQL Window - DROP TABLE TEST; CREATE TABLE TEST AS SELECT ROWNUM) 2012-01-26 08:42:25.357
    
    Jan 26, 2012 8:42:25 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: Indexing column: 'ID'
    Jan 26, 2012 8:42:25 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: Internal Parameters (TEST.TEST$LDI$ID):
    TableSchema:TEST;TableName:TEST;Partition:null;TypeName:NUMBER;ColName:ID;PopulateIndex:false
    *** 2012-01-26 08:42:26.904
    Jan 26, 2012 8:42:26 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .prepareCall 'call LuceneDomainIndex.createTable(?,?)'
    Jan 26, 2012 8:42:26 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .setString 'TEST.TEST$LDI$ID'
    Jan 26, 2012 8:42:26 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .setString 'PCTVERSION 0 ENABLE STORAGE IN ROW CHUNK 32768 CACHE READS FILESYSTEM_LIKE_LOGGING'
    *** 2012-01-26 08:42:27.576
    Jan 26, 2012 8:42:27 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .prepareCall 'call LuceneDomainAdm.createQueue(?)'
    Jan 26, 2012 8:42:27 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .setString 'TEST.TEST$LDI$ID'
    Jan 26, 2012 8:42:27 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO:  ExtraTabs: null WhereCondition: null LockMasterTable: true
    IFD [Thu Jan 26 08:42:27 GMT+02:00 2012; Root Thread]: setInfoStream deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@48648a64
    IW 0 [Thu Jan 26 08:42:27 GMT+02:00 2012; Root Thread]: 
    dir=org.apache.lucene.store.OJVMDirectory@9b6a5f7f lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@12d7b7c0
    index=
    version=3.3-SNAPSHOT
    matchVersion=LUCENE_30
    analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
    delPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
    commit=null
    openMode=CREATE
    similarity=org.apache.lucene.search.DefaultSimilarity
    termIndexInterval=128
    mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler
    default WRITE_LOCK_TIMEOUT=1000
    writeLockTimeout=1000
    maxBufferedDeleteTerms=-1
    ramBufferSizeMB=139.0
    maxBufferedDocs=-1
    mergedSegmentWarmer=null
    mergePolicy=[LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=10, maxMergeSize=2147483648, maxMergeSizeForOptimize=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, useCompoundFile=false, noCFSRatio=0.1]
    maxThreadStates=8
    readerPooling=false
    readerTermsIndexDivisor=1
    Jan 26, 2012 8:42:27 AM org.apache.lucene.indexer.LuceneDomainIndex getUserDataStore
    INFO: UserDataStoreClass=org.apache.lucene.indexer.DefaultUserDataStore
    Jan 26, 2012 8:42:27 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: ExtraCols: TEXT User Data Store: org.apache.lucene.indexer.DefaultUserDataStore@9ba3c0a4
    Jan 26, 2012 8:42:27 AM org.apache.lucene.indexer.TableIndexer index
    INFO: index(Logger logger, IndexWriter writer, String col, boolean withMasterColumn), Performing:
    SELECT /*+ DYNAMIC_SAMPLING(L$MT,0) RULE NOCACHE(L$MT) */ L$MT.rowid,L$MT."ID",TEXT FROM TEST.TEST L$MT for update nowait
    *** 2012-01-26 08:43:29.326
    IW 0 [Thu Jan 26 08:43:29 GMT+02:00 2012; Root Thread]: DW: setAborting
    IW 0 [Thu Jan 26 08:43:29 GMT+02:00 2012; Root Thread]: DW: exception in updateDocument aborting=true
    IW 0 [Thu Jan 26 08:43:29 GMT+02:00 2012; Root Thread]: DW: docWriter: abort
    IW 0 [Thu Jan 26 08:43:29 GMT+02:00 2012; Root Thread]: DW: docWriter: abort waitIdle done
    *** 2012-01-26 08:43:31.279
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: DW: docWriter: done abort; success=true
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: hit exception adding document
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: hit OutOfMemoryError inside addDocument
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: rollback
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: all running merges have aborted
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: rollback: done finish merges
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: rollback: infos=
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: DW: docWriter: abort
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: DW: docWriter: abort waitIdle done
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: DW: docWriter: done abort; success=true
    IFD [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: now checkpoint "null" [0 segments ; isCommit = false]
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: now flush at close waitForMerges=false
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: all running merges have aborted
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: now call final commit()
    IW 0 [Thu Jan 26 08:43:31 GMT+02:00 2012; Root Thread]: at close: 
    Exception in thread "Root Thread" java.lang.OutOfMemoryError
        at org.apache.lucene.index.TermsHashPerField.rehashPostings(TermsHashPerField.java)
        at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java)
        at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java)
        at org.apache.lucene.indexer.TableIndexer.index(TableIndexer.java)
        at org.apache.lucene.indexer.LuceneDomainIndex.ODCIIndexCreate(LuceneDomainIndex.java)
    

    Extra info:

    OS: Windows Server 2008 SP1 Enterprise x64 (Intel Xeon E5620 @ 2.40 GHz, 4 GB RAM)
    DB: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0

    SELECT * FROM v$sgainfo

    Fixed SGA Size 1386208 No
    Redo Buffers 12009472 No
    Buffer Cache Size 436207616 Yes
    Shared Pool Size 260046848 Yes
    Large Pool Size 8388608 Yes
    Java Pool Size 301989888 Yes
    Streams Pool Size 8388608 Yes
    Shared IO Pool Size 159383552 Yes
    Granule Size 8388608 No
    Maximum SGA Size 1506570240 No
    Startup overhead in Shared Pool 67108864 No
    Free SGA Memory Available 478150656

    Then I found the maximum row count in table TEST (same structure) for which indexing succeeds: 262143 rows

    CREATE TABLE TEST AS
    SELECT ROWNUM ID, TO_CHAR(ROWNUM) TEXT FROM DUAL CONNECT BY ROWNUM<=262143;
    CREATE INDEX TEST$LDI$ID ON TEST(ID) 
    INDEXTYPE IS lucene.LuceneIndex 
    PARAMETERS('LogLevel:ALL;Analyzer:org.apache.lucene.analysis.standard.StandardAnalyzer;MergeFactor:500;LobStorageParameters:PCTVERSION 0 ENABLE STORAGE IN ROW CHUNK 32768 CACHE READS FILESYSTEM_LIKE_LOGGING;ExtraCols:TEXT');
    

    Log:

    *** 2012-01-26 09:28:57.904
    *** SESSION ID:(149.839) 2012-01-26 09:28:57.904
    *** CLIENT ID:() 2012-01-26 09:28:57.904
    *** SERVICE NAME:(fms_dev) 2012-01-26 09:28:57.904
    *** MODULE NAME:(PL/SQL Developer) 2012-01-26 09:28:57.904
    *** ACTION NAME:(SQL Window - DROP TABLE TEST; CREATE TABLE TEST AS SELECT ROWNUM) 2012-01-26 09:28:57.904
    
    Jan 26, 2012 9:28:57 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: Indexing column: 'ID'
    Jan 26, 2012 9:28:57 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: Internal Parameters (TEST.TEST$LDI$ID):
    TableSchema:TEST;TableName:TEST;Partition:null;TypeName:NUMBER;ColName:ID;PopulateIndex:false
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .prepareCall 'call LuceneDomainIndex.createTable(?,?)'
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .setString 'TEST.TEST$LDI$ID'
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .setString 'PCTVERSION 0 ENABLE STORAGE IN ROW CHUNK 32768 CACHE READS FILESYSTEM_LIKE_LOGGING'
    *** 2012-01-26 09:28:58.920
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .prepareCall 'call LuceneDomainAdm.createQueue(?)'
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex createLuceneStore
    INFO: .setString 'TEST.TEST$LDI$ID'
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO:  ExtraTabs: null WhereCondition: null LockMasterTable: true
    IFD [Thu Jan 26 09:28:58 GMT+02:00 2012; Root Thread]: setInfoStream deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@48648a64
    IW 0 [Thu Jan 26 09:28:58 GMT+02:00 2012; Root Thread]: 
    dir=org.apache.lucene.store.OJVMDirectory@9b6a5f7f lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@12d7b7c0
    index=
    version=3.3-SNAPSHOT
    matchVersion=LUCENE_30
    analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
    delPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
    commit=null
    openMode=CREATE
    similarity=org.apache.lucene.search.DefaultSimilarity
    termIndexInterval=128
    mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler
    default WRITE_LOCK_TIMEOUT=1000
    writeLockTimeout=1000
    maxBufferedDeleteTerms=-1
    ramBufferSizeMB=139.0
    maxBufferedDocs=-1
    mergedSegmentWarmer=null
    mergePolicy=[LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=10, maxMergeSize=2147483648, maxMergeSizeForOptimize=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, useCompoundFile=false, noCFSRatio=0.1]
    maxThreadStates=8
    readerPooling=false
    readerTermsIndexDivisor=1
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex getUserDataStore
    INFO: UserDataStoreClass=org.apache.lucene.indexer.DefaultUserDataStore
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: ExtraCols: TEXT User Data Store: org.apache.lucene.indexer.DefaultUserDataStore@9ba3c0a4
    Jan 26, 2012 9:28:58 AM org.apache.lucene.indexer.TableIndexer index
    INFO: index(Logger logger, IndexWriter writer, String col, boolean withMasterColumn), Performing:
    SELECT /*+ DYNAMIC_SAMPLING(L$MT,0) RULE NOCACHE(L$MT) */ L$MT.rowid,L$MT."ID",TEXT FROM TEST.TEST L$MT for update nowait
    *** 2012-01-26 09:29:48.170
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]: commit: start
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]: commit: enter lock
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]: commit: now prepare
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]: prepareCommit: flush
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]: now trigger flush reason=explicit flush
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]:   start flush: applyAllDeletes=true
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]:   index before flush 
    IW 0 [Thu Jan 26 09:29:48 GMT+02:00 2012; Root Thread]: DW: flush postings as segment _0 numDocs=262143
    *** 2012-01-26 09:29:54.670
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW: new segment has no vectors
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW: flushedFiles=[_0.fdt, _0.frq, _0.nrm, _0.fdx, _0.tii, _0.tis, _0.prx, _0.fnm]
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW: flush: segment=_0(3.3):C262143
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW:   ramUsed=59.67 MB newFlushedSize=16.585 MB (9.085 MB w/o doc stores) docs/MB=15,805.769 new/old=15.226%
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW: flush time 6515 msec
    IFD [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: now checkpoint "null" [1 segments ; isCommit = false]
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: apply all deletes during flush
    BD 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: applyDeletes: no deletes; skipping
    BD 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: prune sis=org.apache.lucene.index.SegmentInfos@67a5df9d minGen=1 packetCount=0
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: clearFlushPending
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: LMP: findMerges: 1 segments
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: LMP: seg=_0(3.3):C262143 level=7.2403226 size=16.585 MB
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: LMP:   level 6.4903226 to 7.2403226: 1 segments
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: CMS: now merge
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: CMS:   index: _0(3.3):C262143
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: CMS:   no more merges pending; now return
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: startCommit(): start
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: startCommit index=_0(3.3):C262143 changeCount=3
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: done all syncs
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: pendingCommit != null
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: wrote segments file "segments_1"
    IFD [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: now checkpoint "segments_1" [1 segments ; isCommit = true]
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: done
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: waitForMerges
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: waitForMerges done
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: now flush at close waitForMerges=true
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: now trigger flush reason=explicit flush
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]:   start flush: applyAllDeletes=true
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]:   index before flush _0(3.3):C262143
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW: flush: no docs; skipping
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: apply all deletes during flush
    BD 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: applyDeletes: no deletes; skipping
    BD 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: prune sis=org.apache.lucene.index.SegmentInfos@67a5df9d minGen=1 packetCount=0
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: clearFlushPending
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: CMS: now merge
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: CMS:   index: _0(3.3):C262143
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: CMS:   no more merges pending; now return
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: waitForMerges
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: waitForMerges done
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: now call final commit()
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: start
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: enter lock
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: now prepare
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: prepareCommit: flush
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: now trigger flush reason=explicit flush
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]:   start flush: applyAllDeletes=true
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]:   index before flush _0(3.3):C262143
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: DW: flush: no docs; skipping
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: apply all deletes during flush
    BD 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: applyDeletes: no deletes; skipping
    BD 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: prune sis=org.apache.lucene.index.SegmentInfos@67a5df9d minGen=1 packetCount=0
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: clearFlushPending
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: startCommit(): start
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]:   skip startCommit(): no changes pending
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: pendingCommit == null; skip
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: commit: done
    IW 0 [Thu Jan 26 09:29:54 GMT+02:00 2012; Root Thread]: at close: _0(3.3):C262143
    

    As I understand it, the error occurs when ramUsed exceeds about 60 MB (here ramUsed = 59.67 MB)

    Please help me solve this issue.
    Thank you

     
  • Hi:
       Which LDI version are you using?
       The latest release, based on Lucene 3.5.0, was released a few days ago.
       You should add the AutoTuneMemory:true parameter; it estimates the IndexWriter RAM usage automatically, using 50% of the OracleRuntime.getJavaPoolSize() value.
       For example:

    SQL> CREATE TABLE TEST AS
      2  SELECT ROWNUM ID, TO_CHAR(ROWNUM) TEXT FROM DUAL CONNECT BY ROWNUM<=500000;
    Table created.
    Elapsed: 00:00:02.05
    SQL> CREATE INDEX TEST$LDI$ID ON TEST(ID) 
      2  INDEXTYPE IS lucene.LuceneIndex 
      3  PARAMETERS('LogLevel:INFO;AutoTuneMemory:true;Analyzer:org.apache.lucene.analysis.standard.StandardAnalyzer;MergeFactor:500;LobStorageParameters:PCTVERSION 0 ENABLE STORAGE IN ROW CHUNK 32768 CACHE READS FILESYSTEM_LIKE_LOGGING;ExtraCols:TEXT');
    Index created.
    Elapsed: 00:01:39.11
    

      Log:

    IW 0 [Thu Jan 26 18:07:54 GMT-03:00 2012; Root Thread]: 
    dir=org.apache.lucene.store.OJVMDirectory@9b6a5f7f lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@12d7b7c0
    index=
    version=3.5-SNAPSHOT
    matchVersion=LUCENE_34
    analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
    delPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
    commit=null
    openMode=CREATE
    similarity=org.apache.lucene.search.DefaultSimilarity
    termIndexInterval=128
    mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler
    default WRITE_LOCK_TIMEOUT=1000
    writeLockTimeout=1000
    maxBufferedDeleteTerms=-1
    ramBufferSizeMB=33.0
    maxBufferedDocs=-1
    mergedSegmentWarmer=null
    mergePolicy=[TieredMergePolicy: maxMergeAtOnce=10, maxMergeAtOnceExplicit=30, maxMergedSegmentMB=5120.0, floorSegmentMB=2.0, forceMergeDeletesPctAllowed=10.0, segmentsPerTier=10.0, useCompoundFile=true, noCFSRatio=0.1]
    maxThreadStates=8
    readerPooling=false
    readerTermsIndexDivisor=1
    Jan 26, 2012 6:07:54 PM org.apache.lucene.indexer.LuceneDomainIndex getUserDataStore
    INFO: UserDataStoreClass=org.apache.lucene.indexer.DefaultUserDataStore
    Jan 26, 2012 6:07:55 PM org.apache.lucene.indexer.LuceneDomainIndex ODCIIndexCreate
    INFO: ExtraCols: TEXT User Data Store: org.apache.lucene.indexer.DefaultUserDataStore@53c50ae9
    Jan 26, 2012 6:07:55 PM org.apache.lucene.indexer.TableIndexer index
    INFO: index(Logger logger, IndexWriter writer, String col, boolean withMasterColumn), Performing:
    SELECT /*+ DYNAMIC_SAMPLING(L$MT,0) RULE NOCACHE(L$MT) */ L$MT.rowid,L$MT."ID",TEXT FROM SCOTT.TEST L$MT for update nowait
    *** 2012-01-26 18:08:26.981
    IW 0 [Thu Jan 26 18:08:26 GMT-03:00 2012; Root Thread]: DW:   RAM: balance allocations: usedMB=34.314 vs trigger=33 deletesMB=0 byteBlockFree=0 perDocFree=0 charBlockFree=0
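    As a rough illustration of the 50% rule described above, here is a standalone sketch (not LDI source code). The pool-size argument stands in for OracleRuntime.getJavaPoolSize(), which only exists inside the database-resident JVM, and LDI may apply further caps of its own, as the ramBufferSizeMB=33.0 in this log suggests.

    ```java
    // Standalone sketch of the AutoTuneMemory heuristic described in this reply:
    // size the IndexWriter RAM buffer at 50% of the OJVM java pool.
    public class AutoTuneSketch {

        // javaPoolBytes stands in for OracleRuntime.getJavaPoolSize(), which is
        // only callable inside the database JVM; here it is passed in directly.
        static double autoTunedRamBufferMB(long javaPoolBytes) {
            return (javaPoolBytes / 2.0) / (1024.0 * 1024.0);
        }

        public static void main(String[] args) {
            // Java Pool Size from the v$sgainfo output above: 301989888 bytes (288 MB)
            System.out.println(autoTunedRamBufferMB(301989888L)); // 144.0
        }
    }
    ```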
    

       Best regards, Marcelo.

     

  • Anonymous
    2012-01-27

    Good day

    version=3.3-SNAPSHOT
    And I think LDI uses AutoTuneMemory:true as the default.

     

  • Anonymous
    2012-01-27

    Can you advise how to set my Oracle memory parameters so that LDI works well?
    For example, what does your SELECT * FROM v$sgainfo show?

     
  • Hi:
       The main parameter is java_pool_size, but using Oracle 11g's automatic memory tuning by setting

    sga_max_size     big integer 1520M
    sga_target            big integer 1G

      is enough; the values above are the ones I used during yesterday's test with your example.
      Also, the latest Lucene release, 3.5.0, has a smaller memory footprint.
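      For reference, settings like those could be applied as follows (a sketch; sga_max_size is not dynamic, so it must go to the SPFILE and takes effect only after an instance restart):

    ```sql
    ALTER SYSTEM SET sga_max_size = 1520M SCOPE = SPFILE;
    ALTER SYSTEM SET sga_target = 1G SCOPE = BOTH;
    -- restart the instance so the new sga_max_size takes effect
    ```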
      Best regards, Marcelo.

     
  • Pedro Pinheiro
    2012-04-03

    Hi,

    I also had this error on 11.2.0.3 and could only solve it by setting the JVM session's max memory size to its max session size. To do this I created the following functions:

    CREATE OR REPLACE FUNCTION getMaxSessionSize RETURN NUMBER IS
        LANGUAGE JAVA name 'oracle.aurora.vm.OracleRuntime.getMaxSessionSize() returns long';
    /
    grant EXECUTE ON getMaxSessionSize TO public;
    CREATE public synonym getMaxSessionSize FOR lucene.getMaxSessionSize;
    CREATE OR REPLACE FUNCTION setmaxmemorysize(num NUMBER) RETURN NUMBER IS
        LANGUAGE JAVA name 'oracle.aurora.vm.OracleRuntime.setMaxMemorySize(long) returns long';
    /
    grant EXECUTE ON setmaxmemorysize TO public;
    CREATE public synonym setmaxmemorysize FOR lucene.setmaxmemorysize;
    

    Then I set the max memory size to the value returned by the getmaxsessionsize function. In the same session I created the index, and there was no OOM.
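    Put together, the workaround could look like this in the session that builds the index (a sketch based on the functions defined above):

    ```sql
    -- raise the OJVM session memory ceiling, then create the index in this same session
    SELECT setmaxmemorysize(getMaxSessionSize()) FROM dual;
    ```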

    I don't know the reason for this error, but it has become common in 11.2.0.3. It seems the JVM doesn't get the right memory values (SGA and java pool), but I'm not sure about this; it's only a suspicion.

    Best regards,
    Pedro Pinheiro

     
  • Hi Pedro:
      Thanks for the tip. I just upgraded my Linux installation to the latest patchset and getJavaPoolSize works fine; here is the output of my configuration:

    SQL> select getMaxSessionSize() from dual;
    GETMAXSESSIONSIZE()
    -------------------
             4294967295
    SQL> select getJavaPoolSize() from dual;
    GETJAVAPOOLSIZE()
    -----------------
            100663296
    SQL> show parameter sga
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 1520M
    sga_target                           big integer 1G
    SQL> show parameter target
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    archive_lag_target                   integer     0
    db_flashback_retention_target        integer     1440
    fast_start_io_target                 integer     0
    fast_start_mttr_target               integer     0
    memory_max_target                    big integer 0
    memory_target                        big integer 0
    parallel_servers_target              integer     64
    pga_aggregate_target                 big integer 302M
    sga_target                           big integer 1G
    

       Note that getJavaPoolSize() is returning a value derived from sga_target, and getMaxSessionSize() is returning my physical addressing limit, because this is a 32-bit Oracle 11g (11.2.0.3.0).
       Best regards, Marcelo.