
#1 Limited Size of NonBlockingHashMap?

Status: closed-invalid
Owner: nobody
Labels: None
Priority: 5
Updated: 2008-05-29
Created: 2008-05-12
Creator: Anonymous
Private: No

Hello,

I'm having problems with the maximum table size.
When using more than 1,000,000 elements in a NonBlockingHashMap, I get a NullPointerException (I suspect due to a limit on the table size of the NonBlockingHashMap).
A simple substitution with ConcurrentHashMap works fine, so I don't think the fault is in my code.
The API says that the NonBlockingHashMap should resize when necessary, but it doesn't seem to.
Do I need to copy it the hard way into another NonBlockingHashMap once I notice it is getting full?
Any ideas or suggestions? Is the library still fine with Java version 1.6.0_05-b13?

Greetings,
Stefan
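
The substitution Stefan describes is a one-line change when the map is declared against the java.util.concurrent.ConcurrentMap interface, which both classes implement. A minimal sketch, assuming NonBlockingHashMap lives in high-scale-lib's org.cliffc.high_scale_lib package and using illustrative Integer/String types:

    import java.util.concurrent.ConcurrentMap;
    import org.cliffc.high_scale_lib.NonBlockingHashMap;

    public class MapSwapTest {
        public static void main(String[] args) {
            // Declared against the interface, so swapping in
            // java.util.concurrent.ConcurrentHashMap is a one-line change.
            ConcurrentMap<Integer, String> map =
                new NonBlockingHashMap<Integer, String>();
            for (int i = 0; i < 1500000; i++) {   // more than 1,000,000 entries
                map.put(i, "value-" + i);
            }
            System.out.println("inserted " + map.size() + " entries");
        }
    }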

Discussion

  • Cliff Click - 2008-05-12

    This should work fine, up until it throws an OutOfMemoryError.
    I've certainly tested million-entry tables here at Azul.
    Can you bundle together a crashing test case for me (a sketch of one appears after this thread)?
    Please include a hardware description and any JVM flags.

    Thanks,
    Cliff

     
  • Cliff Click - 2008-05-29
    • status: open --> open-invalid
     
  • Cliff Click - 2008-05-29

    I have tested with >1 million elements and it works fine.

    So:

    - the table auto-resizes from 4 elements to 1 million
    - you don't need to 'copy the hard way'
    - the lib is fine with Java 6

    Can you send me a test case? Or even just the stack crawl from the NPE?
    I'm going to close this bug as 'invalid' unless I get a test case or some other way to repro the problem.

    Cliff

     
  • Cliff Click - 2008-05-29
    • status: open-invalid --> closed-invalid
     
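A crashing test case of the kind requested above might look like the following sketch; the loop bound and key/value types are illustrative, and the try/catch exists only to capture the stack trace asked for in the thread. The pre-sizing constructor NonBlockingHashMap(int) mentioned in a comment below is an assumption about the high-scale-lib API:

    import org.cliffc.high_scale_lib.NonBlockingHashMap;

    public class NbhmMillionTest {
        public static void main(String[] args) {
            // Default constructor: the table starts small and must resize
            // repeatedly on its way past a million entries.
            // (Alternatively, new NonBlockingHashMap<Integer, String>(2000000)
            // would pre-size the table and skip the intermediate copies.)
            NonBlockingHashMap<Integer, String> map =
                new NonBlockingHashMap<Integer, String>();
            try {
                for (int i = 0; i < 2000000; i++) {
                    map.put(i, "value-" + i);
                }
                System.out.println("ok, size = " + map.size());
            } catch (NullPointerException npe) {
                // Attach this stack trace, plus the hardware description
                // and JVM flags, to the bug report.
                npe.printStackTrace();
            }
        }
    }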
