From: <no...@at...> - 2006-05-27 11:48:15
HashMap initialization is very inefficient in cache hits
---------------------------------------------------------

         Key: HHH-1789
         URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-1789
     Project: Hibernate3
        Type: Improvement
 Environment: hibernate 3, db n/a
    Reporter: Aapo Kyrölä

We have an entity with a <map>-type collection attached to it that uses a <many-to-many> mapping. The map has a cache setting of <cache usage="nonstrict-read-write"/>.

The problem is that the map is often quite large, with 500-1000 elements. But when Hibernate3 instantiates it from the cache (PersistentMap.initializeFromCache()), it creates a HashMap with default parameters and then .put()s each item from the serialized cache data into the map.

HashMap's default capacity is 16, and it doubles in size whenever it holds 75% * capacity elements. So initializing a HashMap with 1000 entries causes 7 resizes (which are expensive): 16->32->64->128->256->512->1024->2048. This consumes a lot of memory and CPU because HashMap.resize() is a costly operation.

It would be better for Hibernate to initialize the map with a load factor of 1.0 and a capacity of (length of the cached serialized data array) / 2, plus some extra.

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://opensource.atlassian.com/projects/hibernate/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
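
[Editorial sketch] For illustration, here is a minimal Java sketch of the pre-sizing idea described in the report. It is not Hibernate's actual PersistentMap code; the class and method names are made up, and the assumption that the cached array alternates keys and values (so it holds array.length / 2 entries) is inferred from the "array / 2" suggestion above. The point is simply that sizing the HashMap up front so all entries fit under the load-factor threshold avoids every intermediate resize.

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Sketch of the pre-sizing idea, not Hibernate's actual code.
     * Assumes the cached array alternates keys and values, so it
     * represents cached.length / 2 map entries.
     */
    public class PresizedMapSketch {

        static Map<Object, Object> assembleFromCache(Object[] cached) {
            int entries = cached.length / 2;
            // Choose an initial capacity large enough that 'entries' puts
            // never cross the default 0.75 load-factor threshold, so the
            // table is never resized while the map is being filled.
            Map<Object, Object> map =
                    new HashMap<Object, Object>((int) (entries / 0.75f) + 1);
            for (int i = 0; i < cached.length; i += 2) {
                map.put(cached[i], cached[i + 1]);
            }
            return map;
        }

        public static void main(String[] args) {
            // 1000 entries: a default-sized HashMap would resize 7 times
            // (16->32->64->128->256->512->1024->2048); this one never does.
            Object[] cached = new Object[2000];
            for (int i = 0; i < cached.length; i += 2) {
                cached[i] = "key" + i;
                cached[i + 1] = "value" + i;
            }
            System.out.println(assembleFromCache(cached).size()); // prints 1000
        }
    }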