I recently did a scaling performance analysis comparing my bitwise tries implementation with red-black trees and uthash. If you look at the graphs on http://www.nedprod.com/programs/portable/nedtries/ you'll see how each algorithm scales with N.
uthash shows very distinct choppiness in its scaling; if you test std::unordered_set<> from C++ in an identical framework, you get a smooth, flat O(1) line for all operations, as one would expect.
In particular, you shouldn't be seeing O(1/log N) insert scaling or O(log N) remove scaling between bucket expansions, yet uthash apparently demonstrates both. This suggests that your bucket indexation routine has a bug which is introducing the partial log N behaviour; perhaps it isn't making full use of all the buckets for a given hash key?
Cheers,
Niall
Interesting. My first question is whether the key domain behind this particular graph is a good match for the default hash function. If it's not, ideal% will tend low and your operations will drift further and further from constant time. In that kind of scenario the solution might be as simple as switching to a different hash function; or maybe there is something deeper. I would need the code you used to generate the data in order to analyze the ideal% (or you could run hashscan or keystats as described in the documentation).
Incidentally, I don't routinely use SourceForge, so email is preferable to opening tickets. Thanks.