> More important to most people is not even performance: this is a very
> hard library to work on so most users including me could never
> actually fix a bug if one were found.
> So .. the question is, does it work. Full Stop. Forget "as well".
> Yeah, I'd like it to be as fast as Judy or even faster, but if there's
> a single bug my whole product is completely screwed because it relies
> utterly on Judy.
But it's never had a bug after being released. Period.
We tested the crap out of it, and I've used it heavily myself.
Yeah, there are usability disconnects that can make it hard to write
code to the API, and they can masquerade as bugs, but the library itself, as
released to SourceForge many years ago, has been flawless, as far as I
know and have experienced.
Given something with simpler source, great, but what's the point of
using it if performance is no better than you can get from the hash tables
and trees already out there in the public domain? libJudy lets you
build arrays of arrays, millions or billions of them, each with small
membership, something that no other API lets you do easily.
> For example suppose you use Judies of Judies to represent strings.
> After 64 bytes, if the string hasn't run out, you just point at a new
> Judy array. But now to delete the top level array in C, without C++
> destructors to automate it, or a garbage collector, is a nightmare
> because you have to scan the whole arrays recursively, deleting
> objects bottom up.
Not if you let the JudyS*() wrappers take care of that for you?
Oh well, it's been years since I worked on it, and a couple of years
since I used it as a developer (of higher-level code), so what do I
know? You are probably right.