"Eric M. Ludlam" <eric@...> writes:
> The feature you are looking for can work with the current version
> of semantic with some caveats.
> Mainly, all the "smart" stuff tries to be, well, smart, and only
> search headers. This boosts speed over searching through the tags
> in all files that might be in your project. Unfortunately, it only
> scans down one level of includes. Thus if you include
> "super-header.h", and that includes "header-i-use.h", then you will
> be out of luck.
> You can use `semanticdb-find-test-translate-path' to see if the
> right things are on your path. (I forgot to include it in other
> emails I sent today.)
> As Bruce pointed out, there is commented out code to re-enable
> this, but it is tricky.
> I suspect there is a clever way to solve this, but I just haven't
> been clever enough lately.
I'm not sure about clever. I think the current approach is doomed: if
you #include (for C/C++) certain header files, you bring in enough
symbols that trying to load them all is going to make the system too
slow.
For example, the semantic cache for /usr/include/openssl (one which I
use quite a lot) is 2M, and reading (and parsing, building objects
from, and especially using) an elisp file of that size is going to
take an annoying length of time. Admittedly, you could split it up,
but presumably the cache was limited to one per directory because that
worked out better.
And almost always, you don't care about more than a small number of
symbols---and typically you want exact matches (you want to know what
d2i_X509_CRL is) or you've got an initial substring (you want to list
symbols beginning with d2i_X509).
To me, that just screams for a database of some sort. Source
Navigator uses a collection of db databases (which can also work for
initial substring searches, because you can use btree storage).
That's workable, and it has the nice feature that you don't have to
maintain some kind of SQL database server, but coding with it is
pretty horrible: looking at the code, it seems clear that they really
wanted to use an SQL database, but just didn't have one handy.
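To make the btree point concrete, here's a small sketch (in Python, with made-up OpenSSL symbol names) of why sorted/btree storage covers both query styles: an exact lookup is one binary search, and an initial-substring search is one range scan over the same ordering.

```python
import bisect

# A sorted symbol table supports both query styles cheaply,
# which is what btree storage gives you for free.
symbols = sorted([
    "d2i_X509", "d2i_X509_CRL", "d2i_X509_REQ",
    "i2d_X509", "PEM_read_X509", "SSL_connect",
])

def exact_match(name):
    """True if name is in the table (one binary search)."""
    i = bisect.bisect_left(symbols, name)
    return i < len(symbols) and symbols[i] == name

def prefix_match(prefix):
    """All symbols beginning with prefix (one range scan)."""
    lo = bisect.bisect_left(symbols, prefix)
    hi = bisect.bisect_left(symbols, prefix + "\xff")
    return symbols[lo:hi]

print(exact_match("d2i_X509_CRL"))
# True
print(prefix_match("d2i_X509"))
# ['d2i_X509', 'd2i_X509_CRL', 'd2i_X509_REQ']
```

The point being that you never have to load the whole 2M table to answer either question.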
On the whole I think it's (using a database that someone else has
written) the dumb solution rather than a clever one: it's admitting
defeat and letting someone else deal with the problem.
If I had lots of free time and were building something like the
semanticdb bits of cedet, I suspect I'd start by coding sqlite
wrappers for emacs (so you could build emacs with configure
--with-sqlite=...), and store the tag tables in a local sqlite
database. (And yeah, I'd probably just put code in cedet to barf if
there's no sqlite support.) Of course, I'm not actually doing this
coding, so my opinion is merely opinion.
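For what it's worth, here's roughly what I have in mind, sketched with Python's stdlib sqlite3 module rather than a hypothetical Emacs binding (the schema, table, and column names are all invented for illustration):

```python
import sqlite3

# Hypothetical tag store: one row per tag, indexed by name so that
# both exact lookups and prefix scans can use the index.
conn = sqlite3.connect(":memory:")  # a real store would be a file per project
conn.executescript("""
    CREATE TABLE tags (
        name TEXT NOT NULL,
        kind TEXT NOT NULL,     -- 'function', 'type', 'variable', ...
        file TEXT NOT NULL,
        line INTEGER NOT NULL
    );
    CREATE INDEX tags_by_name ON tags (name);
""")

conn.executemany(
    "INSERT INTO tags VALUES (?, ?, ?, ?)",
    [("d2i_X509",     "function", "/usr/include/openssl/x509.h", 100),
     ("d2i_X509_CRL", "function", "/usr/include/openssl/x509.h", 200),
     ("SSL_connect",  "function", "/usr/include/openssl/ssl.h",  300)])

# Exact match: what is d2i_X509_CRL?
exact = conn.execute(
    "SELECT kind, file, line FROM tags WHERE name = ?",
    ("d2i_X509_CRL",)).fetchall()

# Prefix match: everything beginning with d2i_X509.  An explicit range
# query like this always gets an index scan; a LIKE with a constant
# prefix does too, but only under a case-sensitive collation.
prefix = conn.execute(
    "SELECT name FROM tags"
    " WHERE name >= 'd2i_X509' AND name < 'd2i_X509' || X'FF'"
    " ORDER BY name").fetchall()

print(exact)   # [('function', '/usr/include/openssl/x509.h', 200)]
print(prefix)  # [('d2i_X509',), ('d2i_X509_CRL',)]
```

Updating would just be deleting and re-inserting the rows for the one file that changed, instead of rewriting a per-directory cache.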
On a separate issue, what happened to the Java .class semanticdb
thing, where cedet could produce completion for jndi symbols (or
whatever) just by being pointed at the j2sdk class files?
I was trying to find out how to configure this, and then noticed that
semanticdb-java.el isn't byte-compiled, so I imagine this doesn't work.