On 03/25/2012 03:52 PM, MK wrote:
> I am using CEDET for my C++ development. I usually set up an
> ede-cpp-root-project to specify the include paths and the root dir for my
> project. As per Alex Ott's (famous :)) webpage, I have also enabled
> semanticdb-global.
> 1) Does semantic create its own internal database of C++ symbols and
> information (like classes, methods, namespaces and so on) and then use
> this information when I issue a command like
> semantic-ia-fast-jump, semantic-complete-jump, or semantic-complete-jump-local?
> Or, does it rely on gtags to find such information?
Semantic builds its own database of symbols from your project, starting
with the files you have open and looking across your include files for
the symbols they depend on.

If you configure it to use GNU Global (gtags), it can use gtags for the
INITIAL lookup of a symbol by name. After symbols are found, Semantic
will then reparse the file to get the details needed for smart
completion, and will ignore the gtags results after that in deference to
its own parsed data.
> 2) When working on a project, I often need to include files that are
> not in my current project. Since I specify the include file paths
> correctly as part of the ede-cpp-root-project, CEDET has no problem
> finding these included .h header files. But how is CEDET supposed to
> find the definitions (in the corresponding .cpp files)? (These .cpp
> files are not underneath my project root dir.)
It would not be able to find the source (.cpp) files associated with
the header files. Once you are in a header file, you could jump from
there to that project's .cpp files if you have a project set up pointing
to that directory, but it would be a two-step process.
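For the two-step jump above you would give the external library its own
project definition. A minimal sketch, with hypothetical paths (the :file
anchor can be any file that sits at the root of that tree):

```elisp
;; Hypothetical project covering the external library's source tree,
;; so that a jump from one of its headers can reach the matching .cpp
;; files in that tree.
(ede-cpp-root-project "extlib"
  :file "/opt/extlib/README"          ; any file at the library's root
  :include-path '("/include" "/src")) ; relative to the library's root
```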
> 3) Sometimes, semantic-ia-fast-jump takes a long time when I first
> search for a particular symbol. Is there a way to get semantic to
> "preprocess everything" when it first encounters a new project so that
> subsequent searches are fast?
Yes, there is a setup cost. You can use semanticdb.sh to pre-parse your
code, but if you pre-parse *every* file, you might end up with Emacs
holding such a huge data structure that it gets larger than your machine
can handle. If your project is small, this shouldn't be a problem.
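If semanticdb.sh doesn't fit your workflow, the same pre-parsing can be
sketched as a batch Emacs run. This is only an illustration under
assumptions: CEDET on your load-path, a hypothetical project directory,
and `directory-files-recursively' (which needs a modern Emacs):

```elisp
;; Sketch of a batch pre-parse; run with: emacs -batch -l preparse.el
;; Assumes CEDET is on the load-path; the project path is hypothetical.
(require 'semantic)
(require 'semantic/db)
(semantic-mode 1)

(dolist (file (directory-files-recursively
               "~/src/myproj" "\\.\\(cpp\\|h\\)\\'"))
  ;; Parse each file into the Semantic database without visiting it
  ;; interactively.
  (semanticdb-file-table-object file))

;; Write the accumulated tables to disk so later sessions load them
;; instead of re-parsing.
(semanticdb-save-all-db)
```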