Here's a quick update on the creation of test modules.
core: run=60 failed=0 error=0
data: run=544 failed=1 error=0
datadebug: run=526 failed=0 error=0
nonotify: run=526 failed=0 error=0
standard: run=102 failed=0 error=1
io: run=168 failed=2 error=0
extra: run=271 failed=6 error=0
valencycheck: run=77 failed=0 error=1
reaction: run=16 failed=0 error=0
experimental: run=32 failed=3 error=0
total: run=2322 failed=12 error=2
(created with: ant -logfile ant.log clean test-all; bsh extractTestStats.bsh)
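For illustration, a minimal sketch of the kind of log scraping a script like extractTestStats.bsh could do: it sums up the per-suite summary lines that Ant's junit task writes ("Tests run: N, Failures: N, Errors: N"). This is a hypothetical reimplementation in plain Java, not the actual BeanShell script, and the class name is made up.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of what extractTestStats.bsh might do: scan an Ant log
// for JUnit summary lines and total them up. Not the actual script.
public class ExtractTestStats {
    // Matches the summary line the Ant junit task prints per test suite.
    static final Pattern SUMMARY = Pattern.compile(
        "Tests run: (\\d+), Failures: (\\d+), Errors: (\\d+)");

    // Returns {run, failed, error} totals over the whole log text.
    public static int[] totals(String log) {
        int run = 0, failed = 0, error = 0;
        Matcher m = SUMMARY.matcher(log);
        while (m.find()) {
            run    += Integer.parseInt(m.group(1));
            failed += Integer.parseInt(m.group(2));
            error  += Integer.parseInt(m.group(3));
        }
        return new int[] { run, failed, error };
    }

    public static void main(String[] args) {
        String sample =
            "[junit] Tests run: 60, Failures: 0, Errors: 0\n" +
            "[junit] Tests run: 544, Failures: 1, Errors: 0\n";
        int[] t = totals(sample);
        System.out.println("run=" + t[0] + " failed=" + t[1] + " error=" + t[2]);
    }
}
```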
Some notes: I will set up the notorious smiles module next week, and will
write some JUnit tests for the new valencycheck module, which currently causes
the SMILES errors. The current stats for that module are therefore not realistic.
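To give an idea of the kind of JUnit tests planned for valencycheck, here is a toy sketch. Everything below (the class name, the valence table, the isSaturationOK method) is a simplified placeholder invented for illustration; real tests would exercise the org.openscience.cdk classes instead.

```java
import java.util.HashMap;
import java.util.Map;

// Toy valency check, NOT the actual CDK API: just a lookup table of
// common maximum valences to show what such a test would assert.
public class ToyValencyCheck {
    static final Map<String, Integer> MAX_VALENCE = new HashMap<>();
    static {
        MAX_VALENCE.put("H", 1);
        MAX_VALENCE.put("C", 4);
        MAX_VALENCE.put("N", 3);
        MAX_VALENCE.put("O", 2);
    }

    // True when the bond order sum does not exceed the element's valence.
    public static boolean isSaturationOK(String element, int bondOrderSum) {
        Integer max = MAX_VALENCE.get(element);
        return max != null && bondOrderSum <= max;
    }

    public static void main(String[] args) {
        // A carbon with four single bonds is fine...
        System.out.println(isSaturationOK("C", 4)); // true
        // ...but a five-bonded carbon should be flagged.
        System.out.println(isSaturationOK("C", 5)); // false
    }
}
```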
Also note that most tests are for the data classes, and the tests for
datadebug and nonotify mostly reuse those of the data module. So while the
total is 2322, the number of distinct tests is 1052 lower (526 + 526 reused
tests), i.e. about 1270 distinct tests. And realize that most algorithms in
the CDK are still very badly covered by JUnit tests.
Also attached is the new dependency map.
(Use graphviz or kgraphview to visualize the .dot file.)
Some notes on this: there are still way too many crossing arrows, but that
should improve over the next few weeks. After splitting out the smiles module,
I will focus on getting the statistics for the stable modules complete
(help appreciated).
Cologne University Bioinformatics Center (CUBIC)