Here's an issue that's currently preventing us from migrating the
Docutils documentation tree to the new subdocs system: It's only
possible to process the master document, which takes a loooong time.
The sub-documents are neither (a) individually processable nor (b)
cacheable -- one of the two needs to be fixed to make handling document
trees feasible, since otherwise we cannot quickly test edits to single
files.
Caching might solve the problem: if re-running the master were fast
enough (a couple of seconds), you could test changes to individual
files simply by re-processing the master document.
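
For concreteness, here's roughly what I have in mind -- a minimal
sketch of per-file doctree caching with pickle, keyed on the source
file's modification time. The cache directory, the helper's name, and
the attribute clean-up before pickling are my assumptions, not
existing subdocs code:

    import os
    import pickle
    from docutils.core import publish_doctree

    CACHE_DIR = ".doctree-cache"  # hypothetical cache location

    def cached_doctree(path):
        # Parse `path` into a doctree, reusing a pickled copy if the
        # source hasn't changed since the cache entry was written.
        os.makedirs(CACHE_DIR, exist_ok=True)
        cache = os.path.join(CACHE_DIR,
                             os.path.basename(path) + ".doctree")
        if (os.path.exists(cache)
                and os.path.getmtime(cache) >= os.path.getmtime(path)):
            with open(cache, "rb") as f:
                return pickle.load(f)
        with open(path) as f:
            doctree = publish_doctree(f.read(), source_path=path)
        # The reporter and transformer hold unpicklable references;
        # they would have to be re-attached before further processing.
        doctree.reporter = None
        doctree.transformer = None
        with open(cache, "wb") as f:
            pickle.dump(doctree, f)
        return doctree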
To find out whether caching would actually solve the problem, I
measured where the time goes: I concatenated all the txt files in the
docs/ directory and processed the resulting mess (28k lines) with
rst2html --halt=5. I found that on my 1.1 GHz CPU, we need
41 seconds for parsing everything and applying the transforms to the
sub-documents, 107 seconds for running the transforms on the master
document, and 35 seconds for writing out the document -- about
3 minutes in total.
This includes generating tons of system messages about unresolvable
references.
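
If anyone wants to reproduce the numbers, this is essentially what I
did, expressed through the Python API instead of the rst2html command
line (master.txt stands for the concatenated file; the phase split is
coarser than the subdocs-specific breakdown above):

    import time
    from docutils.core import publish_doctree, publish_from_doctree

    with open("master.txt") as f:  # the concatenated docs/*.txt
        source = f.read()

    t0 = time.time()
    # Parse and apply the transforms; halt_level=5 keeps going past
    # errors, like rst2html --halt=5.
    doctree = publish_doctree(
        source, settings_overrides={"halt_level": 5})
    t1 = time.time()
    html = publish_from_doctree(doctree, writer_name="html")
    t2 = time.time()

    print("parse+transform: %.1fs, write: %.1fs"
          % (t1 - t0, t2 - t1))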
So, even if we cached individual documents, this would only reduce the
time needed to process the master by 41 seconds, leaving well over two
minutes -- definitely not enough.
So I guess we're stuck with having a 3-minute processing time for the
master document, right?
The best way to make large trees manageable might therefore be to
just add a switch that deactivates the resolving of qualified references
(so that sub-documents can be processed without lots of error messages).
Any qualified reference might simply be replaced with its children.
It seems hackish, but it's the best solution I can think of right now.
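
As a sketch of what that could look like -- the node class and the
transform are hypothetical (the real qualified-reference node lives in
the subdocs code, and the names are made up):

    from docutils import nodes
    from docutils.transforms import Transform

    class qualified_reference(nodes.Inline, nodes.TextElement):
        # Stand-in for the subdocs system's qualified-reference node.
        pass

    class StripQualifiedReferences(Transform):
        # Replace every qualified reference with its children instead
        # of resolving it, so a sub-document can be processed alone
        # without a flood of "unresolvable reference" messages.
        default_priority = 990  # run after the resolving transforms

        def apply(self):
            for node in self.document.traverse(qualified_reference):
                node.replace_self(node.children)

The transform would only be registered when the new switch is set, so
the default behaviour stays unchanged.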