From: Felix Wiemann <Felix.Wiemann@os...> - 2007-03-08 06:06:49
[This is low priority.]
Hey David, hey everyone,
So the cron job works now, but in the long run, we should probably
generate the website locally and upload it. Some thoughts:
Since we have lots of CPU power, we should probably generate the whole
website each time instead of trying to update only the changed files
(incremental updates cause too many complications [like left-over files
from deleted sources] which are easily avoided by regenerating the
whole tree each time). Just run buildhtml.py over everything.
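That is, something as simple as this, assuming we run from a Docutils
checkout and that the website sources live in a directory like web/
(the path is illustrative):

    import subprocess

    # Rebuild every page: buildhtml.py recurses into the given
    # directory and regenerates the HTML for all reST sources.
    subprocess.call(["python", "tools/buildhtml.py", "web"])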
Then the script can just upload the new website as a tarball. That's a
lot to upload each time, but my upstream and SF.net's downstream aren't
used *that* much, I suppose, so that's OK.
We could run it as a cron job on your server, or on mine. (I think I
have a server now -- it doesn't do incoming connections, but I can use
it for cron jobs and the like.) For your amusement, this is what the
server looks like:
Anyway, my idea is this:
* svn up; if nothing changed, stop.
* Generate the website and all snapshots.
* Upload the snapshots to BerliOS FTP.
* Upload the whole website as a gzipped tarball.
* On the server, unpack into htdocs.new
* mv htdocs htdocs.old && mv htdocs.new htdocs && rm -rf htdocs.old
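
Here's a rough sketch of that whole sequence as a Python script. All
host names and paths are placeholders, and the "no changes" test
relies on svn printing "At revision N." when nothing was received, so
treat this as an illustration, not a finished script:

    import subprocess, sys, tarfile

    SERVER = "user@example.org"         # placeholder host
    HTDOCS = "/path/to/htdocs-parent"   # placeholder path

    def run(*cmd):
        # Run a command and abort if it fails.
        if subprocess.call(list(cmd)) != 0:
            sys.exit("command failed: %s" % " ".join(cmd))

    # 1. Update the working copy; stop if nothing changed.
    #    ("At revision N." means no updates were received.)
    output = subprocess.Popen(["svn", "up"],
                              stdout=subprocess.PIPE).communicate()[0]
    if output.strip().startswith("At revision"):
        sys.exit(0)

    # 2. Regenerate the whole website.  (Generating the snapshots
    #    and uploading them to the BerliOS FTP area would go here
    #    as well.)
    run("python", "tools/buildhtml.py", "web")

    # 3. Pack the generated tree into a gzipped tarball, with the
    #    top-level directory renamed to htdocs.new.
    tar = tarfile.open("htdocs.tar.gz", "w:gz")
    tar.add("web", arcname="htdocs.new")
    tar.close()

    # 4. Upload, then unpack and swap the directories on the server.
    run("scp", "htdocs.tar.gz", "%s:%s/" % (SERVER, HTDOCS))
    run("ssh", SERVER,
        "cd %s && tar xzf htdocs.tar.gz && "
        "mv htdocs htdocs.old && mv htdocs.new htdocs && "
        "rm -rf htdocs.old" % HTDOCS)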
..  My guess is that rsync is too inefficient because of the link
    latencies and because there are a lot of small files. (That's my
    experience from syncing the Gentoo package tree over rsync, at
    least.)
One problem is that buildhtml.py doesn't reset the parser state between
documents. In particular, it may be necessary to hack in a reset for
the default role (which otherwise leaks from one document into the
next) until we have a proper solution.
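Something along these lines might work as a stopgap. It pokes at a
private attribute of docutils.parsers.rst.roles, so it's a hack by
definition; the assumption is that the leakage comes from a
"default-role" directive registering itself under the empty role name:

    from docutils.parsers.rst import roles

    def reset_default_role():
        # A "default-role" directive registers its role under the
        # empty name in the module's private role cache; remove it
        # so the next document starts with a clean slate.
        if '' in roles._roles:
            del roles._roles['']

buildhtml.py could call this between documents.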
Apart from buildhtml.py, we also need support for other formats like
S5. Building on Makefile.docutils-update is probably a good idea.
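For S5, a per-file front end already exists in the tools directory;
the update script could call it for each slide-show source, along
these lines (the file names are made up):

    import subprocess

    # Convert a single slide-show source to S5 HTML.
    subprocess.call(["python", "tools/rst2s5.py",
                     "docs/talk.txt", "docs/talk.html"])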
If you have any ideas or comments (or see pitfalls to avoid), please
drop me a line.
Felix Wiemann -- http://www.ososo.de/