On 29 May, 2005, at 6:31, moin-devel-request@...
> I have a very simple piece of code to import a set of articles into a
> MoinMoin Wiki.
> It goes like this:
> for PageName in UniqHeadings:
>     request = RequestCLI()
>     editor = PageEditor(request, PageName)
>     editor.saveText(editor.normalizeText(Articles[PageName]), 0)
> this has a list of around 4,500 UniqHeadings to get through. Average
> size is 2k. An Apache served MoinMoin instance is running.
> It is currently writing pages at the rate of around 8 per minute. Do I
> really have to wait 9 hours, or is something amiss?
I think you are missing something - it looks like this code will not
work - how does the code know where your wiki data dir is with no url?
I tested this code on my idle machine (G5 Dual 2G):
sys.path = [
    # The path to the wiki directory
    # The path to moinmoin, not needed if its installed with
    '/Volumes/Home/nir/Projects/moin/fix',
    ] + sys.path
import os

from MoinMoin.PageEditor import PageEditor
from MoinMoin.request import RequestCLI

def save(url, pagename, text):
    request = RequestCLI(url=url, pagename=pagename)
    editor = PageEditor(request, pagename)
    text = editor.normalizeText(text)
    dummy, revision, exists = editor.get_rev()
    return editor.saveText(text, revision)
url = 'localhost/fix/'
dir = 'imports'
files = [name for name in os.listdir(dir)
         if not name.startswith('.')]
for name in files:
    text = file(os.path.join(dir, name)).read()
    save(url, name, text)
I imported 80 pages of 2132 bytes each in 10.4s.
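For comparison, a quick back-of-the-envelope calculation using the numbers
from this thread (8 pages/minute reported in the original post, 80 pages in
10.4s measured here) suggests the full 4,500-page import would drop from
roughly nine hours to around ten minutes:

```python
# Throughput comparison using the figures quoted in this thread.
pages_total = 4500          # pages to import in the original post
old_rate = 8 / 60.0         # pages per second (8 per minute via the Apache-served wiki)
new_rate = 80 / 10.4        # pages per second (80 pages in 10.4s with RequestCLI)

old_hours = pages_total / old_rate / 3600.0
new_minutes = pages_total / new_rate / 60.0

print("old: %.1f hours, new: %.1f minutes" % (old_hours, new_minutes))
```

So the original "9 hours" estimate was about right for the slow path, and
most of the difference is avoiding the per-request overhead of going
through the web server.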