This is probably a question for David Milne, but if anyone has an answer, please share.
I'm interested to know what setup the demo uses to run Wikipedia Miner (i.e., is it on Hadoop, with N machines, and if so, what is each machine? Note that I'm a Hadoop newbie). I recently looked at "annotate/wikify" and it seems to run much faster than it did a year or so ago. I'm guessing that's because it may be running on Hadoop over multiple machines(?). What performance could one expect (i.e., how long to wikify a standard-length web page article) if running Hadoop over one or two machines?
I believe Hadoop was used only for building the CSV and training files; what is actually serving requests is a single Tomcat server, without any Hadoop functionality.
Hi Jerry LS,
I have the same problem. My Wikipedia Miner instance is much slower than the public server instance at the University of Waikato.
Have you found a way to improve performance, or a good setup?
Thanks in advance,