I've got a relation context_word(Context,Word) that loads the enwik9 corpus
abstracted into paragraphs (contexts) and their words. It's a big file, of
course, and I want it all in memory (it does fit), simply because I want it
to be as fast as possible. However, the only way I can load it is with
load_dyn; all other methods fail, presumably because they try to create a
byte-code file in memory as well.
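For concreteness, this is roughly what the load looks like now (the file name is just a placeholder for my facts file):

```prolog
%% Sketch of the current approach; 'context_word.P' is a placeholder name.
%% load_dyn/1 asserts the clauses as dynamic code directly, rather than
%% compiling the file to byte code first.
?- load_dyn('context_word.P').

%% Then I query it interactively, e.g.:
?- context_word(Context, Word).
```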
I might opt to just load the whole thing with load_dyn and then interact
with it at the | ?- prompt, but I've run into a problem there: when I
press ^C to get out of some huge output stream (which does happen), I am
stuck at 1 ?-, and abort/0 simply spits out some diagnostics and returns me
to 1 ?-. Of course, in that state I am limited in what I can do in terms of
loading/consulting files, and I have to call halt/0, which means I then have
to wait a long time for XSB to start up again and reload the data.
So I guess I have to ask: what is the next-fastest option for a simple
relation like that? Should I put a MySQL database on a RAM disk?