The query answering primitives in the C<-->XSB interface seem 
to break when the given goals and/or results are large-ish. For instance,
suppose that the following straightforward definition of append 
(on user-defined lists, not on native Prolog lists) has been loaded:

append(nil,L,L).
append(cons(X,L1),L2,cons(X,R)) :- append(L1,L2,R).

and let goal be the string "append(x1,x2,Result)", where x1 and x2 
are ground cons-nil lists of the integer numerals from 1 to 100K 
and from 100K to 200K respectively. I.e., x1 and x2 are 
cons(100000,cons(99999,... cons(1,nil) ...)) and 
cons(200000,... cons(100000,nil)...) respectively. So
goal has ~200K nodes, and Result should have ~200K
nodes as well. 

Then the following query code chokes (causes a segmentation fault):

xsb_query_string_string(CTXTc goal,&return_string,"|");

That shouldn't happen, both because there is plenty of memory on the 
machine and, more importantly, because the specification of xsb_query_string_string
says that it will return an appropriate error code (XSB_ERROR) 
if something catastrophic happens.
The exact same code works fine when the terms are smaller.

The "fixed string" interface, xsb_query_string_string_b, 
fares no better:

rc = xsb_query_string_string_b(CTXTc goal,return_string,ret_size,&ans_len,"|");

This seg-faults when the goal and the result are large-ish, *even* if 
return_string has more than enough space to hold the result and ret_size 
has the right value. (E.g., try this with goal = append(x1,x2,Result)
where x1 has 100K integer elements and x2 has 200K integer elements.)

I'm using the single-threaded engine.

Keep in mind that, by today's standards, terms with 200K-500K nodes
are fairly small. E.g., SAT/SMT solvers routinely have to deal with
terms/formulas with millions of nodes, and one might want to use XSB
to manipulate such objects (I would!). Any suggestions on how to get 
around this (or any fixes) would be appreciated. Thanks,