Is there a way in Reduce I can limit the time an algorithm would run for? I am using the Redlog command rlcad which sometimes seem to go on forever. I would like it to run for an hour and then abort if no solution reached and go on to the next command.
Thanks a lot in advance,
Good question, but there is maybe not a very good answer! In the CSL version there is a facility, at the Lisp level but NOT in a state where it is in any respect easy to use, to run calculations with a limit on the resources they consume. It is used in the script that runs all the test/demo cases, so that it cuts things off in the bad case where one of them "escapes". But it is NOT likely to be easy to activate in a general, user-friendly way. One reason is that if you interrupt a big calculation like rlcad part way through and then want to go on and do something else afterwards, somebody needs to check that the interrupted rlcad calculation has not left anything in an odd partly-done state in a way that could mess up what comes later.
Any particular package could in theory provide a time cut-off, e.g. by reading the clock periodically and exiting gracefully if it had taken too long. I am not aware of any such capability in rlcad, but maybe the experts on that code can comment.
I have some old experimental code called "killer.red", whose idea is to check a user-set time limit at the end of every garbage collection. It currently works only in the PSL version, but a while ago Arthur provided me with a hook in CSL which should make it usable there as well. Since there appears to be serious interest now, I want to look into it once more.
Arthur's remark is also very important: the interrupted procedure must not temporarily modify any global settings, which would be left in an inconsistent state when the interrupt happens. I am not certain about rlcad in this respect at the moment; that would be another thing to check.
As a short-term solution, when running things in batch you can use bash's "ulimit -t" to limit the maximum CPU time of the Reduce process. This, however, terminates the entire Reduce session when the limit is reached. Still, it has worked for me in many situations.
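For example (a sketch; the `reduce` invocation and file names in the comment are placeholders), you can set the limit in a subshell so it applies only to that one run. The demonstration below substitutes a CPU-bound shell loop for Reduce so the effect is visible within a second:

```shell
# Limit CPU time for a single command by setting ulimit inside a subshell.
# A 1-second limit kills the CPU-bound loop; for Reduce you would use
# something like:  ( ulimit -t 3600; reduce < input.red > output.log )
( ulimit -t 1
  while :; do :; done ) 2>/dev/null
echo "exit status: $?"   # non-zero: the loop was killed on hitting the limit
```

Because the `ulimit` call happens in a subshell, the limit does not affect the rest of the script; the next command after the parenthesised group runs with the original limits.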
Thanks for your responses.
As a brute-force fix, I just hit Ctrl-C (or Break) after rlcad had run for an hour and then went on to the next command. In that case, will the interrupted rlcad calculation leave anything in an odd partly-done state in a way that could mess up what comes later?
Another question is -- how do I access the elapsed CPU time in Reduce? Is there a function in Reduce corresponding to get-internal-run-time or get-internal-real-time in Common Lisp? (http://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node232.html)
I think after Ctrl-C you may well run into exactly the problems indicated before. It is the responsibility of the developers to use "errorset" (essentially catch-throw) so that nothing bad happens, but I would not be too optimistic about that at present.
There is a switch called time. Enter "on time;" to see the CPU time after each computation. I think the very first output is the CPU time since the system was started.
I am using "on time" to display the CPU time, but what should I do to store the time in a variable? I want to write something like

procedure f();
  begin scalar t1, t2;
    t1 := get_time();
    for i := 1:1000000 do 1+1;
    t2 := get_time();
    return t2 - t1
  end;

so that f() returns the time taken by the for loop. What do I use in place of get_time?
t1 := lisp time();
There is also gctime() for the garbage-collection time, which is not included in time(). Both times are in ms.
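Putting that together, the measurement from the earlier question can be written directly at the top level (a sketch; the million-iteration loop is just the example workload, and the variable names are arbitrary):

    t1 := lisp time();
    for i := 1:1000000 do 1+1;
    t2 := lisp time();
    loop_time := t2 - t1;    % CPU time of the loop, in ms
    gc := lisp gctime();     % garbage-collection time so far, also in ms

Note that time() reports CPU time excluding garbage collection, so for a total you would add the difference in gctime() over the same interval.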