The default travel delay is too small on fast systems (on my machine it is, subjectively, about half of what it should be). Rather than hard-coding the default, we could
auto-tune the travel and explore delays: call time() at the start and end of a travel/explore run, compute the average time taken per step, and use that to adjust the delay. The delay options would then be relative (and unit-less) rather than an absolute number of milliseconds, and the default would be one that gives a reasonable experience on a wide variety of machines.
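
A minimal sketch of what this could look like, assuming a hypothetical per-step hook `do_travel_step()`, a made-up `travel_delay_factor` option, and an illustrative target pace; none of these names are from the actual codebase. It uses clock_gettime(CLOCK_MONOTONIC) rather than time(), since time() only has one-second resolution and the quantities here are sub-second:

    /* Sketch only: auto-tuned travel delay. Names and constants are
     * illustrative, not the project's actual API. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdint.h>
    #include <time.h>

    /* Monotonic clock reading in milliseconds. */
    static int64_t now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static void sleep_ms(int64_t ms)
    {
        struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);
    }

    /* Relative, unit-less delay option; 1.0 means "default pace". */
    static double travel_delay_factor = 1.0;

    /* Hypothetical target per-step wall time (ms) at factor 1.0; the
     * real value would be chosen empirically. */
    #define TARGET_STEP_MS 30

    /* Stand-in for the real per-step work (move one square, redraw). */
    static void do_travel_step(void) { }

    void travel(int steps)
    {
        static int64_t pad_ms = 0;   /* auto-tuned per-step delay */
        int64_t start = now_ms();

        for (int i = 0; i < steps; i++) {
            do_travel_step();
            if (pad_ms > 0)
                sleep_ms(pad_ms);
        }

        if (steps <= 0)
            return;

        /* Average per-step time for this run, minus the pad we added,
         * estimates how fast this machine renders a step; retune the
         * pad so step + pad hits the target pace on the next run. */
        int64_t per_step = (now_ms() - start) / steps;
        int64_t work     = per_step - pad_ms;
        int64_t target   = (int64_t)(TARGET_STEP_MS * travel_delay_factor);
        pad_ms = target > work ? target - work : 0;
    }

Measuring the whole run and averaging, as proposed above, smooths over per-step jitter; the first run gets no pad and serves as the calibration pass, with a fast machine earning a longer pad and a slow one little or none.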