> And just looking at the kinds of approximations humans seem to use,
> many roads lead to floating point: influence estimations (and aji
> and thickness and whatnot), goodness of shape, probabilities of eyes
> and connections and whatnot, learning algorithms like GA and NN..
However, why could one not scale up to integers anyway (they are
probably a lot faster to compute with)? What "resolution" do you
actually need? A 32-bit int gives you ~4 billion distinct values.
Isn't that enough to express all the subtleties you need?