During a load of a text-formatted U133_Plus_2 CEL file, the process
repeatedly crashed on a 512M/512M (real/swap) machine, with the load
average climbing to ~20. It ran without problems on a 2G/4G machine,
although the Perl process ate almost 800MB. Watching the load, it was
the Perl script, not the Postgres process, that consumed the memory.
Jason has said the problem is that the script builds a huge data
structure in memory and only THEN writes it to the DB, instead of
writing in smaller batches (see the sketch below). Apparently an easy
fix, but medium priority at this point, unless it impinges on loading
multiple files at once. All the test machines (except my laptop) have
enough memory for the current approach.
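
A minimal sketch of the batched approach, in case it helps whoever
picks this up. It assumes DBI against Postgres, a tab-separated
probe_id/intensity layout for the text-format CEL file, and a
hypothetical cel_data table; the real loader's schema and parsing
will differ.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Insert rows as the file is read instead of accumulating the
    # whole file in a Perl structure first. Memory stays flat per row.
    # Table name, columns, DSN, and file layout are all assumptions.

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                           { AutoCommit => 0, RaiseError => 1 });

    my $sth = $dbh->prepare(
        'INSERT INTO cel_data (probe_id, intensity) VALUES (?, ?)');

    my $batch_size = 10_000;   # commit every N rows; tune as needed
    my $count      = 0;

    open my $fh, '<', 'u133_plus_2.cel.txt' or die "open: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($probe_id, $intensity) = split /\t/, $line;
        next unless defined $intensity;       # skip header/blank lines
        $sth->execute($probe_id, $intensity); # row goes straight to the DB
        $dbh->commit if ++$count % $batch_size == 0;
    }
    close $fh;

    $dbh->commit;      # flush the final partial batch
    $dbh->disconnect;

If the loader uses DBD::Pg, Postgres's COPY protocol would be faster
still for bulk loads, but plain batched INSERTs are already enough to
keep the process out of swap on the 512M machine.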