Alan W. Irwin wrote:
Second, I'm going to be plotting about 5000 data points or so; the problem
is that, the way my data are structured, the points I need to plot are not
stored sequentially in memory. I have data with about 15 parameters per
point, and I only want to plot points (the x and y are two of the
parameters) that satisfy some filtering criteria. Am I better off simply
taking the memory and performance hit to create an additional data structure
in which these data are sequential? Or can I simply call plpoin to plot each
individual point? In the latter case I would loop through the data only once
and plot points as I found them, whereas in the former there is the extra
overhead of creating the data structure and copying the data before I can
plot it.

I think the speed of unordered access will depend on the size of your cache
and the degree of non-locality of reference.  Taking data out of order may
actually make very little difference.  Regardless of that, as a general rule
I always try to use the simplest programming to prototype what I want, and
optimize further (with the associated increased danger of introducing bugs)
only if really necessary.  That is what I suggest in the present case.
I would like to support Alan's advice: first set up a prototype that follows the "natural"
data structure that you have. Then, if that seems too slow, measure the time spent in
each function (use a profiler for this - do not guess; humans are notoriously bad at
guessing where the computational time actually goes) and decide from those measurements
how to change the algorithm.

Whether stepping through the data set is the dominant factor is easy to tell:
just replace the call to a PLplot function like plpoin() with a call to a dummy function
that has the same signature (plotting takes time too!).