I'm using Gnuplot with Perl to make graphs
on the web (intranet). I have different categories to graph, like 1wk, 2wk ... and all of the data. Well,
when it comes to plotting everything (~14000 points) with the smoothing function, it takes 10 minutes before the PNG is created. What can I do beforehand to speed up the process?
You'll have to start by providing more details. Some of the smoothing algorithms are computationally expensive, but how long the job actually takes depends on several factors:
1) which algorithm is used,
2) how many input data points there are,
3) how many unused columns of data are in the file,
4) how many samples of the smoothed curve you take
I'm using smooth bezier and forcing all 14000
data points to be sampled. There are no unused columns. Even when I reduce the sample count to half,
it takes 7 minutes.
That's pretty much the worst possible case, and pretty much guaranteed to be a massive waste of CPU time. Is your PNG anywhere near 14000 pixels wide? If not, it's doing you no good to evaluate that Bezier curve at 14000 sample points.
And bezier is by far the most CPU-intensive smoothing algorithm in gnuplot's arsenal. It's completely nonlocal, i.e. every sampled point depends on each and every one of your input points, and in a rather complicated way, too.
In a nutshell: you got what you asked for.
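For reference, the number of points at which the smoothed curve is evaluated is controlled by `set samples`, so one way to act on the advice above is to cap it at roughly the image width. A minimal sketch; the file name, image size, and sample count are placeholders to adapt to your setup:

```gnuplot
# One sample per horizontal pixel is plenty for an 800-pixel-wide PNG.
set terminal png size 800,600
set output 'plot.png'
set samples 800
plot 'data.dat' using 1:2 smooth bezier notitle
```

With 800 samples instead of 14000, the bezier evaluation does roughly 1/17th of the work while producing a visually identical curve at that image size.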
I've tried the other smoothing algorithms and they don't seem to clean up the data as much as bezier.
What I'd like is to use the previous data point's value when the current one exceeds the average or some maximum value.
So, without manually cleaning the data, how can I program gnuplot to use previous data values in the plot?
Simply said, you can't. If none of the existing algorithms fits the bill, you'll have to look elsewhere. gnuplot is not, nor should it try to be, a full-service number-crunching application. It's a plotting program.
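Since the cleaning has to happen outside gnuplot, the spike-clamping rule the poster describes is a few lines in any scripting language. The poster uses Perl, but here is the same idea sketched in Python; the threshold value and function name are illustrative assumptions, not anything from gnuplot:

```python
def clamp_outliers(values, limit):
    """Replace each value above `limit` with the previous kept value.

    Sketch of the poster's rule: when a point exceeds some maximum,
    reuse the last good point instead of the spike.
    """
    cleaned, prev = [], None
    for v in values:
        if prev is not None and v > limit:
            v = prev          # spike: fall back to the last kept value
        cleaned.append(v)
        prev = v
    return cleaned

# Example: the spike of 50.0 is replaced by the previous kept value.
print(clamp_outliers([1.0, 2.0, 50.0, 3.0], 10.0))  # [1.0, 2.0, 2.0, 3.0]
```

Run over the data file first, write the cleaned column back out, and point gnuplot at the cleaned file; the same logic translates directly to a short Perl filter in the poster's existing CGI setup.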