|
From: Bastian M. <bma...@we...> - 2006-02-22 14:37:51
|
When drawing large images (binary and ascii) gnuplot is veeeery slow on my (Windows) machine. E.g. with an RGB image of 1024*256 pixels it takes more than 5.5 s to output on any terminal. Profiling reveals that gnuplot spends >75% of its time in gp_realloc() <-- cp_extend() <-- get_data().

The crucial point is that the maximum increase of the points[] array is limited to 1000. Changing line 411 in plot2d.c

- cp_extend(current_plot, i + (i < 1000 ? i : 1000));
+ cp_extend(current_plot, i + i);

eliminates that problem, but uses exceedingly large amounts of memory.

Can anybody confirm this behaviour on non-Windows machines? Or is this a problem of gp_realloc() on Windows?

Bastian
--
Bastian Märkisch
Physikalisches Institut, Universität Heidelberg
|
From: Hans-Bernhard Bröker <br...@ph...> - 2006-02-22 15:12:13
|
Bastian Maerkisch wrote:
> The crucial point is that the maximum increase of the points[]
> array is limited to 1000.

I'm pretty sure that test is plain and buggily ass-backwards. The general approach to avoid excessively frequent realloc()s is to double the allocation size each time you have to grow it. I'm guessing here, but that business with 1000 is probably intended to avoid the rather pointless early stages of that process, i.e. to ensure that the allocation always grows by *at least* 1000, not at most 1000.

> eliminates that problem, but uses exceedingly large amounts
> of memory.

A factor of 2 of waste is not really excessive. But if you don't like it, it can always be replaced by something like

    new_size = (1.5 * old_size + 1000)

Call me biased, but I suspect this code had better use the existing dynarray.h stuff instead. I broke that code out of hidden3d for a reason, see?
|
From: Daniel J S. <dan...@ie...> - 2006-02-22 19:05:42
|
Hans-Bernhard Bröker wrote:
> Factor of 2 of waste is not really excessive.

Shouldn't be too bad. Factor of 2 at worst. As far as having to reassign memory: with images, having the array size, one would know the eventual number of points right away. With binary data, one could make a reasonable guess based on the file size. With ascii data it is not worth the trouble, as the reading of ascii data is just as time-consuming as reassigning memory.

> But if you don't like it,
> it can always be replaced by something like
>
> new_size = (1.5 * old_size + 1000)
>
> Call me biased, but I suspect this code had better use the existing
> dynarray.h stuff instead. I broke that code out of hidden3d for a
> reason, see?

Wasn't aware of it.

Dan
|
From: Ethan A M. <merritt@u.washington.edu> - 2006-02-22 16:20:30
|
I don't have time to do real profiling at the moment, but a quick test under linux handles a 1280x1024 image in 2.36 seconds using the current code, and 2.40 seconds using your modified code. I.e. no significant difference.

echo "plot '< convert xxx.jpeg avs:-' binary filetype=avs with rgbimage" > bench.gnu

time gnuplot bench.gnu
2.118u 0.260s 0:02.36 100.4% 0+0k 0+0io 0pf+0w

time gnuplot_new bench.gnu
2.129u 0.257s 0:02.40 98.7% 0+0k 0+0io 0pf+0w

But I agree that the current arbitrary limit of 1000 additional points is foolish.

On Wednesday 22 February 2006 06:37 am, Bastian Maerkisch wrote:
> When drawing large images (binary and ascii) gnuplot is veeeery
> slow on my (windows) machine. E.g. with an image (RGB) of 1024*256
> pixels it takes about >5.5s to output on any terminal.
>
> Profiling reveals that gnuplot spends >75% of it's time in
> gp_realloc() <-- cp_extend() <-- get_data().
>
> The crucial point is that the maximum increase of the points[]
> array is limited to 1000. Changing line 411 in plot2d.c
>
> - cp_extend(current_plot, i + (i < 1000 ? i : 1000));
> + cp_extend(current_plot, i + i);
>
> eliminates that problem, but uses exceedingly large amounts
> of memory.
>
> Can anybody confirm this behaviour on non-windows machines?
> Or is this a problem of gp_realloc() on Windows?
>
> Bastian

--
Ethan A Merritt
Biomolecular Structure Center
University of Washington, Seattle 98195-7742
|
From: Daniel J S. <dan...@ie...> - 2006-02-22 18:57:11
|
Bastian Maerkisch wrote:
> When drawing large images (binary and ascii) gnuplot is veeeery
> slow on my (windows) machine. E.g. with an image (RGB) of 1024*256
> pixels it takes about >5.5s to output on any terminal.
>
> Profiling reveals that gnuplot spends >75% of it's time in
> gp_realloc() <-- cp_extend() <-- get_data().
>
> The crucial point is that the maximum increase of the points[]
> array is limited to 1000. Changing line 411 in plot2d.c
>
> - cp_extend(current_plot, i + (i < 1000 ? i : 1000));
> + cp_extend(current_plot, i + i);
>
> eliminates that problem, but uses exceedingly large amounts
> of memory.
>
> Can anybody confirm this behaviour on non-windows machines?
> Or is this a problem of gp_realloc() on Windows?

In linux, even running through Octave, a 1024*256 image takes about 0.75 sec on a five-year-old Pentium 3 (about 1 GHz, I forget the memory speed). I agree the linear expansion rule is bad. (In fact, in some cases with images one should be able to make a reasonable guess about the final number of points required.) Why your system is so slow, though, I'm not sure. With the 1000 rule and a 1024*256 image, you'd execute cp_extend roughly 250 times. (Not good.) In that routine I see a TRACE_ALLOC() command which I'm sure is deactivated in my compilation. Perhaps if you have that turned on by default, that's where the slowdown comes from.

Dan
|
From: Ethan A M. <merritt@u.washington.edu> - 2006-02-23 03:48:55
|
On Wednesday 22 February 2006 06:37 am, Bastian Maerkisch wrote:
> When drawing large images (binary and ascii) gnuplot is veeeery
> slow on my (windows) machine. E.g. with an image (RGB) of 1024*256
> pixels it takes about >5.5s to output on any terminal.
>
> Profiling reveals that gnuplot spends >75% of it's time in
> gp_realloc() <-- cp_extend() <-- get_data().
OK, I've done a proper profiling run under linux (Mandriva 2006).
lascaux [141] cat rgbimage.dem
plot '< convert sen.jpeg avs:-' binary filetype=avs with rgbimage
replot
replot
replot
replot
replot
lascaux [142] identify sen.jpeg
sen.jpeg JPEG 1280x899 DirectClass 256kb 0.090u 0:01
Profiling output:
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
6.81 6.39 6.39 cb2gray (pm3d.c:174 @ 80c666c)
5.53 11.57 5.19 store2d_point (plot2d.c:842 @ 80b8711)
4.66 15.94 4.37 X11_image (x11.trm:1868 @ 80efae3)
3.74 19.45 3.51 cb2gray (pm3d.c:180 @ 80c66ee)
3.50 22.73 3.28 cb2gray (pm3d.c:178 @ 80c66d0)
3.09 25.63 2.90 store2d_point (plot2d.c:841 @ 80b849d)
2.87 28.32 2.69 store2d_point (plot2d.c:843 @ 80b899e)
[snip 200 lines]
0.00 93.81 0.00 6954 0.00 0.00 gp_realloc (alloc.c:295 @ 804bc00)
0.00 93.81 0.00 6930 0.00 0.00 cp_extend (plot2d.c:144 @ 80b587c)
>
> Can anybody confirm this behaviour on non-windows machines?
No.
Under linux, gp_realloc() takes zero time for all intents and purposes.
Most of the time (19% net) is spent in store2d_point().
The next largest chunk (14%) is in cb2gray().
After that, no single routine stands out.
> Or is this a problem of gp_realloc() on Windows?
No idea where the problem is under Windows.
Separate timing run (no profiling):
lascaux [2389] time ./gnuplot rgbimage.dem
1.854u 0.249s 0:02.70 77.4% 0+0k 0+0io 54pf+0w
So, 6 times through plotting a 1280x899 rgbimage takes less than
2 seconds including the file reading and program startup.
Your reported 5+ seconds for 1 plot of an image 1/3 that size
seems excessive any way you look at it.
--
Ethan A Merritt
Biomolecular Structure Center
University of Washington, Seattle 98195-7742
|