I found GraphicsMagick to be a much faster alternative to ImageMagick, and I really love it! But I could use some advice on speeding it up further.
I have a web project that creates videos on the fly from individual inputs, such as images that are displayed in the video. Until now I implemented this with Flash technology, because that let me do a lot of the video manipulation on the client side. But now I need an h.264 solution, which means I must render and encode a 60-second clip (25 fps, 640x360) in less than 2 seconds on the server side. The sources for the video are 1,500 images (JPEG or another format), one image per frame. Encoding these 1,500 images into an h.264 clip with GM takes less than a second (wow!).

The performance issue is elsewhere: I have to manipulate these 1,500 images before I can encode them into the video, and I couldn't find a solution that is fast enough. The fastest took 35 seconds, but it must be MUCH faster! :-) The point is that the kind of manipulation I need shouldn't require rendering a completely new image, but apparently GM does that anyway. For every single image I need to paste a small image onto it at given x,y coordinates. The size and the coordinates differ for each small image pasted onto each large image, but these parameters are the same across every run over the 1,500 source images. (For example, if the small image is 80x62 and must be pasted with its top-left corner at x=112, y=80 on source image number 423, then in every other video generation run the small image is sized and positioned exactly the same on source image number 423. The only difference is what the small image shows.)
So, only about 2% of the surface changes in each source image, but GM apparently renders the other 98% as well. Does anyone have an idea, or know a trick, for rendering only the changed part of the source images? I also tried compositing the source image (containing a transparent area) with another image of the same size, where the small image overlays the transparent area of the source image (composite -compose In), hoping that GM would skip rendering everything outside the transparent area, but apparently it renders the complete image.
Or is the problem not in the rendering itself but in the sequential batch processing in a loop? All source and target images are stored on a RAM disk, so I/O access time should not be the issue.
Best regards from Germany
It seems that the real challenge is to read your 1,500 images in less than a second and then output to the format you need. GM is unlikely to be the actual bottleneck here; most of the overhead will be in libjpeg, reading your JPEG files. You are not likely to get more than 200 frames/second that way. If you use a simple uncompressed input format like PPM, then well over 1,000 frames/second is possible with single-threaded code, and more throughput is possible with several reading threads. There is a fixed overhead in GM (and libjpeg) associated with reading a file of any size.
As far as rendering your logo goes, if you use composition (CompositeImage()) to add it, only the pixels within the region covered by the composite image will be updated. If you attempt to "draw" on the image (e.g. -draw), it would be much slower and may well touch all of the pixels.
For best performance, this would be performed from within a C or C++ program so there is no overhead from starting the software.
I should mention that with a simple format like PPM or MIFF, the input files can simply be concatenated together (e.g. 'cat frames*.ppm > input.ppm') and the concatenated file read as a single file. This helps avoid the GM overhead associated with opening a file. Note, however, that GM will read all of the frames before doing anything with them.