I need to run GSCapture on an old PC (Inspiron 6000, 1 GB RAM, Ubuntu 9.04). I'm finding that CPU usage is very high (about 50% at 320x240 and 1 fps). I don't need much resolution or frame rate for this project. I wondered if I could reduce the bpp using a custom pipe or something similar. I've tried the example on the advanced page of the website, but Processing can't find the class GSCustom?
Hello, the class GSCustom was renamed to GSPipeline a while ago, although the website didn't reflect the change (btw, thanks for pointing this out, I just updated the advanced use page). For further examples on the use of the GSPipeline class, please take a look at the example sketches included with the latest release of the library.
Great, thanks, I hadn't noticed the pipeline examples. All works fine until I try to adjust the depth or bpp; then the camera fires up but there's no display.
Will it only work at 32 bpp with a depth of 24, or are there lower values that I can use?
The last conversion in the pipeline to 32 bpp and 24 bits of depth, like:
pipe = new GSPipeline(this, "gnomevfssrc location=http://192.168.1.90/axis-cgi/mjpg/video.cgi? ! jpegdec ! ffmpegcolorspace ! video/x-raw-rgb, width=640, height=480, bpp=32, depth=24");
is required (unless the video is already in this format) because Processing needs video frames with 32 bpp and 24 bits of depth… This shouldn't exclude a source device that generates frames with a different bpp or depth, but the final conversion at the end, for piping into Processing, is a must (I think).
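To make that rule concrete, here is a small sketch in plain Java (the helper name and structure are mine, just for illustration, not part of the GSVideo API): whatever the source chain produces, the description you hand to GSPipeline ends with an ffmpegcolorspace conversion into the caps Processing needs.

```java
// Illustration only: assembling a GStreamer pipeline description string.
// The source URL is the one from the example above.
public class PipelineCaps {
    // Final caps Processing requires: RGB frames at 32 bpp, 24 bits of depth.
    static final String FINAL_CAPS =
        "video/x-raw-rgb, width=640, height=480, bpp=32, depth=24";

    // Append the mandatory colorspace conversion and caps to any source chain.
    static String withFinalConversion(String sourceChain) {
        return sourceChain + " ! ffmpegcolorspace ! " + FINAL_CAPS;
    }

    public static void main(String[] args) {
        String pipe = withFinalConversion(
            "gnomevfssrc location=http://192.168.1.90/axis-cgi/mjpg/video.cgi? ! jpegdec");
        // In a sketch, this string would be passed to: new GSPipeline(this, pipe);
        System.out.println(pipe);
    }
}
```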
Yes, thanks, that makes perfect sense. I just wonder what the cheapest conversion I can use is. I notice that when I use Cheese or something similar, the CPU usage is much less.
I have loads of RAM but my processor is poor. On the other hand, I don't need resolution higher than 160x120, as the video capture is simply for capturing blobs. For my project even latency isn't much of an issue, as I'm running at 1 fps. There must be some way I can take advantage of those requirements to bring down my CPU usage.
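Something like the following is what I have in mind (untested on my side; v4l2src and the device path are assumptions about my camera): downscale and drop the frame rate early in the chain, so the mandatory final conversion only touches 160x120 pixels at 1 fps instead of the full camera output.

```java
// Hypothetical low-CPU capture pipeline: scale down to 160x120 and
// drop to 1 fps *before* the final conversion, so the expensive
// colorspace step processes far fewer pixels.
// v4l2src and /dev/video0 are assumptions about the capture device.
public class LowResPipeline {
    static String build() {
        return "v4l2src device=/dev/video0"
            + " ! videoscale"
            + " ! video/x-raw-yuv, width=160, height=120"
            + " ! videorate"
            + " ! video/x-raw-yuv, framerate=1/1"
            + " ! ffmpegcolorspace"
            + " ! video/x-raw-rgb, width=160, height=120, bpp=32, depth=24";
    }

    public static void main(String[] args) {
        // In a sketch this would go to: new GSPipeline(this, build());
        System.out.println(build());
    }
}
```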
Also, is there a workaround for the non-working dispose() method?
What is the CPU usage when you run the pipeline from the command line with gst-launch? Is it still high compared with Cheese?
Actually the CPU usage is nice and low with gst-launch, the same as when I use Cheese. Is this significant?
It seems that Processing and/or GStreamer-java (the bindings that GSVideo uses to access GStreamer from Java) are introducing a significant overhead… unless there is another issue going on in your system. I'll run some tests on my Linux machine (also with Ubuntu 9.04).
I particularly suspect GStreamer-java, because it is based on JNA, which has historically been slow in comparison with JNI (the other binding mechanism for accessing C code from Java). Now, I just checked the JNA website, and the developers are announcing a new feature called direct call mapping:
which improves performance significantly, bringing it close to that of JNI…
As I suspected, the reason for the increased CPU usage is GStreamer-java. I tested on my Core 2 Duo machine with Ubuntu 9.04, and the capture pipeline with gst-launch takes around 20% on one core. The same pipeline in GSVideo brings CPU usage to 100%.
Well, that's even worse than mine. Can you think of a workaround that I could use?
Well, I was doing capture at 640x480 at 30 fps, but anyway…
I'm afraid that at this point I don't have a good workaround to suggest… this is an issue with the Java bindings themselves. As I said before, the latest version of the bindings seems to introduce a faster mechanism for accessing GStreamer, but I need to set aside some time to work this out…
Just a brief update. After looking at the gstreamer-java forums, I found this thread:
where at the end it is mentioned that the CPU usage shouldn't be much more than 20%, and that it goes down to just 9% when rendering to an xoverlay component. I don't know yet how this can help solve the performance issues in Processing, but it at least makes me think a fix is possible…
Thanks for your help. Yes, that looks promising, though maybe at too deep a level for my programming abilities.
I'm programming in Java with Processing as an embedded applet, so I'm wondering if I could do the video capture bit with JavaFX, utilising xoverlay mode to reduce my CPU usage, and then somehow feed the data stream into Processing (or perhaps skip the Processing bit altogether and do what I want in Java)?
For the time being, my workaround is to try and source a much faster computer for the project!
OK, I'll let you know if some improvement takes place (hopefully it will :-) ). Just keep an eye on the SourceForge page or on my blog: