OpenIMAJ has some core functionality for capturing and processing video.
Xuggler is a Java framework for reading and writing video via the FFMPEG libraries. OpenIMAJ has a wrapper (XuggleVideo) for the Xuggler framework that allows Xuggler to act as a Video within OpenIMAJ.
For example, to display a video in a window, it's a matter of creating a XuggleVideo object and passing it to a VideoDisplay object:
    XuggleVideo v = new XuggleVideo( new File( "video.mp4" ) );
    VideoDisplay<MBFImage> vd = VideoDisplay.createVideoDisplay( v );
...and that's it. The video will play. The great thing is that with just a few lines of code, we can begin to do some interesting stuff with the frames that are being decoded from the file.
Let's overlay a histogram of each frame onto the video frame. To do this we can utilise the VideoDisplayListener's beforeUpdate method, which is called just before the video display is updated. This method receives the frame that is about to be shown to the user, so we can draw a histogram onto the frame before it gets shown.
So, first we make our class implement the VideoDisplayListener interface, add it as a listener to the VideoDisplay and then implement the beforeUpdate method.
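The wiring might look something like the following sketch (the class name and video filename are illustrative; beforeUpdate is left empty here and filled in below):

```java
import java.io.File;

import org.openimaj.image.MBFImage;
import org.openimaj.video.VideoDisplay;
import org.openimaj.video.VideoDisplayListener;
import org.openimaj.video.xuggle.XuggleVideo;

public class VideoHistogramDemo implements VideoDisplayListener<MBFImage>
{
    public VideoHistogramDemo() throws Exception
    {
        XuggleVideo v = new XuggleVideo( new File( "video.mp4" ) );
        VideoDisplay<MBFImage> vd = VideoDisplay.createVideoDisplay( v );

        // Register ourselves so we get a callback for every frame
        vd.addVideoListener( this );
    }

    public void beforeUpdate( MBFImage frame )
    {
        // Called just before each frame is shown - draw onto the frame here
    }

    public void afterUpdate( VideoDisplay<MBFImage> display )
    {
        // Called just after each frame has been shown
    }
}
```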
To do that we need something that will plot histograms. So, here's a really basic little function that takes an image to draw onto (the MBFImage), an image to calculate the histogram of (the FImage) and the colour in which to
plot the histogram results onto the colour image.
    /**
     * Plot the histogram of one image into another image in a given colour.
     * @param img The image to plot into
     * @param img2 The image whose histogram is to be plotted
     * @param colour The colour in which to plot the histogram
     */
    public void plotHisto( MBFImage img, FImage img2, Float[] colour )
    {
        // The number of bins is set to the image width here, but
        // we could specify a fixed number instead.
        int nBins = img.getWidth();

        // Calculate the histogram
        Histogram h = HistogramProcessor.getHistogram( img2, nBins );

        // Find the maximum so we can scale the bins
        double max = h.max();

        // Work out how wide to draw each bar
        double lineWidth = img.getWidth() / nBins;

        // Now draw all the bins
        int x = 0;
        for( double d : h.getVector() )
        {
            img.drawLine( x, img.getHeight(),
                x, img.getHeight() - (int)(d/max*img.getHeight()),
                (int)lineWidth, colour );
            x += lineWidth;
        }
    }
We can use this method in the beforeUpdate method to plot the red, green and blue channels of our video frame onto the video frame itself.
    /**
     * {@inheritDoc}
     * @see org.openimaj.video.VideoDisplayListener#beforeUpdate()
     */
    public void beforeUpdate( MBFImage frame )
    {
        plotHisto( frame, frame.getBand(0), RGBColour.RED );
        plotHisto( frame, frame.getBand(1), RGBColour.GREEN );
        plotHisto( frame, frame.getBand(2), RGBColour.BLUE );
    }
The result we should get is something like the following: a frame of the video with the red, green and blue channel histograms overlaid.
For OpenIMAJ we have created a cross-platform, mavenised, video capture system; that is, all the cross-platform native code you require to capture video from your Linux, Mac or Windows system is included in the OpenIMAJ core-video-capture JAR file. This means you can just import the JAR and off you go with your live video application.
The capture system is encapsulated in the VideoCapture class, which extends the Video abstract class. This means that any application you write against the Video class can easily be made to use live video through the VideoCapture class.
So, our above application that displays the histogram of video frames can be very easily adapted to show the histogram of live images. We simply need to change one line. Instead of instantiating a XuggleVideo object, we instantiate a VideoCapture object:
    VideoCapture v = new VideoCapture( 320, 240 );
    VideoDisplay<MBFImage> vd = VideoDisplay.createVideoDisplay( v );
That really is all there is to it! Now we have a live webcam histogram display.
Anonymous
Hi, can I take a photo from the webcam using the OpenIMAJ video capture library while it's playing video?
Hi, not sure if you mean something specific, but you can take a snapshot and save it to disk like so:
VideoCapture vc = new VideoCapture( ... );
...
MBFImage snapshot = vc.getNextFrame();
ImageUtilities.write( snapshot, "png", new File("snapshot.png") );
Hey, I am trying to grab a video signal from a USB-connected S-Video grabber (Elgato) with the OpenIMAJ libraries on a Mac, but the method VideoCapture.getVideoDevices() only recognises internal and external (USB) webcams. With the webcams it works fine. Do you have any recommendation, or does it generally not work with OpenIMAJ? Thanks!
Those Elgato capture devices typically only work with the software that comes with them (i.e. EyeTV); they don't present themselves to the system as a capture device like a webcam does, which is why it doesn't work.
If it's really important for you to get live video from the s-video grabber into OpenIMAJ, it might be possible to use VLC to grab the video from the Elgato device (there's a plugin http://www.videolan.org/vlc/download-eyetv.html), and then transcode and publish that video to an rtsp stream. In OpenIMAJ you could then use the XuggleVideo class to connect to the stream and reconstruct the video. I've no idea if/how well this would work though.
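If you did get such a stream published, connecting from OpenIMAJ might look something like this sketch (the rtsp URL is purely illustrative; substitute whatever VLC publishes):

```java
import org.openimaj.image.MBFImage;
import org.openimaj.video.VideoDisplay;
import org.openimaj.video.xuggle.XuggleVideo;

public class StreamDemo
{
    public static void main( String[] args )
    {
        // The stream URL is an example only - point it at your VLC rtsp output
        XuggleVideo video = new XuggleVideo( "rtsp://localhost:8554/elgato" );
        VideoDisplay<MBFImage> display = VideoDisplay.createVideoDisplay( video );
    }
}
```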
I'm trying to capture the video from a live stream in a browser using
Video<MBFImage> video1 = new XuggleVideo(new URL("http://nknvcreplay.nic.in/replay/webcastShow.html"));
but it's not working. Do you have any recommendations on how to capture video from the browser during live streaming?
Thanks!
You can't just give the URL of an HTML page as input - you need the URL of the actual video file, or the URL of the rtp/rtsp stream of the video.
Hi Jonathon - I'm on the point of giving up with OpenIMAJ as I can't see any way through the multiple errors I'm experiencing - the documentation and user base are just too thin.
For example, your simple video display example works fine for me with an mpg file but not with mp4 (could not be opened by ffmpeg). Conversely, a test program to write an mp4 file works fine (albeit I can't find any way to release the webcam without rebooting!) but won't write mpg. When I try to get under the hood and use methods directly, it seems that the Javadoc doesn't line up with the code I'm seeing. For example, OpenIMAJGrabber, which the doc says is public, is denied to me when I try to reference it in my program - I'm told it's not public at all. Web searches fail to enlighten. I'm also bothered by the fact that OpenIMAJ sits on top of Xuggle, which I know is no longer maintained.
I'm working on Netbeans under Windows 10 using your Maven build, so I assume I have all the correct libraries.
I don't /want/ to give up, as OpenIMAJ seems the only open source Java library that's being actively maintained. If I can get some support from soton I'd be very keen to work up a more comprehensive set of tutorial examples by creating a blog to document my efforts. This might help us both out if you're at all concerned to see OpenIMAJ more widely used. I believe I've got a useful project in mind that might attract some interest.
Please be blunt in your response. I don't want to waste anybody's time, least of all my own.
Regards, Martin Joyce
Last edit: Jonathon Hare 2015-10-31
Hello, and thank you so much for this amazing framework bringing Java to the image processing world!
I'm trying to do camera calibration using the library. I've been able to detect the inner corners in a chessboard using ChessboardCornerFinder. I can grab several images from different viewpoints and retrieve the inner corners of each. So far so good!
But now I would like to compute camera intrinsic parameters using CameraCalibration. I've read papers related to this computation but without success...
My problem: what is the first parameter of the CameraCalibration constructor?
    CameraCalibration(List<List<? extends IndependentPair<? extends Point2d, ? extends Point2d>>> points, int width, int height)
width and height are the image size. The points parameter is surely related to the inner corners detected during the previous step, but how are they arranged?
Thanks,
frank
Hi Frank,
Basic operation, including the point ordering (see the buildModel() method), is shown in the demo code here: https://github.com/jonhare/COMP3204/blob/master/app/src/main/java/uk/ac/soton/ecs/comp3204/l11/CalibrationDemo.java
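As a hedged sketch of how I'd expect the points argument to be arranged (assumptions: one inner list per calibration image; each pair holds the ideal model-plane corner first and the detected image corner second; names like allDetectedCorners and squareSize are illustrative, not part of the API):

```java
import java.util.ArrayList;
import java.util.List;

import org.openimaj.math.geometry.point.Point2dImpl;
import org.openimaj.util.pair.IndependentPair;

// ...

// Physical size of one chessboard square, in whatever units you
// want the calibration expressed in (illustrative value)
float squareSize = 30f;

// Outer list: one entry per calibration image.
// Inner list: one pair per detected corner.
List<List<IndependentPair<Point2dImpl, Point2dImpl>>> points = new ArrayList<>();

// allDetectedCorners is hypothetical: for each image, corners[r][c] is the
// corner at row r, column c found by ChessboardCornerFinder
for( Point2dImpl[][] corners : allDetectedCorners )
{
    List<IndependentPair<Point2dImpl, Point2dImpl>> imagePoints = new ArrayList<>();

    for( int r = 0; r < corners.length; r++ )
        for( int c = 0; c < corners[r].length; c++ )
            imagePoints.add( IndependentPair.pair(
                new Point2dImpl( c * squareSize, r * squareSize ), // ideal model-plane corner
                corners[r][c] ) );                                 // observed image corner

    points.add( imagePoints );
}

// width/height are the pixel dimensions of the calibration images
CameraCalibration calib = new CameraCalibration( points, width, height );
```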
Hello ,
How can I split a video? Is it possible with OpenIMAJ? Is there an easy way? I have some videos bigger than 1 GB. I want to split each video into 128 MB parts (the default block size of HDFS) and analyse them with Hadoop. I created a custom InputFormat. When I set isSplitable to false, the whole video is processed in one map. I want to split the video to make processing with multiple maps possible.
Thanks.
Off the top of my head there isn't currently a way to do this directly with OpenIMAJ, unless you manually pre-process each video and break it into parts (e.g. using a VideoWriter to save each part, creating a new file when a certain file size is reached). Alternatively, it might be possible to use the underlying Xuggler API to read specific parts of each video; however, I don't believe that will be possible with most video formats, as they are not naturally splittable.
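The manual pre-processing approach might be sketched like this (the XuggleVideoWriter constructor arguments and the file-size check are assumptions - check the javadoc, and note re-encoded parts won't land exactly on 128 MB boundaries):

```java
import java.io.File;

import org.openimaj.image.MBFImage;
import org.openimaj.video.xuggle.XuggleVideo;
import org.openimaj.video.xuggle.XuggleVideoWriter;

public class SplitDemo
{
    public static void main( String[] args ) throws Exception
    {
        XuggleVideo in = new XuggleVideo( new File( "big.mp4" ) );

        long maxPartBytes = 128L * 1024 * 1024; // roughly one HDFS block
        int part = 0;
        File outFile = new File( "part-" + part + ".mp4" );
        XuggleVideoWriter out = new XuggleVideoWriter( outFile.getPath(),
                in.getWidth(), in.getHeight(), in.getFPS() );

        for( MBFImage frame : in )
        {
            out.addFrame( frame );

            // Rough size check - the writer may buffer output,
            // so part sizes will only be approximate
            if( outFile.length() >= maxPartBytes )
            {
                out.close();
                part++;
                outFile = new File( "part-" + part + ".mp4" );
                out = new XuggleVideoWriter( outFile.getPath(),
                        in.getWidth(), in.getHeight(), in.getFPS() );
            }
        }
        out.close();
    }
}
```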