Hello and thank you so much for this amazing library.
I've been able to get the inner corners of a chessboard using ChessboardCornerFinder.
I now want to proceed with camera calibration using the CameraCalibration constructor, which takes three parameters:
CameraCalibration(List<List<? extends IndependentPair<? extends Point2d, ? extends Point2d>>> points,
int width, int height)
width and height are probably related to the image size.
But what about the points parameter?
Is this parameter linked to the inner corners detected with ChessboardCornerFinder?
If so, how are they arranged?
Thanks a lot,
frank
PS : sorry for multiple posts !
Hi Frank,
As you noted, height and width are related to the image size. Basic operation is as follows:
List<List<? extends IndependentPair<? extends Point2d, ? extends Point2d>>> corners =
        new ArrayList<List<? extends IndependentPair<? extends Point2d, ? extends Point2d>>>();
List<Point2dImpl> model = buildModel(patternWidth, patternHeight, 10);
ChessboardCornerFinder chessboard = new ChessboardCornerFinder(patternWidth, patternHeight,
        Options.FILTER_QUADS, Options.FAST_CHECK, Options.ADAPTIVE_THRESHOLD);

// Add detected corners from many frames:
chessboard.analyseImage(frame.flatten());
if (chessboard.isFound()) {
    corners.add(IndependentPair.pairList(model, chessboard.getCorners()));
}
// ...
chessboard.analyseImage(frame.flatten());
if (chessboard.isFound()) {
    corners.add(IndependentPair.pairList(model, chessboard.getCorners()));
}
// ... etc

// then run calibration (use Zhang's method - the CameraCalibration class is rather experimental
// [optimises an extra parameter, but not sure how well it works in practice])
final CameraCalibrationZhang calib = new CameraCalibrationZhang(corners, 640, 480);
with buildModel() as follows (which shows point order):
List<Point2dImpl> buildModel(int width, int height, double d) {
    final List<Point2dImpl> pts = new ArrayList<Point2dImpl>();

    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++)
            pts.add(new Point2dImpl(j * d, i * d));

    return pts;
}
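To sanity-check the ordering, here is a standalone version of the same loop (a sketch in plain Java, with double[] standing in for Point2dImpl so it runs without OpenIMAJ): the points come out row-major, with x (the column index times the square size) varying fastest.

```java
import java.util.ArrayList;
import java.util.List;

public class ModelOrderDemo {
    // Mirrors buildModel above, but with double[] {x, y} standing in for Point2dImpl
    static List<double[]> buildModel(int width, int height, double d) {
        final List<double[]> pts = new ArrayList<double[]>();
        for (int i = 0; i < height; i++)
            for (int j = 0; j < width; j++)
                pts.add(new double[] { j * d, i * d });
        return pts;
    }

    public static void main(String[] args) {
        // 9x6 inner corners, 10-unit squares (the opencv pattern arrangement)
        final List<double[]> model = buildModel(9, 6, 10.0);
        System.out.println(model.size());    // 54 points in total
        System.out.println(model.get(1)[0]); // second point: x = 10.0 (one square along)
        System.out.println(model.get(9)[1]); // first point of the second row: y = 10.0
    }
}
```

The detected corners from ChessboardCornerFinder must follow the same row-major sweep for the pairing with the model to make sense.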
Demo code here: https://github.com/jonhare/COMP3204/blob/master/app/src/main/java/uk/ac/soton/ecs/comp3204/l11/CalibrationDemo.java
Jonathon,
Thanks!
OK, so more than one image must be processed.
Do you have any advice on the number of images and the capture conditions?
I've seen an opencv video where something like 15 images are acquired before doing calibration.
And what about the chessboard? The more squares there are, the better it is?
Last edit: Anonymous 2016-05-05
In theory more images allow a better fit of the model (and thus less error). The test code that checks against Zhang's original results (for which he provided data) uses just 5 images.
I'm less sure about the requirements for the calibration chessboard - more squares will probably help with the fitting, but the detection of the corners of those squares becomes harder (as they will be captured with fewer pixels). For my experiments I've just used the standard 9x6 pattern provided with opencv: http://docs.opencv.org/2.4/_downloads/pattern.png
(note: in the code I posted above, the parameters to buildModel are (9, 6, 10.0) to reflect the arrangement of the opencv chessboard).
Just to check that I have understood the last model parameter in your example:
it obviously has 9x6 inner corners, and the last parameter (10.0) is the square side length in the preferred unit (probably mm in your example).
Correct?
thanks :-)
Yes, exactly that. The last parameter is the size of the squares in whatever metric system you want to use (mm, cm, ...). Choice here will affect the camera extrinsics (i.e. the estimated position of the camera will use measurements in that system), and the intrinsics (e.g. focal length, ...).
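To make the effect of the unit choice concrete, here is a tiny standalone sketch (plain Java, no OpenIMAJ; the 25 mm square size is a made-up example, not from this thread):

```java
public class SquareSizeDemo {
    // Position of the inner corner at (column j, row i) for square size d,
    // expressed in whatever unit d uses (mm, cm, ...)
    static double[] cornerPosition(int j, int i, double d) {
        return new double[] { j * d, i * d };
    }

    public static void main(String[] args) {
        // Hypothetical board with 25 mm squares
        final double[] inMm = cornerPosition(3, 2, 25.0);
        // The same board, measured in cm instead
        final double[] inCm = cornerPosition(3, 2, 2.5);
        System.out.println(inMm[0] + ", " + inMm[1]); // 75.0, 50.0 (mm)
        System.out.println(inCm[0] + ", " + inCm[1]); // 7.5, 5.0 (cm)
    }
}
```

Whichever unit the model uses, the estimated extrinsics (e.g. the camera translation) come out in that same unit.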
Hello, OpenImaj is a very interesting library. I'm trying to adopt it for my tasks: implementing depth detection with a stereo pair.
In the class CameraCalibration, the method getCameras() returns a list of cameras (in my case 8). Why so many?
Are there any examples of stereo calibration using this library?
Thank you!
The number of cameras returned by getCameras() is equal to the number of sets of matching points used at construction time - if you use matches from a calibration pattern across 8 images, then you'll get 8 camera objects returned (all with the relevant extrinsic parameters set, giving the positions of the camera in the scene [on the assumption that the camera moves and the calibration pattern is stationary, of course]).
There isn't currently any stereo calibration code - only single cameras are dealt with - although you can compute the epipolar geometry (ideally using the calibration to remove the radial distortion - see e.g. the undistort method of CameraIntrinsics - check the javadoc for caveats!) using RobustFundamentalEstimator and FundamentalModel (and from the Fundamental matrix and the intrinsic matrix (from the calibration code, get the CameraIntrinsics calibrationMatrix) it's possible to estimate the Essential matrix).
There's not much sample code currently I'm afraid:
https://github.com/openimaj/openimaj/blob/master/demos/sandbox/src/main/java/org/openimaj/demos/sandbox/c3d/Main.java shows how to compute the Fundamental matrix (although the code looks slightly incomplete)
https://github.com/jonhare/COMP3204/blob/master/app/src/main/java/uk/ac/soton/ecs/comp3204/l11/CalibrationDemo.java shows how to do live calibration of a single camera
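For the final step mentioned above - estimating the Essential matrix from the Fundamental matrix and the intrinsics - the standard relationship is E = K2ᵀ F K1 (for correspondences satisfying x2ᵀ F x1 = 0). Here's a self-contained sketch using plain 3x3 arrays; the intrinsics and F values are made up for illustration, and in practice K would come from the CameraIntrinsics calibrationMatrix and F from the robust estimator:

```java
public class EssentialDemo {
    static double[][] multiply(double[][] a, double[][] b) {
        final double[][] c = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        final double[][] t = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                t[i][j] = a[j][i];
        return t;
    }

    // E = K2^T * F * K1, with K1/K2 the intrinsic matrices of the two views
    static double[][] essentialFromFundamental(double[][] F, double[][] K1, double[][] K2) {
        return multiply(multiply(transpose(K2), F), K1);
    }

    public static void main(String[] args) {
        // Hypothetical intrinsics: focal length 500, principal point (320, 240)
        final double[][] K = { { 500, 0, 320 }, { 0, 500, 240 }, { 0, 0, 1 } };
        // Illustrative (not estimated) fundamental matrix
        final double[][] F = { { 0, -1, 2 }, { 1, 0, -3 }, { -2, 3, 0 } };
        // Single moving camera: same intrinsics for both views
        final double[][] E = essentialFromFundamental(F, K, K);
        System.out.println(E[0][0] + " " + E[1][1] + " " + E[2][2]);
    }
}
```

With a single moving camera (as in the calibration discussion above) the same K is passed for both views; for a true stereo rig each camera would have its own intrinsics.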