You might have found out, as we did recently, that calibrating many cameras together is pretty hard. It can result in “jumps” when moving from one camera to another.
There are two reasons behind this problem. To explain them, and how we are going to solve them, let’s start with how camera calibration works.
How is a camera calibrated?
A calibrated camera has two sets of parameters: the intrinsic parameters and the extrinsic parameters.
The intrinsic parameters describe the camera sensor itself and its lens. They also contain coefficients that describe the lens distortion. Some distortion comes directly from the lens, as you can see in this OpenCV example: on the left, the original distorted image; on the right, the image after undistortion:
Other distortions are due to the way the sensor is soldered, since it is not always 100% flat, or the lens not exactly parallel to the sensor:
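To make those distortion coefficients concrete, here is a small numpy sketch of the Brown-Conrady model that OpenCV uses. All the numbers (focal lengths, principal point, k1/k2/k3, p1/p2) are made-up example values, not real calibration results:

```python
import numpy as np

# Hypothetical intrinsic parameters: focal lengths and principal point (pixels)
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
# Hypothetical distortion coefficients (OpenCV's Brown-Conrady model)
k1, k2, k3 = -0.28, 0.07, 0.0     # radial terms (lens distortion)
p1, p2 = 0.001, -0.0005           # tangential terms (sensor/lens misalignment)

def distort_and_project(X, Y, Z):
    """Project a 3D point in camera coordinates to a distorted pixel."""
    x, y = X / Z, Y / Z                       # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy         # apply focal length and center

u, v = distort_and_project(0.5, 0.25, 2.0)
```

The k coefficients model the radial (lens) distortion, while p1 and p2 model the tangential distortion caused by a sensor that is not perfectly parallel to the lens.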
The extrinsic parameters represent the position and orientation of the camera in world space:
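In code, applying the extrinsic parameters is just a rotation and a translation from world space into the camera’s own frame. Here is a minimal sketch with hypothetical values:

```python
import numpy as np

# Hypothetical extrinsics: the camera sits 5 units behind the world origin,
# looking down the +Z axis (R is identity, so the camera is not rotated).
R = np.eye(3)                     # world -> camera rotation
t = np.array([0.0, 0.0, 5.0])     # translation, t = -R @ C with camera center C = (0, 0, -5)

def world_to_camera(X_world):
    """Apply the extrinsic parameters: camera-frame point = R @ X + t."""
    return R @ X_world + t

X_cam = world_to_camera(np.array([1.0, 2.0, 0.0]))
print(X_cam)  # -> [1. 2. 5.] : the world point ends up 5 units in front of the camera
```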
We have calculated the intrinsic parameters using a distortion calculation algorithm, and we apply an undistortion with those parameters. The only problem is that we currently use the same parameters for all the cameras.
The extrinsic parameters are calculated when you run a calibration. To get great results, this calibration should use hundreds of points, but you cannot enter hundreds of points by hand.
The errors in the calibration result in absolute precision errors. You can see this error when several cameras (at least 4) are looking at the same point: the Tag position “jumps” from one position to another.
How to solve this problem?
Here is the big question! Rest assured, we are already testing a solution 😉
That solution is called wand calibration.
It consists in moving a wand, made of two tags separated by a known distance, around in front of the cameras. The cameras record the data, which is then processed all together using a set of algorithms (bundle adjustment and Levenberg-Marquardt).
Those algorithms precisely calculate the extrinsic parameters of each camera, but they also improve the intrinsic parameters, solving all our problems! The precision is great because instead of using a few points to calibrate, they use hundreds or even thousands of points.
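To give an idea of how this works, here is a toy sketch of the Levenberg-Marquardt loop at the heart of bundle adjustment: it refines a single 3D point so that its reprojection matches the pixels observed by two hypothetical cameras. A real bundle adjustment refines all camera parameters and thousands of points at once; this is not our actual implementation, and every matrix and value below is made up for the example.

```python
import numpy as np

def levenberg_marquardt(residual, x0, iters=50, lam=1e-3):
    """Tiny Levenberg-Marquardt loop with a numeric Jacobian.
    Solves (J^T J + lam*I) step = -J^T r and keeps steps that reduce the cost."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        eps = 1e-6
        J = np.empty((r.size, x.size))
        for j in range(x.size):          # finite-difference Jacobian, column by column
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5  # accept: behave more like Gauss-Newton
        else:
            lam *= 10.0                   # reject: behave more like gradient descent
    return x

# Toy setup: two hypothetical cameras observing one 3D point
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera shifted 1 unit

def project(P, X):
    u = P @ np.append(X, 1.0)
    return u[:2] / u[2]

X_true = np.array([0.3, -0.2, 4.0])
observations = [project(P1, X_true), project(P2, X_true)]

def reprojection_residual(X):
    """Pixel error of point X in both cameras, stacked into one vector."""
    return np.concatenate([project(P, X) - obs
                           for P, obs in zip([P1, P2], observations)])

X_refined = levenberg_marquardt(reprojection_residual, np.array([0.0, 0.0, 2.0]))
```

Even from a poor initial guess, minimizing the reprojection error recovers the true 3D position; bundle adjustment applies the same idea jointly to every camera and every observed wand position.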
There is one last trick: this automatic calibration has no reference point, so you have to add three points with known 3D coordinates to transform the positions into a known coordinate system.
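One standard way to do this last step (a sketch of the idea, not necessarily what we will ship) is the Kabsch algorithm: given the three known points expressed in both coordinate systems, it computes the rotation and translation that best aligns them. The points and the “true” transform below are made up for the example:

```python
import numpy as np

def rigid_align(A, B):
    """Kabsch algorithm: find rotation R and translation t with R @ A + t ~ B.
    A and B are 3xN arrays of matched points (N >= 3, not collinear)."""
    cA = A.mean(axis=1, keepdims=True)
    cB = B.mean(axis=1, keepdims=True)
    H = (A - cA) @ (B - cB).T                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against mirror solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cB - R @ cA
    return R, t

# Three reference points as measured in the arbitrary calibration frame (columns)...
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
# ...and the same points in the known room frame: here rotated 90 degrees
# about Z and shifted by (2, 0, 0), values chosen just for the example.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
t_true = np.array([[2.0], [0.0], [0.0]])
B = Rz @ A + t_true

R, t = rigid_align(A, B)
```

Once R and t are known, every position produced by the calibration can be mapped into the known coordinate system with the same transform.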
How are we going to implement that?
Our goal is to bring you this solution as fast as possible, so the fastest way is to release an open-source piece of software that performs those calculations. It won’t be the most practical at first, and we plan on implementing it directly in the Gateway later, but we thought you would prefer to have a solution sooner, even if it is a bit more complicated to use.
We plan on testing this solution over the next two weeks to validate that it works as expected. We have already started working on it 🙂
Then we will need about six weeks to implement a solution you can use. It will be on GitHub, so if you want to give us a hand developing this tool faster, you are more than welcome!