

Given that I've obtained a camera's focal length, principal point, and distortion values using third-party software (GML), how would I go about calibrating a camera in Processing?

I've seen this done a lot with MATLAB and OpenCV, both of which I am unfamiliar with :(

so what I'm really asking is:

1) Are there any libraries that support this kind of calibration/undistortion of a PImage?

or

2) What kind of matrix (or handy library) do I have to use to figure out where a pixel in a raw webcam frame ends up after calibration?

and, an extra question

3) Does anyone know how to do extrinsic calibration between two cameras? (I haven't done as much research on this one yet.)
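For question 2, the distortion model that OpenCV documents (radial coefficients k1, k2 plus tangential p1, p2, applied to normalized camera coordinates) has no closed-form inverse, but it can be inverted by fixed-point iteration to map a raw webcam pixel to its undistorted position. A minimal sketch in plain Java, with made-up intrinsics and coefficients standing in for whatever your calibration actually reports:

```java
public class Undistort {
    // Placeholder intrinsics: substitute the values your calibration software reports.
    static double fx = 600, fy = 600, cx = 320, cy = 240;
    static double k1 = -0.12, k2 = 0.03, p1 = 0.001, p2 = -0.0005;

    // Forward model: ideal (undistorted) pixel -> distorted pixel, as the lens sees it.
    static double[] distort(double u, double v) {
        double x = (u - cx) / fx, y = (v - cy) / fy;   // normalized camera coords
        double r2 = x * x + y * y;
        double radial = 1 + k1 * r2 + k2 * r2 * r2;
        double xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
        double yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
        return new double[] { fx * xd + cx, fy * yd + cy };
    }

    // Inverse model: distorted pixel (what the raw webcam gives you) -> ideal pixel.
    // No closed form exists, so iterate the forward model to a fixed point.
    static double[] undistort(double u, double v) {
        double xd = (u - cx) / fx, yd = (v - cy) / fy;
        double x = xd, y = yd;                         // initial guess: distorted coords
        for (int i = 0; i < 20; i++) {
            double r2 = x * x + y * y;
            double radial = 1 + k1 * r2 + k2 * r2 * r2;
            double dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
            double dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
            x = (xd - dx) / radial;                    // peel the distortion off
            y = (yd - dy) / radial;
        }
        return new double[] { fx * x + cx, fy * y + cy };
    }
}
```

For moderate distortion the iteration converges in a handful of steps; this per-pixel mapping is exactly what you need for "where does a detected pixel really sit", without undistorting the whole image.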

Hopefully someone knows something *fingers crossed*

Thanks.

Links to the OpenCV/MATLAB material I'm trying to reproduce in Processing:

http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html

http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html

http://www.mathworks.com/help/vision/ref/undistortimage.html


## Answers

I am interested in this topic as well. What is the purpose of your calibration? Image detection? Automated shape and size detection? In what way would the distortion values improve your application?

Kf

Spatial awareness based on image detection for the purposes of robotics.

I've got basic image detection already figured out (all I need for this project), but camera lenses naturally distort at the periphery. Since I'm basing all my spatial awareness on these values, which at times will be several meters away near the edge of the frame, distortion has to be minimized so errors don't get multiplied and cause problems.

So I'm currently trying to figure out how to do the intrinsic calibration to either undistort the camera input or figure out where a pixel (found via image detection) ends up after undistortion.

I think you need to figure out how that third-party software calculates those values; it should also tell you the meaning of the corrections and how to apply them to your data. I imagine that to get those corrections you used a calibration target?

We can easily get lost in the terminology here. You also mention extrinsic calibration. Would the values from the third-party software qualify as intrinsic or extrinsic?

I foresee you will create your own calculation matrix to apply these corrections once you find out their meaning (and thus how to use them). I will let the experts provide more ideas, as I have done this type of correction neither at this advanced level nor in Processing.

Kf

Using a calibration target, the third-party software spits out: intrinsic values (focal lengths and principal point), distortion coefficients (confusingly lumped in with the intrinsics most of the time), and extrinsic values (translation and rotation between the cameras).
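For reference, those groups of values slot into the standard pinhole projection: the intrinsics form a 3x3 matrix K, the extrinsics a 3x4 matrix [R|t], and a 3D point projects through K·[R|t] followed by a divide by the homogeneous coordinate. A sketch with placeholder numbers (not anyone's actual calibration output):

```java
public class Pinhole {
    // Placeholder intrinsics (focal lengths + principal point); substitute your own.
    static double[][] K = {
        { 600, 0, 320 },
        { 0, 600, 240 },
        { 0,   0,   1 } };
    // Placeholder extrinsics [R|t]: here identity rotation and zero translation.
    static double[][] Rt = {
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 } };

    // Project a 3D point to a pixel: [u v w]^T = K * [R|t] * [X Y Z 1]^T, then divide by w.
    static double[] project(double X, double Y, double Z) {
        double[] Xh = { X, Y, Z, 1 };          // homogeneous 3D point
        double[] cam = new double[3];          // point in camera coordinates
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                cam[i] += Rt[i][j] * Xh[j];
        double[] px = new double[3];           // homogeneous pixel
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                px[i] += K[i][j] * cam[j];
        return new double[] { px[0] / px[2], px[1] / px[2] };
    }
}
```

Note this projection is the *distortion-free* model; the distortion coefficients act on the normalized coordinates in between [R|t] and K.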

In a sense, I have already extracted all the needed variables. I'm currently casting around to see if anyone has any idea how to apply them in Processing >_<

Failing that, yes, I'll be trying my hand at parsing all the maths hidden under the OpenCV/MATLAB code via the linked pages (and I'm familiar with neither the maths nor the code T-T)

Well, I am not an expert but I have developed a method regarding this issue...

You say that after using a calibration target (a checkerboard?) you get camera parameters. How exactly are you calibrating these cameras? One at a time using the calibration target? As far as I know, you need to come up with a so-called fundamental matrix (among other names), which is built from correspondences between two projection matrices (cameras, projectors...).

What I do is: I have a 3D point in space which is accurately measured by the system (it can be done using AR markers, Kinects, etc.), then I relate this 3D point to a 2D point I see (mouse click) in the second camera.

I do this at least 6 times, in a specific order, so I collect enough information (a set of 3D points and 2D points) to build that matrix.

I do the math (I can explain it if wanted) using OpenCV for Processing, BUT you can do it with another library that includes transpose and inverse/pseudo-inverse operations for matrices, like JAMA.

Then I end up with a 4x3 matrix: whatever 3D point I multiply by this matrix, I get the exact 2D (x, y) point in the second camera. But again, you need to supply this 3D point somehow. Is that of any use to you?
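The procedure described above (at least 6 3D-to-2D correspondences stacked into a linear system) is the classic Direct Linear Transform. A self-contained sketch in plain Java that estimates the matrix by least squares, with no external matrix library; it writes the matrix as 3x4 acting on column vectors, which is the same object as the 4x3 row-vector form, just transposed. The p34 = 1 normalization is an assumption that fails only in the degenerate case where that entry is truly zero:

```java
public class DLT {
    // Estimate a 3x4 projection matrix P (with P[2][3] fixed to 1) from n >= 6
    // 3D-2D correspondences, via the normal equations of the DLT system.
    static double[][] estimate(double[][] pts3d, double[][] pts2d) {
        int n = pts3d.length;
        double[][] A = new double[2 * n][11];
        double[] b = new double[2 * n];
        for (int i = 0; i < n; i++) {
            double X = pts3d[i][0], Y = pts3d[i][1], Z = pts3d[i][2];
            double u = pts2d[i][0], v = pts2d[i][1];
            // Each correspondence contributes two linear equations in the 11 unknowns.
            A[2 * i]     = new double[] { X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z };
            b[2 * i]     = u;
            A[2 * i + 1] = new double[] { 0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z };
            b[2 * i + 1] = v;
        }
        double[] p = solveNormalEquations(A, b);
        return new double[][] {
            { p[0], p[1], p[2],  p[3] },
            { p[4], p[5], p[6],  p[7] },
            { p[8], p[9], p[10], 1    } };
    }

    // Least squares: solve (A^T A) x = A^T b by Gauss-Jordan with partial pivoting.
    static double[] solveNormalEquations(double[][] A, double[] b) {
        int m = A.length, n = A[0].length;
        double[][] M = new double[n][n + 1];   // augmented normal-equation system
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                for (int k = 0; k < m; k++) M[i][j] += A[k][i] * A[k][j];
            for (int k = 0; k < m; k++) M[i][n] += A[k][i] * b[k];
        }
        for (int col = 0; col < n; col++) {
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
            double[] tmp = M[col]; M[col] = M[piv]; M[piv] = tmp;
            for (int r = 0; r < n; r++) {
                if (r == col) continue;
                double f = M[r][col] / M[col][col];
                for (int c = col; c <= n; c++) M[r][c] -= f * M[col][c];
            }
        }
        double[] x = new double[n];
        for (int i = 0; i < n; i++) x[i] = M[i][n] / M[i][i];
        return x;
    }

    // Apply the matrix: 3D point -> 2D pixel (divide by the homogeneous coordinate).
    static double[] project(double[][] P, double X, double Y, double Z) {
        double u = P[0][0]*X + P[0][1]*Y + P[0][2]*Z + P[0][3];
        double v = P[1][0]*X + P[1][1]*Y + P[1][2]*Z + P[1][3];
        double w = P[2][0]*X + P[2][1]*Y + P[2][2]*Z + P[2][3];
        return new double[] { u / w, v / w };
    }
}
```

The 6-point minimum comes from counting: 11 unknowns, 2 equations per correspondence; more (non-coplanar) points give a least-squares fit that averages out click and measurement noise.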

Oh wow, that's almost exactly what I'm looking for (the 4x3 matrix)

I've been doing calibration with a checkerboard, yes: an IR depth sensor and an HD webcam taking photos simultaneously at a fixed distance/rotation (I ended up with 12 usable sets).

As both cameras can detect the same checkerboard and one of them provides depth, I've technically got my 3D and 2D points.

I've been using the GML C++ Camera Calibration Toolbox[1] to extract all the calibration data for the cameras, but reading up, it seems like the fundamental matrix is a simpler way to do this (not bothering to figure out some of the variables and just skipping to how the cameras relate to one another).

If you could explain the math to get the fundamental matrix, that would be awesome. I'm using OpenCV for Processing, but if you think it is easier with another library, that is also fine (my matrix math is super rusty though...).

This would be a giant boon, as it would bypass the part that I'm having the most difficulty with, tbh.

Also, I've been trying to wrap my head around the fundamental matrix: do you know if it accounts for distortion (basically fisheye from lenses)? I think I've mostly figured out distortion, but I'm trying to work out whether I need to apply it before computing the F matrix or not.

[1] http://graphics.cs.msu.ru/en/node/909
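On the distortion question above: the fundamental matrix is a purely projective relation between two pinhole views and does not model lens distortion, so the usual practice is to undistort the pixel coordinates in both images first, then estimate and apply F. For a true (undistorted) correspondence x1 in camera 1 and x2 in camera 2, the epipolar constraint x2ᵀ·F·x1 = 0 holds, which makes a cheap numerical sanity check. A small sketch (the example F below is a toy matrix for a pure sideways translation with identity intrinsics, not anyone's real calibration):

```java
public class Epipolar {
    // Epipolar residual: x2^T * F * x1, using homogeneous pixel coordinates.
    // Zero (up to noise) means the pair lies on corresponding epipolar lines.
    static double residual(double[][] F, double u1, double v1, double u2, double v2) {
        double[] x1 = { u1, v1, 1 };
        double[] x2 = { u2, v2, 1 };
        double[] Fx1 = new double[3];          // epipolar line of x1 in image 2
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                Fx1[i] += F[i][j] * x1[j];
        return x2[0] * Fx1[0] + x2[1] * Fx1[1] + x2[2] * Fx1[2];
    }
}
```

Checking this residual over your 12 checkerboard sets is a quick way to tell whether forgetting (or double-applying) the undistortion step has corrupted the geometry.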

I am really sorry for my delay. This is where you can find the code I have made so far: https://github.com/bontempos/processing/tree/master/matrices4projectorCalibration

The code is commented, but please let me know if you have any requests. The math is pretty much multiplying matrices in the right way, but this was really tricky to piece together from the examples I found on the internet, due to my lack of intuition in this subject.

This is still basic, and I want (after next week) to make a video explaining how to use it in more advanced ways.

There is a "TODO" part which might not be that hard to implement. In case you guys can give me a hand that would be awesome ;)

I really hope this can be of some use. Best