GSoC 2015 - Vision-Related Libraries

Hi, my name is Naoto. I'm a master's student at McGill University, and my projects (http://www.cim.mcgill.ca/~nhieda/) are mainly related to projection mapping. I use Processing for some prototyping because it's handy, but most of the time I work with openFrameworks, since I've found it has more computer-vision-related libraries (and a better Kinect library). So I'd like to contribute to such libraries.

I had to tweak oF/libfreenect before, so I think I can update the Kinect v1 library. SurfaceMapper is another interest, as I'm familiar with projection mapping, but building a new library for structured light and projection mapping could be even more beneficial for me.

Comments

  • You'll want to research JNI/JNA to learn how native interfaces to Java work. Greg Borenstein's OpenCV for Processing is also a good reference.

  • @shiffman, thanks. Instead of digging into the native interface, what about using an OpenCV wrapper for Java such as JavaCV (https://github.com/bytedeco/javacv) and building projector-camera calibration and mapping examples for Processing?

  • Yes, this is a reasonable path as well. Note that we are unlikely to fund a new OpenCV library, although you are welcome to propose contributions to Greg Borenstein's should you see missing features or have ideas for other improvements.

  • @shiffman, then I'll propose extending Greg's library by improving its calibration/3D features. I've been maintaining a projector-camera calibration library for openFrameworks (https://github.com/micuat/ofxActiveScan), so I can port projector-camera calibration, structured-light 3D scanning, and projection mapping API/examples to Processing, which I'm also proposing to OpenCV.

  • Hi Naoto,

    I think this is a great idea for a project. I've poked around a little bit with camera calibration and 3D features in OpenCV for Processing, but have never had a project of my own as an excuse to work more on this stuff.

    I have an example in the library that finds the corners of a chessboard, which is something I wrapped:

    https://github.com/atduskgreg/opencv-processing/blob/master/examples/CalibrationDemo/CalibrationDemo.pde
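
    Under the hood it's OpenCV's findChessboardCorners; here's a rough standalone sketch of the same idea through the raw Java API (the file name and the 9x6 pattern size are just placeholders, and getGray() is the library's accessor for the underlying gray Mat):

    import gab.opencv.*;
    import org.opencv.core.*;
    import org.opencv.calib3d.Calib3d;

    void setup() {
      size(640, 480);
      // Placeholder input: any photo of a printed chessboard calibration pattern.
      PImage img = loadImage("chessboard.jpg");
      OpenCV opencv = new OpenCV(this, img);

      // Number of interior corners (columns x rows) of the printed pattern.
      Size patternSize = new Size(9, 6);
      MatOfPoint2f corners = new MatOfPoint2f();
      boolean found = Calib3d.findChessboardCorners(opencv.getGray(), patternSize, corners);

      println(found ? "found " + corners.rows() + " corners" : "pattern not found");
    }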

    And I've played around some with perspective correction:

    https://github.com/atduskgreg/opencv-processing/blob/master/examples/WarpPerspective/WarpPerspective.pde

    but have yet to fully wrap it.
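
    Unwrapped, it boils down to getPerspectiveTransform plus warpPerspective from the Java API. A rough sketch (the image name and corner coordinates are placeholders; in a real sketch they'd come from corner detection or mouse input):

    import gab.opencv.*;
    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    void setup() {
      size(500, 500);
      // Placeholder input: any photo containing a quad you want to rectify.
      PImage img = loadImage("marker.jpg");
      OpenCV opencv = new OpenCV(this, img);

      // Corners of the region in the source image (placeholder values)
      // and the corners of the square they should map onto.
      MatOfPoint2f srcQuad = new MatOfPoint2f(
        new Point(120, 80), new Point(400, 100),
        new Point(420, 360), new Point(100, 340));
      MatOfPoint2f dstQuad = new MatOfPoint2f(
        new Point(0, 0), new Point(499, 0),
        new Point(499, 499), new Point(0, 499));

      Mat H = Imgproc.getPerspectiveTransform(srcQuad, dstQuad);
      Mat rectified = new Mat(500, 500, CvType.CV_8UC1);
      // getGray() exposes the underlying gray Mat.
      Imgproc.warpPerspective(opencv.getGray(), rectified, H, rectified.size());

      println("3x3 perspective transform:\n" + H.dump());
      // (Convert rectified back into a PImage to actually display the corrected view.)
    }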

    We should specify more precisely what you'd like to achieve in this project, but here's the basic direction I'd propose:

    • Plan for a tool (written in Processing) and/or a library that would do something useful for projection mapping/projector-camera calibration/structured light scanning.

    • Figure out what OpenCV functionality you need for that tool/library

    • Add support for that functionality in OpenCV for Processing.

    • Use OpenCV for Processing as a dependency of your tool/library.

    I think this will prevent you from duplicating work that's already done and make the most of the work you do take on.

    A couple of other notes:

    First:

    OpenCV for Processing is based on the official OpenCV Java API (http://docs.opencv.org/java/). This makes it very easy to experiment with functionality: once you've imported gab.opencv.*, you can import and use any of the OpenCV classes (take a look at that second sketch, WarpPerspective, for an example). This means the best way to proceed is usually to prototype new functionality in standalone Processing sketches, and then, once you've got a good API going, add it to OpenCV for Processing very smoothly.
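
    In a nutshell the pattern looks like this (placeholder image name, with Core.mean standing in for whatever call you actually want to experiment with):

    import gab.opencv.*;
    import org.opencv.core.Core;
    import org.opencv.core.Scalar;

    void setup() {
      // Creating an OpenCV object makes sure the native library is loaded,
      // so any org.opencv.* class can be used directly afterwards.
      OpenCV opencv = new OpenCV(this, loadImage("test.jpg"));

      // A raw OpenCV Java call on the Mat behind the Processing image.
      Scalar mean = Core.mean(opencv.getGray());
      println("mean gray level: " + mean.val[0]);
    }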

    Second:

    As I'm sure you know, Kyle McDonald (and others) have built lots of good tools in this area in the openFrameworks community. It would be good (if possible) not to exactly duplicate the work they've done. Building out code libraries that give people writing code in Processing access to this stuff is great, but for more end-user-facing tools, it would be best not to duplicate what already exists in projects like Kyle's ProCamToolkit: https://github.com/YCAMInterlab/ProCamToolkit

    Anyway, I think this is an exciting area and I'm happy to help out however I can.

  • Hi @gregab, I appreciate your detailed comments. Indeed, not duplicating work that's already been done is an important thing to keep in mind; ProCamToolkit and ProCamCalib already exist and are well written. These calibration tools just dump yml files with camera parameters, so I don't think they need to be ported to Processing. Rather, a projection mapping tool written in Processing (like SurfaceMapper, or mapamok in openFrameworks) has the potential to encourage users to integrate their sketches with mapping. Also, I would like to build something that is easy to use; camera calibration is usually complicated, and users have to learn camera geometry and other math to understand it.

    With those two points in mind (not duplicating existing tools and making the tools easy to use), I've set two goals:

    2D mapping tool: the idea follows MadMapper's workflow. Structured light is used to find the projector-camera correspondence (this part can be ported from my openFrameworks add-on) and to render a view from the projector's perspective. Then a SurfaceMapper-like quad mapping or free-drawing tool can be implemented to start projection mapping; a minimal sketch of the quad-mapping part follows.
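
    For example, the quad-mapping part alone could look something like this in plain Processing (no OpenCV involved; corners are dragged with the mouse, and since texturing a single quad this way gives only an approximate, non-projective warp, a real tool would subdivide the mesh or apply a proper homography):

    // Offscreen buffer holding the content to be mapped,
    // textured onto a quad whose corners can be dragged into place.
    PGraphics content;
    PVector[] corners = new PVector[4];
    int grabbed = -1;

    void setup() {
      size(800, 600, P2D);
      content = createGraphics(400, 400, P2D);
      corners[0] = new PVector(100, 100);
      corners[1] = new PVector(700, 100);
      corners[2] = new PVector(700, 500);
      corners[3] = new PVector(100, 500);
    }

    void draw() {
      // Placeholder content; in practice this would be the user's own sketch.
      content.beginDraw();
      content.background(0);
      content.fill(255);
      content.ellipse(200 + 150 * cos(frameCount * 0.05), 200, 50, 50);
      content.endDraw();

      background(0);
      noStroke();
      beginShape(QUAD);
      texture(content);
      vertex(corners[0].x, corners[0].y, 0, 0);
      vertex(corners[1].x, corners[1].y, content.width, 0);
      vertex(corners[2].x, corners[2].y, content.width, content.height);
      vertex(corners[3].x, corners[3].y, 0, content.height);
      endShape();
    }

    void mousePressed() {
      for (int i = 0; i < 4; i++) {
        if (dist(mouseX, mouseY, corners[i].x, corners[i].y) < 15) grabbed = i;
      }
    }

    void mouseDragged() {
      if (grabbed >= 0) corners[grabbed].set(mouseX, mouseY);
    }

    void mouseReleased() {
      grabbed = -1;
    }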

    3D scanning and mapping tool: instead of requiring a thorough manual projector-camera calibration, I will port my oF add-on to a Processing library so that users can try out easy 3D scanning with automatic calibration. This also requires structured lighting, plus some SVD / Levenberg-Marquardt minimization features from OpenCV; a sketch of the SVD part through the Java bindings is below. The 3D scan data can then be used for projection mapping via quad warping or GLSL.
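
    As for the OpenCV part, the Java bindings already expose what's needed; for instance, the homogeneous least-squares problems that show up in self-calibration and triangulation can be handled with Core.SVDecomp. A minimal illustration of that pattern (a toy matrix, not real scan data):

    import gab.opencv.*;
    import org.opencv.core.*;

    void setup() {
      // The OpenCV-for-Processing object is only created here so that the
      // native library gets loaded; after that, org.opencv.* classes work directly.
      OpenCV opencv = new OpenCV(this, 640, 480);

      // Toy system A x = 0: the x minimizing ||A x|| with ||x|| = 1 is the
      // right singular vector belonging to the smallest singular value.
      Mat A = new Mat(3, 3, CvType.CV_64F);
      A.put(0, 0,
            1.0, 2.0, 3.0,
            4.0, 5.0, 6.0,
            7.0, 8.0, 9.0);

      Mat w = new Mat(), u = new Mat(), vt = new Mat();
      Core.SVDecomp(A, w, u, vt);

      // Singular values come back in descending order,
      // so the solution direction is the last row of vt.
      double[] x = new double[3];
      vt.get(2, 0, x);
      println("x = " + x[0] + ", " + x[1] + ", " + x[2]);
    }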
