Push Forward processing-video library or New OpenGL rendering

edited February 2018 in Summer of Code 2018

Hello everyone,

I'm motivated to contribute to Processing, one of my favorite graphics projects, this summer. Specifically, I want to work on either the Video or the New OpenGL Renderer project.

What have I done so far related to Processing?

  • Completed nearly all tutorials from the official page.
  • Got familiar with the code base.
  • Compiled and built Processing from source (thanks to Ant).
  • Have been following the Exhibition page for the past 6 years.

What I'm planning to do in the next few days:

  • Get familiar with JNI and experiment with it.
  • Explore the GStreamer and LWJGL code bases and compile them from source.
  • Look at the issues pages of these projects on GitHub and try to solve some of them.

Experiences related to these projects:

  • I know signal/image processing from prototyping research papers in MATLAB and ImageJ.
  • Have written a C# wrapper for DirectShow. (related to the Video project)
  • Have implemented a software simulation of the 3D rendering pipeline, similar to what happens in OpenGL (input: world-space vertex positions; output: final pixels).
  • Have practiced TDD (test-driven development), unit testing, and improving code coverage on multiple projects implemented in Java.

Short Bio: I'm in the last year of my master's studies (Computer Science) at the University of Houston. I also have a bachelor's degree in Information Technology. I started with computer graphics around 10 years ago by making Flash animations and later moved to 3D graphics. I have experience working on different game and animation projects as a technical artist, pipeline developer, and 3D animator. I'm familiar with dozens of graphics packages for different applications, such as image manipulation (Photoshop, GIMP, ImageJ), 3D content creation (Maya, Max, MotionBuilder), VFX/motion graphics (After Effects), and game development (Unity 3D). I used to be more of a user of computer graphics applications than a developer, but I decided to learn more about the underlying processes and become a developer. In the past 3 years, I've tried to learn more about this "Pixel" thing - how to generate it (3D rendering and interaction) and how to understand it (image processing/computer vision). I hope to pick up more fundamental concepts of computer graphics throughout this project.

Meanwhile, I'm looking to get in touch with the potential mentors of these projects (Gottfried Haider, Andres Colubri) to discuss the details of the expected outcomes and challenges.

Comments

  • As the maintainer of the Java bindings for GStreamer, I'm happy to help out too if you want to look at that - that would mean working with JNA rather than JNI directly, though.

    An LWJGL backend would be really good! Although, it seems JOGL development may finally be picking up again.

  • Tagging @codeanticode for any thoughts! Thanks for your interest!

  • @neilcsmith_net Awesome! I'm trying to digest the key GStreamer concepts and have studied the Capture and Movie classes in the processing-video library. I'm trying to make small changes and see if I can compile and use them in Processing. I'll get back to you with questions...

    An LWJGL backend would be really good! Although, it seems JOGL development may finally be picking up again.

    I'm interested in looking at that project as well, but if the community prefers, I can focus solely on the processing-video project.

  • Hi everyone, thanks a lot for your interest!

    The video library update to gstreamer 1.x and the new LWJGL-based OpenGL renderer are definitely separate projects, each quite significant on its own.

    I would say that the video library update is slightly further ahead, as we already have a working beta version of the video library that uses the new Java bindings for GStreamer that @neilcsmith_net is maintaining. But it still needs more work.

    As far as the OpenGL renderer is concerned, even if JOGL development is gaining steam again, I think it would still be good to have a separate LWJGL-based renderer that may be of use in certain scenarios. An immediate problem with a potential transition from JOGL to LWJGL in the core renderer is that Processing sketches that use JOGL's low-level API or rely on AWT would break, so this needs some discussion.

    In any case, both projects are important, so my recommendation to @Yaser would be to look into both possibilities and decide which one fits your interests and skills better.

  • @codeanticode Thank you for your input. I have some experience with 3D rendering and multimedia application development, and both projects look interesting to me. However, I have not used GStreamer or LWJGL in any project. In the last week I put a good amount of time into studying GStreamer. I wanted to write down a list of requested features for the next release of the processing-video library, so I went through the issues pages. Stripping down the size of the required library seems to be a priority. Other than that, there are some small issues, but I couldn't find much to add to my feature list. I think I need some advice and elaboration on what needs to be extended and developed for the next release.

    I'll try to put some time into the LWJGL project to gain more knowledge about its scope and challenges. Next week will be more 3D geometry and rendering instead of video streaming...

    a separate LWJGL-based renderer that may be of use in certain scenarios

    @codeanticode Could you give some examples of such scenarios? I've read a little about the differences between JOGL and LWJGL, but I need a concrete example.

  • Trimming the size of the video library is important, but should be easy to do. Making sure that capture works across all platforms and perhaps improving device listing is another task (although it may depend on platform-dependent GStreamer plugins). Another task could be to improve the OpenGL integration for better playback performance, and maybe carry out some refactoring so the Movie and Capture classes descend from a common pipeline class. @gohai, do you have any other suggestions?

    In relation to 3D, I think that JOGL and LWJGL are in general different solutions to the same goal (OpenGL in Java), so from the perspective of most Processing users there should be no noticeable difference between the two. A strong focus of LWJGL seems to be games, so perhaps an LWJGL-based Processing renderer could be a better fit for those kinds of applications. Another issue is supporting the RPi, which the JOGL renderer currently does, but LWJGL does not yet, or only in an experimental fashion (https://github.com/LWJGL/lwjgl3/issues/206).
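    The refactoring mentioned above - Movie and Capture descending from a common pipeline class - could be sketched roughly as below. All names here (VideoPipeline, pipelineDescription) are hypothetical illustrations, not the library's actual API:

```java
// Hypothetical sketch of the proposed refactoring: a shared base class
// that owns the pipeline lifecycle, with Movie and Capture as thin
// subclasses. Names and members are illustrative, not the real API.
abstract class VideoPipeline {
    protected boolean playing = false;

    /** Each subclass describes its specific GStreamer pipeline. */
    protected abstract String pipelineDescription();

    public void play() { playing = true; }
    public void stop() { playing = false; }
    public boolean isPlaying() { return playing; }
}

class Movie extends VideoPipeline {
    private final String filename;

    Movie(String filename) { this.filename = filename; }

    @Override
    protected String pipelineDescription() {
        // A file player could be built on playbin, for example.
        return "playbin uri=file://" + filename;
    }
}

class Capture extends VideoPipeline {
    private final String device;

    Capture(String device) { this.device = device; }

    @Override
    protected String pipelineDescription() {
        // A camera source followed by format conversion, for example.
        return "autovideosrc device=" + device + " ! videoconvert";
    }
}
```

    A shared base class like this would give playback control, event dispatch, and cleanup a single home instead of two near-duplicate copies.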

  • I agree with @codeanticode - a bit of a general clean-up (or potentially a re-implementation while keeping the same interfaces) would be a good idea. There is some historic code in there that is no longer used, which made it a bit hard to understand at times. GStreamer has also moved a lot in the meantime since certain workarounds were developed, so it might be worth revisiting them and seeing whether they are still the best available approach. And having Capture and Movie derive from a shared Video class (or similar) sounds good as well!

    Regarding RPi support for LWJGL: this is something I'd be happy to help with, so this shouldn't be a deterrent ;)

  • @codeanticode - bindings for the OpenGL APIs in GStreamer are targeted for 1.0 of gst1-java-core. There are also ways of speeding up the current texture upload, particularly if full OpenGL is available. There are definitely some issues around threading and pipeline control in the current video library that could do with addressing.

    In terms of the video library size, I still think that a mechanism for downloading and extracting the OS-specific libs directly from GStreamer would be worth looking at, and then some mechanism for making a cross-platform sketch export with only the required plugins. Generic parts of that would definitely be something that could live and be maintained upstream.

    I'm not sure LWJGL3 is as purely focused on games these days, despite the name! ;-) But have you also given any thought to abstracting over libGDX btw? Just a thought (I used to have a ~75% P2D renderer based on code forked from libGDX over LWJGL2) - it could reduce the amount of work required.

    @gohai - RPi support with LWJGL would be a major requirement for me too, so happy to help with that in any way I can!

  • I would forgo OpenGL at this point - the GStreamer API for it isn't even stable.

  • @gohai - do you mean literally the API itself is unstable (which I realise) or that the implementation is unstable? The former is easy enough to work around, the latter would probably suggest it's not the time for it.

  • @codeanticode @gohai @neilcsmith_net

    Thanks for all of your valuable thoughts and discussions. After reading your comments and considering my limited time until the proposal submission period, I've decided to put my energy into the video library this year. I'm trying to summarize what I've learned so far and lay out a plan for myself. I've itemized the features that need to be implemented and want to clarify the steps to approach them.

    1- Make the device listing stable and make sure capturing works across all platforms.

    I've been reading GStreamer tutorials and playing with examples for a while. The command-line tools seem to provide very detailed and helpful feedback. I thought that, along with other efforts to make capturing work across all platforms, surfacing appropriate feedback from GStreamer in the Processing console would be helpful, at least for debugging issues we receive from users with different devices and platforms.

    For testing purposes, I have access to Win 8.1 and Win 10 and multiple Linux versions through VMs. I also have an RPi that can be used for testing.

    2- Improve the OpenGL integration for better playback performance

    After a bit of study, I understand that OpenGL integration would be beneficial especially if we want to apply transformations to the video frames. It seems that the video library is not intended to do anything with the video stream other than playing and capturing; in that case, with OpenGL we are just uploading and downloading frames to the GPU for no good reason. I don't have knowledge of how OpenGL is currently integrated into Processing, but if we want to play/capture a stream, transform it, and then render it, it might be useful to streamline those steps on the GPU and let OpenGL do some of them through GStreamer.

    3- As @neilcsmith_net suggested, provide a mechanism to download and extract OS-specific GStreamer libraries. Also, when exporting sketches, include only the required GStreamer plugins to get a smaller output.

    I don't know how, or even whether it is possible, to set up a mechanism that downloads and extracts OS-specific libraries at the time a new library is installed within the Processing application. But regarding recognizing the subset of plugins to export with a sketch, I've watched this presentation (thanks @gohai for mentioning it) to get a sense of how we can use GStreamer's debugging features.

    Please comment if you think I'm heading in the wrong direction in any part.
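    For the extraction half of point 3, a rough sketch in plain Java (java.util.zip only) is below. The idea of filtering archive entries down to a list of needed plugin prefixes is my assumption about how a slimmer export could work, and the entry names are made up:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Hypothetical sketch: unpack only the native libs/plugins a sketch
// needs from a downloaded OS-specific archive. The prefix filtering
// is an assumption, not an existing processing-video mechanism.
class NativeLibExtractor {

    /**
     * Extracts entries whose names start with one of the given prefixes
     * into destDir and returns the paths that were written.
     */
    static List<Path> extract(InputStream archive, Path destDir,
                              List<String> neededPrefixes) throws IOException {
        List<Path> written = new ArrayList<>();
        try (ZipInputStream zip = new ZipInputStream(archive)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                if (entry.isDirectory()) continue;
                String name = entry.getName();
                boolean needed = neededPrefixes.stream().anyMatch(name::startsWith);
                if (!needed) continue;
                Path out = destDir.resolve(name).normalize();
                // Guard against zip-slip path traversal.
                if (!out.startsWith(destDir)) continue;
                Files.createDirectories(out.getParent());
                Files.copy(zip, out);
                written.add(out);
            }
        }
        return written;
    }
}
```

    The download step and the plugin-dependency analysis (e.g. via GStreamer's debug output) would sit on top of something like this.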

  • edited March 2018

    @Yaser

    Thank you for your post and interest in GSoC w/ Processing! I believe the Video library is a great project, and the Processing community can greatly benefit from the fruits of your work.

    Just a couple quick thoughts on the points you mentioned:

    1- Device listing

    The problem here is that GStreamer itself currently does not implement the necessary functionality to do capture device enumeration on macOS. I am sure they would not mind patches, but this would (a) require quite a bit of involved Objective-C, and (b) we have no control over their release timeline.

    2- OpenGL

    I would strongly advise against implementing any of this before getting at least one release of the Video library based on modern GStreamer out of the door.

    I believe that making this switch, while not causing any unnecessary regressions, and fixing as many bugs as possible on the way, should be the goal really. Otherwise we'll end up with yet another video library, and a bunch of users having to stick with the "legacy" one because something does not quite work the way they expect it to in the new / OpenGL one.

  • @gohai Thanks for your thoughts.

    Thinking of a stable release of the library for device listing and capturing: if we put aside macOS for a minute, what approach can I take to test all types of capture devices? I've tried to attack some of the issues on the project page, but reproducing an issue is not always possible since I don't have access to the exact device. Then I have to research the device specification, which in many cases does not yield useful information.

    About macOS: I know Processing has a huge macOS user base, but according to @gohai's comment, it seems I cannot do much from the Processing side.

    About OpenGL: based on your experience, I'll focus on the other tasks for now and postpone OpenGL integration for the future.

  • I got a fellowship offer for the summer last week and I'm going to take that opportunity, so I won't be able to participate in GSoC this year.

    Thank you all for your lightning-fast comments and replies. I hope I'll be able to get in touch again to help improve this lovely software.
