Hello again! I've made a sketch in which the mouse position triggers moving visual effects, so I thought I'd use OpenCV blob centers as pointers, and later I tried JMyron. Both had a big impact on performance, making the sketch barely interactive: Processing alone = 34 fps; Processing + OpenCV = 20 fps; Processing + JMyron = 24 fps.
Can this be made faster? Is there any way I can read blob centroids without the full performance hit?
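One idea I've been toying with is to run blob detection on a downscaled copy of the camera frame and scale the centroids back up to sketch coordinates, since detection cost grows with pixel count. Here's a rough plain-Java sketch of just the coordinate math; the class and method names are my own invention, not part of the OpenCV or JMyron APIs:

```java
// Sketch of the usual speed trick: detect blobs on a small frame,
// then map the centroids back into full sketch coordinates.
public class CentroidScaler {
    final int captureW, captureH;   // low-res frame fed to blob detection
    final int sketchW, sketchH;     // full-res sketch / projection size

    CentroidScaler(int captureW, int captureH, int sketchW, int sketchH) {
        this.captureW = captureW;
        this.captureH = captureH;
        this.sketchW = sketchW;
        this.sketchH = sketchH;
    }

    // Map a centroid found in the small frame into sketch coordinates.
    float[] scaleCentroid(float cx, float cy) {
        return new float[] {
            cx * (float) sketchW / captureW,
            cy * (float) sketchH / captureH
        };
    }

    public static void main(String[] args) {
        // Detect blobs at 160x120 but draw at 640x480: a 4x scale.
        CentroidScaler s = new CentroidScaler(160, 120, 640, 480);
        float[] p = s.scaleCentroid(80, 60);   // center of the small frame
        System.out.println(p[0] + "," + p[1]); // prints "320.0,240.0"
    }
}
```

A blob moving a few pixels at 160x120 still moves smoothly enough for a pointer, and the detection step touches 16x fewer pixels than at 640x480.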
Hi! I've read many books on Processing and looked everywhere for the right information on how to build a standard interactive floor, but I haven't found it. I live in Chile, and since we have no degree programs that deal with these tools and concepts, I would like to ask for your guidance.
My objective is to make an interactive floor using Processing, a DLP projector, an infrared webcam, and an IR lamp. A person walks onto the projected surface, and circles interact with the person in a physics-driven way.
Now, I have gotten the circles to interact with each other and the OpenCV blobs to appear on screen, but I have failed to connect the blobs with the circles in a natural way.
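The closest I've gotten is to treat each blob centroid as an invisible circle and push my sketch circles out of it when they overlap. A rough plain-Java sketch of that idea (all names here are mine, nothing is from the OpenCV library):

```java
// Minimal sketch of one way to couple blobs and circles: model each
// blob as a circle at its centroid and push overlapping circles out.
public class BlobPush {
    // Returns the new (x, y) of a circle of radius r after being pushed
    // out of a blob modeled as a circle at (bx, by) with radius br.
    static float[] pushAway(float x, float y, float r,
                            float bx, float by, float br) {
        float dx = x - bx, dy = y - by;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        float minDist = r + br;
        if (dist >= minDist || dist == 0) {
            return new float[] { x, y };   // no overlap: leave it alone
        }
        // Move the circle outward along the center-to-center axis
        // until it just touches the blob boundary.
        float scale = minDist / dist;
        return new float[] { bx + dx * scale, by + dy * scale };
    }

    public static void main(String[] args) {
        // Circle of radius 10 overlapping a blob of radius 30 at origin:
        float[] p = pushAway(20, 0, 10, 0, 0, 30);
        System.out.println(p[0] + "," + p[1]); // prints "40.0,0.0"
    }
}
```

Calling this once per circle per blob every frame already looks fairly natural, since circles slide around a person instead of sticking to them.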
Can anyone help me? Are there any books or PDF guides on these kinds of problems? I normally get 60 fps without OpenCV; with it, I drop to around 10 fps. Should I even be using OpenCV? And is "interactive floor" the right name for this application? Maybe I've been searching under the wrong name.
Hi! I have read many books and implemented a few apps using Processing, openFrameworks, and OpenCV, like crude interactive floors, but until now it has mostly been guesswork (I live in Chile, and there are no degree programs dealing with interactive media applications).
Where can I learn how to make all these new applications that have popped up lately?
I see many interactive floors appearing in other countries, and a lot of projection mapping with complex ideas behind it. What courses exist that combine libraries such as Box2D, OpenCV, and others? Where can someone like me learn these complex uses of cameras, projectors, and software?
I'm from Chile, I have a degree in Civil Industrial Engineering, and I now have the opportunity to leave my country. I'm also learning German right now, so I wouldn't mind going to Germany.
Hi! This is my first post here. I love Processing and have been trying to get some of my own applications working. One of them involves the typical OpenCV blobs colliding with objects, which then interact in a certain way, either bouncing off or flocking. So here is my question:
I imported the OpenCV library available on the Processing homepage and used the "blobs" example to get the outlines of all detected blobs. Then I managed to combine it with code I have in which circles collide in an "ice hockey puck" sort of way. I tried this example since it seemed like the simplest form of interaction for an interactive floor experience.
But I've been stuck here for weeks, since I haven't been able to make the pucks bounce off the OpenCV blobs. I have no idea where the interaction should begin, nor how it could be accomplished.
Any ideas on where to begin programming such an interaction?
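To show where I'm at: the furthest I've gotten on paper is to approximate each blob by a circle at its centroid and, when a puck touches it, mirror the puck's velocity about the collision normal (the direction from the blob center to the puck), as if the blob were an immovable puck. A rough plain-Java sketch of just that reflection math (nothing here is from the OpenCV library):

```java
// Sketch: bounce a puck off a blob treated as an infinite-mass circle
// by reflecting its velocity about the collision normal.
public class PuckBounce {
    // Reflect velocity (vx, vy) about the collision normal (nx, ny),
    // where the normal points from the blob center toward the puck.
    static float[] reflect(float vx, float vy, float nx, float ny) {
        float len = (float) Math.sqrt(nx * nx + ny * ny);
        nx /= len; ny /= len;                 // normalize the normal
        float dot = vx * nx + vy * ny;        // velocity along the normal
        // v' = v - 2 (v . n) n  — standard mirror reflection
        return new float[] { vx - 2 * dot * nx, vy - 2 * dot * ny };
    }

    public static void main(String[] args) {
        // Puck moving left (-5, 0) hits a blob whose center is to its
        // left, so the normal points right (1, 0): it bounces back.
        float[] v = reflect(-5, 0, 1, 0);
        System.out.println(v[0] + "," + v[1]); // prints "5.0,0.0"
    }
}
```

What I can't tell is whether this circle-at-the-centroid approximation is good enough, or whether I need to collide against the actual blob outline points.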