heho!
..I'm starting to become a Processing junkie, but I still have some problems..
My project:
The camera/Processing tracks a moving object (green, with a hole). When the object's perspective changes (because it moves), the hole appears bigger or smaller. Sound should be connected to these different sizes.
I have the individual parts working (color tracking, blob detection connected to sound), but I can't put them together logically...
Please tell me if this is the right approach:
1.) Color tracking (for green)
2.) Define an area around the tracked color for blob detection
(Eventually I want to have several objects in different colors, so I think it's better to define an area around the tracked point, otherwise the detection starts jumping to the other objects.)
3.) Blob detection in this area to get the hole size -> big blob size = loud sound, small size = quiet (rough sketch below)
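Here is a very rough sketch of the structure I have in mind, just so you can see where I'm stuck. It's plain Processing (video library only, no OpenCV yet); the green threshold values, the fixed search-window size and the "hole = non-green pixels" counting are only placeholders for the real blob detection, and the final "volume" value is where my existing sound code would hook in:

import processing.video.*;

Capture cam;
int roiSize = 80;   // half-width of the search window around the tracked point (guess)

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);
  cam.loadPixels();

  // 1) color tracking: centroid of all "green enough" pixels (crude threshold)
  float sumX = 0, sumY = 0;
  int count = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      if (green(c) > 150 && red(c) < 100 && blue(c) < 100) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count == 0) return;
  int cx = int(sumX / count);
  int cy = int(sumY / count);

  // 2) area around the tracked point, so other colored objects are ignored
  int x0 = constrain(cx - roiSize, 0, cam.width  - 1);
  int y0 = constrain(cy - roiSize, 0, cam.height - 1);
  int x1 = constrain(cx + roiSize, 0, cam.width  - 1);
  int y1 = constrain(cy + roiSize, 0, cam.height - 1);
  noFill();
  stroke(255, 0, 0);
  rect(x0, y0, x1 - x0, y1 - y0);

  // 3) "hole" size: count non-green pixels inside that area
  //    (stand-in for proper blob detection / OpenCV)
  int holePixels = 0;
  for (int y = y0; y <= y1; y++) {
    for (int x = x0; x <= x1; x++) {
      color c = cam.pixels[y * cam.width + x];
      if (!(green(c) > 150 && red(c) < 100 && blue(c) < 100)) holePixels++;
    }
  }
  float areaPixels = max(1, (x1 - x0) * (y1 - y0));
  float volume = map(holePixels, 0, areaPixels, 0, 1);   // big hole = loud
  // -> this "volume" value is where my existing sound code would connect
  println(volume);
}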
Is that the best way?
-> Or should I not use blob detection/OpenCV at all?
(Is there better code for recognizing the size of moving elements?)
-> How can I define an area around the tracker?
(In which part of the code do I have to put this?)
Sorry for so many questions...
Thanks for any help!