Why is my program running on the wrong side of the screen?

JaiJai
edited November 2016 in Kinect

What I'm looking to do is run face detection on the right side of the window only, not display the cam on the right side while the detection shows up on the left side, which is what's happening.

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video2;
OpenCV opencv;

void setup() {
  size(1280, 480, P3D);

  video2 = new Capture(this, 1280/2, 480, "webcam");

  opencv = new OpenCV(this, 1280/2, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video2.start();
}

void R()
{
  scale(1);
  opencv.loadImage(video2);
  image(video2, 1280/2, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();

  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

Answers

  • this is placing the cam on the right side, no?

    image(video2, 1280/2, 0);
    

    should be

    image(video2, 0, 0);
    

    and in the rect() call you want to add 1280/2 to the x coordinate:

    for (int i = 0; i < faces.length; i++) {
        rect(faces[i].x + 1280/2, faces[i].y, faces[i].width, faces[i].height);
    }
  • edited November 2016 Answer ✓

    @Jai -- You are drawing your video at an offset of 640:

    image(video2, 1280/2, 0);
    

    But your faces[i] calls are returning coordinates in terms of the video -- assuming the video is starting at 0,0 -- not at 1280/2,0.

    So you either need to add 640 to the "x" of each rect, in order to bump them over...

    Or, perhaps better, use "translate(x,y)" to move both the video and the rects. If you need to draw more things after that are not translated, you can additionally use pushMatrix() / popMatrix() to isolate the translation and then undo it later.

    void R()
    {
      scale(1);
      opencv.loadImage(video2);
    
      pushMatrix();
        translate(640,0);
        image(video2, 0, 0);
        noFill();
        stroke(0, 255, 0);
        strokeWeight(1);
        Rectangle[] faces = opencv.detect(); 
        for (int i = 0; i < faces.length; i++) {
          rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
        }
      popMatrix();
    }
    

    edit: fixed code typo
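    To sanity-check that coordinate math outside Processing, here is a small plain-Java sketch of the offset mapping (the face values and the `toScreen` helper are made up for illustration, not part of the OpenCV library): a detection at (100, 50) in video coordinates should land at (740, 50) on screen once the frame is drawn at x = 640.

```java
import java.awt.Rectangle;

public class OffsetCheck {
    static final int VIDEO_X_OFFSET = 1280 / 2;  // video2 is drawn at x = 640

    // Map a face rectangle from video coordinates to screen coordinates:
    // only x shifts, because the video is only offset horizontally.
    static Rectangle toScreen(Rectangle face) {
        return new Rectangle(face.x + VIDEO_X_OFFSET, face.y, face.width, face.height);
    }

    public static void main(String[] args) {
        Rectangle face = new Rectangle(100, 50, 80, 80);  // made-up detection
        Rectangle onScreen = toScreen(face);
        System.out.println(onScreen.x + "," + onScreen.y);  // prints 740,50
    }
}
```

    This is exactly what the translate(640, 0) approach does implicitly: every subsequent drawing call gets the same x shift, so the rects stay aligned with the video.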

  • But your faces[i] calls are returning coordinates in terms of the video -- assuming the video is starting at 0,0 -- not at 1280/2,0.

    So you either need to add 640 to the "x" of each rect, in order to bump them over...

    @jeremydouglass thanks, but I do not understand. Though I will say that I did try

        Rectangle[] faces = opencv.detect(); 
        for (int i = 0; i < faces.length; i++) {
          rect(faces[i].x/2, faces[i].y/2, faces[i].width, faces[i].height);
        }
    

    as well as

        Rectangle[] faces = opencv.detect(); 
        for (int i = 0; i < faces.length; i++) {
          rect(faces[i].x, faces[i].y, faces[i].width/2, faces[i].height/2);
        }
    

    but nothing. I did notice a change, but not what you ended up showing me. Should I have done

        Rectangle[] faces = opencv.detect(); 
        for (int i = 0; i < faces.length; i++) {
          rect(faces[i].x+640, faces[i].y+640, faces[i].width, faces[i].height);
        }
    

    or something like that?

  • well, as in now, with your code I get this after 3 seconds of the program starting: Capture

  • Answer ✓

    one missing }

    void R()
    {
      scale(1);
      opencv.loadImage(video2);
    
      pushMatrix();
        translate(640,0);
        image(video2, 0, 0);
        noFill();
        stroke(0, 255, 0);
        strokeWeight(1);
        Rectangle[] faces = opencv.detect(); 
        for (int i = 0; i < faces.length; i++) {
          rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
        } /// NEW 
      popMatrix();
    }
    
  • @Chrisir thanks, that did the trick. Now, noticed how we just did R()? OK, well, now that this is working, I'm going to do the same rewrite on the setup and build a second function for the 2nd cam, which will be known as L(), using video1. But here I don't get 2 cams, just one. Any tips?

  • in theory use the list of cameras

    https://www.processing.org/reference/libraries/video/Capture.html

    not sure how to tell this to opencv though

  • I'm picking those two entries at the end, respectively.

    I just get one: the left side is all black and the right side is on, being video2, while video1 on the left side remains dark.

  • Brainstorm: Are those two at the end really different cameras, or are they the same physical camera offering different resolutions?

  • hope this helps guys.

    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    
    Capture video1, video2;
    OpenCV opencv;
    void setup() {
      size(1280, 480, P3D);
    
      video1 = new Capture(this, 1280/2, 480, "USB2.0 Camera");
      video2 = new Capture(this, 1280/2, 480, "USB2.0 PC CAMERA");
    
      opencv = new OpenCV(this, 1280/2, 480);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
    
      video1.start();
      video2.start();
    }
    
    void draw() {
      L();
      R();
    }
    
    void captureEvent(Capture c, Capture video1) {
      video1.read();
      video2.read();
      c.read();
    }
    
    void R()
    {
      scale(1);
      opencv.loadImage(video2);
    
      pushMatrix();
      translate(640, 0);
      image(video2, 0, 0);
      noFill();
      stroke(0, 255, 0);
      strokeWeight(1);
      Rectangle[] faces = opencv.detect(); 
      for (int i = 0; i < faces.length; i++) {
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }  
      popMatrix();
    }
    void L()
    {
      scale(1);
      opencv.loadImage(video1);
      image(video1, 0, 0 );
    
      noFill();
      stroke(255, 0, 0);
      strokeWeight(1);
      Rectangle[] faces = opencv.detect();
    
      for (int i = 0; i < faces.length; i++) {
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    }
    
  • @GoToLoop I'm sorry, what now? I took a look at this code

    import processing.video.Capture;
    Capture cam;
    
    static final String RENDERER = JAVA2D;
    //static final String RENDERER = FX2D;
    
    static final int CAM = 1, FPS_ADJUST = 5, DELAY = 5;
    
    void setup() {
      size(640, 480, RENDERER);
      initFeed();
    
      float canvasFPS = cam.frameRate + FPS_ADJUST;
      frameRate(canvasFPS);
    
      println("Cam's FPS:", cam.frameRate, "\t\tCanvas's FPS:", canvasFPS);
      print("Cam's size:", cam.width, 'x', cam.height, '\t');
      println("Canvas's size:", width, 'x', height);
    }
    
    void draw() {
      background(cam);
      getSurface().setTitle( str(round(frameRate)) );
    }
    
    void captureEvent(final Capture c) {
      c.read();
    }
    
    void initFeed() {
      String[] cams = Capture.list();
      printArray(cams);
      println("\nChosen Cam #" + CAM + ':', cams[CAM]);
    
      ( cam = new Capture(this, cams[CAM]) ).start();
      while (cam.width == 0)  delay(DELAY);
    
      if (cam.width > 0)  getSurface().setSize(cam.width, cam.height);
    }
    

    However, although I see what's going on here, I just don't see how this solves or helps with my issue?

  • JaiJai
    edited November 2016

    this is what i get in return for the code above

    [0] "name=USB2.0 PC CAMERA,size=640x480,fps=15"
    [1] "name=USB2.0 PC CAMERA,size=640x480,fps=30"
    [2] "name=USB2.0 PC CAMERA,size=352x288,fps=15"
    [3] "name=USB2.0 PC CAMERA,size=352x288,fps=30"
    [4] "name=USB2.0 PC CAMERA,size=320x240,fps=15"
    [5] "name=USB2.0 PC CAMERA,size=320x240,fps=30"
    [6] "name=USB2.0 PC CAMERA,size=176x144,fps=15"
    [7] "name=USB2.0 PC CAMERA,size=176x144,fps=30"
    [8] "name=USB2.0 PC CAMERA,size=160x120,fps=15"
    [9] "name=USB2.0 PC CAMERA,size=160x120,fps=30"
    [10] "name=Microsoft LifeCam Rear,size=640x360,fps=30"
    [11] "name=Microsoft LifeCam Rear,size=640x480,fps=30"
    [12] "name=Microsoft LifeCam Rear,size=424x240,fps=15"
    [13] "name=Microsoft LifeCam Rear,size=424x240,fps=30"
    [14] "name=Microsoft LifeCam Rear,size=320x240,fps=15"
    [15] "name=Microsoft LifeCam Rear,size=320x240,fps=30"
    [16] "name=Microsoft LifeCam Rear,size=320x180,fps=15"
    [17] "name=Microsoft LifeCam Rear,size=320x180,fps=30"
    [18] "name=Microsoft LifeCam Rear,size=160x120,fps=15"
    [19] "name=Microsoft LifeCam Rear,size=160x120,fps=30"
    [20] "name=Microsoft LifeCam Rear,size=1280x800,fps=5"
    [21] "name=Microsoft LifeCam Rear,size=1280x720,fps=30"
    [22] "name=Microsoft LifeCam Rear,size=960x544,fps=30"
    [23] "name=Microsoft LifeCam Rear,size=848x480,fps=30"
    [24] "name=Microsoft LifeCam Rear,size=960x544,fps=15"
    [25] "name=Microsoft LifeCam Rear,size=848x480,fps=30"
    [26] "name=Microsoft LifeCam Rear,size=640x360,fps=30"
    [27] "name=Microsoft LifeCam Rear,size=640x480,fps=30"
    [28] "name=Microsoft LifeCam Front,size=640x360,fps=30"
    [29] "name=Microsoft LifeCam Front,size=640x480,fps=30"
    [30] "name=Microsoft LifeCam Front,size=424x240,fps=15"
    [31] "name=Microsoft LifeCam Front,size=424x240,fps=30"
    [32] "name=Microsoft LifeCam Front,size=320x240,fps=15"
    [33] "name=Microsoft LifeCam Front,size=320x240,fps=30"
    [34] "name=Microsoft LifeCam Front,size=320x180,fps=15"
    [35] "name=Microsoft LifeCam Front,size=320x180,fps=30"
    [36] "name=Microsoft LifeCam Front,size=160x120,fps=15"
    [37] "name=Microsoft LifeCam Front,size=160x120,fps=30"
    [38] "name=Microsoft LifeCam Front,size=1280x800,fps=5"
    [39] "name=Microsoft LifeCam Front,size=1280x720,fps=30"
    [40] "name=Microsoft LifeCam Front,size=960x544,fps=30"
    [41] "name=Microsoft LifeCam Front,size=848x480,fps=30"
    [42] "name=Microsoft LifeCam Front,size=960x544,fps=15"
    [43] "name=Microsoft LifeCam Front,size=848x480,fps=30"
    [44] "name=Microsoft LifeCam Front,size=640x360,fps=30"
    [45] "name=Microsoft LifeCam Front,size=640x480,fps=30"
    [46] "name=USB2.0 PC CAMERA,size=640x480,fps=15"
    [47] "name=USB2.0 PC CAMERA,size=640x480,fps=30"
    [48] "name=USB2.0 PC CAMERA,size=352x288,fps=15"
    [49] "name=USB2.0 PC CAMERA,size=352x288,fps=30"
    [50] "name=USB2.0 PC CAMERA,size=320x240,fps=15"
    [51] "name=USB2.0 PC CAMERA,size=320x240,fps=30"
    [52] "name=USB2.0 PC CAMERA,size=176x144,fps=15"
    [53] "name=USB2.0 PC CAMERA,size=176x144,fps=30"
    [54] "name=USB2.0 PC CAMERA,size=160x120,fps=15"
    [55] "name=USB2.0 PC CAMERA,size=160x120,fps=30"
    [56] "name=USB2.0 Camera,size=640x480,fps=30"
    [57] "name=USB2.0 Camera,size=352x288,fps=30"
    [58] "name=USB2.0 Camera,size=320x240,fps=30"
    [59] "name=USB2.0 Camera,size=176x144,fps=30"
    [60] "name=USB2.0 Camera,size=160x120,fps=30"
    

    Chosen Cam #1: name=USB2.0 PC CAMERA,size=640x480,fps=30
    Cam's FPS: 30.0    Canvas's FPS: 35.0
    Cam's size: 640 x 480    Canvas's size: 640 x 480
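    Each entry in that printout is just a comma-separated string, so a sketch can pick a device programmatically instead of hard-coding an index. A hypothetical plain-Java parser (the `parseEntry` helper is my name for illustration, not part of the video library):

```java
import java.util.HashMap;
import java.util.Map;

public class CamEntry {
    // Split one Capture.list() entry, e.g.
    // "name=USB2.0 PC CAMERA,size=640x480,fps=30",
    // into its name/size/fps fields.
    static Map<String, String> parseEntry(String entry) {
        Map<String, String> fields = new HashMap<>();
        for (String part : entry.split(",")) {
            String[] kv = part.split("=", 2);
            fields.put(kv[0], kv[1]);
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> f = parseEntry("name=USB2.0 PC CAMERA,size=640x480,fps=30");
        System.out.println(f.get("name"));  // prints USB2.0 PC CAMERA
        System.out.println(f.get("fps"));   // prints 30
    }
}
```

    With fields like these you could, for example, select the first entry whose name matches one camera and whose size matches the sketch's layout, rather than counting rows by hand.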

  • edited November 2016

    ... I just don't see how this solves or helps with my issue?

    It won't fix your code b/c it's got multiple issues. But it's a start! >-)
    Compare my and your captureEvent() callbacks for example. Their signatures...

    Also look up its reference: https://Processing.org/reference/libraries/video/captureEvent_.html

    You should read about its API in order to use it correctly:
    https://Processing.org/reference/libraries/video/index.html

  • JaiJai
    edited November 2016

    @GoToLoop API?

    and are you saying to not do this

    void captureEvent(Capture c, Capture video1) {
      video1.read();
      video2.read();
      c.read();
    }
    

    but rather do this?

    void captureEvent(Capture c) {
      c.read();
    }
    

    or you mean

    void captureEvent(Capture video1, Capture video2) {
      video1.read();
      video2.read();
    }
    

    because I tried all of these variations and nothing. Or then again, I may have just misunderstood your reply.

  • Just this:

    void captureEvent(final Capture c) {
      c.read();
    }
    

    Otherwise the Capture instances won't find that out! :-SS
    It's gotta follow the method signature the library expects so. :-B

  • @GoToLoop but isn't that only calling one Capture instance? I'm looking to call 2

  • edited November 2016
    • It's called callback b/c we don't call it directly.
    • Instead something else automatically does it for us in its own time. B-)
    • Each Capture instance is run by its own separate Thread.
    • When a frame arrives for a particular Capture, it calls captureEvent() if it exists.
    • And it passes to it its own Capture reference.
    • That's why when read() is invoked there, it acts upon that particular passed reference only.
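    That dispatch pattern can be mimicked in plain Java (the `FakeCapture` class below is an illustrative stand-in for the library's Capture, not a real API): each source passes itself to the callback, so calling read() on the parameter already targets the right instance, with no if/else needed.

```java
public class CallbackDemo {
    // Illustrative stand-in for Capture: each source counts its own reads.
    static class FakeCapture {
        final String name;
        int reads = 0;
        FakeCapture(String name) { this.name = name; }
        void read() { reads++; }
    }

    // The equivalent of captureEvent(Capture c): whichever source has a
    // new frame passes itself in, so c.read() acts on that source only.
    static void captureEvent(FakeCapture c) {
        c.read();
    }

    public static void main(String[] args) {
        FakeCapture cam1 = new FakeCapture("left");
        FakeCapture cam2 = new FakeCapture("right");
        captureEvent(cam1);   // frame arrives from the left cam
        captureEvent(cam2);   // frame arrives from the right cam
        captureEvent(cam2);
        System.out.println(cam1.reads + " " + cam2.reads);  // prints 1 2
    }
}
```

    This is why the single-parameter signature works for any number of cameras: the library calls it once per new frame, per device, with that device as the argument.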
  • This next is from: https://processing.org/reference/libraries/video/captureEvent_.html

    Use the read() method to capture this frame. If there is more than one capture device in the program, captureEvent() is called each time any of the devices has a new frame available. Use an if within the function to determine which device is triggering the event.

    Unfortunately they didn't provide an example for capture. However, there is a parallel example dealing with movies that you could try:

    https://processing.org/reference/libraries/video/movieEvent_.html

    I hope this helps,

    Kf

  • edited November 2016
    • @kfrajer , that movieEvent()'s example is 100% redundant! :O)
    • There's no need whatsoever to check out which particular Movie instance was called back.
    • Given they're both of datatype Movie, they both got its method read().
    • It's just enough to call read() upon the passed Movie reference. :-@
  • It's just enough to call read() upon the passed Movie reference.

    @GoToLoop Are you saying that the movieEvent() example in the video reference is wrong?

    On the other hand, notice I am quoting directly from the reference. However the reference is not clear about how to proceed:

    Use an if within the function to determine which device is triggering the event.

    Just to clarify: I have never manipulated two capture devices simultaneously, although I would be interested to know how to do it.

    Kf

  • edited November 2016

    Are you saying that the movieEvent() example in the video reference is wrong?

    Well, you can read I said it was redundant, not wrong.
    Meaning it was teaching an unnecessary procedure as if it was obligatory. Spreading superstition!

    ... notice I am quoting directly from the reference.

    I've ranted it many times in this forum for many years already:
    Processing's web reference is fulla errors, superstitions & omissions! X(
    Especially when the word "must" is abusively uttered there! [-(

    Use an if within the function to determine which device is triggering the event.

    There are times which we may need to know which particular instance was triggered.
    However, never to merely call read()! :O)

    ... although I would be interested to know how to do it.

    Seeing is believing! Why don't you pick up that ignorant example and watch how it works? :-\"
    Here, I've modified that to use that "transit.mov" file used in most of its examples.
    Of course you can replace that for another video, be it the same or different 1s.
    Also there are 2 movieEvent() functions. 1 of them is commented out.
    You'll see w/ your own eyes that it works w/ just m.read();. \m/

    import processing.video.Movie;
    Movie myMovie, yourMovie;
    
    void setup() {
      size(1280, 360);
      ( myMovie   = new Movie(this, "transit.mov") ).play();
      ( yourMovie = new Movie(this, "transit.mov") ).play();
    }
    
    void draw() {
      set(0, 0, myMovie);
      set(myMovie.width, 0, yourMovie);
    }
    
    void movieEvent(final Movie m) {
      m.read();
    }
    
    //void movieEvent(Movie m) {
    //  if (m == myMovie) {
    //    myMovie.read();
    //  } else if (m == yourMovie) {
    //    yourMovie.read();
    //  }
    //}
    
  • hey guys, just to be clear, cuz I keep hearing about this .mov and its event calls: this can somehow be adapted to work with the live feed from my 2 USB webcams, right? Or are you guys simply trying to make clear that one single event callback can be enough to trigger a capture event for both of my cams?

  • edited November 2016

    The same principle applied to movieEvent() is applicable to captureEvent().
    After all, they belong to the same video library: :bz
    https://Processing.org/reference/libraries/video/index.html

  • OK, good point. So basically my code is good, and there is something else preventing my 2 cams from being played at the same time. But what can it be?

  • Why don't you try to write a sample code just to deal w/ 2 cams?
    Mixing complex libraries just makes things much harder to debug! :-&

  • What do you mean? I'm only using Processing. I could just add OpenCV later; still same results

  • edited November 2016

    Can't you adapt my latest 2 movies at the same time example to use 2 web cams instead? :-/

  • import processing.video.*;
    
    Capture video1, video2;
    
    void setup() {
      size(1280, 480, P3D);
    
      video1 = new Capture(this, 1280/2, 480, "USB2.0 Camera");
      video2 = new Capture(this, 1280/2, 480, "USB2.0 PC CAMERA");
    
      video1.start();
      video2.start();
    }
    
    void draw() {
      L();
      R();
    }
    
    void captureEvent(final Capture c) {
      c.read();
    }
    
    void R()
    {
      scale(1);
      translate(640, 0);
      image(video2, 0, 0);
    }  
    
    void L()
    {
      scale(1);
      image(video1, 0, 0 );
    }
    
  • Can't you adapt my latest 2 movies at the same time example to use 2 web cams instead?

    No, because it is expecting a .mov file to play. Furthermore, the event calls are also looking to play, not read. "Reads the current frame": there will be 30 frames available every second if this call is looking for a return of true when there is something to read, and in fact there's always something to read; it's 2 live feeds, you know?

  • edited November 2016

    No, because it is expecting a .mov file to play, ...

    Capture class doesn't play files, but gets its feed from a camera! =P~
    Of course, in order to adapt a Movie example to a Capture 1, each video file is replaced by a cam feed!

  • JaiJai
    edited November 2016

    @GoToLoop as you asked me to try, I did, and no results..... AND as for your recent comment, I actually tried, and the IDE told me that I cannot do what I'm asking because I cannot convert one into the other.

    import processing.video.*;
    
    Capture cam, cam1;
    
    void setup() {
      size(1280, 480);
    
      //  String[] cameras = Capture.list();
      //
      //  if (cameras == null) {
      //    println("Failed to retrieve the list of available cameras, will try the default...");
      //    cam = new Capture(this, 640, 480);
      //  } 
      //  if (cameras.length == 0) {
      //    println("There are no cameras available for capture.");
      //    exit();
      //  } else {
      //    println("Available cameras:");
      //    for (int i = 0; i < cameras.length; i++) {
      //      println(cameras[i]);
      //    }
    
      cam = new Capture(this, 1280/2, 480, "USB2.0 Camera");
      cam1 = new Capture(this, 1280/2, 480, "USB2.0 PC CAMERA");
    
      cam.start();
      cam1.start();
    }
    //}
    
    void draw() {
    
      //  if (cam.available() && cam1.available() == true) {
      //    cam.read();
      //    cam1.read();
      //  }
      //  image(cam, 0, 0);
      //  image(cam1, 640, 0);
    
      if (cam.available() == true) {
        cam.read();
      }
      image(cam, 0, 0);
    
      if (cam1.available() == true) {
        cam1.read();
      }
      image(cam1, 640, 0);
    }
    
    void captureEvent(final Capture c) {
      c.read();
    }
    
  • edited November 2016

    Hmm... I don't see the captureEvent() callback there! :|

  • yeah, I forgot to go that far down on the copy and paste. I got upset that there's still no result despite what I try.

    void captureEvent(final Capture c) {
      c.read();
    }
    
  • I got this red message now

    (java.exe:6716): GStreamer-CRITICAL **: 
    Trying to dispose element rgb, but it is in PAUSED instead of the NULL state.
    You need to explicitly set elements to the NULL state before
    dropping the final reference, to allow them to clean up.
    This problem may also be caused by a refcounting bug in the
    application or some element.
    
    
    (java.exe:6716): GStreamer-CRITICAL **: 
    Trying to dispose element Video Capture, but it is in PAUSED instead of the NULL state.
    You need to explicitly set elements to the NULL state before
    dropping the final reference, to allow them to clean up.
    This problem may also be caused by a refcounting bug in the
    application or some element.
    
  • @GoToLoop I got that as I noticed that the window ran but with no feed, so I closed it, and then I got that message in the console

  • @GoToLoop I even tried this code, and nothing.

    import processing.video.*;
    Capture myMovie, yourMovie;
    
    void setup() {
      size(1280, 480);
      myMovie   =  new Capture(this, 1280/2, 480, "USB2.0 Camera");
      yourMovie = new Capture(this, 1280/2, 480, "USB2.0 PC CAMERA");
      myMovie.start();
      yourMovie.start();
    }
    
    void draw() {
    //  myMovie.read();
    //  yourMovie.read();
      scale(1);
      //  translate(640, 0);
      image(myMovie, 0, 0);
      image(yourMovie, 640, 0 );
    }
    void captureEvent(Capture c) {
      c.read();
    }
    //void movieEvent(final Movie m) {
    //  m.read();
    //}
    
    //void movieEvent(Movie m) {
    //  if (m == myMovie) {
    //    myMovie.read();
    //  } else if (m == yourMovie) {
    //    yourMovie.read();
    //  }
    //}
    
  • edited November 2016

    I don't have 2 actual physical cameras in my laptop here.
    So this sketch doesn't work for me. But I believe it may for ya if you adjust the 2 cams' index.
    Pay attention on how very few lines were modified. It's almost the same as the Movie 1: $-)

    import processing.video.Capture;
    Capture myCam, yourCam;
    
    void setup() {
      size(1280, 480);
    
      final String[] cams = Capture.list();
      printArray(cams);
    
      ( myCam   = new Capture(this, cams[1]) ).start();
      ( yourCam = new Capture(this, cams[3]) ).start();
    }
    
    void draw() {
      set(0, 0, myCam);
      set(myCam.width, 0, yourCam);
    }
    
    void captureEvent(final Capture c) {
      c.read();
    }
    

    For comparison, here's the original example sketch: ;;)

    import processing.video.Movie;
    Movie myMovie, yourMovie;
    
    void setup() {
      size(1280, 360);
      ( myMovie   = new Movie(this, "transit.mov") ).play();
      ( yourMovie = new Movie(this, "transit.mov") ).play();
    }
    
    void draw() {
      set(0, 0, myMovie);
      set(myMovie.width, 0, yourMovie);
    }
    
    void movieEvent(final Movie m) {
      m.read();
    }
    
  • Try this:

    https://forum.processing.org/two/discussion/5960/capturing-feeds-from-multiple-webcams

    http://stackoverflow.com/questions/34206480/capture-video-from-multiple-cameras-in-processing

    From this last link:

    import processing.video.*;
    Capture camA;
    Capture camB;
    String[] cameras;
    
    void setup() {
      cameras = Capture.list();
      camA = new Capture(this, 1280, 960, cameras[15]);
      camB = new Capture(this, 1280, 960, cameras[1]);
      camA.start(); 
      camB.start();
    }
    void draw() {
      image(camA, 100, 100, 360,240);
      image(camB, 500, 100, 360,240);
    }
    
    void captureEvent(Capture c) {
      if(c==camA){   
        camA.read();
      }else if(c==camB) {
        camB.read();
      }
    }
    

    Kf

  • I'm getting this message in the console

    [0] "name=USB2.0 PC CAMERA,size=640x480,fps=15"
    [1] "name=USB2.0 PC CAMERA,size=640x480,fps=30"
    [2] "name=USB2.0 PC CAMERA,size=352x288,fps=15"
    [3] "name=USB2.0 PC CAMERA,size=352x288,fps=30"
    [4] "name=USB2.0 PC CAMERA,size=320x240,fps=15"
    [5] "name=USB2.0 PC CAMERA,size=320x240,fps=30"
    [6] "name=USB2.0 PC CAMERA,size=176x144,fps=15"
    [7] "name=USB2.0 PC CAMERA,size=176x144,fps=30"
    [8] "name=USB2.0 PC CAMERA,size=160x120,fps=15"
    [9] "name=USB2.0 PC CAMERA,size=160x120,fps=30"
    [10] "name=Microsoft LifeCam Rear,size=640x360,fps=30"
    [11] "name=Microsoft LifeCam Rear,size=640x480,fps=30"
    [12] "name=Microsoft LifeCam Rear,size=424x240,fps=15"
    [13] "name=Microsoft LifeCam Rear,size=424x240,fps=30"
    [14] "name=Microsoft LifeCam Rear,size=320x240,fps=15"
    [15] "name=Microsoft LifeCam Rear,size=320x240,fps=30"
    [16] "name=Microsoft LifeCam Rear,size=320x180,fps=15"
    [17] "name=Microsoft LifeCam Rear,size=320x180,fps=30"
    [18] "name=Microsoft LifeCam Rear,size=160x120,fps=15"
    [19] "name=Microsoft LifeCam Rear,size=160x120,fps=30"
    [20] "name=Microsoft LifeCam Rear,size=1280x800,fps=5"
    [21] "name=Microsoft LifeCam Rear,size=1280x720,fps=30"
    [22] "name=Microsoft LifeCam Rear,size=960x544,fps=30"
    [23] "name=Microsoft LifeCam Rear,size=848x480,fps=30"
    [24] "name=Microsoft LifeCam Rear,size=960x544,fps=15"
    [25] "name=Microsoft LifeCam Rear,size=848x480,fps=30"
    [26] "name=Microsoft LifeCam Rear,size=640x360,fps=30"
    [27] "name=Microsoft LifeCam Rear,size=640x480,fps=30"
    [28] "name=Microsoft LifeCam Front,size=640x360,fps=30"
    [29] "name=Microsoft LifeCam Front,size=640x480,fps=30"
    [30] "name=Microsoft LifeCam Front,size=424x240,fps=15"
    [31] "name=Microsoft LifeCam Front,size=424x240,fps=30"
    [32] "name=Microsoft LifeCam Front,size=320x240,fps=15"
    [33] "name=Microsoft LifeCam Front,size=320x240,fps=30"
    [34] "name=Microsoft LifeCam Front,size=320x180,fps=15"
    [35] "name=Microsoft LifeCam Front,size=320x180,fps=30"
    [36] "name=Microsoft LifeCam Front,size=160x120,fps=15"
    [37] "name=Microsoft LifeCam Front,size=160x120,fps=30"
    [38] "name=Microsoft LifeCam Front,size=1280x800,fps=5"
    [39] "name=Microsoft LifeCam Front,size=1280x720,fps=30"
    [40] "name=Microsoft LifeCam Front,size=960x544,fps=30"
    [41] "name=Microsoft LifeCam Front,size=848x480,fps=30"
    [42] "name=Microsoft LifeCam Front,size=960x544,fps=15"
    [43] "name=Microsoft LifeCam Front,size=848x480,fps=30"
    [44] "name=Microsoft LifeCam Front,size=640x360,fps=30"
    [45] "name=Microsoft LifeCam Front,size=640x480,fps=30"
    [46] "name=USB2.0 Camera,size=640x480,fps=30"
    [47] "name=USB2.0 Camera,size=352x288,fps=30"
    [48] "name=USB2.0 Camera,size=320x240,fps=30"
    [49] "name=USB2.0 Camera,size=176x144,fps=30"
    [50] "name=USB2.0 Camera,size=160x120,fps=30"
    
    (java.exe:1804): GStreamer-CRITICAL **: 
    Trying to dispose element rgb, but it is in READY instead of the NULL state.
    You need to explicitly set elements to the NULL state before
    dropping the final reference, to allow them to clean up.
    This problem may also be caused by a refcounting bug in the
    application or some element.
    
  • edited November 2016

    Have you chosen the 2 correct indices, so that each Capture controls a distinct hardware camera?
    Like, for example, 1 & 11? (:|

    ( myCam   = new Capture(this, cams[1]) ).start();
    ( yourCam = new Capture(this, cams[11]) ).start();
    

    I've already mentioned I've only got 1 physical camera in my laptop.
    So I can't have more than 1 Capture here. You've gotta test it in your own hardware! 8-|
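    Each physical camera shows up once per supported mode in Capture.list(), so to pick two indices that really are different devices you can collapse the list down to its distinct name= prefixes. A hypothetical plain-Java helper (`distinctNames` is my name for illustration, not a library call):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class DistinctCams {
    // Collect the distinct device names from a Capture.list()-style array,
    // preserving first-seen order.
    static Set<String> distinctNames(String[] entries) {
        Set<String> names = new LinkedHashSet<>();
        for (String e : entries) {
            // Each entry starts with "name=<device>", before the first comma.
            names.add(e.split(",")[0].substring("name=".length()));
        }
        return names;
    }

    public static void main(String[] args) {
        String[] sample = {
            "name=USB2.0 PC CAMERA,size=640x480,fps=15",
            "name=USB2.0 PC CAMERA,size=640x480,fps=30",
            "name=USB2.0 Camera,size=640x480,fps=30"
        };
        System.out.println(distinctNames(sample));  // prints [USB2.0 PC CAMERA, USB2.0 Camera]
    }
}
```

    Two indices are safe to use together only if they map to different names in that set; two entries sharing a name are just different modes of the same device.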

  • @kfrajer I fixed your code, but in the end it fails to produce what we've been talking about. I think my code works just fine; I just think any other attempt to rewrite the same algorithm is still going to bring up those messages I've been getting. Maybe that's why no one has done this, lol? I will keep trying though. It's been 9 months since I've been on here asking about this.

    import processing.video.*;
    Capture camA;
    Capture camB;
    String[] cameras;
    
    void setup() {
      size(1280, 640);
      cameras=Capture.list();
      camA = new Capture(this, 640, 480, cameras[4]);
      camB = new Capture(this, 640, 480, cameras[48]);
      camA.start(); 
      camB.start();
    }
    void draw() {
      image(camA, 100, 100, 360, 240);
      image(camB, 500, 100, 360, 240);
    }
    
    void captureEvent(Capture c) {
      if (c==camA) {   
        camA.read();
      } else if (c==camB) {
        camB.read();
      }
    }
    
  • @GoToLoop yes, that's the first thing I did. In fact, I've been trying indices [4] & [48]

  • Have you even tried my own attempted solution using indices 1 & 11?
    If you don't test my unmodified code, how can I know it works, since I don't have the means to test it out myself? [-(

  • edited November 2016

    Gonna repeat it here, but w/ indices 1 & 11 instead: >-)

    import processing.video.Capture;
    Capture myCam, yourCam;
    
    void setup() {
      size(1280, 480);
    
      final String[] cams = Capture.list();
      printArray(cams);
    
      ( myCam   = new Capture(this, cams[1])  ).start();
      ( yourCam = new Capture(this, cams[11]) ).start();
    }
    
    void draw() {
      set(0, 0, myCam);
      set(myCam.width, 0, yourCam);
    }
    
    void captureEvent(final Capture c) {
      c.read();
    }
    
  • @Jai Some relevant posts although a bit old:

    https://forum.processing.org/two/discussion/3090/gstreamer-critical-2360

    https://forum.processing.org/one/topic/unable-to-read-big-videos-with-processing-2-0-3.html

    Unfortunately I can't help much more atm b/c I don't have two capture devices either. I might try something tomorrow, but I can't promise anything.

    You could also try with another camera, as it is possible this could be a hardware issue; that is, if you happen to have a different cam around...

    Kf

  • Try a lower resolution and use the same frame rate. This is just a guess. Also try having only two cameras connected at a time, if possible. Don't use a USB hub, as described in the link (in case you are using one). Try setting your cameras' resolutions to 160x120 and 30 fps (or 15 fps if available).
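    Following that advice programmatically, a hypothetical plain-Java helper (`smallestIndex` is an illustrative name, not part of the video library) could pick the list entry with the smallest pixel area instead of eyeballing the printout:

```java
public class SmallestMode {
    // Return the index of the Capture.list()-style entry with the
    // smallest width*height, e.g. the 160x120 modes suggested above.
    static int smallestIndex(String[] entries) {
        int best = 0, bestArea = Integer.MAX_VALUE;
        for (int i = 0; i < entries.length; i++) {
            // The second field is "size=<w>x<h>".
            String size = entries[i].split(",")[1].substring("size=".length());
            String[] wh = size.split("x");
            int area = Integer.parseInt(wh[0]) * Integer.parseInt(wh[1]);
            if (area < bestArea) { bestArea = area; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        String[] sample = {
            "name=CamA,size=640x480,fps=30",
            "name=CamA,size=160x120,fps=15",
            "name=CamA,size=320x240,fps=30"
        };
        System.out.println(smallestIndex(sample));  // prints 1
    }
}
```

    Run this once per device name to get each camera's lowest-bandwidth mode, which is the least likely to overload a shared USB bus.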

    Kf

  • Lowest res & fps pair is: 18 & 36. :-/

  • JaiJai
    edited November 2016

    thanks guys, I really appreciate your help and tips. I'll keep working, and I will order a new cam, although coming from China it will take a while, so I guess till then I'll keep in touch, hopefully with mind-blowing results! thanks again!!

    I do have another project I've been working on and will tend to that one for now; it'll keep me debugging till New Year's lol

  • @Jai

    I just tested both codes and they worked.

    I tested using the following settings for both cameras: 320x240 30fps and 640x480 30fps.

    Here I have attached the reference codes:

    import processing.video.*;
    Capture camA;
    Capture camB;
    String[] cameras;
    
    void setup() {
      size(1280, 640);
      cameras=Capture.list();
      camA = new Capture(this, 640, 480, cameras[4]);
      camB = new Capture(this, 640, 480, cameras[48]);
      camA.start();
      camB.start();
    }
    void draw() {
      image(camA, 100, 100, 360, 240);
      image(camB, 500, 100, 360, 240);
    }
    
    void captureEvent(Capture c) {
      if (c==camA) {  
        camA.read();
      } else if (c==camB) {
        camB.read();
      }
    }
    

    This next one, from @GoToLoop, tested as well:

    import processing.video.Capture;
    Capture myCam, yourCam;
    
    void setup() {
      size(1280, 480);
    
      final String[] cams = Capture.list();
      printArray(cams);
    
      ( myCam   = new Capture(this, cams[1])  ).start();
      ( yourCam = new Capture(this, cams[11]) ).start();
    }
    
    void draw() {
      set(0, 0, myCam);
      set(myCam.width, 0, yourCam);
    }
    
    void captureEvent(final Capture c) {
      c.read();
    }
    

    Kf
