org.eclipse.jdi.TimeoutException


I'm writing a custom blob detection engine that uses getRawData from Daniel Shiffman's Open Kinect for Processing library, and encountering this error:

crashed in event thread due to Timeout occurred while waiting for packet 603.
org.eclipse.jdi.TimeoutException: Timeout occurred while waiting for packet 603.
    at org.eclipse.jdi.internal.connect.PacketReceiveManager.getReply(PacketReceiveManager.java:186)
    at org.eclipse.jdi.internal.connect.PacketReceiveManager.getReply(PacketReceiveManager.java:197)
    at org.eclipse.jdi.internal.MirrorImpl.requestVM(MirrorImpl.java:191)
    at org.eclipse.jdi.internal.MirrorImpl.requestVM(MirrorImpl.java:226)
    at org.eclipse.jdi.internal.ThreadReferenceImpl.frames(ThreadReferenceImpl.java:257)
    at org.eclipse.jdi.internal.ThreadReferenceImpl.frames(ThreadReferenceImpl.java:240)
    at processing.mode.java.runner.Runner.findException(Runner.java:909)
    at processing.mode.java.runner.Runner.reportException(Runner.java:892)
    at processing.mode.java.runner.Runner.exceptionEvent(Runner.java:818)
    at processing.mode.java.runner.Runner$2.run(Runner.java:707)

At first, I was sure this was a logic error. Blob detection uses a lot of recursive method calls, so it's easy to get something wrong and wind up in an infinite loop of calls. However, Processing doesn't seem to hang or generate the above error when a sketch enters an infinite loop. I've gone over this code with a fine-toothed comb and can't see a way for it to get stuck (I'm flagging each pixel as checked before extending my search, so the engine can't double back on itself).
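
For reference, the recursive search has roughly this shape (a simplified sketch with illustrative names, not my actual engine):

    // Simplified sketch of the recursive pattern (illustrative names only).
    // Each pixel is flagged as checked *before* recursing, so the search
    // can't double back on itself.
    int w = 640, h = 480;              // Kinect depth resolution
    int THRESHOLD = 700;               // arbitrary depth cutoff for this sketch
    boolean[] checked = new boolean[w * h];

    void floodFill(int x, int y, int[] depth, ArrayList<Integer> blob) {
      if (x < 0 || x >= w || y < 0 || y >= h) return;  // off the image
      int i = x + y * w;
      if (checked[i]) return;                          // already visited
      checked[i] = true;                               // flag first, then extend
      if (depth[i] >= THRESHOLD) return;               // not part of a blob
      blob.add(i);
      floodFill(x + 1, y, depth, blob);
      floodFill(x - 1, y, depth, blob);
      floodFill(x, y + 1, depth, blob);
      floodFill(x, y - 1, depth, blob);
    }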

If I reduce the granularity of my engine (i.e. exactly the same logic, but only checking every Nth pixel), the crash becomes less frequent. At N=1, I crash immediately. At N=2, I usually crash within a few seconds. At N>=3, I don't crash at all. The crashes seem to roughly correspond with the sketch hitting ~100% CPU when running without a delay(), though that could just be a coincidence - adding a delay reduces the CPU usage but does not affect the crash.
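
The granularity knob is just the step size of the scan loop that seeds the search, roughly like this (again simplified, reusing the illustrative names from the sketch above):

    int N = 2;  // check every Nth pixel: N=1 crashes instantly, N>=3 doesn't crash

    void scanForBlobs(int[] depth) {
      for (int y = 0; y < h; y += N) {
        for (int x = 0; x < w; x += N) {
          if (!checked[x + y * w] && depth[x + y * w] < THRESHOLD) {
            ArrayList<Integer> blob = new ArrayList<Integer>();
            floodFill(x, y, depth, blob);
            // ... hand the blob off to the rest of the engine
          }
        }
      }
    }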

Further, running with N=2 immediately after setup() causes an instant crash, but running after a 1s delay does not. This makes me think the added CPU load during setup is related.
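
Concretely, the "1s delay" is just skipping the engine until the sketch has been running for a second (depthFrame() below is a placeholder for however the raw Kinect data is fetched, not a library call):

    void draw() {
      int[] depth = depthFrame();    // placeholder, not the actual Kinect call
      if (millis() < 1000) return;   // skip blob detection for the first second
      scanForBlobs(depth);
    }

    int[] depthFrame() {
      return new int[w * h];         // stub; the real sketch gets this from the Kinect library
    }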

Memory is effectively stable (excepting a small known leak in the Kinect library, but that is consistent and small enough to be ignored).
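
A simple way to watch this from inside the sketch, with plain JVM calls and nothing Kinect-specific:

    void logMemory() {
      Runtime rt = Runtime.getRuntime();
      long usedMB = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
      println("used heap: " + usedMB + " MB");
    }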

This makes me think one of two things is happening:

  1. I'm hitting a maximum execution stack depth. This generally makes sense, but it doesn't account for why adding the 1s delay helps - perhaps it's just that the data coming back from the Kinect in the first second is "darker", so the search tree is deeper? (An iterative version that would rule this out is sketched after this list.)

  2. I'm hitting a maximum execution time, which results in the timeout. This also generally makes sense, but Processing doesn't time out if you just run an infinite loop, e.g. while (true) { if (random(1) == 2) { break; } }, so it doesn't seem likely either.
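
If it really is stack depth (option 1), one experiment would be replacing the recursion with an explicit stack, which takes JVM call depth out of the equation entirely. A rough sketch, reusing the illustrative names from above:

    import java.util.ArrayDeque;  // at the top of the sketch

    // Same search as floodFill(), but driven by an explicit stack instead of
    // recursion, so it can't overflow the call stack however big the blob is.
    void floodFillIterative(int startX, int startY, int[] depth, ArrayList<Integer> blob) {
      ArrayDeque<int[]> stack = new ArrayDeque<int[]>();
      stack.push(new int[] { startX, startY });
      while (!stack.isEmpty()) {
        int[] p = stack.pop();
        int x = p[0], y = p[1];
        if (x < 0 || x >= w || y < 0 || y >= h) continue;  // off the image
        int i = x + y * w;
        if (checked[i]) continue;                          // already visited
        checked[i] = true;
        if (depth[i] >= THRESHOLD) continue;               // not part of a blob
        blob.add(i);
        stack.push(new int[] { x + 1, y });
        stack.push(new int[] { x - 1, y });
        stack.push(new int[] { x, y + 1 });
        stack.push(new int[] { x, y - 1 });
      }
    }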

Any ideas? Thanks in advance!
