I'd like to start getting familiar with the Kinect and Processing for a robotics project coming up next year. My problem is that I won't actually be getting a Kinect for a year. Would anyone be willing to share some existing Kinect data (depth and RGB of something moving) that they may have used for a sketch?
It's an odd request, but the researchers I've contacted seem reluctant to share (no responses yet), and while videos are abundant, I've not come across any downloadable raw Kinect data. Any help or guidance would be appreciated.
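In case it helps anyone decide what format to send: I'm assuming even a folder of numbered PNG frames (one per captured depth or RGB frame) would work for me, since Processing can play those back as a stand-in for a live Kinect. A minimal playback sketch along those lines (the file names and frame count are just my guesses):

```
// plays back saved depth frames as a stand-in for a live Kinect
PImage[] frames;
int current = 0;

void setup() {
  size(640, 480);
  frameRate(30);
  frames = new PImage[100];  // assuming 100 frames were shared
  for (int i = 0; i < frames.length; i++) {
    // assuming files named depth_0000.png, depth_0001.png, ... in the data folder
    frames[i] = loadImage("depth_" + nf(i, 4) + ".png");
  }
}

void draw() {
  image(frames[current], 0, 0);
  current = (current + 1) % frames.length;
}
```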
Background: I have been working on agent AI that was initially implemented in NetLogo but has since moved to a Java module called from NetLogo. NetLogo allows this through extensions: you write your Java code, package it in a .jar, and NetLogo calls into the .jar. The approach sort of works, but I'm not the creator of the .jar (just the algorithms), and frankly I'm not really clear on how the two programs actually communicate.
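From what I can glean from the NetLogo extensions documentation, the skeleton of such a .jar looks roughly like the sketch below (assuming the NetLogo 4.x/5.x extensions API; the class and primitive names are mine, and the .jar also needs a small manifest that names the class manager):

```
import org.nlogo.api.*;

// Rough skeleton of a NetLogo extension. NetLogo instantiates the
// class manager when a model declares `extensions [proc]`, then
// calls load() to learn which primitives the .jar provides.
public class ProcessingExtension extends DefaultClassManager {
  public void load(PrimitiveManager manager) {
    // each registered primitive becomes callable from NetLogo, e.g. proc:ping
    manager.addPrimitive("ping", new Ping());
  }

  public static class Ping extends DefaultCommand {
    // NetLogo invokes perform() each time a model runs proc:ping
    public void perform(Argument[] args, Context context)
        throws ExtensionException {
      System.out.println("NetLogo called into the jar");
    }
  }
}
```

If that's right, the "communication" is just NetLogo calling methods on classes it finds in the .jar, with any arguments marshalled through the Argument[] array.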
What I would like to do is use Processing for the AI module, because I want to be able to provide a visualization of what is happening in the AI. The end result would look like this: the NetLogo environment, with agents doing their thing, in one window, and a visualization (sketch) of one agent's thinking process in a second window. Being able to interact with the AI visualization would also be desirable.
Question: Has anyone done a NetLogo <--> Processing extension, or could anyone provide guidance on how it might be done?
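To make the question concrete, here is the kind of bridge I'm imagining: a primitive that pushes one number per tick from NetLogo into a Processing sketch running in a second window. This is only a guess under assumptions (Processing 1.x/2.x, where a PApplet can sit in an AWT Frame and be started with init(); NetLogo 4.x-style Syntax constants; all names are hypothetical):

```
import org.nlogo.api.*;
import processing.core.PApplet;
import java.awt.Frame;

// Hypothetical Processing side: a window showing one agent's "thinking".
class MindViewer extends PApplet {
  volatile float activation = 0;  // last value pushed in from NetLogo

  public void setup() { size(400, 400); }

  public void draw() {
    background(0);
    float d = activation * width;
    ellipse(width / 2f, height / 2f, d, d);  // crude display of the value
  }
}

// Hypothetical NetLogo side: registered in the class manager with
//   manager.addPrimitive("show-activation", new ShowActivation());
// so a model can call proc:show-activation 0.7 each tick.
class ShowActivation extends DefaultCommand {
  static MindViewer viewer;

  public Syntax getSyntax() {
    // the primitive takes one numeric argument
    return Syntax.commandSyntax(new int[] { Syntax.TYPE_NUMBER });
  }

  public void perform(Argument[] args, Context context)
      throws ExtensionException, LogoException {
    if (viewer == null) {  // open the second window lazily, on first call
      viewer = new MindViewer();
      Frame frame = new Frame("Agent mind");
      frame.add(viewer);
      viewer.init();  // starts the sketch's draw loop (Processing 1.x/2.x)
      frame.setSize(400, 430);
      frame.setVisible(true);
    }
    viewer.activation = (float) args[0].getDoubleValue();
  }
}
```

If something like this works, interacting with the visualization could live entirely on the Processing side (mousePressed() and friends), since the sketch runs its own event loop independently of NetLogo.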
PS - I have worked through "Getting Started with Processing" and am just starting on "Visualizing Data". I am a very advanced MATLAB programmer, but Java and OO are not very familiar to me.