TL;DR - Syncing a video and a data stream - What's the best way to display a long (900+ frame) sequence of images at a high frame rate (30 fps)?
I am working with an eye tracker that outputs an x and y coordinate based on the user's gaze point.
I want to show the user a video clip, record their eye movement data, and then render that data as a moving point on top of the original video.
E.g.
http://uoregon.edu/~josh/eyetracker/test.mov (I am using mouse data during testing).
Initially I tried playing the video and writing the data to a text file, then playing the video again, reading the text file back, drawing the gaze point, and drawing each composited frame to a new video. This works for short clips (that's how the example video was made), but over time the data and the video drift out of sync. Processing has no way of telling what frame a movie is on, only a time, which isn't accurate enough.
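For reference, that first approach looked roughly like this (file names are placeholders, and the mouse stands in for the tracker during testing):

```java
import processing.video.*;

Movie mov;
PrintWriter log;

void setup() {
  size(640, 480);
  mov = new Movie(this, "test.mov");  // placeholder stimulus clip
  mov.play();
  log = createWriter("gaze.txt");
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(mov, 0, 0);
  // time() is the only position Processing's video library exposes,
  // and it drifts relative to the data stream over longer clips.
  log.println(mov.time() + "\t" + mouseX + "\t" + mouseY);
}

void keyPressed() {
  log.flush();
  log.close();
  exit();
}
```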
So then I broke the video up into individual images (30 per second of video) and displayed those frames in sequence, recording one piece of eye movement data per frame. This works great and allows for perfect syncing when rendering.
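The frame-sequence version is essentially this (again, the frames/ path and naming scheme are just examples):

```java
PrintWriter log;
int frame = 1;
int totalFrames = 900;  // e.g. 30 seconds of video at 30 fps

void setup() {
  size(640, 480);
  frameRate(30);
  log = createWriter("gaze.txt");
}

void draw() {
  if (frame > totalFrames) {
    log.flush();
    log.close();
    exit();
    return;
  }
  // Frames exported as frames/frame0001.jpg, frames/frame0002.jpg, ...
  PImage img = loadImage("frames/frame" + nf(frame, 4) + ".jpg");
  image(img, 0, 0);
  // One sample per frame: the frame number is the sync key,
  // so the data can never drift from the video.
  log.println(frame + "\t" + mouseX + "\t" + mouseY);
  frame++;
}
```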
However, this limits me to a frame rate within Processing of around 20 fps while recording the data (the per-frame loadImage() call from disk seems to be the bottleneck). Since the frame sequence was generated at 30 fps, the resulting 'animation' plays back at two-thirds of its actual speed.
Loading all the images into an array during setup() gets playback up to 30 fps, but I run out of memory around the 400-frame mark.
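The preloading variant, with the same example file names, is essentially:

```java
PImage[] frames;
int totalFrames = 900;

void setup() {
  size(640, 480);
  frameRate(30);
  frames = new PImage[totalFrames];
  // Preloading avoids the per-frame disk hit and holds 30 fps,
  // but the sketch runs out of memory around 400 frames.
  for (int i = 0; i < totalFrames; i++) {
    frames[i] = loadImage("frames/frame" + nf(i + 1, 4) + ".jpg");
  }
}

void draw() {
  image(frames[(frameCount - 1) % totalFrames], 0, 0);
}
```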
Does anyone have any suggestions? Is there a better way to do this entirely? I looked at Jitter, since it can step through a movie one frame at a time, but I would prefer to get this working in Processing.
Sorry for the lengthy post. Thought it best to go through my process. Let me know if you need any clarification.
Thanks,
TB