Thanks! It's coded using processing-py, which is simply a Python implementation of the Processing environment (there's a good blog post on how to set it up here:
http://bit.ly/KBNxx2 ). Here's the Python code for now anyway. I've commented most of the lines, so even if you don't know Python it shouldn't be that hard to understand or recreate in Java if you wanted:
    # image input = 480x270
    # cropped image (without border) = 480x192

    count = 0

    def setup():
        size(1280, 720, P3D)
        background(255)
        sphereDetail(3)
        lights()

    def draw():
        # centres the drawing
        rotateX(0.12)
        translate(width/3, (height-192)/2, -100)

        background(255)

        frame = imageUpdate()  # read in the frame to be used

        pointMap(frame)

        screenshot()  # saves an image of the current canvas

    # Draws the output/3D spheres based on the frame given
    def pointMap(frame):
        frame.loadPixels()  # load the pixels of the frame

        for x in range(frame.width):
            for y in range(39, 231):  # altered slightly to get rid of the black top and bottom
                pixelColour = frame.get(x, y)  # get pixel colour
                fill(pixelColour, 50)  # change the fill and stroke to the pixel colour
                stroke(pixelColour)
                if (x % 4) == 0 and (y % 4) == 0:  # only use every 4th pixel on the x and y axes
                    pushMatrix()
                    # move sphere according to x and y coordinates
                    # and the brightness of the colour
                    translate(x, y, 2 * int(brightness(pixelColour)))
                    sphere(3)
                    popMatrix()

    # Returns the next frame to be drawn
    def imageUpdate():
        fileName = 'Y/Y '  # location and beginning of the file name
        fileNumber = str(frameCount % 5817)  # frame number
        fileNumber = fileNumber.zfill(4)  # pad the number to 4 digits
        fileName = fileName + fileNumber + '.jpg'
        return loadImage(fileName)

    # Saves an image of each frame
    def screenshot():
        global count
        filename = "720p/screen " + str(count) + ".png"  # forward slash avoids backslash-escape issues
        save(filename)
        count += 1
Note: The original video's frames were stored in a folder Y in the same directory as the sketch and named Y0000, Y0001, Y0002 and so on.
I used the P3D renderer.
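If it helps, here's a quick standalone check (plain Python, not part of the sketch) that mirrors the logic in imageUpdate() and prints the first few paths the sketch will try to load, so you can confirm your folder layout matches before running it:

    # Same filename logic as imageUpdate() above; run with any Python interpreter.
    for frameCount in range(3):
        fileName = 'Y/Y ' + str(frameCount % 5817).zfill(4) + '.jpg'
        print(fileName)
    # Prints:
    # Y/Y 0000.jpg
    # Y/Y 0001.jpg
    # Y/Y 0002.jpg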
The basic algorithm is simply: read in the image/frame, load its pixels into a pixel array, and for every 4th pixel get the colour and draw a sphere of that colour. The sphere's x and y position comes from the coordinates of the original pixel it's based on, and its z coordinate depends on the brightness of that colour. Then the canvas is saved out as an image.
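To make the depth mapping concrete: Processing's brightness() returns the HSB brightness (the largest of the R, G and B components, 0–255 with the default colour mode), so the z offset ends up between 0 and 510 and brighter pixels push their spheres out towards the viewer. A quick worked example (brightness() and color() are Processing functions, so this runs inside the sketch):

    # Worked example of the z mapping used in pointMap() above.
    print(2 * int(brightness(color(0))))          # black pixel    -> z = 0
    print(2 * int(brightness(color(255))))        # white pixel    -> z = 510
    print(2 * int(brightness(color(255, 0, 0))))  # pure red pixel -> z = 510 (HSB brightness, not luminance)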
So, like you said, it's not in real time; I found I wasn't able to calculate everything fast enough, which is definitely one of the limitations of Python. Instead it works off the saved frames, and I re-rendered all of the output frames together using movie editing software. As a result I also didn't have to use a video library and simply used the PImage class that Processing provides.
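If you'd rather not use a GUI editor, something like this should also do the stitching. It's just a sketch: it assumes ffmpeg is installed, that the frames are in the 720p folder named "screen 0.png", "screen 1.png", ... as produced by screenshot() above, and output.mp4 is just a placeholder name; pick whatever frame rate matches your source video.

    # Stitch the saved canvas frames into an mp4 with ffmpeg, called from Python.
    import subprocess

    subprocess.call([
        'ffmpeg',
        '-framerate', '24',           # frame rate of the output video (assumption)
        '-i', '720p/screen %d.png',   # %d matches the unpadded counter from screenshot()
        '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p',        # for broad player compatibility
        'output.mp4',
    ])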