OpenKinect - depth and type size


I'm working on a generative typographic installation with several effects the user can apply to the type. One of them is using the Kinect depth sensor to control the type size (combined with the average point tracking), i.e. the closer the user stands, the larger the text becomes. Any pointers?
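One common approach is a clamped linear map from the tracked point's depth value to a font size. Here's a minimal sketch of that idea; the depth range (roughly 500–4000 mm is typical for a Kinect v1) and the size range are assumptions you'd tune for your installation space:

```java
// Maps a Kinect depth reading (mm) to a font size: closer user => larger type.
// MIN_DEPTH/MAX_DEPTH and the size range are placeholder values to calibrate.
public class DepthToSize {
    static final float MIN_DEPTH = 500f;   // closest usable distance (mm), assumed
    static final float MAX_DEPTH = 4000f;  // farthest usable distance (mm), assumed
    static final float MIN_SIZE  = 12f;    // smallest type size (pt)
    static final float MAX_SIZE  = 200f;   // largest type size (pt)

    // Linear map with clamping: near => MAX_SIZE, far => MIN_SIZE.
    static float depthToSize(float depthMm) {
        float t = (depthMm - MIN_DEPTH) / (MAX_DEPTH - MIN_DEPTH);
        t = Math.max(0f, Math.min(1f, t)); // clamp to [0, 1]
        return MAX_SIZE - t * (MAX_SIZE - MIN_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(depthToSize(500f));   // 200.0 (user up close)
        System.out.println(depthToSize(4000f));  // 12.0  (user far away)
    }
}
```

In a Processing sketch you could feed the depth of your averaged tracking point into a function like this each frame, and pass the result to `textSize()`; smoothing the value over a few frames (e.g. a simple low-pass filter) helps avoid jittery type.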

I've got the average point tracking working alongside a dictation library, so the user can also input their own text through speech.
