Hi, I am wondering what is the best way to render an arbitrary number of polygons with different vertex counts using GLGraphics?
I have a toxiclibs Voronoi instance, and I'm taking its regions and feeding the vertices of each region into a separate GLModel. I have a feeling this is not the most efficient way to do it: performance starts to suffer with fewer than 200 vertices.
I suspect I could put all vertices into one GLModel, keep track of which vertices belong to which region, and then call GLModel.render(first, last) as many times as needed to draw each polygon.
But then I'm not sure which mode to set on the GLModel: TRIANGLE_STRIP would join all the regions into one strip, TRIANGLES would probably create problems when I try to texture each region with a different image, and POLYGON turns all the points into one single polygon...
Here's my code:
import processing.opengl.PGraphicsOpenGL;
import toxi.geom.Polygon2D;
import toxi.geom.Vec2D;
import toxi.geom.mesh2d.Voronoi;
import codeanticode.glgraphics.GLConstants;
import codeanticode.glgraphics.GLGraphics;
import codeanticode.glgraphics.GLModel;
Voronoi voronoi = new Voronoi();                        // toxiclibs voronoi generator
ArrayList<GLModel> glModels = new ArrayList<GLModel>(); // currently one GLModel per region
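For what it's worth, the bookkeeping for the single-buffer idea above can be sketched in plain Java. RegionRange and pack are hypothetical names of mine, not GLGraphics or toxiclibs API; the resulting offsets would feed whatever render(first, last) variant your GLGraphics build exposes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical bookkeeping for packing all region vertices into one shared
// vertex buffer, remembering where each region starts and how long it is.
public class RegionPacker {
    public static class RegionRange {
        public final int first; // index of the region's first vertex in the shared buffer
        public final int count; // number of vertices in this region
        RegionRange(int first, int count) { this.first = first; this.count = count; }
    }

    // vertexCounts[i] = number of vertices in region i, in buffer order.
    public static List<RegionRange> pack(int[] vertexCounts) {
        List<RegionRange> ranges = new ArrayList<RegionRange>();
        int offset = 0;
        for (int n : vertexCounts) {
            ranges.add(new RegionRange(offset, n));
            offset += n;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Three regions with 5, 3 and 7 vertices packed into one buffer of 15.
        for (RegionRange r : pack(new int[] {5, 3, 7})) {
            System.out.println(r.first + " " + r.count);
        }
        // Drawing would then loop over the ranges, bind each region's texture,
        // and call something like model.render(r.first, r.first + r.count - 1)
        // per region -- check the exact render(first, last) semantics in your
        // GLGraphics version before relying on this.
    }
}
```

This keeps one vertex upload instead of one GLModel per region; only the draw calls (and texture binds) stay per-region.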
Do everything offline: FFT the audio one video frame at a time, render that frame and save it, then merge the audio after the fact.
So, if you had audio exactly 60 seconds long, you'd do exactly 1800 FFTs (60 * 30 fps) spanning exactly 2,646,000 samples (60 * 44100), and render exactly 1800 video frames, each of which "looked at" an FFT of exactly 1470 samples (44100 / 30). That's the sort of math you'd like to see for "perfect" sync.
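That arithmetic can be written down as a couple of trivial helpers, just to make the relationships explicit (SyncMath is a hypothetical name of mine, not part of Minim; the numbers assume 44.1 kHz audio at 30 fps):

```java
// Frame/sample bookkeeping for offline audio-visual sync.
public class SyncMath {
    // How many audio samples each video frame's FFT window should cover.
    static int samplesPerFrame(int sampleRate, int fps) {
        return sampleRate / fps;
    }

    // Total video frames for a clip of the given length in seconds.
    static int totalFrames(int seconds, int fps) {
        return seconds * fps;
    }

    public static void main(String[] args) {
        int spf = samplesPerFrame(44100, 30);   // 1470 samples per frame
        int frames = totalFrames(60, 30);       // 1800 frames for 60 s
        System.out.println(spf + " samples/frame, " + frames + " frames, "
                + (spf * frames) + " samples total"); // 2,646,000 = 60 * 44100
    }
}
```

Note this only divides evenly because 44100 is a multiple of 30; at, say, 24 fps you'd have to distribute a fractional remainder across frames.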
However, as far as I understand it, Minim cannot seek to a cue point. Is there a way around this?
-l