Would it be any use to run a very brief, processor-intensive function during setup that counts how many calculations complete in a fixed timeframe? That count would give you a figure you could later divide your framerate by in the actual application. It's possibly too rough and unreliable a method to work from, though: it only gives a meaningful number if nothing else is using the processor during either the test or the actual application. Otherwise the results are skewed.
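A minimal sketch of that kind of setup micro-benchmark, in Python for illustration (the function name and the choice of work unit are my own, not anything from the original post): count trivial operations until a short deadline passes and return the count as a rough speed score.

```python
import time

def benchmark_score(duration=0.1):
    """Hypothetical setup benchmark: count how many trivial arithmetic
    operations finish within `duration` seconds. The count is only a
    rough proxy for CPU speed and is skewed by any other load."""
    count = 0
    x = 0.0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        x += 1.0   # the trivial unit of work being counted
        count += 1
    return count
```

The absolute number is meaningless on its own; it is only useful relative to a score recorded on a known reference machine.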
Or you could set an ideal framerate as a target number. While monitoring the fps, the code could lower or raise one scalable variable until the frame rate matches the target.
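That feedback idea could look something like this sketch (function name, step size, and the hysteresis band are my own assumptions, not from the original): each frame, nudge a single quality scalar down when fps is below target and up when fps is comfortably above it.

```python
def adjust_quality(quality, measured_fps, target_fps=60.0,
                   step=0.05, lo=0.1, hi=1.0):
    """Hypothetical per-frame controller: nudge one quality scalar
    toward the target frame rate, clamped to [lo, hi]."""
    if measured_fps < target_fps:
        # Under target: reduce the workload.
        quality = max(lo, quality - step)
    elif measured_fps > target_fps * 1.1:
        # Comfortably over target (10% band avoids oscillating
        # around the exact target): allow more quality back in.
        quality = min(hi, quality + step)
    return quality
```

Calling this once per frame with a smoothed fps reading (rather than the raw instantaneous value) keeps the variable from jittering.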