What is the performance difference between floats and ints?

edited February 2018 in Programming Questions

Does it make a significant difference to use ints whenever possible? Also, is there any performance difference if I use one variable for values that may be floats or ints, and convert it to an int with int() whenever possible? For example, does a 1.00000 perform worse than a 1.23624?

Answers

    • AFAIK, integer operations take fewer CPU cycles than float operations.
    • But if you need floats, avoid converting them to integers.
    • Conversions also take CPU cycles.
  • What happened when you did some profiling?

    If you haven't encountered any issues, then you probably shouldn't be worrying too much about this stuff.

  • I'm really having FPS issues, even at low resolutions. I tried removing calls to classes one by one to find the culprit, but since I wrote a lot of the code the organization is chaotic and it's really bloated at this point, so it's difficult to test individual things. I've just been automatically using floats for everything except increments, and I wondered how much of a difference that makes. Without really drawing anything to the screen I can only get 120 fps max now; when my code was a fifth of its current size I could get 300 fps.

  • 120 fps max now; when my code was a fifth of its current size I could get 300 fps.

    Yeah, I mean, that makes sense. Computers have finite processing speed, and you can't throw an unlimited number of things at them without sacrificing FPS. 120 fps is still very high.

    The best thing you can do is come up with a small example program that allows you to measure and test what happens when you use different values or different data structures or different algorithms.

  • Answer ✓

    If you are doing something where you don't need float, definitely use int. It's not just a matter of clock cycles; it's also simplicity: float involves round-off error and other nastiness that is far more complex to deal with. But if you are trying to represent velocities, and want continuous behavior, then floating point is definitely what you want. The difference in speed on a modern CPU is negligible. It's easy to see how negligible simply by writing a few raw benchmarks, though micro-benchmarking is hard to do in Java: because the Just-In-Time compiler will make things faster after a while, you want to "warm up" your code to see what it's really like. Run it a few times, then run it again many times and measure the speed.

    Write a loop to sum integers. Do the same for summing float. You'll see the difference is not huge.

    long t0 = System.nanoTime();
    float sum = 0;
    for (int i = 0; i < n; i++) sum += i;
    long t1 = System.nanoTime();
    System.out.println((t1 - t0) * 1e-9); // elapsed time in seconds
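    To make that concrete, here is a rough sketch of such a warm-up benchmark, summing ints and then floats (the loop count and the exact layout are my own choices, not anything prescribed by the answer above):

    int n = 100000000;
    // Run everything twice: the first pass warms up the JIT, the second pass is the one to trust.
    for (int pass = 0; pass < 2; pass++) {
      long t0 = System.nanoTime();
      long intSum = 0;
      for (int i = 0; i < n; i++) intSum += i;
      long t1 = System.nanoTime();

      float floatSum = 0;
      for (int i = 0; i < n; i++) floatSum += i;
      long t2 = System.nanoTime();

      // Print the sums as well, so the JIT cannot discard the loops as dead code.
      System.out.println("pass " + pass
          + "  int: " + (t1 - t0) * 1e-9 + " s (" + intSum + ")"
          + "  float: " + (t2 - t1) * 1e-9 + " s (" + floatSum + ")");
    }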

    But if you don't understand these things, then you probably need to learn some other basics first, such as which operations are really slow and which computations will give wrong answers. My experiments a decade ago showed the following hierarchy, which I believe is roughly unchanged. I will list the costs as rough integer weights:

    integer addition        1
    multiplication          3
    static function call    5
    regular method call     9
    object creation        50

    Compilers have gotten a lot better since then, and they will remove some of this overhead whenever possible. But if you want to make your code faster, here are some good ideas.

    1. Make methods private when they are not needed by external classes.
    2. Make methods final (very important) when they do not need to be overridden.
    3. Use either static functions or final methods whenever possible. These calls can be optimized better by the compiler.
    4. Don't go object-crazy. Object creation is the single highest expense in Java. Consider a list of Integer:

    ArrayList<Integer> list;

    vs. writing your own list of int:

    IntList list2;

    The ArrayList will have an array of object references (pointers), each of which is 4 bytes. Each object in Java has an overhead of 12 bytes, plus in this case the 4 bytes for the int inside the Integer. So the cost of building 1 million Integers in an ArrayList is 20 bytes per element, or 20 million bytes, and the memory is not contiguous, because each Integer is allocated separately with new.

    For the IntList implementation, in your class somewhere you allocate a single array of int:

    int[] data = new int[size];

    That's 4 million bytes, 1/5 the size of the ArrayList version, which immediately suggests about 5 times the speed; and in practice it's better than that, because it's all a single block of memory and sequential access is much faster.
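    A minimal sketch of what such an IntList could look like (the class name and the doubling growth policy are just illustrative, not taken from any particular library):

    // A growable array of primitive ints: one contiguous block, no Integer objects.
    class IntList {
      int[] data = new int[16];
      int size = 0;

      void add(int value) {
        if (size == data.length) {
          // Double the capacity when the backing array is full.
          int[] bigger = new int[data.length * 2];
          System.arraycopy(data, 0, bigger, 0, size);
          data = bigger;
        }
        data[size++] = value;
      }

      int get(int index) {
        return data[index];
      }
    }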

    This is one reason I love processing's choice of int as a color. It works extremely well.

    Last, be aware that for computation of any kind in floating point, never use float; always use double. Unless you are really sophisticated, it is very easy to lose all your digits to round-off and have no accuracy left. float may be fine for drawing coordinates, but if you are building a model of anything physical, use double. The performance hit is nothing compared to being right.
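    A quick illustration of the kind of round-off this is warning about (my own sketch, not from the answer): accumulate a small step a million times in both types and compare the results.

    float fsum = 0;
    double dsum = 0;
    for (int i = 0; i < 1000000; i++) {
      fsum += 0.1f;  // float: the error builds up quickly
      dsum += 0.1;   // double: the error stays tiny at this scale
    }
    System.out.println(fsum); // noticeably far from the exact value of 100000
    System.out.println(dsum); // very close to 100000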

  • edited March 2018

    This is one reason I love processing's choice of int as a color. It works extremely well.

    Most displays use 32-bit colour values (ARGB), and because the Java int data type is a 32-bit integer it makes sense to use ints when working with pixels.
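    For example, the standard bit-shift idiom (sketched here for illustration, not quoted from the comment above) pulls the four channels out of one int and packs them back again:

    // A 32-bit ARGB colour packs four 8-bit channels: 0xAARRGGBB.
    int argb = 0xFF4080C0;
    int a = (argb >> 24) & 0xFF;  // alpha
    int r = (argb >> 16) & 0xFF;  // red
    int g = (argb >> 8)  & 0xFF;  // green
    int b = argb & 0xFF;          // blue
    // Repack the channels into a single int.
    int repacked = (a << 24) | (r << 16) | (g << 8) | b;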

    Be warned that if you use the double data type, you need to remember that most of Processing's maths functions expect a float parameter:

    double d = 1.23456789123;
    double ps = sin(d); // compile error: Processing's sin() expects a float, and a double is not narrowed automatically
    

    Calling Java's own Math functions directly, on the other hand, works with doubles:

    double d = 1.23456789123;
    double js = Math.sin(d); // works because Math.sin() takes a double
    
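    If you do need to pass a double into one of Processing's float-based functions, an explicit cast works, at the cost of the extra precision (my own note, not from the comment above):

    double d = 1.23456789123;
    float ps = sin((float) d); // the cast narrows the double to a float, so sin() accepts it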