millis() vs Java's System.currentTimeMillis()?

As far as I understand it, the Java one returns the number of milliseconds since January 1, 1970, while the Processing one returns the milliseconds since the sketch began running. For a timer this should not matter, since you're subtracting the starting value from the current value anyway. However, something does not seem to be quite right:

int listLength = 10; // NUMBER OF 4-CHARACTER RANDOM-LETTER ELEMENTS IN THE ARRAY
String[] list = new String[listLength];
StopWatch stopWatch = new StopWatch();
MiscMethods misc = new MiscMethods(); 
SomeSorters sorts = new SomeSorters();


void setup() {
  stopWatch.start();

  misc.makeData(list);
  misc.showList(list);
  sorts.selection(list);
  misc.showList(list); //   SHOWS LIST AFTER BEING SORTED

  stopWatch.stop();
  println(stopWatch.time());
  stopWatch.reset();

  exit();

}



class StopWatch {
  long startTime, endTime;

  void start() {
    startTime = millis();   // record the sketch time at the start
  }

  void stop() {
    endTime = millis();     // record the sketch time at the stop
  }

  double time() {
    return endTime - startTime;   // elapsed milliseconds
  }

  void reset() {
    startTime = endTime = 0;
  }
}

When I time the sort it returns anywhere from 1.0 to 3.0 milliseconds, and that does not seem right.

Answers

  • Why doesn't that seem right?

  • I guess this is related to your previous thread?
    http://forum.processing.org/two/discussion/4824/stop-watch-class

    Anyways, Processing's millis() returns an int, not a long! :-@
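
    For example, a minimal sketch of a millis()-based stopwatch that keeps everything as int (illustrative only, not your actual code):

    int startTime;

    void setup() {
      startTime = millis();                  // millis() returns an int
      // ... the work you want to time ...
      int elapsed = millis() - startTime;    // int arithmetic is fine here
      println(elapsed + " ms");
    }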

  • Why doesn't that seem right?

    It seems to take much longer than 1.0 millis.

    I guess this is related to your previous thread?

    Yes.

  • How long does it seem to take? Where is your MCVE?

  • Personally I think 1.0 millis is a little long to sort just 10 elements ;) That is probably because you are also measuring the time it takes to display the list before and after the sort.

    Move the watch start/stop method calls to surround JUST what you want to measure.

    void setup() {
      misc.makeData(list);
      misc.showList(list);
    
      stopWatch.start();
      sorts.selection(list);
      stopWatch.stop();
    
      misc.showList(list); //   SHOWS LIST AFTER BEING SORTED
    
      println(stopWatch.time());
      stopWatch.reset();
      exit();
    }
    

    It might seem longer because there is significant overhead in launching a sketch - for example, it has to create and initialise a window even if there is no draw() method.

  • Okay, I see. However, I now get a reading of 0.0. It does seem to work, though: when I set the list length to 1000, it takes 5.0 millis. Should I be using nanoseconds perhaps?

    I know time is generally not accepted as a way of determining the efficiency of a sort, but in this case I am doing so. It is being used to show the time difference between different sorts with different lengths of data.

  • Nanoseconds are good and accurate for measuring time intervals - System.nanoTime() is designed for exactly that (see the sketches at the end of this answer).

    I know time is generally not accepted as a way of determining efficiency of a sort

    There are other measures of efficiency, e.g. memory requirements, processor cycles, processor stack usage (for recursive methods) etc. You can also examine the algorithmic efficiency, i.e. the number of comparisons and the number of swaps made (the first sketch below counts both). If using time, it is important to remember that the sort is just one task being processed on the computer, so there will be variations between tests even for the same data set size and sort algorithm.

    If you have 3 algorithms A, B and C then I suggest that for a particular data set size you run each algorithm 5 times, interleaved as A B C A B C A B C A B C A B C, and average the five results for each algorithm, ignoring unusually low or high values (the second sketch below shows the timing part).
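
    A sketch of counting comparisons and swaps - your SomeSorters class isn't shown, so this is an illustrative selection sort, not your actual code:

    int comparisons, swaps;

    void selectionSort(String[] a) {
      comparisons = swaps = 0;
      for (int i = 0; i < a.length - 1; i++) {
        int min = i;
        for (int j = i + 1; j < a.length; j++) {
          comparisons++;                    // one comparison per inner step
          if (a[j].compareTo(a[min]) < 0) {
            min = j;
          }
        }
        if (min != i) {
          String tmp = a[i];                // swap a[i] and a[min]
          a[i] = a[min];
          a[min] = tmp;
          swaps++;
        }
      }
      println(comparisons + " comparisons, " + swaps + " swaps");
    }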
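
    And a sketch of the nanosecond timing and averaging, assuming the sorts.selection() from your question (add one such method per algorithm to interleave A, B and C; dropping outliers is left out for brevity):

    long timeSelection(String[] data) {
      String[] copy = data.clone();         // sort a fresh copy each run
      long t0 = System.nanoTime();
      sorts.selection(copy);
      return System.nanoTime() - t0;        // elapsed nanoseconds
    }

    void benchmark(String[] data, int runs) {
      long total = 0;
      for (int i = 0; i < runs; i++) {
        total += timeSelection(data);
      }
      println("average: " + (total / runs) / 1e6 + " ms over " + runs + " runs");
    }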

  • If the entire process takes only a few milliseconds at the longest, why are you so worried about it?

    What you're doing is microbenchmarking, which is at best unreliable and most likely useless anyway - especially in a language like Java (which is what you're really programming in if you're using Processing), where things like JRE startup, the JIT compiler, and garbage collection can greatly skew results.

    One way to get more accurate results is to perform the test multiple times: not just 5, but 10,000 times or more (see the sketch below the quote).

    But then again, as Donald Knuth famously said, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
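
    A rough sketch of such a loop, reusing the sorts.selection() from the question - the warm-up runs give the JIT compiler a chance to kick in before anything is measured:

    int WARMUP = 1000;      // unmeasured runs so the JIT can compile the sort
    int MEASURED = 10000;   // measured runs to average over

    void microbenchmark(String[] data) {
      for (int i = 0; i < WARMUP; i++) {
        sorts.selection(data.clone());
      }
      long t0 = System.nanoTime();
      for (int i = 0; i < MEASURED; i++) {
        sorts.selection(data.clone());      // note: the clone is measured too
      }
      println("about " + (System.nanoTime() - t0) / MEASURED + " ns per sort");
    }

    Subtract a clone-only baseline if you want the sort time alone.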

  • I am aware that using time as a benchmark for these is of negligible effectiveness. However, it was a requirement of the assignment.
