Testing / unit testing / TDD with Processing development? [GSOC]

How does Processing core development handle testing -- unit tests, UI tests, and so on?

What about library or mode developers -- do you write tests against Processing in general or against specific sketches, and if so, how?

Context: Recently @gaocegege has been working out a code-coverage strategy and thinking about whether and how to do testing while developing the Processing.R mode for GSOC.

Also, a light moment from the Processing README:

    Someday we'll also fix all these bugs, throw together hundreds of unit tests, and get rich off all this stuff that we're giving away for free. But not today.

Comments

  • Regarding how Processing core handles tests: here are some of the files in core that a search turned up -- files that use JUnit, describe testing, or map errors / specific bug tests to error messages.
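
    For a feel of what such a test looks like, here is a minimal JUnit 4 test against core -- a hypothetical example exercising the static PApplet.map() helper, not one of the actual files the search found:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;
        import processing.core.PApplet;

        public class MapTest {
          // map() linearly rescales a value from one range to another.
          @Test
          public void mapScalesLinearly() {
            assertEquals(50.0, PApplet.map(0.5f, 0, 1, 0, 100), 0.001);
            assertEquals(0.0, PApplet.map(-1, -1, 1, 0, 10), 0.001);
          }
        }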

  • An issue about How to Write Unit Tests is currently open as Processing.R#10. Advice appreciated!

  • I am not an expert on this kind of testing; however, one idea occurs to me:

    Sketches could be tests -- running them to see if things work is a form of testing, even without automation.

    Automating sketch-testing might be done with pixel-diffs: comparing expected saveFrame() output against actual saveFrame() output. This would be good for checking reimplementations of existing documented sketches, and good for catching regressions (a sketch of the workflow follows the steps below).

    1. Run a canonical sketch in Processing (Java), e.g. one that exercises rect().
    2. The sketch generates a known-good image with saveFrame() and/or a known-good text log.
    3. Write a sketch stub for the same feature, e.g. to test rect() in Processing.R.
    4. The test passes when running the stub generates an image file that matches (or nearly matches) the known-good image.
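
    As a concrete sketch of steps 1 and 2 -- the canvas size, file name, and drawing here are illustrative assumptions -- the canonical rect() sketch in Processing (Java) could be:

        // Canonical sketch: draw one deterministic frame, then record it.
        void setup() {
          size(100, 100);
          noLoop();  // a single draw() call, so the output is stable
        }

        void draw() {
          background(255);
          fill(0);
          rect(20, 20, 60, 40);            // the feature under test
          saveFrame("expected/rect.png");  // known-good reference image
        }

    The Processing.R stub (step 3) would then draw the same scene and save to a different path, e.g. actual/rect.png.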

    I once wrote a Processing assignment checker that could also be used as a kind of automated regression tester for sketches. Essentially, a test consists of the sketch plus known-good screenshot(s) of it at a given frame. The test runs the sketch, which generates a new screenshot using saveFrame(), and then compares the known-good screenshot with the new one. A change in visual output may indicate a regression in the feature that the sketch demonstrates.
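
    A minimal comparison step -- the file names and the 1% tolerance are arbitrary choices here, not necessarily what the checker used -- can itself be written as a Processing sketch:

        // Pixel-diff: compare the known-good image with fresh output.
        void setup() {
          PImage expected = loadImage("expected/rect.png");
          PImage actual = loadImage("actual/rect.png");
          if (expected.width != actual.width || expected.height != actual.height) {
            println("FAIL: size mismatch");
            exit();
            return;
          }
          expected.loadPixels();
          actual.loadPixels();
          int mismatched = 0;
          for (int i = 0; i < expected.pixels.length; i++) {
            if (expected.pixels[i] != actual.pixels[i]) mismatched++;
          }
          // Tolerate a small fraction of differing pixels (anti-aliasing).
          float ratio = mismatched / (float)expected.pixels.length;
          println(ratio < 0.01 ? "PASS" : "FAIL: " + mismatched + " pixels differ");
          exit();
        }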

    (This approach wouldn't work for everything -- simulating live input to interactive sketches could get really complicated, and reproducibility suffers with clock-based math because frameCount and millis() do not advance in lockstep.)
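
    One partial mitigation for the clock problem -- the seed and frame numbers below are arbitrary -- is to make sketches deterministic before capturing reference frames: seed the generators and drive animation from frameCount instead of millis():

        void setup() {
          size(100, 100);
          randomSeed(42);  // reproducible random()
          noiseSeed(42);   // reproducible noise()
        }

        void draw() {
          // frameCount-based motion: frame N always looks identical,
          // regardless of how fast the sketch actually renders.
          float x = 50 + 40 * sin(frameCount * 0.1);
          background(255);
          ellipse(x, 50, 10, 10);
          if (frameCount == 60) saveFrame("expected/anim-060.png");
        }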

  • Just to close the loop: this was informed by an earlier discussion of testing Processing code:

    ...and it later continued in a related post on pixel-based end-to-end testing for Processing:
