Continuous integration for Processing / Questions about the current build process

Dear Processing community,

I had the idea to create a Debian package archive for Processing, so that users of Debian, Ubuntu, and related Linux distributions can install and update Processing directly from that archive, which is, in my opinion, a cleaner solution than the built-in script-based update routine.

First, I need to know how the Linux versions of Processing are normally built, so that I can recreate that process in a CI service such as Travis. Is that documented anywhere? If not, could a Processing developer kindly give us some insight here?
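
For context, my current (possibly wrong) understanding from browsing the processing/processing repository is that the build is driven by the Ant scripts under build/. Here is a minimal sketch of what I imagine a CI job would do, assuming a JDK and Ant are installed; the "build" target name is my assumption, not a documented process:

```python
import subprocess

def run(cmd, cwd=None):
    """Run a command and fail loudly if it exits non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# Fetch the sources.
run(["git", "clone", "--depth", "1",
     "https://github.com/processing/processing.git"])

# The Ant build scripts appear to live in the build/ directory of the
# repository; the target name "build" is a guess on my part.
run(["ant", "build"], cwd="processing/build")
```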

My next question is how and when it is decided to make a new Processing release (for example, when the IDE notifies me that a new version is available), and whether there is somewhere I can pull that information from, so that I can use it to trigger a new build.
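
In case it helps the discussion: one source I could poll is the GitHub releases API for the processing/processing repository. Here is a rough sketch; whether the team actually tags every release on GitHub is something I would need to confirm first, and the comparison logic is only illustrative:

```python
import json
import urllib.request

# Latest tagged release of Processing, according to the GitHub API.
API_URL = "https://api.github.com/repos/processing/processing/releases/latest"

def latest_release_tag():
    """Fetch the tag name of the most recent GitHub release."""
    with urllib.request.urlopen(API_URL) as response:
        release = json.load(response)
    return release["tag_name"]

def check_for_new_release(last_seen_tag):
    """Return the new tag if a release newer than last_seen_tag appeared."""
    tag = latest_release_tag()
    return tag if tag != last_seen_tag else None

if __name__ == "__main__":
    print("Latest release tag:", latest_release_tag())
```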

The long-term goal is a Travis pipeline that triggers a new build every time a new Processing version is released (and maybe nightly builds as well). I hope I can get Travis to reliably build .deb packages and deploy them to the archive, which will run either on a hosted service such as Launchpad or on a dedicated server that I would set up.
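
As for the .deb side, I picture something like the following running at the end of the Travis job: stage the build output into a Debian directory layout, write a control file, and call dpkg-deb. All paths and package metadata below are placeholders, not the real layout:

```python
import os
import subprocess

# Hypothetical packaging step: version, paths, and metadata are
# placeholders for illustration only.
VERSION = "0.0.1"
PKG_DIR = f"processing_{VERSION}"

CONTROL = f"""Package: processing
Version: {VERSION}
Section: devel
Priority: optional
Architecture: amd64
Maintainer: Example Maintainer <maintainer@example.org>
Description: Processing development environment
"""

# Debian package layout: DEBIAN/ holds metadata, the rest mirrors the
# filesystem where files will be installed.
os.makedirs(os.path.join(PKG_DIR, "DEBIAN"), exist_ok=True)
os.makedirs(os.path.join(PKG_DIR, "opt", "processing"), exist_ok=True)

with open(os.path.join(PKG_DIR, "DEBIAN", "control"), "w") as f:
    f.write(CONTROL)

# In a real job, the build output would be copied into opt/processing
# here before packaging.
subprocess.run(["dpkg-deb", "--build", PKG_DIR], check=True)
```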

Any opinions on this idea in general?

Thank you!
