Quite a broad question here. I've started dabbling in some C# recently after getting comfortable with Processing as a first language.
I'm curious about the equivalent of the draw() method in other languages. Would it usually be a matter of using a 'for' or 'while' loop instead? As you can probably tell, I'm a little confused about where this handy draw() method in Processing originally comes from.
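To make the question concrete, here's the kind of while loop I imagine is hiding behind draw(). This is just a rough sketch in plain Java; the class and method names are mine and aren't taken from Processing or any C# framework.

public class SketchLoop {
  static void setup() { /* one-time initialisation */ }
  static void draw()  { /* redraw a single frame */ }

  public static void main(String[] args) throws InterruptedException {
    setup();
    while (true) {              // the loop Processing runs for you
      draw();
      Thread.sleep(1000 / 60);  // crude frame-rate limit (~60 fps)
    }
  }
}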
I'm looking into ways of using the HTML5/JavaScript YouTube API to control streaming videos from YouTube in a mobile app. As far as I understand, Processing.js could work for creating a web page, but is there a way to use JavaScript inside Processing in Android mode? Or are there better tools for this than Processing?
For anyone with an interest, I just managed to complete my first little app made in Processing and get it onto the Play Store. It's a tool for finding musical chord progressions quickly. I do a lot of songwriting workshops with kids and have always thought something simple like this would come in handy.
I'm not sure what I should be changing it to, if at all. It shows up in both the AndroidManifest.xml and ant.properties files. I thought I was almost there with this whole process, but a few little things like this are giving me eleventh-hour anxiety.
I've got a small audio test working using SoundPool. The code below plays a note when the menu button is hit and is the basis for a larger application I've made.
import android.media.SoundPool;
import android.content.res.AssetManager;
import android.media.AudioManager;

SoundPool soundPool;
AssetManager assetManager;
int sound1;

void setup() {
  // Up to 20 simultaneous streams, all on the STREAM_MUSIC audio stream
  soundPool = new SoundPool(20, AudioManager.STREAM_MUSIC, 0);
  // ... rest of setup() (loading sound1 from assets and the menu-button
  // handler that plays it) trimmed from this excerpt
}
When I run this code on the emulator or a device it works as expected. But if I hit the home button, open a different application that uses audio, then go back to my test app, the sound sometimes plays a few times and then drops out, while the rest of the app continues to work with no errors in the console.
I may be clutching at straws here, but do I need to add something to the AndroidManifest.xml to tell it to prioritise my audio? Otherwise, could there be something wrong with the code? I suppose Processing converts the code to Android-friendly Java and keeps some of the more complex activity/service/audio focus business behind the scenes, which is great when it works, but this has got me stumped. It's frustrating because the problem is quite intermittent and the rest of my little music app is working very well.
I've looked at the Android dev site but haven't found any answers there yet. Any ideas much appreciated.
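In case it helps show what I mean by "prioritise my audio", below is the kind of audio focus request I gather other Android apps make when they resume. It's only a sketch based on the standard AudioManager API; I haven't confirmed how (or whether) Processing exposes this, and I'm assuming the sketch itself is the Activity so getSystemService() is reachable.

import android.media.AudioManager;
import android.content.Context;

AudioManager audioManager;

// Listener so the app can react if another app takes the focus back
AudioManager.OnAudioFocusChangeListener focusListener =
  new AudioManager.OnAudioFocusChangeListener() {
    public void onAudioFocusChange(int change) {
      // e.g. pause or duck SoundPool playback when focus is lost
    }
  };

void requestFocus() {
  audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
  int result = audioManager.requestAudioFocus(
      focusListener,
      AudioManager.STREAM_MUSIC,      // same stream the SoundPool plays on
      AudioManager.AUDIOFOCUS_GAIN);  // ask to keep focus indefinitely
  if (result != AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    println("audio focus request was not granted");
  }
}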
I'm attempting to write a semi-random auto-play music generator for Android based on the code below, which is causing some unpredictable timing. I'm wondering if there's a more efficient/clever way to do this?
void zTrigger() {
  float r = random(40);
  // Pick a chord based on which range r falls into; values outside
  // the listed ranges trigger nothing, so some calls stay silent.
  if (r >= 0 && r <= 4) {
    chord1();
  }
  if (r >= 5 && r <= 7) {
    chord2();
  }
  if (r >= 8 && r <= 10) {
    chord3();
  }
  if (r >= 11 && r <= 14) {
    chord4();
  }
  if (r >= 15 && r <= 18) {
    chord5();
  }
  if (r >= 19 && r <= 22) {
    chord6();
  }
  if (r >= 23 && r <= 24) {
    chord7();
  }
}
The chord functions are being triggered by the following code in draw(), which adds a simple beat. The beat plays reasonably in time on its own, but when I add the zTrigger() function I get some unintended skipping in timing every 5-10 seconds.
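One alternative I've been considering, rather than counting frames in draw(), is to schedule the beat and zTrigger() from millis(), so a slow frame doesn't shift the timing. This is only a rough sketch: beat() is a stand-in for my actual beat code and interval is just an example value.

int interval = 500;   // example gap between steps, in milliseconds
int nextStep = 0;     // time the next step is due

void draw() {
  // Fire any steps that are due rather than relying on the frame rate
  while (millis() >= nextStep) {
    beat();               // stand-in for the existing beat code
    zTrigger();           // same chord-picking function as above
    nextStep += interval; // schedule from the previous step, not from now,
                          // so a late frame doesn't accumulate drift
  }
}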
After some searching, I'm still wondering what the best approach is to make an application in Processing able to save some settings in a separate file (e.g. video file paths, volume control variables, a project name) which can then be loaded back in once the sketch/application is opened. This isn't for a web page but for a stand-alone Windows application. Would XML be the right place to look? On a related topic, is it possible to use a web page to interact with an application that is on the client machine, i.e. not an applet on the web page?
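To show the sort of thing I mean, here's a minimal sketch using Processing's built-in saveStrings() and loadStrings() with a plain key=value text file. The file name, keys and values are just examples, not from the real app.

String videoPath = "C:/videos/clip.mov";
float volume = 0.8;
String projectName = "untitled";

void saveSettings() {
  String[] lines = {
    "videoPath=" + videoPath,
    "volume=" + volume,
    "projectName=" + projectName
  };
  saveStrings("settings.txt", lines);  // saved next to the sketch/application
}

void loadSettings() {
  String[] lines = loadStrings("settings.txt");
  for (String line : lines) {
    String[] pair = split(line, '=');
    if (pair[0].equals("videoPath"))   videoPath = pair[1];
    if (pair[0].equals("volume"))      volume = float(pair[1]);
    if (pair[0].equals("projectName")) projectName = pair[1];
  }
}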
I'm trying to build an app that loops Ogg Theora/Vorbis video files between in and out points, but am having trouble getting consistent timing simply using the GSPipeline.jump() method. There's some explanation of segment seeking with Flush here, which sounds interesting, but I don't see how it can be used with the GSVideo library. Should I be approaching this from another direction?
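For reference, the naive approach is along these lines: check the playhead in draw() and jump back to the in point. This is only a sketch; I'm assuming GSMovie follows the standard Movie-style API (play()/jump()/time()/read()), and the file name and in/out points are placeholders.

import codeanticode.gsvideo.*;

GSMovie movie;
float inPoint  = 2.0;   // example in point, in seconds
float outPoint = 6.0;   // example out point, in seconds

void setup() {
  size(640, 360);
  movie = new GSMovie(this, "clip.ogg");  // placeholder file name
  movie.play();
  movie.jump(inPoint);
}

void movieEvent(GSMovie m) {
  m.read();
}

void draw() {
  image(movie, 0, 0);
  // When the playhead passes the out point, jump back to the in point
  if (movie.time() >= outPoint) {
    movie.jump(inPoint);
  }
}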