Topic: LiveOutput times 2 (Read 729 times)
flight404
LiveOutput times 2
« on: Aug 25th, 2004, 2:04am »
Is it possible to have 2 tones playing at the same time? To be more specific, how would one go about having a sample of length x play at the same time as a different sample of length y? To be even more specific, I want to create noise for LiveOutput.data[x] and have a separate LiveOutput.data[y]. But x and y are different lengths, so I cannot simply add one to the other. Am I missing something obvious, or is this harder than it sounds like it should be? r
fphunct
Re: LiveOutput times 2
« Reply #1 on: Aug 25th, 2004, 11:06pm »
I don't know if this advice is really relevant since I haven't tried Sonia, but I was able to play two waves at the same time using threads. It's a bit of a CPU hog, but I ran a thread for each .wav I wanted to play. Here's an example of what I used it for: http://www.columbia.edu/~jims/experiments/index.html
flight404
Re: LiveOutput times 2
« Reply #2 on: Aug 31st, 2004, 8:49pm »
Nice. But it is starting to sound like this is more difficult than it should be. Alas, if only Pitaru were here.
gll
Re: LiveOutput times 2
« Reply #3 on: Aug 31st, 2004, 10:17pm »
http://ingallian.design.uqam.ca/goo/P55/jTak/index.html
I was using many Samples running at the same time. But if you want to use synthesis with LiveOutput, I guess you have to do something like StreamA[] + StreamB[] before sending it to LiveOutput.data[]. Code:

for (int i = 0; i < LiveOutput.data.length; i++)
  LiveOutput.data[i] = StreamA[i] + StreamB[i];

or, both streams at 50% volume (so the sum stays in range):

for (int i = 0; i < LiveOutput.data.length; i++)
  LiveOutput.data[i] = StreamA[i]/2 + StreamB[i]/2;

I could be wrong, I didn't try it; tell me if it sounds ok.
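[Editor's note: gll's averaging loop assumes both streams are at least as long as the output buffer. If the two samples have different lengths, each index can simply wrap at its own length. A minimal plain-Java sketch of that idea; MixDemo, mix(), and the tiny test arrays are illustrative names, not Sonia API:]

```java
public class MixDemo {
    // Mix two looping buffers into an output of length n. Each buffer
    // index wraps independently (i % length), so the two loops need
    // not have the same length as each other or as the output.
    static float[] mix(float[] a, float[] b, int n) {
        float[] out = new float[n];
        for (int i = 0; i < n; i++) {
            // halve each stream so the sum stays within [-1, 1]
            out[i] = a[i % a.length] / 2 + b[i % b.length] / 2;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] a = {0.5f, -0.5f};          // a 2-sample loop
        float[] b = {0.25f, 0.0f, -0.25f};  // a 3-sample loop
        float[] out = mix(a, b, 6);         // stands in for LiveOutput.data
        System.out.println(out[0]);         // prints 0.375
    }
}
```

The same loop would run inside whatever callback fills LiveOutput.data, with the real sample arrays in place of a and b.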
« Last Edit: Aug 31st, 2004, 10:31pm by gll »
guillaume LaBelle
pitaru
Re: LiveOutput times 2
« Reply #4 on: Sep 1st, 2004, 3:36pm »
Hey Rob, sorry for the late response, just got back from Tokyo. I'm guessing the main issue is the fact that your sounds have various lengths. The key is using a phasing synthesis technique. The big advantage of this technique is that you will not have to worry about fitting the samples accurately into the LiveOutput buffer. I'll whip up an example and post it when I get home tonight. amit
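[Editor's note: "phasing synthesis" here presumably means each voice keeps its own phase accumulator that wraps at 1.0, so any number of tones can be summed into one buffer without the buffer size mattering. A hedged plain-Java sketch of that technique; Voice and the 440/330 Hz figures are illustrative, not taken from Sonia:]

```java
public class PhaseDemo {
    // One voice: a phase accumulator in [0, 1) advanced by
    // freq/sampleRate per sample. The phase wraps on its own,
    // decoupling the tone's period from the output buffer size.
    static class Voice {
        double phase, inc;
        Voice(double freq, double sampleRate) { inc = freq / sampleRate; }
        float next() {
            float s = (float) Math.sin(2 * Math.PI * phase);
            phase += inc;
            if (phase >= 1.0) phase -= 1.0;  // wrap, keeping the fraction
            return s;
        }
    }

    public static void main(String[] args) {
        Voice a = new Voice(440, 44100);     // two tones of different pitch
        Voice b = new Voice(330, 44100);
        float[] buffer = new float[512];     // stands in for LiveOutput.data
        for (int i = 0; i < buffer.length; i++)
            buffer[i] = a.next() / 2 + b.next() / 2;  // mix at half volume
        System.out.println(buffer[0]);       // prints 0.0 (both start at phase 0)
    }
}
```

Because each Voice carries its phase across buffer fills, successive buffers join without clicks regardless of buffer length.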
flight404
Re: LiveOutput times 2
« Reply #5 on: Sep 1st, 2004, 5:07pm »
Phasing Synthesis Technique?? Sounds delicious. Looking forward to it. r
pitaru
Re: LiveOutput times 2
« Reply #6 on: Sep 3rd, 2004, 6:10pm »
Rob, a simple phase example is here: http://www.pitaru.com/sonia/examples/LiveOutput_phase/index.html. But on second reading of your post, I realized this is not what you need. Are you interested in external sample data rather than a synthesized sine wave? If that's the case, why not simply use Sample objects? Otherwise, just fit the sine wave to the buffer size. Your turn..
flight404
Re: LiveOutput times 2
« Reply #7 on: Sep 7th, 2004, 9:42pm »
Okay, here is what I want. For example, say I have a 2D field of noise: just a random pattern of greyscale dots on the screen. In this field are 2 different lines, and each line has 2 anchor points that you can drag around. Each line picks points equally spaced along its length and checks the average color of the pixels beneath it. It then places these floats into a sample and plays the corresponding noise in a loop.

However, the longer the line, the more points placed into the LiveOutput.data[] array. So, what would I need to do to populate the .data[] array with the information from 2 lines? I can't simply add the information together, because the lines will always be of different lengths, can I? Make any sense? r
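[Editor's note: one way to answer this is to treat each line's float array as a loop with its own persistent playhead, and let each playhead wrap independently while filling the shared output buffer. That way the lines never need equal lengths, and each loop keeps looping correctly across successive buffer fills. A plain-Java sketch; LoopMixer, fill(), and the tiny loops are illustrative stand-ins, not Sonia API:]

```java
public class LoopMixer {
    // One playhead per loop, kept between buffer fills so a short
    // loop simply wraps more often than a long one.
    static int posA = 0, posB = 0;

    static void fill(float[] out, float[] loopA, float[] loopB) {
        for (int i = 0; i < out.length; i++) {
            // mix at half volume each so the sum stays in range
            out[i] = loopA[posA] / 2 + loopB[posB] / 2;
            posA = (posA + 1) % loopA.length;  // each playhead wraps
            posB = (posB + 1) % loopB.length;  // at its own loop length
        }
    }

    public static void main(String[] args) {
        // stand-ins for the grey values sampled under each line
        float[] loopA = {0.2f, 0.4f};        // short line: 2 points
        float[] loopB = {0.1f, 0.3f, 0.5f};  // longer line: 3 points
        float[] buffer = new float[6];       // stands in for LiveOutput.data
        fill(buffer, loopA, loopB);          // first buffer of audio
        fill(buffer, loopA, loopB);          // playheads carry over, no reset
    }
}
```

When a line is dragged, its float array can be rebuilt at the new length; the modulo wrap keeps the playhead valid either way.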