Repository for soundscape investigation

Hi everyone,

The project I'm working on aims to support the investigation of soundscapes. At this stage I'm searching for repositories of geolocated environmental sound recordings; a relevant example is favouritesounds.org.

I have two questions:

  1. Do you know of any other website, collection, or repository similar to favouritesounds.org that deals with environmental sounds?

  2. In the specific case of favouritesounds.org, do you have any suggestions on how it could be used as a repository of data for analysis and visualization in Processing?

cheers

Comments

  • I am not aware of other sites that make soundscapes available. Sounds like you will have to spend some time using search engines.

    Looking at the website, I didn't see an option for accessing audio files from a Processing sketch. You will have to contact favouritesounds.org yourself to ask whether there is a way to retrieve audio for specific locations via HTTP, for example.
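
    If the site did expose direct HTTP links to its audio files, a Processing sketch could fetch them at runtime with saveStream() and hand them to Minim. Here is a minimal sketch under that assumption; the URL and file name are hypothetical placeholders, not a confirmed favouritesounds.org endpoint:

    import ddf.minim.*;

    Minim minim;
    AudioPlayer player;

    void setup() {
      size(200, 200);
      // hypothetical direct link; nothing confirms the site offers one
      String url = "http://example.org/sounds/recording42.mp3";
      // download the file into the sketch's data folder over HTTP
      if (saveStream(dataPath("recording42.mp3"), url)) {
        minim = new Minim(this);
        player = minim.loadFile("recording42.mp3");
        player.play();
      } else {
        println("download failed: " + url);
      }
    }

    void draw() {
      // keep the sketch running while the audio plays
    }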

  • Please correct the discussion title.

    There are further sound clips in the Wikipedia article you linked to above. Yellowstone, for example:

    https://www.nps.gov/yell/learn/photosmultimedia/sounds-artistpaintpots.htm

    You could download the sounds and use them locally for your own visuals; a minimal example follows.
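
    A minimal waveform sketch with Minim, assuming a downloaded recording (hypothetically named "geyser.mp3") has been placed in the sketch's data folder:

    import ddf.minim.*;

    Minim minim;
    AudioPlayer player;

    void setup() {
      size(600, 200);
      minim = new Minim(this);
      player = minim.loadFile("geyser.mp3", 1024);
      player.play();
    }

    void draw() {
      background(0);
      stroke(255);
      // draw the current buffer of the mixed (left + right) signal
      for (int i = 0; i < player.bufferSize() - 1; i++) {
        float x1 = map(i, 0, player.bufferSize(), 0, width);
        float x2 = map(i + 1, 0, player.bufferSize(), 0, width);
        line(x1, height/2 + player.mix.get(i) * 80,
             x2, height/2 + player.mix.get(i + 1) * 80);
      }
    }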

  • In what way is the title not appropriate?

    I came across an interesting research paper, "Sound Mapping on the Web: Current Solutions and Future Directions" by B. Mechtley et al., which collects and analyzes 94 websites dealing with sound maps. I will go through the most relevant of them and see whether one or more offers an API providing access to its database; a sketch of what querying such an API could look like follows.
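
    Nothing in the thread confirms that any of those 94 sites actually offers such an API, but if one did, querying it from a Processing sketch could look roughly like this; the endpoint and field names are invented for illustration:

    void setup() {
      // hypothetical endpoint returning geolocated recordings as JSON
      String api = "http://example.org/api/recordings?bbox=9.1,45.4,9.3,45.5";
      JSONArray recordings = loadJSONArray(api);

      for (int i = 0; i < recordings.size(); i++) {
        JSONObject rec = recordings.getJSONObject(i);
        float lat = rec.getFloat("lat");         // assumed field name
        float lon = rec.getFloat("lon");         // assumed field name
        String audioUrl = rec.getString("url");  // assumed field name
        println(lat + ", " + lon + " -> " + audioUrl);
      }
    }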

  • The N in sound in the title is missing ;-)

  • Suggestion for visual:

    The map is from above

    Now imagine you are standing in the landscape and see the map from the side, like a walking person.

    You would see sound spots on the horizon

    You could look around and see different ones (from another angle).

    Their distances are different, so they appear smaller or bigger

    As a symbol you could use a marker with text above OR use an image showing the FFT of the first 40 seconds or so.

    So each sound looks different.

    This could be a 2D image with a transparent background (calculated asynchronously) that rotates to face the user as they walk around, or a 3D shape that you calculate once and store as a PShape.

    Use QueasyCam for walking; a minimal billboard sketch follows.
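
    A minimal sketch of the walking/billboard idea, assuming the QueasyCam library is installed via the Contribution Manager; the spot positions are made up, and the red squares stand in for the FFT images mentioned above:

    import queasycam.*;

    QueasyCam cam;
    PVector[] spots;

    void setup() {
      size(800, 600, P3D);
      // WASD to walk, mouse to look around
      cam = new QueasyCam(this);
      cam.speed = 2;
      // a few hypothetical sound locations on a flat ground plane
      spots = new PVector[8];
      for (int i = 0; i < spots.length; i++) {
        spots[i] = new PVector(random(-500, 500), 20, random(-500, 500));
      }
    }

    void draw() {
      background(0);
      // ground grid for orientation
      stroke(60);
      for (int g = -500; g <= 500; g += 50) {
        line(g, 50, -500, g, 50, 500);
        line(-500, 50, g, 500, 50, g);
      }
      // each sound spot is a flat marker rotated to face the walker
      rectMode(CENTER);
      noStroke();
      fill(200, 60, 60);
      for (PVector p : spots) {
        pushMatrix();
        translate(p.x, p.y, p.z);
        // QueasyCam exposes the camera position as a public PVector
        float a = atan2(cam.position.x - p.x, cam.position.z - p.z);
        rotateY(a);
        rect(0, 0, 30, 30);
        popMatrix();
      }
    }

    Replacing the rect() with an image() call and a pre-rendered spectrogram texture would give each spot its own look; clicking on a spot could then trigger playback.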

  • I really like your suggestion. If I understand it correctly, you are suggesting a sort of "virtual soundwalk", something similar to Google Street View: instead of representing the elements in a top view (a common map), the user dives into a perspective view, where sounds are visualized according to their specific intensity at the user's point of view.

    I think the main problem is the density of information. In Google Street View the user can interact constantly with the surroundings; in the case of this project the density of environmental sound records is currently too low IMHO, so the user would be forced to walk a lot before reaching the next sound event. Your idea would work best if specific soundwalks were recorded intentionally, so that the user could enjoy that portion of the map, which is in fact the way Google Street View is built. If I find an area with a high enough density of data I will definitely give it a try, but first let's see whether I get access to at least one of the databases out there.

  • I guess you would only hear a sound when you click on a sound image

    As for the density, I think that’s a matter of scale

  • Sculpture:

    The sketch below waits a bit into the song, then records a 3D sound sculpture. After that it stops the song and shuts down Minim.

    Use PeasyCam to rotate and scale the sound sculpture with the mouse.

    Chrisir

    /**
     * This sketch generates a 3D sculpture based on sound. 
    
     * This sketch demonstrates how to use an FFT to analyze
     * the audio being generated by an AudioPlayer.
     * <p>
     * FFT stands for Fast Fourier Transform, which is a 
     * method of analyzing audio that allows you to visualize 
     * the frequency content of a signal. You've seen 
     * visualizations like this before in music players 
     * and car stereos.
     * <p>
     * For more information about Minim and additional features, 
     * visit http://code.compartmental.net/minim/
     */
    
    import ddf.minim.analysis.*;
    import ddf.minim.*;
    
    import peasy.*;
    import peasy.org.apache.commons.math.*;
    import peasy.org.apache.commons.math.geometry.*;
    
    int state=0; // 0 = recording the sculpture, 1 = displaying it
    int i2=0;    // frame counter during recording; sets each slice's height (y)
    
    Minim       minim;
    AudioPlayer jingle;
    FFT         fft;
    PeasyCam    cam; 
    
    ArrayList<MyLine> myLines = new ArrayList<MyLine>(); 
    
    int t1; // timer  
    
    void setup() {
      size(1512, 1000, P3D);
    
      cam = new PeasyCam(this, 0, 0, 0, 500);  
    
      minim = new Minim(this);
    
      // specify that we want the audio buffers of the AudioPlayer
      // to be 1024 samples long because our FFT needs to have 
      // a power-of-two buffer size and this is a good size.
      jingle = minim.loadFile("jingle.mp3", 1024);
    
      if (jingle==null) {
        println ("file not found :"
          +"jingle.mp3"
          +" ++++++++++++++++++++++++++++++++++++++++++++++++");
        exit();
        return;
      }
    
      // play the file 
      jingle.play();
    
      background(0);
    
      t1=millis();
      // time to pass BEFORE starting recording sculpture : 3000 millis 
      while (millis() - t1 < 3000) {
        //
      }
      println ("start");
    
      // create an FFT object that has a time-domain buffer 
      // the same size as jingle's sample buffer
      // note that this needs to be a power of two 
      // and that it means the size of the spectrum will be half as large.
      fft = new FFT( jingle.bufferSize(), jingle.sampleRate() );
    
      t1=millis();
    }
    
    void draw() {
      if (state==0) {
        // recording sculpture 
        background(0);
        text("please wait", 22, 22); 
        // recording sculpture 
        initSculpture();
      } else {
    
        // display sculpture 
        background(0);
    
        lights(); 
    
        //  translate(width/2, 0);
        for (MyLine ml : myLines ) {
          ml.display();
        }//for
      }//else 
      //
    }//draw
    
    void initSculpture() {
      // 
      // perform a forward FFT on the samples in jingle's mix buffer,
      // which contains the mix of both the left and right channels of the file
    
      fft.forward( jingle.mix );
    
      // do frequency band i
      for (int i = 0; i < fft.specSize(); i++) {
    
        // angle in degree: 
        float angle = map (i, 0, fft.specSize(), 
          0, 360); 
        float radius =  fft.getBand(i) * 2; 
    
        PVector from = new PVector ( 0, i2+33, 0 ) ; 
        PVector to = new PVector (  cos(radians(angle)) * radius, i2+33, sin(radians(angle)) * radius  ) ;
    
        MyLine line1 = new MyLine ( from, to, radians(angle) );
    
        // store the line for frequency band i
        myLines.add(line1);
      }//for
    
      i2++;
    
      // time HOW LONG we record  
      if (millis() - t1 > 3000) {
        minim.stop();
        jingle.pause();
        jingle=null; 
        fft=null; 
        minim = null;
        state=1;
        println ("stop preparing ");
      }
    }//func
    
    // =========================================================
    
    class MyLine {
    
      PVector from;
      PVector to;
      float weight; 
    
      //constr
      MyLine(PVector from_, 
        PVector to_, 
        float weight_) {
    
        from=from_.copy(); 
        to=to_.copy();
        weight=weight_;
      }//constr
    
      void display() {
        // red brightness grows with the line's y position,
        // so the sculpture shades along its axis
        stroke(from.y+33, 2, 2);
        strokeWeight(map(weight, 0, TWO_PI, 
          1, 4) );
        line(from.x, from.y, from.z, 
          to.x, to.y, to.z);
      }//func 
      //
    }//class 
    //
    