
  • how to place dots and circles acquired from an image instead of random dots and circles?

    It's been years since I touched Processing (probably back in 2012?). Anyway, tutorials are tough because they are either too easy or skip some of the basics I need to revisit. I guess I just need to be patient.

    This simple visual of animated points with connecting lines is certainly overused (blockchain/neural networks :) but it is still very cool. The idea in this thread of syncing it to an image is even cooler. Hoping I can inspire someone to share more code.

    I also thought I'd share, for BEGINNERS with ambition, the sources I have been using all day to try to get up to speed on this animation.

    I'll save you a couple of hours: skipping the references and general examples, here is what I found to be the most valuable set of tutorials.

    Save yourself some time and go straight to Shiffman's YouTube playlists: https://www.youtube.com/user/shiffman/playlists?view=50&sort=dd&shelf_id=2

    my favorite:

    Thanks for all the shared work! @360 Render did you finally solve your challenge?

  • How to understand what is slowing down the code on many iterations?

    I was able to learn (with the strategic use of "println") that the code slows down while this loop is running:

     for (int i = 0; i < 20; i++) {
       aliens.add(new Worker(width/2, height/2, elites.get(j).net));
       j++;
       if (j >= elites.size()) {
         j = 0;
       }
     }
    

    So somehow adding a new "Worker" is slow. Here is what the constructor looks like:

      public Worker(float _x, float _y, NeuralNetwork _net) {
    
    
        directionsPI = new float [] {0,QUARTER_PI, PI/2, PI/2 + QUARTER_PI, PI, PI + QUARTER_PI, PI + PI/2, PI*2 - QUARTER_PI, PI};
        directionalValues = new float [directionsPI.length];
        for ( int i = 0; i < directionalValues.length; i++){
          directionalValues [i] = 0;
        }
    
        _layers = new int[] {16, 30, 16, directionalValues.length};
        net = new NeuralNetwork ( _layers, _net);
        type = "Worker";
        diameter = worker_size;
        pos = new PVector (_x, _y);
        speed = worker_speed;
        cor = workerColor;
        hearing_distance = 100;
        controls = new String[] {"Evolve Fighter"};
        out = minim.getLineOut();
        // create a sine wave Oscil, set to 440 Hz, at 0.5 amplitude
        wave = new Oscil( 200, 0.5f, Waves.SINE );
        fftLinA = new FFT (out.bufferSize(), out.sampleRate());
        fftLinA.linAverages(30);
        this.registerObserver(tutorial);
        collidable = true;
        selectable = true;
        soundInterval = 10;
        fitness = 0;
    
        float ran = random(1);
        if (ran > mutation_chance) {
          net.Mutate();
        }
    
      }
    

    As you can see, it calls the constructor of another class, NeuralNetwork, which creates a copy of an existing net passed in as one of the Worker's arguments. It also runs a "Mutate()" method, which makes random changes to the neural network.
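    Beyond println tracing, a cheap way to narrow a slowdown like this is to time each step of the constructor with System.nanoTime() and print only the durations. A generic sketch (the measured work below is a stand-in, not code from the original project):

```java
// Minimal timing harness: wrap each suspect call in tic()/toc("label")
// and compare the printed millisecond costs across generations.
public class Timing {
    static long t0;

    static void tic() { t0 = System.nanoTime(); }

    // Prints and returns the elapsed time in milliseconds since tic().
    static double toc(String label) {
        double ms = (System.nanoTime() - t0) / 1e6;
        System.out.println(label + ": " + ms + " ms");
        return ms;
    }

    public static void main(String[] args) {
        tic();
        StringBuilder sb = new StringBuilder();       // stand-in for suspect work,
        for (int i = 0; i < 100000; i++) sb.append(i); // e.g. the NeuralNetwork copy
        toc("build string");                           // or minim.getLineOut()
    }
}
```

    Inside the Worker constructor, wrapping each block (the NeuralNetwork copy, minim.getLineOut(), the FFT setup, registerObserver) this way shows which one grows with each generation.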

  • How to understand what is slowing down the code on many iterations?

    I have a very big program with multiple classes which creates a simulation of artificial creatures. It contains an evolution algorithm which, at the end of each generation, saves several copies of the best individuals and then creates an entirely new generation using their neural networks with added mutations. When a new generation is created, the old one is wiped. However, each generation loads slower than the last, practically stopping at gen 4 or 5 (the number of agents doesn't change from generation to generation). At that point it basically freezes after I call the "evolveWorkers()" function. Is there any tool I can use to find the function on which it gets stuck?

    Thanks in advance to anyone who replies.

    And in case you can help me, here are some bits of code which I think might be causing the problem; if you want a particular function, I am happy to share that too:

    void evolveWorkers() {
      int j = 0;
      ArrayList <Alien> elites = new ArrayList <Alien>(); // list to store the best fit
      elites = returnFit();

      for (int i = 0; i < aliens.size(); i++) {
        if (aliens.get(i) instanceof Worker) {
          aliens.get(i).dead = true; // mark the old aliens dead so they get removed
        }
      }

      for (int i = 0; i < 20; i++) {
        aliens.add(new Worker(width/2, height/2, elites.get(j).net));
        j++;
        if (j >= elites.size()) {
          j = 0;
        }
      }
    }
    
    ArrayList <Alien> returnFit() {
        ArrayList <Alien> workers; //array to pick the workers out of all aliens
        ArrayList <Alien> elites;
        elites = new ArrayList<Alien>();
        workers = new ArrayList <Alien>();
        for ( int i = 0; i < aliens.size(); i++) {
          Alien m = aliens.get(i);
          if ( m instanceof Worker) {
            workers.add(m);
          }
          Collections.sort(workers, new Sortbyfitness());
        }
    
        for (int i = workers.size()-1; i >= workers.size()-1 - int(workers.size()* elitesPercent/100); i --) {
          elites.add(workers.get(i));
        }
        return elites;
      }
    
     //remove the dead aliens
      void clearDead() {
        for (int i = 0; i< aliens.size(); i++) {
          Alien a = aliens.get(i);
          if (a.dead) {
            aliens.remove(a);
            a.removeObserver(tutorial); //important to remove the observers in order to preserve framerate
          }
        }
      }
    

    I suspect that the agents are not really wiped at the new gen so the program somehow needs to go through all of them and the data gets accumulated somewhere... I am not sure.
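    Two details in the posted code support that suspicion. Collections.sort() is called inside the for loop in returnFit(), so the workers list is re-sorted once per alien instead of once after the loop. And clearDead() removes elements from aliens while stepping the index forward, which skips the element immediately after each removed one, so adjacent dead agents can survive into the next generation and accumulate. A minimal sketch of index-safe removal (Agent stands in for the post's Alien class):

```java
import java.util.ArrayList;
import java.util.Iterator;

// Agent is a stand-in for the post's Alien class.
class Agent {
    boolean dead;
    Agent(boolean dead) { this.dead = dead; }
}

public class ClearDead {
    // An Iterator never skips the element after a removed one,
    // unlike a forward index loop that calls aliens.remove(a).
    static void clearDead(ArrayList<Agent> agents) {
        Iterator<Agent> it = agents.iterator();
        while (it.hasNext()) {
            if (it.next().dead) it.remove();
        }
    }

    public static void main(String[] args) {
        ArrayList<Agent> agents = new ArrayList<>();
        agents.add(new Agent(true));
        agents.add(new Agent(true));  // adjacent dead agents trip the index-based loop
        agents.add(new Agent(false));
        clearDead(agents);
        System.out.println(agents.size());  // prints 1
    }
}
```

    In the posted version, the removeObserver(tutorial) call would go right before it.remove(), so every removed agent is also unregistered.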

  • Maze solve check

    I've always been tempted to recreate a better version of something like this.

    I want to get computer control over the intake and exhaust valves, fuel, ignition timing, and port geometry and see how well I can get a neural network or genetic algorithm to tune for power/efficiency.

    Or maybe turbine blades (using the fluid portion of the PixelFlow library) like this guy did almost 10 years ago

  • Fill colors randomly changing in processingJS

    My code can be found here: https://www.openprocessing.org/sketch/525303

    It's a simple interactive neural network. When I run it in the Processing IDE, the neurons (circles) are all white (as they should be) when they don't get any input. However, when I tried to upload it, the neurons start white but quickly start to cycle through the colors of the rainbow. Anyone know why and how to fix this?

  • GSoC Proposal - A Platform for Algorithmic Composition on p5.js-sound

    Hey picorana, thanks for the feedback!

    I'm super excited to see how users can make use of neural nets in p5.js as well, will definitely aim to make it a part of the final deliverables.

    Great suggestion in terms of musical analysis, I do agree that signal processing can be hard for a general audience to wrap their heads around, without being given the appropriate background. I didn't have this worked into my initial outline, but given that this is indeed a big part of p5.js-sound, I will try to add this onto my proposal under the umbrella of Peripheral Support objectives.

    Cheers! :)

  • GSoC Proposal - A Platform for Algorithmic Composition on p5.js-sound

    Oh I would love to see more neural networks in processing, especially neural networks that generate music.

    I like how detailed your project description is (: Just a random thought: as a user, I had some trouble in understanding musical analysis, and I would love to have an example that explained to me in the simplest way possible how music is analyzed. It would be very useful for music visualization!

  • Syntax Error: Expected an operand but found class

    I'm trying to follow the neural network video tutorials, but for some reason my Matrix class stopped working. It was working until I started on the feedforward part of the tutorial. I had an error in the browser, so I tried to exit out and reopen it, but now I get "Syntax Error: Expected an operand but found class" when I click run. I switched out my Matrix class code with the one in the GitHub repo but it still won't work. The link to the GitHub code is below. I'm using p5.js.

    https://github.com/CodingTrain/Rainbow-Code/blob/master/Courses/intelligence_learning/session4/toy-neural-network-js/lib/matrix.js

  • Creative Technologist @ IDEO Palo Alto

    Hello processing!

    IDEO Palo Alto is in search of a Software Designer who will leverage an understanding of software to design and deliver innovative solutions that address core user needs. A software designer will:

    • Participate in the full design process, from talking with users to identifying potential opportunities, to delivering a great software product.

    • Solve problems in a broad set of domains, ranging from health to consumer technology to mobility.

    • Explore and develop concepts with emerging technologies, such as new user interaction technologies, computer vision, or machine learning.

    What gets us excited

    We are particularly interested in candidates that have demonstrated skills in one or more of the following focus areas:

    • Physical/Digital. You’re a Creative Technologist who works with software and hardware to create connected installations, experiences, and augmented objects. Your skills may include electrical engineering, micro-controllers, sensors, creative coding, or robotics.

    • Emergent Interactions. You’re a pioneering Software-meets-Interaction Designer who works with emergent edge technology to explore new interactions at the frontier of what’s possible. You’ve dabbled, but have focus in a specialization ranging from computer vision, to chatbots, to augmented/mixed reality, to blockchain.

    • Computational Design. You’re a Generative Technologist. A pattern recognizer. A problem classifier. You work with algorithms or simulations to crack challenges that humans alone can’t compute. Your toolkit of tactics may include procedural design, machine learning, or neural networks.

    An example of what you might work on:

    Apply Here

    For more details and application, go on our website.


  • AI endless runner

    If you look at the book section

    there’s nature of code, a book that’s also online

    It has a chapter on neural networks iirc

  • AI endless runner

    You might also skip the whole neural network thing completely... If your platformer has a fixed pattern of platforms and pits to navigate, then a genetic algorithm might create a pattern of actions for a moving player to take that gets it close to a goal.

    https://forum.processing.org/two/discussion/17728/genetic-algorithm

    This would, of course, only create a pattern that is a good match for some input environment. If you were to, say, suddenly add a pit or remove a platform, it would screw with a genetically generated move pattern's solution a lot more than it would with a neural network's attempts.
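    The idea described above — evolving a fixed pattern of actions toward a goal in a fixed environment — can be sketched in a few lines. Everything below (the action set, the target pattern, the simple keep-if-not-worse selection) is illustrative, not from the linked thread:

```java
import java.util.Random;

// Toy (1+1) evolution: mutate one action in a fixed-length action string
// and keep the child whenever it scores at least as well as the parent.
// Fitness = number of actions matching an arbitrary target pattern,
// standing in for "how close the player gets to the goal".
public class TinyGA {
    static final char[] ACTIONS = {'L', 'R', 'J'};  // left, right, jump

    static int fitness(char[] genes, char[] target) {
        int score = 0;
        for (int i = 0; i < genes.length; i++)
            if (genes[i] == target[i]) score++;
        return score;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        char[] target = "RRJRLJRR".toCharArray();
        char[] best = new char[target.length];
        for (int i = 0; i < best.length; i++)
            best[i] = ACTIONS[rng.nextInt(ACTIONS.length)];  // random start

        for (int gen = 0; gen < 2000 && fitness(best, target) < target.length; gen++) {
            char[] child = best.clone();
            child[rng.nextInt(child.length)] = ACTIONS[rng.nextInt(ACTIONS.length)];
            if (fitness(child, target) >= fitness(best, target)) best = child;
        }
        System.out.println(new String(best) + " fitness=" + fitness(best, target));
    }
}
```

    With 2000 mutations on an 8-action string this almost always converges to the target, which also illustrates the caveat above: the evolved pattern is only good for the one environment it was scored against.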

  • AI endless runner

    Sounds like a good job for a neural network. If you're coding your own platformer too, start with that.

  • Object Recognition from an Image

    Re:

    "simple" objects like cars, trees, flowers, apples, cats, birds, etc.

    Fair warning -- until fairly recently, this has been an extremely hard problem -- you aren't going to get good performance (or maybe even any performance at all) trying to throw together a bird detector from OpenCV or boofCV starter algorithms. That will work better for things like traffic cam car detection or marker detection, or built-ins like face detection. Attempting to reliably tell an apple from a pear or banana -- or a flower from a tree -- is generally a long, humbling process -- especially if you try to go the 'simple' route and aren't using machine learning from day one.

    There have recently been some really exciting rapid advancements in object recognition -- but I'm not sure if there is off-the-shelf "deep learning" neural network style image analysis available yet to plug in to a Processing sketch.

  • Backpropagation problems (Neural network)

    Hello all, I am working on creating a neural network template (users can determine the number of layers and the number of neurons per layer) and I am currently using the MNIST database to test it. There is one thing wrong, however. The last set of connections (the ones that lead to the output) all consistently approach a value of 0.5, making all the calculations made in the previous layers moot. Every part of the network works except for the last set of connections and output neurons. Since the equation for error sensitivity is different for the last set of connections, I assumed that was the problem, but I have checked it and I think it should work correctly. The code is a bit messy, so feel free to ask me to clarify what any section of the code means. I have a feeling the problem is in the Train method (lines 257-310). The program is intensive, so if you run it to test it (which I highly recommend), use a capable computer; MacBook Airs simply do not cut it, unfortunately. Also, use the MNIST database found at https://pjreddie.com/projects/mnist-in-csv/ (thanks, Dinoswarleafs).

    import java.lang.Math.*;
    
    int[] numNs;
    Brain b;
    ArrayList<NeuronDisplay[]> nDisplay;
    ArrayList<Neuron[]> NeuronsToMap = new ArrayList<Neuron[]>();
    
    Table trainReader;
    Table testReader;
    //TableRow csvRow;
    int counter = 0;
    
    
    TableRow csvRow;
    
    
    int pixelSize = 6;
    
    float[][] expectedValues;
    
    float[][] trainingData;
    
    void setup() {
      size(1000, 800);
      nDisplay = new ArrayList<NeuronDisplay[]>();
      int[] numNeurons = {784, 26, 16, 10};
      trainReader = loadTable("mnist_train.csv");
      testReader = loadTable("mnist_test.csv");
    
    
      //int[] numNeurons = new int[int(random(3, 10))];
      //for(int i = 0; i<numNeurons.length; i++){ numNeurons[i] = int(random(2, 10));}
    
      numNs = numNeurons;
      b = new Brain(numNs);
    
      float[][] ex = new float[trainReader.getRowCount()][10];
      for (int i = 0; i< trainReader.getRowCount(); i++) {
        for (int j = 0; j< 10; j++) {
          if (j == trainReader.getInt(i, 0)) ex[i][j] =1;
          else ex[i][j] = 0;
        }
      }
      expectedValues = ex;
    
      float[][] train = new float[trainReader.getRowCount()][trainReader.getColumnCount()-1];
      for (int i = 0; i< train.length; i++) {
        for (int j = 1; j< train[0].length; j++) {
          train[i][j] = trainReader.getInt(i, j);
        }
      }
    
      trainingData = train;
      // b.train(ex);
    }
    
    
    void draw() {
      background(#FF820D);
      strokeWeight(1);
      //pushMatrix();
      displayNet();
      // popMatrix();
      //ud();
      //  if (mousePressed) ud();
      if (counter < 60000) {
        ud(counter);
        counter ++;
      } else println("-----~~~~~DONE~~~~~-----");
    
      if (counter < 60000) {
        csvRow = trainReader.getRow(counter);
        pushMatrix();
        for (int i = 0; i < trainReader.getColumnCount(); i++) {
          fill(csvRow.getInt(i));
          noStroke();
          rectMode(CENTER);
    
          rect(pixelSize * (i % 28) + 600, pixelSize * (i / 28) + 200, pixelSize, pixelSize);
          textAlign(CENTER);
          stroke(255);
          textSize(20);
          fill(255);
          text(trainReader.getInt(counter, 0), 650, 400);
        }
        popMatrix();
      }
    } 
    
    //void mouseClicked(){
    // ud(); 
    //}
    
    void ud(int wh) {
    
    
      float[][] corresponding = expectedValues;
    
      Neuron[] n = new Neuron[numNs[0]];
    
      //  int which = int(random(trainingData.length));
      int which = wh;
    
      for (int i = 0; i< n.length; i++) {
        n[i] = new Neuron(trainingData[which][i]);
      }
    
      b.updateInputs(n);
      b.update();
    
    
      float[] corr = new float[numNs[numNs.length-1]];
    
      corr = corresponding[which];
    
      b.train(corr);
      println("done");
    }
    
    class Brain {
    
      int[] layerSizes;
      ArrayList<Neuron[]> netLayers = new ArrayList<Neuron[]>();
      ArrayList<Connection[][]> connections = new ArrayList<Connection[][]>();
    
      public Brain(int[] layers) {
    
        layerSizes = layers;
    
        for (int i = 0; i< layers.length; i ++) {
          // println(layers[i]);
          netLayers.add(new Neuron[layers[i]]);
        }
    
        for (int i = 0; i< netLayers.size(); i++) {
          // NeuronDisplay[] ns = new NeuronDisplay[netLayers.get(i).length];
    
          for (int j = 0; j< netLayers.get(i).length; j++) {
            //ns[j] = new NeuronDisplay((i * 100 + 100), (j*60 + 100), i, j);
            netLayers.get(i)[j]= new Neuron(0);
          }
          //  nDisplay.add(ns);
        }
    
        NeuronsToMap = netLayers;
    
        Neuron[] inputs = new Neuron[layers[0]];    
        //  println(inputs.length);
    
        for (int i = 0; i< inputs.length; i++) {
          inputs[i] = new Neuron(random(0, 1));
        }
    
        updateInputs(inputs);
    
        for (int i = 1; i< layers.length; i++) {
          connections.add(randomWeights(layers[i-1], layers[i]));
        }
    
        for (int i = 0; i< netLayers.size(); i++) {
          // println();
          // printArray(netLayers.get(i));
          for (int j = 0; j< netLayers.get(i).length; j++) {
            //   println(netLayers.get(i)[j].input);
          }
    
          // println("___");
        }
    
        update();
    
        for (int i = 0; i< connections.size(); i++) {
          Connection[][] holder = connections.get(i);     
          for (int j = 0; j< holder.length; j++) {
            for (int k = 0; k< holder[j].length; k++) {
              //     println("which: " + i + " ["+ j + "][" + k + "] " + holder[j][k].weight);
            }
          }
        } 
    
        for (int i = 0; i< netLayers.size(); i++) {
          //   println();
          for (int j = 0; j< netLayers.get(i).length; j++) {
            //    println("| Layer: " + i + " | Neuron : " + j + " | Value: " + netLayers.get(i)[j].output);
          }
          // printArray(netLayers.get(i));
        }
    
        for (int k = 0; k< netLayers.size(); k++) {
          NeuronDisplay[] ns = new NeuronDisplay[netLayers.get(k).length];
    
          for (int j = 0; j< ns.length; j++) {
            ns[j] = new NeuronDisplay((k * 100 + 100), (j*60 + 100), k, j);
            //  netLayers.get(i)[j]= new Neuron();
          }
    
          nDisplay.add(ns);
        }
      }
    
      public Connection[][] randomWeights(int r, int c) {
    
        Connection[][] ret = new Connection[r][c];
    
        for (int i = 0; i< r; i++) {
          for (int j =0; j< c; j++) {
            ret[i][j] = new Connection(random(-1, 1));
          }
        }
    
        return ret;
      }
    
      void updateWeights(int which, Connection[][] change) {    
        connections.set(which, change);
      }
    
      Neuron[] retInputs() {
        return netLayers.get(0);
      }
    
      Neuron[] retOutputs() {
        return netLayers.get(netLayers.size()-1);
      }
    
      int[] retLayerNums() {
        return layerSizes;
      }
    
      void updateInputs(Neuron[] in) {   
        netLayers.set(0, in);
      }
    
      void update() {
    
        for (int i = 0; i< netLayers.size()-1; i++) {
          //  float[] ret = new float[netLayers.get(i+1).length];
          Neuron[] layer1 = netLayers.get(i);
          Neuron[] nextLayer = new Neuron[netLayers.get(i+1).length];
    
          Connection[][] conns = connections.get(i);
    
          for (int j = 0; j< netLayers.get(i+1).length; j++) {
            nextLayer[j] = new Neuron(0);
            for (int k = 0; k< layer1.length; k++) {
              nextLayer[j].input += layer1[k].output * conns[k][j].weight;
            }
            nextLayer[j].activate();
            //nextLayer[j].carry();
            // else nextLayer[j].activate();
          }
          netLayers.set(i+1, nextLayer);
        }
        NeuronsToMap = netLayers;
      }  
    
      void train(float[] expect) {
    
        //float[] errors = new float[netLayers.get(netLayers.size()-1).length];
        float[] expected = expect;
    
        //float[][] errorSensitivities = ;
        // float totalError = 0;
        float learningRate = .1;
    
        for (int l = netLayers.size()-1; l>= 0; l--) { 
          Neuron[] outputNeurons = netLayers.get(l);      
          for (int j = 0; j< netLayers.get(l).length; j++) {
            if (l >= netLayers.size() -1) {
              netLayers.get(l)[j].dC_dAj = 2*(outputNeurons[j].output - expected[j]);
            } else {
              for (int p = 0; p< netLayers.get(l+1).length; p++) {
                //  println(netLayers.size(), " ", l, " ", netLayers.get(l).length, " ", j, "  ", p);
                netLayers.get(l)[j].dC_dAj += connections.get(l)[j][p].weight * ((1/(1+ exp(-(netLayers.get(l+1)[p].input))))
                  *(1-(1/(1+ exp(-(netLayers.get(l+1)[p].input)))))) * netLayers.get(l+1)[p].dC_dAj;
              }
            }
          }
        }
    
        for (int n = 0; n<netLayers.size()-1; n++) {
          //   println(netLayers.size(), " ", n);
          //   Neuron[] outputNeurons = netLayers.get(n+1);
          Connection[][] layerN = connections.get(n);  
          Neuron[] previous  = netLayers.get(n);
    
          for (int i = 0; i<layerN.length; i++) {
            for (int j = 0; j< layerN[i].length; j++) { 
              //println(netLayers.size(), " ", outC.length, " ", outC[i].length, " ", n, " ", i, " ", j);
              //connections.get(n)[i][j].updateSensitivity(1);
              connections.get(n)[i][j].updateSensitivity(previous[i].output * ((1/(1+ exp(-(netLayers.get(n+1)[j].input)))) *
                (1-(1/(1+ exp(-(netLayers.get(n+1)[j].input)))))) * netLayers.get(n+1)[j].dC_dAj);
            }
          }
        }
    
        for (int i = 0; i< connections.size(); i++) {
          Connection[][] ret = new Connection[connections.get(i).length][connections.get(i)[0].length];
          for (int j = 0; j< connections.get(i).length; j++) {
            for (int k = 0; k< connections.get(i)[j].length; k++) {
              float newWeight = connections.get(i)[j][k].weight - learningRate * connections.get(i)[j][k].errorSensitivity * connections.get(i)[j][k].weight;
              //println( connections.get(i)[j][k].errorSensitivity);
    
              ret[j][k]= new Connection(newWeight);
            }
          } 
          connections.set(i, ret);
        }
      } // end train()
    }
    
    class Connection {
    
      float errorSensitivity;
      float weight;
      float learningRate = .01;
      float fill;
    
      public Connection(float weight) {
        this.weight = weight;
        this.errorSensitivity = 0;
        fill = map(weight, -1, 1, 0, 255);
      }
    
      void updateWeight() {
        // weight = weight - learningRate*(errorSensitivity);
        weight = random(-1, 1);
        fill = map(weight, -1, 1, 0, 255);
      }
    
      void updateSensitivity(float in) {
    
        errorSensitivity = in;
      }
    }
    
    class Neuron {
    
      public float input = -10; // weighted input into neuron
      float output = -10; //activated output
      float bias;
      float fill = 255;
      float dC_dAj = 0;
    
      //public Neuron() {
      //  input = 0;
      //  output = 0;
      //  bias = 0;
      //  fill = int(map(input, 0, 1, 0, 255));
    
      //}
    
      public Neuron(float input) {
        this.input = input;
        this.output = input;
        bias = 0;
        fill = int(map(input, 0, 1, 0, 255));
      }
    
      void updateInput(float in) {
        input = in;
      }
    
      void updateBias(float in) {
        bias = in;
      }
    
      void activate() {        
        // output = (float)Math.tanh(input);    
        // output = 1/(1+e^(-input))
        output = 1/(1+ exp(-(input-bias)));
        //    println(output);
        fill = int(map(output, 0, 1, 0, 255));
      }
    
      void isInput() {
        output = input;
      }
      void carry() {
        output = input;
      }
    }
    
    class NeuronDisplay {
    
      int x, y;
      float size = 30;
      int layer, position;
      int fill;
    
      public NeuronDisplay(int x, int y, int whichLayer, int whichNeuron) {   
        this.x = x;
        this.y = y;
        this.layer = whichLayer;
        this.position = whichNeuron;
      }
    
      void display() {
        // println("netLayers size: " + b.netLayers.size(), " ", layer, " ", b.netLayers.get(layer).length + " " + position);
        if (layer<b.netLayers.size() && position < b.netLayers.get(layer).length)  fill(b.netLayers.get(layer)[position].fill);
        else fill(255, 0, 0);
        ellipse(x, y, size, size);
      }
    }
    
    void displayNet() {
    
      for (int i = 0; i< nDisplay.size()-1; i++) {
        NeuronDisplay[] hold = nDisplay.get(i);
        NeuronDisplay[] next = nDisplay.get(i+1);
    
        for (int j = 0; j< hold.length; j++) {
          for (int k = 0; k<next.length; k++) {
            stroke(b.connections.get(i)[j][k].fill);
            line(hold[j].x, hold[j].y, next[k].x, next[k].y);
          }
        }
      }
    
      for (int i = 0; i< nDisplay.size(); i++) {
        //  NeuronDisplay[] hold = nDisplay.get(i);
        for (int j = 0; j< nDisplay.get(i).length; j++) {
          stroke(255);
          nDisplay.get(i)[j].display();
        }
      }
    }
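    When backprop misbehaves like this, one standard sanity check (independent of the code above; the single-neuron setup here is illustrative) is numerical gradient checking: for each weight, compare the analytic gradient against the centered difference (C(w+h) - C(w-h)) / 2h. If they disagree, the analytic derivation is wrong.

```java
// Numerical gradient check for one sigmoid neuron with squared error.
// Chain rule: dC/dw = 2*(a - y) * a*(1 - a) * x, where a = sigmoid(w*x).
public class GradCheck {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double cost(double w, double x, double y) {
        double a = sigmoid(w * x);
        return (a - y) * (a - y);
    }

    static double analyticGrad(double w, double x, double y) {
        double a = sigmoid(w * x);
        return 2 * (a - y) * a * (1 - a) * x;
    }

    public static void main(String[] args) {
        double w = 0.7, x = 1.3, y = 1.0, h = 1e-5;
        double numeric = (cost(w + h, x, y) - cost(w - h, x, y)) / (2 * h);
        System.out.println(Math.abs(numeric - analyticGrad(w, x, y)) < 1e-7);  // prints true
    }
}
```

    One thing worth checking against this: the weight update in the posted train() is weight - learningRate * errorSensitivity * weight, i.e. the step is multiplied by the weight itself, which is not the plain gradient-descent rule weight - learningRate * errorSensitivity.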
    
  • How can I use the MNIST database to train a Processing based Neural Network?

    Your code is formatted kind of weirdly, so it's hard for others to help. Also, since it is such a massive amount of code, don't expect anyone to search through it all; it's pretty overwhelming. What I did when using the MNIST dataset in Python was feed the pixel data into a 1-dimensional array (the color level of each pixel, one per element) and just feed those into the neural network. I had about 95-98% success just using this method, IIRC.
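    The flattening described above can be sketched like this, assuming the label-first CSV layout of the pjreddie files linked in the thread (one row = "label,p0,p1,...,p783"):

```java
// Sketch: turn one MNIST CSV row into an int label plus a flat array of
// inputs normalized from 0..255 down to 0..1. Assumes the label is the
// first column, as in the mnist-in-csv files referenced in the thread.
public class MnistRow {
    static float[] toInputs(String csvRow) {
        String[] cells = csvRow.split(",");
        float[] inputs = new float[cells.length - 1];            // skip the label column
        for (int i = 1; i < cells.length; i++)
            inputs[i - 1] = Integer.parseInt(cells[i]) / 255.0f; // 0..255 -> 0..1
        return inputs;
    }

    static int toLabel(String csvRow) {
        return Integer.parseInt(csvRow.substring(0, csvRow.indexOf(',')));
    }

    public static void main(String[] args) {
        String row = "5,0,128,255";  // tiny stand-in for a real 785-column row
        System.out.println(toLabel(row));                  // prints 5
        float[] in = toInputs(row);
        System.out.println(in.length + " " + in[2]);       // prints 3 1.0
    }
}
```

    A real row yields 784 inputs; in Processing the same split can come from loadTable() with getInt(row, col), as in the posted sketch.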

  • How can I use the MNIST database to train a Processing based Neural Network?

    Hello, I've got a question for all the machine learning experts here on the forum. I have recently been working on a neural network template that allows users to specify the number of layers and how many neurons should be in each layer. I have recently implemented a function that "trains" the network using backpropagation (hopefully). I am not that experienced with backpropagation, but I am taking a linear algebra class and have taken calculus. I do not know whether the network actually trains, so I want to use a database to train and test it. However, I have not been able to get definitive results from the networks I have been using, which is why I want to use the MNIST database. I also do not know how to get the MNIST database into a form that I can use to train my network. My ultimate goal is to take the 28*28 images and transform them into an array of 784 values, with a corresponding integer array of labels. How can I do this?

      import java.lang.Math.*;
    
      int[] numNs;
      Brain b;
      ArrayList<NeuronDisplay[]> nDisplay;
      ArrayList<Neuron[]> NeuronsToMap = new ArrayList<Neuron[]>();
    
      void setup() {
        size(1000, 800);
        nDisplay = new ArrayList<NeuronDisplay[]>();
        int[] numNeurons = {2, 2, 1};
    
        //int[] numNeurons = new int[int(random(3, 10))];
        //for(int i = 0; i<numNeurons.length; i++){ numNeurons[i] = int(random(2, 10));}
    
        numNs = numNeurons;
        b = new Brain(numNs);
        //float[] ex = {1,1,1, 1};
        // b.train(ex);
      }
    
    
      void draw() {
        background(#FF820D);
        strokeWeight(1);
        //pushMatrix();
        displayNet();
        // popMatrix();
        //ud();
        if (mousePressed) ud();
      } 
    
      void ud() {
    
        float[][] train = 
          {
          {1, 0}, 
          {0, 1}, 
          {0, 0}, 
          {1, 1}
          }
          ;
    
        float[] corresponding ={0, 0, 1, 1};
    
        Neuron[] n = new Neuron[numNs[0]];
    
        int which = int(random(train.length));
    
        for (int i = 0; i< n.length; i++) {
          n[i] = new Neuron(train[which][i]);
        }
    
        b.updateInputs(n);
        b.update();
    
        //for (int i = 0; i< b.connections.size(); i++) {
        //  Connection[][] holder = b.connections.get(i);     
        //  for (int j = 0; j< holder.length; j++) {
        //    for (int k = 0; k< holder[j].length; k++) {
        //      println("which: " + i + " ["+ j + "][" + k + "] " + holder[j][k].weight);
        //    }
        //  }
        //}
    
        //for (int i = 0; i< b.netLayers.size(); i++) {
        //  println();
        //  for (int j = 0; j< b.netLayers.get(i).length; j++) {
        //    println("| Layer: " + i + " | Neuron : " + j + " | Value: " + b.netLayers.get(i)[j].output);
        //  }
        //  // printArray(netLayers.get(i));
        //}
    
        //  float[] ex = {1,1,1, 1};
    
        float[] corr = new float[numNs[numNs.length-1]];
        corr[0] = corresponding[which];
        b.train(corr);
      }
    
      class Brain {
    
        int[] layerSizes;
        ArrayList<Neuron[]> netLayers = new ArrayList<Neuron[]>();
        ArrayList<Connection[][]> connections = new ArrayList<Connection[][]>();
    
        public Brain(int[] layers) {
    
          layerSizes = layers;
    
          for (int i = 0; i< layers.length; i ++) {
            // println(layers[i]);
            netLayers.add(new Neuron[layers[i]]);
          }
    
          for (int i = 0; i< netLayers.size(); i++) {
            // NeuronDisplay[] ns = new NeuronDisplay[netLayers.get(i).length];
    
            for (int j = 0; j< netLayers.get(i).length; j++) {
              //ns[j] = new NeuronDisplay((i * 100 + 100), (j*60 + 100), i, j);
              netLayers.get(i)[j]= new Neuron(0);
            }
              //  nDisplay.add(ns);
          }
    
          NeuronsToMap = netLayers;
    
          Neuron[] inputs = new Neuron[layers[0]];    
          println(inputs.length);
    
          for (int i = 0; i< inputs.length; i++) {
            inputs[i] = new Neuron(random(0, 1));
          }
    
          updateInputs(inputs);
    
          for (int i = 1; i< layers.length; i++) {
            connections.add(randomWeights(layers[i-1], layers[i]));
          }
    
          for (int i = 0; i< netLayers.size(); i++) {
            println();
            // printArray(netLayers.get(i));
            for (int j = 0; j< netLayers.get(i).length; j++) {
              println(netLayers.get(i)[j].input);
            }
    
            println("___");
          }
    
          update();
    
          for (int i = 0; i< connections.size(); i++) {
            Connection[][] holder = connections.get(i);     
            for (int j = 0; j< holder.length; j++) {
              for (int k = 0; k< holder[j].length; k++) {
                println("which: " + i + " ["+ j + "][" + k + "] " + holder[j][k].weight);
              }
            }
          } 
    
          for (int i = 0; i< netLayers.size(); i++) {
            println();
            for (int j = 0; j< netLayers.get(i).length; j++) {
              println("| Layer: " + i + " | Neuron : " + j + " | Value: " + netLayers.get(i)[j].output);
            }
            // printArray(netLayers.get(i));
          }
    
          for (int k = 0; k< netLayers.size(); k++) {
            NeuronDisplay[] ns = new NeuronDisplay[netLayers.get(k).length];
    
            for (int j = 0; j< ns.length; j++) {
              ns[j] = new NeuronDisplay((k * 100 + 100), (j*60 + 100), k, j);
              //  netLayers.get(i)[j]= new Neuron();
            }
    
            nDisplay.add(ns);
          }
        }
    
        public Connection[][] randomWeights(int r, int c) {
    
          Connection[][] ret = new Connection[r][c];
    
          for (int i = 0; i< r; i++) {
            for (int j =0; j< c; j++) {
              ret[i][j] = new Connection(random(-1, 1));
            }
          }
    
          return ret;
        }
    
        void updateWeights(int which, Connection[][] change) {    
          connections.set(which, change);
        }
    
        Neuron[] retInputs() {
          return netLayers.get(0);
        }
    
        Neuron[] retOutputs() {
          return netLayers.get(netLayers.size()-1);
        }
    
        int[] retLayerNums() {
          return layerSizes;
        }
    
        void updateInputs(Neuron[] in) {   
          netLayers.set(0, in);
        }
    
        void update() {
    
          for (int i = 0; i< netLayers.size()-1; i++) {
            //  float[] ret = new float[netLayers.get(i+1).length];
            Neuron[] layer1 = netLayers.get(i);
            Neuron[] nextLayer = new Neuron[netLayers.get(i+1).length];
    
            Connection[][] conns = connections.get(i);
    
            for (int j = 0; j< netLayers.get(i+1).length; j++) {
              nextLayer[j] = new Neuron(0);
              for (int k = 0; k< layer1.length; k++) {
                nextLayer[j].input += layer1[k].output * conns[k][j].weight;
              }
              nextLayer[j].activate();
              //nextLayer[j].carry();
              // else nextLayer[j].activate();
            }
            netLayers.set(i+1, nextLayer);
          }
          NeuronsToMap = netLayers;
        }  
    
        void train(float[] expect) {
    
          //float[] errors = new float[netLayers.get(netLayers.size()-1).length];
          float[] expected = expect;
    
          //float[][] errorSensitivities = ;
          // float totalError = 0;
          float learningRate = .1;
    
          for (int l = netLayers.size()-1; l>= 0; l--) { 
            Neuron[] outputNeurons = netLayers.get(l);      
            for (int j = 0; j< netLayers.get(l).length; j++) {
              if (l >= netLayers.size() -1) {
                netLayers.get(l)[j].dC_dAj = 2*(outputNeurons[j].output - expected[j]);
          } else {
            netLayers.get(l)[j].dC_dAj = 0; // reset before accumulating, so repeated train() calls don't pile up
            for (int p = 0; p< netLayers.get(l+1).length; p++) {
              float a = 1/(1+ exp(-netLayers.get(l+1)[p].input)); // sigmoid of the downstream neuron's net input
              netLayers.get(l)[j].dC_dAj += connections.get(l)[j][p].weight * a*(1-a) * netLayers.get(l+1)[p].dC_dAj; // chain rule: weight * sigmoid'(z) * downstream sensitivity
            }
          }
            }
          }
    
      for (int n = 0; n<netLayers.size()-1; n++) {
        Connection[][] layerN = connections.get(n);  
        Neuron[] previous  = netLayers.get(n);

        for (int i = 0; i<layerN.length; i++) {
          for (int j = 0; j< layerN[i].length; j++) { 
            float a = 1/(1+ exp(-netLayers.get(n+1)[j].input)); // sigmoid of the downstream neuron's net input
            layerN[i][j].updateSensitivity(previous[i].output * a*(1-a) * netLayers.get(n+1)[j].dC_dAj); // dC/dw = a_prev * sigmoid'(z) * downstream sensitivity
          }
        }
      }
    
          for (int i = 0; i< connections.size(); i++) {
            Connection[][] ret = new Connection[connections.get(i).length][connections.get(i)[0].length];
            for (int j = 0; j< connections.get(i).length; j++) {
              for (int k = 0; k< connections.get(i)[j].length; k++) {
            float newWeight = connections.get(i)[j][k].weight - learningRate * connections.get(i)[j][k].errorSensitivity; // gradient descent: w = w - lr * dC/dw (errorSensitivity already holds dC/dw)
                //println( connections.get(i)[j][k].errorSensitivity);
    
                ret[j][k]= new Connection(newWeight);
              }
            } 
            connections.set(i, ret);
          }
    
        } // end train()
      }
    
      class Connection {
    
        float errorSensitivity;
        float weight;
        float learningRate = .01;
        float fill;
    
        public Connection(float weight) {
          this.weight = weight;
          this.errorSensitivity = 0;
          fill = map(weight, -1, 1, 0, 255);
        }
    
        void updateWeight() {
          // weight = weight - learningRate*(errorSensitivity);
          weight = random(-1, 1);
          fill = map(weight, -1, 1, 0, 255);
        }
    
        void updateSensitivity(float in) {
    
          errorSensitivity = in;
        }
      }
    
      class Neuron {
    
        public float input = -10; // weighted input into neuron
        float output = -10; //activated output
        float bias;
        float fill = 255;
        float dC_dAj = 0;
    
        //public Neuron() {
        //  input = 0;
        //  output = 0;
        //  bias = 0;
        //  fill = int(map(input, 0, 1, 0, 255));
    
        //}
    
        public Neuron(float input) {
          this.input = input;
          this.output = input;
          bias = 0;
          fill = int(map(input, 0, 1, 0, 255));
        }
    
        void updateInput(float in) {
          input = in;
        }
    
        void updateBias(float in) {
          bias = in;
        }
    
        void activate() {        
          // output = (float)Math.tanh(input);    
          // output = 1/(1+e^(-input))
          output = 1/(1+ exp(-(input-bias)));
      // println(output); // printing on every activation floods the console and slows large populations badly
          fill = int(map(output, 0, 1, 0, 255));
        }
    
        void isInput() {
          output = input;
        }
        void carry() {
          output = input;
        }
      }
    
      class NeuronDisplay {
    
        int x, y;
        float size = 30;
        int layer, position;
        int fill;
    
        public NeuronDisplay(int x, int y, int whichLayer, int whichNeuron) {   
          this.x = x;
          this.y = y;
          this.layer = whichLayer;
          this.position = whichNeuron;
        }
    
        void display() {
      // println("netLayers size: " + b.netLayers.size(), " ", layer, " ", b.netLayers.get(layer).length + " " + position); // fires every frame for every neuron; enable only when debugging
          if (layer<b.netLayers.size() && position < b.netLayers.get(layer).length)  fill(b.netLayers.get(layer)[position].fill);
          else fill(255, 0, 0);
          ellipse(x, y, size, size);
        }
      }
    
      void displayNet() {
    
        for (int i = 0; i< nDisplay.size()-1; i++) {
          NeuronDisplay[] hold = nDisplay.get(i);
          NeuronDisplay[] next = nDisplay.get(i+1);
    
          for (int j = 0; j< hold.length; j++) {
            for (int k = 0; k<next.length; k++) {
              stroke(b.connections.get(i)[j][k].fill);
              line(hold[j].x, hold[j].y, next[k].x, next[k].y);
            }
          }
        }
    
        for (int i = 0; i< nDisplay.size(); i++) {
          //  NeuronDisplay[] hold = nDisplay.get(i);
          for (int j = 0; j< nDisplay.get(i).length; j++) {
            stroke(255);
            nDisplay.get(i)[j].display();
          }
        }
      }
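    Stripped of the display code, the numeric core of Brain.update() and the weight update in train() reduce to a sigmoid feed-forward step plus plain gradient descent. A minimal plain-Java sketch of those two pieces (class and method names are hypothetical, not part of the code above):

```java
public class BrainMathDemo {
    // Same activation as Neuron.activate(): logistic sigmoid of (input - bias)
    static double activate(double input, double bias) {
        return 1.0 / (1.0 + Math.exp(-(input - bias)));
    }

    // Derivative used throughout train(): sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    static double sigmoidPrime(double z) {
        double s = activate(z, 0.0);
        return s * (1.0 - s);
    }

    // One gradient-descent step, as in train(): w <- w - learningRate * dC/dw
    static double step(double weight, double gradient, double learningRate) {
        return weight - learningRate * gradient;
    }

    public static void main(String[] args) {
        System.out.println(activate(0.0, 0.0));   // 0.5: zero net input sits at the sigmoid midpoint
        System.out.println(sigmoidPrime(0.0));    // 0.25: the derivative peaks at z = 0
        System.out.println(step(0.5, 2.0, 0.1));  // 0.3
    }
}
```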
    
  • A free to use core code for a basic feed-forward Neural Network in processing

    These past few days I have been experimenting with deep learning in Processing, and since I haven't been able to find a proper neural-network layout for it, I have decided to share the one I've created for any other curious novices like me. Feel free to share any questions or critiques you may have.

    class Neuron {
      float[] inputs = {};     //each neuron takes in a number of inputs...
      float output;            //and produces an output.
      float[] weights = {};    //every input is weighed with its own unique weight...
      float bias;          //each neuron must have a bias. The bias has to be trained, but it's easier to initialize it at one, then add a weight for the bias and train it along with all the other weights
      int numberOfInputs;
    
      Neuron(int tempNumberOfInputs) {
        numberOfInputs = tempNumberOfInputs;
        bias = 1;
        for(int i = 0; i < numberOfInputs; i++) {
          inputs = (float[]) append(inputs, 0);      //add inputs equal to the amount of inputs the neuron expects
        }
        for(int i = 0; i < numberOfInputs+1; i++) {                        //add weights equal to the amount of inputs plus one to weigh the bias, too//
          weights = (float[]) append(weights, random(-1,1));
        }
      }
    
     float processAndActivateInputs(float[] getInputs) {    //function to activate inputs...
       float processedInputsSum = 0;
       if(getInputs.length!=inputs.length) {
         println("Error: number of inputs changed during the sketch");
         exit();
       }
       for(int i=0; i<numberOfInputs; i++) {
         inputs[i] = getInputs[i];
       }
       for(int i = 0; i<inputs.length; i++) {
         processedInputsSum += inputs[i]*weights[i];       //works by getting all the inputs, multiplying them by their corresponding weight, and summing all the results...
       }
       processedInputsSum += bias*weights[inputs.length];             //don't forget to weigh and add the bias (bias weight is at the end of the weights array)
       //activation function (tanh)//                    //the sum then goes through an activation function to produce an output. NOTE!!! You may have to change this function depending on what the aim of your network is.
       output = (exp(2*processedInputsSum)-1)/(exp(2*processedInputsSum)+1);       //hyperbolic tangent: tanh(x) = (e^(2x)-1)/(e^(2x)+1), returning values from -1 to 1; other common choices are the logistic sigmoid (0 to 1) or a simple step function
       return output;
     }
    }
    
    class Layer {
      Neuron[] neurons = {};    //each layer has a number of neurons....
      float[] outputs = {};     //...and a number of outputs equal to the neurons...
      Layer(int numberOfInputs, int numberOfNeurons) {
        for(int i = 0; i<numberOfNeurons; i++) {              //for each neuron that the layer expects...
          Neuron tempNeuron = new Neuron(numberOfInputs);     
          neurons = (Neuron[]) append(neurons, tempNeuron);   //add a neuron...
          outputs = (float[]) append(outputs, 0);             //add an output...
        }
      }
    
      float[] produceOutputs(float[] getInputs) {  //function to get layer's outputs
        for(int i=0; i<neurons.length; i++) {                      //for each neuron...
          outputs[i] = neurons[i].processAndActivateInputs(getInputs);        //pass the layer's inputs to the neuron and get its output (every neuron in the layer processes the same inputs)...
        }
        return outputs;            //return the layer's outputs...//
      }
    }
    
    class Network {
      Layer[] layers = {};    //each network has a number of layers...
      float[] inputs = {};    //...expects inputs...
      float[] outputs = {};   //..and produces outputs.
      Network() {
      }
    
      void addLayer(int numberOfInputs, int numberOfNeurons) {                              //function to add layers to the network. This is usually done in the setup function of the program//
        layers = (Layer[]) append(layers, new Layer(numberOfInputs, numberOfNeurons));
      }
    
      float[] produceOutputs(float[] tempInputs) {                 //function to produce outputs from inputs...
        inputs = tempInputs;                                    
        outputs = layers[0].produceOutputs(inputs);                //the first layer gets the outputs from the outside world...
        for(int i = 1; i<layers.length; i++) {                     //the second layer and all other layers take the outputs from the previous layer as inputs...
          outputs = layers[i].produceOutputs(outputs);
        }
        return outputs;                                 //the outputs of the final layer are the outputs of the neural network.
      }
    }
    
    
    Network brain;
    float inputs[];
    float outputs[];
    void setup() {
      size(800,800);
      //this is where you want to create your network's layers
      brain = new Network();
      brain.addLayer(4,10);   //the first layer you add actually determines two layers - the input layer and the first hidden layer. This is because the input layer does not actually process the information, it just feeds the raw inputs to the hidden layer
      brain.addLayer(10,2);   //Now we have a neural network with an input layer expecting four inputs, a hidden layer with ten neurons and an output layer producing two outputs.
      //the inputs must be equal to the inputs of the first layer...
      inputs = new float[4];
      //and the outputs must be equal to the outputs of the last layer...
      outputs = new float[2];
    }
    
    void draw() {
      //Now you can experiment with the network as much as you want. You need to know what outputs (decisions) it has to make, and what affects that decision (the inputs).
      //For example, let's say you have a creature that can move and accelerate. This is what its properties would be:
      creatureXspeed += creatureXacceleration;
      creatureYspeed += creatureYacceleration;
      creatureX += creatureXspeed;
      creatureY += creatureYspeed;
      //Now you want it to decide where to accelerate depending on another static object. As inputs it would need to know where the object is relative to its own position, and the outputs would be its acceleration (if the object were moving we would also have to add its movement as inputs):
      inputs[0] = map(objectX-creatureX, 0, width, -1, 1);   //when using a hyperbolic tangent as an activation function it is wise to map your inputs between -1 and 1. This will train the network faster
      inputs[1] = map(objectY-creatureY, 0, height, -1, 1);
      inputs[2] = map(creatureXspeed, 0, maxCreatureSpeed, -1, 1);
      inputs[3] = map(creatureYspeed, 0, maxCreatureSpeed, -1, 1);
      //now that we've created the inputs, we're going to feed them into the creature's brain and get its decision...
      outputs = brain.produceOutputs(inputs);
      //instead of making four inputs for each type of acceleration (up, down, left, right), we're only going to create two inputs that decide movement across the X axis and movement across the Y axis.
      //remember that the hyperbolic tangent function returns values between -1 and 1, so we need to map the outputs..
      creatureXacceleration = map(outputs[0], -1, 1, -0.1, 0.1);
      creatureYacceleration = map(outputs[1], -1, 1, -0.1, 0.1);
      //Voila! Now our creature can decide where to accelerate, knowing how fast it's currently going, where it's going and where the object is. 
      //The best part is that if you change how the creature interacts with the object (for example, having one case where the creature is rewarded for hitting the object and another where it has to avoid it) you don't need to change anything about the creature, it will learn to perform well on its own given a proper training algorithm.
      //From then on, it's just a matter of training the network accordingly. This is the harder part, and is usually done through backpropagation (an algorithm that is heavy on calculus), but it can also be done more simply with some creativity. 
      //For example, I've found that genetic algorithms are simple and produce good results in such occasions where creatures have to interact with simple environment.
      //Good luck!
    }
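    Two numeric details in the sketch above are worth checking outside Processing: the exponential closed form of tanh used as the activation (with positive exponents it is (e^(2x) - 1)/(e^(2x) + 1); flipping the numerator to (1 - e^(2x)) yields -tanh(x), a sign error worth testing for), and the linear rescale that map() performs on the inputs and outputs. A small plain-Java sanity check (class and method names are hypothetical):

```java
public class SketchMathCheck {
    // tanh via its exponential closed form; should agree with Math.tanh
    static double tanhViaExp(double x) {
        double e2x = Math.exp(2 * x);
        return (e2x - 1) / (e2x + 1);
    }

    // Equivalent of Processing's map(value, inMin, inMax, outMin, outMax)
    static double map(double v, double inMin, double inMax, double outMin, double outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        for (double x : new double[] {-2.0, -0.5, 0.0, 0.5, 2.0}) {
            // The two forms should agree to floating-point precision
            System.out.println(tanhViaExp(x) + " vs " + Math.tanh(x));
        }
        // A tanh output in [-1, 1] rescaled to an acceleration in [-0.1, 0.1]
        System.out.println(map(0.0, -1, 1, -0.1, 0.1)); // the midpoint maps to 0
    }
}
```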
    
  • Is it possible to save an Object for use in other sketches?

    I am currently working on training an AI to play a simple 1v1 game. It has a neural network with 2 hidden layers of 21 neurons each, so the individual weights number nearly five hundred. I am using a genetic algorithm where 100 AIs are paired up and only the victorious ones get to reproduce. My question is: once an adequately intelligent AI is born, I want to be able to easily replicate its weights in other sketches in order to have it play against players. Thing is, since its weights are in the hundreds, it would be insane to, for example, println the weights and copy them one by one. Is there a better way?
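    In Processing itself, saveStrings() / loadStrings() (or the JSON helpers) can persist values between sketches. As a plain-Java illustration of the idea, flattening the weights to a text file so another sketch can rebuild the array, here is a minimal sketch (class, method, and file names are all hypothetical):

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

public class WeightIO {
    // Save a flat weight array to a text file, one value per line
    static void save(double[] weights, Path file) throws IOException {
        List<String> lines = new ArrayList<>();
        for (double w : weights) lines.add(Double.toString(w));
        Files.write(file, lines);
    }

    // Load the weights back; another sketch can rebuild its network from this array
    static double[] load(Path file) throws IOException {
        return Files.readAllLines(file).stream().mapToDouble(Double::parseDouble).toArray();
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("weights", ".txt");
        double[] trained = {0.12, -0.98, 0.5};
        save(trained, f);
        // Double.toString / parseDouble round-trip a double exactly
        System.out.println(Arrays.toString(load(f)));
    }
}
```

A multi-layer network's weights can be flattened layer by layer in a fixed order and reassembled the same way on load.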

  • Why do these nested for loops not work?

    I am currently working on training a neural network with a genetic algorithm. The DNA of the creatures are the weights of their neural networks. Every creature has its own neural network with two hidden layers. The whole code is excruciatingly long, but this is the part that is failing for me. It is a function in the Creature class used to produce a new generation from the fittest creatures once a generation dies out. This is how the function looks:

    Creature crossOver(Creature parent2) {  
        Creature child = new Creature();
        for(int i = 0; i<NeuralNetwork.layers.length; i++) {                               //for every layer of the network...//
          for(int j = 0; j<NeuralNetwork.layers[i].neurons.length; i++) {                  //for every neuron of the layer...//
            for(int k = 0; k<NeuralNetwork.layers[i].neurons[j].weights.length; k++) {     //for every weight of the neuron...//
              float determineInheritance = random(1);                                      //50/50 chance for every single weight to be inherited from parent 1 or from parent 2...//
              if(determineInheritance>=0.5) {
                child.NeuralNetwork.layers[i].neurons[j].weights[k] = NeuralNetwork.layers[i].neurons[j].weights[k];
              } else if(determineInheritance<0.5) {
                child.NeuralNetwork.layers[i].neurons[j].weights[k] = parent2.NeuralNetwork.layers[i].neurons[j].weights[k];
              }
            }
          }
        }
        return child;
      }
    

    Unfortunately, I get an array out of bounds exception for the second for loop (the one with integer j). I don't understand why.
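    For what it's worth, a frequent cause of exactly this exception in triple-nested loops is an inner loop whose update clause increments the outer index (note that the j-loop above reads `i++`). A minimal plain-Java illustration of the correct pattern (names hypothetical):

```java
public class LoopIndexDemo {
    // Correct pattern: each loop advances its own index (j++ in the inner loop, not i++)
    static int countCells(int[][] grid) {
        int visits = 0;
        for (int i = 0; i < grid.length; i++) {
            for (int j = 0; j < grid[i].length; j++) {
                visits++;
            }
        }
        return visits;
    }

    public static void main(String[] args) {
        // Writing "for (int j = 0; j < grid[i].length; i++)" by mistake keeps j at 0
        // forever, pushes i past grid.length, and grid[i] then throws
        // ArrayIndexOutOfBoundsException -- the symptom described above.
        System.out.println(countCells(new int[2][3])); // 6: every cell visited exactly once
    }
}
```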

  • Neuroevolution simulation (Neural network and genetic algorithm)

    Thank you so much for this code, it is extremely enlightening. I will go through it thoroughly these days, I’m sure it will make the process of making my own neural network much easier. Also expect a lot of questions from me in the coming days. To start: I haven’t examined your code, but you have used only a genetic algorithm to train the network right? No backpropagation?