Help me, if you want, on this great project! :)

edited November 2014 in General Discussion

Hello!

I'm working on a "tool-set" for a tetraplegic girl called Zora who can only move it's head (but not smoothly) , cannot speak, have no hand and don't know how to read... I build some tools with kinect to be able to control a cursor (X/Y position) and click with almost no motion from the "controler-head" and it work well enough.

Now that it works, I want to create some fun stuff for her, like YouTube access, a drawing app, sound creation and so on. None of these apps can be complex to use.

They must be super simple to use, with almost nothing to learn to get them working. They have to be that simple because, even though she is 30 years old, she has never used a computer, a UI, video games, anything!

There are people around her, but the only way to communicate right now is to show her a grid of pictures, point at one picture at a time with a finger, ask her if that is what she wants, and wait for an eye blink... So people cannot do a lot of things for her.

That's why I'm working on that.

I don't actually know her; I have never even met her. A friend of mine told me about her, without asking me for anything... I have no job right now but I still receive benefits (I don't know the correct English expression for that), and I thought I could do something without losing my whole life on it, so I've been working on this since last September. I will meet her in 2 weeks (I'm waiting until I'm 100% sure it will work with her before meeting her, because I don't want to create false hope).


My tool doesn't detect the face or anything automatically. It shows the HD video and lets someone define reactive areas that can be used to provide a normalised X/Y position (based on blob tracking in the nose area, for example) or a click action (based on the average color of the frame, on the activity between the current and previous frame, or on some other kind of test).
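
To give an idea of what such a test looks like, here is a minimal sketch of the "activity between current and previous frame" version. It assumes an ordinary webcam through the processing.video library instead of the Kinect, and the reactive area and threshold are simply hard-coded for the example (my real tool lets you define them on the video):

    // Frame-difference "click" test on one reactive area, using a webcam
    // via processing.video as a stand-in for the Kinect HD video.
    import processing.video.*;

    Capture cam;
    PImage prevFrame;
    // the reactive area, in camera pixels (hard-coded for the example)
    int ax = 200, ay = 150, aw = 80, ah = 80;
    float threshold = 15;  // average per-pixel difference that counts as a "click"

    void setup() {
      size(640, 480);
      cam = new Capture(this, 640, 480);
      cam.start();
    }

    void draw() {
      if (cam.available()) {
        cam.read();
        image(cam, 0, 0);
        noFill();
        stroke(255, 0, 0);
        rect(ax, ay, aw, ah);

        if (prevFrame != null) {
          cam.loadPixels();
          prevFrame.loadPixels();
          float diff = 0;
          for (int y = ay; y < ay + ah; y++) {
            for (int x = ax; x < ax + aw; x++) {
              int i = y * cam.width + x;
              diff += abs(brightness(cam.pixels[i]) - brightness(prevFrame.pixels[i]));
            }
          }
          diff /= (aw * ah);  // average difference per pixel in the area
          if (diff > threshold) {
            println("click!  activity = " + diff);
          }
        }
        prevFrame = cam.get();  // keep a copy for the next frame
      }
    }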

It may sound bad that it can't be used directly, without any setup. But I have made maybe 12 different versions of the Kinect head controller, and I really think it's better for everyone to have a custom-made tool, because it allows every possible scenario (like multi-user control), and compared to the old way (picture grid, finger and eye blink for everything) it remains very easy to use.

I think it could be a great experience for her if the people around her were able to play with her in real time using the same controller. Since you can create any kind of "video-area analyser" you want, it's possible.


My friend told me one day that Zora would like to be a singer. I know how weird that sounds, because she can't even talk, but I would like to try to build something. Not something to really sing with; I actually don't know what to do yet... I want to do something, but it's not clear what I can do... I have another friend who is a singer in a great band in France. She is OK with giving me some "voice resources" recorded in a studio if I need them.

Do you have any ideas on how I could build a fun singing-voice generator from the mouse position (head position actually, but I handle it as a mouse)?
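
Even something as basic as mapping the head position to a simple oscillator could be a start. Here is a minimal sketch of that idea, assuming the Minim library (bundled with Processing 2, if I remember correctly); the mouse stands in for the head, the vertical position controls pitch and the horizontal position controls volume. The same structure could later trigger the recorded "voice resources" with minim.loadSample() instead of a sine wave:

    // Minimal "voice toy" sketch: head (mouse) position drives a sine oscillator.
    import ddf.minim.*;
    import ddf.minim.ugens.*;

    Minim minim;
    AudioOutput out;
    Oscil wave;

    void setup() {
      size(640, 360);
      minim = new Minim(this);
      out = minim.getLineOut();
      wave = new Oscil(440, 0.5, Waves.SINE);
      wave.patch(out);
    }

    void draw() {
      background(0);
      // vertical position -> frequency (low at the bottom, high at the top)
      float freq = map(mouseY, height, 0, 220, 880);
      wave.setFrequency(freq);
      // horizontal position -> volume
      wave.setAmplitude(map(mouseX, 0, width, 0, 0.8));
    }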

My Kinect tool can potentially produce any kind of mouse activity, but I think that during the first months Zora will not be able to control herself enough to really handle a "mousePressed" action, so it should be "mouseReleased"-based at the beginning. She has never needed to control herself like that until now, because she had no reason to, and I think it will take some time before she can use my tool in a natural way (maybe I'm wrong, I really don't know; it's the first time I'm working on this kind of project...).
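
To show what I mean by "mouseReleased-based", here is a trivial sketch: nothing happens on the press, the action only fires on the release, and only if the cursor is over the (oversized) target at that moment, so a press made while the head is still moving does nothing:

    // One big button that reacts on mouseReleased() only.
    int bx, by, bs = 200;   // center and size of the button
    int hits = 0;

    void setup() {
      size(600, 400);
      bx = width/2;
      by = height/2;
      rectMode(CENTER);
      textAlign(CENTER, CENTER);
      textSize(24);
    }

    void draw() {
      background(111);
      if (overButton()) fill(180, 220, 180);
      else fill(255);
      rect(bx, by, bs, bs, 12);
      fill(0);
      text("hits: " + hits, bx, by);
    }

    void mouseReleased() {
      // the action lives here, never in mousePressed()
      if (overButton()) hits++;
    }

    boolean overButton() {
      return abs(mouseX - bx) < bs/2 && abs(mouseY - by) < bs/2;
    }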

There are tools that already exist for people in her situation, but they are very expensive. The cheapest costs 1200 euros and is only aimed at people who can read.

I know about the eye-can project, and I think it's a very good initiative, but I also think it's a pain to have to wear something and to have the user's field of view limited.

The Kinect V2 and its 1920x1080 video make it possible to create remote eye tracking; it's not so expensive, and it allows multi-user apps from the same controller, so that's the way I chose.


It's not done yet, but I think I will write the head-activity values into a MySQL database, to make it possible for an external app to get the data easily without knowing anything about the Kinect or Processing. At first I thought about using my tool as a socket server, but even if that's not so hard to use, it may scare off some devs more than a MySQL database would.

What do you think about that?

I'm not an expert at all in this kind of stuff, so any observation/advice is welcome.

But I think the apps should be executables opened from a main UI, for resource-management reasons and because I want devs to be able to share something without having to share their code. (It's better if you can share your code, but I don't want to make it a constraint.)

I'm not sure if I can build the main UI directly with Processing, but I already did something like that with Adobe AIR, so it's not a problem (I'll use Processing if it's possible; I haven't looked into it yet).


Once we have defined a good and easy way to transfer data from my Kinect app to other apps, would you create something for her?

Any kind of interactive toy could be good, but it could be a tool too. I'm thinking about a tool that lets her search for images on Google Images without typing anything anywhere; maybe a Google Earth app...

It could also be an app to communicate with the outside world via phone notifications, to be able to say "I want to go to the toilet", "I need to drink", etc. Because right now she can't do that; she has to wait for someone to take care of her, show her the grid of pictures and ask what she wants... It sounds so painful...

I asked another friend, who is a designer, to vectorize the pictures from her picture grid. I'll have them soon as a resource if you need them.

There are a lot of possible things.

It's easy to help because she has nothing at all for the moment.

So any kind of thing is welcome!

But I insist: it must be super easy to use. If there are buttons, they must be big, because even if my tool is accurate enough, I think Zora's head motion will not be very smooth at the beginning, and it would be a pain for her to click without moving the cursor away (I may be wrong, but I prefer to keep the worst possible scenario in mind).

I'm working with Zora in mind right now, but I'm obviously thinking about the other tetraplegic people in the world too... I'll put my stuff on GitHub and will provide a project web page when I'm sure everything works correctly.

Thanks in advance!

(Obviously, I'm working for free, and if you want to participate, it must be for free too.)

Answers

  • You should join our Programming group portals. We need more people, but there you tell us what to do, and we help you!

  • portalswwd@gmail.com

  • edited November 2014

    hello!

    I think it's a great project!

    Did you read this:

    http://forum.processing.org/one/topic/face-head-tracking-tilt.html

    maybe ask those guys to help.

    I'm sure the things you're doing have been done before.

    But I'm willing to join.

    Let me see if I understood you correctly.

    Unfortunately, I don't have a Kinect. So what I can do is write a program using the mouse, and you modify it so that it works with your setup? Did I understand that correctly so far?

    ok....

    please PM me when you need more.

    First Example

    You said she couldn't read or write, but here a handicapped person could enter words (with the mouse, for now).

    Since you said she currently uses a sheet with symbols and a person has to watch for her eye blink, you could put the symbols in the grid and let her select a symbol (with your Kinect).

    Please let me know if that's the right direction.

    ideas

    Also, she could learn spelling with this grid. It's a small game: you display a cat image and the word "cat", and she then needs to recognize the letters and enter them on the grid (c-a-t). When she spells "cat" correctly, she gets 10 points and the next image. (Later, only the image, without the word "cat".)

    Or you display the letters "cat", the grid shows different images (cat, mouse, dog, house), and she needs to recognize the word and select the matching image.

    Please let me know what you need or want from me.

    Best wishes, Chrisir ;-)

    Rect[][] rekt = new Rect[6][6];
    PFont font1;
    String textResult = "";
    String lastWord="";
    
    Rect buttonSubmit ; 
    Rect buttonClear ; 
    Rect buttonBackspace ; 
    
    // ------------------------------------------------
    // MAIN functions 
    
    void setup() {
      size( 990, 600);
      //
      defineGrid();
    
      buttonSubmit =  new Rect( width-100, height - 36, "Submit");
      buttonClear=  new Rect( width-220, height - 36, "Clear");
      buttonBackspace =  new Rect( width-312, height - 36, "<-");
    
      font1  = createFont("Arial", 32);
      textFont(font1);
      background(111);
      //
    } // func 
    
    void draw() {
      background(111);
    
      // the rect outline 
      rectMode(CENTER);
      stroke(39, 20, 1, 150);
      fill(255);
    
      int dist = 40;
    
      quad (  rekt[0][ 5].x-dist, rekt[0][ 5].y+dist, 
      rekt[0][ 0].x-dist, rekt[0][ 0].y-dist, 
      rekt[5][ 0].x+dist, rekt[5][ 0].y-dist, 
      rekt[5][ 5].x+dist, rekt[5][ 5].y+dist);
    
      for (int i = 0; i < rekt.length; i++) {
        for (int j = 0; j < rekt[i].length; j++) {
          rekt[i][ j].show();
        }
      }
    
      buttonSubmit.show();
      buttonClear.show();
      buttonBackspace.show();
    
      fill(255, 5, 4);
      textAlign(LEFT, TOP);
      text (textResult, 40, height-40);
      text (lastWord, width-400, 40);
      //
    } // func 
    
    // ------------------------------------------------------
    // Inputs 
    
    void mousePressed() {
      // 
      for (int i = 0; i < 6; i++) {
        for (int j = 0; j < 6; j++) {
          if (rekt[i][j].nearMouse()) {
            //  add the letter 
            textResult = textResult +  rekt[i][j].letter + "";
            // quit the function 
            return;
          }
        }
      }
    
      if (buttonSubmit.nearMouse()) {
        // submit 
        submit() ;
      } // if 
    
      if (buttonClear.nearMouse()) {
        // delete
        textResult = "";
      }
      if (buttonBackspace.nearMouse()) {
        // shorten
        shortenWord();
      }
    }
    
    void keyPressed () {
      // keyboard 
      if (keyCode==DELETE) {
        // delete 
        textResult = "";
      }
      else if (key==BACKSPACE) {
        // shorten
        shortenWord();
      }
      else if (key==RETURN || key == ENTER) {
        // submit
        submit() ;
      }
      else if (key=='X') {
        // reset 
        defineGrid() ;
      }
      else {
        //
      }
    }
    
    void shortenWord() {
      // shorten
      if (textResult.length()>0) {
        textResult = textResult.substring(0, textResult.length()-1) ;
      }
    }
    
    void submit() {
      if (textResult.length()>1) {
        println("" + textResult+".");
        lastWord = textResult;
        textResult = "";
      } // if
    }
    
    // -------------------------------------------------------------------
    // Misc
    
    void defineGrid() {
    
      // reset 
    
      int dist = 58; 
      int k=65;
      for (int j = 0; j < rekt.length; j++) {
        for (int i = 0; i < rekt.length; i++) {
          char letterToDefine = getLetterBasedOnNumber(k);
          rekt[i][ j] = new Rect( dist + i*(800/10), dist + j*(800/10), letterToDefine );
          k++;
        }
      }
    } // func 
    
    
    char getLetterBasedOnNumber(int k) {
      if (k<=90) {  
        // 65..90 are the letters A..Z
        return char(k);
      }
      else {
        switch (k) {
        case 91:
          return ' ';   // without this, k = 91 would show up as '['
        case 92:
          return ' '; 
        case 93:
          return '.';
        case 94:
          return '!';
        case 95:
          return '?';
        case 96:
          return '+';
        case 97:
          return '=';
        case 98:
          return '-';
        case 99:
          return ',';
        case 100:
          return ';';
        default:
          return '?';
        }// switch
      }//else
    }//func
    
    // =====================================================
    
    class Rect {
    
      float x;
      float y;
      float rectWidth = 55; 
      float rectHeight = 55; 
      char letter; 
      String s = "";
      color rectColor = color(255);
    
      Rect(float x_, float y_, char letter_) {
        x = x_;
        y = y_;
        letter = letter_;
      } // constr I
    
      Rect(float x_, float y_, String s_) {
        // button for the Mouse
        x = x_;
        y = y_;
        s = s_;
        rectWidth=textWidth(s)*3;
        rectColor = color(255, 0, 0);
      } // constr II 
    
      void show() {
        // the rect outline 
        rectMode(CENTER);
        stroke(39, 20, 1, 150);
        // noFill();
        fill(rectColor);
        rect(x, y, rectWidth, rectHeight, 7);
    
        if (s.equals("")) {
          // letter 
          fill(0);
          textAlign(CENTER, CENTER);
          text (letter, x, y);
        }
        else 
        {
          fill(0);
          textAlign(CENTER, CENTER);
          text (s, x, y);
        }
      } // method
    
      boolean nearMouse () {
        // Is the mouse close ?
        // Can return true or false.
        float distToMouse = dist (x, y, mouseX, mouseY) ; 
        if ( distToMouse < rectWidth/2 ) {
          // println ("hit");
          return true;
        }
        else {
          return false;
        }
      } // method
      //
    } // class 
    
    // ===================================
    
  • edited November 2014

    simple draw program

    usage:

    • mouse clicks draw connected lines

    • to delete a small area, hold q and click mouse on the part you want to delete (A)

    • hold another key to draw without connecting lines (B)

    • when you want to draw two shapes, click mouse for 1st shape, then hold key (B) and click mouse to start 2nd shape, release key and draw 2nd shape with mouse

    issues

    We need another solution instead of A and B (both are very easy to replace; just put a command button on the screen, like Submit in the previous sketch), but she can draw now. There are a lot of drawing sketches around here.

    int x, y, z, w ;
    float distTotal;
    ArrayList<Float> distances = new ArrayList();
    float currArea=0;
    
    void setup()
    {
      size(640, 360);
      background(111);
    }
    
    void draw()
    { 
      //  background(111);
    
      stroke(0);
    
      x= mouseX;
      y= mouseY;
    } // func 
    
    
    void mousePressed() {
    
      fill(255, 0, 0);
      rect(mouseX, mouseY, 5, 5);
      point(pmouseX, pmouseY);
    
      if (z>0 && w>0 && !keyPressed) {
        line(z, w, mouseX, mouseY );
        distances.add ( dist( z, w, mouseX, mouseY ) );
        distTotal = distTotal + dist( z, w, mouseX, mouseY )  ;
        println (distTotal);
        // area 
        println (distances);
        if (distances.size() == 2) 
        {
          currArea=distances.get(0)*distances.get(1); 
          text ( currArea, 14, 14);
        }
        // delete old text
        fill(111);
        noStroke();
        rect (0, 0, 170, 20);
        // bring in new text
        fill(255);
        text (int(distTotal)+"; area: "+ currArea, 12, 12);
      } // if 
    
      // delete 
      if (keyPressed && key=='q') {
        fill(111);
        noStroke();
        rect(mouseX-25, mouseY-25, 50, 50);
      } // if 
      else {  
        z=mouseX;
        w=mouseY;
      } // else
    }
    //
    
  • @ Techwizz

    Thank you for your interest, really. I didn't join your "Programming group portals" because I consider the Processing forum itself to be a kind of "programming group portal", and I prefer to be active in one forum at a time :)

    @ Chrisir

    Thank you too for your interest in this project. I really think it could be a great project for everyone :)

    "Did you read this:"

    I didn't (and I still haven't, actually, but I will :) ).

    "I'm sure, these things you do have been done."

    I don't think so, because the Kinect v2 is actually the first HD camera that works over USB 3.0. It's a very recent device, so I don't think there is a lot of video blob tracking out there that works on a 1920x1080 picture. That resolution makes it possible to be very precise with almost no motion, because even a very small area represents thousands of pixels.

    It's really possible to select a 50x50px icon using the nose. Because the video tracking is done on a small area, the pixel grid acts like a natural quadtree, and the result is a very good user experience (without any kind of value filtering).

    The algorithm is very simple; the big work is actually done by the Kinect (and by me, when I tried a lot of different solutions to see which was best, but every solution was "simple" and had already been done by a lot of people before me).

    "But I'm willing to join."

    Great! Thank you!

    "Unfortunatly, I don't have a kinect. So what I can do is to write a program with the mouse and you modify it so that you can work with it? Did I understood that correctly so far?"

    Actually, I opened this topic a bit too early... Every time I'm working on something big, I can't contain myself and can never wait to talk about it before finishing it entirely... I don't know exactly why I do that...

    Anyway!

    As I said in the first message, the very first thing I need is an efficient way to connect to the main sketch, which contains all the video trackers and generates 0-to-1 values (or pairs of 0-to-1 values for the X/Y axes).

    I would like to be able to do at least these things:

    • communicate the values to a local webpage that uses JavaScript (I looked a bit at WebSockets, but I'm not an expert at this and it sounds more complex than I thought...). JavaScript is a "must-have"; I can't count on other devs using Processing to make/give apps.

    • communicate the values through a true socket server (then every app could connect to it, and not only JavaScript devs could contribute; see the minimal sketch just after this list)

    • communicate the values to another sketch (it already works with processing.net)
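
    To make the socket-server option concrete, here is a minimal sketch of what the main app could broadcast, assuming plain TCP through processing.net; the mouse stands in for the Kinect values and the port number (5204) is an arbitrary choice. Any client, in any language, could then read one line of text per frame and parse the two numbers:

    // Minimal broadcast of normalized 0-to-1 values over TCP.
    import processing.net.*;

    Server server;

    void setup() {
      size(300, 300);
      server = new Server(this, 5204);  // arbitrary port
    }

    void draw() {
      background(0);
      // normalized values, faked with the mouse for the example
      float nx = constrain(mouseX / float(width), 0, 1);
      float ny = constrain(mouseY / float(height), 0, 1);
      server.write(nx + ";" + ny + "\n");  // one easy-to-parse line per frame
    }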

    Then, a second "must have" tool would be a "kinectVideoTracker-Simulator" that dispatch fake values to be able to test the app in "real" condition.

    Another "must have" would be a github page with all sources accessible (I will do that soon)

    "Since you said she uses a sheet with symbols now and a person has to watch her eyes blink, you could enter the symbols in the grid and let her select a symbol (with your kinect)."

    That's exactly what I'm going to do! :)

    "Also, she could learn spelling with this grid"

    This is great!!! This is exactly the kind of stuff I was thinking about (for the next steps). My secret goal is to teach her to read and write small words, and then to let her create her own grid/menu to interact with other stuff.

    Thanks also for the puzzle & draw apps; I'll look at them in detail soon.

    I'm working on a library right now that will help me with different aspects of this project later. So I've paused the project temporarily and I'll get back to it in 2-3 days (when my lib is totally finished).

    I'll send you a PM at that point.

    Thank you again!

  • Great!

    keep me up to date please

    ;-)

  • By the way, I'm a member of the "Programming group portals" too.

    It's not a replacement for the forum; it's just a higher level of commitment to the group.

  • edited November 2014

    Do you have an idea of how I could do this:

    "communicate the values to a local-webpage that use javascript"

    Do you think it needs a lot of work?

  • javascript programs can be written in processing.js

    so when your lib works with it, you're there

    I know what you mean, but I can't answer it

  • Actually, for the sketch-to-JS communication, there is a very basic solution: the sketch just needs to write the values into the same text file every frame, and the JS just needs to reload that file every frame too.

    Very simple, but it should work. I'll do some tests soon :)
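
    Something like this would already do it, as a minimal sketch of the idea (the mouse stands in for the Kinect values, and the file name is just an example; the web page would then poll the same file with XMLHttpRequest). I use Processing's JSONObject here, but a plain line of text would work just as well. Writing to disk 60 times per second is probably overkill, so in practice I would only save every few frames:

    // Write the current normalized values to a small JSON file every frame.
    JSONObject values = new JSONObject();

    void setup() {
      size(300, 300);
    }

    void draw() {
      values.setFloat("x", mouseX / float(width));
      values.setFloat("y", mouseY / float(height));
      saveJSONObject(values, "data/values.json");
    }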
