  • 3D like Wolfenstein

    Thanks for the comments.

    Yes, I did it like in the video with a simple raycasting technique. I'm aware of QueasyCam, but for this I wanted to do it "the old way", for fun and as a challenge for myself.
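
    For anyone curious what "the old way" looks like in plain Processing, here is a minimal, hypothetical sketch of the general idea (not the poster's code): for every screen column, march a ray through a 2D grid map until it hits a wall, then draw a vertical slice whose height shrinks with distance. The map, player position and step size here are made up for illustration.

    int[][] worldMap = {
      {1,1,1,1,1,1,1,1},
      {1,0,0,0,0,0,0,1},
      {1,0,0,1,1,0,0,1},
      {1,0,0,0,0,0,0,1},
      {1,1,1,1,1,1,1,1}
    };
    float px = 1.5, py = 1.5;   // player position, in map cells
    float pa = 0.4;             // player heading, in radians
    float fov = PI / 3;         // 60 degree field of view
    
    void setup() {
      size(640, 400);
    }
    
    void draw() {
      background(0);
      for (int col = 0; col < width; col++) {
        float rayAngle = pa - fov / 2 + fov * col / width;
        float dx = cos(rayAngle) * 0.01;   // crude fixed-step march (a DDA would be faster)
        float dy = sin(rayAngle) * 0.01;
        float rx = px, ry = py, dist = 0;
        while (worldMap[int(ry)][int(rx)] == 0 && dist < 20) {
          rx += dx;
          ry += dy;
          dist += 0.01;
        }
        dist *= cos(rayAngle - pa);        // correct the fisheye distortion
        float h = height / (dist + 0.0001);
        stroke(255 / (1 + dist));          // fade walls with distance
        line(col, height/2 - h/2, col, height/2 + h/2);
      }
    }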

  • 3D like Wolfenstein

    Looks good -- thanks for sharing!

    Quick questions (I haven't looked at your download code):

    1. Did you use a raycasting approach, like in your reference video?
    2. Are you familiar with the QueasyCam library and the mazerunner demo, and/or did you try using it?
  • How would you do raycasting in Processing

    For beziers specifically, you can figure out whether a point is on the left or right side of a bezier segment by solving its equation. Here is my code, which you can run in the Book of Shaders editor:
    https://github.com/Prince-Polka/bezier-boundary/blob/master/shadercoderevision

    If you want to use actual raycasting, you loop through one axis and store the points of intersection on the other axis (see the sketch below).
    My code above does not do that, but the bezier equation is the same.
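
    In plain Processing, that crossing-count idea looks roughly like the hypothetical sketch below (the helper names insidePolygon() and sampleBezier() are mine, not from the shader above): flatten the bezier outline into a polygon using bezierPoint(), then cast a horizontal ray from the test point and count how many edges it crosses; an odd count means the point is inside.

    // Even-odd test: cast a ray to the right of (px, py) and count edge crossings.
    boolean insidePolygon(ArrayList<PVector> poly, float px, float py) {
      boolean inside = false;
      int n = poly.size();
      for (int i = 0, j = n - 1; i < n; j = i++) {
        PVector a = poly.get(i);
        PVector b = poly.get(j);
        // Does edge a-b straddle the horizontal line y = py,
        // and does it cross that line to the right of the test point?
        if ((a.y > py) != (b.y > py)) {
          float xCross = a.x + (py - a.y) / (b.y - a.y) * (b.x - a.x);
          if (px < xCross) inside = !inside;
        }
      }
      return inside;
    }
    
    // Flatten one cubic bezier segment into points using Processing's bezierPoint().
    void sampleBezier(ArrayList<PVector> poly, PVector p0, PVector c1, PVector c2, PVector p3, int steps) {
      for (int i = 0; i < steps; i++) {
        float t = i / float(steps);
        poly.add(new PVector(bezierPoint(p0.x, c1.x, c2.x, p3.x, t),
                             bezierPoint(p0.y, c1.y, c2.y, p3.y, t)));
      }
    }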

  • How would you do raycasting in Processing

    @Scott06 -- Have you already searched the forum for recent discussions of raycasting?

    Some of them might be helpful.

    Can you say something more about these inside/outside tests? Are they in 2D or 3D? What is their purpose?

  • How would you do raycasting in Processing

    Hi, I've got a question. I want to draw complex shapes with bezier curves (which I already know how to do), and then write a function that can determine whether a specific point is inside, outside, or touching the complex shape. I believe this is called raycasting. Does anybody know how to do that? Thank you.

  • How to Ray Cast in Box2DProcessing

    The following is a mix of the BumpySurfaceNoise example that comes with the library and the EdgeShapes demo from the jbox2d testbed.

    
    import shiffman.box2d.*;
    import org.jbox2d.collision.shapes.*;
    import org.jbox2d.common.*;
    import org.jbox2d.dynamics.*;
    import org.jbox2d.callbacks.RayCastCallback;
    
    Box2DProcessing box2d;
    Surface surface;
    EdgeShapesCallback callback = new EdgeShapesCallback();
    
    void setup() {
      size(500,400);
      box2d = new Box2DProcessing(this);
      box2d.createWorld();
      box2d.setGravity(0, -20);
      surface = new Surface();
    }
    
    void draw() {
      box2d.step();
      background(255);
      surface.display();
      rayCasting();
    }
    
    
    // Cast a single ray that sweeps back and forth over time and draw either the
    // hit point plus surface normal, or the whole ray if nothing was hit.
    public void rayCasting(){
      float m_angle = frameCount * 0.01f;
      float radius = 25.0f;
      Vec2 point1 = new Vec2(0.0f, 10.0f);   // ray start, in Box2D world coordinates
      Vec2 p1p2 = new Vec2(radius * cos(m_angle), -radius * abs(sin(m_angle)));
      Vec2 point2 = point1.add(p1p2);        // ray end point
    
      callback.m_fixture = null;
      box2d.world.raycast(callback, point1, point2);
    
      strokeWeight(1);
      stroke(0);
      noFill();
      if (callback.m_fixture != null) {
        // A fixture was hit: mark the hit point and draw the surface normal there.
        Vec2 head = callback.m_normal.mul(2).addLocal(callback.m_point);
        point(callback.m_point, 25);
        line(point1, callback.m_point);
        line(callback.m_point, head);
      } else {
        line(point1, point2);   // nothing hit: draw the full ray
      }
    }
    
    void point(Vec2 p1, float size){
      Vec2 px_p1= box2d.coordWorldToPixels(p1);
      ellipse(px_p1.x, px_p1.y, size, size);
    }
    
    void line(Vec2 p1, Vec2 p2){
      Vec2 px_p1 = box2d.coordWorldToPixels(p1);
      Vec2 px_p2 = box2d.coordWorldToPixels(p2);
      line(px_p1.x,px_p1.y, px_p2.x, px_p2.y);
    }
    
    
    // RayCastCallback that records the hit fixture, point and normal. Returning the
    // fraction clips the ray at each hit, so the values kept belong to the closest hit.
    static class EdgeShapesCallback implements RayCastCallback {
      
      Fixture m_fixture = null;
      Vec2 m_point;
      Vec2 m_normal;
    
      public float reportFixture(Fixture fixture, final Vec2 point, final Vec2 normal, float fraction) {
        m_fixture = fixture;
        m_point = point;
        m_normal = normal;
        return fraction;
      }
    }
    
    
    
    class Surface {
    
      ArrayList<Vec2> surface;
    
      Surface() {
        surface = new ArrayList<Vec2>();
        ChainShape chain = new ChainShape();
        float xoff = 0.0;
        // Build a jagged terrain profile from Perlin noise, in pixel coordinates.
        for (float x = width+10; x > -10; x -= 5) {
          float y;
          if (x > width/2) {
            y = 100 + (width - x)*1.1 + map(noise(xoff),0,1,-80,80);
          } else {
            y = 100 + x*1.1 + map(noise(xoff),0,1,-80,80);
          }
          surface.add(new Vec2(x,y));
          xoff += 0.1;
        }
        
        // Convert the pixel points to Box2D world coordinates and attach them to a
        // static body as a single chain fixture.
        Vec2[] vertices = new Vec2[surface.size()];
        for (int i = 0; i < vertices.length; i++) {
          Vec2 edge = box2d.coordPixelsToWorld(surface.get(i));
          vertices[i] = edge;
        }
        
        chain.createChain(vertices,vertices.length);
        BodyDef bd = new BodyDef();
        bd.position.set(0.0f,0.0f);
        Body body = box2d.createBody(bd);
        body.createFixture(chain,1);
      }
    
      void display() {
        strokeWeight(2);
        stroke(0);
        noFill();
        beginShape();
        for (Vec2 v: surface) {
          vertex(v.x,v.y);
        }
        endShape();
      }
    }
    
  • How to Ray Cast in Box2DProcessing

    jbox2d is Java, which you can use directly in Processing. And even if you find a lot of C++ tutorials/source code, there is not much difference from the Java implementation. Actually, even JavaScript resources or any other ports use more or less the same code. The jbox2d testbed examples basically cover pretty much everything you need to get started.

    box2d testbed RayCastTest:

    raycasting tutorial, although not Java, the concepts don't change:

    more search results:

    box2d reference:

  • How to Ray Cast in Box2DProcessing

    I tried using sensors before this, but the beginContact() and endContact() logic wasn't functioning properly. I've spent ages trying to debug it though, but if you think that would be easier than raycasting, please let me know and I'll provide the code.

    Some Context to this issue:

    I am attempting to make a Neuroevolution program to create cars that can avoid other cars and park themselves. I have been able to implement the neural network and the math needed to support it. Each car has a neural network that takes in inputs (from sensors) and produces output (steering and throttle). A car receives this input from its 12 sensors: 8 collision aversion sensors tell the car if it is about to collide with something, and 4 other sensors tell it how far it is from its parking spot. The 8 collision aversion (C.A.) sensors have fd.isSensor = true. I took into account that sensors collide with their own car (since they originate from its center). The issue is that sometimes the collision listener beginContact/endContact does not trigger when a sensor overlaps with a Boundary, and only triggers in response to a Car when it collides with the center of the car. Sometimes, the sensor collisions do not register until the car collides with something. This defeats the purpose of the sensor. If anyone is familiar with collision listening in Box2D, I would really appreciate your help so I can get on to the fun part: the genetic algorithm. Thank you!
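
    In case comparing helps, here is a minimal, hypothetical sketch of the collision-listening hookup the way the Box2DProcessing examples do it (this is not your project code; the user-data filtering near the end is an assumption about how your cars and sensors are tagged):

    import shiffman.box2d.*;
    import org.jbox2d.dynamics.*;
    import org.jbox2d.dynamics.contacts.Contact;
    
    Box2DProcessing box2d;
    
    void setup() {
      size(500, 400);
      box2d = new Box2DProcessing(this);
      box2d.createWorld();
      // Ask the library to forward contact events to beginContact()/endContact() below.
      box2d.listenForCollisions();
    }
    
    void draw() {
      box2d.step();
      background(255);
    }
    
    // Called whenever two fixtures start overlapping; sensor fixtures
    // (fd.isSensor = true) still generate these events.
    void beginContact(Contact cp) {
      Fixture f1 = cp.getFixtureA();
      Fixture f2 = cp.getFixtureB();
      Object o1 = f1.getBody().getUserData();
      Object o2 = f2.getBody().getUserData();
      // Hypothetical filter: skip pairs whose bodies carry the same user data,
      // assuming a car and its own sensors are tagged with the same Car object.
      if (o1 != null && o1 == o2) return;
      if (f1.isSensor() || f2.isSensor()) {
        println("sensor overlap between " + o1 + " and " + o2);
      }
    }
    
    void endContact(Contact cp) {
      // Mirror of beginContact(): fired when the two fixtures stop overlapping.
    }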

  • How to do Raycasting?

    Hi! I'm thinking of doing a science fair project on raycasting. Can anyone help? (Btw, I know what it is already, so save your time.)

  • GSoC 17 application - Processing for Android and Helping with the PDE

    @kfrajer

    Implementing VR interaction with the magnetic sensor available in most phones should be straightforward and is encouraged. The concept is quite simple and can be done in Processing in Android mode.

    Haha, I can't actually find this in the documentation; if you do find it, can you send me a link? Maybe I just couldn't find it (:

    However, since the implementation of this is done with the magnetometer of the phone, it should not be hard to do. But this solution is not used by everyone, as not all phones have magnetometers, so the raytracing solution is sometimes preferred because it doesn't need any additional sensor.

    I don't have experience with ray tracing but it is an interesting concept to explore.

    Oh, raytracing is a standard technique used to interact with a 3D world from a 2D set of coordinates, and you can see this in a lot of cardboard examples. Raytracing is also used in a lot of 3D games: you click on your screen (which has 2D coordinates) to select objects in a 3D world, and the program needs to understand which object you clicked on. In practice, every time you click in your 3D game, an invisible line (a ray, actually) is formed, starting from the point at which you clicked and perpendicular to your screen (so, going along the z-axis if you consider the x and y axes to be those of the screen). In this way, the program can compute which objects the ray intersects and in which order, thus determining which object you clicked on.

    More info on raytracing used for this purpose: click here
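
    To make that concrete, here is a minimal, hypothetical Processing sketch of the intersection test at the core of this kind of picking (the helper raySphere() and the scene values are made up, not from any cardboard SDK; a real picking ray would start at the viewer and follow the click or gaze direction):

    // Returns the distance along the ray to the nearest intersection with a sphere,
    // or -1 if the ray misses it. dir is assumed to be normalized.
    float raySphere(PVector origin, PVector dir, PVector center, float radius) {
      PVector oc = PVector.sub(origin, center);
      float b = oc.dot(dir);
      float c = oc.dot(oc) - radius * radius;
      float disc = b * b - c;
      if (disc < 0) return -1;               // the ray misses the sphere entirely
      float t = -b - sqrt(disc);             // nearest of the two intersection points
      return (t >= 0) ? t : -1;              // -1 if the sphere is behind the origin
    }
    
    void setup() {
      // Two spheres in front of the "camera"; the picking ray goes straight
      // down the z-axis from the clicked screen point, as described above.
      PVector rayOrigin = new PVector(0, 0, 0);
      PVector rayDir = new PVector(0, 0, -1);
      PVector[] centers = { new PVector(0, 0, -10), new PVector(3, 0, -5) };
      float hit = -1;
      int hitIndex = -1;
      for (int i = 0; i < centers.length; i++) {
        float t = raySphere(rayOrigin, rayDir, centers[i], 1.0f);
        if (t >= 0 && (hit < 0 || t < hit)) {   // keep the closest object along the ray
          hit = t;
          hitIndex = i;
        }
      }
      println("picked object: " + hitIndex + " at distance " + hit);
    }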

    In cardboard, the ray is used to determine what you are looking at, since the screen of your phone is also 2D but you are looking into a 3D world. The games often have a function that lets you interact with an object (or click on an object, for example) if you look at it long enough, say 3 seconds. In this way, you can interact without the use of any other sensor or device on your phone. This is what I was referring to as 'gaze controls'. Yes, it does require you to design your experience around looking at something steadily for a few seconds, but it's a tradeoff.

    Thinking in a bigger picture, maybe you could generate a VR demo experience with the option for the user to hook up their own physical trigger. You don't have to implement all these technologies, just create an API where users can easily connect their favorite trigger.

    I thought about this, but I understood that this was just a demo to show what Processing was capable of, and I also wanted it to be simple to use, so as not to require any additional hardware, like an Arduino (and a gyroscope/accelerometer with it). So, they should not be required, but the idea of creating an API to hook up whatever you want is sweet (: I am just a little scared thinking about how many differences there may be in the calibration and output of all the different accelerometers <: D

    In a recent VR experience, for example, the VR viewer came with a handy remote control, like a Wii controller. I believe this was the Sony VR viewer... I have to check. The experience was enhanced by allowing the user to actively interact with the scene.

    Oh yes, many headset manufacturers are doing this: HTC Vive link, newest Oculus Rift link. And it's awesome. But those controllers are incredibly precise and specifically manufactured for that device (and cost a lot of money), so we have some more challenges in doing this (:

    I tried doing the same thing with Wii controllers, as I mentioned in a previous post. The problem is, there is no easy way to communicate with them (they communicate via Bluetooth): when I did this, you had to have root privileges on your phone to access custom Bluetooth communication, and I don't actually want to make something that requires root privileges on people's phones...

    Another concept to explore for VR is for users to be able to generate their own 360 experience. For example, if they go to an art gallery, they could take a panoramic shot of one of the rooms in the exhibition, or take a bunch of overlapping pictures and then have some code stitch the pictures together and make it available to the VR as a source scene for other users to explore or to share, for example in collaborations or even as a way to advertise. This approach, as you see, uses photo data instead of generating a scene. I have to admit I am not sure if it is possible to create and use a 360 image in VR, but the data is there to create the experience, so I don't see why not.

    Unluckily for us, Google already thought about this: click here

    Thank you for all your inputs!