
  • Get the RGB value of a pixel on a textured sphere?

    I have a P3D sketch with a Sphere primitive at 0,0,0. Surrounding the sphere (though not touching) are a number of cubes that are orientated so their z-axis points to the center of the sphere.

    The sphere is textured with an image file. How would I go about finding the colour on the surface of the sphere at the point where a line from each cube to 0,0,0 intersects the surface?

    I'm looking for general ideas at this stage... I was thinking collision detection, raytracing... but I also wondered (hoping!) if there might be a simpler method?

    cheers, mala
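
    One general idea, as a minimal sketch: since the sphere is centred at 0,0,0, the line from a cube to the centre hits the surface at radius r along the normalised cube position, and that point can be mapped to the texture with a longitude/latitude (equirectangular) lookup. The helper below is hypothetical - it assumes a sphere radius r, the texture PImage tex, and a standard equirectangular mapping, so the exact seam/pole orientation may need flipping to match how the sphere is actually textured.

    color colorUnderCube(PVector c, float r, PImage tex) {
      PVector p = c.copy();
      p.normalize();
      p.mult(r);                                      // where the cube-to-centre line meets the sphere
      float u = 0.5 + atan2(p.z, p.x) / TWO_PI;       // longitude -> u
      float v = 0.5 - asin(p.y / r) / PI;             // latitude  -> v
      int tx = constrain(int(u * tex.width), 0, tex.width - 1);
      int ty = constrain(int(v * tex.height), 0, tex.height - 1);
      return tex.get(tx, ty);                         // colour of the sphere's surface at that point
    }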

  • Nvidia Realtime Raytracing

    Hi, I just saw that Nvidia has introduced REALTIME RAYTRACING in their Volta card - I'd like to know if there's any way to program this in p5.js, so that I can have realtime raytraced scenes in the browser... say an architectural render: I model the building in Max or whatever, and then I let the user wander around the building inside Firefox by pressing the cursor keys... that kind of thing. What must I do?

    Please note, I'm completely new to all this - I've never used p5 prior to this, so....go easy on me :)

    Thanks.

  • Can I create VR 3D games with real time problems

    It would be a library or platform to enable others to make their 3D games in VR.

    So, e.g. a level editor with raytracing and shadows, and a physics engine for a first-person or third-person shooter.

    People could design levels (inside buildings or outside in the streets of a city), or a rollercoaster.

    What kind of games did you have in mind?

  • No Export

    Hello
    already good answers here. Usually you are going for a 32-bit app, unless you want to do something crazy like a video editor, or a modern 3D shooter with megatextures, hybrid raytracing, forward shadow mapping and SSAA - kind of a Doom 4 situation.

    • Yes, the user has to install Java on the machine. You have to write a readme including this information.

    For the end user with zero experience, the steps would be: go to the Oracle website, download and install the Java runtime, wait till the installation has finished, launch your application. ~5 min, done.
    Your app is worth it : )
    I think you have to build up a community first, and downloading the Java files within your app would be kind of painful (with your next release).
    By offering your 32-bit app and the Java download separately, you make sure your "fans" already have the Java environment installed and only download the newest content.

    The big advantage is the smaller download size and slightly faster loading time (on x64). Also, 32-bit runs on x64 architecture, not vice versa, so you can compile 32-bit for Mac+Linux+Windows and not worry about whether the user is on an x64 architecture. If you compute everything on the CPU, the working data is stored in RAM, which means you can address up to 4 GB of RAM with your awesome app - and that's a lot!
    It also leaves "a lot of RAM" for running other apps on the user's desktop, like a browser or a video recorder, maybe to make a playthrough or YouTube review of your app. x64 means the computer is ready to allocate the memory for your app in particular; this might conflict with another expensive application running at the same time. (On notebooks / Android it is also a matter of battery life.)
    A special case, as mentioned with the video editor, is image processing: you have to decompress your .jpg/.png images to a raw data type (1024x1024x4 ~ RGBA) and push the pixels into RAM, which can end up as an expensive computation - but in general it should work.
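
    As a rough worked example of that image cost (using only the numbers from the paragraph above):

    // one decompressed 1024x1024 RGBA image
    int w = 1024, h = 1024, bytesPerPixel = 4;   // R, G, B, A
    int bytesPerImage = w * h * bytesPerPixel;   // 4,194,304 bytes, roughly 4 MB
    // so the ~4 GB a 32-bit process can address corresponds to on the order
    // of a thousand such raw textures, before code and heap overhead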

    Also try previous versions of Processing, e.g. 2.1.1: https://processing.org/download/ - just to avoid confusion with x64 vs 32-bit, and also to get familiar with Amazon in the first place without worrying about Java 1.8 etc.

    And to merge your other topic:

    https://developer.amazon.com/de/docs/mac-pc/faq.html

    identify the folder as the “Binary” and identify the readme file as the “Installer executable file name”.

    Easy peasy.

    Side note: Yes, Amazon is okay, but it is also more focused on sales, I think. If you are just experimenting, there are tons of other platforms that are easier to access and to get through verification on: https://itch.io/

    Edit: So I'm talking about the executable file created for each OS that points to the .jar files; for the execution of the .jar files inside the JVM it doesn't matter whether the bytecode refers to 32-bit or 64-bit. So the workaround, instead of an ".exe", could be to create a bash/shell script that points to the Java installation.

  • How to check if free-shaped image has been clicked?

    Ok, so I'm trying to make a board game, which is made of small parts like the one below (a part of the map). So I need vertices, so I can check if the mouse is inside the image when it's clicked.

    Is there any other way I can check whether a partly transparent image (not 100% transparent, only parts of it) has been clicked, without vertices? I wanted vertices because I could check against them with raytracing.
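
    One vertex-free possibility, as a minimal sketch: test the alpha of the texture pixel under the mouse. It assumes the part is drawn as a PImage piece at position (px, py) - those names are just placeholders for however the sketch stores its parts.

    boolean pieceClicked(PImage piece, float px, float py) {
      int ix = int(mouseX - px);                     // mouse position in the image's own coordinates
      int iy = int(mouseY - py);
      if (ix < 0 || iy < 0 || ix >= piece.width || iy >= piece.height) return false;
      return alpha(piece.get(ix, iy)) > 0;           // hit only if the pixel is not fully transparent
    }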

  • In P3D, the rendered and rotated ellipse at mouse position is hidden by half of another shape

    Yes, I was trying to do the projection of the mouse onto the rotated plane. It's less trivial, as you need to take the camera position into account.

    So, I cannot continue on this atm... I suggest exploring

    https://forum.processing.org/two/search?Search=raytracing

    Specifically:

    https://forum.processing.org/two/discussion/21644/picking-in-3d-through-ray-tracing-method/p1

    If you don't make it work, please reply below and I can give it a try at a later time or other forum members will be able to assist more.

    Kf

  • Picking in 3D through ray tracing method

    Hello,

    I have been trying to write code to pick objects or select points in 3D using a ray tracing method, based on the guidelines I read from this source: http://schabby.de/picking-opengl-ray-tracing/

    However, I have no idea how to check the intersection between the ray and the objects in the scene, and I am also not sure that the code is 100% correct. I am posting the code below. If anyone could help, it would be great!

    import peasy.*;
    import toxi.geom.*;
    
    float x; 
    float y;
    
    // -- camera parameters ---------
    PeasyCam cam;
    Vec3D camTarget;
    Vec3D camPosition;
    Vec3D view = new Vec3D();
    Vec3D camHorizontal = new Vec3D();
    Vec3D camVertical = new Vec3D();
    Vec3D pos;
    Vec3D dir;
    float fov = 1; //PI/5
    float aspect;                 // computed in createCamera(), after size() sets width/height
    float nearClip = 1;
    float farClip = 100000;
    
    Cube cube0;
    
    // ------------------------------
    
    void setup() {
      size(800, 800, OPENGL);
      createCamera();
      Vec3D start = new Vec3D(0, 0, 0);
      cube0 = new Cube(start, 30);
    }
    
    // ------------------------------
    
    void draw() {
      smooth();
      background(255);
      cube0.run();
      raytracing();
      testIntersection();
    }
    
    // ------------------------------
    
    void createCamera() {
      cam = new PeasyCam(this, 300);
      aspect = float(width)/float(height);   // width/height are only valid after size()
      perspective(fov, aspect, nearClip, farClip);
    }
    
    // ------------------------------
    
    void raytracing() {
      x = mouseX;
      y = mouseY;
    
      // --- get camera target position
      float a[] = cam.getLookAt();
      float a1 = a[0];
      float a2 = a[1];
      float a3 = a[2];
      camTarget = new Vec3D(a1, a2, a3);
    
      // --- get camera position
      float b[] = cam.getPosition();
      float b1 = b[0];
      float b2 = b[1];
      float b3 = b[2];
      camPosition = new Vec3D(b1, b2, b3);
    
      // --- get view
      view = camTarget.sub(camPosition);
      view.normalize();
    
      // -- get cameraUp vector
      Vec3D Zaxis = new Vec3D(0, 0, 1);
      Vec3D rotAxis = view.cross(Zaxis);
      float theta = -PI/2;
      Vec3D camUp = view.getRotatedAroundAxis(rotAxis, theta);
    
      // -- calculate screenHorizontally and screenVertically
      camHorizontal = view.cross(camUp);
      camHorizontal.normalize();
      camVertical = camHorizontal.cross(view);
      camVertical.normalize();
    
      float vLength = tan(fov/2)*nearClip;
      float hLength = vLength*aspect;
      camVertical.scaleSelf(vLength);
      camHorizontal.scaleSelf(hLength);
    
      // translate mouse coordinates so that the origin lies in the center    
      x -= width/2;
      y -= height/2;  
      x /= (width/2);
      y /= (height/2);
    
      // compute intersection of picking ray with viewport plane
      float posX = camPosition.x + view.x*nearClip + camHorizontal.x*x + camVertical.x*y;
      float posY = camPosition.y + view.y*nearClip + camHorizontal.y*x + camVertical.y*y;
      float posZ = camPosition.z + view.z*nearClip + camHorizontal.z*x + camVertical.z*y;
      pos = new Vec3D(posX, posY, posZ);
      dir = pos.sub(camPosition);
    }
    
    // ------------------------------
    
    void testIntersection() {
      // TODO: ray/object intersection check - see the sketch after this post for one possible test
    }
    

    There is also a simple cube class just for displaying and getting the location vector of the cube. Thank you very much for your help! :)
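
    For the missing intersection test, here is a minimal sketch of a ray vs. axis-aligned box check (the slab method), not necessarily what the linked article does. It assumes the cube stays axis-aligned, with centre c and edge length s (matching the Cube(start, 30) constructor above); pos and dir from raytracing() would be the ray origin and direction, and testIntersection() could call this with the cube's centre, however the Cube class exposes it.

    boolean rayHitsBox(Vec3D origin, Vec3D rayDir, Vec3D c, float s) {
      float tmin = -Float.MAX_VALUE;
      float tmax =  Float.MAX_VALUE;
      float[] o  = { origin.x, origin.y, origin.z };
      float[] d  = { rayDir.x, rayDir.y, rayDir.z };
      float[] lo = { c.x - s/2, c.y - s/2, c.z - s/2 };
      float[] hi = { c.x + s/2, c.y + s/2, c.z + s/2 };
      for (int i = 0; i < 3; i++) {
        if (abs(d[i]) < 1e-6) {
          // ray parallel to this pair of faces: it must already lie between them
          if (o[i] < lo[i] || o[i] > hi[i]) return false;
        } else {
          float t1 = (lo[i] - o[i]) / d[i];
          float t2 = (hi[i] - o[i]) / d[i];
          tmin = max(tmin, min(t1, t2));
          tmax = min(tmax, max(t1, t2));
        }
      }
      return tmax >= tmin && tmax >= 0;   // slab intervals overlap, in front of the origin
    }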

  • GSoC 17 application - Processing for Android and Helping with the PDE

    @kfrajer

    Implementing VR interaction with the magnetic sensor available in most phones should be straightforward and is encouraged. The concept is quite simple and can be done in Processing in Android mode.

    Haha, I can't actually find this in the documentation; if you do find it, can you send me a link? Maybe I just couldn't find it (:

    However, since the implementation of this is done with the phone's magnetometer, it should not be hard to do. But this solution is not used by everyone, as not all phones have magnetometers, so the raytracing solution is sometimes preferred because it doesn't need any additional sensor.

    I don't have experience with ray tracing but it is an interesting concept to explore.

    Oh, raytracing is a standard technique for interacting with a 3D world from a 2D set of coordinates, and you can see this in a lot of Cardboard examples. Raytracing is also in a lot of 3D games: you click on your screen (which has 2D coordinates) on objects in a 3D world, and the program needs to understand which object you clicked on. In practice, every time you click in your 3D game, an invisible line (a ray, actually) is formed, starting from the point where you clicked and perpendicular to your screen (so, going along the z-axis if you consider the x and y axes as the ones of the screen). In this way, the program can compute which objects the ray intersects and in which order, thus determining which object you clicked on.
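
    A tiny sketch of that last step, with hitDistances standing in for whatever per-object ray tests produce: each object reports the distance along the ray at which it is hit (negative for a miss), and the clicked object is the one with the smallest positive distance, i.e. the first thing the ray reaches.

    // returns the index of the first object the ray reaches, or -1 for "nothing clicked"
    int pickedIndex(float[] hitDistances) {
      int best = -1;
      float bestT = Float.MAX_VALUE;
      for (int i = 0; i < hitDistances.length; i++) {
        float t = hitDistances[i];
        if (t > 0 && t < bestT) {   // in front of the viewer and closer than the best so far
          bestT = t;
          best = i;
        }
      }
      return best;
    }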

    More info on raytracing used for this purpose: click here

    In Cardboard, the ray is used to determine what you are looking at, since the screen of your phone is also 2D but you are looking into a 3D world. Games often have a function that lets you interact with an object (or click on it, for example) if you look at it long enough, say 3 seconds. In this way, you can interact without using any other sensor or device on your phone. This is what I was referring to as 'gaze controls'. Yes, it does require you to design your experience around looking at something still for a few seconds, but it's a tradeoff.
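
    A minimal sketch of such a dwell timer, with gazedObjectId() and onGazeSelect() as hypothetical placeholders for the actual ray pick and the resulting action:

    int lastGazed = -1;                     // object currently under the gaze ray
    int gazeStart = 0;                      // millis() when the gaze settled on it
    final int DWELL_MS = 3000;              // "look at it long enough, say 3 seconds"

    void updateGaze() {                     // call once per frame
      int current = gazedObjectId();
      if (current != lastGazed) {           // gaze moved to a different object (or to nothing)
        lastGazed = current;
        gazeStart = millis();
      } else if (current != -1 && millis() - gazeStart > DWELL_MS) {
        onGazeSelect(current);              // looked at the same object for long enough
        gazeStart = millis();               // reset so it doesn't retrigger every frame
      }
    }

    int gazedObjectId() { return -1; }      // placeholder: a real version would ray-pick here
    void onGazeSelect(int id) { println("selected object " + id); }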

    Thinking about the bigger picture, maybe you could generate a VR demo experience with the option for the user to hook up their own physical trigger. You don't have to implement all these technologies, just create an API where users can easily connect their favorite trigger.

    I thought about this, but I understood that this was just a demo to show what Processing is capable of, and I also wanted it to be simple to use, so as not to require any additional hardware, like an Arduino (and a gyroscope/accelerometer with it). So they should not be required, but the idea of creating an API to hook up whatever you want is sweet (: I am just a little scared thinking about how many differences there may be in the calibration and output of all the different accelerometers <: D

    In a recent VR experience, for example, the VR viewer came with a handy remote control, like a Wii controller. I believe this was the Sony VR viewer... I have to check. The experience was enhanced by allowing the user to actively interact with the scene.

    Oh yes, many headset manufacturers are doing this: HTC Vive link, Newest Oculus Rift link. And it's awesome. But they are incredibly precise and specifically manufactured for that device (and cost a lot of money), so we have some more challenges in doing this (:

    I tried doing the same thing with Wii controllers, as I mentioned in a previous post. The problem is that there isn't an easy way to communicate with them (they communicate via Bluetooth): when I did this, you had to have root privileges on your phone to access custom Bluetooth communication, and I don't want to make something that requires root privileges on people's phones...

    Another concept to explore for VR is for users to be able to generate their own 360 experience. For example, they go to an art gallery and take a panoramic shot of one of the rooms in the exhibition, or take a bunch of overlapping pictures and then have some code stitch the pictures together and make it available to VR as a source scene for other users to explore or to share, for example in collaborations or even as a way to advertise. This approach, as you see, uses photo data instead of generating a scene. I have to admit I am not sure whether it is possible to create and use a 360 image in VR, but the data is there to create the experience, so I don't see why not.

    Unluckily for us, Google already thought about this: click here

    Thank you for all your inputs!

  • GSoC 17 application - Processing for Android and Helping with the PDE

    Hello again (:

    I took some time to think about what I could do, and I came up with some ideas.

    I would like to ask if you'd prefer a project that is simple enough to use it as an example for documentation or if you'd prefer a more complex/complete project to show what the platform is capable of.

    For the first scenario I was thinking about a series of live wallpapers aimed at showing how different sensors could be used. Consider, for example, showing an equalizer on the wallpaper based on ambient sounds from the phone's microphone, or a landscape that changes its appearance based on the temperature and/or light.

    In the second case I have some ideas I would like to share with you.

    • I'd love the idea of making a game/tool/visual experience with VR! However, I don't really see a way of interacting with it. Neither the magnetic button that many viewers have, nor gaze controls (meaning that, if you have a button in your VR world, you can push it by looking at it for a few seconds) are currently supported. I have some experience with raytracing; maybe I could make an example showing how gaze controls can be implemented.
    • Some time ago I made a Cardboard game that I exhibited during an event. The player was a bird flying through an archipelago of islands, mimicking the movement of the wings with their own arms. The movements were sent to the phone with two Nintendo Wii controllers that the player held in their hands. Each game lasted about 5 minutes, in which the players had to fly around collecting small seeds hidden in the landscape. By bringing them to a central crater they could make a (procedurally generated) 'tree of life' grow taller. At the end of the event, we showed everyone the tree grown by the efforts of all the players, thus creating a little collaborative VR experience. This game was made with the only purpose of showing it during this event and was never distributed, so I could perhaps use this idea as a starting point for a VR game/collaborative experience.
    • As a VR experience, I'd also like to create an application through which users could explore shape-shifting, mesmerizing geometries. That is what Processing is really fantastic at, and we should absolutely take advantage of this by showing how cool it would be in VR. Imagine this one: click here, or this one: click here, in VR!

    Please let me know what you think about these ideas - I'd really like your opinion on what could be more appropriate for the project (: I'll be working on a prototype in the next few days.

  • SOLVED [Toxiclibs] Is there a quicker method to test mesh inclusion

    Hi

    I was looking to see if I could test whether a point is inside a mesh. I looked online and found this article, which suggested ray casting as a method: cast a ray and count the number of intersections. Using toxiclibs I could only find out whether a ray intersects a mesh according to the mesh normals, so I had to check against two meshes, one of which is flipped.

    It works, but it just takes a long time to test, and I also have to loop through the function a few times to get rid of all the points (even though I shouldn't need to).

    Anyway, is there a quicker method to check that, maybe with another library? Here is my code.

    import java.util.*;
    import peasy.*;
    import toxi.geom.*;
    import toxi.geom.mesh.*;
    import toxi.processing.ToxiclibsSupport;
    
    
    import wblut.processing.*;
    import wblut.hemesh.*;
    import wblut.geom.*;
    
    
    
    private ToxiclibsSupport gfx;
    TriangleMesh cave;
    TriangleMesh cave2;
    
    
    
    
    ArrayList<Vec3D> pts = new ArrayList<Vec3D>();
    
    
    public void settings() {
      size(1400, 800, P3D);
      smooth();
    }
    
    public void setup() {
    
    
      cave = (TriangleMesh) new STLReader().loadBinary(sketchPath("data/" + "cave.stl"), STLReader.TRIANGLEMESH);
      cave2 = (TriangleMesh) new STLReader().loadBinary(sketchPath("data/" + "cave.stl"), STLReader.TRIANGLEMESH);
      cave2.flipVertexOrder();
    
      Vec3D a = cave.computeCentroid();
      PeasyCam cam = new PeasyCam(this, a.x, a.y, 0, 2200);
    
      gfx = new ToxiclibsSupport(this);
    
      for (int i = 0; i < 20; i++) {
        for (int j = 0; j < 20; j++) {
          for (int k = 0; k < 20; k++) {
            pts.add(new Vec3D(i * 70, j * 70, k * 30));
          }
        }
      }
    
      // Point-in-mesh test (a bit slow).
      // An Iterator is used so points can be removed safely in a single pass;
      // removing by index while advancing skips elements and forces repeated passes.
      Iterator<Vec3D> it = pts.iterator();
      while (it.hasNext()) {
        Vec3D v = it.next();
        Ray3D r = new Ray3D(v, new Vec3D(0, 0, 1));
        // keep the point only if the ray hits the mesh but not the flipped copy
        if (!cave.intersectsRay(r) || cave2.intersectsRay(r)) {
          it.remove();
        }
      }
    
    }
    
    
    public void draw() {
      background(0);
    
      for (Vec3D a : pts) {
        stroke(255);
        strokeWeight(2);
        point(a.x, a.y, a.z);
      }
    
      pushMatrix();
      fill(40, 120);
      noStroke();
      lights();
      gfx.mesh(cave2, false, 0);
      popMatrix();
    
    }
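
    For comparison, a single-mesh alternative as a rough sketch: the classic even/odd (parity) test counts how many triangles one ray crosses - an odd count means the point is inside - and needs no flipped copy of the mesh. It assumes the faces have been pulled out of the mesh into Vec3D triples beforehand (how to extract them depends on the library); the triangle test is a plain Moller-Trumbore implementation.

    boolean insideMesh(Vec3D p, ArrayList<Vec3D[]> tris) {
      Vec3D dir = new Vec3D(0, 0, 1);              // any fixed ray direction works
      int hits = 0;
      for (Vec3D[] t : tris) {
        if (rayHitsTriangle(p, dir, t[0], t[1], t[2])) hits++;
      }
      return (hits % 2) == 1;                      // odd number of crossings = inside
    }

    // Moller-Trumbore ray/triangle test, counting only hits in front of the point
    boolean rayHitsTriangle(Vec3D o, Vec3D d, Vec3D v0, Vec3D v1, Vec3D v2) {
      Vec3D e1 = v1.sub(v0);
      Vec3D e2 = v2.sub(v0);
      Vec3D h = d.cross(e2);
      float a = e1.dot(h);
      if (abs(a) < 1e-6) return false;             // ray parallel to the triangle plane
      float f = 1.0 / a;
      Vec3D s = o.sub(v0);
      float u = f * s.dot(h);
      if (u < 0 || u > 1) return false;
      Vec3D q = s.cross(e1);
      float v = f * d.dot(q);
      if (v < 0 || u + v > 1) return false;
      float t = f * e2.dot(q);
      return t > 1e-6;                             // crossing in front of the point
    }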
    
  • Bugs with window size, colors changing weirdly. (Raytracer)

    In this case, it is raytracing a white cube.

    Which bit of the code is this?

  • Bugs with window size, colors changing weirdly. (Raytracer)

    A simple raytracer is a program that, for each pixel on the screen, casts a ray that goes out to find an object. The ray then returns the RGB value of the color of the object it hit. Most of the time (but not in this case, because I'm still working on it) it also casts another ray towards a light to work out whether that pixel should be rendered in shadow. In this case, it is raytracing a white cube. The part relevant to the issue is that it individually sets the color of each pixel on the screen using rect() (because point() wasn't working, as stated above). For some reason, it does this oddly, and Processing itself seems to be part of the issue.
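
    On the rect()-per-pixel part: a minimal sketch of writing straight into the pixel buffer instead, with tracePixel() as a placeholder for whatever ray cast the program already does per pixel:

    void draw() {
      loadPixels();
      for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
          pixels[y * width + x] = tracePixel(x, y);   // one ray per pixel
        }
      }
      updatePixels();
    }

    color tracePixel(int x, int y) {
      // placeholder: cast a ray through pixel (x, y) and return the color it finds
      return color(255);
    }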

  • Bugs with window size, colors changing weirdly. (Raytracer)

    Can you start by explaining what the program is meant to do? So far our only hint is the class name, raytracer. What's it raytracing? And how?

  • PhD Research on Architectural Facades

    Don't accept an answer too soon... the thread looks closed then.

    see also openprocessing.org | collections | raytracing

    there is one sketch with walls....