
Hi all,

I'm trying to use shaders to draw points. I figured out that in shaders all coordinates have values between -1 and 1, so I have to convert my PVector coordinates. For the X (0 .. width) and Y (0 .. height) values the operation is simple. How can I convert the Z coordinate?

Thanks. paolofuse


## Answers

Would screenX() and screenY() help? http://processing.org/reference/screenX_.html You can give them an x, y, z point and they will map it to the screen coordinates. Then you only need to normalize the values... or did I get it wrong?

No, I need all 3 coordinates. The shader accepts 3 coordinates, like a PVector (x, y, z), but with values between -1 and 1. It's simple to convert the X and Y values, but I don't know how to convert the Z value. Maybe that's the normalize operation you are talking about?

Ok, I guess then you have to decide which Z depth corresponds to 0 and which corresponds to 1. You said that with x and y it's simple, but if x is -20 it can still be seen on the screen for certain z values (as things get smaller further away). Or?

I'm not sure if I get it, but I guess you have to decide to map a certain volume of space into the range 0 .. 1 (that's what I meant by normalization).

So for instance, you could map z from the range 0 .. (width+height)/2 into the target range with map(). Here I just randomly decided to grab the z range between 0 and (width+height)/2; you could also use any other region of space you like. But I'm saying all this only half understanding the problem :)
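The code snippet from that post didn't survive here, but a minimal sketch of the idea, written as plain Java with Processing's map(value, start1, stop1, start2, stop2) reimplemented (the chosen z range is arbitrary, as the post says):

```java
public class ZMap {
    // Same linear rescale as Processing's map(value, start1, stop1, start2, stop2).
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * (value - start1) / (stop1 - start1);
    }

    // Map z from the arbitrarily chosen range 0 .. (width+height)/2 into -1 .. 1.
    static float zShader(float z, float width, float height) {
        return map(z, 0, (width + height) / 2, -1, 1);
    }

    public static void main(String[] args) {
        float width = 640, height = 480;                 // typical sketch size
        System.out.println(zShader(0, width, height));   // -1.0
        System.out.println(zShader(560, width, height)); //  1.0 (560 = (640+480)/2)
        System.out.println(zShader(280, width, height)); //  0.0
    }
}
```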

If the desired range is 0 to 1, we can use norm() instead of map(): http://processing.org/reference/norm_.html
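A quick sketch of that equivalence, with norm() and map() reimplemented in plain Java for illustration (norm() is just map() with a fixed 0 .. 1 target range):

```java
public class NormVsMap {
    // Processing's norm(value, start, stop): normalize value into 0 .. 1.
    static float norm(float value, float start, float stop) {
        return (value - start) / (stop - start);
    }

    // Processing's map(value, start1, stop1, start2, stop2): general linear rescale.
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * (value - start1) / (stop1 - start1);
    }

    public static void main(String[] args) {
        float z = 150, depth = 600;
        System.out.println(norm(z, 0, depth));        // 0.25
        System.out.println(map(z, 0, depth, 0, 1));   // 0.25 -- identical result
    }
}
```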

To clarify, this is what I do:

xShader goes from -1 (when X is 0) to 1 (when X is width)

yShader goes from -1 (when Y is height) to 1 (when Y is 0)

I want to do the same with the Z coordinate, without having to choose arbitrary values. Thanks
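The two conversions described above can be sketched like this in plain Java (Processing's map() reimplemented; note the Y axis flips because screen Y grows downward while clip-space Y grows upward):

```java
public class XYToClipSpace {
    // Processing's map(value, start1, stop1, start2, stop2).
    static float map(float value, float start1, float stop1, float start2, float stop2) {
        return start2 + (stop2 - start2) * (value - start1) / (stop1 - start1);
    }

    // xShader: -1 when X is 0, +1 when X is width.
    static float xShader(float x, float width) {
        return map(x, 0, width, -1, 1);
    }

    // yShader: -1 when Y is height, +1 when Y is 0 (axis flipped).
    static float yShader(float y, float height) {
        return map(y, height, 0, -1, 1);
    }

    public static void main(String[] args) {
        float width = 640, height = 480;
        System.out.println(xShader(0, width));    // -1.0
        System.out.println(xShader(640, width));  //  1.0
        System.out.println(yShader(480, height)); // -1.0
        System.out.println(yShader(0, height));   //  1.0
    }
}
```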

We got width & height already. But I wonder how much depth there'd be?... :-??

Well, depth is related to the View Frustum (the volume between the near and far planes of the camera). z0 is the near plane, z1 is the far plane. Things close to the near plane appear bigger, and anything outside the view frustum is clipped.