XMMatrixPerspectiveFovLH: does z in clip space go from 0 to 1, or from -1 to 1?

hi all,

very simple question: I need to know whether I have to rescale the z range to [0, 1] or not, and I couldn't find any explanation in the XNA Math docs.

If it follows the old DirectX convention, z should be in [0, 1] in clip space (the OpenGL convention is different: there, clip-space z runs from -1 to 1).

The function I'm using to project my scene is XMMatrixPerspectiveFovLH.

thanks in advance ;)

I haven't used that function directly, but I'm 99.9% sure it's scaled to [0, 1]. I've never heard of any of the D3D functions using a [-1, 1] range.

In any case, it should be fairly easy to try out: pass the position's z value to your pixel shader and color the output based on whether the z value is positive or negative. Then you'll know for sure what you're dealing with.
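If you'd rather not touch shaders at all, the projection can also be probed directly on the CPU with DirectXMath. Here's a minimal sketch (the FOV, aspect ratio, and near/far values are just placeholders):

#include <DirectXMath.h>
#include <cstdio>
using namespace DirectX;

int main()
{
    const float nearZ = 0.1f, farZ = 100.0f;
    XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV4, 16.0f / 9.0f, nearZ, farZ);

    // Push points sitting on the near and far planes through the projection;
    // XMVector3TransformCoord also performs the divide by w.
    XMVECTOR zAtNear = XMVector3TransformCoord(XMVectorSet(0.0f, 0.0f, nearZ, 1.0f), proj);
    XMVECTOR zAtFar  = XMVector3TransformCoord(XMVectorSet(0.0f, 0.0f, farZ,  1.0f), proj);

    // Prints 0 at the near plane and 1 at the far plane: the D3D convention.
    printf("near -> %f, far -> %f\n", XMVectorGetZ(zAtNear), XMVectorGetZ(zAtFar));
    return 0;
}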


[quote]
Yeah, the D3D viewport transform works with a [0, 1] range for z in clip space, and the DirectXMath perspective functions are designed to work with that.
[/quote]


thanks for the info.

One side-track question:

What are the conditions for outputting just depth?

I tried setting the pixel shader to null and the render target to null (with the render-target count set to 0), binding only the depth-stencil, but I can't see any depth in the PIX output. Only if I specify a render target along with its depth-stencil am I able to see depth.

The depth-stencil format is D32_FLOAT and the render target is R32_FLOAT; the underlying texture is R32_TYPELESS.
For now I output z/w from the pixel shader, but I'll remove the pixel shader if I can get depth-only output to work.
Then I'll probably go for linear z...

I'd like to use NVIDIA Nsight, but I have an Intel integrated GPU, which isn't compatible with Nsight. Just waiting to get a new NVIDIA card...

You shouldn't need to have a render target bound; you can just render to a depth buffer only. To get around the PIX issue with 32-bit depth buffers, you can try using a full-screen shader that samples the depth buffer and outputs it to the screen for visualization. Just make sure you rescale or linearize the depth in your pixel shader, otherwise everything will look white.
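For what it's worth, here's a minimal D3D11 sketch of that depth-only setup, using the formats mentioned above (R32_TYPELESS texture, D32_FLOAT depth view, R32_FLOAT shader view). The helper function name is hypothetical, and error checking is omitted:

#include <d3d11.h>

ID3D11ShaderResourceView* SetupDepthOnlyPass(ID3D11Device* device,
                                             ID3D11DeviceContext* context,
                                             UINT width, UINT height)
{
    // R32_TYPELESS texture, viewable both as D32_FLOAT (for depth writes)
    // and as R32_FLOAT (for sampling it later during visualization).
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width = width;
    texDesc.Height = height;
    texDesc.MipLevels = 1;
    texDesc.ArraySize = 1;
    texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
    ID3D11Texture2D* depthTex = nullptr;
    device->CreateTexture2D(&texDesc, nullptr, &depthTex);

    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
    dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    ID3D11DepthStencilView* dsv = nullptr;
    device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    ID3D11ShaderResourceView* srv = nullptr;
    device->CreateShaderResourceView(depthTex, &srvDesc, &srv);

    // The depth-only pass itself: zero render targets and no pixel shader.
    // The rasterizer still writes depth from the post-divide z (z/w).
    context->OMSetRenderTargets(0, nullptr, dsv);
    context->PSSetShader(nullptr, nullptr, 0);
    // ... issue draw calls here, then sample 'srv' in a full-screen pass.
    return srv;
}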


[quote name='MJP']
You shouldn't need to have a render target bound; you can just render to a depth buffer only. To get around the PIX issue with 32-bit depth buffers, you can try using a full-screen shader that samples the depth buffer and outputs it to the screen for visualization. Just make sure you rescale or linearize the depth in your pixel shader, otherwise everything will look white.
[/quote]


I've read that you suggested this trick to get linear depth on the fly:


float getLinearDepth(in float zw)
{
    return lightProjector._43 / (zw - lightProjector._33);
}


is zw == z/w, or z*w?

I currently output z/w to the render target (which is also the value that would end up in the depth buffer)...


[quote]is zw == z/w, or z*w?[/quote]

I've verified it to be z/w; in fact, it works perfectly.
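For the record, the algebra behind the trick checks out: the LH perspective matrix writes clip z = viewZ * _33 + _43 and clip w = viewZ, so z/w = _33 + _43 / viewZ, and therefore _43 / (z/w - _33) gives back viewZ. A small self-contained sketch to confirm it numerically (the view-space depth of 25 is arbitrary, and the matrix is assumed to use the same row-vector storage convention as DirectXMath):

#include <DirectXMath.h>
#include <cstdio>
using namespace DirectX;

int main()
{
    XMFLOAT4X4 P;
    XMStoreFloat4x4(&P, XMMatrixPerspectiveFovLH(XM_PIDIV4, 16.0f / 9.0f, 0.1f, 100.0f));

    const float viewZ  = 25.0f;                            // arbitrary view-space depth
    const float zOverW = (viewZ * P._33 + P._43) / viewZ;  // what the depth buffer stores
    const float linear = P._43 / (zOverW - P._33);         // the trick from above

    printf("viewZ = %f, recovered = %f\n", viewZ, linear); // both print 25.0
    return 0;
}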

thanks
