olivelarouille

Member
  • Content Count

    5
  • Joined

  • Last visited

Community Reputation

108 Neutral

About olivelarouille

  • Rank
    Newbie
  1. olivelarouille

    Projection matrix model of the HTC Vive

    Hi Dirk, thanks a lot! This makes a lot of sense. In the meantime, I found a blog post detailing this for the Oculus: https://rifty-business.blogspot.de/2013/10/understanding-matrix-transformations.html?m=1 It's not the Vive, but the principle is the same; I post it here for reference, as the diagrams are useful for understanding.

    As for the CAVE, the view directions are also parallel, but the frustums are asymmetric in the other direction: there is more frustum space between the eyes. This is required to correctly project (with a beamer) both images onto the same wall, so that they create a perceived parallax when observed from the tracked head's viewpoint (the off-axis model behind this is sketched just below). This means the legacy CAVE stereo code is not useful for me, but that's another story. :/
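    For reference, a minimal sketch of the off-axis stereo model from the Paul Bourke page linked in the next post; the parameter names (apertureDeg, focalLength, eyeSep) follow that page, and this is only an illustration of why each eye's frustum is skewed towards the convergence plane, not the application's actual code:

        #include <cmath>

        // Asymmetric (off-axis) frustum bounds for one eye, following the model on
        // http://paulbourke.net/stereographics/stereorender/
        struct Frustum { double left, right, bottom, top, nearZ, farZ; };

        Frustum OffAxisFrustum(double apertureDeg,  // camera aperture (vertical FOV) in degrees
                               double aspect,       // image width / height
                               double focalLength,  // convergence distance (eye-to-wall in a CAVE)
                               double eyeSep,       // interocular distance
                               double nearZ, double farZ,
                               bool leftEye)
        {
            const double pi   = 3.14159265358979323846;
            const double wd2  = nearZ * std::tan(apertureDeg * pi / 360.0); // half height at the near plane
            const double ndfl = nearZ / focalLength;                        // near / convergence distance
            // The left eye sits at -eyeSep/2, so its frustum is shifted towards the
            // right (towards the center of sight); the right eye is the mirror image.
            const double shift = 0.5 * eyeSep * ndfl * (leftEye ? 1.0 : -1.0);

            return { -aspect * wd2 + shift, aspect * wd2 + shift,
                     -wd2, wd2, nearZ, farZ };
        }

    The resulting bounds would go straight into glFrustum (or an equivalent matrix), which is why the CAVE path only needs FOV, aspect, convergence distance and eye separation.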
  2. Hello everyone,

    I am implementing support for the HTC Vive in an application that was designed to render stereo images for a CAVE environment (i.e., anaglyph or with stereo glasses). To render with the Vive in OpenGL, it is very easy to get the projection matrix for each eye. However, the app I am working on requires different parameters, namely the FOV, the image ratio/size, and a parameter called convergence distance (or sometimes focal length), which is usually the distance from the viewer to the wall in a CAVE. This is a standard model for stereo rendering; more details can be found at http://paulbourke.net/stereographics/stereorender/ (see the sections about the "Off-axis" model).

    I was trying to match the Vive to such a model, but so far without success. Do you know which model is used to compute the projection matrix of the Vive? In particular, if you look at the matrix (IVRSystem::GetProjectionMatrix), you can see that it is not symmetric. Left eye projection:

        [ 0.756570876   0.000000000  -0.0577721484   0.000000000 ]
        [ 0.000000000   0.680800676  -0.00646502757  0.000000000 ]
        [ 0.000000000   0.000000000  -1.01010108    -0.101010107 ]
        [ 0.000000000   0.000000000  -1.00000000     0.000000000 ]

    This can also be seen with IVRSystem::GetProjectionRaw, which gives for the left eye:

        left     -1.39811385
        right     1.24539280
        bottom    1.45936251
        top      -1.47835493

    What puzzles me here is that, in absolute value, left is bigger than right, which indicates that the frustum for the left eye is oriented towards the left, and not towards the center of sight (to the right of the left eye) as would be expected from the model described by Paul Bourke (how the raw values compose into the matrix is sketched just below).

    Any help or experience would be greatly appreciated. Thanks!
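    For anyone reading this later: the values from GetProjectionRaw are the (signed) tangents of the frustum half-angles, and the matrix from GetProjectionMatrix can be rebuilt from them as a plain asymmetric-frustum projection. A minimal sketch, assuming the composition used in the OpenVR samples; the near/far values of 0.1 and 10 are inferred from the dump above, not guaranteed defaults:

        #include <openvr.h>

        // Rebuild an eye's projection from IVRSystem::GetProjectionRaw.
        // The raw values are tangents of the half-angles (left and top are negative),
        // so the matrix is just an asymmetric frustum built from them.
        vr::HmdMatrix44_t ComposeProjectionFromRaw(vr::IVRSystem* hmd, vr::EVREye eye,
                                                   float zNear, float zFar)
        {
            float l, r, t, b;
            hmd->GetProjectionRaw(eye, &l, &r, &t, &b);

            const float idx = 1.0f / (r - l);
            const float idy = 1.0f / (b - t);

            vr::HmdMatrix44_t p = {};
            p.m[0][0] = 2.0f * idx;   p.m[0][2] = (r + l) * idx;
            p.m[1][1] = 2.0f * idy;   p.m[1][2] = (b + t) * idy;
            p.m[2][2] = -zFar / (zFar - zNear);
            p.m[2][3] = -(zFar * zNear) / (zFar - zNear);
            p.m[3][2] = -1.0f;
            return p;
        }

    With the raw values quoted above (l = -1.398, r = 1.245, t = -1.478, b = 1.459) and zNear = 0.1, zFar = 10, this reproduces the dumped matrix: m[0][0] = 2 / (r - l) ≈ 0.7566 and m[0][2] = (r + l) / (r - l) ≈ -0.0578. The off-center frustum comes from the headset's lens and panel geometry rather than from a convergence setup, which is presumably why it does not line up with the off-axis CAVE model directly.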
  3. olivelarouille

    DX11 full screen quad problem

    As for the way I draw the vertices, I send them directly in clip space, in the range [-1, 1], so I don't have to use an orthographic projection (a sketch of such a quad follows below).
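    For completeness, a sketch of what "directly in the range [-1, 1]" looks like as vertex data; the layout, winding and use of an index buffer are assumptions for illustration, not the original project's code:

        #include <cstdint>
        #include <DirectXMath.h>

        // Four corners of clip space; no projection is needed because the vertex
        // shader passes the positions straight through to the rasterizer.
        struct Vertex { DirectX::XMFLOAT4 pos; };

        static const Vertex kQuadVertices[4] =
        {
            { { -1.0f, -1.0f, 0.0f, 1.0f } },   // 0: bottom-left
            { { -1.0f,  1.0f, 0.0f, 1.0f } },   // 1: top-left
            { {  1.0f,  1.0f, 0.0f, 1.0f } },   // 2: top-right
            { {  1.0f, -1.0f, 0.0f, 1.0f } },   // 3: bottom-right
        };

        // Two clockwise triangles (the D3D11 default front face), bound as
        // DXGI_FORMAT_R16_UINT and drawn with DrawIndexed(6, 0, 0).
        static const uint16_t kQuadIndices[6] = { 0, 1, 2,  0, 2, 3 };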
  4. olivelarouille

    DX11 full screen quad problem

    Hi everyone, thanks for the suggestions, and sorry for the late reply; I work on this project part time and was busy with some OpenGL problems, but that's another story.

    I found a solution by creating the device in debug mode, as suggested by Nik02 (a sketch of that follows below). I got the following error:

    D3D11: ERROR: ID3D11Device::DrawIndexed: Rasterization Unit is enabled (PixelShader is not NULL or Depth/Stencil test is enabled) but position is not provided by the last shader before the Rasterization Unit. [ EXECUTION ERROR #362: DEVICE_DRAW_POSITION_NOT_PRESENT ]

    It turned out that the output of my vertex shader had the semantic POSITION instead of SV_POSITION. I am translating a whole bunch of shaders from DX9 to DX11, so this change didn't get my attention! Thanks again!
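    In case it helps someone else, this is roughly what "creating the device in debug mode" amounts to (a sketch with the default adapter and feature levels; error handling and swap chain omitted):

        #include <d3d11.h>

        // Create the device with the debug layer enabled so that the runtime reports
        // errors such as DEVICE_DRAW_POSITION_NOT_PRESENT in the debug output.
        HRESULT CreateDebugDevice(ID3D11Device** device, ID3D11DeviceContext** context)
        {
            UINT flags = 0;
        #if defined(_DEBUG)
            flags |= D3D11_CREATE_DEVICE_DEBUG;   // requires the D3D11 SDK layers to be installed
        #endif

            D3D_FEATURE_LEVEL level = {};
            return D3D11CreateDevice(
                nullptr,                   // default adapter
                D3D_DRIVER_TYPE_HARDWARE,
                nullptr,                   // no software rasterizer module
                flags,
                nullptr, 0,                // default feature level list
                D3D11_SDK_VERSION,
                device, &level, context);
        }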
  5. Hi there,

    This is my first post here and I would like to ask for some help or fresh ideas for debugging. I am trying to draw a simple full-screen quad using DirectX 11. I draw two triangles in the range (-1, 1) in x and y, use a pass-through vertex shader, and the pixel shader writes red to the pixel. I checked the render target and the states, but I can't get anything to appear on the screen.

    I used PIX to debug the frame, and the draw call seems to output the correct geometry to the viewport. When I debug a pixel, there is no draw call event, so I guess the primitives are clipped, or at least that something goes wrong at the rasterizer stage... I tried changing the order of the vertices and disabling depth clipping, but without success...

    Does anyone have a suggestion on something I could be missing? Thanks a lot.

    Olive
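    For anyone landing here from a search, the cause is in the reply above (item 4): the vertex shader output used the POSITION semantic instead of SV_POSITION, so the rasterizer never received a position. A minimal pass-through / solid-red pair might look like this (a sketch, not the original project's shaders; it would be compiled with D3DCompile against vs_4_0 / ps_4_0):

        // Vertex shader: positions are already in clip space, so just pass them on.
        // The output semantic must be SV_POSITION, otherwise DrawIndexed fails with
        // DEVICE_DRAW_POSITION_NOT_PRESENT when the debug layer is enabled.
        static const char kVertexShader[] = R"(
            struct VSIn  { float4 pos : POSITION; };
            struct VSOut { float4 pos : SV_POSITION; };

            VSOut main(VSIn input)
            {
                VSOut output;
                output.pos = input.pos;   // no projection, the quad is already in clip space
                return output;
            }
        )";

        // Pixel shader: write solid red so the quad is obvious on screen.
        static const char kPixelShader[] = R"(
            float4 main(float4 pos : SV_POSITION) : SV_TARGET
            {
                return float4(1.0, 0.0, 0.0, 1.0);
            }
        )";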