RexHunter99

OpenGL: [solved] D3D's RHW and OpenGL's W

I've rewritten this topic in hopes that someone will reply, as I desperately need help.

==========================================================
Solution: Calling glLoadIdentity() after clearing the buffers at the beginning of the scene fixed my problem (which turned out to have nothing to do with RHW, but with screen co-ordinates).
==========================================================

Though this topic blurs the line between the two APIs, what I'm asking for is an explanation of the difference between their seemingly similar co-ordinate components. Why? I have the source code to a game made in 1999 that uses Direct3D 7, Glide 3 and a software rasterizer. The fan community has encountered problems with all three renderers at some stage. Most people with older 32-bit machines and old nVidia cards (such as my GeForce 6200A) have no problem with the D3D7 or software renderers, but the problems vary, and there have been too many reports lacking information (from a large number of people who don't even know the difference between an add-on and an onboard card) for us to narrow down the causes.

So, to counter the problems, I proposed to my teammates that we rebuild the renderers: a Direct3D 9 renderer and an OpenGL renderer would each replace the old ones and hopefully kill all renderer-related problems. Note that the game has all three renderers in the same project and uses #ifdef pre-processor checks to decide whether the current build target should use the Render###.CPP file the compiler is looking at; it's an old and inefficient approach, but right now we aren't up to rewriting large portions of code to implement a better solution.

Many sources say a lot of things. So far I've read that Glide and OpenGL are similar, and while I agree, it seems only the way you write application code for them is similar, while OpenGL and Direct3D 9 now share a lot of similarities. What I began doing is rewriting the 3Dfx Glide code as OpenGL code, and since the original renderers don't make use of the projection or world matrices, we're stuck with D3D's RHW and Glide's messy equivalents: z, oow and ooz (oow is for W-buffering and ooz for Z-buffering, IIRC, but I don't see a purpose for W-buffering in this game since the code seems to indicate that only the Z-buffer is used).

Now... I've done a bit of googling for resources and come up rather dry on explanations of the difference between D3D's RHW and OpenGL's W vertex values, and I'm hoping someone here can clear it up for me, as I'd like to write these new renderers ASAP and there are a lot of people eagerly waiting for modern hardware support. The best I've managed to scrounge up, sadly, is the following:
Quote:
RHW is often 1 divided by the distance from the origin to the object along the z-axis.
From my understanding (and please correct me if I am wrong), that would mean that RHW is:

float origin_z = 0.0f;
float vert_z   = 10.0f;

float rhw = 1.0f / (vert_z - origin_z);   /* rhw = 0.1f */


Now, I've tried using the same data for the OpenGL renderer as for the D3D renderer, but I'm not getting much more than either a mess of triangles having a heart attack all over the screen, or no triangles at all. RHW and W equal to 1.0f seem to do the same thing if I provide x, y, z values that lie within the view range (except that OpenGL's 2D floating-point origin is in the middle of the screen at (0.0f, 0.0f), whereas D3D's is not). Anyway, hopefully someone reads this topic this time around and can help me out, as I'm stuck trial-and-erroring (which is not working out so well with a complete game engine, I'm afraid).

~James

[Edited by - RexHunter99 on March 16, 2010 9:05:08 AM]

-Bump-

I hate bumping, but I really need help with this. I've spent the last three days trying to figure it out through trial and error, and nothing beats knowing the correct answer straight up.

I don't know if the topic scared everyone away or if it has just gone accidentally unnoticed, but the help is very, very much needed and appreciated.

I know OpenGL expects a certain triangle winding, counter-clockwise by default, for a triangle to be considered front-facing. Direct3D of course expects the opposite. This could mean that triangles are being culled in OpenGL that would not be culled in Direct3D, and vice versa. As far as actual coordinates being off, I don't know that much about the DX/OGL differences, sorry.
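Something like this is what I mean; a minimal sketch, assuming the fixed-function pipeline and that culling is enabled at all:

/* Sketch: make OpenGL treat Direct3D's clockwise-wound triangles as front faces. */
glEnable(GL_CULL_FACE);   /* only matters if culling is actually on       */
glCullFace(GL_BACK);      /* cull back faces, the usual mode              */
glFrontFace(GL_CW);       /* GL defaults to GL_CCW; D3D fronts are CW     */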

Why can't you just use view and projection matrices like everybody else? There's no need to mess with all this RHW stuff; just use the standard perspective calculation and you're done. AFAIK triangle winding is the same in DX and OGL, because it's possible to import models from one to the other without issues.
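For example (a rough sketch; the FOV, clip distances and camera variables here are made-up placeholders):

#include <GL/glu.h>

/* The standard fixed-function setup instead of pre-transformed RHW vertices. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)winW / (double)winH, 0.1, 100000.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camX, camY, camZ,     /* eye position              */
          lookX, lookY, lookZ,  /* point the camera looks at */
          0.0, 1.0, 0.0);       /* up vector                 */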

I'll post a picture of the OpenGL renderer in action (so you can see what I mean). Note that in order to even see the triangles, I had to fix the first vertex to the co-ordinates 0.0f, 0.0f, 0.0f, 1.0f (x, y, z, w).

[screenshot]

^^^ Note that the pixels that are not red (the area to the lower left) are the background color I defined when I cleared the color buffer.

Quote:
Original post by stonemetal
I know OpenGL expects a certain triangle winding, counter-clockwise by default, for a triangle to be considered front-facing. Direct3D of course expects the opposite. This could mean that triangles are being culled in OpenGL that would not be culled in Direct3D, and vice versa. As far as actual coordinates being off, I don't know that much about the DX/OGL differences, sorry.

Yes, OpenGL expects the opposite triangle winding to Direct3D: OpenGL requires counter-clockwise and Direct3D requires clockwise. I've accommodated for this in the renderers (OpenGL will now take in Direct3D data, for testing purposes right now; I'll properly fix this problem at a later stage).

Quote:
Original post by Momoko_Fan
Why can't you just use view and projection matrices like everybody else? There's no need to mess with all this RHW stuff; just use the standard perspective calculation and you're done. AFAIK triangle winding is the same in DX and OGL, because it's possible to import models from one to the other without issues.

Oh, snappy snappy. You know you could have worded this reply more nicely? You make it sound like I'm an idiot. I would have tried to do it the 'normal' way, but like I've said, the code is very hard to mess around with; right now I'm only replacing code with updated code that uses Direct3D 9 and OpenGL (my problem lies with the OpenGL side, though). The game's original creators already processed all the projection data prior to the rendering stage, or as much of it as possible. I have to deal with this RHW stuff because otherwise I'd have to do a complete code overhaul. My position on the team is graphics programmer; I work on the 3D code because I can visualize a 3D scene in my head rather easily from the given information.

And you're wrong: D3D and OpenGL have different triangle winding by default. You can change how they deal with that if you enable backface culling and change the winding, but that's a short-term, unpreferred fix. Also, OpenGL and Direct3D have different co-ordinate systems; one is left-handed and the other is right-handed. I think I've accommodated for this already, though... not 100% sure, because I can't quite tell until this RHW stuff is down.


Just going to say this for anyone else about to tell me to do this the 'normal/modern' way with the perspective/ortho matrix functions:
If anyone else wants to be a smart-arse like Momoko_Fan was, then why don't you try rewriting over 20,000 lines of code for me? Remember, you have to accommodate at least two different graphics APIs and comment most of the functions as you go along so you and others know what they do.

[Edited by - RexHunter99 on March 12, 2010 9:10:03 AM]

Quote:
Original post by RexHunter99
Also, OpenGL and Direct3D have different co-ordinate systems; one is left-handed and the other is right-handed. I think I've accommodated for this already, though... not 100% sure, because I can't quite tell until this RHW stuff is down.


Silly question, but sometimes the silly stuff is what gets us... do you mean you transposed the [4]x[4] D3D matrix in order to get an OpenGL [16]-element matrix?

Most of the time that's all I have to do when "translating" matrix operations intended for D3D to OpenGL.

Good Luck.

No, my quick-fix solution was to invert the Y co-ordinate so it would appear in the 'correct place', i.e. the same place it would in Direct3D. Anyway, that's beside the point. I want to know whether there's a difference between OpenGL's W and Direct3D's RHW values and, if there is, what the difference is, so I can accommodate it in the game code.
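The flip itself is just a one-liner per vertex, something like this (a sketch; which form applies depends on the space the vertex is in at that point, and win_h is a hypothetical window-height variable):

v.y = -v.y;                    /* if the vertex is already in clip/NDC space */
/* v.y = (float)win_h - v.y;      if it is still in window-space pixels      */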

Quote:
Original post by RexHunter99
No, my quick-fix solution was to invert the Y co-ordinate so it would appear in the 'correct place', i.e. the same place it would in Direct3D. Anyway, that's beside the point. I want to know whether there's a difference between OpenGL's W and Direct3D's RHW values and, if there is, what the difference is, so I can accommodate it in the game code.


I am not sure there is a difference; there shouldn't be. I usually just leave W at 1 in OpenGL. I think the OpenGL ModelView matrix corresponds to two separate matrices in D3D (World and View), so maybe you should factor that in.
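In other words, something like this (a sketch; matmul4 is a hypothetical 4x4 multiply helper, and row/column conventions are glossed over here):

/* GL's single MODELVIEW matrix plays the role of D3D's VIEW * WORLD pair. */
float modelview[16];
matmul4(modelview, view, world);   /* modelview = view * world */
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelview);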

Perhaps this will help:
Quote:

9.011 How are coordinates transformed? What are the different coordinate spaces?

Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.
Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.
Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.
Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.
Object coordinates are the raw coordinates you submit to OpenGL with a call to glVertex*() or glVertexPointer(). They represent the coordinates of your object or other geometry you want to render.
Many programmers use a World Coordinate system. Objects are often modeled in one coordinate system, then scaled, translated, and rotated into the world you're constructing. World Coordinates result from transforming Object Coordinates by the modelling transforms stored in the ModelView matrix. However, OpenGL has no concept of World Coordinates. World Coordinates are purely an application construct.
Eye Coordinates result from transforming Object Coordinates by the ModelView matrix. The ModelView matrix contains both modelling and viewing transformations that place the viewer at the origin with the view direction aligned with the negative Z axis.
Clip Coordinates result from transforming Eye Coordinates by the Projection matrix. Clip Coordinate space ranges from -Wc to Wc in all three axes, where Wc is the Clip Coordinate W value. OpenGL clips all coordinates outside this range.
Perspective division performed on the Clip Coordinates produces Normalized Device Coordinates, ranging from -1 to 1 in all three axes.
Window Coordinates result from scaling and translating Normalized Device Coordinates by the viewport. The parameters to glViewport() and glDepthRange() control this transformation. With the viewport, you can map the Normalized Device Coordinate cube to any location in your window and depth buffer.
For more information, see the OpenGL Specification, Figure 2.6.
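To make that concrete, the last two steps boil down to this (a sketch with made-up clip-space numbers and a 640x480 viewport):

/* Perspective divide and viewport transform done by hand. */
float cx = 320.0f, cy = -120.0f, cz = 200.0f, cw = 400.0f;  /* clip coords */

/* perspective divide -> normalized device coordinates in [-1, 1] */
float nx = cx / cw;                      /*  0.8  */
float ny = cy / cw;                      /* -0.3  */
float nz = cz / cw;                      /*  0.5  */

/* viewport transform, assuming glViewport(0, 0, 640, 480)
   and the default glDepthRange(0, 1)                       */
float wx = (nx + 1.0f) * 0.5f * 640.0f;  /* 576.0 */
float wy = (ny + 1.0f) * 0.5f * 480.0f;  /* 168.0 */
float wz = (nz + 1.0f) * 0.5f;           /*  0.75 */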


If you're dealing with matrices, though, you shouldn't just mirror Y; you should transpose the matrices because of how they are accessed. OpenGL defines them as a one-dimensional array, whereas D3D accesses them as a 4x4 two-dimensional array, so mapping one onto the other doesn't leave the elements in the proper positions. Check point 9.005 at the link I posted above.
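In code, that remap is just a transpose, roughly (a sketch):

/* D3D-style row-major float[4][4] -> GL-style column-major float[16].
   gl[col * 4 + row] holds the element at (row, col).                  */
void d3d_to_gl_matrix(const float d3d[4][4], float gl[16])
{
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            gl[col * 4 + row] = d3d[row][col];
}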

Just passing through briefly... I might have some code you can look at later that might help?

Aliens Vs Predator's original D3D 5/6 code, the Linux OpenGL renderer update for said game, and my D3D9 equivalent. The source code repository for the Linux port doesn't seem to be online at the moment, so I can't link you right now.

Out of curiosity, what game is it? If it's one I like, I'd be interested in helping :)

Quote:
Original post by sirlemonhead
Just passing through briefly... I might have some code you can look at later that might help?

Aliens Vs Predator's original D3D 5/6 code, the Linux OpenGL renderer update for said game, and my D3D9 equivalent. The source code repository for the Linux port doesn't seem to be online at the moment, so I can't link you right now.

Out of curiosity, what game is it? If it's one I like, I'd be interested in helping :)


D3D 5/6 and D3D 7 are quite similar despite how far they came ;) After that, D3D just advanced exponentially...

Erm, that might be nice actually, thanks. When you can, could you show me some code? It would be a great help. (I'm just implementing the basic HUD UI function equivalents now; hopefully glDrawPixels isn't so slow that it hampers gameplay.)

The game is Carnivores 2, created by a company known as Action Forms. Currently Tatem Games, a mobile gaming company, has a license to make an iPhone app called Carnivores: Dinosaur Hunter, which is set for release this year. Action Forms gave us the source code, and we were dismayed to find that they'd overwritten the first game's code with the second game's code and also lost the menu code (the game compiles into a .REN file, a renamed .EXE, which the menu executes after you've selected your level, dinosaurs and weapons).

You may or may not have heard of it ;)

So I've been tinkering around again and got the Direct3D 9 version to work (with extra tinkering). I think I somewhat understand how the z and RHW values work (when RHW is required, e.g. when no projection matrix is used).

The z value of a vertex defines the value used within the Z-buffer, where 1.0f is as close to the camera as possible and 0.0f is as far away as possible. The RHW is typically calculated as 1.0f / z_dist_from_origin (the origin being the camera position, which in my case always remains 0,0,0).

For some reason, the original D3D renderer defines a value _ZSCALE as -16.0f and divides it by the vertex's z value; the RHW is then computed as z * _AZSCALE, where _AZSCALE is equal to 1.0f / 16.0f.
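Written out (if I'm reading the old code right), the two constants cancel into a plain reciprocal, e.g. with one of the z values from the dump below:

const float _ZSCALE  = -16.0f;
const float _AZSCALE = 1.0f / 16.0f;

float vert_z = -19070.726563f;      /* a view-space z from the dump below */
float z      = _ZSCALE / vert_z;    /* ~0.000839                          */
float rhw    = z * _AZSCALE;        /* ~0.0000524, i.e. -1.0f / vert_z    */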

Also, by default, Direct3D's co-ordinate origin is at the top-left of the window: you simply pass the window width as a float to a vertex and it will be placed at the right-hand side of the window. In OpenGL, the origin is at the centre of the window space, and a value of 1.0f on the x axis places the vertex at the right side of the window, while -1.0f places it at the left. It's the same for y: 1.0f puts the vertex at the top of the window and -1.0f at the bottom.
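So if I'm right, converting a D3D window-space vertex into GL's co-ordinate space is a linear remap, something like this (a sketch; win_w and win_h are the window size in pixels):

/* D3D window space (origin top-left, in pixels) -> GL NDC (origin centre, [-1, 1]) */
float ndc_x = (d3d_x / win_w) * 2.0f - 1.0f;
float ndc_y = 1.0f - (d3d_y / win_h) * 2.0f;   /* GL's Y axis points up */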


I'll post some of the data I dumped during one of my tests. I have to work with this data without manipulating it too much (I can modify it if absolutely necessary, but I'd prefer not to).

ev0: 438.974976,371.207733,-19070.726563
ev1: 438.974976,371.207733,-19070.726563
ev2: 438.974976,371.207733,-18559.201172
ev0: 438.974976,371.207733,-19070.726563
ev1: 438.974976,371.207733,-18559.201172
ev2: 438.974976,371.207733,-18558.964844
ev0: 456.157349,371.207733,-19070.726563
ev1: 456.157349,371.207733,-19070.726563
ev2: 456.157349,371.207733,-18559.201172
ev0: 456.157349,371.207733,-19070.726563
ev1: 456.157349,371.207733,-18559.201172
ev2: 456.157349,371.207733,-18559.201172
ev0: 559.253235,364.763855,-19070.019531
ev1: 559.253235,364.763855,-19069.783203
ev2: 559.253235,364.763855,-18558.257813
ev0: 559.253235,364.763855,-19070.019531
ev1: 559.253235,364.763855,-18558.257813
ev2: 559.253235,364.763855,-18558.494141
ev0: 473.339691,371.207733,-19070.726563
ev1: 473.339691,371.207733,-19070.726563
ev2: 473.339691,371.207733,-18559.201172
ev0: 473.339691,371.207733,-19070.726563
ev1: 473.339691,371.207733,-18559.201172
ev2: 473.339691,371.207733,-18559.201172
ev0: 542.069885,366.911865,-19070.253906
ev1: 542.069885,366.911865,-19070.019531
ev2: 542.069885,366.911865,-18558.494141
ev0: 542.069885,366.911865,-19070.253906
ev1: 542.069885,366.911865,-18558.494141
ev2: 542.069885,366.911865,-18558.728516


The left-hand float is X, the middle float is Y and the right-hand float is Z. These are the same values used by both the old D3D code and the new D3D9 code.


EDIT:
I think I might be able to solve the origin problem with glFrustum... this will take some testing, but what I'm thinking of is something along the lines of:

glFrustum(0.f, (float)WinW, 0.f, (float)WinH, 0.1f, 100000.f);

This should set the upper-left corner as the origin (or the lower-left corner, either one), and then the D3D data should work so long as z and w aren't treated differently from D3D's z and RHW. (Again, there seems to be no difference between them other than OpenGL dropping the 'rh' prefix.)
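Alternatively, the orthographic version of the same idea would pin the origin to a corner directly (a sketch, untested; glOrtho rather than glFrustum):

/* Map window pixels straight through the projection, with the origin at
   the top-left like D3D's (pass 0.0, (double)WinH for bottom-left instead). */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, (double)WinW,      /* left, right */
        (double)WinH, 0.0,      /* bottom, top */
        -1.0, 1.0);             /* near, far   */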


[Edited by - RexHunter99 on March 12, 2010 11:56:47 PM]

Ah! Never mind, folks... I got it working after I put glLoadIdentity() right after clearing the buffers. It seems to work 100% the same as the D3D9 version so long as I leave the RHW value in OpenGL at 1.0f o_0
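For anyone who hits the same thing, the per-frame sequence ended up looking roughly like this (a sketch):

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* with the matrices at identity, vertices pass through untransformed,
   which is the closest GL gets to D3D's pre-transformed (RHW) vertices */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* ... submit vertices with w = 1.0f ... */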

It doesn't really answer the question at hand, but my problem is over for now, so I guess this means it's somewhat solved.

Thanks to all who replied.

Sorry I didn't reply sooner. In the D3D world, RHW (reciprocal homogeneous W) is another way of saying: bypass the projection and modelview transforms.

In the GL world, there is no such bypass. The solution is to set the projection and modelview matrices to identity.

In today's world of GPUs, in other words a shader world, RHW is a dead concept. The vertex shader always processes ALL vertices.
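In shader terms, the closest equivalent of RHW data is a pass-through vertex shader (a sketch; old-style GLSL embedded as a C string):

/* Pre-transformed vertices go straight out as clip coordinates. */
static const char *passthrough_vs =
    "void main()                                   \n"
    "{                                             \n"
    "    gl_Position = gl_Vertex; /* no matrices */\n"
    "}                                             \n";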

Quote:
Ah! Never mind, folks... I got it working after I put glLoadIdentity() right after clearing the buffers. It seems to work 100% the same as the D3D9 version so long as I leave the RHW value in OpenGL at 1.0f o_0


That is the only solution. Also, as someone said, w is always 1 by default for vertices. It is the same in D3D.

But my D3D9 renderer (it's the old D3D7 renderer, just initializing D3D9 and calling that) uses RHW fine; in fact, it seems to work just as it did 'back in the day'. I spent quite a bit of time fiddling with the RHW in D3D9, and in the end I had to invert the old value to get one that worked correctly... so either you are wrong or I am doing something that is not normally done.

It will work with D3D9. The only thing I am saying is that it is a dead concept. I haven't gotten into D3D10, but I wonder if they got rid of it. I know that you must use shaders with D3D10.

