SapphireStorm

OpenGL Texture Coordinates



Hi guys,

I'm rendering assets originally created for a D3D pipeline with OpenGL. To account for the texture coordinate differences between the two APIs, I'm applying a simple transform to every V coordinate per vertex, namely V = 1.0f - V.
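A minimal sketch of that per-vertex V flip (the struct and function names are my own, not from the post); note that it touches only V, so by itself it cannot mirror anything along U:

```cpp
#include <vector>

struct Vertex { float u, v; /* position, normal, etc. omitted */ };

// Flip the V coordinate of every vertex to convert from D3D's
// top-left texture origin to OpenGL's bottom-left origin.
// U is left untouched.
void flipV(std::vector<Vertex>& verts) {
    for (Vertex& vtx : verts)
        vtx.v = 1.0f - vtx.v;
}
```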

Now, this sort of works. However, I don't think it's quite right.

The U values seem to be mirrored after this transform. For most models, the eye isn't acute enough to actually tell a difference. Models with text on them, however, clearly illustrate the problem: they look as if you wrote a sentence in MS Paint and then clicked 'flip horizontally'.

I'm just wondering if this side effect makes sense or maybe I'm doing something else wrong entirely.

My advice is to stop and learn, from the beginning, how texture coordinates work in both APIs and how it relates to the image data you actually upload as a texture.

In OpenGL, the pointer passed to glTexImage is the texel whose texture coordinate is (0,0). The first coordinate increases along the row, and the second coordinate increases for every row, from the start of the image buffer.

Now, that's fine, but you must also know how you load your images. Do you load them bottom-up or top-down? That, and not OpenGL, determines how the texture will be oriented. If you don't know that for sure, you are really going to have problems, because you don't even know what you're starting with, so any adjustment will be based on guesses rather than being a robust, actual solution. As soon as you have a problem, the real cause could be anywhere, even in multiple places, along the long chain of parameters that orients a texture.
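For illustration, one way to normalize a top-down image buffer (row 0 = top of the picture) into the bottom-up order that puts OpenGL's (0,0) texcoord at the bottom-left — the function name and buffer layout here are my assumptions, not from the post:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Flip an image buffer vertically in place. After this, the first row
// in memory is the bottom of the picture, which is the row glTexImage
// maps to texture coordinate v = 0.
void flipRows(std::vector<std::uint8_t>& pixels,
              int width, int height, int bytesPerPixel) {
    const int rowSize = width * bytesPerPixel;
    for (int y = 0; y < height / 2; ++y)
        std::swap_ranges(pixels.begin() + y * rowSize,
                         pixels.begin() + (y + 1) * rowSize,
                         pixels.begin() + (height - 1 - y) * rowSize);
}
```

Doing this once, at load time, is exactly the kind of early, single-point correction meant here: everything downstream can then treat textures identically in both APIs.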

More often than not, when someone has a problem with an upside-down texture, the texture isn't even loaded into memory the way around they think to begin with, and they try to find a solution further down the chain of operations. I assume Direct3D has the same pitfall, just that it works the other way around at some point, since you're seeing a difference.

Learn the way textures are handled, and work with the differences instead of against them. That is, find a point as early as possible where API-dependent corrections can be made, and then treat textures and coordinates the same everywhere else. It is, at least from an OpenGL perspective, possible to do that, so if anything, you can adapt OpenGL to work like Direct3D, if the other way around is not reasonable.

Thanks for the post, Bob. That's basically the kind of post I needed. Something like, "Hey, you should be able to do this. Your code is broken. Go fix it."

So, I started with textures, all the way back in my DDS reader. I loaded up a sample DDS in an external viewer and rendered some full-screen quads using a texture coordinate scheme from DirectX. The quad looked upside down, as expected and as I remembered from testing the code originally. If I change the texture coordinates to match how OpenGL works, the textures match.
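For reference, the two full-screen-quad texcoord schemes being compared above can be sketched as plain data (the corner ordering is my own choice, not from the post) — note the U column is identical in both:

```cpp
// Corner order: bottom-left, bottom-right, top-right, top-left.
// D3D places texcoord (0,0) at the image's top-left; OpenGL maps the
// first row of uploaded pixel data to v = 0, which puts (0,0) at the
// bottom-left when the image is stored bottom-up.
const float d3dUV[4][2] = {{0, 1}, {1, 1}, {1, 0}, {0, 0}};
const float glUV[4][2]  = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
```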

At this point, I was pretty confident the texture pipeline was alright. As long as you account for the different orientation of the V-axis, you should be in the clear.

Then I started digging a little deeper, into the models themselves. I stumbled onto the actual problem by pure luck: I was messing around with triangle winding and eventually noticed the textures looked correct on the back faces of all the triangles in the model. That's when I remembered I never actually accounted for the handedness difference between the two renderers. Well, that's not entirely true. I basically just hand-waved at it.

I was like, "Well, instead of the default pose of a model looking at me, it will be looking away from me. So, who really cares? I'll just have the camera and lights be oriented from the other side of Z and be done with it."

And honestly, that worked pretty well. However, something else happens that, at the conceptual level, I still don't fully understand. The best way to describe it is that it causes your models to render inside-out. If someone can clarify exactly what happens, I'd like to know.

But anyway, the actual solution was: as you initially read in the data, negate the Z in all model position vectors and negate the I and J components in all the model quaternions.
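That fix, mirroring the model data across the XY plane as it is read in, might look like this (the type and function names are mine; a sketch, not the poster's actual code):

```cpp
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; }; // x, y, z are the i, j, k parts

// Mirror a position across the XY plane (left-handed <-> right-handed).
void mirrorZ(Vec3& p) { p.z = -p.z; }

// Mirror a rotation across the XY plane: the rotation axis (ax, ay, az)
// maps to (-ax, -ay, az) with the same angle, i.e. negate i and j.
void mirrorZ(Quat& q) { q.x = -q.x; q.y = -q.y; }
```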
