MichaelBarth

OpenGL Texture Coordinate Mapping (OBJ)


Hello everyone, I hope this is an okay place to post this; it's specific to the OBJ file format, but I'm using OpenGL to map the coordinates.

 

Anyway, I've tried every way I can think of to map the texture coordinates of this simple cube, and no matter what I do, the coordinates come out wrong on at least four of its faces.

 

In Blender, the cube is like so:

 

[Screenshot: the UV-mapped cube in Blender]

 

But in my program, it comes out looking like this:

 

[Screenshot: the cube as rendered by my program]

 

 

I have gotten it to the point where at least one face looked correct, but that doesn't help, because obviously I want all of them to be correct.

 

I hope you can follow my monstrous code. First, I'll share my shader code:

 

Vertex Shader:

#version 330

layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;

out vec2 texCoord0;

uniform mat4 transform;

void main()
{
    gl_Position = transform * vec4(position, 1.0);
    texCoord0 = texCoord;
}

Fragment Shader:

#version 330

in vec2 texCoord0;

uniform sampler2D sampler;

void main(void)
{
    gl_FragColor = texture2D(sampler, texCoord0.xy);
}

So, very simple shaders. The only thing I could think of is the deprecated gl_FragColor, but I would think that still works. Moving on to the code.

 

These are my Mesh and Vertex classes:

class Vertex
{
public:
	Vertex(Vector3f position, Vector2f texCoord);
	Vector3f position;
	Vector2f texCoord;
	Vector3f normal;
};

class Mesh : public GameObject
{
public:
	Mesh();
	Mesh(Shader *shader, std::vector<Vertex> Vertices, std::vector<int> Indices);
	Mesh(Shader *shader, std::string fileName);

	void Update();

	void Render();

	float Temp;

private:
	enum {
		VERTEX_VB,
		TEXCOORD_VB,

		NUM_BUFFERS
	};
	void Initialize(Shader *shader, std::vector<Vertex> Vertices, std::vector<int> Indices);
	std::vector<Vertex> Vertices;
	std::vector<int> Indices;
	std::vector<int> IndicesTexCoord;
	std::vector<Vector2f>texCoords;
	int iSize;
	Shader *shader;
	std::vector<GLfloat> GetVerticesPositionData();
	std::vector<GLfloat> GetVerticesTexCoordData();
	GLuint m_IndexBufferObject;
	GLuint m_VertexArrayObject;
	GLuint m_VertexArrayBuffers[NUM_BUFFERS];
};

Mind you, I'm not really using the Vertex texCoord member yet, and I haven't begun calculating normals at all.

 

Here is the code where I'm reading the OBJ file:

Mesh::Mesh(Shader *shader, std::string fileName)
{
	if (fileName.substr(fileName.find_last_of(".") + 1) != "obj")
	{
		std::cout << "Error: unsupported model format!\n";
	}
	else
	{
		std::vector<Vector3f> _positions;
		std::vector<Vector2f> _texCoords;
		std::ifstream file(fileName);
		float x, y, z;

		std::string line;
		while (std::getline(file, line)) // getline as the loop condition avoids the classic !eof() bug
		{
			std::string buffer;
			std::vector<std::string> tokens;
			std::stringstream ss(line);

			while (ss >> buffer)
			{
				tokens.push_back(buffer);
			}
			if (tokens.empty() || tokens[0] == "#")
				continue;
			else if (tokens[0] == "v" && tokens.size() >= 4)
			{
				x = (float)atof(tokens[1].c_str());
				y = (float)atof(tokens[2].c_str());
				z = (float)atof(tokens[3].c_str());

				_positions.push_back(Vector3f(x, y, z));
			}
			else if (tokens[0] == "vt")
			{
				float x = (float)atof(tokens[1].c_str());
				float y = (float)atof(tokens[2].c_str());
				texCoords.push_back(Vector2f(x, y));
				_texCoords.push_back(Vector2f(x, y));
			}
			else if (tokens[0] == "f")
			{
				Indices.push_back(atoi(tokens[1].c_str()) - 1);
				Indices.push_back(atoi(tokens[2].c_str()) - 1);
				Indices.push_back(atoi(tokens[3].c_str()) - 1);

				tokens[1].erase(tokens[1].begin(), tokens[1].begin() + tokens[1].find_first_of("/") + 1);
				tokens[2].erase(tokens[2].begin(), tokens[2].begin() + tokens[2].find_first_of("/") + 1);
				tokens[3].erase(tokens[3].begin(), tokens[3].begin() + tokens[3].find_first_of("/") + 1);

				IndicesTexCoord.push_back(atoi(tokens[1].c_str()) - 1);
				IndicesTexCoord.push_back(atoi(tokens[2].c_str()) - 1);
				IndicesTexCoord.push_back(atoi(tokens[3].c_str()) - 1);
			}

		}
		file.close();
		for (unsigned int i = 0; i < _positions.size(); i++)
			Vertices.push_back(Vertex(_positions[i], _texCoords[i]));
	}

	Initialize(shader, Vertices, Indices);
}

As you can see, I'm using IndicesTexCoord to store the texcoord part of each f v/vt token in the OBJ file, which as far as I know tells each vertex which texture coordinate to use.
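For reference, each corner of an OBJ face line is a token of the form v, v/vt, v/vt/vn, or v//vn. A small helper like the following (a sketch; the names are mine, not from the code above) splits one token into its three indices, which is a bit more robust than chaining atoi and erase:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Indices parsed from one OBJ face-corner token; 0 means "not present"
// (OBJ indices are 1-based, so 0 is safe as a sentinel).
struct FaceIndex { int v = 0, vt = 0, vn = 0; };

FaceIndex ParseFaceToken(const std::string &token)
{
    FaceIndex f;
    f.v = std::atoi(token.c_str()); // position index, always present

    std::size_t firstSlash = token.find('/');
    if (firstSlash != std::string::npos)
    {
        // texcoord index; atoi on "/vn" yields 0 for the "v//vn" form
        f.vt = std::atoi(token.c_str() + firstSlash + 1);

        std::size_t secondSlash = token.find('/', firstSlash + 1);
        if (secondSlash != std::string::npos)
            f.vn = std::atoi(token.c_str() + secondSlash + 1);
    }
    return f;
}
```

The returned indices are still 1-based as written in the file; subtract 1 before indexing into your arrays.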

 

And here's my Initialize function:

void Mesh::Initialize(Shader *shader, std::vector<Vertex> Vertices, std::vector<int> Indices)
{
	this->shader = shader;
	this->Vertices = Vertices;
	this->Indices = Indices;
	iSize = Indices.size();

	std::vector<GLfloat> vertexBufferData = GetVerticesPositionData();
	std::vector<GLfloat> texCoordData = GetVerticesTexCoordData();

	if (vertexBufferData.size() > 0)
	{
		glGenVertexArrays(1, &m_VertexArrayObject);
		glBindVertexArray(m_VertexArrayObject);

		glGenBuffers(NUM_BUFFERS, m_VertexArrayBuffers);
		glGenBuffers(1, &m_IndexBufferObject);

		glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[VERTEX_VB]);
		glBufferData(GL_ARRAY_BUFFER, sizeof(vertexBufferData[0]) * vertexBufferData.size(), vertexBufferData.data(), GL_STATIC_DRAW);

		if (texCoordData.size() > 0)
		{
			glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[TEXCOORD_VB]);
			glBufferData(GL_ARRAY_BUFFER, sizeof(texCoordData[0]) * texCoordData.size(), texCoordData.data(), GL_STATIC_DRAW);
		}

		if (Indices.size() > 0)
		{
			glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_IndexBufferObject);
			glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices[0]) * Indices.size(), Indices.data(), GL_STATIC_DRAW);
		}
	}
}

As you can see it all comes down to this GetVerticesTexCoordData() function:

std::vector<GLfloat> Mesh::GetVerticesTexCoordData()
{
	std::vector<GLfloat> texCoordData;

	for (unsigned int i = 0; i < Indices.size(); i++)
	{
		texCoordData.push_back(texCoords[IndicesTexCoord[i]].x);
		texCoordData.push_back(texCoords[IndicesTexCoord[i]].y);
	}

	return texCoordData;
}

Right here is where I'm sure I'm screwing up my calculation somehow.

 

I'm really pounding my head here. If somebody who can follow whatever it is I'm doing could help me out, I'd GREATLY appreciate it, because I just can't figure this out no matter which way I smash my head against the keyboard. O.o

 

I'd really appreciate any help. One last thing: since it's possible I'm screwing up my rendering somehow, I'll post my render code as well:

void Mesh::Render()
{
	if (Vertices.size() > 0)
	{
		if (shader)
			shader->bindShader();
		glEnableVertexAttribArray(0);
		glEnableVertexAttribArray(1);
		glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[VERTEX_VB]);
		glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

		glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[TEXCOORD_VB]);
		glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

		if (Indices.size() > 0)
			glDrawElements(GL_TRIANGLES, iSize, GL_UNSIGNED_INT, 0);
		else
			glDrawArrays(GL_TRIANGLES, 0, 3);
		glDisableVertexAttribArray(0);
		glDisableVertexAttribArray(1);
	}
}

Again, I'd REALLY appreciate any help.


Sanity check: when you load it in your program, do you have 24 vertices and 24 texture coordinates?

 

I'm guessing you have fewer vertices than texture coordinates, in which case you need to weld your vertices. Basically, for each unique vertex + texcoord combination you need to create a unique vertex, because OBJ files have separate indices per component, but OpenGL only supports a single index array.
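To illustrate, here is a minimal sketch of that welding step (the types and names are illustrative, not the classes from the post above): for each unique (position index, texcoord index) pair in the face list, emit one combined vertex, and reuse its slot whenever the same pair appears again. The result is one vertex array and one index array, which is what glDrawElements expects.

```cpp
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct WeldedVertex { Vec3 position; Vec2 texCoord; };

// posIndices/texIndices are 0-based and run per face corner (3 per triangle).
void WeldVertices(const std::vector<Vec3> &positions,
                  const std::vector<Vec2> &texCoords,
                  const std::vector<int> &posIndices,
                  const std::vector<int> &texIndices,
                  std::vector<WeldedVertex> &outVertices,
                  std::vector<unsigned int> &outIndices)
{
    // Maps a (position, texcoord) index pair to the welded vertex it produced.
    std::map<std::pair<int, int>, unsigned int> cache;

    for (std::size_t i = 0; i < posIndices.size(); ++i)
    {
        std::pair<int, int> key(posIndices[i], texIndices[i]);
        auto it = cache.find(key);
        if (it == cache.end())
        {
            // First time this combination appears: create a new vertex.
            WeldedVertex v = { positions[key.first], texCoords[key.second] };
            unsigned int newIndex = (unsigned int)outVertices.size();
            outVertices.push_back(v);
            cache[key] = newIndex;
            outIndices.push_back(newIndex);
        }
        else
        {
            outIndices.push_back(it->second); // reuse the existing vertex
        }
    }
}
```

For a cube exported with per-face UVs this yields 24 welded vertices (4 per face), since each corner position is shared by three faces with different texture coordinates.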


"Do you have backface culling turned on? Try it if not."

I do actually.

 

"Sanity check, when you load it in your program do you have 24 vertices and 24 texture coordinates? I'm guessing you have less vertices than texture coordinates, in which case you need to weld your vertices. Basically, for each unique vertex + texcoord combination, you need to create a unique vertex (because obj files have separate indices per component, but OpenGL only supports a single array of indices)."

 

See, I was wondering about that; it seemed weird. So, welding vertices, huh? I'll have to research that. Thank you so much, that makes perfect sense!

I believe you can use the "edge split" modifier to create hard edges.

In Blender, an object as a whole is either shaded flat or shaded smooth. With your cube selected, press space to open the command palette and select "shade smooth". Next, add "edge split" to the object's modifier stack. This modifier duplicates edges and vertices wherever neighboring faces meet at an angle greater than the specified threshold, and since all 12 of your cube's edges match that criterion, it should produce the familiar flat-shaded cube.
