
Spirrwell

Member Since 11 Nov 2011
Offline Last Active Jan 13 2016 11:31 PM

Topics I've Started

Texture Coordinate Mapping (OBJ)

30 November 2015 - 08:01 AM

Hello everyone. I hope this is an okay place to post this; the question is specific to the OBJ file format, but I'm using OpenGL to apply the coordinates.

 

Anyway, I've tried every way I can think of to map the texture coordinates of this simple cube, and no matter what I do, the texture coordinates come out wrong on at least four of its faces.

 

In Blender, the cube is like so:

 

[Screenshot: the cube in Blender (npyYC56.png)]

 

But in my program, it comes out looking like this:

 

[Screenshot: the cube in my program (kzxacxk.png)]

 

 

I have gotten it to the point where at least one face looked correct, but that doesn't help much; obviously, I want all of them to be correct.

 

I hope you can follow my monstrous code. First, I'll share my shader code:

 

Vertex Shader:

#version 330

layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;

out vec2 texCoord0;

uniform mat4 transform;

void main()
{
    gl_Position = transform * vec4(position, 1.0);
    texCoord0 = texCoord;
}

Fragment Shader:

#version 330

in vec2 texCoord0;

uniform sampler2D sampler;

void main(void)
{
    gl_FragColor = texture2D(sampler, texCoord0.xy);
}

So, very simple shaders. The only suspect thing I could think of is my use of the deprecated gl_FragColor, but I would think that would still work. Moving on to the code.

 

These are my Mesh and Vertex classes:

class Vertex
{
public:
	Vertex(Vector3f position, Vector2f texCoord);
	Vector3f position;
	Vector2f texCoord;
	Vector3f normal;
};

class Mesh : public GameObject
{
public:
	Mesh();
	Mesh(Shader *shader, std::vector<Vertex> Vertices, std::vector<int> Indices);
	Mesh(Shader *shader, std::string fileName);

	void Update();

	void Render();

	float Temp;

private:
	enum {
		VERTEX_VB,
		TEXCOORD_VB,

		NUM_BUFFERS
	};
	void Initialize(Shader *shader, std::vector<Vertex> Vertices, std::vector<int> Indices);
	std::vector<Vertex> Vertices;
	std::vector<int> Indices;
	std::vector<int> IndicesTexCoord;
	std::vector<Vector2f>texCoords;
	int iSize;
	Shader *shader;
	std::vector<GLfloat> GetVerticesPositionData();
	std::vector<GLfloat> GetVerticesTexCoordData();
	GLuint m_IndexBufferObject;
	GLuint m_VertexArrayObject;
	GLuint m_VertexArrayBuffers[NUM_BUFFERS];
};

Mind you, I'm not actually using the texCoord member of Vertex, and I haven't even begun calculating normals yet.

 

Here is the code where I'm reading the OBJ file:

Mesh::Mesh(Shader *shader, std::string fileName)
{
	if (fileName.substr(fileName.find_last_of(".") + 1) != "obj")
	{
		std::cout << "Error: unsupported model format!\n";
	}
	else
	{
		std::vector<Vector3f> _positions;
		std::vector<Vector2f> _texCoords;
		std::ifstream file(fileName);
		float x, y, z;

		std::string line;
		while (std::getline(file, line))
		{
			std::string buffer;
			std::vector<std::string> tokens;
			std::stringstream ss(line);

			while (ss >> buffer)
			{
				tokens.push_back(buffer);
			}
			if (tokens.size() == 0 || tokens[0] == "#")
				continue;
			else if (tokens[0] == "v" && tokens[1] != "")
			{
				x = (float)atof(tokens[1].c_str());
				y = (float)atof(tokens[2].c_str());
				z = (float)atof(tokens[3].c_str());


				//std::coutTexCoords << "X: " << x << " Y: " << y << " Z: " << z << std::endl;
				_positions.push_back(Vector3f(x, y, z));

				//Vertices.push_back(Vector3f(
				//	x,
				//	y,
				//	z));
			}
			else if (tokens[0] == "vt")
			{
				float x = (float)atof(tokens[1].c_str());
				float y = (float)atof(tokens[2].c_str());
				texCoords.push_back(Vector2f( x, y));
				//texCoords.push_back(x);
				//texCoords.push_back(y);
				_texCoords.push_back(Vector2f(x, y));

				//std::cout << "U: " << x << " V: " << y << std::endl;
			}
			else if (tokens[0] == "f")
			{
				//std::cout << "f: " << atoi(tokens[1].c_str()) - 1 << " " << atoi(tokens[2].c_str()) - 1 << " " << atoi(tokens[3].c_str()) - 1 << std::endl;
				Indices.push_back(atoi(tokens[1].c_str()) - 1);
				Indices.push_back(atoi(tokens[2].c_str()) - 1);
				Indices.push_back(atoi(tokens[3].c_str()) - 1);

				tokens[1].erase(tokens[1].begin(), tokens[1].begin() + tokens[1].find_first_of("/") + 1);
				tokens[2].erase(tokens[2].begin(), tokens[2].begin() + tokens[2].find_first_of("/") + 1);
				tokens[3].erase(tokens[3].begin(), tokens[3].begin() + tokens[3].find_first_of("/") + 1);

				IndicesTexCoord.push_back(atoi(tokens[1].c_str()) - 1);
				IndicesTexCoord.push_back(atoi(tokens[2].c_str()) - 1);
				IndicesTexCoord.push_back(atoi(tokens[3].c_str()) - 1);
			}

		}
		file.close();
		for (unsigned int i = 0; i < _positions.size(); i++)
			Vertices.push_back(Vertex(_positions[i], _texCoords[i]));
	}

	Initialize(shader, Vertices, Indices);
}

As you can see, I'm using IndicesTexCoord to store the texture-coordinate index from each f v/vt face token in the OBJ file, which as far as I know tells each vertex which texture coordinate to use.
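To illustrate what I mean by those face tokens, here's a small standalone sketch, separate from my engine code (splitFaceToken is just a throwaway name), of splitting an OBJ face corner token into its slash-separated indices:

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Split an OBJ face corner token like "2", "2/5", or "2/5/1" into its
// slash-separated indices. OBJ indices are 1-based, so subtract 1 to get
// 0-based indices. Note: a token like "2//1" (position + normal, no
// texcoord) would need extra handling, since empty parts are skipped here.
std::vector<int> splitFaceToken(const std::string &token)
{
	std::vector<int> indices;
	std::stringstream ss(token);
	std::string part;
	while (std::getline(ss, part, '/'))
	{
		if (!part.empty())
			indices.push_back(std::atoi(part.c_str()) - 1);
	}
	return indices;
}
```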

 

And here's my Initialize function:

void Mesh::Initialize(Shader *shader, std::vector<Vertex> Vertices, std::vector<int> Indices)
{
	this->shader = shader;
	this->Vertices = Vertices;
	this->Indices = Indices;
	iSize = Indices.size();

	std::vector<GLfloat> vertexBufferData = GetVerticesPositionData();
	std::vector<GLfloat> texCoordData = GetVerticesTexCoordData();

	if (vertexBufferData.size() > 0)
	{
		glGenVertexArrays(1, &m_VertexArrayObject);
		glBindVertexArray(m_VertexArrayObject);

		glGenBuffers(NUM_BUFFERS, m_VertexArrayBuffers);
		glGenBuffers(1, &m_IndexBufferObject);

		glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[VERTEX_VB]);
		glBufferData(GL_ARRAY_BUFFER, sizeof(vertexBufferData[0]) * vertexBufferData.size(), vertexBufferData.data(), GL_STATIC_DRAW);

		if (texCoordData.size() > 0)
		{
			glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[TEXCOORD_VB]);
			glBufferData(GL_ARRAY_BUFFER, sizeof(texCoordData[0]) * texCoordData.size(), texCoordData.data(), GL_STATIC_DRAW);
		}

		if (Indices.size() > 0)
		{
			glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_IndexBufferObject);
			glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices[0]) * Indices.size(), Indices.data(), GL_STATIC_DRAW);
		}
	}
}

As you can see it all comes down to this GetVerticesTexCoordData() function:

std::vector<GLfloat> Mesh::GetVerticesTexCoordData()
{
	std::vector<GLfloat> texCoordData;

	for (unsigned int i = 0; i < Indices.size(); i++)
	{
		texCoordData.push_back(texCoords[IndicesTexCoord[i]].x);
		texCoordData.push_back(texCoords[IndicesTexCoord[i]].y);
	}

	return texCoordData;
}

Right here is where I'm sure I'm screwing up my calculation somehow.
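For reference, here's my understanding of the "expansion" approach as a standalone sketch (the names here are throwaway, not my actual classes): since glDrawElements uses a single index per vertex, each (position index, texcoord index) pair from a face gets expanded into its own flat vertex, so the two attribute buffers stay aligned by construction:

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// OpenGL's glDrawElements uses ONE index per vertex, but OBJ faces index
// positions and texcoords independently. Expanding each (position index,
// texcoord index) pair into its own flat vertex keeps the position and
// texcoord buffers aligned; you can then draw with glDrawArrays, or with
// a trivial index list 0, 1, 2, ... (de-duplicating repeated pairs is an
// optional optimization).
void expandVertices(const std::vector<Vec3> &positions,
                    const std::vector<Vec2> &texCoords,
                    const std::vector<int> &posIndices,
                    const std::vector<int> &uvIndices,
                    std::vector<float> &outPositions,
                    std::vector<float> &outTexCoords)
{
	for (std::size_t i = 0; i < posIndices.size(); ++i)
	{
		const Vec3 &p = positions[posIndices[i]];
		const Vec2 &t = texCoords[uvIndices[i]];
		outPositions.push_back(p.x);
		outPositions.push_back(p.y);
		outPositions.push_back(p.z);
		outTexCoords.push_back(t.x);
		outTexCoords.push_back(t.y);
	}
}
```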

 

I'm really pounding my head here. If somebody who can follow whatever it is I'm doing could please help me, I'd GREATLY appreciate it, because I just can't figure this out no matter which way I smash my head against the keyboard. O.o

 

One last thing: since it's possible I'm screwing up my rendering somehow, I'll post my render code as well:

void Mesh::Render()
{
	if (Vertices.size() > 0)
	{
		if (shader)
			shader->bindShader();
		glEnableVertexAttribArray(0);
		glEnableVertexAttribArray(1);
		glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[VERTEX_VB]);
		glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

		glBindBuffer(GL_ARRAY_BUFFER, m_VertexArrayBuffers[TEXCOORD_VB]);
		glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

		if (Indices.size() > 0)
			glDrawElements(GL_TRIANGLES, iSize, GL_UNSIGNED_INT, 0);
		else
			glDrawArrays(GL_TRIANGLES, 0, 3);
		glDisableVertexAttribArray(0);
		glDisableVertexAttribArray(1);
	}
}

Again, I'd REALLY appreciate any help.


(64 bit) Cross Platform GUI with OpenGL Support?

26 November 2015 - 10:40 AM

Hi there. I've been pounding my head against the wall with this one. I've tried wxWidgets, which I couldn't get to compile. I tried GTK+, but couldn't find meaningful directions and therefore couldn't get it to compile either. I tried Qt, which is the only one I actually got working, but I couldn't get it to integrate with OpenGL because of an odd lack of information.

 

 

Basically, all I'm trying to do is find a GUI library that can be compiled (or comes pre-compiled) for 64-bit MinGW-w64. I then want to use it to create an OpenGL viewport embedded in the application, alongside the usual widgets like buttons and labels.

 

I want something cross platform so that I can also do this on Linux, but I'm on Windows for now.

 

Does anybody know of a good, solid library that works in a 64-bit environment, perhaps one that's already pre-compiled? I want to use Code::Blocks and GitHub for the project to make cross-platform development easier, so I can develop no matter which OS I'm on, in either 64-bit or 32-bit.


Best Way to Achieve Transformations?

21 November 2015 - 11:19 AM

Hi there!

 

I've recently been getting pretty deep into learning OpenGL, and boy is it complicated. I've never dealt with matrices before; I never even covered matrix math in high school or the like.

 

Aside from that, I'm trying to understand the best way to perform transformations, which from what I gather is to build your own matrices (model, view, and projection) and multiply them together. Again, I don't understand them, nor have I really grasped what the model, view, and projection matrices actually are.

 

Up to this point I've been using glTranslate to achieve what I want as far as translations go. But in my reading I notice that people seem to use the shader to manipulate positions with matrices. This seems a little odd to me: do you need to do that, or is there a way to do the same thing without a shader? I know I'd be implementing shaders at some point anyway, which I'm already looking into, but still.
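To make my confusion concrete, here's a small sketch of what I understand a translation matrix to be, in plain C++ with no OpenGL involved (the function names are just for illustration). It uses the column-major convention that OpenGL and GLSL's mat4 * vec4 use:

```cpp
// Multiply a column-major 4x4 matrix by a 4-component vector, following
// the same convention GLSL uses for mat4 * vec4.
void mat4MulVec4(const float m[16], const float v[4], float out[4])
{
	for (int row = 0; row < 4; ++row)
	{
		out[row] = 0.0f;
		for (int col = 0; col < 4; ++col)
			out[row] += m[col * 4 + row] * v[col];
	}
}

// Build a column-major translation matrix, like glTranslate would:
// an identity matrix whose fourth column holds the offset.
void makeTranslation(float tx, float ty, float tz, float m[16])
{
	for (int i = 0; i < 16; ++i)
		m[i] = (i % 5 == 0) ? 1.0f : 0.0f;  // identity
	m[12] = tx;
	m[13] = ty;
	m[14] = tz;
}
```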

 

I'm just having a very difficult time wrapping my head around these concepts, especially trying to teach them to myself without a teacher. If somebody could please explain this to me like I'm an idiot (which I won't deny I kind of am), I'd greatly appreciate it.

 

Any help in understanding these concepts is much appreciated and I just wish I was smarter.


Using Physics to Generate Sound

17 October 2015 - 12:01 AM

Hi there.

I have a rather "out there" question which I'm sure the answer will be that there's a limit based on computational power. This may be a sound related question, but I feel my question is more related to physics.

To break down what I'm asking: traditionally in games we see pre-recorded audio that is played back and perhaps modified with effects like reverb based on the types of objects that surround the sound source, like water, metal, or wood.

I was wondering about sound that is actually generated from the physical interaction itself, like two sticks slammed against each other. Instead of playing a pre-recorded sound, the game would analyze the collision of the objects, what materials they are, and what vibration they would theoretically generate if those objects were real. In theory, if those objects collided differently, the result would always be a unique sound, even if it's still identifiable as two wooden objects slamming against each other.
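To make the idea concrete, here's the simplest version of what I'm imagining, as a standalone sketch: an impact synthesized as a single damped sine wave, where in a real system the frequency and decay would be derived from the material and the collision. All the names and numbers here are made up for illustration:

```cpp
#include <cmath>
#include <vector>

// Generate one "mode" of an impact sound: a sine wave at the object's
// resonant frequency, decaying exponentially over time. In modal
// synthesis, a real object is modeled as a sum of many such modes, with
// frequencies and damping derived from its material and shape, and
// amplitudes from the collision impulse.
std::vector<float> synthesizeImpact(float frequencyHz, float decayPerSecond,
                                    float amplitude, float durationSeconds,
                                    int sampleRate)
{
	std::vector<float> samples;
	const int count = static_cast<int>(durationSeconds * sampleRate);
	for (int i = 0; i < count; ++i)
	{
		float t = static_cast<float>(i) / sampleRate;
		float envelope = amplitude * std::exp(-decayPerSecond * t);
		samples.push_back(envelope * std::sin(2.0f * 3.14159265f * frequencyHz * t));
	}
	return samples;
}
```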

Is something like this even possible?

Do we not understand the physics well enough, or is it too much for an ordinary computer to handle because it would take too much information and computational power?

This is just a thought experiment really. I'm very curious.

Deciding on a Game Engine?

11 October 2014 - 01:19 AM

Alright, I know this is probably going to come across as horrible, asking something that's been asked hundreds of times before; a thread about engines for indie developers was even created recently. I'm just really, REALLY having a tough time figuring out what to go for.

 

To start off, I've worked with the Unity engine, UDK, and Source. I've looked around at other engines, what they're capable of, what I'm looking for in them specifically, and whether I can even actually use them.

 

So far the biggest challenge I've come across isn't so much a technical issue as a licensing one. Unity is free for both commercial and non-commercial use, but that comes at the cost of limits on what you can actually do with it. Since the pro version is essentially out of the question for me, I'm limited to the free version. It is a wonderful engine: while limited, it's incredibly flexible and cross platform. Still, it lacks dynamic lighting, video playback, and a load of other features.

 

With Unreal 4 having been released, I was very excited that it would support Linux and more platforms and be all-around better, but it requires a subscription... UDK is an absolute powerhouse with an excellent editor and interface. However, UDK is only UE3 and doesn't have Linux support, which dampens my interest.

 

Source... it's cross platform and it works, but the license fees... the bugs... it's built on an ancient engine... Enough said. Sadly, it's likely the engine I have the most experience with, and I like that it has separate programs, each with its own set of functions, rather than that whole all-in-one type of deal.

 

I kind of would like to go the homebrew engine route, as then I'd be limited purely by the libraries I use, and I'd learn a lot along the way (like how much I'd rather have a big-budget engine). XD

 

I had a look at CryEngine 3, but, well, licensing! I was kind of interested in trying the Blender game engine, as I'm familiar with Blender as a model editor; however, I'd need to learn Python, it doesn't seem like a fully fledged engine, and I don't know what licensing debacle it might bring. Really it seems like my best bet is Unity, but is that really it? Everywhere I look there's a wall somewhere, whether it be a license or a limitation.

 

I apologize for the multi-paragraph essay, but I'm just banging my head against the wall. All I want is a simple, flexible engine that's cross platform and isn't binding or limiting. That's probably asking way too much, but I figured I'd give it a shot!

