

nkarasch

Member Since 01 Nov 2012
Offline Last Active Nov 01 2013 10:22 AM

Topics I've Started

Euler angle rotations

26 October 2013 - 06:23 PM

First, rotate by alpha around the z axis, then rotate by beta around the new y' axis, and finally rotate by gamma around the new z'' axis.

 

How would you accomplish this? I know how to do rotations around the fixed X, Y, and Z axes, but those only apply to the initial rotation here, right? It's the rotations around the "new" axes that are throwing me off.
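The trick: with column vectors, each rotation about a "new" axis multiplies onto the right of the matrix you have so far, so intrinsic z-y'-z'' is just R = Rz(alpha) * Ry(beta) * Rz(gamma) built from ordinary fixed-axis matrices. A minimal sketch in plain Java (class and method names are made up, 3x3 row-major arrays):

    public class EulerZYZ {

        // Rotation about the fixed Z axis by a radians.
        static double[][] rotZ(double a) {
            return new double[][] {
                { Math.cos(a), -Math.sin(a), 0 },
                { Math.sin(a),  Math.cos(a), 0 },
                { 0,            0,           1 }
            };
        }

        // Rotation about the fixed Y axis by a radians.
        static double[][] rotY(double a) {
            return new double[][] {
                {  Math.cos(a), 0, Math.sin(a) },
                {  0,           1, 0           },
                { -Math.sin(a), 0, Math.cos(a) }
            };
        }

        // Standard 3x3 matrix product r = m * n.
        static double[][] mul(double[][] m, double[][] n) {
            double[][] r = new double[3][3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    for (int k = 0; k < 3; k++)
                        r[i][j] += m[i][k] * n[k][j];
            return r;
        }

        // alpha about z, then beta about the new y', then gamma about the new z''.
        static double[][] eulerZYZ(double alpha, double beta, double gamma) {
            return mul(rotZ(alpha), mul(rotY(beta), rotZ(gamma)));
        }
    }

Equivalently, the same matrix is what you get from fixed-axis rotations in reverse order: gamma about z, then beta about y, then alpha about z.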


All matrix operations in shader?

30 September 2013 - 04:51 PM

I asked my Graphics Programming professor if I could use something I'm familiar with, like GLM, and he said no; we are supposed to do everything in the vertex shader. From my experience/understanding, it makes more sense to calculate the model-view-projection matrix once per model for a simple program like this and pass it in through a uniform.

 

I think he wants us to create all of the matrices in the shader code. Can that ever make sense? I don't understand why you would EVER create your main view and projection matrices within a shader. Thanks
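For contrast, a minimal sketch of the compute-once-per-model approach the post argues for, assuming LWJGL 2's org.lwjgl.util.vector.Matrix4f and a vertex shader that declares uniform mat4 mvp; (the class, method, and uniform names are made up):

    import java.nio.FloatBuffer;

    import org.lwjgl.BufferUtils;
    import org.lwjgl.util.vector.Matrix4f;

    import static org.lwjgl.opengl.GL20.*;

    public class MvpUpload {

        private static final FloatBuffer MAT_BUF = BufferUtils.createFloatBuffer(16);

        // Build projection * view * model once on the CPU, then upload it,
        // so the vertex shader only does one mat4 * vec4 per vertex.
        static void uploadMvp(int shaderProgram, Matrix4f projection, Matrix4f view, Matrix4f model) {
            Matrix4f vp = new Matrix4f();
            Matrix4f.mul(projection, view, vp);   // vp  = projection * view
            Matrix4f mvp = new Matrix4f();
            Matrix4f.mul(vp, model, mvp);         // mvp = projection * view * model

            MAT_BUF.clear();
            mvp.store(MAT_BUF);                   // Matrix4f.store writes column-major
            MAT_BUF.flip();

            glUseProgram(shaderProgram);          // uniforms apply to the bound program
            int loc = glGetUniformLocation(shaderProgram, "mvp");
            glUniformMatrix4(loc, false, MAT_BUF); // already column-major, no transpose
        }
    }

Building the view and projection matrices inside the vertex shader would redo the same work for every vertex, which is why passing them in as uniforms is the usual approach.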


Android GLES1 2D collision detection

30 April 2013 - 02:04 AM

So far I can draw textured quads (triangle strips) and move them, and it all works independently of screen resolution and aspect ratio. I'm hitting a huge snag when it comes to collision detection. My only prior experience was with fixed resolutions and no rotation.

 

How do you guys handle it? Is there an easy way to get my objects' vertices after a translation and rotation? I know how to peek at the model-view matrix, but it isn't much help as far as I can tell. I feel like all of my problems would be solved if I could just get the post-transform coordinates of my vertex array.
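One way out, sketched below in plain Java (all names made up): keep the transform parameters yourself and apply them to the model-space corners on the CPU, which gives exactly those post-transform coordinates without reading anything back from GL. This matches a modelview built with glTranslatef followed by glRotatef about z:

    public class Quad2D {
        // Model-space corners of a unit quad centred on the origin.
        final float[][] corners = { {-0.5f, -0.5f}, {0.5f, -0.5f}, {0.5f, 0.5f}, {-0.5f, 0.5f} };
        float x, y;   // translation, same values passed to glTranslatef
        float angle;  // rotation about z in radians (glRotatef takes degrees)

        // World-space corners: rotate each corner, then translate it.
        float[][] worldCorners() {
            float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
            float[][] out = new float[4][2];
            for (int i = 0; i < 4; i++) {
                float px = corners[i][0], py = corners[i][1];
                out[i][0] = c * px - s * py + x;
                out[i][1] = s * px + c * py + y;
            }
            return out;
        }
    }

With the world-space corners in hand, rotated quads can then be tested against each other with a standard separating-axis check.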


Passing attributes to shaders

03 January 2013 - 04:01 PM

I'm pretty confused and sick of failing. I searched like crazy and resisted the urge to make a new thread for a long time, but here it is.

 

I don't feel like my attributes are getting passed into my shaders properly. Here is my code for binding objects to VBOs

(this is LWJGL)

 

    int vaoId = glGenVertexArrays();
    glBindVertexArray(vaoId);

    int vboVertexHandle = glGenBuffers();
    int vboTextureHandle = glGenBuffers();
    int vboNormalHandle = glGenBuffers();

    // Positions -> attribute index 0 (glVertexAttribPointer reads whichever
    // buffer is currently bound to GL_ARRAY_BUFFER)
    glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
    glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

    // Texture coordinates -> attribute index 1
    glBindBuffer(GL_ARRAY_BUFFER, vboTextureHandle);
    glBufferData(GL_ARRAY_BUFFER, textureCoordinates, GL_STATIC_DRAW);
    glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);

    // Normals -> attribute index 2
    glBindBuffer(GL_ARRAY_BUFFER, vboNormalHandle);
    glBufferData(GL_ARRAY_BUFFER, normals, GL_STATIC_DRAW);
    glVertexAttribPointer(2, 3, GL_FLOAT, false, 0, 0);

    // The enabled/disabled state of each attribute array is part of the VAO
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
    return vaoId;

When I call glVertexAttribPointer(), the first number can be whatever I want... right? It's the attribute index that the data will be found at when the VAO is used later?

 

and then I did this

	glBindAttribLocation(shaderProgram, 0, "VertexPosition");
	glBindAttribLocation(shaderProgram, 1, "TextureCoordinate");
	glBindAttribLocation(shaderProgram, 2, "Normals");

before using the shader, and tried to access the information in the vertex shader like this:

in vec3 VertexPosition;

Is this right? What is missing?

 

Also, if I'm using GLSL 4.0, is using the location like this

layout (location = 0) in vec3 VertexPosition;

a direct replacement for this?

glBindAttribLocation(shaderProgram, 0, "VertexPosition");
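One detail worth checking: glBindAttribLocation only takes effect when the program is linked, so a common cause of attributes "not getting through" is calling it after glLinkProgram. A sketch of the ordering, assuming vertShader and fragShader are already-compiled shader ids (those names are made up):

    int shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertShader);
    glAttachShader(shaderProgram, fragShader);

    // Bind BEFORE linking; the bindings are recorded at link time.
    glBindAttribLocation(shaderProgram, 0, "VertexPosition");
    glBindAttribLocation(shaderProgram, 1, "TextureCoordinate");
    glBindAttribLocation(shaderProgram, 2, "Normals");

    glLinkProgram(shaderProgram);

And yes: layout (location = 0), available from GLSL 3.30 on, is a direct replacement for glBindAttribLocation; if both are present, the layout qualifier takes precedence.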

 


Miscellaneous OpenGL questions (mostly dealing with shaders)

28 December 2012 - 04:29 PM

I'm trying to learn the modern, fully programmable OpenGL using LWJGL. At the moment I know how to parse model files, use them in VBOs, and draw them to the screen. I also have a first-person style camera that lets me move around freely and look at whatever I want.

 

Is glLight completely out of date? All lighting should be done with shaders, right?

I know how to apply shaders to models. Is there a way to create a fragment shader that acts on everything? Is that the modern approach to lighting?
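There is no shader that globally "acts on everything"; the modern pattern is simply to bind the same program for every draw call and do the lighting per fragment. A minimal sketch of such a shader pair, embedded as Java strings the way LWJGL code usually carries GLSL (all uniform and variable names here are made up):

    public class DiffuseShaderSource {
        static final String VERT =
            "#version 330\n" +
            "layout(location = 0) in vec3 position;\n" +
            "layout(location = 2) in vec3 normal;\n" +
            "uniform mat4 mvp;\n" +
            "out vec3 vNormal;\n" +
            "void main() {\n" +
            "    vNormal = normal;\n" +   // fine as long as there is no non-uniform scaling
            "    gl_Position = mvp * vec4(position, 1.0);\n" +
            "}\n";

        static final String FRAG =
            "#version 330\n" +
            "in vec3 vNormal;\n" +
            "uniform vec3 lightDir;\n" +  // direction toward the light, normalized
            "uniform vec4 baseColor;\n" +
            "out vec4 fragColor;\n" +
            "void main() {\n" +
            "    float diff = max(dot(normalize(vNormal), lightDir), 0.0);\n" +
            "    fragColor = vec4(baseColor.rgb * diff, baseColor.a);\n" +
            "}\n";
    }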

 

My render code for models looks like this:

    glPushMatrix();                    // save the current modelview
    glRotatef(rotation, rx, ry, rz);   // these three post-multiply the modelview,
    glTranslatef(x, y, z);             // so each vertex is scaled first, then
    glScalef(sx, sy, sz);              // translated, then rotated
    glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
    glNormalPointer(GL_FLOAT, 0, 0L);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glUseProgram(shaderProgram);       // program must be bound before glDrawArrays
    glColor4f(r, g, b, 1f);
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 10f);

    glDrawArrays(GL_TRIANGLES, 0, model.getFaces().size() * 3);
    glUseProgram(0);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glPopMatrix();                     // restore the saved modelview

Am I enabling and disabling the shader program at the correct times? Are glColor4f and glMaterialf worthless if shaders are used properly? Am I pushing and popping the matrix at the right time?
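On the glColor4f/glMaterialf question: a compatibility-profile shader can still read them through built-ins like gl_Color, but modern GLSL only sees the inputs you declare, so the usual replacement is to pass the same values as uniforms. A sketch, reusing the r, g, b variables from the code above (the uniform names baseColor and shininess are made up):

    glUseProgram(shaderProgram);
    glUniform4f(glGetUniformLocation(shaderProgram, "baseColor"), r, g, b, 1f);
    glUniform1f(glGetUniformLocation(shaderProgram, "shininess"), 10f);
    // ... bind buffers and glDrawArrays as above ...
    glUseProgram(0);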

 

Thanks! I've read a lot, but even with my linear algebra experience, a lot of the math is hard to visualize.

