
rocklobster

Member Since 24 Jul 2011
Offline Last Active Apr 22 2014 04:29 AM

Topics I've Started

RTT working on ATI but not nVidia (OpenGL)

26 August 2013 - 08:27 AM

Hey guys,

 

Since the start of my hobby project I've been using one PC for development, which has an ATI card. I've just tried to run the project on my friend's computer and on my laptop (both have nVidia cards) and nothing renders: the screen is just the glClear(...) colour.

 

I did some debugging with glGetError and no errors show up. I removed my render target and rendered the scene straight to the default framebuffer, and then things started drawing (minus the post-processing). So this leads me to believe my framebuffer object isn't being created correctly for nVidia cards?

bool Graphics::CreateRenderTarget( RenderTarget* target )
{
	GLuint fbo;
	glGenFramebuffers(1, &fbo);
	glBindFramebuffer(GL_FRAMEBUFFER, fbo);
	target->SetFrameBufferId(fbo);

	for (uint32 i = 0; i < target->GetNumSources(); i++)
	{
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

		GLuint tex;
		glActiveTexture(GL_TEXTURE0 + i);
		glGenTextures(1, &tex);
		glBindTexture(GL_TEXTURE_2D, tex);
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, target->GetWidth(), target->GetHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
		glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, tex, 0);
		target->SetSourceId(i, tex);
	}

	if (target->UseDepth())
	{
		GLuint depth;
		glGenRenderbuffers(1, &depth);
		glBindRenderbuffer(GL_RENDERBUFFER, depth);
		glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, target->GetWidth(), target->GetHeight());
		target->SetDepthBufferId(depth);
		glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
	}

	glDrawBuffers(target->GetNumSources(), target->GetDrawBuffers());

	if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
	{
		return false;
	}
	return true;
}

Anything obviously incorrect? I can post more info if needed.
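For comparison, here is a minimal FBO-creation sketch with a single colour attachment, assuming a current OpenGL 3.2+ context and a loader such as GLEW; it is not the engine's CreateRenderTarget, and the formats used (GL_RGB32F colour, GL_DEPTH_COMPONENT24 depth) are just illustrative choices. The main ordering difference from the snippet above is that each texture is generated and bound before glTexParameteri is called on it:

#include <GL/glew.h>

// Sketch only: creates an FBO with one colour texture and a depth renderbuffer.
bool CreateSimpleFbo(int width, int height, GLuint& fbo, GLuint& colourTex, GLuint& depthRbo)
{
	glGenFramebuffers(1, &fbo);
	glBindFramebuffer(GL_FRAMEBUFFER, fbo);

	// Generate and bind the texture first, then set parameters so they apply
	// to this texture rather than to whatever was bound previously.
	glGenTextures(1, &colourTex);
	glBindTexture(GL_TEXTURE_2D, colourTex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps, so avoid mipmap filter modes
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);
	glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colourTex, 0);

	// Depth renderbuffer with an explicitly sized format.
	glGenRenderbuffers(1, &depthRbo);
	glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
	glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
	glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

	const GLenum drawBuffer = GL_COLOR_ATTACHMENT0;
	glDrawBuffers(1, &drawBuffer);

	bool complete = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
	glBindFramebuffer(GL_FRAMEBUFFER, 0); // restore the default framebuffer
	return complete;
}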


Directional lighting problems with transforms

18 August 2013 - 06:19 AM

Hey guys,

 

I'm implementing deferred shading and I've decided to calculate shading for a single global light in the first pass, then do all the other lights in the second pass. I'm not sure if I've got my normal, light, and vertex positions all in the correct space, and the lighting doesn't look right.

 

Vertex shader:

#version 400

layout (location = 0) in vec3 in_Position;
layout (location = 1) in vec3 in_Normal;
layout (location = 2) in vec3 in_TexCoord;
layout (location = 3) in vec3 in_Tangent;
layout (location = 4) in vec3 in_Binormal;

out vec3 PositionEye;
out vec3 NormalEye;
out vec3 TexCoord; 
out vec3 Tangent;
out vec3 Binormal;

out vec3 LightDirEye;

uniform vec3 GlobalLightPos;

uniform mat4 u_ModelMatrix;
uniform mat4 u_ViewMatrix;
uniform mat4 u_ModelViewMatrix;
uniform mat3 u_NormalMatrix;
uniform mat4 u_MVP;

void main()
{
	PositionEye = vec3(u_ModelViewMatrix * vec4(in_Position, 1.0));
	NormalEye =  u_NormalMatrix * in_Normal;
	TexCoord = in_TexCoord;
	
	vec3 lightDirWorld = normalize(GlobalLightPos - in_Position);
	LightDirEye = u_NormalMatrix * lightDirWorld;	
	
	gl_Position = u_MVP * vec4(in_Position, 1.0);
}

Fragment shader:

#version 400

in vec3 PositionEye;
in vec3 NormalEye;
in vec3 TexCoord;
in vec3 Tangent;
in vec3 Binormal;

in vec3 LightDirEye;

uniform sampler2D Texture;
uniform float TexRepeat;

layout (location = 0) out vec3 out_Position;
layout (location = 1) out vec3 out_Normal;
layout (location = 2) out vec3 out_Diffuse; 
layout (location = 3) out vec3 out_Tangent;
layout (location = 4) out vec3 out_Binormal;

vec3 shadePixel(vec3 diff)
{
	vec3 s = LightDirEye;
	float sDotN = max(dot(s, NormalEye), 0.0);
	vec3 diffuse = vec3(1.0, 1.0, 1.0) * diff * sDotN;

	return diffuse;
}

void main()
{
	vec3 diff = texture(Texture, TexCoord.xy * TexRepeat).rgb;

	out_Position = PositionEye;
	out_Normal = NormalEye;
	out_Diffuse = shadePixel(diff);
	out_Tangent = Tangent;
	out_Binormal = Binormal;
}

Uniforms:

glm::vec3 lightPos = m_sceneGraph->GetGlobalLightPos();
glm::mat4 viewMatrix = m_renderSystem->GetCamera()->GetViewMatrix();
glm::mat3 normalMat =  m_renderSystem->GetCamera()->GetNormalMatrix();

m_graphics->SetUniform(id, "GlobalLightPos", lightPos);
m_graphics->SetUniform(id, "u_ViewMatrix", 1, false, viewMatrix);
m_graphics->SetUniform(id, "u_NormalMatrix", 1, false, normalMat); 

glm::mat4 modelMatrix = modelNode->GetTransform();
glm::mat4 modelView = viewMatrix * modelMatrix;
glm::mat4 mvp = m_renderSystem->GetCamera()->GetProjectionMatrix() * modelView;

m_graphics->SetUniform(id, "u_ModelMatrix", 1, false, modelMatrix);
m_graphics->SetUniform(id, "u_ModelViewMatrix", 1, false, modelView);
m_graphics->SetUniform(id, "u_MVP", 1, false, mvp);

The light position is just glm::vec3(0.0f, 50.0f, 0.0f)

 

and the normal matrix is:

glm::mat3 Camera::GetNormalMatrix()
{
	return glm::transpose(glm::inverse(glm::mat3(m_view)));
}

I can't seem to work out what's going wrong here.
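For context, a common alternative (not necessarily the fix for the shader above) is to compute the eye-space direction of a directional light once on the CPU with the view matrix and upload it as a uniform, instead of deriving it per-vertex with the normal matrix. A rough glm sketch; the function name and the u_LightDirEye uniform are hypothetical, and it assumes the view matrix contains no non-uniform scale:

#include <glm/glm.hpp>

// Sketch only: returns the normalized eye-space direction towards a
// directional light defined by a world-space position such as (0, 50, 0).
glm::vec3 ComputeLightDirEye(const glm::mat4& viewMatrix, const glm::vec3& lightPosWorld)
{
	// A directional light's direction is constant, so rotating the world-space
	// direction by the upper 3x3 of the view matrix is enough.
	glm::vec3 lightDirWorld = glm::normalize(lightPosWorld);
	return glm::normalize(glm::mat3(viewMatrix) * lightDirWorld);
}

// Usage sketch (the uniform name is hypothetical):
// m_graphics->SetUniform(id, "u_LightDirEye", ComputeLightDirEye(viewMatrix, lightPos));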


Deferred Shading lighting stage

14 August 2013 - 09:33 AM

Hey guys,

 

I've been attempting to implement deferred shading lately and I'm a bit confused about the lighting stage.

 

This is the algorithm for what I do at the moment:

FirstPass

- Bind render target
- bind my G-buffer shader program
- render the scene

SecondPass

- Bind my normal shader for rendering the quad 
(fragment shader takes Position, Normal and Diffuse samplers as uniforms)
- Render full screen quad
- swap buffers

So am I right in assuming that, to light the scene, I should change the second pass to this:

SecondPass

- Bind my normal shader for rendering the quad 
(fragment shader takes Position, Normal and Diffuse samplers as uniforms)
- FOR EACH LIGHT IN THE SCENE
    - Set this light as uniform for the shader
    - Render full screen quad 
- END FOR
- swap buffers

And in my fragment shader I'll have code that shades each pixel based on the type of light passed in (directional, point or spot).
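That is roughly the standard approach, provided each light's contribution is accumulated with additive blending so later quads don't overwrite earlier ones. A rough sketch of the second pass; the Shader, Light and FullScreenQuad types and their methods are hypothetical stand-ins, not the engine's actual API:

#include <GL/glew.h>
#include <vector>

// Sketch of a multi-light second pass using additive blending.
void RenderLightingPass(Shader& lightShader, const std::vector<Light>& lights, FullScreenQuad& quad)
{
	glBindFramebuffer(GL_FRAMEBUFFER, 0);   // draw to the default framebuffer
	glClear(GL_COLOR_BUFFER_BIT);
	glDisable(GL_DEPTH_TEST);               // the quad covers the whole screen
	glEnable(GL_BLEND);
	glBlendFunc(GL_ONE, GL_ONE);            // additive: sum each light's contribution

	lightShader.Bind();                     // samples the Position/Normal/Diffuse G-buffer textures

	for (const Light& light : lights)
	{
		lightShader.SetLightUniforms(light); // type, position/direction, colour, attenuation, ...
		quad.Draw();
	}

	glDisable(GL_BLEND);
	// swap buffers afterwards, as in the original outline
}

Point and spot lights are often drawn as bounding volumes instead of full-screen quads to limit fill rate, but the full-screen loop above is the simplest starting point.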

 

Cheers for any help


Collada pivot node

11 August 2013 - 04:33 AM

What is jointOrientX? Why is it there? Why is there a sub-node with the suffix -pivot?

 

Can someone please explain this to me? Here is part of the .dae file:

<node name="Box012" id="Box012" sid="Box012">
        <translate sid="translate">91.838813 42.641855 -53.673642</translate>
        <rotate sid="jointOrientX">1 0 0 -90.000000</rotate>
        <scale sid="scale">2.540000 2.540000 2.540000</scale>
        <node id="Box012-Pivot" name="Box012-Pivot">
          <translate>0.000000 -13.999999 -0.128876</translate>
          <rotate>0.000000 0.000000 0.000000 0.000000</rotate>
          <instance_geometry url="#Box012-lib">
            <bind_material>
              <technique_common>
                <instance_material symbol="_04 - Default" target="#_04 - Default"/>
              </technique_common>
            </bind_material>
          </instance_geometry>
        </node>
        <extra>
          <technique profile="FCOLLADA">
            <visibility>1.000000</visibility>
          </technique>
        </extra>
      </node>

The object "Box012" is not rendered in the correct place, and I think it's because it isn't getting the transform of "Box012-Pivot". But the pivot is a sub-node, and it doesn't reference any mesh? I'm a bit confused about why it's even there. I don't remember having a pivot when I exported from 3ds Max.
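For what it's worth, a COLLADA importer normally treats every <node> element, including intermediate ones like "Box012-Pivot", as part of the transform hierarchy: the <translate>/<rotate>/<scale> elements of a node are composed in document order into a local matrix, and the world transform applied to an <instance_geometry> is the product of all local matrices from the root down to the node that holds it. The sid values such as "jointOrientX" are just identifiers (mainly used as animation targets) and don't change how the rotation is applied. A rough sketch with a hypothetical ColladaNode type (a real importer would fill it while parsing):

#include <glm/glm.hpp>
#include <vector>

// Hypothetical node type for illustration only.
struct ColladaNode
{
	glm::mat4 localTransform;          // translate * rotate * scale of this <node>, in document order
	std::vector<ColladaNode> children; // e.g. "Box012-Pivot" nested under "Box012"
	bool hasGeometry = false;          // true if the node contains <instance_geometry>
};

// Accumulates parent transforms so geometry instanced on a child node (such as
// the -Pivot node) also receives the parent's translate/rotate/scale.
void CollectWorldTransforms(const ColladaNode& node, const glm::mat4& parent,
                            std::vector<glm::mat4>& geometryTransforms)
{
	glm::mat4 world = parent * node.localTransform;
	if (node.hasGeometry)
		geometryTransforms.push_back(world);
	for (const ColladaNode& child : node.children)
		CollectWorldTransforms(child, world, geometryTransforms);
}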


Animating tiles in 2D

27 January 2013 - 10:54 AM

Hey guys,

 

I'm working on a little 2D framework (I usually do 3D stuff, so I'm not too sure how some things should be handled in 2D). My question is how to handle animation of tiles (e.g. water) that are stored in a vertex buffer with the rest of the grass (terrain) tiles. Basically I have a map which can be represented like this in a text file:

 

1 1 1 1
1 1 2 1
1 1 2 1
1 1 1 1

 

Each number represents a different type of tile. A tile is basically a texture coordinate (the bottom-left corner of the tile's location in the texture atlas) plus a number representing the size of a tile (e.g. 1.0f / 16.0f, because my atlas is 2048x2048 and I have 16 tiles in a row).

 

struct Tile
{
    float m_tileSize;      // width of one tile in texture coordinates (e.g. 1.0f / 16.0f)
    glm::vec2 m_texCoord;  // bottom-left corner of the tile in the atlas
};

 

My vertex buffer is constructed by iterating through two nested for loops (one for the width and one for the height of the map), constructing the vertices for each quad and using the texture coordinates of the corresponding tile in that position.
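Something along these lines, as a rough sketch: it assumes one quad per map cell, positions and texture coordinates kept in separate arrays (so the texture coordinates can be updated on their own later), the Tile struct shown above, and an index buffer with {0, 1, 2, 0, 2, 3} per quad for drawing:

#include <glm/glm.hpp>
#include <vector>

// Sketch only: builds per-quad corner positions and texture coordinates from
// the tile map. 'map[y][x]' is the tile id from the text file and 'tiles'
// maps that id to its atlas entry (the Tile struct above).
void BuildTileMesh(const std::vector<std::vector<int>>& map,
                   const std::vector<Tile>& tiles,
                   std::vector<glm::vec2>& positions,
                   std::vector<glm::vec2>& texCoords)
{
	// Corner order per quad: bottom-left, bottom-right, top-right, top-left.
	const glm::vec2 corners[4] = { {0.0f, 0.0f}, {1.0f, 0.0f}, {1.0f, 1.0f}, {0.0f, 1.0f} };

	for (std::size_t y = 0; y < map.size(); ++y)
	{
		for (std::size_t x = 0; x < map[y].size(); ++x)
		{
			const Tile& tile = tiles[map[y][x]];
			const glm::vec2 cell(static_cast<float>(x), static_cast<float>(y));

			for (const glm::vec2& c : corners)
			{
				positions.push_back(cell + c);                              // quads are 1x1 world units
				texCoords.push_back(tile.m_texCoord + c * tile.m_tileSize); // offset into the atlas
			}
		}
	}
}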

 

I want to expand my Tile class so it can contain animation information. Because I'm using a texture atlas (and not drawing each tile separately), I can't just swap the texture to animate; I have to update the texture coordinates. I'm not sure whether updating this VBO every frame just to modify the texture coordinates of animated tiles is a good idea. Worrying about that might be premature optimisation, but even so, I'd like some advice on how other people handle this kind of animation.
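One common approach, sketched below under the same assumptions as the builder above (texture coordinates in their own VBO, four corners per quad): give each animated tile a list of atlas frames and, whenever its frame changes, patch just that quad's four texture coordinates with glBufferSubData. The AnimatedTile struct and its fields are hypothetical:

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <vector>

// Hypothetical per-tile animation record.
struct AnimatedTile
{
	std::vector<glm::vec2> frames; // bottom-left atlas coordinate of each animation frame
	float tileSize;                // e.g. 1.0f / 16.0f
	std::size_t quadIndex;         // index of this tile's quad in the mesh
	int currentFrame = 0;
};

// Advances the animation and patches the quad's four texture coordinates in the
// dedicated texcoord VBO (created with GL_DYNAMIC_DRAW since it changes often).
void UpdateTileFrame(GLuint texCoordVbo, AnimatedTile& tile)
{
	tile.currentFrame = (tile.currentFrame + 1) % static_cast<int>(tile.frames.size());
	const glm::vec2 bl = tile.frames[tile.currentFrame];

	// Same corner order as the mesh builder: BL, BR, TR, TL.
	const glm::vec2 texCoords[4] = {
		bl,
		bl + glm::vec2(tile.tileSize, 0.0f),
		bl + glm::vec2(tile.tileSize, tile.tileSize),
		bl + glm::vec2(0.0f, tile.tileSize),
	};

	glBindBuffer(GL_ARRAY_BUFFER, texCoordVbo);
	glBufferSubData(GL_ARRAY_BUFFER,
	                tile.quadIndex * 4 * sizeof(glm::vec2), // byte offset of this quad's texcoords
	                sizeof(texCoords), texCoords);
}

Only a handful of tiles change on any given frame (and usually only a few times per second), so the data uploaded this way is tiny; drawing animated tiles in a separate batch with an atlas-offset uniform is another option that avoids touching the buffer at all.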

 

Thanks guys

