

OzgurEra

Member Since 22 Jun 2012

Posts I've Made

In Topic: [SOLVED] [GLSL] Weird behavior with interface blocks.

04 February 2015 - 10:39 AM

It does sound like a bug. You have to specify the flat qualifier for integral out variables, e.g. flat uint DrawID;, so the compiler should have warned you about that.

 

Woah, thanks for the information!

Yeah, I guess that giving me no error about it was the bug..
I added the 'flat' qualifier to the 'uint' member in both shaders, and it worked as intended.
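
For reference, the corrected interface blocks now look roughly like this (same layout as in my post below, just with the 'flat' qualifier on the integer member):

// Vertex Shader
out InterfBlock {
	vec4 WorldPos;
	vec3 WorldNormal;
	vec2 TexCoord[2];
	vec3 CamDir;
	flat uint DrawID; // integers are not interpolated, so 'flat' is required
} OUT;

// Fragment Shader
in InterfBlock {
	vec4 WorldPos;
	vec3 WorldNormal;
	vec2 TexCoord[2];
	vec3 CamDir;
	flat uint DrawID; // must match the declaration in the vertex shader
} IN;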


In Topic: [SOLVED] [GLSL] Weird behavior with interface blocks.

03 February 2015 - 10:43 PM

Update!

Wow.. It seems like everything works perfectly if I simply use "float" instead of "uint" in the interface block.

I just changed it to:

// Vertex Shader
out InterfBlock {
	vec4 WorldPos;
	vec3 WorldNormal;
	vec2 TexCoord[2];
	vec3 CamDir;
 	float DrawID; // Change here
} OUT;

//Fragment Shader
in InterfBlock {
	vec4 WorldPos;
	vec3 WorldNormal;
	vec2 TexCoord[2];
	vec3 CamDir;
	float DrawID; // Change here
} IN;

and then simply cast my DrawIDs to float:

// Vertex shader
OUT.DrawID = float(DrawIDs);

// Fragment shader
vec3 color = texture2DArray(colorTexture, vec3(IN.TexCoord[0], IN.DrawID)).xyz;

Now everything works perfectly..

But what is the problem with "uint"? Was that a bug? C'mon, drivers..


In Topic: [SOLVED] [GLSL] Weird behavior with interface blocks.

03 February 2015 - 08:33 PM

Thanks for the reply!

 

My buffer class is just a wrapper around OpenGL.

I am filling the buffer like this:

 

// My own buffer class. Just a simple wrapper; tested, working..
unsigned int* buffer = (unsigned int*)drawids->MapRange(0, sizeof(unsigned int) * 6, BufferLock_Write);
for (int i = 0; i < 6; ++i)
    buffer[i] = i;
drawids->Unmap();

And then I simply enable the attribute and set the pointer (I already use glVertexAttribIPointer()):

glBindBuffer(GL_ARRAY_BUFFER, drawids->GetHandlerGL());
glEnableVertexAttribArray(8);
glVertexAttribIPointer(8, 1, GL_UNSIGNED_INT, 0, BUFFER_OFFSET(0));
glVertexAttribDivisor(8, 1);
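
For completeness (I didn't post it above), the matching input in my vertex shader is declared roughly like this, assuming explicit attribute location 8 as in the glVertexAttribIPointer() call:

// Per-instance draw ID, fed from attribute slot 8 with a divisor of 1
layout(location = 8) in uint DrawIDs;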

Actually, I know the attribute itself already works: when I only use it in the vertex shader to fetch each instance's matrix (there are only two instances, for testing), both instances successfully get their matrices. The problem is only that I cannot pass it to the fragment shader; if I don't try to pass it, everything works as intended.

And by "it stops working", literally just stops working. No GLSL error is generated, the glsl compiling looks failed but no information generated by glGetShaderInfoLog(), It returns an empty text with size 1. I know it failed since glGetShaderiv() with GL_COMPILE_STATUS returns GL_FALSE.
Actual code for checking compile errors, if you interested:
 

static void TestCompileError(String file, GLuint handler) {
	GLint result = 0;
	glGetShaderiv(handler, GL_COMPILE_STATUS, &result);

	if (result == GL_FALSE) {
		GLint length = 0;
		glGetShaderiv(handler, GL_INFO_LOG_LENGTH, &length);  // length includes the null terminator
		std::vector<char> buffer(length > 0 ? length : 1);
		GLsizei written = 0;
		glGetShaderInfoLog(handler, length, &written, &buffer[0]);
		std::string message(&buffer[0], written);  // use the returned length, without the terminator
		std::cout << file.c_str() << ": " << length << ": " << message << std::endl;
	}
}

I hope it is not a driver bug :(


In Topic: O(pow(N,12))

24 October 2014 - 02:09 PM

I'm truly amazed!


In Topic: Vertex Buffers in OpenGL

19 January 2013 - 08:46 PM

Actually, I won't need DirectX code. I'm just writing a renderer; I will implement it using OpenGL, but I want it to be flexible. So I just want to get things right now, so it won't be a problem when I decide to write a DirectX renderer later.

My VertexBuffer class keeps the system-memory pointers, and it seems that I can use them in both OpenGL and DirectX..

Thank you so much, mhagain.

