Write to Texture3D, gl_Layer problem

3 comments, last by Marc J 10 years, 9 months ago

Hello,
I have a problem with writing into a layer other than gl_Layer 0, with a 3D texture attached to my FBO. Here is my code for the texture and FBO generation:


glActiveTexture(GL_TEXTURE0);

glGenTextures(1, &tex3DLVRed);
glBindTexture(GL_TEXTURE_3D, tex3DLVRed);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA32F, 32, 32, 32, 0, GL_RGBA, GL_FLOAT, 0);
glBindTexture(GL_TEXTURE_3D, 0);


glGenFramebuffers(1, &fboLightInject);
glBindFramebuffer(GL_FRAMEBUFFER, fboLightInject);

// attach the whole 3D texture as a layered color attachment
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex3DLVRed, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
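
(Side note: a completeness check right after the attachment, while the FBO is still bound, can rule out an attachment problem. A minimal sketch using the standard glCheckFramebufferStatus call, reusing the names above:)

// Sketch: rebind and verify the layered 3D-texture attachment is complete
glBindFramebuffer(GL_FRAMEBUFFER, fboLightInject);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete: 0x%04X\n", status);   // needs <stdio.h>
glBindFramebuffer(GL_FRAMEBUFFER, 0);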

My display routine looks like:


glUseProgram(lightInjectProgram);
glViewport(0, 0, 32, 32);
glClearColor(0.f, 0.f, 0.f, 0.f);
glBindFramebuffer(GL_FRAMEBUFFER, fboLightInject);
glBindVertexArray(pointVAO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// one point per instance, 512*512 instances
glDrawArraysInstanced(GL_POINTS, 0, 1, 512*512);
glUseProgram(0);

My geometry shader looks like this:


#version 330
layout(points) in;
layout(points, max_vertices = 1) out;

void main() 
{
	gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
	gl_Layer = 1;   // write to slice 1 of the attached 3D texture
	EmitVertex();
	EndPrimitive();
}

With gl_Layer = 0 it works, but when I use any other value, for example 1, it does not produce any output. In the fragment shader I just output red for debugging purposes.

It would be great if anybody has an idea of what to try or where my mistake is.

I have an example program that works, so I am sure my hardware supports this, but the example is such a complex program that I seem to be overlooking the important calls.

Thank you


I believe you must enable mipmapping for the texture by setting the texture parameter GL_GENERATE_MIPMAP to GL_TRUE, but I also recall reading somewhere that using FBOs with mipmapping doesn't work.

I would get the texture loaded up by rendering to the FBO, then I would use glGenerateMipmap, which is an extension function:

http://www.opengl.org/wiki/GLAPI/glGenerateMipmap


glDrawArraysInstanced(GL_POINTS, 0, 1, 512*512);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, tex3DLVRed);
glGenerateMipmap(GL_TEXTURE_3D);

Do you mean like this?
If so, it does not work :(

Well, according to the documentation, gl_Layer is actually for designating which face of a cubemap to use - which is different.

Somehow I confused gl_Layer with sampler2D/3D lod parameter, which allows a fragment shader to specify which texture mipmap to sample from.

Can I ask what it is you are trying to achieve through the use of gl_Layer?

http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_Layer.xml

You are using glFramebufferTexture, when there is a glFramebufferTexture3D. EDIT: apparently the 3D variant is deprecated in favor of the function you are using, but it is still unclear to me what you are trying to accomplish.
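
(For context: with a 3D texture, glFramebufferTexture attaches all slices as a layered attachment, which is what lets a geometry shader pick the slice via gl_Layer, whereas glFramebufferTexture3D or glFramebufferTextureLayer attach a single slice and gl_Layer is then ignored. A rough sketch of the two options, reusing the names from the first post:)

// Layered attachment: all 32 slices attached; gl_Layer selects the slice
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex3DLVRed, 0);

// Non-layered alternative: attach exactly one slice (here slice 1);
// gl_Layer is ignored, so you would re-attach and redraw once per slice
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex3DLVRed, 0, 1);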

Yeah,

I am implementing Light Propagation Volumes right now; there I have several 2D textures which are accumulated into one 3D texture. To do this I have to write into a 3D texture or a 2D texture array, which is more or less the same when it comes to the implementation. For this I need to set gl_Layer to choose which slice of the 3D texture I want to write to.
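
(A minimal sketch of that idea: the geometry shader just forwards a per-point slice index to gl_Layer. Here vLayer is an assumed flat int varying that the vertex shader would compute, for example from gl_InstanceID:)

#version 330
layout(points) in;
layout(points, max_vertices = 1) out;

flat in int vLayer[];   // slice index, assumed to come from the vertex shader

void main()
{
    gl_Position = gl_in[0].gl_Position;
    gl_Layer    = vLayer[0];   // write this point into slice vLayer[0]
    EmitVertex();
    EndPrimitive();
}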

And yes, you are right that gl_Layer can be used for cubemaps, but one can use it for 3D textures as well. I have working and understandable example code right now, but I have to adapt it to my program and then find out where it differs from my version.

If you have another suggestion for how to do this, please tell me.

This topic is closed to new replies.
