Solid_Spy

OpenGL How to send a CubeMap texture to a compute shader?


Hello. I am having a bit of trouble trying to get a cubemap sent over to a compute shader I am working on.

 

- I am using OpenGL 4.3.

- The cube map is loaded from a file, and loaded successfully; I was able to view it in gDEBugger. Each face is 16x16.

- The compute shader compiles just fine. I am also able to output values I manually type into the compute shader.

- I am using Windows 8.

- The MinGW compiler.

- The GLEW and GLFW libraries.

- And yes, my graphics card does support OpenGL 4.3.

 

For some reason, whenever I sample even just one pixel from the cube map in the compute shader, all I get is zero.

 

Here is the code on the C++ side:

void BaseRenderer::RenderHarmonicCoefficients()
{
        //This is just me getting the shader.
	mpActiveRenObjs->curShaderProgram = mpShaderManager->GetShaderObject(SHADER_COMPUTE_HARMONIC_COEFFICIENTS);

	glUseProgram(mpActiveRenObjs->curShaderProgram);

	GLuint dataBufferHarmonicCoefficients;

        //The buffers are created just fine.
	glGenBuffers(1, &dataBufferHarmonicCoefficients);

	glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1,
			dataBufferHarmonicCoefficients);
	glBufferData(GL_SHADER_STORAGE_BUFFER,
			sizeof(GLfloat) * 9 * 4,
			harmonicCoefficients,
			GL_DYNAMIC_DRAW);

        SendUniform(UNIFORM_1I, "sceneCubeMap", 0);
        ChangeTextureSlot(GL_TEXTURE0);

        //Now this is where I am having trouble. I am not sure if I am even
        //doing this right.
	glBindImageTexture(0,
			mCubeMapPreIrradiance,
			0,
			GL_TRUE,
			0,
			GL_READ_ONLY,
			GL_RGBA32F);

        //Start the compute.
	glDispatchCompute(1, 1, 1);

	//Wait for the shader writes to land, then extract the harmonic coefficients.
	glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
	glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, 0); //unbind index 1, where the SSBO was bound

	glBindBuffer(GL_SHADER_STORAGE_BUFFER, dataBufferHarmonicCoefficients);

	glm::vec4* outCoefficientData = reinterpret_cast<glm::vec4*>(glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_ONLY));

	for(unsigned int i = 0; i < 9; i++)
	{
		harmonicCoefficients[i] = outCoefficientData[i];
	}

	glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
        
        //This returns the y value of the second harmonic coefficient, which is zero.
        //All of the other ones return zero as well.

	Msg("Coefficient.", boost::lexical_cast<std::string>(harmonicCoefficients[1].y).c_str());
}

And the GLSL shader:

//COMPUTE SHADER

#version 430

const uint numOfUVWVecs = 1536;

//Computes 9 harmonic coefficients
layout(local_size_x = 9, local_size_y = 1, local_size_z = 1)in;

layout(rgba32f, binding = 0) readonly uniform imageCube sceneCubeMap;

layout(std430, binding = 1) 
writeonly buffer PrimaryDataChunkHarmonicCoefficients
{
	vec4 harmonicCoefficients[9];
};

void main()
{
        //Trying to get the middle pixel on the positive x face of the cubemap. This doesn't work.
	vec3 currentCoefficient = imageLoad(sceneCubeMap, ivec3(8, 8, 0)).xyz;
	
	//return the final coefficient.

        //Doesn't work! Returns zero.
	harmonicCoefficients[gl_GlobalInvocationID.x] = 
		vec4(vec3(currentCoefficient), 0.0);

        //But this works.
	//harmonicCoefficients[gl_GlobalInvocationID.x] = 
	//    vec4(vec3(1.0, 1.0, 1.0), 0.0);
}

Edit: I changed the layered parameter to GL_TRUE, as I heard it needs to be enabled for cubemaps, but it is still not working. The texture makes it to the C++ function; I'm just not sure it is being bound to the compute shader properly. I have never had issues with regular 2D textures, and I hear cubemaps can work in compute shaders, so I have no idea what I am doing wrong T_T.

 

Edit 2: Added the uniform location and changed the current texture slot to GL_TEXTURE0. Still not working.


Ok, I am starting to believe that the reason this isn't working is that the cubemap needs to be immutable, which I read in this article:

http://malideveloper.arm.com/resources/sample-code/introduction-compute-shaders-2/

 

"A very important restriction for using shader images is that the underlying texture must have been allocated using “immutable” storage, i.e. via glTexStorage*()-like functions, and not glTexImage2D()."

 

I have modified my cube map creation code like so; however, I am not sure if I am doing it right. gDEBugger will not display the texture (probably because immutable textures require OpenGL 4.2 or above, and gDEBugger targets lower versions of OpenGL). EDIT: I made the texture immutable and successfully tested it in a vertex/pixel shader, but in the compute shader things still aren't working.

	glGenTextures(1, &cubeMapObject);
	glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapObject);

	int levels = (int)std::floor(std::log2(16.0)) + 1; //full mip chain for a 16x16 face = 5 levels

	glTexStorage2D(GL_TEXTURE_CUBE_MAP, levels, GL_RGBA8, 16, 16);

	glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X,
			0,
			0,
			0,
			16,
			16,
			GL_BGRA,
			GL_UNSIGNED_BYTE,
			cubeMapSideDataLeft);

	glTexSubImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
			0,
			0,
			0,
			16,
			16,
			GL_BGRA,
			GL_UNSIGNED_BYTE,
			cubeMapSideDataRight);

	glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
			0,
			0,
			0,
			16,
			16,
			GL_BGRA,
			GL_UNSIGNED_BYTE,
			cubeMapSideDataUp);

	glTexSubImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
			0,
			0,
			0,
			16,
			16,
			GL_BGRA,
			GL_UNSIGNED_BYTE,
			cubeMapSideDataDown);

	glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
			0,
			0,
			0,
			16,
			16,
			GL_BGRA,
			GL_UNSIGNED_BYTE,
			cubeMapSideDataFront);

	glTexSubImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z,
			0,
			0,
			0,
			16,
			16,
			GL_BGRA,
			GL_UNSIGNED_BYTE,
			cubeMapSideDataBack);

	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S,
					GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T,
					GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R,
					GL_CLAMP_TO_EDGE);

I decided to ditch cubemaps for now and use image2Ds instead, and was surprised to see that those don't work either!

 

Something is seriously wrong. I am able to get image2D to work in other compute shaders, so I suspect the root of the problem is the SSBO; all of these issues started when I began using them. Maybe it is even a driver bug, because I am fairly certain I am implementing them correctly, having checked several SSBO tutorials. I don't want to give up the SSBO for storing 9 vectors, but it looks like I may have no choice.

 

I guess I have no choice but to use another texture, one that is write-only, to store the harmonic coefficients somehow... until anyone can help me figure out what the hell is going on here.


Success!

 

I found the issue. At first I thought GL_RGBA8 and GL_RGBA32F were equivalent, for some reason :p. I changed my texture storage to GL_RGBA32F, and now it is officially working.
