Chris_F

Member Since 04 Oct 2010
-----

Topics I've Started

Array of samplers vs. texture array

18 March 2014 - 11:26 PM

I'm curious what the tradeoffs would be between using an array of samplers with the bindless texture extension and using texture arrays. It seems likely there would be more overhead with sampler arrays. On the other hand, with texture arrays you are limited to 2048 slices on a lot of modern GPUs. With bindless you could potentially have far more than that, and each texture can have its own dimensions and sampler properties.


mat3x3 array issue

15 March 2014 - 11:58 AM

When I try to upload a buffer of mat3x3s to a storage buffer, the matrices don't come through correctly and nothing renders.

layout(packed, binding = 0) readonly buffer matrix_buffer
{
    mat3x3 matrix[];
};

...

gl_Position = vec4(vec3(pos, 0.0f) * matrix[gl_InstanceID], 1.0); // doesn't work (nothing rendered)

gl_Position = vec4(vec3(pos, 0.0f) * mat3(1.0), 1.0); // works

However, I have no issues when using mat4x4.

layout(packed, binding = 0) readonly buffer matrix_buffer
{
    mat4x4 matrix[];
};

...

gl_Position = vec4(pos, 0.0f, 1.0) * matrix[gl_InstanceID]; // works

My GL code basically looks like this:

// example 1: mat3 (doesn't work)
std::vector<glm::mat3> matrix(8192, glm::mat3(1.0f));
glNamedBufferStorageEXT(buffer, sizeof(matrix[0]) * matrix.size(), matrix.data(), 0);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer);

// example 2: mat4 (works)
std::vector<glm::mat4> matrix(8192, glm::mat4(1.0f));
glNamedBufferStorageEXT(buffer, sizeof(matrix[0]) * matrix.size(), matrix.data(), 0);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer);
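A likely explanation (my assumption, not something confirmed above): under the std140/std430-style layouts drivers typically use, each mat3 column gets a vec4-sized slot, so the GPU-side stride is 48 bytes, while glm::mat3 is tightly packed at 36 bytes. mat4 matches on both sides, which would be why only the mat3 case breaks. A minimal C++ sketch of a padded upload type (PaddedMat3 and pad_mat3 are my own names):

```cpp
#include <cstring>

// Hypothetical matrix type matching the padded buffer layout:
// each vec3 column occupies a 16-byte (vec4) slot, giving a
// 48-byte stride instead of the 36 bytes of a tight 3x3 float matrix.
struct PaddedMat3 {
    float col[3][4];  // 4th float of each column is padding
};

// Convert a tightly packed, column-major 3x3 matrix (9 floats)
// into the padded layout before uploading it to the storage buffer.
PaddedMat3 pad_mat3(const float tight[9]) {
    PaddedMat3 m{};
    for (int c = 0; c < 3; ++c)
        std::memcpy(m.col[c], tight + 3 * c, 3 * sizeof(float));
    return m;
}
```

Uploading PaddedMat3 values (or simply storing mat3x4/mat4 in the buffer) would sidestep the stride mismatch; querying the block members with glGetProgramResourceiv would confirm the layout the driver actually chose.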

TexSubImage2D performance

19 February 2014 - 06:31 PM

I was curious to see the performance of texture uploads with my configuration using OpenGL, and I noticed something I think is odd. I create a 4K texture using glTexStorage2D with one MIP level and a format of GL_RGBA8. Then, every frame, I use glTexSubImage2D to re-upload a static image buffer to the texture. Based on the frame rate I get about 5.19 GB/s. Next, I changed the format of the texture to GL_SRGB8_ALPHA8 and retried the experiment. This time I get 2.81 GB/s, a significant decrease. This seems odd because, as far as I know, there shouldn't be anything different about uploading sRGB data versus uploading RGB data: no conversion should take place at upload time (sRGB decoding happens in the shader, during sampling).
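For reference, the quoted figures follow from the upload size and the frame rate; a minimal sketch of that arithmetic (function name is mine, and it assumes the whole texture is re-uploaded once per frame):

```cpp
// Effective upload bandwidth when a full texture is re-uploaded every frame.
// A 4096x4096 GL_RGBA8 texture is 64 MiB per upload, so the measured GB/s
// is just bytes-per-frame times frames-per-second.
double upload_bandwidth_gbs(int width, int height, int bytes_per_texel, double fps) {
    double bytes_per_frame = double(width) * double(height) * bytes_per_texel;
    return bytes_per_frame * fps / 1e9;  // decimal GB/s
}
```

For example, 4096x4096 at 4 bytes per texel and 100 FPS works out to about 6.71 GB/s, so the 5.19 GB/s figure corresponds to roughly 77 FPS.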

 

Some additional information: all I'm rendering is a fullscreen quad with a pixel shader that simply outputs vec4(1); I'm not even sampling from the texture or doing anything else each frame other than calling glTexSubImage2D. For the first test I use GL_RGBA and GL_UNSIGNED_INT_8_8_8_8_REV in the call to glTexSubImage2D, as this is what the driver tells me is ideal. For the second test I use GL_UNSIGNED_INT_8_8_8_8, per the driver's suggestion. A bit of testing confirms that these are the fastest formats to use in each case. This is on an Nvidia GPU.
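As an aside on those two transfer types: GL_UNSIGNED_INT_8_8_8_8 treats each texel as one 32-bit value with the first format component in the most significant byte, while the _REV variant puts it in the least significant byte, so on a little-endian CPU the _REV form matches a plain R,G,B,A byte stream. A small sketch of the two packings (function names are mine):

```cpp
#include <cstdint>

// GL_UNSIGNED_INT_8_8_8_8: first component in the most significant byte.
uint32_t pack_8888(uint8_t c0, uint8_t c1, uint8_t c2, uint8_t c3) {
    return (uint32_t(c0) << 24) | (uint32_t(c1) << 16) |
           (uint32_t(c2) << 8)  |  uint32_t(c3);
}

// GL_UNSIGNED_INT_8_8_8_8_REV: first component in the least significant byte.
uint32_t pack_8888_rev(uint8_t c0, uint8_t c1, uint8_t c2, uint8_t c3) {
    return (uint32_t(c3) << 24) | (uint32_t(c2) << 16) |
           (uint32_t(c1) << 8)  |  uint32_t(c0);
}
```

That the driver prefers a different component order for the two internal formats suggests their in-memory layouts differ, which could account for some of the bandwidth gap.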


Bindless texture bug?

18 February 2014 - 07:54 PM

I'm trying out bindless textures and I noticed what I think may be a driver bug, but I am not certain.

 

Basically, if my shader looks like this:

#version 440 core
#extension GL_ARB_bindless_texture : require
 
layout(location = 0) uniform sampler2D texture0;

I get an error saying that sampler handle updates are not allowed if the bindless_sampler qualifier is not set. Fair enough. Change the shader to this and all is well.

#version 440 core
#extension GL_ARB_bindless_texture : require
 
layout(location = 0, bindless_sampler) uniform sampler2D texture0;

However, if I do this:

#version 440 core
 
layout(location = 0) uniform sampler2D texture0;

Then I get no errors and everything works fine, despite the fact that I am still using a bindless handle. The GL code is:

GLuint64 texture_handle = glGetTextureHandleARB(texture);
glMakeTextureHandleResidentARB(texture_handle);
glProgramUniformHandleui64ARB(shader_program, 0, texture_handle);

Is this a driver bug? (Buffer Textures)

11 February 2014 - 07:31 AM

I am messing around with programmable vertex pulling, using buffer textures to store indices and vertex attributes. The problem is that when I try to bind the buffer texture containing my indices to GL_TEXTURE0, it doesn't work. If I bind it to any other texture unit, it works. Please excuse the mess. http://pastebin.com/tKQSDPN2

 

Line 254:

glBindMultiTextureEXT(GL_TEXTURE0, GL_TEXTURE_BUFFER, TextureName[TEXTURE_INDEX]);

Change to (for example):

glBindMultiTextureEXT(GL_TEXTURE10, GL_TEXTURE_BUFFER, TextureName[TEXTURE_INDEX]);

Line 287:

glProgramUniform1i(shader_program, 0, 0);

Change to:

glProgramUniform1i(shader_program, 0, 10);

With that change, it suddenly works for me.

