Chris_F

Member Since 04 Oct 2010
Offline Last Active Mar 29 2016 10:09 AM

Topics I've Started

What am I doing wrong with these vertex attributes?

14 August 2015 - 04:17 PM

My shader has the following attributes:

layout(location = 0) in float pos_x;
layout(location = 1) in float pos_y;
layout(location = 2) in float rotation;
layout(location = 3) in vec2 scale;
layout(location = 4) in uint TextureID;

And the GL code is as follows:

glNamedBufferStorage(buffers[0], sprite_count * sizeof(float), sprites.position_x.data(), GL_DYNAMIC_STORAGE_BIT);
glNamedBufferStorage(buffers[1], sprite_count * sizeof(float), sprites.position_y.data(), GL_DYNAMIC_STORAGE_BIT);
glNamedBufferStorage(buffers[2], sprite_count * sizeof(float), sprites.rotation.data(), GL_DYNAMIC_STORAGE_BIT);
glNamedBufferStorage(buffers[3], sprite_count * sizeof(glm::vec2), sprites.scale.data(), GL_DYNAMIC_STORAGE_BIT);
glNamedBufferStorage(buffers[4], sprite_count * sizeof(unsigned int), sprites.texid.data(), GL_DYNAMIC_STORAGE_BIT);
 
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayVertexBuffer(vao, 0, buffers[0], 0, sizeof(float));
glVertexArrayBindingDivisor(vao, 0, 1);
glVertexArrayAttribFormat(vao, 0, 1, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);
 
glEnableVertexArrayAttrib(vao, 1);
glVertexArrayVertexBuffer(vao, 1, buffers[1], 0, sizeof(float));
glVertexArrayBindingDivisor(vao, 1, 1);
glVertexArrayAttribFormat(vao, 0, 1, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 1, 1);
 
glEnableVertexArrayAttrib(vao, 2);
glVertexArrayVertexBuffer(vao, 2, buffers[2], 0, sizeof(float));
glVertexArrayBindingDivisor(vao, 2, 1);
glVertexArrayAttribFormat(vao, 0, 1, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 2, 2);
 
glEnableVertexArrayAttrib(vao, 3);
glVertexArrayVertexBuffer(vao, 3, buffers[3], 0, sizeof(glm::vec2));
glVertexArrayBindingDivisor(vao, 3, 1);
glVertexArrayAttribFormat(vao, 0, 2, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 3, 3);
 
glEnableVertexArrayAttrib(vao, 4);
glVertexArrayVertexBuffer(vao, 4, buffers[4], 0, sizeof(unsigned int));
glVertexArrayBindingDivisor(vao, 4, 1);
glVertexArrayAttribFormat(vao, 0, 1, GL_INT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 4, 4);

 

I seem to be getting nonsensical values from pos_x, but everything else is fine.
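
A sketch of the likely fix, in case it helps: all five glVertexArrayAttribFormat calls above pass attribute index 0, so each call keeps redefining attribute 0's format (which would explain pos_x coming out as garbage while the other attributes happen to work), and a uint input needs the integer variant, glVertexArrayAttribIFormat, to avoid conversion to float:

glVertexArrayAttribFormat(vao, 0, 1, GL_FLOAT, GL_FALSE, 0);   // pos_x
glVertexArrayAttribFormat(vao, 1, 1, GL_FLOAT, GL_FALSE, 0);   // pos_y
glVertexArrayAttribFormat(vao, 2, 1, GL_FLOAT, GL_FALSE, 0);   // rotation
glVertexArrayAttribFormat(vao, 3, 2, GL_FLOAT, GL_FALSE, 0);   // scale
glVertexArrayAttribIFormat(vao, 4, 1, GL_UNSIGNED_INT, 0);     // TextureID, integer path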


Trilinear texture filtering

31 July 2015 - 03:40 PM

I was wondering if anyone knew where I could find some information on how trilinear texture filtering is implemented on GPUs. I remember that in the past, GPU vendors would claim their cards could perform one trilinearly filtered texture sample per cycle. It would be interesting to know the architectural details of how that was accomplished, and how things may have changed now that GPU architectures have become more general purpose. Information on how it might be efficiently implemented in software using SIMD would also be welcome.
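
As a reference point for the software side, here is a scalar sketch of what a trilinear fetch computes: a lerp between bilinear samples from two adjacent mip levels. The Mip struct and single-channel layout are assumptions for illustration only.

#include <algorithm>
#include <cmath>
#include <vector>

struct Mip { int w, h; const float* texels; };  // one mip level, single channel

static float texel(const Mip& m, int x, int y)  // clamp-to-edge addressing
{
    x = std::max(0, std::min(x, m.w - 1));
    y = std::max(0, std::min(y, m.h - 1));
    return m.texels[y * m.w + x];
}

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

static float bilinear(const Mip& m, float u, float v)  // u, v in [0, 1]
{
    float x = u * m.w - 0.5f, y = v * m.h - 0.5f;   // texel-center convention
    int   x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    float top = lerp(texel(m, x0, y0),     texel(m, x0 + 1, y0),     fx);
    float bot = lerp(texel(m, x0, y0 + 1), texel(m, x0 + 1, y0 + 1), fx);
    return lerp(top, bot, fy);
}

float trilinear(const std::vector<Mip>& mips, float u, float v, float lod)
{
    lod = std::max(0.0f, std::min(lod, (float)(mips.size() - 1)));
    int l0 = (int)lod, l1 = std::min(l0 + 1, (int)mips.size() - 1);
    // Blend the two bilinear taps by the fractional LOD.
    return lerp(bilinear(mips[l0], u, v), bilinear(mips[l1], u, v), lod - l0);
}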


Irradiance environment map and visibility

26 April 2015 - 09:17 AM

OK, so I have a question. If I wanted to render an object with an unfiltered environment map and take self-occlusion into account, I could render the object to a hemicube from a point on its surface, effectively giving me a visibility mask that can be multiplied with the environment map. What should you do, however, if you are using a filtered irradiance environment map?
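
To put it in equation form, the quantity being approximated is the irradiance with a visibility term,

E(\mathbf{n}) = \int_{\Omega(\mathbf{n})} L(\omega) \, V(\omega) \, \max(0, \mathbf{n} \cdot \omega) \, d\omega

and prefiltering bakes the L(\omega) \max(0, \mathbf{n} \cdot \omega) part into the map ahead of time, so the per-point visibility V can no longer be applied as a simple multiplicative mask.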

Efficient 24/32-bit sRGB to linear float image conversion on CPU

11 April 2015 - 01:53 PM

Does anyone know of some efficient ways of converting 24/32-bit sRGB to linear floating point on the CPU? I don't have access to a CPU with AVX2 instructions yet, but I am intrigued by the new gather instructions. I was thinking these could be used for this type of conversion, as in the example below. The LUT would be 256 × 4 bytes, so I imagine it would fit entirely into the L1 data cache.

#include <immintrin.h>

// Converts two RGBA8 pixels per call: widen the low 8 bytes to eight
// 32-bit indices, then gather the corresponding linear floats from the LUT.
__m256 RGBA8toRGBA32F(const char* pixel_data, const float* LUT)
{
    return _mm256_i32gather_ps(LUT, _mm256_cvtepu8_epi32(_mm_loadu_si128((const __m128i*)pixel_data)), 4);
}
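
For completeness, a sketch of how the 256-entry LUT itself could be filled, using the standard sRGB decoding curve:

#include <cmath>

void build_srgb_to_linear_lut(float lut[256])
{
    for (int i = 0; i < 256; ++i) {
        float c = i / 255.0f;
        // Standard sRGB EOTF: linear segment for small values, 2.4 power above.
        lut[i] = (c <= 0.04045f) ? c / 12.92f
                                 : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }
}

One caveat with the gather version: the alpha channel gets pushed through the same sRGB curve as the color channels, which is usually not what you want for a linear alpha.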

Sampling theory

15 March 2015 - 04:41 AM

So it's early in the morning, I haven't had my coffee, and I am trying to wrap my head around a question that I believe is related to sampling theory. Specifically, I want to know why taking the average of four pixels is not an ideal 50% image reduction algorithm. When I first had to write a mipmap generator it seemed like the obvious thing to do, and it wasn't at all apparent to me that this wasn't a perfect solution. Having read a bit about sampling theory, I know a little better now, but I am still unsure of the details of why exactly it doesn't work well. The basic explanation I found was that simple averaging leaves in some frequencies above Nyquist, causing aliasing, while also removing some frequencies below Nyquist, causing blurring. If this is true, can someone point me in the direction of a better explanation of why?
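
One way to make that concrete: the two-tap average h = {1/2, 1/2} has magnitude response |H(f)| = |cos(pi f)|, with f in cycles per original sample, while an ideal half-band filter for a 2:1 reduction would pass everything below f = 0.25 and stop everything above it. A small sketch that tabulates the response (the mipmap case is the same idea applied per axis):

#include <cmath>
#include <cstdio>

int main()
{
    // Magnitude response of the 2-tap box filter h = {0.5, 0.5}.
    // An ideal filter for 2:1 reduction would be 1 below f = 0.25, 0 above.
    for (float f = 0.0f; f <= 0.5001f; f += 0.125f)
        std::printf("f = %.3f  |H(f)| = %.3f\n", f, std::fabs(std::cos(3.14159265f * f)));
    // ~0.924 at f = 0.125: in-band attenuation, i.e. blurring.
    // ~0.383 at f = 0.375: out-of-band leakage that aliases after decimation.
    return 0;
}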

 

Just to test things empirically, I created an image with uniform noise and used Photoshop to do a 50% bilinear reduction. A 50% bilinear reduction should be equivalent to a simple four-pixel average. Looking at a histogram of the image, it is obvious that the distribution of values now fits a Gaussian curve, as I expected. I was curious to know what was happening in the frequency domain, so this time I generated an audio sample with uniform noise and reduced its length by 50% using averaging. I loaded the sample into Audacity and performed a frequency analysis, only to find that after averaging the frequency distribution was still uniform, when I was expecting it to become Gaussian as well.
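
Regarding the histogram half of the experiment: the mean of four independent uniform samples follows a scaled Irwin-Hall distribution, which is already bell-shaped, so the Gaussian-looking histogram is just the central limit theorem showing up at n = 4. A quick Monte Carlo check:

#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int hist[20] = {};
    for (int i = 0; i < 1000000; ++i) {
        // Average of four uniforms, as in a single 2x2 box-filter tap.
        double avg = (u(rng) + u(rng) + u(rng) + u(rng)) / 4.0;
        ++hist[(int)(avg * 20.0)];
    }
    for (int b = 0; b < 20; ++b)
        std::printf("[%.2f, %.2f)  %d\n", b / 20.0, (b + 1) / 20.0, hist[b]);
    return 0;
}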

