

Chris_F

Member Since 04 Oct 2010
Offline Last Active Yesterday, 09:29 PM

Topics I've Started

Irradiance environment map and visibility

26 April 2015 - 09:17 AM

OK, so I have a question. If I wanted to render an object with a non-filtered environment map and take self-occlusion into account, I could render the object to a hemicube from a point on its surface, effectively giving me a visibility mask that can be multiplied with the environment map. What should I do, however, if I am using a filtered irradiance environment map?
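
To make the unfiltered case concrete, here is a rough sketch (the names env and vis are just placeholders: env holds the environment map as RGB texels, vis holds the 0/1 occlusion mask from the hemicube render, both indexed by the same direction samples):

#include <cstddef>

// Multiply each environment texel by its visibility sample.
void MaskEnvironment(float (*env)[3], const float* vis, size_t texelCount)
{
    for (size_t i = 0; i < texelCount; ++i)
        for (int c = 0; c < 3; ++c)
            env[i][c] *= vis[i];
}

With a prefiltered irradiance map there is no longer a one-to-one texel correspondence to mask, which is exactly my question.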

Efficient 24/32-bit sRGB to linear float image conversion on CPU

11 April 2015 - 01:53 PM

Does anyone know of an efficient way to convert 24/32-bit sRGB to linear floating point on the CPU? I don't have access to a CPU with AVX2 instructions yet, but I am intrigued by the new gather instructions. I was thinking they could be used for this kind of conversion, as in the example below. The LUT would be 256 × 4 bytes, so I imagine it would fit entirely into the L1 data cache.

#include <immintrin.h>

// Converts two RGBA8 pixels (8 bytes) to eight linear floats via the LUT.
// Note all four channels, alpha included, go through the sRGB table.
__m256 RGBA8toRGBA32F(const char* pixel_data, const float* LUT)
{
    // Load only the 8 bytes used (unaligned-safe), widen to 8 x int32, gather.
    return _mm256_i32gather_ps(LUT,
        _mm256_cvtepu8_epi32(_mm_loadl_epi64((const __m128i*)pixel_data)), 4);
}
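
For reference, the table itself would just be the standard sRGB decode curve tabulated for all 256 byte values (a quick sketch; the function name is mine):

#include <cmath>

// Tabulate the standard sRGB -> linear transfer function (256 floats = 1 KiB).
void BuildSRGBToLinearLUT(float LUT[256])
{
    for (int i = 0; i < 256; ++i)
    {
        float c = i / 255.0f;
        LUT[i] = (c <= 0.04045f) ? c / 12.92f
                                 : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }
}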

Sampling theory

15 March 2015 - 04:41 AM

So it's early in the morning, I haven't had my coffee, and I am trying to wrap my head around a question that I believe is related to sampling theory. Specifically, I want to know why taking the average of four pixels is not an ideal 50% image reduction algorithm. When I first had to write a mipmap generator it seemed like the obvious thing to do, and it wasn't at all apparent to me that it isn't a perfect solution. Having read a bit about sampling theory I know a little better now, but I am still unsure of the details of why exactly it doesn't work well. The basic explanation I found was that a simple average leaves in some frequencies above Nyquist, causing aliasing, while also attenuating some frequencies below Nyquist, causing blurring. If this is true, can someone point me toward a better explanation of why?
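
To put numbers on those two failure modes, here is a little sketch of mine tabulating the magnitude response of the 1-D two-tap average, |H(f)| = |cos(pi f)|, against the ideal brick-wall filter for 2:1 decimation (the new Nyquist sits at f = 0.25 cycles/sample of the original signal):

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979323846;
    for (double f = 0.0; f <= 0.501; f += 0.05)
    {
        double box   = std::fabs(std::cos(pi * f));   // two-tap average response
        double ideal = (f < 0.25) ? 1.0 : 0.0;        // brick-wall decimation filter
        std::printf("f = %.2f   box = %.3f   ideal = %.0f\n", f, box, ideal);
    }
    return 0;
}

At the new Nyquist the box is already down to about 0.71 (content that should survive gets blurred), while at f = 0.35 it still passes about 0.45 of the amplitude (content that should be gone aliases back).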


Just to test things empirically, I created an image with uniform noise and used Photoshop to do a 50% bilinear reduction, which should be equivalent to a simple four-pixel average. Looking at the histogram afterwards, it is obvious that the distribution of values now fits a Gaussian curve, as I expected. I was curious what was happening in the frequency domain, so this time I generated an audio sample with uniform noise and reduced its length by 50% using averaging. I loaded the sample into Audacity and performed a frequency analysis, only to find that after averaging the frequency distribution was still uniform, when I was expecting it to also be Gaussian.


Particle alpha blending

06 October 2014 - 11:25 AM

I want to render some soft round particles. Currently I'm rendering white quads with an alpha value calculated as:

float alpha = 1.0f - clamp(length(position), 0.0f, 1.0f);

The blending function looks like:

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
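
For reference, with the default blend equation that combination computes a straight lerp per color channel, i.e. something like this (blend_channel is just an illustrative name):

// out = src * src_a + dst * (1 - src_a), applied per color channel
float blend_channel(float src, float src_a, float dst)
{
    return src * src_a + dst * (1.0f - src_a);
}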

But the particles seem to subtract from each other around the outside (dark halo):

[Attached image: example.jpg, showing the dark halo between overlapping particles]


Help with using WGL

24 September 2014 - 05:11 PM

Can someone help me out? I'm looking for a bare-minimum example (preferably a single file) of how to set up a core-profile context with an sRGB framebuffer using WGL.
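
To show the shape of what I'm after, here is my rough sketch of just the interesting part. It skips the throwaway legacy window/context you need first in order to fetch the two ARB entry points via wglGetProcAddress; hdc is assumed to be the device context of the real window, and the constants are copied from the WGL extension specs:

// Constants from WGL_ARB_pixel_format, WGL_ARB_framebuffer_sRGB
// and WGL_ARB_create_context:
#define WGL_DRAW_TO_WINDOW_ARB            0x2001
#define WGL_SUPPORT_OPENGL_ARB            0x2010
#define WGL_DOUBLE_BUFFER_ARB             0x2011
#define WGL_PIXEL_TYPE_ARB                0x2013
#define WGL_COLOR_BITS_ARB                0x2014
#define WGL_TYPE_RGBA_ARB                 0x202B
#define WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB  0x20A9
#define WGL_CONTEXT_MAJOR_VERSION_ARB     0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB     0x2092
#define WGL_CONTEXT_PROFILE_MASK_ARB      0x9126
#define WGL_CONTEXT_CORE_PROFILE_BIT_ARB  0x00000001

const int pixelAttribs[] = {
    WGL_DRAW_TO_WINDOW_ARB,           TRUE,
    WGL_SUPPORT_OPENGL_ARB,           TRUE,
    WGL_DOUBLE_BUFFER_ARB,            TRUE,
    WGL_PIXEL_TYPE_ARB,               WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,               32,
    WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB, TRUE,
    0
};

int format = 0;
UINT numFormats = 0;
wglChoosePixelFormatARB(hdc, pixelAttribs, NULL, 1, &format, &numFormats);

PIXELFORMATDESCRIPTOR pfd = {0};
DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
SetPixelFormat(hdc, format, &pfd);

const int contextAttribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};

HGLRC rc = wglCreateContextAttribsARB(hdc, NULL, contextAttribs);
wglMakeCurrent(hdc, rc);
glEnable(GL_FRAMEBUFFER_SRGB);  // sRGB encoding on writes to the default framebuffer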

