

Member Since 04 Oct 2010
Offline Last Active Today, 05:33 AM

#5244044 Trilinear texture filtering

Posted by Chris_F on 01 August 2015 - 02:16 PM

I think I'd be more interested in how they manage to hide the memory latency considering that in the case of a monochrome texture they have to read 8 bytes, maybe from 8 different cache lines.

#5243901 Trilinear texture filtering

Posted by Chris_F on 31 July 2015 - 03:40 PM

I was wondering if anyone knew where I could find some information on how trilinear texture filtering is implemented on GPUs. I remember that in the past GPU vendors would claim that their cards could perform a trilinear filtered texture sample per cycle. It would be interesting to know the architectural details of how that was accomplished and how things may have changed now that GPU architectures have become more general purpose. Information on how it might be efficiently implemented in software using SIMD would also be welcome.
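To make the arithmetic concrete, here is a rough software sketch of trilinear filtering: one bilinear sample from each of the two adjacent mip levels (4 texels each), then a lerp across levels. This is my own simplified illustration (integer texel coordinates, wrap addressing, no half-texel centering), not a description of any vendor's hardware; a real SIMD version would vectorize the fetches and lerps across many pixels at once.

```python
import numpy as np

def bilinear(tex, u, v):
    """Bilinear sample of a 2D monochrome texture at texel-space (u, v)."""
    h, w = tex.shape
    x0, y0 = int(np.floor(u)) % w, int(np.floor(v)) % h
    x1, y1 = (x0 + 1) % w, (y0 + 1) % h          # wrap addressing
    fx, fy = u - np.floor(u), v - np.floor(v)    # fractional weights
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear(mips, u, v, lod):
    """Trilinear = lerp between bilinear samples of two adjacent mip levels.
    `mips` is a list of 2D arrays, mips[0] being the base level."""
    lo = int(np.floor(lod))
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo
    scale_lo, scale_hi = 2.0 ** lo, 2.0 ** hi    # texel coords shrink per level
    a = bilinear(mips[lo], u / scale_lo, v / scale_lo)
    b = bilinear(mips[hi], u / scale_hi, v / scale_hi)
    return a * (1 - f) + b * f
```

Counting operations: 8 texel reads and 7 lerps per sample, which is exactly why hiding the latency of those 8 potentially scattered reads is the interesting part.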

#5237778 Mipmapping a texture after being written by FBO

Posted by Chris_F on 30 June 2015 - 07:54 PM

well it doesn't seem to work


I'm sorry to hear that.

#5224112 Glossy reflections - how to do a proper blur

Posted by Chris_F on 17 April 2015 - 07:14 PM


#5215501 Vulkan is Next-Gen OpenGL

Posted by Chris_F on 09 March 2015 - 03:20 PM


No, Intel dedicates a huge portion of the die to the HD/Iris graphics cores.

According to AnandTech's article on Iris Pro (see last paragraph), Intel is dedicating somewhere around 65% of their total die area to the GPU in this generation.



Yeah, and it feels like such a waste of space and transistors when you have a dedicated GPU and it goes unused. Those transistors would be better spent on more cores. 16 core i7 when?

#5215466 Vulkan is Next-Gen OpenGL

Posted by Chris_F on 09 March 2015 - 12:35 PM

It seems you can explicitly query and use different GPUs in the system. Does that mean I can use dedicated + integrated GPUs in the same application?


That's the idea. You should even be able to use radically different GPUs from different vendors together, though the amount of support for this will vary.


Thinking about it, do integrated GPUs like Intel's HD/Iris use separate hardware on the die or do they basically use the vector stuff (AVX/SSE) of the available cores?


No, Intel dedicates a huge portion of the die to the HD/Iris graphics cores.

#5209166 glTexSubImage3D, invalid enum

Posted by Chris_F on 06 February 2015 - 04:34 PM

If this doesn't produce any errors from AMD's driver then it looks like you've uncovered another bug in their implementation. I'm so glad I don't have an AMD GPU anymore. Trying to develop modern OpenGL on one was a nightmare.

#5207775 Ways to render a massive amount of sprites.

Posted by Chris_F on 30 January 2015 - 06:32 PM

4096 sprites rendered using instancing. I found out that H.264 really hates this as it seems to be about as compressible as white noise.



This one came out better.



I was using instancing with programmable vertex pulling. The limiting factor is definitely fill rate.

#5202628 Highest number of samples for SSAO?

Posted by Chris_F on 07 January 2015 - 11:57 AM

I think you must be doing something wrong. Why are the pillars close to the camera and the floor so occluded?

#5193711 Compression questions

Posted by Chris_F on 19 November 2014 - 09:02 PM

This is a nice article about compressing normals (in a g-buffer): http://aras-p.info/texts/CompactNormalStorage.html


In general, the trend on modern hardware seems to be that math gets cheaper and cheaper while bandwidth gets (relatively) more expensive. So compression at the cost of a few ops is often worthwhile. It does depend on the specific hardware and use case though (and I'm no expert).
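A well-known example of that math-for-bandwidth trade is octahedral normal encoding, which the unit-vector survey linked later in this post covers in detail. Here is a rough Python sketch of that scheme; the function names are mine and this is my own illustration, not code from either link:

```python
import numpy as np

def _sign_not_zero(v):
    # Like sign(), but maps 0 to +1 so the fold below stays well defined.
    return np.where(v >= 0.0, 1.0, -1.0)

def oct_encode(n):
    """Map a unit vector to 2D octahedral coordinates in [-1, 1]^2."""
    n = np.asarray(n, dtype=np.float64)
    p = n[:2] / np.sum(np.abs(n))             # project onto the octahedron
    if n[2] < 0.0:                            # fold the lower hemisphere over
        p = (1.0 - np.abs(p[::-1])) * _sign_not_zero(p)
    return p

def oct_decode(p):
    """Inverse of oct_encode: 2D coords back to a unit vector."""
    p = np.asarray(p, dtype=np.float64)
    n = np.array([p[0], p[1], 1.0 - np.abs(p[0]) - np.abs(p[1])])
    if n[2] < 0.0:                            # undo the fold
        n[:2] = (1.0 - np.abs(n[1::-1])) * _sign_not_zero(n[:2])
    return n / np.linalg.norm(n)
```

Two values stored instead of three, at the cost of a handful of adds, multiplies, and a normalize on decode.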


I think half floats would struggle a bit to cover 1000m at 0.1m intervals. A half float is only 16 bits so only has 65536 possible values, plus most of them will be focused close to zero, so perhaps not appropriate for position data. Half floats are probably fine for direction and colour though.
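The spacing argument is easy to check numerically. A small sketch using NumPy's float16 as a stand-in for GPU half precision (both are IEEE 754 binary16):

```python
import numpy as np

# IEEE 754 half precision has a 10-bit mantissa. In the binade [512, 1024)
# adjacent representable values are therefore 2**(9 - 10) = 0.5 apart.
ulp_at_1000 = 2.0 ** (9 - 10)
print(ulp_at_1000)  # 0.5 -- five times coarser than the 0.1 m step needed

# Positions near 1000 snap to the nearest representable half:
print(float(np.float16(1000.2)))  # 1000.0
print(float(np.float16(1000.4)))  # 1000.5
```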


I found this a while back. Still have yet to go through and read all of it, but it looked interesting. http://jcgt.org/published/0003/02/01/paper.pdf

#5190399 Normal map artifact still here.

Posted by Chris_F on 31 October 2014 - 09:21 AM

I believe Johnny is referring to this: http://interplayoflight.wordpress.com/2013/05/17/correctly-interpolating-viewlight-vectors-on-large-triangles/

#5190329 Why don't you use GCC on windows?

Posted by Chris_F on 31 October 2014 - 05:17 AM

I do use GCC (MinGW-w64) on Windows. I'd like to be able to switch over to Clang, especially if they ever port libc++ to Windows.

#5188969 how to limit fps in glut ?

Posted by Chris_F on 24 October 2014 - 01:47 PM

As far as I know GLUT doesn't have any way of enabling vsync. Maybe you should consider using a modern library like GLFW or SDL. In GLFW you would call glfwSwapInterval(1) after making the window's context current.



#5188870 With regards to texturing, what is "linear space" and "nonlinear...

Posted by Chris_F on 24 October 2014 - 01:15 AM

They are talking about sRGB encoding. Ordinary images (as in photographs with 8 bits per component) are typically encoded in the sRGB color space. You cannot perform math with these values until you have first converted them to linear RGB color space. If you create a texture using a sRGB format (e.g. GL_SRGB8_ALPHA8 or DXGI_FORMAT_R8G8B8A8_UNORM_SRGB) then this conversion happens automatically when you sample the texture.


Some additional information: http://www.gamedev.net/topic/652795-clarifications-gamma-correction-srgb/#entry5127278
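For reference, the per-channel conversion that sRGB texture formats apply on sampling is the standard IEC 61966-2-1 transfer curve. A scalar Python sketch of it and its inverse, for values in [0, 1]:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear RGB."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse: encode a linear channel value in [0, 1] back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055
```

Note the curve is not a plain power function: there is a small linear segment near zero, which is why "gamma 2.2" is only an approximation of sRGB.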

#5188388 Scenes with large and small elements

Posted by Chris_F on 21 October 2014 - 02:38 PM

Of course, it's unfeasible to render such a scene using metres as my base unit, as I have to specify the spacecraft's position in hundreds of thousands of metres relative to the centre of Earth, and using such massive numbers to position objects in Direct3D seems to cause problems.


Hundreds of thousands of meters doesn't sound like a whole lot, at least not if you are using 32-bit floats. If you were simulating the entire galaxy, I could see this being an issue, but you are only simulating Earth out to LEO.


Edit: Then again, now that I think about it, you would only have accuracy of around 1/10th of a meter far from the origin. If the origin is centered on the spacecraft then maybe it wouldn't be an issue. You don't need better than 1/10th of a meter of accuracy for something >100,000 m away.


Also, it doesn't really matter if you are using meters, kilometers or millimeters as your base unit. This has no effect on the precision of the calculation when you are working with floating point numbers, as you are only changing the exponent.
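That claim is easy to verify numerically. A small NumPy sketch (my own illustration): the absolute gap between adjacent float32 values grows with magnitude, while the relative gap is essentially independent of the unit you choose (it can differ by up to a factor of two across binades, since a factor of 1000 is not a power of two, but never more than that):

```python
import numpy as np

# Gap between adjacent representable float32 values ("ulp") grows with size:
print(np.spacing(np.float32(1.0)))       # ~1.19e-07
print(np.spacing(np.float32(100000.0)))  # 0.0078125 -- sub-cm at 100 km out

# Changing the base unit only shifts the exponent, not the relative error:
m = np.float32(123456.0)    # position in metres
km = np.float32(123.456)    # same position in kilometres
print(np.spacing(m) / m)    # relative ulp, metres
print(np.spacing(km) / km)  # relative ulp, kilometres -- same order, ~6e-8
```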