4fingers

OpenGL Noise on GPU v CPU


I have created a procedural brick texture in GLSL. It uses the noise implementation by Stefan Gustavson posted on the OpenGL discussion boards. The problem is that my brick texture needs at least 4 noise calls to get the look I am after, and at other times I have needed even more, for example when making something like marble.

With nothing happening in the fragment shader I get around 500 FPS, but just 4 noise calls bring it down to around 220 FPS. That is fine on my graphics card, but on older cards it slows right down to barely above 1 FPS. I would have thought it better to simply generate a noise texture on the CPU and have the shader read from it every time it wants a noise value.

So, three questions, just to make sure I am not missing anything:

1. Would I be right in thinking that if I only need one noise texture but use it multiple times, it is more effective to cache it (bake it into a texture once and sample it)?
2. If I need multiple noise textures and only use each of them once (for example when creating something like marble), should I keep the noise generation in the fragment shader?
3. Otherwise, what are the advantages of doing the noise generation (reads from a gradient look-up table created by the CPU) in the shader?
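For reference, here is a minimal GLSL sketch of what I mean by the texture-lookup approach. This is not my actual brick shader; the uniform uNoiseTex, the varying vTexCoord, and the octave scales are just illustrative names, and it assumes the application has already filled a tileable noise texture on the CPU and bound it with GL_REPEAT wrapping.

uniform sampler2D uNoiseTex;   // noise values generated on the CPU and baked into a texture
varying vec2 vTexCoord;

float cachedNoise(vec2 p)
{
    // One texture fetch replaces one procedural noise() call.
    return texture2D(uNoiseTex, p).r;
}

void main()
{
    // Roughly what the "4 noise calls" would become with the cached texture:
    float n = cachedNoise(vTexCoord * 1.0)
            + 0.5   * cachedNoise(vTexCoord * 2.0)
            + 0.25  * cachedNoise(vTexCoord * 4.0)
            + 0.125 * cachedNoise(vTexCoord * 8.0);
    gl_FragColor = vec4(vec3(n * 0.5), 1.0);   // placeholder shading, not the brick pattern
}

The idea is that each octave becomes a cheap texture fetch instead of a full gradient-noise evaluation in the fragment shader, at the cost of the texture's limited resolution and periodicity.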
