

Member Since 30 Jun 2007
Offline Last Active Apr 10 2015 02:28 PM

Posts I've Made

In Topic: Sparse Textures broken on Nvidia Linux drivers.

27 March 2015 - 12:13 PM

I needed to commit the storage before calling glTexSubImage to upload the data. Simple but elusive issue.

In Topic: Best practices for packing multiple objects into buffers

20 February 2015 - 01:02 PM

A little late to this topic, but...


First off, since you mention you are learning: do yourself a favor and, if you really want to learn OpenGL, put yourself in a situation that limits the frustration of setup code and gives you an environment that helps you debug. That environment is not Mac OS X. Ubuntu or Windows + CLion or VS + gDEBugger, plus common support libraries like GLEW, GLFW, and SOIL, will get you a window on screen and textures loaded quickly (just a hypothetical example).


Also, as has been suggested before, there is very much an old way and a new way of doing things in OpenGL, and they have big implications for performance.


Luckily, an NVIDIA engineer has a GitHub page with sample code for the new way of doing things. I highly suggest you clone that repo and study it religiously.


That GitHub page, paired with study of the OpenGL spec sections that describe the features used in that code, will IMO be your best bet for learning OpenGL.


In a nutshell... the movement in the OpenGL API has been toward zero driver overhead. That is, the application takes over some of what the driver usually does, so the driver does less work and you can really slim down the runtime graphics cost.


Lastly, the repo contains a solution for packing multiple textured quads using (I might not have the exact names) ARB_buffer_storage, ARB_shader_draw_parameters, ARB_bindless_texture, ARB_sparse_texture, and ARB_multi_draw_indirect. Again, the application is doing what the driver might otherwise do, thus reducing overhead. If you think of a draw call as packing the data of similar objects rather than drawing a single object, you begin to see the theme of the transition from old OpenGL to modern OpenGL.

In Topic: My attempt at bindless textures not working....

09 February 2015 - 01:20 PM

Phew, I grossly misunderstood bindless texturing. Residency applies only to the texture handles; sparse texture support handles residency of the texture memory itself. Luckily the two work together quite well, so if I add sparse-texture immutable storage I can leverage the same queries used for bindless residency to manage sparse residency.

In Topic: My attempt at bindless textures not working....

30 January 2015 - 12:21 PM

So I think it actually is working... what throws me off is that VRAM usage seems to stabilize, so the driver must keep the residency memory reserved in a data structure of some sort (makes sense).


I can handle 80 high-res images with mipmaps (basically the camera animates over them, two at a time).

In Topic: My attempt at bindless textures not working....

29 January 2015 - 09:39 PM

I wanted to keep my issues with bindless in the same thread, to keep things somewhat tied together.

See latest edit at the top for details.


I could post more detailed code if needed, but it seems like a simple thing.


What are the requirements for making a resident texture non-resident? And by definition, what does making a texture non-resident actually do?


A comment in the spec says that if the texture isn't going to be used "for a while" it can be made non-resident. Maybe the GPU won't actually release the memory if it has plenty to spare? Perhaps the crashing I am seeing starts when it really begins releasing textures from and re-adding them to VRAM.