

Member Since 30 Jun 2007
Offline Last Active Jun 22 2015 03:31 PM

Topics I've Started

[solved] re-committed sparse texture is mangled

04 June 2015 - 09:56 AM

Puzzling issue. I have a code path that creates a sparse texture, and returns a Texture that is committed and sparse.


I can then uncommit it, which turns the texture black (I am rendering it onto a quad). If I then commit it again, using the same path as when the texture was first created (minus the storage part), the texture appears but is totally mangled.


I checked the width, height, texid, and format values, and they are the same (or at least after many checks they appear to be; I could be missing something stupid). I know my data is at least getting in, because the recommitted image is there, just mangled and rearranged in chunks.


Is there anything I need to do when recommitting a sparse texture's memory? I know I need to re-upload my pixels, and I know the memory needs to be committed before doing so, but I use the same code path as when creating it the first time, and then it renders and looks fine. The ONLY difference is that I call uncommit before that same code path gets executed.


Any ideas? 




PS: First time the image is created and committed (and bits uploaded) and rendered:

Attached File: firstTime.png (622.63 KB)


Second Time the image has been uncommitted, then recommitted (and bits uploaded) and rendered:

Attached File: recommit.png (619.96 KB)



Sparse Textures broken on Nvidia Linux drivers.

27 March 2015 - 11:05 AM

Anyone else experiencing completely broken sparse texture support on NVIDIA's Linux drivers?


I tested my program on a GT 750M and a K5000, with both the 331 and 346 drivers, and got exactly the same behavior. Basically, when you do a glTexSubImage the driver loads the texture data incorrectly into the tiles of the sparse texture. If the image is the size of one tile, it is uploaded fine, but if it exceeds one tile, the same (some arbitrary subset of) texture data is seemingly copied to each tile, rather than the driver correctly offsetting and mapping the right texture data to the right tile.


Super annoying, since for a while I thought it was my code, but my tests and reading suggest otherwise (there is no mention anywhere of an application having to manage tiles).

Sparse Texture Virtual Page Size and Storing Texture Dimensions

20 February 2015 - 01:09 PM

Sparse textures require that the texture storage dimensions be a multiple of the virtual page size.


Any ideas on the best approach for handling an image of arbitrary width and height?

Does Texture data HAVE to fill exactly the dimensions of a TexStorage?


If a texture is added at (0,0) and just doesn't fill the entire texture storage, you would need to know both the image width and the physical storage width, and then divide to figure out where the sample should be. Is there any way to avoid this?

My attempt at bindless textures not working....

19 January 2015 - 10:21 PM

Double EDIT: I have updated my renderer with a prototype to test this feature, and glMakeTextureHandleNonResident seems to not do anything.


I used the following code as a test to see how much memory I have (at the end of my Renderer::Draw(), before unbinding the GL context):

    // Enums from the GL_NVX_gpu_memory_info extension; values are in KB.
    GLint currentAvailableVideoMemory = -1;
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &currentAvailableVideoMemory);
    GLint totalAvailableVideoMemory = -1;
    glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalAvailableVideoMemory);
    float percent = (float)currentAvailableVideoMemory / (float)totalAvailableVideoMemory * 100.0f;
    fprintf(stdout, "VRAM (%d MB) Usage: %f%%\n", totalAvailableVideoMemory / 1024, 100.0f - percent);

which always reports 95.xxx%...


And in gDEBugger I always have the same number of texture objects.


When I add 100 images, I get a couple of frames and then the renderer dies, presumably because the textures are not actually made non-resident.


Is there something I need to be doing to ensure that the non-resident calls are executed? Could this be a bug in the NVIDIA Linux driver?




EDIT: stupid shader in/out blocks didn't match... thanks, compiler!


Here is the output from my program; I also checked in gDEBugger, and the texture loads just fine.


Vendor: 4.4.0 NVIDIA 343.36
Supported GLSL version is 4.40 NVIDIA via Cg compiler.
Aspect Ratio: 2.400000 
vbo: 1
index buffer: 2
Shader Source: #version 440
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec2 TexCoords;
uniform mat4 worldMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
out block {
    vec2 Texcoord;
} Out;
void main() {
    mat4 mvp = projectionMatrix * viewMatrix * worldMatrix;
    Out.Texcoord = vec2(TexCoords.x, 1.0-TexCoords.y);
    gl_Position = mvp * vec4(VertexPosition, 1.0);
}
../shaders/simpleTexture.vert Compilation Successful 
Shader Source: #version 440
//#extension GL_NV_gpu_shader5 : require    // for 64-bit integer types
#extension GL_ARB_bindless_texture : require
in block {
    vec2 texCoords;
} In;
layout (bindless_sampler) uniform sampler2D textureID;
layout (location = 0) out vec4 FragColor;
void main() {
    FragColor = texture(textureID, In.texCoords);
}
../shaders/simpleTexture.frag Compilation Successful 
Linking Shader Program: simpleTexture.vert
Shader Link Successful
  0) type:mat4 name:projectionMatrix location:0
  1) type:sampler2D name:textureID location:1
  2) type:mat4 name:viewMatrix location:2
  3) type:mat4 name:worldMatrix location:3
width: 500, height 331: 
Done Loading Texture... 
SOIL loading error: 'Image loaded'
texture handle pointer: 4294969856
My question is: given those shaders, and the fact that glGetTextureHandleARB returns non-zero, should it work (given, of course, the correct geometry and UVs)?

Please review my C++11 chrono render/game loop?

16 January 2015 - 12:53 PM

I am struggling to get a buttery-smooth framerate with chrono.


I have a simple animated rectangle that adds an angle (multiplied by delta time) to a sine wave, and after seemingly random intervals of time (anywhere between 10 seconds and 5 minutes) I get a blip.


Here is my loop (based on Gaffer's "Fix Your Timestep!"):

using namespace std::chrono;
using Clock = high_resolution_clock;

    auto kMaxDeltatime = duration<long, std::ratio<1, 60>>(1);
    auto lastEndTime = Clock::now();
    auto accumulator = microseconds(0);
    while (!glfwWindowShouldClose(tsxWindow->window)) {
        auto newEndTime = Clock::now();
        const auto frameTime = newEndTime - lastEndTime;
        lastEndTime = newEndTime;
        accumulator += duration_cast<microseconds>(frameTime);

        while (accumulator >= kMaxDeltatime) {
            //accumulator -= duration_cast<microseconds>(kMaxDeltatime);
            accumulator -= duration_cast<microseconds>(frameTime);
            // ... fixed-step Update() calls, then render ...
        }
    }

and my objects update:

void Object::Update(long delta_time) {

    angle += 0.05 * (float)delta_time;
    angle = fmod(angle, 360);
    double scale = 1;
    transformWorld = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));
    transformWorld = glm::translate(transformWorld, glm::vec3(sin(angle*PI/180.0)*scale, 0.0f, 0.0f));
//    fprintf(stdout, "deltatime: %ld \n angle: %f \n WorldTransform: \n %s \n", delta_time, angle, glm::to_string(transformWorld).c_str());
//    fflush(stdout);
}

My animation is smoothest with a high_resolution_clock (not a steady one), but even then, after a long while (~5 minutes), it can stutter (not tear).


I am using Linux and clang++ (swap interval is 1).