AverageJoeSSU

Member Since 30 Jun 2007
Offline Last Active Mar 27 2015 12:12 PM

Topics I've Started

Sparse Textures broken on Nvidia Linux drivers.

27 March 2015 - 11:05 AM

Anyone else experiencing completely broken sparse texture support on Nvidia Linux?

 

I tested my program on a GT 750M and a K5000 with both the 331 and 346 drivers and got exactly the same behavior: when you call glTexSubImage, the driver loads the texture data into the tiles of the sparse texture incorrectly. If the image is the size of one tile it uploads fine, but if it exceeds one tile, the same (apparently arbitrary) subset of texture data is copied into every tile, rather than the driver offsetting and mapping the correct texture data to the correct tile.

 

Super annoying, since for a while I thought it was my code, but my tests and reading suggest otherwise (there is no mention anywhere of an application having to manage tiles itself).
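For reference, the pattern I'm testing looks roughly like this (a minimal repro sketch; the level count, sizes, and the pixels pointer are placeholders): commit the pages covering the region, then upload the whole image with a single glTexSubImage2D.

// Minimal sparse-texture repro sketch (RGBA8, one mip level; 'pixels' is assumed).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);

// Query the virtual page (tile) size for this internal format.
GLint pageX = 0, pageY = 0;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_VIRTUAL_PAGE_SIZE_X_ARB, 1, &pageX);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_VIRTUAL_PAGE_SIZE_Y_ARB, 1, &pageY);

// Storage spanning 2x2 tiles, so the upload has to cross tile boundaries.
GLsizei width = pageX * 2, height = pageY * 2;
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

// Commit every page covering the region, then upload once.
glTexPageCommitmentARB(GL_TEXTURE_2D, 0, 0, 0, 0, width, height, 1, GL_TRUE);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

On these drivers the region past the first tile ends up filled with repeated data instead of the correct offsets.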


Sparse Texture Virtual Page Size and Storing Texture Dimensions

20 February 2015 - 01:09 PM

Sparse textures require that the texture storage dimensions be a multiple of the virtual page size.

 

Any ideas on the best approach for handling images of arbitrary width and height?

Does the texture data HAVE to fill the TexStorage dimensions exactly?

 

If an image is uploaded at 0,0 and just doesn't fill the entire texture storage, you would need to know both the image dimensions and the storage dimensions and divide them to figure out where the sample should land. Is there any way to avoid this?
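The workaround I'm picturing (a sketch; imgW, imgH, pageX, pageY, and pixels are assumed names) is to round the storage up to the next page-size multiple, upload at 0,0, and pass a UV scale of image size over storage size into the shader:

// Round the storage up to the next multiple of the virtual page size.
GLsizei storW = ((imgW + pageX - 1) / pageX) * pageX;
GLsizei storH = ((imgH + pageY - 1) / pageY) * pageY;

glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, storW, storH);
glTexPageCommitmentARB(GL_TEXTURE_2D, 0, 0, 0, 0, storW, storH, 1, GL_TRUE);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imgW, imgH,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Per-texture UV scale the shader applies before sampling: sampleUV = uv * uvScale.
float uvScaleX = (float)imgW / (float)storW;
float uvScaleY = (float)imgH / (float)storH;

That works, but it means carrying the extra scale per texture, which is what I'd like to avoid.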


My attempt at bindless textures is not working...

19 January 2015 - 10:21 PM

Double EDIT: I have updated my renderer with a prototype to test this feature, and glMakeTextureHandleNonResident seems to do nothing.

 

I used the following code as a test to see how much memory I have (at the end of my Renderer::Draw(), before unbinding the GL context):

    // NVX_gpu_memory_info reports values in KB.
    GLint currentAvailableVideoMemory = -1;
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &currentAvailableVideoMemory);
    GLint totalAvailableVideoMemory = -1;
    glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalAvailableVideoMemory);
    float percent = (float)currentAvailableVideoMemory / (float)totalAvailableVideoMemory * 100.0f;
    fprintf(stdout, "VRAM (%d MB) Usage: %f%% \n", totalAvailableVideoMemory / 1024, 100.0f - percent);
    fflush(stdout);

which always reports 95.xxx%...

 

And in gDebugger I always see the same number of texture objects.

 

When I add 100 images, I get a couple of frames and then the renderer dies, presumably because the textures are not actually being made non-resident.

 

Is there something I need to be doing to ensure that the non-resident calls are executed? Could this be a bug in the NVIDIA Linux driver?
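For context, the per-texture flow my prototype follows is roughly this (a sketch; the handle bookkeeping is simplified and the names are illustrative):

// Get the bindless handle once, make it resident before drawing with it,
// and non-resident again when the texture should become evictable.
GLuint64 handle = glGetTextureHandleARB(tex);

glMakeTextureHandleResidentARB(handle);
glUniformHandleui64ARB(glGetUniformLocation(program, "textureID"), handle);
// ... draw calls that sample textureID ...
glMakeTextureHandleNonResidentARB(handle);   // this is the call that seems to do nothing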

 

 

 

EDIT: stupid shader in/out blocks didn't match... thanks, compiler!

 

Here is the output from my program; I also checked in gDebugger and the texture loads just fine.

 

Vendor: 4.4.0 NVIDIA 343.36
Supported GLSL version is 4.40 NVIDIA via Cg compiler.
Aspect Ratio: 2.400000 
vbo: 1
index buffer: 2
Shader Source: #version 440
 
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec2 TexCoords;
 
uniform mat4 worldMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
 
out block
{
    vec2 Texcoord;
} Out;
 
void main()
{
    mat4 mvp = projectionMatrix * viewMatrix * worldMatrix;
    Out.Texcoord = vec2(TexCoords.x, 1.0-TexCoords.y);
    gl_Position = mvp * vec4(VertexPosition, 1.0);
}
 
../shaders/simpleTexture.vert Compilation Successful 
Shader Source: #version 440
//#extension GL_NV_gpu_shader5 : require    // for 64-bit integer types
#extension GL_ARB_bindless_texture : require
 
in block
{
    vec2 texCoords;
} In;
 
layout (bindless_sampler) uniform sampler2D textureID;
 
layout (location = 0) out vec4 FragColor;
 
void main()
{
    FragColor = texture(textureID, In.texCoords);
}
 
../shaders/simpleTexture.frag Compilation Successful 
Linking Shader Program: simpleTexture.vert
Shader Link Successful
GL_ACTIVE_UNIFORMS = 4
  0) type:mat4 name:projectionMatrix location:0
  1) type:sampler2D name:textureID location:1
  2) type:mat4 name:viewMatrix location:2
  3) type:mat4 name:worldMatrix location:3
width: 500, height 331: 
Done Loading Texture... 
SOIL loading error: 'Image loaded'
texture handle pointer: 4294969856
 
 
My question is: given those shaders and the fact that glGetTextureHandleARB returns a non-zero handle, should it work (assuming correct geometry and UVs)?
 

Please review my C++11 chrono render/game loop?

16 January 2015 - 12:53 PM

I am struggling to get a buttery smooth framerate with chrono.

 

I have a simple animated rectangle that adds an angle (multiplied by delta time) to a sine wave, and at seemingly random intervals (anywhere between 10 seconds and 5 minutes) I get a blip.

 

Here is my loop (based on Gaffer's "Fix Your Timestep!"):

using namespace std::chrono;
using Clock = high_resolution_clock;

init();

auto kMaxDeltatime = duration<long,std::ratio<1,60>>(1);
auto lastEndTime = Clock::now();
auto accumulator = microseconds(0);
while (!glfwWindowShouldClose(tsxWindow->window)) {
    auto newEndTime = Clock::now();
    const auto frameTime = newEndTime - lastEndTime;
    lastEndTime = newEndTime;
    accumulator += duration_cast<microseconds>(frameTime);

    while (accumulator >= kMaxDeltatime)
    {
        //accumulator -= duration_cast<microseconds>(kMaxDeltatime);
        Update(duration_cast<milliseconds>(frameTime).count());
        accumulator -= duration_cast<microseconds>(frameTime);
    }
    Draw(duration_cast<milliseconds>(kMaxDeltatime).count());
    glfwPollEvents();
}

and my object's Update:

void Object::Update(long delta_time) {

    angle += 0.05 * (float)delta_time;
    angle = fmod(angle,360);
    double scale = 1;
    transformWorld = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));
    transformWorld = glm::translate(transformWorld, glm::vec3(sin(angle*PI/180.0)*scale, 0.0f, 0.0f));
//    fprintf(stdout, "deltatime: %ld \n angle: %f \n WorldTransform: \n %s \n", delta_time, angle, glm::to_string(transformWorld).c_str());
//    fflush(stdout);
}

Smoothness is best with high_resolution_clock (rather than a steady clock), but even then, after a long while (~5 minutes) it can stutter (not tear).

 

I am using Linux and clang++ (swap interval is 1).
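For comparison, here is my reading of the loop as the article presents it, where the accumulator is drained in fixed-size steps and Update always receives that fixed step (a sketch reusing the names above):

auto kMaxDeltatime = duration<long,std::ratio<1,60>>(1);
auto lastEndTime = Clock::now();
auto accumulator = microseconds(0);
while (!glfwWindowShouldClose(tsxWindow->window)) {
    auto newEndTime = Clock::now();
    accumulator += duration_cast<microseconds>(newEndTime - lastEndTime);
    lastEndTime = newEndTime;

    // Drain the accumulator in fixed steps; Update always gets the fixed delta.
    while (accumulator >= kMaxDeltatime)
    {
        Update(duration_cast<milliseconds>(kMaxDeltatime).count());
        accumulator -= duration_cast<microseconds>(kMaxDeltatime);
    }
    Draw(duration_cast<milliseconds>(kMaxDeltatime).count());
    glfwPollEvents();
}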


GL_ACTIVE_UNIFORMS returns 0 after successful link, locations are -1.

29 December 2014 - 10:29 PM

I am stumped. I have tried various ways (both the old style and explicit locations), and none of them seem to work.

 

Here is my shader code, which is very simple:

#version 440

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec2 TexCoords;

layout(location = 2) uniform mat4 worldMatrix;
layout(location = 3) uniform mat4 projectionMatrix;
layout(location = 4) uniform mat4 viewMatrix;

void main()
{
    mat4 mvp = projectionMatrix * viewMatrix * worldMatrix;
    gl_Position = mvp * vec4(VertexPosition, 1.0);
    //gl_Position = vec4(1.0, 1.0 , 1.0, 1.0);
}

and the code after the successful link:

fprintf(stdout, "Shader Link Successful\n");
fflush(stdout);

uniforms = new std::map<std::string, GLint>();

GLint location;
location = glGetUniformLocation(ShaderProgram, "worldMatrix");

I've tried without the explicit locations, and I've tried calling glUseProgram and THEN getting the locations; nothing seems to work.

 

I AM using GLEW, and I'm assuming there are no issues there, since my shader program actually runs and the pixel shader outputs what I expect. The output of the vertex shader is as if those uniforms were all identity matrices.
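For completeness, here is a small debug dump (a sketch reusing the ShaderProgram handle above) that asks the driver for the active uniform count and then enumerates names and locations via glGetActiveUniform:

// List every uniform the linked program reports as active.
GLint count = 0;
glGetProgramiv(ShaderProgram, GL_ACTIVE_UNIFORMS, &count);
fprintf(stdout, "GL_ACTIVE_UNIFORMS = %d\n", count);

for (GLint i = 0; i < count; ++i) {
    char name[256];
    GLsizei length = 0;
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(ShaderProgram, (GLuint)i, sizeof(name), &length, &size, &type, name);
    fprintf(stdout, "  %d) name:%s location:%d\n", i, name, glGetUniformLocation(ShaderProgram, name));
}
fflush(stdout);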

 

Any ideas?

 

