Silverlan

GLSL Error C1502 (Nvidia): "index must be constant expression"


I have a uniform block in my shader, which I'm accessing within a loop:

#version 330 core

const int MAX_LIGHTS = 8; // Maximum amount of lights
uniform int numLights; // Actual amount of lights (Cannot exceed MAX_LIGHTS)
layout (std140) uniform LightSourceBlock
{
    vec3 position;
    [...]
} LightSources[MAX_LIGHTS]; // Light Data

void Test()
{
    for(int i=0;i<numLights;i++)
    {
        vec3 pos = LightSources[i].position; // Causes "index must be constant expression" error on Nvidia cards
        [...]
    }
}

This works fine on my AMD card; however, on an Nvidia card it generates the error "index must be constant expression".

I've tried changing the shader to this:

#version 330 core

const int MAX_LIGHTS = 8; // Maximum amount of lights
uniform int numLights; // Actual amount of lights (Cannot exceed MAX_LIGHTS)
layout (std140) uniform LightSourceBlock
{
    vec3 position;
    [...]
} LightSources[MAX_LIGHTS]; // Light Data

void Test()
{
    for(int i=0;i<MAX_LIGHTS;i++)
    {
        if(i >= numLights)
        	break;
        vec3 pos = LightSources[i].position; // Causes "index must be constant expression" error on Nvidia cards
        [...]
    }
}

I figured this way it might consider "i" to be a constant, but the error remains.

 

So how can I access "LightSources" with a non-constant index, without having to unroll the loop and paste the same code over and over?


You could try moving to a higher #version number. Version 330 is for cards that are 5 years old, some of which probably can't dynamically index uniform arrays at all.
D3D/HLSL will compile that code, but the actual runtime asm might be equivalent to the following (which is shit, so a compiler error may be better):

void Test()
{
    for(int i=0;i<numLights;i++)
    {
        vec3 pos;
        if(i==0) pos = LightSources[0].position;
        else if(i==1) pos = LightSources[1].position;
        else if(i==2) pos = LightSources[2].position;
        //and so on...
        [...]
    }
}

To avoid such shenanigans on old GPUs, store your arrays in a texture or buffer (aka "texture buffer" in GL jargon, not a "uniform buffer") and fetch the values from them.
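For reference, a minimal sketch of that approach using a buffer texture and texelFetch; the lightBuffer sampler name and the two-RGBA32F-texels-per-light layout are assumptions for illustration, not the poster's actual data:

#version 330 core

uniform samplerBuffer lightBuffer; // buffer texture holding the packed light data
uniform int numLights;

void Test()
{
    for(int i=0;i<numLights;i++)
    {
        // Dynamic indexing is fine here, since this is just a texel fetch.
        vec4 posRadius = texelFetch(lightBuffer, i*2 + 0); // xyz = position, w = radius (assumed layout)
        vec4 color     = texelFetch(lightBuffer, i*2 + 1); // assumed layout
        vec3 pos = posRadius.xyz;
        // [...]
    }
}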

Edited by Hodgman

Are you absolutely certain that line is causing the problem and not some other line? I did a quick search and found a post on StackOverflow which seems to suggest your initial usage is okay (which is also what I would have said before checking).
From experience I would be a bit surprised if an AMD driver accepted something the standard does not allow while an Nvidia driver complained about it. Have you tried running it through the reference compiler?

You could try moving to a higher #version number.

This.

 

See here: https://www.opengl.org/wiki/Interface_Block_%28GLSL%29

 

For uniform and shader storage blocks, the array index must be a dynamically-uniform integral expression.

[...]

If an expression is said to be dynamically uniform, and you're not using GLSL 4.00 or above, this should be read as "constant expression". Only 4.00-class hardware and greater is able to make this distinction.

 

So basically, you need #version 400 or higher for this to work.
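In other words, the same shader as in the original post, with only the version directive raised (a sketch, untested):

#version 400 core // 4.00+ allows dynamically-uniform indexing; a loop counter bounded by a uniform qualifies

const int MAX_LIGHTS = 8;
uniform int numLights;
layout (std140) uniform LightSourceBlock
{
    vec3 position;
    // [...]
} LightSources[MAX_LIGHTS];

void Test()
{
    for(int i=0;i<numLights;i++)
    {
        vec3 pos = LightSources[i].position; // allowed here: i is dynamically uniform
        // [...]
    }
}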

Edited by samoth

To avoid such shenanigans on old GPUs, store your arrays in a texture or buffer (aka "texture buffer" in GL jargon, not a "uniform buffer") and fetch the values from them.

Hm... My uniform block contains both integer and float data, so I suppose I'd have to store the integers as floats and then cast them back in the shader? Couldn't that result in precision errors? (Although I suppose I could just round the value to be sure.)

 

 

Have you tried running it through the reference compiler?

I didn't realize there was such a thing.

Well, I tried running my example through glslangValidator and no errors were generated.

Edited by Silverlan


This overlaps with what Matias said, but comes from a somewhat different perspective, and I'm not someone with much in-depth knowledge, but here goes anyway:

 

The array-of-blocks syntax creates several separate blocks, each with its own binding point, and each of them can be backed by a completely different buffer. My intuition says indexing across blocks is therefore possibly implemented very differently from indexing within a block, where arrays are actually laid out sequentially in memory.

 

On second thought, on cards where blocks are at most 16 KB or something similarly small, the data could be copied around on the GPU when glBindBufferRange or glBindBufferBase is called, so that the blocks end up laid out as an array for the shader to access, but there doesn't seem to be any guarantee that this is the case.
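For what it's worth, here is a sketch of the other layout (indexing an array inside a single block rather than an array of blocks); the struct members are placeholders and std140 padding still applies:

#version 330 core

const int MAX_LIGHTS = 8;
uniform int numLights;

struct LightSource
{
    vec3 position;
    // [...]
};

layout (std140) uniform LightSourceBlock
{
    LightSource lights[MAX_LIGHTS]; // one block, one binding point, one backing buffer
} LightData;

void Test()
{
    for(int i=0;i<numLights;i++)
    {
        vec3 pos = LightData.lights[i].position; // index into an array within the block
        // [...]
    }
}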

 

I also just tried a very simple vertex shader with an array block, and the program crashes at shader linking in a way that hints the cause might be outside my code. The code has so far worked well in other (simple test) cases and the reference compiler has no complaints about the shader, but of course there's still a chance it's a bug in my code. (In any case, I'm running on an oldish Radeon card on Linux with the free drivers, so knowledge of this potential bug may not be very relevant to you.)


My uniform block contains both integer and float data, so I suppose I'd have to store the integers as floats and then cast them back in the shader?

v330 introduced the intBitsToFloat/floatBitsToInt functions, which allow bitwise/reinterpret casts. You can use a 32-bit int buffer and then reinterpret its elements as floats as required.
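For example, something along these lines (a rough sketch; the isamplerBuffer name, the four-ints-per-light layout and the field names are made up for illustration):

uniform isamplerBuffer lightData; // GL_R32I buffer texture

void ReadLight(int i)
{
    int base = i * 4;                                                     // assumed: four int texels per light
    int   type      = texelFetch(lightData, base + 0).r;                 // used directly as an int
    float intensity = intBitsToFloat(texelFetch(lightData, base + 1).r); // reinterpret the bits as a float
    float radius    = intBitsToFloat(texelFetch(lightData, base + 2).r);
    // [...]
}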


lol yeah, I remember when I got this issue (same GLSL version, nVidia hardware), apparently "const int" isn't constant enough for nVidia :D As Mathias said, a #define works fine in this case.
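i.e. something along these lines (only the constant declaration changes from the original):

#define MAX_LIGHTS 8 // preprocessor define instead of "const int MAX_LIGHTS = 8;"

uniform int numLights;
layout (std140) uniform LightSourceBlock
{
    vec3 position;
    // [...]
} LightSources[MAX_LIGHTS];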


lol yeah, I remember when I got this issue (same GLSL version, nVidia hardware), apparently "const int" isn't constant enough for nVidia :D As Mathias said, a #define works fine in this case.

 

Thanks, sadly the same error still occurs.

Changing the version to "#version 420 core" also didn't help.

 

 

My uniform block contains both integer and float data, so I suppose I'd have to store the integers as floats and then cast them back in the shader?

v330 introduced the intBitsToFloat/floatBitsToInt functions, which allow bitwise/reinterpret casts. You can use a 32-bit int buffer and then reinterpret its elements as floats as required.

 

Are buffer texture lookups as expensive as regular texture lookups? I have 12 variables (= 512 bytes) per light, and since the data types differ (int, float, vec3, mat3, ...) I suppose GL_R32I makes the most sense for the texture format? However, that would result in a lot of lookups.

 

Also, I've noticed that texelFetch returns a 4-component vector. So, if I have a 1-component texture format, would that give me the data of 4 texels?
