

Implicit conversion errors



#1 Juliean   GDNet+   -  Reputation: 2245


Posted 15 December 2013 - 04:39 PM

Hello,

 

While testing my generated shaders in OpenGL mode on my main PC, I noticed an awful lot of conversion errors coming from conversions that are completely legal in HLSL; compared to another PC, only this one seems to have issues with it. I am aware that the GLSL compiler is a little sensitive on most machines, but is there any way to get it to allow those implicit conversions, for example float to vec4? Example pixel shader that produces the problem:

in float indepth;
out vec4 outdepth;

void main()
{
    outdepth = indepth;
}

0(5) : error C7011: implicit cast from "float" to "vec4"

I'd hate having to rewrite all my shaders all over again (literally every single one of them has one or more such errors), and I'd also hate having to write float4(indepth, indepth, indepth, 1.0f); everywhere just to return a single value... any ideas/suggestions?


Edited by Juliean, 15 December 2013 - 04:39 PM.



#2 Kaptein   Prime Members   -  Reputation: 1844


Posted 15 December 2013 - 07:24 PM

Nothing legal about that, I'm afraid :)

 

However, you can certainly use some of the constructors of vec4:

vec4(float) --> (float, float, float, float)

 

In your case it would turn out like this:

outdepth = vec4(indepth);

 

Not sure why you would spend 4 channels saving the same value, though.

It looks like you are expecting the value (f, f, f, 1.0)... if HLSL really does this, then they are in the wrong. Bad design decision!

 

I guess you're just stuck with vec4(vec3(indepth), 1.0f);
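(For reference, a minimal sketch of the two constructor forms in plain GLSL, reusing the in/out names from the shader above; the two assignments are just alternatives shown side by side:)

in float indepth;
out vec4 outdepth;

void main()
{
    // Single-argument constructor: the scalar is replicated into all four components.
    outdepth = vec4(indepth);               // (f, f, f, f)

    // Mixed constructor, if the HLSL-style (f, f, f, 1.0) result is what is wanted.
    outdepth = vec4(vec3(indepth), 1.0);    // (f, f, f, 1.0)
}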

 

Also, admit it: It's not much work. :P


Edited by Kaptein, 15 December 2013 - 07:27 PM.


#3 Hodgman   Moderators   -  Reputation: 27879


Posted 15 December 2013 - 08:14 PM

I am aware that the GLSL compiler is a little sensitive on most machines

The problem is that some drivers strictly follow the spec, whereas other drivers are relaxed and allow what should technically be erroneous code to work.
 
This is actually an extremely evil business tactic -- for example, imagine a dev who builds a game on a "relaxed" nVidia driver, unknowingly writing invalid GLSL code. They then release their game to the world, and some players with AMD cards report that the game crashes. The public is then given the perception that AMD has buggy drivers, when actually the cause of the problem was invalid code that the relaxed nVidia driver let through! They're abusing devs as pawns in a war to undermine confidence in their competitors. Just say no :P
 
Kaptein's post has the typical fix of:
outdepth = vec4(indepth);
but you can also try these (not actually sure if these are valid GLSL -- they're what I use in HLSL):
outdepth = (vec4)indepth;
outdepth = (indepth).xxxx;
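(A hedged side note on the three variants above, based on my reading of the GLSL spec: only the constructor form is portable GLSL, the other two are HLSL idioms.)

outdepth = vec4(indepth);      // standard GLSL scalar constructor
// outdepth = (vec4)indepth;   // C-style casts are not part of GLSL syntax
// outdepth = indepth.xxxx;    // GLSL only allows swizzles on vectors, not on scalars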



#4 Juliean   GDNet+   -  Reputation: 2245


Posted 16 December 2013 - 05:41 AM


In your case it would turn out like this:

outdepth = vec4(indepth);



Not sure why you would spend 4 channels saving the same value, though.

It looks like you are expecting the value (f, f, f, 1.0)... if HLSL really does this, then they are in the wrong. Bad design decision!

 

Ok, if that works I'll try it out; hopefully it compiles correctly to float4(in.depth) in HLSL too. No, HLSL doesn't convert this to (f, f, f, 1.0f), but since a shader like this is used to render to a target without an alpha channel and/or with blending disabled, it doesn't make much of a difference... or is there some disadvantage regarding shader instructions in doing so?

 

 

 


Also, admit it: It's not much work. :P

 

It isn't for the new shaders I'm going to write, I'll give you that, but for the 25 shaders I currently have it gets quite obnoxious. The hardest part is actually finding out which line of my generated shaders an error message belongs to :D
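(A possible aid for the line-number problem, assuming the shader generator can emit preprocessor directives: GLSL supports the #line directive, so the generator could remap the reported line and source-string numbers back to its own templates, roughly like this; the line and template numbers below are made up:)

#line 1 0           // report following errors against source string 0, starting around line 1
in float indepth;
out vec4 outdepth;

#line 40 1          // switch reporting to source string 1, around line 40
void main()
{
    outdepth = vec4(indepth);
}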

 

 

 


The problem is that some drivers strictly follow the spec, whereas other drivers are relaxed and allow what should technically be erroneous code to work.

 

So am I right to assume that, unlike the HLSL compiler, which can e.g. precompile shaders on any GPU and use them on any other, GLSL code is specific to the GPU it was compiled on?

 

 

 


This is actually an extremely evil business tactic -- for example, imagine a dev who builds a game on a "relaxed" nVidia driver, unknowingly writing invalid GLSL code. They then release their game to the world, and some players with AMD cards report that the game crashes. The public is then given the perception that AMD has buggy drivers, when actually the cause of the problem was invalid code that the relaxed nVidia driver let through! They're abusing devs as pawns in a war to undermine confidence in their competitors. Just say no :P

 

Wow, that is evil indeed. Well, in this case it was a GeForce GTX 780 Ti that was causing the problems; now if someone came along and declared the driver of nVidia's current flagship faulty for that reason, that would be bad for business for sure :D

 

 

 


outdepth = (indepth).xxxx;

 

Ah sure, I've already used stuff like that a few times in my HLSL shaders, e.g. for projecting the skysphere to the far plane with .xyzz. If it doesn't work in GLSL it's only half bad: I'm already parsing all functions before generating my shaders, so with a bit of work I should be able to get the parser to recognize this pattern and convert it to vec4(indepth) too. Man, I really don't want to see how much stuff simply won't work in the GLSL version once I've got everything to compile :/

 

EDIT: Damn, now I've got something that's really annoying: HLSL has the saturate() function, which pretty much equals clamp(..., 0.0f, 1.0f) in GLSL. So I decided to support the saturate function in my shader files and parse it into clamp() for GLSL. Only problem: when I clamp a vec3 or vec4, I get the damn conversion error. I don't process or store the exact types of all variables, so there is no way for me to know whether the argument of saturate(...) is actually a float or one of the vector types. I guess I'll have to specify the type in the saturate function then, like saturate2 or saturate4, but I'm uncertain how much work that would be, and whether it would be better to store type information for all variables/functions right away...

 

EDIT2: Oh, never mind, clamp seems to be more lenient; I just had to convert the result of the saturate/clamp.
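(For reference, a sketch of one way around the unknown-type problem; neither variant is from the thread, just standard GLSL. A preprocessor shim works because clamp() accepts scalar min/max arguments for every vector type:)

#define saturate(x) clamp(x, 0.0, 1.0)

(or, since GLSL allows user-defined function overloading, a small prelude the generator could prepend once:)

float saturate(float x) { return clamp(x, 0.0, 1.0); }
vec2  saturate(vec2  x) { return clamp(x, 0.0, 1.0); }
vec3  saturate(vec3  x) { return clamp(x, 0.0, 1.0); }
vec4  saturate(vec4  x) { return clamp(x, 0.0, 1.0); }

Either way, the generated code can keep calling saturate(...) without knowing the argument type.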


Edited by Juliean, 16 December 2013 - 06:14 AM.




