In your case it would turn out like this:
outdepth = vec4(indepth);
Not sure why you would spend 4 channels saving the same value, though.
I see you are expecting the value (f, f, f, 1.0)... if HLSL really does this, then it's in the wrong. Bad design decision!
Ok, if that works I'll try it out; hopefully it compiles correctly to float4(in.depth) in HLSL too. No, HLSL doesn't convert this to (f, f, f, 1.0f), but since this shader is being used to render to a target without an alpha channel and/or with blending disabled, it doesn't make much of a difference. Or is there some disadvantage in doing so regarding shader instructions?
Also, admit it: It's not much work.
It isn't for new shaders I'm going to write, I'll give you that, but for the 25 shaders I currently have it gets quite obnoxious. The hardest part is actually finding out which line an error message belongs to in my generated shaders.
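One thing that helps with matching driver errors like "ERROR: 0:42: ..." to generated source is dumping the generated shader with line numbers attached. A minimal Python sketch (the function name annotate_shader and the example shader string are made up for illustration):

```python
def annotate_shader(source: str) -> str:
    """Prefix every line of the generated shader source with its
    1-based line number, so a driver error referencing a line can
    be matched against the dump by eye."""
    return "\n".join(
        f"{i:4d}: {line}"
        for i, line in enumerate(source.splitlines(), start=1)
    )

generated = "#version 330 core\nout vec4 outdepth;\nvoid main() {\n    outdepth = vec4(1.0);\n}"
print(annotate_shader(generated))
```

This is just a debugging aid around the generator's output; it doesn't change what gets handed to glShaderSource.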
The problem is that some drivers strictly follow the spec, whereas some other drivers are relaxed and allow what should technically be erroneous code to work.
So am I right to assume that, unlike with the HLSL compiler, which can precompile shaders on one GPU and run the bytecode on any other, GLSL code is compiled by the driver and is thus specific to the GPU/driver it was compiled on?
This is actually an extremely evil business tactic -- for example, imagine a dev who builds a game on a "relaxed" nVidia driver, unknowingly writing invalid GLSL code. They then release their game to the world, and some players with AMD cards report that the game crashes. The public is then given the perception that AMD has buggy drivers, when actually the cause of the problem was the non-compliant nVidia driver! They're abusing devs as pawns in a war to undermine confidence in their competitors. Just say no.
Wow, that is evil indeed. Well, in this case it was a GeForce GTX 780 Ti that was causing problems; if someone now declared the driver of nVidia's current flagship faulty for that reason, it would certainly be bad for business.
outdepth = (indepth).xxxx;
Ah sure, I've already used stuff like that a few times in my HLSL shaders, for example projecting the skysphere to the far plane with .xyzz. If it doesn't work in GLSL, it's only half bad: before generating my shaders I'm already parsing all functions, so with a bit of work I should be able to get the parser to recognize this pattern and convert it to vec4(indepth) as well. Man, I really don't want to see the amount of stuff that simply won't work in the GLSL version once I've got everything to compile :/
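For the broadcast case specifically, a rewrite step in the generator could catch same-component swizzles like indepth.xxxx and emit the constructor form instead. A rough Python sketch (regex-based, only handles a simple identifier before the swizzle -- mixed swizzles like .xyzz are deliberately left alone, and a real parser would be more robust):

```python
import re

# Match "<identifier>.<c><c>..." where <c> is a single repeated
# component (xx, yyy, xxxx, ...). Mixed swizzles such as .xyzz
# do NOT match, because \2 requires the same letter to repeat.
_SPLAT = re.compile(r"\b(\w+)\.([xyzw])\2{1,3}\b")

def rewrite_splats(glsl: str) -> str:
    """Rewrite broadcast swizzles on (assumed scalar) identifiers
    into GLSL constructor syntax, e.g. indepth.xxxx -> vec4(indepth)."""
    def repl(m: re.Match) -> str:
        swizzle = m.group(0).split(".")[1]
        return f"vec{len(swizzle)}({m.group(1)})"
    return _SPLAT.sub(repl, glsl)

print(rewrite_splats("outdepth = indepth.xxxx;"))
```

Note the caveat: this blindly assumes the swizzled name is a scalar; applying it to something that is already a vector would change the meaning, so in practice it should only run on patterns the generator itself knows are scalar broadcasts.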
Damn, now I've got something that's really annoying: HLSL has the saturate() function, which is pretty much equivalent to clamp(..., 0.0f, 1.0f) in GLSL. So I decided to support the saturate function in my shader files and parse it to clamp() in GLSL. Only problem: when I clamp a vec3 or vec4, I get the damn conversion error. I don't process or store the exact types of all variables, so there is no way for me to know whether the argument of saturate(...) is actually a float or one of the vector types. I guess I'll have to specify the type in the saturate function then, like saturate2 or saturate4, but I'm uncertain how much work that would be, and whether it would be better to store type information for all variables/functions right away...
EDIT2: Oh, never mind, clamp() turns out to be more lenient; I just had to convert the result of the saturate/clamp call.
Edited by Juliean, 16 December 2013 - 06:14 AM.