Writing a GLSL compiler from scratch?

Started by
8 comments, last by robotech_er 12 years, 7 months ago
Building a shader compiler myself is probably too hard for a newbie, but I think understanding the inner workings of shader compilers would be beneficial.

I have done some searching, but didn't find anything useful. Could anyone give me some links about this topic? It would be perfect if some tutorials exist.

Thanks very much!
The GLSL shader compiler is part of the GL driver, so it is hardware-specific. I suggest that you look into the Mesa3d project and at some of the open source Linux drivers. I don't know of any tutorials about writing Linux drivers, GL drivers, or GLSL compilers.
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

Thanks for your reply, V-man.
Do you know of any article that describes the inner workings of shader compilers? Not a detailed tutorial, just the basic workflow would be enough, such as how a GLSL statement gets translated into GPU-specific instructions. Or any website that covers this?
If you want to learn how compilers work, I would recommend creating a simple C compiler that outputs x86 ASM or similar instead. There are vast amounts of resources for that, and it's basically the same thing: interpret the code, translate it to a different format, optimize it, and output the ASM.
I doubt there are any tutorials at all that deal with compiling GLSL to GPU ASM as such, but again, it's the same thing with a different instruction set. Once you know compiler basics it's probably not all that hard to write a compiler that translates GLSL into the shared shader ASM format that was used in older shader models.
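To make that concrete, here is a toy sketch of the "translate and emit" step in C. It is purely illustrative: the expression is hard-coded instead of parsed, and the instruction names and register naming are made up, but a real compiler front end would produce this kind of tree from source text before emitting real x86 or GPU instructions.

#include <stdio.h>

/* A minimal expression tree: leaves are variables, inner nodes are '+' or '*'. */
typedef struct Expr {
    char op;                 /* '+' or '*', or 0 for a leaf */
    const char *name;        /* variable name when op == 0 */
    struct Expr *lhs, *rhs;
} Expr;

static int next_reg = 0;

/* Walk the tree and emit one made-up instruction per node.
   Returns the register that holds the node's value. */
static int emit(const Expr *e)
{
    if (e->op == 0) {
        int r = next_reg++;
        printf("    LOAD r%d, %s\n", r, e->name);
        return r;
    }
    int a = emit(e->lhs);
    int b = emit(e->rhs);
    int r = next_reg++;
    printf("    %s  r%d, r%d, r%d\n", e->op == '+' ? "ADD" : "MUL", r, a, b);
    return r;
}

int main(void)
{
    /* The tree for: a * b + c */
    Expr a = {0, "a", NULL, NULL}, b = {0, "b", NULL, NULL}, c = {0, "c", NULL, NULL};
    Expr mul = {'*', NULL, &a, &b};
    Expr add = {'+', NULL, &mul, &c};

    printf("; toy compiler output for: a * b + c\n");
    emit(&add);
    return 0;
}

Swap the printed strings for a GPU instruction set and add proper parsing, register allocation, and optimization passes, and the overall shape is the same.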

Thanks, Erik, I will follow your suggestions.

By the way, what is the "shared shader ASM format that was used in older shader models"? Since it is described as belonging to older shader models, what is the situation with current shader models?
He is talking about the GL_ARB_vertex_shader and GL_ARB_fragment_shader extensions. These extensions specified a "common" shader ASM format similar to the DX9 ASM shaders. The only disadvantage compared to modern GLSL is that ATI decided to stop supporting the ARB shaders at shader model 2.0 (using DX terminology). Only Nvidia continued up to SM 5.0 (including tessellation shaders). So if you don't care about non-Nvidia GPUs, you can use ASM shaders.

In modern OpenGL, cross-vendor (Nvidia/AMD/Intel) shaders can be written only in GLSL. The same is actually true for Direct3D: for modern D3D (10/11) you can only use HLSL to write shaders. The ASM syntax is deprecated and no longer used. The HLSL compiler is good enough to rely on.
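To give a rough idea of what that looks like, here is a trivial fragment shader in GLSL next to an illustrative (untested) equivalent in the ARB assembly program syntax; both just multiply the interpolated color by a constant:

// GLSL fragment shader
uniform vec4 tint;
void main() { gl_FragColor = gl_Color * tint; }

# Roughly the same thing as an ARB assembly program
!!ARBfp1.0
PARAM tint = program.local[0];          # constant set by the application
MUL result.color, fragment.color, tint; # out = interpolated color * tint
END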

No, GL_ARB_vertex_shader and GL_ARB_fragment_shader are the first version of GLSL (v1.00).

The ASM extensions are GL_ARB_vertex_program and GL_ARB_fragment_program which can be found at
http://www.opengl.org/registry/
http://www.opengl.or...tex_program.txt
http://www.opengl.or...ent_program.txt

and there are others from nVidia (GL_NV_vertex_program and so on)

They aren't true ASM shaders. The compiler still transforms them to GPU instructions. The same can be said about D3D9 ASM shaders.
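
You still hand the driver plain text and it compiles it. As a minimal sketch (assuming the extensions are present, and with src holding the kind of !!ARBfp1.0 program shown above), loading one looks like this:

// Hand an ARB fragment program (plain text) to the driver, which compiles it.
const char *src =
    "!!ARBfp1.0\n"
    "PARAM tint = program.local[0];\n"
    "MUL result.color, fragment.color, tint;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(src), src);

// On a syntax error the driver reports it here, much like a GLSL compile log.
if (glGetError() != GL_NO_ERROR)
    printf("%s\n", glGetString(GL_PROGRAM_ERROR_STRING_ARB));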

There was a tool from ATI (back when they were a separate entity) called Ashley which would convert your GLSL code and show you the real GPU instructions in text form. It could also handle D3D shaders. You could select the target GPU and see the small differences between the real ASM code for each.


If the two extensions are not true ASM shaders, what was the point of creating them in the first place?

NVIDIA provides a PTX ISA manual with their CUDA toolkit, and in CUDA you can use inline PTX assembly statements directly. Is it possible to use PTX assembly in GLSL, since GLSL and CUDA actually run on the same hardware? There are a lot of powerful instructions in PTX that GLSL doesn't have...
They were created before GLSL and before GL 2.0, when PS 2.0 hardware became available and there was a need for GL to catch up with Direct3D.
Thanks, everyone. I think I should start from something more basic and easier.

This topic is closed to new replies.
