Per-pixel lighting asm point of view


Hi there!

I want to better understand how vertex and fragment programs talk to each other at the asm shader level, using per-pixel lighting as an example.

Suppose I want to draw a triangle with per-pixel lighting. In my vertex program I transform the vertices

o[HPOS]=mvp*v[OPOS]

and normals

worldNormal=mv*v[NRML]

worldPosition=mv*v[OPOS]

Here worldPosition and worldNormal are not standard vertex program outputs; we need to store them somewhere and then hand them to the fragment program. After linear interpolation, worldPosition becomes the actual fragment position in the 3D world, and then one can perform the lighting calculations that are usually done per vertex.
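The interpolation step described above can be sketched in plain Python. This is an illustrative model of what the rasterizer does with any varying (the helper name and the triangle values are made up for the example): each fragment receives a barycentric blend of the three per-vertex values.

```python
# Sketch of varying interpolation: a fragment's value of worldPosition
# is a barycentric blend of the three vertex values.
def interpolate_varying(v0, v1, v2, w0, w1, w2):
    """Linearly interpolate a 3-component vertex attribute with
    barycentric weights satisfying w0 + w1 + w2 == 1."""
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(v0, v1, v2))

# A fragment at the centroid of the triangle gets the average position.
p = interpolate_varying((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0),
                        1/3, 1/3, 1/3)
```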

Several questions:

How can worldPosition and worldNormal travel from the vertex program to the fragment program? Through unused texture coordinates, or through the primary and secondary vertex colors?

Thanks in advance

It is pretty hard to find information about this, so I looked through the Direct3D samples. Positions and normals are indeed stored in texture coordinates.

mhagain
I don't know the details for OpenGL, but in Direct3D people store the data in texture coordinates precisely because of precision: no limitations are imposed on these values. Of course, anyone who tries a texture lookup with coordinates outside 0-1 will get into trouble, but for passing data through they are fine. The color registers are not as good for this.
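The precision argument can be made concrete with a small sketch. This is illustrative only (the function and the range values are invented for the example): it models what happens if a world-space coordinate is squeezed through a clamped 8-bit color channel instead of a full-precision texture coordinate interpolant.

```python
# Model of pushing a coordinate through an 8-bit fixed-point color
# channel: clamp to [0, 1], quantize to 256 steps, and expand again.
def pack_to_8bit(x, lo, hi):
    t = min(max((x - lo) / (hi - lo), 0.0), 1.0)   # clamp like a color register
    q = round(t * 255) / 255                        # 8-bit quantization
    return lo + q * (hi - lo)

x = 123.456
via_color = pack_to_8bit(x, -500.0, 500.0)  # ~3.9-unit steps over 1000 units
error = abs(via_color - x)                  # a float texcoord would carry x exactly
```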


I am not sure if you want to write asm shaders, or if you want to know how they communicate in asm code.

 

Here is a per-pixel light that I made some time ago.

Vertex program:

!!ARBvp1.0

# Modelview-projection matrix rows, supplied as program locals 1-4.
PARAM MVP1 = program.local[1];
PARAM MVP2 = program.local[2];
PARAM MVP3 = program.local[3];
PARAM MVP4 = program.local[4];

# Transform the vertex into clip space, one DP4 per matrix row.
TEMP vertexClip;
DP4 vertexClip.x, MVP1, vertex.position;
DP4 vertexClip.y, MVP2, vertex.position;
DP4 vertexClip.z, MVP3, vertex.position;
DP4 vertexClip.w, MVP4, vertex.position;

MOV result.position, vertexClip;
MOV result.color, vertex.color;
MOV result.texcoord[0], vertex.texcoord;
# Pass the vertex position out through texture coordinates 1 and 2.
MOV result.texcoord[1].x, vertex.position.x;
MOV result.texcoord[1].y, vertex.position.y;
MOV result.texcoord[2].x, vertex.position.z;

END
result.texcoord[0] is where you store values so you can pass them to the fragment program. There are about 8 active texture units, so you can use the range 0..7.
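The four DP4 instructions above are just a 4x4 matrix multiply done one row at a time; a minimal Python sketch (identity matrix chosen only for illustration):

```python
# DP4 is a 4-component dot product; four of them against the MVP rows
# produce the clip-space position.
def dp4(row, v):
    return sum(r * c for r, c in zip(row, v))

mvp_rows = [                      # identity MVP, for illustration only
    (1.0, 0.0, 0.0, 0.0),
    (0.0, 1.0, 0.0, 0.0),
    (0.0, 0.0, 1.0, 0.0),
    (0.0, 0.0, 0.0, 1.0),
]
position = (2.0, 3.0, 4.0, 1.0)
vertex_clip = tuple(dp4(row, position) for row in mvp_rows)
```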

Then the fragment program:

!!ARBfp1.0

PARAM LIGHT_POS    = program.local[0];
PARAM LIGHT_COLOR  = program.local[1];
PARAM LIGHT_RADIUS = program.local[2];

TEMP DISTANCE;
TEMP DSTa;
TEMP A;
TEMP B;

# Reassemble the interpolated vertex position from texcoords 1 and 2.
MOV A, LIGHT_POS;
MOV B.x, fragment.texcoord[1].x;
MOV B.y, fragment.texcoord[1].y;
MOV B.z, fragment.texcoord[2].x;

# Squared distance, component by component.
SUB DISTANCE.x, A.x, B.x;
SUB DISTANCE.y, A.y, B.y;
SUB DISTANCE.z, A.z, B.z;

MUL DISTANCE.x, DISTANCE.x, DISTANCE.x;
MUL DISTANCE.y, DISTANCE.y, DISTANCE.y;
MUL DISTANCE.z, DISTANCE.z, DISTANCE.z;

ADD DSTa.x, DISTANCE.x, DISTANCE.y;
ADD DSTa.x, DSTa.x, DISTANCE.z;

# RSQ gives 1/sqrt(d2); RCP reciprocates that into sqrt(d2).
RSQ DSTa.x, DSTa.x;
RCP DSTa.x, DSTa.x;
# Now we have the distance from the light position to the fragment/vertex.

# Intensity equation: 1.0 - (dst / radius), clamped at zero.
TEMP R;
MOV R.x, LIGHT_RADIUS.x;
RCP R.x, R.x;
MUL R.y, R.x, DSTa.x;
SUB R.z, 1.0, R.y;
MAX R.z, R.z, 0.0;

TEMP COL;
MOV COL, LIGHT_COLOR;
MUL COL.x, COL.x, R.z;
MUL COL.y, COL.y, R.z;
MUL COL.z, COL.z, R.z;
MOV COL.w, 1.0;

MOV result.color, COL;

END




Here I pass the texture coordinate of the vertex through

MOV result.texcoord[0], vertex.texcoord;

 

and pass the vertex position (actually it should be multiplied by the world matrix, but I was using an identity world matrix, so I didn't have to implement that in the program):
MOV result.texcoord[1].x, vertex.position.x;
MOV result.texcoord[1].y, vertex.position.y;
MOV result.texcoord[2].x, vertex.position.z;

 

 

 

Then I read the vertex position back in the fragment shader, compute the distance, and apply the light intensity to that fragment (since we now know the fragment's position sits in texcoord[1].xy and texcoord[2].x).
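The fragment program's math boils down to distance-based attenuation. A plain Python sketch of the same logic (function name and the sample positions are illustrative, not part of the shader):

```python
import math

# The fragment program in plain form:
# intensity = max(0, 1 - distance / radius), then scale the light color.
def point_light(frag_pos, light_pos, light_color, radius):
    d = math.dist(frag_pos, light_pos)       # the RSQ/RCP pair in the shader
    intensity = max(1.0 - d / radius, 0.0)   # the SUB + MAX
    r, g, b = (c * intensity for c in light_color)
    return (r, g, b, 1.0)

# A fragment 150 units from a red light of radius 300 gets half brightness.
color = point_light((150.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                    (1.0, 0.0, 0.0), 300.0)
```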

 

result:

 

asmppl.jpg

 

 

Usually texcoord[ i ] precision is supposed to be defined by the texture bound to glActiveTexture(GL_TEXTUREi), so if you use a depth component format it would be 24-bit, and so on. But I highly doubt that applies here: in ARB 1.0 I think the interpolants have full 32-bit precision, since this shader works (note that the radius of that red light is at least 300.0) and I didn't bind anything to GL_TEXTURE1.

 

 

 

 

There's also lighting here along with a shadow map, in ARB vp/fp:

shadtest1.jpg

 

The color of the fragment was determined in the vertex program, not in the fragment program.


W I R E D C A T,
Thanks for the detailed answer. Correct me if I am wrong: data can be stored only in output registers, no other options, and this data will then be linearly interpolated and given as input to the fragment program. So the number of parameters that can go to the fragment program is pretty low.

Yes, I want to write vertex and fragment programs in asm. My graphics card is not cutting edge, so it is a good option.


Yes, I want to write vertex and fragment programs in asm. My graphics card is not cutting edge, so it is a good option.


Someone can correct me if I'm wrong, but as I understand it that is not true, at least for OpenGL. Shaders are compiled by the driver into GPU-specific instructions; there is no general shader assembly. Even if you knew your specific GPU's instructions, you would still need to submit the program through the OpenGL shader interface, which expects C-like code, and even if you managed to submit the GPU instructions, they would only be guaranteed to work with that specific GPU from that specific vendor.
 
Nevermind... Google proved me wrong. I didn't know you could write shaders in assembly, although the ARB assembly language wiki doesn't make this look like a promising endeavor.

MarkS,

I would partially agree. For example, there is an XPD (cross product) instruction in ARB, but NV_vertex_program doesn't have this instruction, so it is likely a macro. By asm I mean ARB or NV; I am not going to access the GPU directly. It just works faster than GLSL on my PC.
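XPD computes the standard 3-component cross product; on hardware without a native instruction it is commonly expanded into a MUL plus a MAD using swizzles, roughly `MUL tmp, a.zxyw, b.yzxw; MAD result, a.yzxw, b.zxyw, -tmp;`. A Python sketch of the same operation:

```python
# The cross product that ARB's XPD instruction computes.
def xpd(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

c = xpd((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))   # x cross y gives z
```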


Well, I think you can only pass 24 floats to the fragment shader. If you can use GLSL then do it; ARB programs can't do many instructions, about 90. And it's better to write in GLSL anyway, it's way easier.


