shodanjr_gr

Camera aligned/anchored quads - how can it be done?

Recommended Posts

I want to do some things in shaders at certain intervals in front of the camera, covering the entire viewing area. My idea is to render quads at certain points in front of the camera and do my processing on the fragments of those quads as they arrive. These quads have to be perpendicular to the camera vector (which means they have to behave as billboards, something I am familiar with), but they also have to move as the camera moves, and their relation with the camera in world coordinates has to be static. Any idea how I can do that? Thanks a lot :)

Quote:
Original post by shodanjr_gr
they also have to move as the camera moves and their relation with the camera in world coordinates has to be static.


I think this can be done by setting up the object's transformation before setting up the transformations for the camera. This way the object will not be affected by the camera transformation, and its spatial relationship with the camera will be determined solely by the object's transformation.

I hope this makes sense...
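A quick numeric check of why this works, as a minimal sketch in plain C with a translation-only camera (no rotation, and the helper names are made up): skipping the camera transform is the same as placing the object at camera + offset in world space, so applying the view transform always recovers the same camera-relative position.

```c
#include <assert.h>

/* Toy model of the idea: with a translation-only view transform,
 * "eye space" is just world space minus the camera position. */

/* World position that keeps an object at a fixed camera-relative offset. */
static void camera_locked_world_pos(const float cam[3], const float offset[3],
                                    float out[3]) {
    for (int i = 0; i < 3; i++)
        out[i] = cam[i] + offset[i];
}

/* Apply the (translation-only) view transform: world -> eye space. */
static void world_to_eye(const float cam[3], const float world[3],
                         float out[3]) {
    for (int i = 0; i < 3; i++)
        out[i] = world[i] - cam[i];
}
```

However far the camera moves, feeding the camera-locked world position back through the view transform lands on the original offset, which is exactly the "static relation with the camera" the original post asks for.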

Quote:
Original post by Gage64
Quote:
Original post by shodanjr_gr
they also have to move as the camera moves and their relation with the camera in world coordinates has to be static.


I think this can be done by setting up the object's transformation before setting up the transformations for the camera. This way the object will not be affected by the camera transformation, and its spatial relationship with the camera will be determined solely by the object's transformation.

I hope this makes sense...


Thanks for the speedy reply Gage :)

That's what I've been trying to do, actually, but in a slightly different way. I am multiplying by the inverse of the camera's modelview matrix before I draw the quads, but it doesn't seem to work; my quads are nowhere to be seen (inside the camera volume, at least).

Bonus question:

Is there a way in OpenGL to extract the model matrix and the view matrix separately? (So I can move from object space to world space and then to eye space.)

Model and View are the same matrix; further, yes, there is an easy way to retrieve the current modelview and projection matrices.

GLfloat mvStorage[16], projStorage[16];

glGetFloatv(GL_MODELVIEW_MATRIX, mvStorage);
glGetFloatv(GL_PROJECTION_MATRIX, projStorage);

The inverse operation is glLoadMatrixf(mvStorage). Mind that OpenGL matrices are column-major.
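Since the column-major layout trips a lot of people up, here is what the 16-float array you get back actually looks like for a pure translation. This sketch mimics what glTranslatef applied to identity would store (the helper name is made up):

```c
#include <string.h>

/* Build the 4x4 matrix glTranslatef(x, y, z) would produce on top of
 * identity, stored the way OpenGL stores it: column-major, so element
 * (row r, column c) lives at index c*4 + r. */
static void make_translation(float m[16], float x, float y, float z) {
    memset(m, 0, 16 * sizeof(float));
    m[0] = m[5] = m[10] = m[15] = 1.0f; /* identity diagonal */
    m[12] = x; /* the 4th column holds the translation... */
    m[13] = y;
    m[14] = z; /* ...not the 4th row, as row-major habits suggest */
}
```

If you treat the array as row-major you will read the translation out of the wrong slots, which is a classic source of "my quads are nowhere to be seen" bugs.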

Quote:
Original post by Wyrframe
Model and View are the same matrix; further, yes, there is an easy way to retrieve the current modelview and projection matrices.

GLfloat mvStorage[16], projStorage[16];

glGetFloatv(GL_MODELVIEW_MATRIX, mvStorage);
glGetFloatv(GL_PROJECTION_MATRIX, projStorage);

The inverse operation is glLoadMatrixf(mvStorage). Mind that OpenGL matrices are column-major.


Erm, in my understanding the model matrix transforms vertices from object space to world space coordinates. The view matrix then transforms vertices from world space to eye space. The modelview matrix in OpenGL does both transformations at the same time.

Right?

edit: To elaborate, I need the WORLD matrix since I want a way to statically reference vertices (I want to use a vertex's world XZ coordinates to get a noise value, and I want two vertices at the same XZ spot to give me the same noise value).

Correct, the OpenGL modelview matrix is a product of both the model and the view matrix (so no distinguishing between them like in D3D). Basically, you can see the view matrix as the "model" matrix that transforms the entire world to eye space.

If I understand your initial question correctly, do you want to have quads that always have the same size and position on the screen, independent of the camera?

You could do this by just using the identity modelview matrix and adjust the quads' sizes and positions according to the projection matrix you use.

I do this for 2D text, for example. I use an orthogonal projection matrix (dimensions matching screen resolution) and just set the text coordinates in "pixels".
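The pixel-coordinate trick works because an ortho projection like glOrtho(0, width, 0, height, -1, 1) is just a linear remap of pixel coordinates into the [-1, 1] clip range. A sketch of that mapping (the helper name is made up):

```c
/* Map a pixel coordinate to normalized device coordinates, as
 * glOrtho(0, extent, ...) would along one axis: 0 -> -1, extent -> +1. */
static float pixel_to_ndc(float pixel, float extent) {
    return 2.0f * pixel / extent - 1.0f;
}
```

So placing text at "pixel" coordinates and placing quads directly in clip space are the same operation up to this remap.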

Quote:
Original post by Lord_Evil
Correct, the OpenGL modelview matrix is a product of both the model and the view matrix (so no distinguishing between them like in D3D). Basically, you can see the view matrix as the "model" matrix that transforms the entire world to eye space.

If I understand your initial question correctly, do you want to have quads that always have the same size and position on the screen, independent of the camera?

You could do this by just using the identity modelview matrix and adjust the quads' sizes and positions according to the projection matrix you use.

I do this for 2D text, for example. I use an orthogonal projection matrix (dimensions matching screen resolution) and just set the text coordinates in "pixels".


So there is no way to separate the two matrices, right?

I suppose I can always "create" the world matrix myself by saving all the translations/rotations/scales and passing it to the shader as a uniform.

As far as the quads go, not exactly. I do not want to produce a "HUD" (which would have me throwing up quads in an ortho projection). I want to be able to do stuff (for instance, produce some particle effects off a texture) at certain points "in front" of the camera. Does that make sense?

Thanks for taking the time to reply :)

Well, it could be easier to just remember the camera/view matrix and pass that one to the shader. Knowing that one, you should be able to separate the view and model matrices.

Particles would be billboards, wouldn't they? You could pass the vertices with the center location and then use the texture coords to move them into position in the vertex shader. Basically it could be like:

out.pos.x = in.pos.x + (in.tex.x - 0.5) * functionOfDepth;
//y accordingly

You just have to find a suitable functionOfDepth.

Quote:
Original post by Lord_Evil
Well, it could be easier to just remember the camera/view matrix and pass that one to the shader. Knowing that one, you should be able to separate the view and model matrices.

Particles would be billboards, wouldn't they? You could pass the vertices with the center location and then use the texture coords to move them into position in the vertex shader. Basically it could be like:

out.pos.x = in.pos.x + (in.tex.x - 0.5) * functionOfDepth;
//y accordingly

You just have to find a suitable functionOfDepth.


Can you expand a bit on this? I don't think I understand what you mean.

First I have to say that, on second thought, there's no need for that functionOfDepth; just pass the size of the quad as a parameter.

You could pass the 4 vertices of each quad with the position equal to the quad's center, e.g. (0, 0, 0) for a quad centered at the origin.

In your vertex shader you transform that position to eye space using the modelview matrix. You now have the quad position relative to the camera.

Next, move the vertices to their real positions. In order to know in what direction to move them on the x and y axes you can read the texture coordinates. Assuming you have coords of 0.0f or 1.0f you could say:

if texcoord.x == 0.0f move the vertex to the left, if texcoord.x == 1.0f move it to the right (same for y).

Since branching in shaders is somewhat expensive you could say

direction.x = texcoord.x * 2.0f - 1.0f

which would give you direction.x = -1.0f for texcoord.x = 0.0f and direction.x = 1.0f for texcoord.x = 1.0f.

Now you apply the size of the quad and finally project the vertex to screen space.

Summarizing, your output position could be calculated like this:


OUT.pos = mul(IN.pos, modelview); //transform the vertex to eye space
OUT.pos.xy += (IN.tex.xy - 0.5f) * IN.size.xy; //apply the size and move the vertex by size/2 according to texture coordinate, i.e. tcoord = 0.0f -> move by -size/2
OUT.pos = mul(OUT.pos, projection); //project the vertex to screen space
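The texture-coordinate trick boils down to a simple per-corner offset. Here it is in plain C rather than shader code so it's easy to check (names are made up); it computes the same (tex - 0.5) * size term as above:

```c
/* Offset a billboard corner from the quad's center using its texture
 * coordinate: tex 0.0 moves the corner by -size/2, tex 1.0 by +size/2. */
static void corner_offset(float tex_u, float tex_v,
                          float size_x, float size_y,
                          float out[2]) {
    out[0] = (tex_u - 0.5f) * size_x;
    out[1] = (tex_v - 0.5f) * size_y;
}
```

Because the offset is added in eye space, after the modelview transform but before projection, the expanded quad always faces the camera.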

Quote:
Original post by Lord_Evil
stuff


Thanks for the reply

I actually got it to work in a different way.

Here is the code snippet:



void RenderCameraAlignedQuads(void)
{
    float modelview_matrix[16];
    float inverse_modelview[16];

    glPushMatrix();

    /* Cancel the camera transform: modelview * inverse(modelview) = identity,
     * so the quads below are effectively specified in eye space. */
    glGetFloatv(GL_MODELVIEW_MATRIX, modelview_matrix);
    m4_inverse(inverse_modelview, modelview_matrix);
    glMultMatrixf(inverse_modelview);

    for (int i = 0; i < no_quads; i++)
    {
        glPushMatrix();
        glTranslatef(0.0f, 0.0f, -i * 10.0f); /* push each quad further away */

        glColor3f(0.5f, 0.5f, 0.4f);
        glBegin(GL_QUADS);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f,  1.0f, 0.0f);
        glEnd();

        glPopMatrix();
    }

    glPopMatrix();
}



Now what I need is to scale each quad so that, when transformed to clip space, it fills up the screen, and I am done :)

Quote:
Original post by shodanjr_gr
glGetFloatv(GL_MODELVIEW_MATRIX, modelview_matrix);
m4_inverse(inverse_modelview, modelview_matrix);
glMultMatrixf(inverse_modelview);

What you do here is load the modelview matrix, calculate its inverse, and multiply the current matrix by that inverse. The result will be (or should be, barring precision errors) the identity matrix.

So replace those 3 lines with glLoadIdentity(); for more efficiency.


Edit: what you do is render quads in the center of the screen and just move them further away from the camera. So you are doing exactly what I asked about earlier: "If I understand your initial question correctly, do you want to have quads that always have the same size and position on the screen, independent of the camera?" ;)

Just keep in mind that the first quad would never be rendered if perspective projection were applied, since you wouldn't have a near clipping plane at 0.0.

Edit2: if you want each quad to fill the screen, just don't apply any projection matrix. Clip space goes from -1 to 1, so your quads already have unprojected full-screen size. But you should rethink the z-values then.

If it's about full-screen postprocessing, simply don't transform in your vertex-shader:

C++:

void ulDrawFullScreenQuad2D(){
    glDisable(GL_DEPTH_TEST);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glEnable(GL_DEPTH_TEST);
}




shader:

#include "system.h"

varying vec2 coord0;

#if IS_VERTEX
void main(){
    gl_Position = gl_Vertex;
    coord0 = gl_MultiTexCoord0.xy;
}
#endif

#if IS_FRAGMENT
uniform sampler2D tex0;
void main(){
    gl_FragColor = texture2D(tex0, coord0);
}
#endif




You can then also use the transformation matrices to compute some varying parameter to pass to the fragment shader, if you'll be making a view-dependent effect. The above base code takes care of the vertex-position transformation headaches easily and flawlessly.
