Camera-aligned/anchored quads - how can it be done?

Started by
11 comments, last by idinev 15 years, 9 months ago
I want to do some things in shaders at certain intervals in front of the camera, covering the entire viewing area. My idea was to render quads at certain points in front of the camera and do my processing on the fragments of those quads as they arrive. These quads have to be perpendicular to the camera vector (which means they have to behave as billboards, which is something I am familiar with), but they also have to move as the camera moves and their relation with the camera in world coordinates has to be static. Any idea how I can do that? Thanks a lot :)
Quote:Original post by shodanjr_gr
they also have to move as the camera moves and their relation with the camera in world coordinates has to be static.


I think this can be done by setting up the object's transformation before setting up the transformations for the camera. This way the object will not be affected by the camera transformation, and its spatial relationship with the camera will be determined solely by the object's transformation.

I hope this makes sense...
Quote:Original post by Gage64
Quote:Original post by shodanjr_gr
they also have to move as the camera moves and their relation with the camera in world coordinates has to be static.


I think this can be done by setting up the object's transformation before setting up the transformations for the camera. This way the object will not be affected by the camera transformation, and its spatial relationship with the camera will be determined solely by the object's transformation.

I hope this makes sense...


Thanks for the speedy reply Gage :)

That's what I've been trying to do actually, but in a slightly different way. I am multiplying by the inverse of the camera's modelview matrix before I draw the quads, but it doesn't seem to work; my quads are nowhere to be seen (inside the view volume, at least).
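For reference, this is roughly the math that "inverse of the camera matrix" involves. Below is a minimal C sketch (the function name `invert_rigid` is hypothetical, not from this thread), assuming the modelview contains only rotation and translation, stored column-major the way OpenGL stores it:

```c
/* Invert a column-major 4x4 rigid-body matrix (rotation + translation),
 * laid out as OpenGL stores it: m[col * 4 + row].
 * For M = [R | t], the inverse is [R^T | -R^T t]. */
void invert_rigid(const float m[16], float out[16])
{
    /* Transpose the 3x3 rotation block. */
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            out[c * 4 + r] = m[r * 4 + c];

    /* New translation = -R^T * t. */
    for (int r = 0; r < 3; ++r)
        out[12 + r] = -(out[0 * 4 + r] * m[12] +
                        out[1 * 4 + r] * m[13] +
                        out[2 * 4 + r] * m[14]);

    /* Bottom row stays (0, 0, 0, 1). */
    out[3] = out[7] = out[11] = 0.0f;
    out[15] = 1.0f;
}
```

Note that multiplying the camera matrix by its inverse just yields the identity, so an equivalent (and cheaper) approach for camera-anchored quads is to load an identity modelview and place the quads directly in eye space.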

Bonus question:

Is there a way in OpenGL to extract the model matrix and the view matrix separately? (so I can move from object space to world space and then to eye space)
Model and View are the same matrix; further, yes, there is an easy way to retrieve the current modelview and projection matrices.

GLfloat mvStorage[16], projStorage[16];

glGetFloatv(GL_MODELVIEW_MATRIX, mvStorage);
glGetFloatv(GL_PROJECTION_MATRIX, projStorage);

The inverse operation is glLoadMatrixf(mvStorage). Mind that OpenGL matrices are column-major.
Quote:Original post by Wyrframe
Model and View are the same matrix; further, yes, there is an easy way to retrieve the current modelview and projection matrices.

GLfloat mvStorage[16], projStorage[16];

glGetFloatv(GL_MODELVIEW_MATRIX, mvStorage);
glGetFloatv(GL_PROJECTION_MATRIX, projStorage);

The inverse operation is glLoadMatrixf(mvStorage). Mind that OpenGL matrices are column-major.


Erm, in my understanding the model matrix transforms vertices from object-space to world-space coordinates. Then the view matrix transforms vertices from world space to eye space. The modelview matrix in OpenGL does both transformations at once.

Right?

edit: To elaborate, I need the WORLD matrix since I want a way to statically reference vertices (I want to use a vertex's world XZ coordinates to look up a noise value, and I want two vertices at the same XZ spot to give me the same noise value).
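Since OpenGL only stores the combined modelview, one option is to track the model matrix yourself on the application side and pass it to the shader as a uniform. A minimal C sketch of the world-space lookup that implies (the helper name `to_world` is hypothetical; matrices are column-major as in OpenGL):

```c
/* Transform an object-space point to world space with a column-major
 * 4x4 model matrix tracked manually on the CPU (OpenGL itself never
 * stores a separate model matrix). The world XZ of the result can then
 * be fed to a noise function, and two vertices at the same world spot
 * will always produce the same value. */
void to_world(const float model[16], const float obj[3], float world[3])
{
    for (int r = 0; r < 3; ++r)
        world[r] = model[0 * 4 + r] * obj[0] +
                   model[1 * 4 + r] * obj[1] +
                   model[2 * 4 + r] * obj[2] +
                   model[3 * 4 + r];   /* translation column, w = 1 */
}
```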
Correct, the OpenGL modelview matrix is the product of the model and the view matrix (so there is no distinguishing between them like in D3D). Basically, you can see the view matrix as the "model" matrix that transforms the entire world to eye space.

If I understand your initial question correctly, do you want to have quads that always have the same size and position on the screen, independent of the camera?

You could do this by just using the identity modelview matrix and adjust the quads' sizes and positions according to the projection matrix you use.

I do this for 2D text, for example. I use an orthogonal projection matrix (dimensions matching screen resolution) and just set the text coordinates in "pixels".
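That ortho setup can be written as plain matrix construction too. A sketch, assuming the same values glOrtho would multiply onto the projection stack (the helper name `ortho_matrix` is made up here):

```c
/* Build the column-major matrix glOrtho(l, r, b, t, n, f) produces.
 * For 2D text, typical arguments are (0, screenWidth, screenHeight, 0,
 * -1, 1), so vertex coordinates are simply pixels with y pointing down. */
void ortho_matrix(float l, float r, float b, float t, float n, float f,
                  float out[16])
{
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = 2.0f / (r - l);          /* x scale */
    out[5]  = 2.0f / (t - b);          /* y scale */
    out[10] = -2.0f / (f - n);         /* z scale */
    out[12] = -(r + l) / (r - l);      /* x translation */
    out[13] = -(t + b) / (t - b);      /* y translation */
    out[14] = -(f + n) / (f - n);      /* z translation */
    out[15] = 1.0f;
}
```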
Quote:Original post by Lord_Evil
Correct, the OpenGL modelview matrix is the product of the model and the view matrix (so there is no distinguishing between them like in D3D). Basically, you can see the view matrix as the "model" matrix that transforms the entire world to eye space.

If I understand your initial question correctly, do you want to have quads that always have the same size and position on the screen, independent of the camera?

You could do this by just using the identity modelview matrix and adjust the quads' sizes and positions according to the projection matrix you use.

I do this for 2D text, for example. I use an orthogonal projection matrix (dimensions matching screen resolution) and just set the text coordinates in "pixels".


So there is no way to separate the two matrices, right?

I suppose I can always "create" the world matrix by tracking all the translations/rotations/scales myself and pass it to the shader as a uniform.

As far as the quads go, not exactly. I do not want to produce a "HUD" (which would have me drawing quads in an ortho projection). I want to be able to do stuff (for instance, produce some particle effects off a texture) at certain points "in front" of the camera. Does that make sense?

Thanks for taking the time to reply :)
Well, it could be easier to just remember the camera/view matrix and pass that one to the shader. Knowing that one, you should be able to separate the view and model matrices.

Particles would be billboards, wouldn't they? You could pass the vertices with the center location and then use the texture coords to move them into position in the vertex shader. Basically it could be like:

out.pos.x = in.pos.x + (in.tex.x - 0.5) * functionOfDepth;
//y accordingly

You just have to find a suitable functionOfDepth.
Quote:Original post by Lord_Evil
Well, it could be easier to just remember the camera/view matrix and pass that one to the shader. Knowing that one, you should be able to separate the view and model matrices.

Particles would be billboards, wouldn't they? You could pass the vertices with the center location and then use the texture coords to move them into position in the vertex shader. Basically it could be like:

out.pos.x = in.pos.x + (in.tex.x - 0.5) * functionOfDepth;
//y accordingly

You just have to find a suitable functionOfDepth.


Can you expand a bit on this? I don't think I understand what you mean.
First I have to say that, on second thought, there's no need for that functionOfDepth; just pass the size of the quad as a parameter.

You could pass the 4 vertices of each quad with the position equal to the quad's center, e.g. (0, 0, 0) for a quad centered at the origin.

In your vertex shader you transform that position to eye space using the modelview matrix. You now have the quad position relative to the camera.

Next, move the vertices to their real positions. In order to know in what direction to move them on the x and y axes you can read the texture coordinates. Assuming you have coords of 0.0f or 1.0f you could say:

if texcoord.x == 0.0f move the vertex to the left, if texcoord.x == 1.0f move it to the right (same for y).

Since branching in shaders is somewhat expensive, you could instead write

direction.x = texcoord.x * 2.0f - 1.0f

which would give you direction.x = -1.0f for texcoord.x = 0.0f and direction.x = 1.0f for texcoord.x = 1.0f.

Now you apply the size of the quad and finally project the vertex to screen space.

Summarizing, your output position could be calculated like this:

OUT.pos = mul(IN.pos, modelview);               //transform the vertex to eye space
OUT.pos.xy += (IN.tex.xy - 0.5f) * IN.size.xy;  //apply the size and move the vertex by size/2 according to texture coordinate, i.e. tcoord = 0.0f -> move by -size/2
OUT.pos = mul(OUT.pos, projection);             //project the vertex to screen space
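The same math can be checked on the CPU. A minimal C sketch of the billboard expansion described above (the helper name `billboard_vertex` is hypothetical; matrices are column-major 4x4, as in OpenGL):

```c
/* CPU version of the vertex-shader billboard math: transform the quad's
 * center to eye space, then offset the vertex in eye space according to
 * its texture coordinate (0 -> -size/2, 1 -> +size/2). The projection
 * matrix would be applied to the result next. */
void billboard_vertex(const float modelview[16],
                      const float center[3],  /* same for all 4 vertices */
                      const float tex[2],     /* 0.0f or 1.0f per corner */
                      const float size[2],
                      float out[4])
{
    /* Eye-space position of the center (w assumed 1). */
    for (int r = 0; r < 4; ++r)
        out[r] = modelview[0 * 4 + r] * center[0] +
                 modelview[1 * 4 + r] * center[1] +
                 modelview[2 * 4 + r] * center[2] +
                 modelview[3 * 4 + r];

    /* Expand the corner in eye space so the quad always faces the camera. */
    out[0] += (tex[0] - 0.5f) * size[0];
    out[1] += (tex[1] - 0.5f) * size[1];
}
```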
