tomek22

Is there a way to sort polygons in glsl?


Hi!

I am currently implementing a particle engine which uses a GLSL fragment shader for position and velocity calculation via a framebuffer with two textures bound to it. Then I read back the texture data and draw sprites with semitransparent textures.

My question is: is there an efficient way to sort those sprites in GLSL, or do I have to sort them in the application?

Unrelated question: is there an easy way to implement billboarding in GLSL? I would like the sprites to face the camera at all times.

Thanks!

Look up point sprites for billboarding. You shouldn't need to sort particles based on depth, though.

But sorting polygons in GLSL = no. GLSL only shades a given input triangle; you don't have access to anything other than the currently drawn triangle.
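For the point-sprite route, a minimal vertex/fragment pair might look like this (a sketch in GL 2.0-era GLSL; the uniform names and the fixed sprite size are made up for illustration, and host-side you would also enable GL_POINT_SPRITE, GL_VERTEX_PROGRAM_POINT_SIZE, and GL_COORD_REPLACE):

```glsl
// vertex shader
uniform mat4 modelViewProjection; // assumed uniform name
attribute vec3 particlePos;

void main() {
    gl_Position = modelViewProjection * vec4(particlePos, 1.0);
    gl_PointSize = 16.0; // screen-space size in pixels; could attenuate by distance
}

// fragment shader
uniform sampler2D spriteTex; // assumed texture name

void main() {
    // gl_PointCoord runs 0..1 across the point sprite, so the quad
    // always faces the camera with no billboard math at all.
    gl_FragColor = texture2D(spriteTex, gl_PointCoord);
}
```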

Thanks for the reply!

Why shouldn't I have to sort the particles? How do I integrate them into the scene if I have to turn off the depth test?

Another unrelated question (again): I'm using two 1D textures for particle velocity and position, with RGB for XYZ, and I want to use the alpha channel for particle fade, but I can't seem to do so. Every time I change the alpha, it scales the RGB components! What do I do?

thanks

What dpadam450 said is not that you don't have to sort, but that you don't have access to that kind of functionality, which is almost true. You have access to a single vertex in the vertex shader and a single triangle in the geometry shader (possibly with adjacency information), and that is it.

Technically, however, you could sort particles in GLSL using fragment shaders, though it is questionable whether that makes much sense. Algorithms such as bitonic sort have been implemented in GLSL; they let you sort data stored in a texture over several passes.

This is a lot of pain to go through compared to just using a blending mode (additive, for example) that does not require depth sorting.
Or, if your billboards are mostly opaque with only some transparency in the border regions, you could just say "so what" to the artifacts from not sorting; they won't be large, and chances are good nobody will notice anyway.

Look up depth masking (glDepthMask()). You integrate them by depth testing against what you already have in the scene, but not writing any depth for the particles. That way the particles never occlude each other, so you don't need to depth sort them and can still blend them.
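In GL calls that looks roughly like this (a sketch; the GL functions are real, the two draw calls are placeholders for your own passes):

```c
/* 1. Draw the opaque scene normally: depth test and depth writes on. */
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawScene();                       /* placeholder for your scene pass */

/* 2. Draw particles: still depth-tested against the scene,
 *    but without writing depth, so particles never occlude each other. */
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); /* additive blending is order-independent */
drawParticles();                   /* placeholder for your particle pass */

/* 3. Restore state for the next frame. */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```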

I have another question, but I didn't want to open another thread. Maybe someone can help me out...

So, I created a simple particle engine which uses a GLSL fragment shader for the particle physics calculation. Basically, I create a float array which represents the particle data (position and velocity). Initially, I set the position to (0, 0, 0) and the velocity to some random value. The calculation in the shader is pretty speedy. The problem is reading the calculated data back in order to draw the particles. I use glReadPixels, which as I understand it is costly, and it makes my application, which usually uses 5-10% of the processor, use about 90-95%.

My question is this: is there an easy workaround for glReadPixels to read back the calculated data?

I thought of using a geometry shader to bypass reading back the texture data entirely and creating the particle sprites there instead, but I'm using OpenGL 2.0, which doesn't support geometry shaders. If there isn't a good workaround for this problem, switching to a newer GL version is an option. Does OpenGL 3.2 have geometry shaders?

So you're:

1) Writing position data to a texture in a 'physics' shader.
2) Reading the texture back to get positions.
3) Using these positions to send particles to be drawn?

Can you just have your particles get their positions from the texture when rendering instead of sending their positions to be rendered?

Instead of sending a quad or point's position as an attribute, can you just send its 'index' into the texture?

Let's say your texture has the position (5,5,5) where you want the particle to be rendered, and it's particle '0'.

Just send four vertices to the vertex shader with their index and offsets (+/- half the width and height of the sprite):

vertex shader pseudocode:

in int index;
in vec3 offset;

uniform sampler1D tex;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;

void main() {

    vec3 center = textureIntegerIndex(tex, index).xyz; // forgot exactly what this sampler is called
    vec3 position = center + offset;

    gl_Position = projectionMatrix * viewMatrix * vec4(position, 1.0);

}


hmm

Didn't know you could do that, but if you can, that would be awesome! Also, since those indices are not going to change, I can put the entire particle render loop into a display list (one for each particle emitter).

But there's a problem, though: when I have the data in the application, I can see whether the particles are still active, and when they're all inactive, I can stop the emitter. Is there a way to send this info from the shader to the app? You can't write to uniform variables, right?

I don't think there's any way to communicate back with the program without resorting to glReadPixels again, which you'd want to avoid.

What criteria are you using to determine whether a particle is 'active' or 'inactive'? Can you deactivate them based on time since creation or something? Or you could be lazier about it and read the texture only once every couple of seconds, or read a small piece of it each frame to get the positions back. Then you can do a quick check of which ones are invalid; I wouldn't think you'd need this information every frame.

I think there is also a way to do asynchronous data reads if you don't need the data immediately, which would let the GPU send you the data when it is idle, but I forget the details. Maybe a PBO (pixel buffer object) is what you want for this.
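The PBO idea, sketched (the GL entry points are real; the buffer object `pbo` and the `width`/`height` variables are assumed to have been created elsewhere):

```c
/* Frame N: kick off an asynchronous readback into a pixel pack buffer.
 * With a PBO bound, glReadPixels returns immediately instead of stalling:
 * the last argument is a byte offset into the PBO, not a client pointer. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0);

/* Frame N+1 (or later): map the buffer and inspect the data.
 * By now the transfer has usually finished, so mapping is cheap. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
float *data = (float *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (data) {
    /* check which particles are still alive, etc. */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

The point is to put a frame or two between the glReadPixels call and the glMapBuffer call, so the CPU never waits on an in-flight transfer.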

Each particle is assigned a random value which controls its life span; this parameter is then multiplied by the particle velocity. I guess I could precompute the life expectancy of the longest-living particle before sending it to the shader, but I would like a solution that will work for all the different particle effects I plan to make in the future.

Now that I think about it, I can deal with this by modifying the emitter interface in the application so that all emitters can calculate this on their own. I guess that should work.

Thanks a lot for the tip, you've been very helpful!

hmmm

looks like there's another problem :)

Here's the prototype for the texture1D function:

vec4 texture1D( sampler1D, float [,float bias] )

It takes a float, not an int, which suggests I would have to send in float values from 0 to 1 from my application. How can I be sure about the precision?

I think you want texelFetch(), not texture1D().

See the section here about "direct texel fetches"

Also, you have to be really careful when sending integers as attributes: OpenGL sends its attributes as floats, even if you specify the attribute type as int. (I spent a week figuring out this nasty mistake; when I sent '1' as an attribute, it came out as the integer 1,080,000,000.)

Make sure you use glVertexAttribIPointer instead of glVertexAttribPointer for integers.
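Concretely, something like this (a sketch; `loc`, `stride`, and `offset` are assumed to come from your own attribute setup):

```c
/* WRONG for an `in int index;` attribute: the integer data is converted
 * to float on its way to the shader, so the bit pattern gets mangled. */
glVertexAttribPointer(loc, 1, GL_INT, GL_FALSE, stride, offset);

/* RIGHT: glVertexAttribIPointer keeps the attribute as a true integer
 * end to end, with no float conversion. */
glVertexAttribIPointer(loc, 1, GL_INT, stride, offset);
```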

