lines with GLSL

Posted by Farfadet

Hi, I'm trying to have my app render everything with shaders, which simplifies the code a lot. However, I wonder how glBegin(GL_LINES) is understood by the vertex shader; how do you set the line thickness, stipples, etc.? Or is it still required to use the fixed functionality for lines? I couldn't find anything on this in the orange book, nor on the net.

Until GLSL gets geometry shaders, you'll probably have to do this using fixed functionality. Ultimately, as far as I know, you can only affect vertex placement and fragment colours, not cast new vertices or fragments.

Quote:
Original post by biggoron
Until GLSL gets geometry shaders, you'll probably have to do this using fixed functionality. Ultimately, as far as I know, you can only affect vertex placement and fragment colours, not cast new vertices or fragments.


AFAIK, you can't change line thickness or stipple in a geometry shader.
The OP doesn't want to emit vertices.
If he wants to emit vertices, he can use GL_EXT_geometry_shader4.
So it is possible to use geometry shaders with GLSL; develop.nvidia.com has a demo.
Compare that to D3D9, which doesn't have geometry shaders: you would need Vista and D3D10 to get geometry shader support. You're better off with GL.

It's not possible until we have geometry shaders in the GLSL standard (OpenGL Mount Evans and OpenGL Longs Peak - OpenGL 3.0). Anyway, geometry shaders aren't the best place to do geometry "creation" - it'll be the bottleneck of your app. Why? Well, think about what gets sent when you rasterize: vertex arrays (RAM) or vertex buffer objects (GRAM - I named it like that, "Graphics RAM"), textures (RAM) or pixel buffer objects (GRAM), shaders/effects (GRAM), and physics data (well, the transformation data from physics) in RAM, or in PRAM ("Physics RAM", if you're using AGEIA's NovodeX and AGEIA's PhysX physics accelerator), or in GRAM (which I don't recommend - vertex texture fetches are a very big bottleneck and not a good way to do it). That's the basic data, and it all has to be sent from somewhere (RAM, PRAM, ...) to GRAM.

Graphics memory today is too small to hold all of that data. If it were big enough it would make some sense, but then it would be the same as a CPU with RAM (which wouldn't make any sense - it would just be a CPU on the card). You can store all your vertices in GRAM, but then you won't have enough room there for textures, physics VTFs (vertex texture fetches), etc. (With a small amount of data it would be possible, but I'm talking about the general case - streaming, big textures in memory, complex models in memory, ...) If you did it all on the GPU, textures would travel there (and not back), VTFs would travel there (and back!), physics data would travel there (and back!), and vertices would travel there (and back! - geometry shaders). You could keep some data in GRAM, but it wouldn't be enough. Although if you're writing a procedural demo (e.g. for the demoscene), go ahead - it would be much faster. The slowest part is moving data: transferring in one direction only is fine (and fast!) and probably not the bottleneck, but if you transfer in both directions the data stream does this:

You've got 10 parts of an object on the CPU. You send them to the GPU one part at a time (vertex arrays) and read them back from the geometry shader. What does the stream do?

Send 1st part - stall until the 1st part comes back (after rasterization) - get 1st part.

That's the worst thing about geometry shaders: you're waiting until you get the 1st part back (whereas without a GS you'd just keep sending part after part and read nothing back). So geometry shaders aren't the best solution here, because transferring between CPU and GPU isn't fast.

I apologize for the long post and the bit of spam (I got too carried away with geometry shaders, which I've never used - I'm using OpenGL and an ATI card, which is quite a wild combination, but it runs perfectly).

Thanks for the answers. I won't use geometry shaders anyway, for compatibility reasons. However, I'm not convinced that you can't use vertex and fragment shaders with lines, because:
1) Why not? Line rasterization is different from polygon rasterization, but it still interpolates pixel values from vertex data. Vertex shaders don't interfere with the interpolation function; they only program how things such as diffuse or specular color, texture coordinates, etc. are combined into per-vertex outputs, which are then interpolated and combined in the fragment shader. I don't see a good reason to forbid this for lines or points.
2) I didn't find anything in the 2.1 spec that says it's illegal.
So I guess I'll have to experiment. Another possibility would be to render lines as quads, the line's width becoming the quad's height. The shaders could then be tailored to implement stipples, antialiasing and so on. Thinking about it, you could even add some interesting functionality beyond the fixed pipeline (such as varying width). And I wouldn't be surprised if it rendered faster.
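For what it's worth, here is a minimal sketch (not from this thread) of that lines-as-quads idea in plain C/OpenGL, assuming a 2D orthographic projection with x and y in pixels; the width simply becomes geometry, so whatever vertex and fragment shaders are bound apply to the quad like to any other primitive:

#include <GL/gl.h>
#include <math.h>

/* Sketch: draw a thick line segment as a quad instead of GL_LINES, so the
 * width is just geometry.  Assumes an orthographic projection where x and y
 * are in pixels. */
static void draw_thick_line(float x0, float y0, float x1, float y1, float width)
{
    float dx = x1 - x0, dy = y1 - y0;
    float len = sqrtf(dx * dx + dy * dy);
    if (len == 0.0f)
        return;

    /* unit normal to the segment, scaled to half the desired width */
    float nx = -dy / len * 0.5f * width;
    float ny =  dx / len * 0.5f * width;

    glBegin(GL_QUADS);
    glVertex2f(x0 + nx, y0 + ny);
    glVertex2f(x0 - nx, y0 - ny);
    glVertex2f(x1 - nx, y1 - ny);
    glVertex2f(x1 + nx, y1 + ny);
    glEnd();
}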

Maybe I know how - send the line points (you must send them as GL_LINES) in a vertex texture fetch (a texture with the vertex positions written as the RGB color) and send the thickness in the alpha channel. In memory (4 variables, RGBA) it would look like this:

point1.x point1.y point1.z point1.thickness   point2.x point2.y point2.z point2.thickness
point3.x point3.y point3.z point3.thickness   point4.x point4.y point4.z point4.thickness

That would be 4 points in the vertex texture fetch. You'd need to decide how to render them - as a line loop, pt1-----pt2-----pt3-----pt4, or as standard lines (try standard lines first - it's probably the only way), pt1-----pt2  pt3-----pt4.

This would basically work as GPGPU (general-purpose GPU) programming. It would probably be fast, but I'm not sure. I'm much more interested in ray tracing (especially real-time ray tracing), and that's possible with GPGPU programming (although I'm doing everything on the CPU, taking advantage of multi-core), so I don't see why this wouldn't be. Geometry shaders would do the work of creating, translating, etc. the points. You can do the same with a vertex texture fetch, but it's a little tricky (you can write into pixels and render to a texture using FBOs, then feed that to the rasterizing shader, and so on).
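To make that concrete, here is a rough sketch (not from this thread; names such as lineData and pointIndex are made up) of the vertex-texture-fetch part, written as a GLSL source string in C. Note that even here the fetched thickness can only be passed along as a varying - the shader still cannot change the rasterized line width:

/* Sketch: line endpoints live in an RGBA float texture (rgb = position,
 * a = thickness) and the vertex shader reads them with a vertex texture
 * fetch.  Requires hardware that supports texture fetches in the vertex
 * shader. */
static const char *vtf_vertex_shader =
    "uniform sampler2D lineData;      /* one texel per line endpoint         */\n"
    "uniform float texelCount;        /* number of texels in the texture     */\n"
    "attribute float pointIndex;      /* which texel this vertex should read */\n"
    "varying float thickness;         /* informational only                  */\n"
    "void main()                                                              \n"
    "{                                                                        \n"
    "    vec2 uv = vec2((pointIndex + 0.5) / texelCount, 0.5);                \n"
    "    vec4 t  = texture2DLod(lineData, uv, 0.0);                           \n"
    "    thickness   = t.a;                                                   \n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(t.rgb, 1.0);       \n"
    "}                                                                        \n";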

Quote:
Original post by Vilem Otte
*mostly unintelligible gibberish*


Ok, firstly we already have a shorthand for graphics card RAM: VRAM, short for 'video RAM'. I recommend you use that if you want to communicate your ideas clearly.

Secondly, yes, when you are rendering you are sending a lot of information across, but a lot of it is streamed across as required, with the driver swapping it in from system RAM as needed.

I'm not remotely sure why you have all this data travelling to and from the card. At most you'll want to grab the geometry shader output if you want to apply physics to the object (though right now geometry shaders are generally used for LOD work such as retessellation, or other systems where the feedback isn't as important), and that would be a stream of vertices in some form (NV's feedback extension, or a PBO if you use render-to-vertex-buffer methods).

Also, your "advice" is based on the faulty idea that you do one thing, wait, do another, wait, and so on. You don't: you send all the parts and get the bits back in order, sometimes in the same frame; in some cases the method lets you wait a frame between updates.

Also, given your admission that you haven't even used a geometry shader, I think you should stop giving 'advice' on things you don't really know about, much like you presented your opinion as fact in the shadow thread. I would suggest you stop going off on long rants about subjects you don't understand or haven't used, because you are just giving out misleading information to people who don't know any better, and frankly I dislike that happening around here; stick to what you know.

Quote:
Original post by Vilem Otte
Maybe I know how - send the line points (you must send them as GL_LINES) in a vertex texture fetch (a texture with the vertex positions written as the RGB color) and send the thickness in the alpha channel. In memory (4 variables, RGBA) it would look like this:

point1.x point1.y point1.z point1.thickness   point2.x point2.y point2.z point2.thickness
point3.x point3.y point3.z point3.thickness   point4.x point4.y point4.z point4.thickness

That would be 4 points in the vertex texture fetch. You'd need to decide how to render them - as a line loop, pt1-----pt2-----pt3-----pt4, or as standard lines (try standard lines first - it's probably the only way), pt1-----pt2  pt3-----pt4.

This would basically work as GPGPU (general-purpose GPU) programming. It would probably be fast, but I'm not sure. I'm much more interested in ray tracing (especially real-time ray tracing), and that's possible with GPGPU programming (although I'm doing everything on the CPU, taking advantage of multi-core), so I don't see why this wouldn't be. Geometry shaders would do the work of creating, translating, etc. the points. You can do the same with a vertex texture fetch, but it's a little tricky (you can write into pixels and render to a texture using FBOs, then feed that to the rasterizing shader, and so on).


I didn't understand your other post either. It didn't relate to what the OP asked.

Rendering lines as quads certainly would do the trick.

Quote:

Thanks for the answers. I won't use geometry shaders anyway, for compatibility reasons. However, I'm not convinced that you can't use vertex and fragment shaders with lines, because:


No one said you can't use VS and FS with GL_LINES.
You said that you want to control line thickness and stipple. AFAIK, there is no feature in GLSL to do this.

No one actually tried to correctly answer the OP's question!

Read the GLSL specification -> Stippling is not replaced by a fragment shader, so you should be able to use it just like you did with the fixed function pipeline (if there are no driver implementation bugs)

I don't know about the line width. The shader probably can't set it, but why not simply try the function from the fixed pipeline and see if it works? Just use glLineWidth together with your shader.
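In other words, something along these lines - a minimal C sketch (lineProgram is assumed to be a compiled and linked GLSL program object, and a GL 2.x context with the entry points loaded is assumed):

#include <GL/gl.h>

/* Sketch: line width and stippling are per-primitive raster state, not
 * shader state, so they are set with the usual fixed-function calls while
 * a GLSL program is bound. */
void draw_stippled_lines(GLuint lineProgram)
{
    glUseProgram(lineProgram);

    glLineWidth(3.0f);               /* thickness */
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1, 0x00FF);        /* dash pattern: 8 fragments on, 8 off */

    glBegin(GL_LINES);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, 1.0f, 0.0f);
    glEnd();

    glDisable(GL_LINE_STIPPLE);
    glUseProgram(0);
}

Whether the stipple and width really survive alongside the shader is exactly what needs testing, as noted above.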

Quote:
No one said you can't use VS and FS with GL_LINES.
You said that you want to control line thickness and stipple. AFAIK, there is no feature in GLSL to do this.

None automatic, I guess. However, I would expect texture lookups to be possible with lines (why not?), in which case you can stipple using an RGBA texture that's... stippled! Unless...
Quote:
Read the GLSL specification -> Stippling is not replaced by a fragment shader, so you should be able to use it just like you did with the fixed function pipeline (if there are no driver implementation bugs)

which would be even easier. As I said, I will try and let you know what works.
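For the texture-lookup approach, a fragment shader along these lines might do it - a sketch only (not from this thread), assuming the s texture coordinate carries the distance along the line, scaled so that one texture repeat equals one stipple period:

/* Sketch: stipple in the fragment shader via a texture lookup.  A small
 * repeating RGBA pattern texture is sampled along the line and transparent
 * texels are discarded. */
static const char *stipple_fragment_shader =
    "uniform sampler2D pattern;                                      \n"
    "void main()                                                     \n"
    "{                                                               \n"
    "    vec4 p = texture2D(pattern, vec2(gl_TexCoord[0].s, 0.5));   \n"
    "    if (p.a < 0.5)                                              \n"
    "        discard;               /* gap in the stipple pattern */ \n"
    "    gl_FragColor = gl_Color * p;                                \n"
    "}                                                               \n";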

