2D simulated lighting


I just read the "Dynamic 2D Soft Shadows" article by Orangy Tang on GameDev, and found it very interesting for a relative beginner to OpenGL such as myself. I want to create a circle of light, as shown in the article, that fades away around the edges, but I also want the transparency of my images to be retained. The method of setting all alpha values in the screen buffer to 0, then putting a circle of higher alpha values fading to 0 over the top, then blitting all images using (GL_DST_ALPHA, GL_ONE) as the blending function is beautiful, so long as I don't care about source image transparency.

How do I retain the transparency of my images (I use GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA as my blending function for transparency) while also having them fade away with the light, taking their source alpha levels into account? Do I need to do several rendering passes, and if so, how do I go about it?

Thanks guys,
Rob

Note: this post is similar to one I posted in the NEHE forum. I apologise if cross-posting gets your goat, but I've received no answer to the first post in two days, so I thought I would try this forum too. Hope no one minds.
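To make the problem concrete, here is a small CPU-side sketch of the blend arithmetic (Python with illustrative names; the actual GL calls appear only in comments). It shows why (GL_DST_ALPHA, GL_ONE) discards source-image transparency: the sprite's own alpha never enters the equation.

```python
def blend(src, dst, src_factor, dst_factor):
    """Per-channel result = src * src_factor + dst * dst_factor,
    i.e. what glBlendFunc(src_factor, dst_factor) sets up."""
    return tuple(s * src_factor + d * dst_factor for s, d in zip(src, dst))

light = 0.5                    # light intensity stored in destination alpha
sprite_rgb = (1.0, 0.0, 0.0)   # a red texel...
sprite_alpha = 0.25            # ...that should be mostly transparent
background = (0.0, 0.0, 0.0)

# glBlendFunc(GL_DST_ALPHA, GL_ONE): the source is scaled by the light,
# but sprite_alpha is never consulted - the texel lands fully "solid".
lit = blend(sprite_rgb, background, light, 1.0)
print(lit)  # (0.5, 0.0, 0.0)
```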

Getting transparency to work with the lighting is tricky, and not something I've got round to trying yet. In fact, even thinking about transparency and lighting combined gets confusing; no one had any definite answers last time I asked around here about how to handle it.

Simple additive transparency is quite easy to add: you just need to remove the depth write for the object, then scale the transparency down by the number of lights it's within (i.e., how many times it gets redrawn).
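A sketch of that compensation (Python, hypothetical names; assumes additive blending in the style of glBlendFunc(GL_SRC_ALPHA, GL_ONE)):

```python
def additive_passes(src, dst, alpha, num_lights):
    """Redraw src over dst once per overlapping light, with the intended
    alpha divided by the number of passes so the total stays the same."""
    per_pass = alpha / num_lights
    for _ in range(num_lights):
        # each pass adds src * per_pass to the framebuffer (clamped),
        # as glBlendFunc(GL_SRC_ALPHA, GL_ONE) would
        dst = tuple(min(1.0, s * per_pass + d) for s, d in zip(src, dst))
    return dst

once  = additive_passes((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.6, 1)
three = additive_passes((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.6, 3)
# drawn three times, the total contribution still matches a single draw
```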

Something more complex like the effect you're after is trickier. As far as I can tell it needs at least another multiply operation to get the final intensity (texture alpha * light alpha), and then another to scale down the existing framebuffer colour: framebuffer colour * (1 - final intensity). These two then need to be added together in the framebuffer as the final colour.
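Written out per pixel, that works out to a standard lerp by the combined intensity (a CPU sketch of the arithmetic, not the actual render passes):

```python
def lit_transparent(tex_rgb, tex_alpha, light_alpha, fb_rgb):
    """final intensity = texture alpha * light alpha; the new framebuffer
    colour is src * intensity + old framebuffer colour * (1 - intensity)."""
    intensity = tex_alpha * light_alpha
    return tuple(t * intensity + f * (1.0 - intensity)
                 for t, f in zip(tex_rgb, fb_rgb))

# half-transparent white texel under a half-strength light -> intensity 0.25
out = lit_transparent((1.0, 1.0, 1.0), 0.5, 0.5, (0.0, 0.0, 0.0))
print(out)  # (0.25, 0.25, 0.25)
```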

You can probably break that down into several rendering passes, again keeping temporary results in the alpha channel. But it's likely to get pretty complicated, and by that point I'd be tempted to switch to pixel shaders - it can probably be done with a simple pixel shader that's not too demanding on the hardware, with a lot more flexibility (and the performance is likely to be much better, since you could do it in a single pass).

Just some ideas. Let me know how it goes.

Yeeks, I didn't realise the concept would be so complicated, although I can see why.

Obviously, in a sense, I want to apply two blending effects, which can't be done unless I split it somehow over two passes, although I don't know how. I'll look into that.

Pixel shaders... how does one use them, conceptually? What are they, even?

I've moved from a software-surfaces 2D world, and am just dabbling with OpenGL. Experienced general coder, but inexperienced with OGL.

Thanks for replying, OrangyTang. Your article was very helpful, by the way.

quote:
Original post by serenity
Pixel shaders... how does one use them, conceptually? What are they, even?

Conceptually, they're a tiny little program that runs on the graphics card and is called to process every single fragment drawn (the correct GL term is 'fragment program' if you're googling). They take things like the texture colour, the interpolated surface colour, etc., and you use these to calculate the final result, basically letting you write your own shading equations instead of just configuring a fixed one (with glBlendFunc, etc.).
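As a mental model only (Python standing in for a fragment program here; real ones are written in a shading language and run on the GPU), the rasteriser calls one pure function per covered pixel, which is what lets the whole lit-transparency blend happen in a single pass:

```python
def fragment_program(tex_rgb, tex_alpha, light_alpha, fb_rgb):
    """One hypothetical 'shading equation': the lit-transparency blend
    discussed earlier in the thread, done in a single conceptual pass."""
    intensity = tex_alpha * light_alpha
    return tuple(t * intensity + f * (1.0 - intensity)
                 for t, f in zip(tex_rgb, fb_rgb))

# the hardware would invoke it once for every fragment the sprite covers
framebuffer = [[(0.0, 0.0, 0.0) for _ in range(4)] for _ in range(4)]
for y in range(4):
    for x in range(4):
        framebuffer[y][x] = fragment_program((1.0, 0.0, 0.0), 0.5, 1.0,
                                             framebuffer[y][x])
print(framebuffer[0][0])  # (0.5, 0.0, 0.0)
```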

However, they do need hardware support, since you can't emulate them on the CPU without switching back to software rendering. High-end cards support things like branching and looping; lower down you get restrictions on the number of operators and the ordering and dependency of the instructions.

You need to go to a GeForce 3 or higher to get pixel shaders with any sort of power to them, although I believe if you use nVidia's Cg you can get very basic ones that run on the GeForce 1 & 2 (basically just an easier way to configure register combiners).

nVidia has a load of information, and their Cg compiler is available for free as a download from their site if you're interested.
