How to make varying emissive light sources?

13 comments, last by samoth 16 years, 1 month ago
I understand the concept of making a light source shine. My current method is to use the alpha channel of the texture to mark what needs to shine, multiply it by a constant to push it into HDR range, and bloom it when rendering. Is this the best method? I was also wondering whether drawing an HDR textured quad over the region would work; however, that makes the glow rather fixed. For example, in a traffic light I would need to selectively glow the appropriate light bulb (red, green, amber). How do I choose when to glow which surface, and when to turn it off?
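In shader terms, what I am doing now boils down to roughly this (a simplified HLSL sketch; the usual lighting is omitted and the names are made up):

sampler2D diffuseMap;
float emissiveScale;   // e.g. 4.0, tuned so the bloom threshold catches it

float4 EmissivePS(float2 uv : TEXCOORD0) : COLOR0
{
    // Alpha channel of the diffuse texture marks the emissive areas; a
    // constant scale pushes those texels above 1.0 so bloom picks them up.
    float4 texel = tex2D(diffuseMap, uv);
    float3 colour = texel.rgb;                     // normal shading goes here
    colour += texel.rgb * texel.a * emissiveScale; // marked areas go HDR
    return float4(colour, 1.0);
}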
Why don't you use either vertex color or an unused texture coordinate? Since almost every object today is textured and doesn't use vertex color anyway, you might as well use it for that purpose.

Assuming that you use OpenGL, you would call glColor3f(0.0, 0.0, 0.0) at the beginning of the frame; then, when rendering the traffic light, you call glColor3f(...) again to set the vertex color to something non-zero, and set it back to zero afterwards.

Then, inside your shader, you get the traffic light's color from your texture as usual, but you add to it the vertex color. If the 0.0-1.0 range of emitted light isn't enough, you can also scale the vertex color to whatever you like. To vary brightness, simply use different values when calling glColor3f.

This will give you full control with very little extra work. If you already use vertex color, just use a spare texture coordinate. Lastly, if you want some special effects (like a little walking man on a traffic light) then you could multiply it with an "emission map" on top.

I don't know how to do the same in DirectX, but there surely is something similar to glColorXX too.
I am currently using DirectX. What did you actually mean by adding a glColor3f(0.0, 0.0, 0.0) at the beginning of the frame? Does the added glColor3f refer to the new vertex color of the current TrafficLight mesh?

For DirectX, it is not really friendly to modify individual vertex data once creation is done. That's because DirectX uses vertex/index buffers, and to access them I would have to lock the buffer to read/write it.

Is there an alternative method that does not require me to access and modify individual vertex data?
Taking the traffic light as an example, one way of selecting which light should glow would be as follows:

Give the vertices that make up the red light geometry the colour (1,0,0)
Give the vertices that make up the amber light geometry the colour (0,1,0)
Give the vertices that make up the green light geometry the colour (0,0,1)

This effectively encodes which parts of the geometry belong to which light. You can now control which light is glowing using a single shader uniform (I presume you are using shaders):

glowAmount = saturate( dot( vertexColour, glowSelect ) );


To make a light glow, just set the corresponding component of the glowSelect uniform to 1:

To make only the red light glow, set glowSelect to (1,0,0).
To make the red and amber lights glow, set glowSelect to (1,1,0).
To make only the green light glow, set glowSelect to (0,0,1).

If you didn't want to use vertex colours, you could paint the selection regions onto a separate texture, although this will obviously cost you more memory. This is quite a common approach when selectively blending normal maps for things like facial wrinkles.
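With the vertex-colour variant, the full pixel shader might look something like this (a sketch, untested; the sampler and uniform names are made up):

sampler2D diffuseMap;
float3 glowSelect;     // per-draw mask: which bulb(s) are currently on
float  glowStrength;   // how far above 1.0 a lit bulb goes

float4 TrafficLightPS(float2 uv : TEXCOORD0,
                      float3 vertexColour : COLOR0) : COLOR0
{
    float4 texel = tex2D(diffuseMap, uv);
    // vertexColour encodes which bulb this pixel belongs to; dotting it
    // with glowSelect picks out whether that bulb is switched on.
    float glowAmount = saturate(dot(vertexColour, glowSelect));
    float3 colour = texel.rgb;                       // usual lighting omitted
    colour += texel.rgb * glowAmount * glowStrength; // lit bulbs go HDR
    return float4(colour, 1.0);
}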
Quote:Original post by littlekid What did you actually mean by adding a glColor3f(0.0, 0.0, 0.0) at the beginning of the frame? [...] modify individual vertex data [...] vertex/index
No worries there. It doesn't interfere with the vertex buffer. What glColor3f does is set a render state. Basically, it tells the driver "make the vertex color like this for everything that follows, until I tell you something different".
As I said, I'm not familiar with DirectX, but from what I read on MSDN, I think you need some combination of IDirect3DDevice9::SetRenderState() calls for that effect.

The idea is that you have the surface's color stored in a texture. This is the "normal" color of the light's stained glass, the color that you see when it's only passively lit. If the light is "on", it will additionally be emitting light, so it is brighter than other objects. Thus, what you need to do in addition to the usual lighting/shading is add some luminosity. Once that is done, HDR/Bloom will take care of glow etc., so you need not do anything special about that.

To be able to turn lights on and off as you need them, you need one tweakable "number" (else you would have to modify your vertex buffers... yuck!).
You could use a uniform shader variable. However, setting uniforms is a massive performance hit due to pipeline stall. Setting the color state causes no stall, so this is preferable. Of course, a color has 3 (actually 4, with alpha) "numbers" and you only need one. But that's no problem.

You can either use just one component as luminance, use the components in parallel for different "channels" like SaltyGoodness proposed (very nice idea!), or simply use all of them, in which case you could simulate "real" colored light, not just luminance. So you could, for example, simulate a red lamp behind green stained glass (I'm not sure how useful that is, but if nothing else, it's cool).
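The shader side of all this is trivial either way; roughly, in HLSL since you are on DirectX (an untested sketch with made-up names, taking the per-draw value as it arrives in the color input):

sampler2D diffuseMap;

float4 EmissivePS(float2 uv : TEXCOORD0,
                  float4 emission : COLOR0) : COLOR0  // the tweakable "numbers"
{
    float4 texel = tex2D(diffuseMap, uv);
    float3 colour = texel.rgb;                  // usual lighting omitted
    // Multiplying by the texture colour gives the stained-glass filtering
    // (so a red lamp behind green glass really does go dark).
    colour += texel.rgb * emission.rgb * 4.0;   // 4.0 = arbitrary HDR scale
    return float4(colour, 1.0);
}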
Quote:Original post by samoth However, setting uniforms is a massive performance hit due to pipeline stall. Setting the color state causes no stall, so this is preferable.


According to who/what? First, let's ask TomF:

http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Renderstate%20change%20costs]]

Okay. So yeah, changing constants isn't free. However, in any normal environment, you're going to be changing constants anyway - at least once per frame, right? And once you've changed one constant, the incremental cost for a second one is practically zero. I'm guessing that you're going to be moving lights, or at least altering your MVP matrix, eh?

Secondly, the whole concept of using glColorf is pretty bad. It's an archaic fixed-function API that has no place in a shader-based renderer. In fact, I'm pretty sure that on nearly any recent card/drivers, if you're using shaders, it's not going to do anything at all. If you need to tweak something per-object, use a uniform, and put some logic in the shader. If you want to store anything at higher-resolution than per-object, then it's almost always easiest to use textures. (And it gives the artists maximal control over whatever effect you're making).

Unless you never want anyone to use/see what you're doing, you should always think about the pipeline and tool issues involved. Yes, you can squeeze bytes and come up with some elaborate encoding scheme that uses different color channels of vertex color, but:

- Try explaining that to an artist modeling something.
- How is that even remotely extensible? It works here because traffic lights have three colored bulbs ... what about when there's a fourth (or 12th) possible color?

A reasonable solution for emission (and what we use, as do other folks) is to just have another texture stage for things that might glow. You've presumably got diffuse+normal+specular+... Add another texture stage for emissive color. That lets any part of any object glow. It allows the glow to be de-coupled from the lighting and diffuse material properties. And you can use an HDR texture for the emissive map so that your glowing light sources can be stored with accurate relative brightness.
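Sketched in HLSL (untested; the names are mine), it's just one extra sample and add:

sampler2D diffuseMap;
sampler2D emissiveMap;   // can be an HDR format, e.g. D3DFMT_A16B16G16R16F

float4 EmissivePS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 colour = tex2D(diffuseMap, uv).rgb;  // lit/shaded as usual (omitted)
    colour += tex2D(emissiveMap, uv).rgb;       // decoupled from the lighting
    return float4(colour, 1.0);
}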
Quote:Original post by samoth
You could use a uniform shader variable. However, setting uniforms is a massive performance hit due to pipeline stall.

I'm not really sure what you're getting at here. Since you're only setting the uniform once per model, how is it any different from (say) a diffuse colour or a worldViewProjection matrix? But then again, I've done nearly all my graphics programming in the last 3 years on consoles (where setting shader constants costs almost nothing). So what do I know!
Quote:you're going to be changing constants anyway - at least once per frame, right? And once you've changed one constant, the incremental cost for a second one is practically zero. I'm guessing that you're going to be moving lights, or at least altering your MVP matrix, eh?

Secondly, the whole concept of using glColorf is pretty bad. It's an archaic fixed-function API that has no place in a shader-based renderer. In fact, I'm pretty sure that on nearly any recent card/drivers, if you're using shaders, it's not going to do anything at all. If you need to tweak something per-object, use a uniform, and put some logic in the shader. If you want to store anything at higher-resolution than per-object, then it's almost always easiest to use textures. (And it gives the artists maximal control over whatever effect you're making).

I totally agree with you on this point, however...

Quote:A reasonable solution for emission (and what we use, as do other folks) is to just have another texture stage for things that might glow. You've presumably got diffuse+normal+specular+... Add another texture stage for emissive color. That lets any part of any object glow.

Fine, but how do you then make the different lamps on the traffic light model glow? Render them in three separate batches? I don't know why you're dismissing the vertex colour idea out of hand, it's actually extremely useful in practice and pretty standard where I work. For example we've used it in the past for selectively blending in dirt and scratches in vehicle shaders, controlling areas for normal map blending for facial wrinkling and vehicle brake, indicator and head lights.

All the artists I've worked with have had absolutely no problem understanding the concept (in fact it was an artist who first proposed the idea), it's simple, doesn't really add any complexity to your pipeline and it's fast.

Quote:Original post by osmanb According to who/what? First, let's ask TomF:
Since you brought it up, let's see what he is saying:
Quote:Note that in ~DX10 hardware, the shader constants are cheap to change. But in ~DX9 hardware, they can be quite expensive.

So I figure that you don't consider anything that is not DX10-class or better. Besides, at least one vendor used to recompile all active shaders every time you changed a uniform (maybe they still do, I don't know).
On the other hand, vertex attributes are a safe bet, they have almost zero overhead apart from being an API call.

Quote:However, in any normal environment, you're going to be changing constants anyway - at least once per frame, right?
Except we're not talking about once per frame.

Quote:Secondly, the whole concept of using glColorf is pretty bad. It's an archaic fixed-function API that has no place in a shader-based renderer. In fact, I'm pretty sure that on nearly any recent card/drivers, if you're using shaders, it's not going to do anything at all.
You surely have some evidence for this claim? Honestly, I would be quite surprised if you did, but go ahead.
Using a texture coordinate or vertex color is a well-known and well-proven technique for pseudo-instancing. And yes, it works just fine on old and new hardware.

Quote:Unless you never want anyone to use/see what you're doing, you should always think about the pipeline and tool issues involved. [...]
- Try explaining that to an artist modeling something.

How so? It has nothing to do with either tools or artists. You are not seriously telling me that your artists colorize your models with vertex color, are you? And even if they do, please read carefully what I wrote: you can just as well use a texture coordinate to the same effect (or any other vertex attribute). Now please don't say your models use 16 vertex attributes... :)

Quote:I don't know why you're dismissing the vertex colour idea out of hand, it's actually extremely useful in practice and pretty standard where I work.
Exactly, works like a charm.
First, I'm sorry that my previous post sounded so dismissive. I was in a hurry and that was poor form.

Anyways...

I'm actually used to working on console (not DX10), but I still think the cost of shader constants can be easily amortized given that you're probably going to be setting some other kind of material parameters per-object. (Or, like I said, at least setting MVP if nothing else.)

glColorf is not a vertex attribute. It's part of a complicated fixed-function color-selection algorithm that encompasses lighting state, material state, vertex color, material mode, etc... I know how it works. My point was that if you're using shaders, then none of that portion of the fixed function pipeline is even enabled. I haven't written OpenGL code in a while, but I still strongly suspect that if you're using shaders, glColor will do nothing. If we assume that with shaders turned on, GL behaves as if lighting is disabled (?), then it could theoretically support glColor, but only in the case where you have no vertex colors anyway (because reading the color attributes from your vertex stream will give the actual attributes otherwise).

I might be trying too hard to simplify this problem at the expense of performance, but if I wanted three independent light sources of different colors that could be driven separately... I'd probably use three separate materials, and just cut the model into three parts. Then you can set the glow on each one, modulate the emissive texture, and easily support broken lights that have two bulbs active at once.
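In shader terms, each bulb's material then just gets its own glow constant (again an untested sketch with made-up names):

sampler2D diffuseMap;
sampler2D emissiveMap;
float emissiveIntensity;   // per material: 0 = bulb off, >1 = HDR glow

float4 BulbPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 colour = tex2D(diffuseMap, uv).rgb;   // lit as usual (omitted)
    colour += tex2D(emissiveMap, uv).rgb * emissiveIntensity;
    return float4(colour, 1.0);
}

Turning a bulb on or off is then just a constant change on that material, and two lit bulbs don't need any special handling.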

This topic is closed to new replies.
