spek


Posts I've Made

In Topic: Physically accurate ambient lighting from reflection map (Cube Map)

08 June 2014 - 02:21 PM

I tried this once, and the result was... meh. But! That is partially because I did it cheap/wrong. What happened:

- A cubemap was updated at the camera position, every cycle

- Each of the 6 cubemap faces was downsized ("blurred") to 1x1 pixels

- The result was stored as 6 colors (but you can also pass it as a blurry cubeMap texture)

- Objects would get their ambient lighting by mixing between the 6 colors, based on their normals

 

In essence, this method is more or less correct for objects (very) nearby to the camera, your player character for example. But obviously it's incorrect for other stuff in the scene that catches the environment from a different position. I just tried to achieve very cheap semi-realtime G.I. lighting, but the problem is that the lighting changes with every step you take. If you step forward into a lightbeam, the whole background suddenly changes lighting as well.
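The mixing step itself is tiny by the way. A minimal GLSL sketch, assuming the 6 averaged face colors are uploaded as uniforms (the uniform and function names here are just made up):

    uniform vec3 ambientPosX, ambientNegX;
    uniform vec3 ambientPosY, ambientNegY;
    uniform vec3 ambientPosZ, ambientNegZ;

    vec3 ambientCube(vec3 n)            // n = normalized surface normal
    {
        vec3 w = n * n;                 // squared components sum to 1, so the weights blend smoothly
        return w.x * mix(ambientNegX, ambientPosX, step(0.0, n.x))
             + w.y * mix(ambientNegY, ambientPosY, step(0.0, n.y))
             + w.z * mix(ambientNegZ, ambientPosZ, step(0.0, n.z));
    }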

 

 

I'm not 100% sure, but I believe some games actually did use an effect like this (probably combined with statically baked ambient lighting), but "softened" this annoying light-change artifact by smoothly fading from one color into another. You could see this happening in GTA IV, when standing in front of a window in your apartment and then walking a few steps away from it.

 

Another trick to reduce wacky light changes is to exclude small/local light sources from your initial cubeMap. For an outdoor area, mainly the sky, sun or moon are dominant. So, with some hacks, a trick like this could work for you. It's far from accurate, but at least it's very cheap & you can use the initial cubeMap for reflections nearby your camera as well.

 

 

But more convenient these days is to bake light into "probes", "ambient cubes", or whatever people call them. The idea remains the same, except that you will have multiple probes (sampled as cubemaps) scattered over your world. These probes don't move, and objects/geometry/particles/.../ have to pick the closest probe(s) to fetch their light from. You can pre-bake all your light into these probes, which allows pretty fast and quite decent G.I. However... it's not a realtime, dynamic solution by itself. Of course you could try to update all the probes all the time, but it would kill your framerate. There are several tricks being used by modern game engines to deal with this: use fewer probes, spread the updates over multiple cycles, provide fast look-up information so they don't have to render complicated cubemaps, and so on. But so far the bottom line is that all realtime solutions I tried or heard of are far from perfect. Too slow, too restricted, too memory-consuming, or just too ugly.

 

But you can extend pre-baked probes with some dynamic information. Use SSAO or a similar technique for local shading. And for day/night cycles, you can store an occlusion value in your probes (or lightmap, or geometry) that tells how much influence the skybox has on that location. 0% would mean the probe can't see the skybox at all, and therefore won't be lit by it. Otherwise you can add some skybox light, using the surface normals and the actual skybox color. Again, a fake. But a pretty cheap & effective one.
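As a rough GLSL sketch of that skybox trick (the sampler, intensity and occlusion names are all just illustrative; the occlusion factor is the pre-baked 0..1 value):

    uniform samplerCube skyBox;        // or just pass a single averaged sky color
    uniform float       skyIntensity;  // driven by your day/night cycle

    vec3 ambientLight(vec3 bakedProbeColor, float skyOcclusion, vec3 normal)
    {
        // Roughly fetch the sky color in the direction the surface faces.
        vec3 skyColor = textureCube(skyBox, normal).rgb;

        // skyOcclusion = 0 means the probe never sees the sky, so no dynamic contribution.
        return bakedProbeColor + skyOcclusion * skyIntensity * skyColor;
    }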


In Topic: Just how alright will I be if I were to skip normal-mapping?

06 June 2014 - 06:03 AM

I think you'll burn in hell if you choose not to normalMap :P

 

But more seriously, the needs depend on the situation, I guess. Pumping an extra 10k triangles into a character model instead of using normalMaps may also get you where you want, without bringing the videocard to its knees. But think about the environment. How many triangles would a level cost if every brick wall was modeled like, well, a real brick wall? The polycount would explode with every extra square meter you make.

 

You also see more and more "detailNormalMaps": a secondary (frequently repeating / tiled) normalMap to simulate the micro-structure of a certain material. Cotton, bumpy skin, leather, rough concrete speckles, wood grain, et cetera. Even if the video card would laugh at it, the artist that has to model your stuff won't! NormalMaps can often be recycled for various cases, and in some cases it's enough to "cheat" by converting a greyscale photo into a normalMap. Production-wise that's easier than modeling each and every detail.
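A minimal GLSL sketch of such a detailNormalMap blend, assuming tangent-space maps and made-up uniform names (there are fancier blend methods; this is just the cheap additive one):

    uniform sampler2D normalMap;
    uniform sampler2D detailNormalMap;
    uniform float     detailTiling;     // e.g. 8.0 repeats per base UV
    uniform float     detailStrength;   // 0 = off, 1 = full micro-bumps

    vec3 blendedNormal(vec2 uv)
    {
        // Both maps are assumed to be tangent-space, stored as 0..1 colors.
        vec3 base   = texture2D(normalMap, uv).xyz * 2.0 - 1.0;
        vec3 detail = texture2D(detailNormalMap, uv * detailTiling).xyz * 2.0 - 1.0;

        // Cheapest possible blend: add the detail's XY perturbation, then renormalize.
        base.xy += detail.xy * detailStrength;
        return normalize(base);
    }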

 

Then again, if you won't see the surfaces from nearby, there is less need for a normalMap of course. When peeking around in other games' texture packs, you may notice that ceiling textures are often a bit simpler, without normalMaps. Reason? You're not looking at the ceiling all day, are you?

 

 

I think/hope that rendering will move more towards displacement mapping, or whatever it's called these days. So (nearby) geometry would get tessellated into much smaller patches, and have its vertices offset by a "bump-" or "heightMap" kind of thing. That may kill normalMaps one day, although you'd still need that extra texture of course.
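The offset part of that idea is simple enough; a GLSL sketch (sampler and uniform names made up, and in practice this would sit in a tessellation evaluation or vertex shader after the patch has been subdivided):

    uniform sampler2D heightMap;
    uniform float     displacementScale;   // world units for a height value of 1.0

    vec3 displace(vec3 position, vec3 normal, vec2 uv)
    {
        float height = texture(heightMap, uv).r;   // 0..1 height from the map
        return position + normal * height * displacementScale;
    }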


In Topic: How do I implement a simple 2D torch effect on screen with GLSL with SFML

29 May 2014 - 06:39 AM

If I understand it right, you basically want a circular overlay on your screen, as in the bottom-right screenshot?

 

The simplest way, without needing any shaders, is to draw your circle (or whatever mask) into an image, draw a screen-filling quad with it on top of the rest, and apply multiply-blending. White pixels on this image will keep the background color unchanged; darker colors will darken the background pixels.

 

In case you want to animate a bit (a torch flickers), you can gently shift this screen-filling quad a bit to the left/right/up/down, and/or multiply the quad texture color with another color that pulsates (with a sine or something). So far, still no shaders are needed really. Nevertheless, it might be a good lesson to achieve this same effect with a shader, as you get a bit more control over things like color multiplication, shifting, or mixing 2 different textures to achieve an animation.
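In GLSL that shader could look roughly like this. The texture/uniform names are just placeholders, and I'm assuming you render the scene to a texture first (e.g. an sf::RenderTexture) and then draw a fullscreen quad with this fragment shader:

    uniform sampler2D sceneTex;      // the already-rendered scene
    uniform sampler2D maskTex;       // white circle on a black background
    uniform float     time;          // seconds, drives the flicker
    uniform float     flickerAmount; // e.g. 0.02

    void main()
    {
        vec2 uv = gl_TexCoord[0].xy;

        // Shift the mask a little and pulse its brightness with a sine.
        vec2  offset = vec2(sin(time * 13.0), cos(time * 17.0)) * flickerAmount;
        float pulse  = 0.9 + 0.1 * sin(time * 7.0);

        vec3 scene = texture2D(sceneTex, uv).rgb;
        vec3 mask  = texture2D(maskTex, uv + offset).rgb * pulse;

        // Multiply: a white mask keeps the scene, a dark mask darkens it.
        gl_FragColor = vec4(scene * mask, 1.0);
    }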


In Topic: Doing local fog (again)

29 May 2014 - 05:17 AM

Attached image: T22_FoggyMetroTube.jpg

Implemented the "Deferred Particle Lighting" technique Frenetic Pony mentioned earlier.

 

And I'm pretty pleased, although there is a catch when using larger and/or moving particles. Bigger particles (say at least half a square meter) suddenly change color when moving into or out of the shadows. Which is a logical price to pay when only lighting the center. Maybe it can be improved a little bit by lighting the 4 billboard corner vertices instead of the center "particle point".

 

 

Sorting

Next step is to do something about the sorting / better blending, as the (moving) particles tend to "pop" now. I didn't read the "Weighted Blended Order-Independent Transparency" paper from above yet, but I suppose it doesn't need any sorting? In case I do need sorting, I'm not so sure my current system works. Instead of 1 giga-VBO that provides space for all particles, each particle generator has its own VBO. So sorting within a generator works, but I can't sort across all of them in case multiple particle fields intersect. I also wonder if sorting one big VBO would be useful at all, since I still have to render the individual particle generators one by one:

for each particleGenerator
     apply textures / parameters / shaders for this effect
     generator.renderVBO();

Not being able to perfectly sort everything won't be a very big drama, but nevertheless, any smart tricks there?

 

 

 

 

@Hodgeman

Thanks for the hints! I'm mixing multiple blending modes now (additive, multiply, modulation, ...). Do you think, aside from emissive particles such as fire, that a single method can be enough to do both additive "gassy" substances, as well as smoggy stuff that darkens the scene?

-edit-

Stupid question, you already answered that. A dark color with high alpha darkens/replaces the color; a bright color with low alpha adds up. Thus it works for both situations. Nice :)
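Written out as a blend equation, if I read that right (assuming premultiplied-alpha style blending, e.g. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)):

    result = particleColor + background * (1 - particleAlpha)

A dark particleColor with alpha near 1 mostly replaces/darkens the background, while a bright particleColor with alpha near 0 just adds on top of it.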

 

 

@Krzystof

Thanks for the reads! Starting now :)


In Topic: Doing local fog (again)

26 May 2014 - 01:58 PM

>> If you're going deferred the Lords of the Fallen guys have a neat per vertex deferred for small particles that they use for smoke

Thanks for pointing that out. Never thought about that really! For the others who are interested:

 

 

1* Render ("rasterize") your particles as single points into a (1D or 2D) texture.

Each particle would get its own pixel. This pixel typically contains the particle position. The target pixel position depends on an id that is unique for each particle.

 

2* Apply all your lights on this texture, as you would normally do in deferred rendering.

Except that each light is just a quad that covers the entire screen (or texture canvas, so to say). For each particle (pixel) you know the position, and thus you'll know whether it can be lit or not. A stored normal isn't needed since particles typically face towards the viewer, so you can use that direction as the normal.

 

3* Accumulate light colors into another "particle-diffuse-light" texture.

Optionally you could do another pass to add your ambient light as well.

 

4* When rendering the actual particles, refer to the texture from step 3.

Each particle uses its unique index to fetch the light results from step 3. You may do this fetch in the vertex shader already, so the only thing your pixel shader has to do is draw the (animated) particle texture. Overdraw still sucks, but at least this allows playing with lights at a low cost. A rough sketch of that last fetch is below.
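To make step 4 a bit more concrete, a minimal GLSL vertex-shader sketch of the fetch. All names and the texture layout are my own assumptions, not taken from the slides:

    #version 330

    uniform sampler2D particleLightTex;   // result of the light accumulation pass (step 3)
    uniform int       texWidth;           // width of that texture in pixels
    uniform mat4      modelViewProjection;

    in  vec3  position;                   // billboard corner position
    in  float particleIndex;              // same value for all 4 corners of a particle
    out vec3  particleLight;              // passed on to the fragment shader

    void main()
    {
        int   idx   = int(particleIndex);
        ivec2 texel = ivec2(idx % texWidth, idx / texWidth);

        // One texel = one particle; no filtering wanted, so use texelFetch.
        particleLight = texelFetch(particleLightTex, texel, 0).rgb;

        gl_Position = modelViewProjection * vec4(position, 1.0);
    }

The fragment shader would then just multiply the (animated) particle texture with particleLight.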

 

 

 

Or read

http://www.slideshare.net/philiphammer/the-rendering-technology-of-lords-of-the-fallen

