How should I do environment reveal/texture blending?
Members - Reputation: 198
Posted 06 July 2011 - 01:20 PM
At the moment I have 2 or 3 different ideas that are floating around in my head.
1; UV map the environment and give it a regular colour texture plus a greyscale reveal texture (revealing the colour texture underneath): black areas would be visible immediately, and whiter areas would be revealed gradually as the game progressed. This could allow for all sorts of creative reveals. The problem I currently see with it is that I don't know whether there would be discontinuities between different parts of the UV map. It also means a full extra texture for each environment, but I think this is reasonable, and the bit depth of the greyscale can be knocked down a little if need be.
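For what it's worth, the per-pixel test in idea 1 can be sketched on the CPU like this (pure Python standing in for the fragment shader; the function name and the flat 1D "textures" are just made up for illustration):

```python
def reveal_blend(base, reveal, t, background=0.0):
    """Per-pixel reveal: a pixel whose greyscale reveal value is <= the
    current reveal time t shows the colour texture; otherwise it shows
    the background. base/reveal are flat lists of equal length, t in [0,1]."""
    return [b if r <= t else background for b, r in zip(base, reveal)]

base   = [0.2, 0.5, 0.9, 0.7]   # colour texture (greyscale for brevity)
reveal = [0.0, 0.3, 0.6, 1.0]   # black areas (0.0) appear first
print(reveal_blend(base, reveal, 0.5))  # -> [0.2, 0.5, 0.0, 0.0]
```

In a shader this would be a single `step()`/compare per fragment, so the per-frame cost is just one extra texture fetch.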
2; If the camera is static, then the reveal could be done in screen space, but I don't think this would give the most convincing effect overall. It would be good for quick screen transitions but not for a long reveal, and perspective would be quite hard to deal with. Although I suppose if the camera were static then I could take the first approach to generate the reveal texture, then take a 2D snapshot of the environment with the reveal texture applied to generate a screen-space reference, which should save on memory but restricts us to a static camera.
3; Do a geometry check for distance/intersection and then manipulate the edges of this using some reference image to match the concept style.
I'm avoiding a straight-up programmatic approach for now as an artistic approach would provide a much better feel.
I'm trying to find material and reference to better illustrate what I'm talking about but I'm having a hard time doing so.
Something like the following but without the bouncing/animated effects, just the spreading; Youtube
Or picture Peter Parker being taken over by Venom, the Predator going from visible to invisible, etc.
If you have any references, papers, articles, videos or know the official name of the technique I'm trying to describe that would be great. I hope that I've done a reasonable job describing what I'm trying to achieve so if you have any suggestions then I would be glad to hear them.
Prime Members - Reputation: 1183
Posted 06 July 2011 - 03:07 PM
1- Just render everything as usual
2- Take a snapshot of your results, OR render step 1 into a texture (see render targets or FBOs) instead of directly to the screen
3- At the end, render a screen-quad (quad that fills the screen, exactly in front of the camera)
4- Apply a shader on this quad, and pass the snapshot/texture from step 2 as a parameter.
In the step-4 shader, you can blend between the original screen contents (step 2) and practically anything else: a black color, or another image. But you can also apply
- (Sobel) edge detection (requires an extra step that renders depth or positions into a texture)
- contrast / saturation / color enhancement
Just to name a few. For the more advanced stuff such as a model going from wireframe to full-shaded, you may want to render the scene multiple times:
1- render the normal scene to texture A
2- render the wireframe scene to texture B
3- apply screen quad with post effect shader that mixes between texture A and B depending on an elapsed time parameter
* Optionally render a texture C that contains a time offset per surface / object / pixel so you can blend the objects on different timings
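The A/B mix with a per-pixel timing texture could look something like this (a pure-Python sketch rather than the actual shader; `mix` mirrors GLSL's built-in, the rest of the names are made up):

```python
def mix(a, b, f):
    """Linear interpolation, same as GLSL's mix()."""
    return a + (b - a) * f

def timed_blend(tex_a, tex_b, offsets, t, duration=1.0):
    """Blend texture A into texture B per pixel. Each pixel starts its
    transition at its own offset (the 'texture C' above) and finishes
    `duration` seconds later."""
    out = []
    for a, b, off in zip(tex_a, tex_b, offsets):
        f = min(max((t - off) / duration, 0.0), 1.0)  # clamp to [0,1]
        out.append(mix(a, b, f))
    return out

# pixel 0 starts immediately, pixel 1 only at t = 0.5
print(timed_blend([0.0, 0.0], [1.0, 1.0], [0.0, 0.5], t=0.5))  # -> [0.5, 0.0]
```

Since each pixel carries its own start time, a single animated `t` uniform is enough to drive the whole staggered transition.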
The YouTube movie is something different again, as it is animated; a simple fade-in is not sufficient. What you could do (but this is pretty damn difficult to draw) is use the alpha channel to store a time offset.
1.- Draw a texture A (the static part that doesn't change)
2.- Draw a (possibly transparent) overlay texture in its final shape
3.- Fill the alpha channel of texture 2 with a time offset. A black value means the pixel is visible immediately; a white value means the pixel becomes visible at the end of the animation. Gray values are everything in between.
4.- In the shader:
color.rgb = staticTexture.rgb + overlayTexture.rgb * ( currentElapsedTime > overlayTexture.a );
However, this is difficult for the artist to draw, plus 32-bit images only have 256 steps in the alpha channel, so it won't work for longer animations (or they become very chunky). This can be improved by using 16-bit textures and possibly letting a program compute the alpha values, but still...
A smarter way might be using some sort of floodfill that evolves step by step. In a background process, you "expand" a texture by a few pixels each cycle:
1.- use the final version of the image as a reference parameter for a shader that does a sort of floodfill
2.- start with an almost empty texture. Fill in the lines/colors/shapes you'd like to start with --> pass in a first-version texture your artist draws
3.- render a screen quad to a renderTarget / FBO. This shader copies values from its neighboring pixels, but only if they are also filled in the reference texture from step 1. Optionally copy the pixel only partially, so it takes multiple cycles before the target color is reached.
4.- Repeat step 3 until all pixels are filled. Do not clear the texture!
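One expand cycle of that floodfill could be sketched like this (pure Python over a flat pixel list, purely to illustrate the rule; in practice this would be the ping-pong shader pass described above, and the names are invented):

```python
def flood_step(current, reference, width):
    """One expansion cycle: an empty pixel (None) takes the reference
    colour if any 4-neighbour is already filled AND the reference
    actually has a colour there. Filled pixels are never cleared."""
    height = len(current) // width
    nxt = list(current)
    for i, px in enumerate(current):
        if px is not None or reference[i] is None:
            continue  # already filled, or not part of the final image
        x, y = i % width, i // width
        neighbours = []
        if x > 0:          neighbours.append(current[i - 1])
        if x < width - 1:  neighbours.append(current[i + 1])
        if y > 0:          neighbours.append(current[i - width])
        if y < height - 1: neighbours.append(current[i + width])
        if any(n is not None for n in neighbours):
            nxt[i] = reference[i]
    return nxt

# 1x4 strip: seed the leftmost pixel; reference fills the first three
ref = ['r', 'g', 'b', None]
tex = ['r', None, None, None]
tex = flood_step(tex, ref, 4)   # -> ['r', 'g', None, None]
tex = flood_step(tex, ref, 4)   # -> ['r', 'g', 'b', None]
```

Reading `current` while writing `nxt` is what keeps the fill advancing exactly one pixel per cycle, which is why the GPU version needs two render targets that swap roles each frame.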
I would advise you to study:
- image effects (take a look at the filters in photoshop for example)
- renderTargets / FBO's
- Rendering extra data in renderTargets / FBO's such as depth or other custom parameters
Once you master these, you can go bananas
Members - Reputation: 198
Posted 08 July 2011 - 06:36 AM
While I'm not a master I have a good grip, both theoretically and practically, on all of the methods you've mentioned. I guess I was just looking for some suggestions as to how I could approach the problem and your post has certainly pointed out some new things for me to think about so thanks for the post, it's very much appreciated!