A Few Theories for Powerful Graphics Rendering

Well, I've come up with several ideas, some of them improvements on older ones. They may be pants, they may be brilliant, or anything in between. I'm planning on implementing them soon, so comments on how they would perform or how they compare to other methods would be of great use to me.
Keep in mind these ideas are intended for a deferred renderer; models are assumed to mostly be going through the same shaders.

Chunk-based Material Terrain Mapping
This is fairly simple. When mapping terrain, the world is split into cubes (chunks), and each side of a chunk has a nullable 4-channel 2D texture (you wouldn't want thin air to need a texture).
The texture format is as follows:
R - Height (I assume you're all familiar with height mapping)
G,B - U,V (texture coordinates)
[s]A - first 6 bits used to store a material identifier, allowing 64 terrain materials to be active at once (my material system, which I will explain shortly)[/s]
Edit: A - two 2-bit integers for materials, a 3-bit integer allowing 8 levels of interpolation (0 and 1 being not interpolated), and finally 1 bit determining whether or not there is ground at that pixel.

In the original layout, the last 2 bits I'm still debating the usage of. They could be misc bools (such as 1 bit indicating whether or not to draw the terrain at that spot, useful for bleeding terrain through the chunks).
I'm still not sure how well working with a texture value at the bit level will turn out, but it's essential with this particular method that blank space be mapped, to allow multiple chunks to link.
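To make the bit-level idea concrete, here is a minimal CPU-side C++ sketch of how that edited alpha layout could be packed and unpacked. The function names are just placeholders, and keep in mind the shader would see the channel as a normalized float, so you'd multiply by 255 and round before pulling the bits back out.
[code]
#include <cstdint>

// Hypothetical packing of the A channel described above:
// bits 0-1 : material index 0 (picks one of the chunk's 4 materials)
// bits 2-3 : material index 1
// bits 4-6 : interpolation level (0-7)
// bit  7   : 1 = ground present at this texel, 0 = empty space
uint8_t packAlpha(uint8_t mat0, uint8_t mat1, uint8_t blend, bool ground)
{
    return (mat0 & 0x3)
         | ((mat1 & 0x3) << 2)
         | ((blend & 0x7) << 4)
         | (ground ? 0x80 : 0x00);
}

void unpackAlpha(uint8_t a, uint8_t& mat0, uint8_t& mat1, uint8_t& blend, bool& ground)
{
    mat0   =  a       & 0x3;
    mat1   = (a >> 2) & 0x3;
    blend  = (a >> 4) & 0x7;
    ground = (a & 0x80) != 0;
}
[/code]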

Please let me know about any overhead or performance hits I may have overlooked.
The reason this idea struck me as good is the use of mipmaps for easy adaptive LOD, and with the height maps being relative to a certain space, the chunks could be scaled down to allow finer detail or scaled up to reduce memory consumption. On top of that, by simply drawing onto the height maps they can be used for real-time terrain modification (Worms-style terrain destruction, anyone?).
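For the LOD side, this is roughly how I picture choosing a mip level per chunk; the doubling rule and names below are made-up tuning guesses, nothing final.
[code]
#include <algorithm>
#include <cmath>

// Hypothetical LOD pick: farther chunks sample a coarser mip of their
// height/material map. 'chunkSize' and the thresholds are illustrative only.
int chunkMipLevel(float distanceToCamera, float chunkSize, int mipCount)
{
    // Roughly one extra mip level each time the distance doubles past one chunk length.
    float lod = std::log2(std::max(distanceToCamera / chunkSize, 1.0f));
    return std::min(static_cast<int>(lod), mipCount - 1);
}
[/code]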

Materials
Materials are nothing new... all of you know that, but I have a few ideas about how a material can be used and what a material can hold.
Firstly, and most obviously, a group of values and textures to modify the way the material is treated while rendering.
Second, mipmapped tileable detail maps allowing cheaper high detail on close-up models (ideal for the thread pattern of clothing, the scales or fur on an animal, or even the fine grain in sand).
Third, and probably not very groundbreaking, material-specific variables representing qualities of the material (e.g. a variable affecting how prominent the glow map is, useful for something like a gun's barrel overheating).
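As a rough sketch of what such a material could hold (every name here is illustrative, not an existing engine type), something like:
[code]
#include <string>
#include <unordered_map>

// A hypothetical material record covering the three points above.
struct Material
{
    // 1) values and textures that change how the surface is shaded
    std::string albedoMap;
    std::string normalMap;
    std::string glowMap;

    // 2) a tileable, mipmapped detail map repeated many times across the surface,
    //    so close-up detail (cloth weave, scales, sand grain) stays cheap
    std::string detailMap;
    float       detailTiling = 16.0f;   // repeats per UV unit

    // 3) material-specific variables, e.g. how strongly the glow map shows
    //    (could be driven up as a gun barrel overheats)
    std::unordered_map<std::string, float> parameters{ { "glowIntensity", 0.0f } };
};
[/code]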

[color="#1c2837"]NOW!
Real-time radiosity, please let me know what you think of this, its something I'm not to sure about but if it works it should turn out quite powerful.
Real-Time Radiosity
The idea behind this is basically to render 2 low-resolution images the same way shadow maps are usually rendered. These images will be in a 64-bpp format:
R,G - Hue, Saturation
B - Light intensity (ambient occlusion and specular maps taken into account)
A - split into 2 spherical direction values for the estimated direction of the light bounce
and
R,G,B - Position
A - a spread value based on the light bounce compared to the normal and the image resolution.

Using the position, and after decoding the split spherical direction values, a ray trace is performed for each pixel; the points where the rays stop will hold radiosity lights, basically a distorted point light that compares the bounce value with the light value to create a sort of estimated spotlight.
The spread value affects the light's radius and intensity depending on how far the light is from the ray's start position.
The hue and saturation are converted back to R, G, B, and the lights are used while calculating the screen-space lightmap. Taking into account the already-calculated ambient occlusion, I feel this method of radiosity could be very effective, especially when you consider that you could change the radiosity map's resolution based on the distance between the current view and the light, allowing (in theory) very effective adaptive LOD in the lighting.
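To show roughly what I mean by the decode step, here is a small C++ sketch of unpacking the bounce direction and converting hue/saturation back to RGB. The angle convention, the use of the stored intensity as the HSV value, and all the names are my own assumptions, not a settled format.
[code]
#include <cmath>

struct Vec3 { float x, y, z; };

// Decode two packed spherical angles (each stored as 0-1) into a unit bounce
// direction. Assumes theta covers 0..pi measured from +Y and phi covers 0..2*pi.
Vec3 decodeBounceDir(float encodedTheta, float encodedPhi)
{
    const float pi = 3.14159265f;
    float theta = encodedTheta * pi;
    float phi   = encodedPhi * 2.0f * pi;
    return { std::sin(theta) * std::cos(phi),
             std::cos(theta),
             std::sin(theta) * std::sin(phi) };
}

// Hue/saturation back to RGB, using the stored light intensity as the HSV 'value'.
Vec3 hsvToRgb(float h, float s, float v)
{
    float r = std::fabs(h * 6.0f - 3.0f) - 1.0f;
    float g = 2.0f - std::fabs(h * 6.0f - 2.0f);
    float b = 2.0f - std::fabs(h * 6.0f - 4.0f);
    auto clamp01 = [](float x) { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); };
    Vec3 rgb = { clamp01(r), clamp01(g), clamp01(b) };
    // Blend toward white by (1 - s), then scale by the intensity.
    return { ((rgb.x - 1.0f) * s + 1.0f) * v,
             ((rgb.y - 1.0f) * s + 1.0f) * v,
             ((rgb.z - 1.0f) * s + 1.0f) * v };
}
[/code]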

I have more ideas on real-time [s]radiosity[/s], efficient and high-quality particle systems, and some other less significant graphics implementations.
But I'm still thinking most of them through.

Anyway! Back on topic,
I'd love to know any criticisms, suggestions, or references to similar or comparable implementations,
Anything you think may help or be of interest to me.

Thank you for reading,
Bombshell

[quote]G,B - U,V (Texture coordinates)
A - first 6 bits used to store a Material identifier allowing 64-terrain materials to be active at once[/quote]
This is possible - you can use the 6-bit integer as an index into an array of 64 textures.
However, if you sample this terrain texture ("HeightUVIndex texture"?) in the pixel shader, then each pixel will only receive one index, which means you can't blend between layers (e.g. sand can't fade into grass).


Also, interpolation of this texture is not possible because blending between two index values is nonsensical. This also complicates mip-mapping of the texture.
E.g. at mip level 0 there's a quad of 4 pixels with the indices for grass, rock, dirt, and sand; at mip level 1, those four pixels become a single pixel. Which index should that pixel specify?
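A tiny toy example of why averaging indices goes wrong (the palette and names here are made up purely for illustration):
[code]
#include <array>
#include <cstdio>

// Blending *indices* is meaningless: indices name entries in a palette,
// and averaging them just picks an unrelated entry.
int main()
{
    std::array<const char*, 4> palette = { "grass", "rock", "dirt", "sand" };

    int a = 0;                   // grass
    int b = 3;                   // sand
    int blended = (a + b) / 2;   // what bilinear filtering / mip averaging would do

    // Prints "rock" (1.5 truncated to index 1) -- not a grass/sand mix.
    std::printf("blend of %s and %s -> %s\n", palette[a], palette[b], palette[blended]);
    return 0;
}
[/code]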
[quote]Materials are nothing new... all of you know that, but I have a few ideas of how a material can be used and what a material can hold. Firstly --- a group of values and textures. Second mipmapped tileable maps. Third material-specific variables.[/quote]
Those aren't new ideas, sorry ;)
I have been trying to think of a way to allow interpolation, but I keep coming to the conclusion that it would take an extra colour channel. I'll keep thinking; should I come up with anything, I'll post it up.

[quote]Materials are nothing new... all of you know that, but I have a few ideas of how a material can be used and what a material can hold. Firstly --- a group of values and textures. Second mipmapped tileable maps. Third material-specific variables.
Those aren't new ideas, sorry ;)[/quote]
First point I know isn't new... kind of obvious with my use of the word obvious.
I'm assuming the second point (tileable textures aimed at enhancing small detail) is used by the Frostbite engine?
Either that, or they used very large textures, because from what I could see the detail on their clothing was very good right down to the tiny details.
The third I suspected had been used before; it's not like I was trying to be groundbreaking :)

Either way,
Thanks for the comment :)
I've come up with an interpolation compromise for the Materials.
Assigning materials to chunks rather than to pixels would work out for the better (especially when I reconsider how confusing and bandwidth-consuming the 64 materials would be). Realistically, unless you want huge chunks, 4 materials for any given area should suffice, and a 2-bit int would allow this.

R - Height (I assume you're all familiar with height mapping)
G,B - U,V (texture coordinates)
A - two 2-bit integers for materials, a 3-bit integer allowing 8 levels of interpolation (0 and 1 being not interpolated), and finally 1 bit determining whether or not there is ground at that pixel.

[color="#1c2837"]NOW!
Real-time radiosity, please let me know what you think of this, its something I'm not to sure about but if it works it should turn out quite powerful.
Real-Time Radiosity
The idea behind this is basicly render 2 low resolution images the same way shadow maps are usually rendered. these images will be 64-BPP format;
R,G - Hue, Saturation
B - [color=#1C2837][size=2]Light intensity (ambient occlusion and specular maps taken into account)[color="#1c2837"]
A - split in 2 Spherical Direction values for the estimated direction of light bounce
and
R,G,B - Position
A - spread value based on the light bounce compared to the normal and the image resolution.

Using the Position and after decoding the split spherical direction values, a ray-trace for each pixel is performed, the points where the rays stop will hold radiosity lights, basicly a distorted point light using the bounce value compared with the light value to create sort of an estimated spot light.
The spread value effecting the lights radius and intensity depending on how far from the ray start position the light is.
The Hue and saturation converted to R, G, B again. the lights will be used while calculating the screen-space lightmap, taking into account the already calculated ambient occlusion I feel this method of radiosity could be very effective, especially when you consider you could change the radiosity maps resolution based on the distance from the current view and the light allowing (in theory) very effective adaptive LOD in the lighting.

Let me know if I'm forgetting something, if my theory on real-time radiosity is comparable to anything, or if it just plain may not work :P
Thanks for reading,
Bombshell
For terrain rendering, check out the NVIDIA SDK 10 "Texture Arrays (terrain)" sample for an example.
Well, the example has given me an idea to get over the pixel-level UVs (and I was unaware of texture arrays, thanks for that).
Assigning each material a height map for use in terrain blending could fix this problem: render whichever material's map is higher at the current position, and interpolate when the materials' height maps are within a set distance of each other (roughly as in the sketch after the layouts below). And instead of a smoothing value, a height-manipulation value for each material in the current chunk. This could give some really good-looking results...
R - Height
G,B - four 4-bit integers representing height-manipulation values for the materials
A - 1 bit for blank space or terrain space and 7 bits for any other misc that may come in useful

Or alternatively

R - Height
G - two 2-bit values each indicating 1 of the 4 materials from the chunk, with 4 bits free for any other misc that may come in useful
B - a height-map modifier to indicate the "interpolation" between the 2 materials
A - 1 bit for blank space or terrain space and 7 bits for any other misc that may come in useful
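Here is a minimal sketch of the height-based blend I'm describing; the band width, bias values and function names are placeholders, nothing final.
[code]
#include <algorithm>

// Rough sketch of height-based material blending: whichever material's height
// sample is higher wins, and the two only mix when their heights are within
// 'band' of each other. h0/h1 are the materials' height-map samples (0-1),
// already offset by their per-chunk height-manipulation values.
float materialBlendFactor(float h0, float h1, float band)
{
    // 0 -> all material 0, 1 -> all material 1, smooth mix only inside the band.
    float t = (h1 - h0) / (2.0f * band) + 0.5f;
    return std::clamp(t, 0.0f, 1.0f);
}

// Hypothetical usage:
//   colour = lerp(sampleMaterial0(uv), sampleMaterial1(uv),
//                 materialBlendFactor(h0 + bias0, h1 + bias1, 0.05f));
[/code]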

I will mention something about why I'm trying to make use of the empty space.
The reason I've been thinking so intensely about this is to try to get as much as I can out of as little as I can: the most bang for my buck. Any free space could be used for more values in the rendering process; of course, if I cannot fill a space I will simply assign it to another value that could use the increased accuracy.
Efficiency is the key to powerhouse engines such as Frostbite 2 and CryENGINE 3. I have done quite a bit of research into those engines (and still am) in order to get ideas, and I am determined to make an engine as powerful as my abilities will allow.
It may not turn out to be a powerhouse, but at this rate, once I have the chance to test and refine all the ideas, I'll have at least a semi-decent engine.

Thanks for the comment,
And Thanks for reading,
Bombshell

