How to "render-generate" in 3D rasterization

Started by lucky6969b; last reply by Ashaman73, 13 years, 7 months ago
In the book AI Game Programming 4, Marden and Smith show how to combine
movement expansion tests with 3D rasterization to build
connectivity graphs. I do not understand (no code is provided) how to do the render-generate pass of the 3D rasterization using the game engine. I assume
they meant software rendering. But how can colors be converted into depth or other useful information? Can I use the Direct3D renderer instead? Could anyone provide some pseudocode or C++ code for this part?
Thanks in advance
Jack

[Edited by - lucky6969b on August 30, 2010 3:40:38 PM]
Quote:
But how can colors be converted into depth or other useful information?

A color is made up of three color channels, red, green and blue, plus an additional, not visible alpha channel. Most often one byte is used per channel, so you can encode any 32-bit integer value as an RGBA color with some simple bit shifting:

encoding:
int value = ...
byte red   = (value >> 24) & 0xff;
byte green = (value >> 16) & 0xff;
byte blue  = (value >>  8) & 0xff;
byte alpha = (value      ) & 0xff;

decoding:
int value = (red << 24) | (green << 16) | (blue << 8) | alpha;

All you need to do is give each piece of data a unique integer id and encode that id as an RGBA color. Then render all your surfaces with their corresponding colors (= ids), read the framebuffer back into main memory, and analyse the pixel positions. Convert the rendered colors back into ids and look up what "useful" information is associated with each id. That's all.
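To make the idea concrete, here is a minimal, self-contained C++ sketch of the id/color round trip and the lookup step. The small array standing in for the read-back framebuffer and the node names are made up for illustration; the bit layout matches the encoding shown above, and the render/readback calls themselves are engine-specific and omitted.

#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Pack a 32-bit id into an RGBA color, one byte per channel
// (same bit layout as the encoding snippet above).
uint32_t encodeId(uint32_t id)
{
    uint8_t r = (id >> 24) & 0xff;
    uint8_t g = (id >> 16) & 0xff;
    uint8_t b = (id >>  8) & 0xff;
    uint8_t a =  id        & 0xff;
    return (uint32_t(r) << 24) | (uint32_t(g) << 16) | (uint32_t(b) << 8) | uint32_t(a);
}

// Recover the id from a read-back pixel (the packing above is its own inverse).
uint32_t decodeId(uint32_t rgba) { return rgba; }

int main()
{
    // Id -> "useful" information, e.g. which connectivity-graph node a surface belongs to.
    // The names here are purely illustrative.
    std::unordered_map<uint32_t, const char*> infoById = {
        { 1, "floor node A" },
        { 2, "floor node B" }
    };

    // Stand-in for the framebuffer: each surface was rendered with its id-color,
    // then the buffer was copied back into main memory.
    std::vector<uint32_t> framebuffer = { encodeId(1), encodeId(2), encodeId(1) };

    // Decode every pixel back to an id and look up the associated data.
    for (size_t i = 0; i < framebuffer.size(); ++i)
    {
        uint32_t id = decodeId(framebuffer[i]);
        auto it = infoById.find(id);
        if (it != infoById.end())
            std::printf("pixel %zu belongs to %s\n", i, it->second);
    }
    return 0;
}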

Quote:
Can I use the Direct3D Renderer instead?

Either DirectX or OpenGL; both APIs permit rendering and reading the colors back.
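For example, the readback step in OpenGL might look roughly like the sketch below (it assumes a valid GL context and that the id-colored scene has already been drawn to the current framebuffer):

#include <GL/gl.h>
#include <cstdint>
#include <vector>

// Read the current framebuffer back into main memory as 32-bit pixels.
// Note: the byte order of each uint32_t depends on platform endianness,
// so decode with that in mind.
std::vector<uint32_t> readBackPixels(int width, int height)
{
    std::vector<uint32_t> pixels(size_t(width) * size_t(height));
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels; // decode ids from these pixels afterwards
}

With Direct3D 9 the rough equivalent is copying the render target into a system-memory surface with GetRenderTargetData and then locking it with LockRect to walk the pixels.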

