About dreddlox

  1. In all breeds of MPEG, and in many of the other major video codecs, the YV12 color space is used. This means:
     1) The chroma (color) and lumi (brightness) components of each texel are split so that 3 channels remain: Y (lumi), U (chroma 1), V (chroma 2).
     2) The chroma is sampled at quarter resolution, meaning for every 4 texels there are 1(channel)*4(texels)*8 lumi bits + 2(channels)*1(texel)*8 chroma bits = 48 bits, or 12 bits per texel (hence "YV12").
     3) The chroma can be compressed further (i.e. more bpp reduction) in later stages, as it still plays a fairly insignificant role in what our eyes pick up.
     If you are using bump mapping, you can multiply the lumi into the normal map (scaling to avoid overflow, of course), leaving 5 channels of data, 2 of which are low-res, rather than the regular 6.
     Normal method (uncompressed): X8R8G8B8 color + X8R8G8B8 normal = 64 bpp. Color detail: 100%. Lumi detail: 100%.
     My method (uncompressed): 1/4 * R8G8 color + X8R8G8B8 normal = 36 bpp. Color detail: ~25% (hey, DVD quality is actually worse than this). Lumi detail: ~99% (scaling the normals to allow a lumi range of 0-3 drops quality by 1.2%).
     Normal method (compressed): DXT1/S3TC color + DXT3/S3TC normal = 12 bpp. Color detail: ~50%. Lumi detail: ~50%.
     My method (compressed): 1/4 * DXT1/S3TC color + DXT3/S3TC normal = 9 bpp. Color detail: ~12.5%. Lumi detail: ~50%.
     So it's basically a cheap way to increase texture sizes by 25%-43.75% while losing an acceptable amount of detail.
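     The 4:2:0 split described above can be sketched as follows. This is illustrative only: the RGB-to-YUV weights are the BT.601 coefficients (an assumption, since the post doesn't pin down a conversion), and the chroma is box-filtered over each 2x2 tile.

```python
def rgb_to_yuv(r, g, b):
    # BT.601 luma weights -- an assumption; MPEG-family codecs use these
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def pack_block(block):
    # block: four (r, g, b) texels forming a 2x2 tile
    yuv = [rgb_to_yuv(*t) for t in block]
    ys = [t[0] for t in yuv]              # 4 full-res luma samples
    u = sum(t[1] for t in yuv) / 4.0      # 1 box-filtered chroma sample
    v = sum(t[2] for t in yuv) / 4.0      # 1 box-filtered chroma sample
    return ys, u, v

# Bit budget per 2x2 tile: 4*8 luma + 2*8 chroma = 48 bits -> 12 bits/texel
print((4 * 8 + 2 * 8) / 4)  # 12.0
```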
  2. Quote: Original post by Monder
     Why do you have to calculate either? Can't you just store the Z in the normal map along with X and Y (as well as having an RGB texture), or am I missing something here?

     Mainly to conserve memory bandwidth. Almost all texture formats require that a texel is either 1, 2 or 4 bytes big; this means an XYZ texture would be stored as an XYZ_ texture (i.e. with an unused byte on the end), whereas an XY texture would be stored as-is and would take up half the space of the XYZ_. Another possibility is having an RGBX texture and a YZ texture, but that just feels silly. Admittedly, this means my algorithm is most useful in a situation with an alpha channel, e.g. RG + XYZA. But combinations of the DXTs can make RG + XYZ worthwhile.
  3. Quote: Original post by Cypher19
     What about the other 99.9% of cases in which the diffuse/colour texture isn't pseudo-normalized, such that R+G+B=1?

     The color texture is pseudo-normalized in preprocessing (i.e. not at runtime). When the length of the normal is adjusted, the texel effectively becomes brighter or dimmer to compensate for the pseudo-normalization. This is basically an adaptation of the YUV color of video codecs (Y = lumi, UV = chroma). Have an example:
     Start with the color RGB = {0.5, 1.0, 0.5}, the normal XYZ = {sqrt(0.5), sqrt(0.5), 0}, and the transformed light vector {1, 0, 0}.
     Do my preprocess math to get the new values: RG = {0.25, 0.5}, XYZ = {sqrt(0.5)*2/3, sqrt(0.5)*2/3, 0}.
     In the pixel shader: calculate that RGB' = {0.25, 0.5, 0.25}. Multiply through the lighting equation to get the pixel value: RGB = {sqrt(0.5)/2, sqrt(0.5), sqrt(0.5)/2}.
     Compared to the traditional method: preprocess to get RGB = {0.5, 1.0, 0.5}, XY = {sqrt(0.5), sqrt(0.5)}.
     In the pixel shader: calculate XYZ = {sqrt(0.5), sqrt(0.5), 0}. Multiply through the lighting equation to get the pixel value: RGB = {sqrt(0.5)/2, sqrt(0.5), sqrt(0.5)/2}.
     Edit: I'm tired and not completing my sentences. Lay off >.<
  4. OK, this just came to me (the recent Gamasutra article about texture compression sparked the idea).
     Proposal: Instead of storing a texture + normal map in RGB + XY form with a calculated Z, I believe it would be more efficient to store it in RG + XYZ form with a calculated B.
     Explanation: Deriving Z from X and Y on a normal map is a fairly expensive process. If Z is stored with the normal map, the normal map's length component can be adjusted to increase or decrease the brightness of that texel. If the color component is "normalized" so that R+G+B = 1 and the XYZ normal is shortened/elongated to compensate, it is possible to remove the B component from the color and calculate it through the equation B = 1 - R - G.
     The math behind it:
     Traditional method pixel shader code (no preprocessing):
     XY = XY
     Z = sqrt(1 - X*X - Y*Y)
     RGB = RGB * dot(XYZ, lightXYZ)
     My method:
     Preprocessing:
     XYZ = XYZ * (R+G+B) / 3
     RG = RG / (R+G+B)
     Pixel shader code:
     XYZ = XYZ * 3
     RG = RG * dot(XYZ, lightXYZ)
     B = (1-R-G) * dot(XYZ, lightXYZ)
     As you can see, this eliminates a sqrt cycle and untangles a mul in the pixel shader. Has anyone thought of this before? Has anyone ever seen an implementation of it?
     Edit: My math was funky; I fixed it.
     [Edited by - dreddlox on January 2, 2006 5:10:11 AM]
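     The claimed equivalence of the two paths can be checked numerically. The sketch below is a CPU-side transcription of the formulas above, with Python standing in for the pixel shader; `traditional` and `proposed` are hypothetical names, not from the post.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def traditional(rgb, xy, light):
    # RGB + XY storage: reconstruct Z with a sqrt, then light
    z = math.sqrt(max(0.0, 1.0 - xy[0] ** 2 - xy[1] ** 2))
    n = (xy[0], xy[1], z)
    return tuple(c * dot(n, light) for c in rgb)

def proposed(rgb, xy, light):
    # Preprocessing (offline): fold brightness into the normal's length
    s = rgb[0] + rgb[1] + rgb[2]
    z = math.sqrt(max(0.0, 1.0 - xy[0] ** 2 - xy[1] ** 2))
    n = tuple(c * s / 3.0 for c in (xy[0], xy[1], z))
    rg = (rgb[0] / s, rgb[1] / s)
    # Pixel shader: no sqrt needed
    n = tuple(c * 3.0 for c in n)
    d = dot(n, light)
    return (rg[0] * d, rg[1] * d, (1.0 - rg[0] - rg[1]) * d)

rgb, xy, light = (0.5, 1.0, 0.5), (math.sqrt(0.5), math.sqrt(0.5)), (1.0, 0.0, 0.0)
print(traditional(rgb, xy, light))
print(proposed(rgb, xy, light))   # matches the traditional result
```

     Running this on the example values from the reply above reproduces the same lit pixel from both encodings.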
  5. Best shadow...

     Although shadow maps are almost certainly the best method, that article seems a bit dated. On any pixel-shader-compatible card, you should be able to render shadows in the same pass as the geometry, eliminating the need for an offscreen "shaded area buffer". To do soft shadows without using such a buffer, simply take many samples offset from the texcoord tested against the shadow map. If you use the equation:
     Offset = radius of light * 1st sample dist / pixel dist
     where "1st sample dist" is the result of the initial shadow-map texture read, and "pixel dist" is the distance from the light to the rendered pixel, then you should have a nicely formed umbra/penumbra surrounding each object (i.e. shadows blur more as they get further from objects).
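     A CPU-side transcription of the sampling rule above, assuming a simple plus-shaped tap pattern; the pattern, the names, and the `shadow_test` callback are mine, not from the post.

```python
def penumbra_offset(light_radius, first_sample_dist, pixel_dist):
    # Offset = radius of light * 1st sample dist / pixel dist
    return light_radius * first_sample_dist / pixel_dist

def soft_shadow(shadow_test, u, v, offset):
    # average several shadow-map tests offset around (u, v)
    taps = ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))
    hits = [shadow_test(u + dx * offset, v + dy * offset) for dx, dy in taps]
    return sum(hits) / len(hits)

# fully lit everywhere -> shadow factor 1.0
print(soft_shadow(lambda u, v: 1.0, 0.5, 0.5, penumbra_offset(0.1, 2.0, 4.0)))
```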
  6. How to Render To Vertex Buffer?

     Thanks for replying, but the first link applies to OpenGL only, and the second talks about D-maps. I have found the OpenGL extension GL_EXT_pixel_buffer_object, but unfortunately it has no use in D3D. I wonder if typecasting between VBs and RTs would work...
  7. Hi, I've a wicked idea for a game engine design, but the problem is that it's HEAVILY vertex-processor limited. I was wondering if there is a way to use pixel shaders to output data into a simple vertex buffer (I was thinking of rendering the streams individually: stream 0 = position, stream 1 = normal, etc.). I have seen ATI mention this capability a few times, and I believe there is an OpenGL extension for it.
  8. As far as coding time goes, I'd have to say: "If you have to ask... don't make any other plans for a while." The best way to get estimates for these things is to actually start coding and see how many brick walls you run into setting out the framework...
     In my life, I've started 3 game engine projects; my current one is the only one that can successfully render. Not because I can't make things render (that's easy), but because I hit a problem somewhere else in the engine and eventually gave up. (First engine: had to replace the GL push/popping with actual matrix math, throughout. Second engine: aimed for photorealistic physics, ended up with a problem with over-application of torque, never managed to fix it. Third engine: still working on it.) Generally, if you're still at the stage where you're keeping tutorials and articles to learn from, you're gonna have to code them 2 or 3 times before you really understand where they fit...
     I'd say the main "brick wall" here would be 3D physics... It is a huge topic, and it takes a long time just to learn how the plane equation (Ax + By + Cz + D = 0) works. If you were to shortcut by checking the height of each tire against the height at that point on the height map, you could make the demo in that time frame; but to actually learn all the physics involved, or code more than a linear AI, you'd get stuck, or get stressed.
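     The height-map shortcut mentioned above might look something like this sketch, assuming a regular grid of heights and bilinear interpolation; all names here are hypothetical.

```python
def terrain_height(heightmap, x, z):
    # bilinearly interpolate a 2D grid of heights (row-major: heightmap[z][x])
    x0, z0 = int(x), int(z)
    fx, fz = x - x0, z - z0
    h00, h10 = heightmap[z0][x0], heightmap[z0][x0 + 1]
    h01, h11 = heightmap[z0 + 1][x0], heightmap[z0 + 1][x0 + 1]
    top = h00 * (1.0 - fx) + h10 * fx
    bottom = h01 * (1.0 - fx) + h11 * fx
    return top * (1.0 - fz) + bottom * fz

def tire_touches_ground(tire_pos, radius, heightmap):
    # the shortcut: compare the tire's lowest point against the terrain
    x, y, z = tire_pos
    return y - radius <= terrain_height(heightmap, x, z)

flat = [[1.0, 1.0], [1.0, 1.0]]
print(tire_touches_ground((0.5, 1.4, 0.5), 0.5, flat))  # True (0.9 <= 1.0)
print(tire_touches_ground((0.5, 2.0, 0.5), 0.5, flat))  # False
```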
  9. RT depth values

     What nauseating MSDN documents... I found out that the z-buffer's value should look something like this:
     Pf = far plane, Pn = near plane, Z = input Z, ZB = value in the buffer (from 0.0 to 1.0)
     ZB = (Pf*Z/(Pf-Pn) - Pf*Pn/(Pf-Pn))/Z
     This assumes that ZB = projected point.z / projected point.w, so actually finding the original Z may be quite a hassle... Let me try some algebra:
     ZB = Pf*(Z-Pn)/(Z*(Pf-Pn))
     ZB*(Pf-Pn) = Pf - Pf*Pn/Z
     Pf*Pn/Z = Pf - ZB*(Pf-Pn)
     Z = -Pf*Pn/(ZB*(Pf-Pn) - Pf)
     But check my math... lots... check it till it bleeds; even I don't trust it. But if it works, that should be your Z value.
     [Edited by - dreddlox on September 6, 2004 12:41:25 AM]
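     The final formula does round-trip against the ZB expression above, which a quick numerical check confirms; the function names are mine.

```python
def depth_buffer_value(z, pn, pf):
    # ZB = (Pf*Z/(Pf-Pn) - Pf*Pn/(Pf-Pn)) / Z, i.e. projected z' / w'
    return (pf * z / (pf - pn) - pf * pn / (pf - pn)) / z

def recover_z(zb, pn, pf):
    # the inversion derived above: Z = -Pf*Pn / (ZB*(Pf-Pn) - Pf)
    return -pf * pn / (zb * (pf - pn) - pf)

pn, pf = 1.0, 100.0
for z in (1.0, 10.0, 50.0, 100.0):
    zb = depth_buffer_value(z, pn, pf)
    print(z, recover_z(zb, pn, pf))   # each pair matches
```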
  10. RT depth values

      Hi, I've been thinking about the problem for some time now (wanting to make projected texture maps without a messy pixel shader), and I was thinking, but have yet to actually try: the projection matrix ensures the depth buffer holds a value between 0.0 and 1.0. If you can bind the depth buffer as a normal texture and use a pixel shader to apply the inverse projection onto an FP texture, it should work fine. Unfortunately my projection matrix knowledge is thin... I was a NeHe-reading child...
  11. I once had this problem making a Counter-Strike OpenGL wrapper (I had such great plans for that DLL, but alas, I hated Counter-Strike). The problem was that when you're drawing quads, they're broken into triangles 1,2,3 and 1,3,4 to maintain counterclockwise consistency. I neglected that the consistency had to be maintained and assumed they went 1,2,3 and 1,4,3, which made half my normals point inwards and appear unlit; this was after neglecting that cross products require CCW or CW (whichever it was >.<) consistency, normalized inputs and normalized outputs.
      To me it appeared you weren't even using quads in that screenshot, so check the way you're turning points into polys. I would so recommend quadstrips or tristrips for terrain... though DX is far less wasteful than OGL once you get indexed transformed vertex buffers running, that's no reason to quit good habits.
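      The winding pitfall above can be shown with a 2D signed-area test: splitting a CCW quad as (1,2,3) and (1,3,4) keeps both triangles CCW, while the mistaken (1,4,3) split flips the second one. Zero-based indices below; all names are mine.

```python
def quad_to_tris(a, b, c, d):
    # both triangles inherit the quad's winding order
    return [(a, b, c), (a, c, d)]

def signed_area2(p, q, r):
    # 2D cross product: positive when p, q, r wind counterclockwise
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

corners = [(0, 0), (1, 0), (1, 1), (0, 1)]     # a CCW quad
good = quad_to_tris(0, 1, 2, 3)                # [(0,1,2), (0,2,3)]
bad = [(0, 1, 2), (0, 3, 2)]                   # the mistaken 1,4,3 split

print([signed_area2(*[corners[i] for i in t]) > 0 for t in good])  # [True, True]
print([signed_area2(*[corners[i] for i in t]) > 0 for t in bad])   # [True, False]
```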