# cppcdr

1. ## Faking geometric complexity with deferred shading?

I've been playing around with deferred shading lately and had an idea that I wanted to bounce off people who have more experience in the matter. I was thinking that since we have the depth as a texture, would it not be possible to add fake geometry to the scene by "extruding" from a normal map? What I mean is, take the depth texture and add/subtract to it based on the information in the normal map texture. So if we have the edge of a model that has a normal map with say a spike in it, it would add that spike to the depth buffer, thus making it renderable. This would not work on big bumps because there would be popping when the model rotates and more of the normal texture is exposed. I know that it would be necessary to compute some new texture coords to make up for the shift, but wouldn't it be less costly than sending a few thousand extra polys to the graphics card? The calculations to do this would probably be complex so I wanted to see if anyone had experimented with this or if anyone can see a flaw that would make this unusable. Thanks for your time.
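For what it's worth, the per-pixel adjustment being described is close to parallax mapping applied to the G-buffer depth. A minimal host-code sketch of the two operations (the texture-coordinate shift and the depth nudge); the function names, the tangent-space view direction input, and the scale parameter are all illustrative assumptions, not a tested implementation:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Shift the texture coordinate along the tangent-space view direction by an
// amount proportional to the sampled height (the "new texture coords" the
// post mentions). viewTS and scale are assumed inputs.
Vec2 parallaxShift(Vec2 uv, Vec2 viewTS, float height, float scale) {
    return { uv.x + viewTS.x * height * scale,
             uv.y + viewTS.y * height * scale };
}

// Nudge the stored depth toward the camera where the height map has a bump,
// so the "extruded" detail wins the depth test.
float extrudeDepth(float depth, float height, float scale) {
    return depth - height * scale;
}
```

In a real renderer both steps would run in the pixel shader over the G-buffer, not on the CPU.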
2. ## How to make a wavey sheet?

Try looking into Perlin noise. It is repetitive, but only after a while. 2D Perlin noise will give you a static heightmap; use 3D Perlin noise to animate the heightmap (the first two dimensions are the position, and the third is time). If you really want to, you could even use 4D or 5D Perlin noise to remove some of the repetitiveness. The nice thing about Perlin noise is that it generates a nice smooth mesh while animating because it is continuous. You will not have any points appearing out of nowhere; they will grow in slowly.
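A minimal sketch of the "2D position + 1D time" idea in C++, using 3D value noise (a simpler cousin of Perlin noise that shares the continuity property); the hash constants are arbitrary and the function names are mine:

```cpp
#include <cassert>
#include <cmath>

// Hash a 3D lattice point to a pseudo-random float in [0, 1).
static float hash3(int x, int y, int z) {
    unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u
               + (unsigned)z * 2147483647u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xFFFFFFu) / (float)0x1000000;
}

static float fade(float t)  { return t * t * (3.0f - 2.0f * t); }  // smooth fade
static float lerpf(float a, float b, float t) { return a + (b - a) * t; }

// 3D value noise: trilinearly interpolate hashed lattice values with a smooth
// fade. Feed (x, y) as position and t as time for an animated, continuous
// heightmap; because the result varies smoothly in t, the waves grow and
// shrink rather than popping.
float valueNoise3(float x, float y, float t) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y), ti = (int)std::floor(t);
    float u = fade(x - xi), v = fade(y - yi), w = fade(t - ti);
    float c000 = hash3(xi, yi, ti),         c100 = hash3(xi + 1, yi, ti);
    float c010 = hash3(xi, yi + 1, ti),     c110 = hash3(xi + 1, yi + 1, ti);
    float c001 = hash3(xi, yi, ti + 1),     c101 = hash3(xi + 1, yi, ti + 1);
    float c011 = hash3(xi, yi + 1, ti + 1), c111 = hash3(xi + 1, yi + 1, ti + 1);
    float x00 = lerpf(c000, c100, u), x10 = lerpf(c010, c110, u);
    float x01 = lerpf(c001, c101, u), x11 = lerpf(c011, c111, u);
    return lerpf(lerpf(x00, x10, v), lerpf(x01, x11, v), w);
}
```

Each frame you would evaluate `valueNoise3(x * freq, y * freq, time)` per sheet vertex and use the result as the vertex height.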
3. ## Simple, dumb, reflection

Ok, sorry about misunderstanding your question. My best guess would be to fire a ray from the eye to each bottom vertex and reflect it based on the normal. Then do a ray-plane intersection test with a plane placed at the top position in your drawing. This intersection point will be your texture coordinate (you must of course normalize it to get it into the 0 to 1 range). If you do this, I believe the texture will be rendered correctly. If I'm wrong, post your vertex shader code so I can take a look.
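Sketched in C++ (plain vector math, function names are my own), the reflect-then-intersect step might look like this, assuming the plane is horizontal at height planeY:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflect the incident direction d about the (unit) surface normal n.
Vec3 reflect(Vec3 d, Vec3 n) { return sub(d, mul(n, 2.0f * dot(d, n))); }

// Intersect the ray origin + t * dir with the horizontal plane y = planeY.
// Assumes dir.y != 0, i.e. the reflected ray actually heads toward the plane.
Vec3 hitPlaneY(Vec3 origin, Vec3 dir, float planeY) {
    float t = (planeY - origin.y) / dir.y;
    return add(origin, mul(dir, t));
}
```

Dividing the hit point's x/z by the plane's extents then gives the normalized texture coordinate.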
4. ## Simple, dumb, reflection

Look up planar reflections on Google or GameDev; I think that's what you want. There are lots of articles, but this one gave me all the help I wanted: http://www.riemers.net/eng/Tutorials/XNA/Csharp/series4.php It describes how to create a mirror surface (water). There were some minor errors, but I can't remember where. Here's a quick rundown of the procedure to render reflections: 1) render everything that will be reflected to a texture, using a camera mirrored about the reflection plane; 2) find the projected texture coordinates of each vertex; 3) render the texture from step 1 onto the reflective surface using those texture coordinates.
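The "inverted camera" of step 1 is usually built by composing the view matrix with a mirror matrix, and step 2 is a simple clip-space remap. A hedged C++ sketch for a horizontal reflection plane at height h (row-major, column-vector convention; all names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float u, v; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Mirror the world across the plane y = h: y' = 2h - y. Composing this with
// the normal view matrix gives the inverted camera for the reflection pass.
Mat4 mirrorAcrossY(float h) {
    return {{{1.0f, 0.0f, 0.0f, 0.0f},
             {0.0f,-1.0f, 0.0f, 2.0f * h},
             {0.0f, 0.0f, 1.0f, 0.0f},
             {0.0f, 0.0f, 0.0f, 1.0f}}};
}

Vec4 transform(const Mat4& M, Vec4 p) {
    return { M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3]*p.w,
             M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3]*p.w,
             M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3]*p.w,
             M.m[3][0]*p.x + M.m[3][1]*p.y + M.m[3][2]*p.z + M.m[3][3]*p.w };
}

// Step 2: map a clip-space position to [0,1] projected texture coordinates
// (the v flip matches Direct3D-style texture addressing).
Vec2 projectedTexCoord(Vec4 clip) {
    return { 0.5f + 0.5f * clip.x / clip.w,
             0.5f - 0.5f * clip.y / clip.w };
}
```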
5. ## GeForce 8600 SLI OpenGL issues

The only thing I can think of is either the card is not meant to be used in that manner, or you don't have the latest drivers from NVIDIA (many people forget to update regularly).
6. ## VTF to do terrain rendering

Perhaps, but I thought the number of triangles that can be output is limited, because geometry shaders are slow in comparison to the other shader stages. That would keep the maximum size of the caves small. Please correct me if I'm wrong.
7. ## VTF to do terrain rendering

Well, in terms of overhangs or caves, what I do is replace the patch that has that feature with a 3D model. The adjacent patches automatically adapt to the LOD of the center model (it took a while to figure out how). As far as I know, there are no terrain methods (besides voxel terrain) that allow caves. Also, VTF is really fast on most recent graphics cards, and even if I do lose a few fps (I haven't benchmarked, so I don't really know), the trade-off in ease of use is more than worth it.
8. ## VTF to do terrain rendering

No, I use vertex texture fetch along with a single VBO to render my terrain. The combination of the two saves memory and seems to render faster (because I can make use of really efficient LOD). I don't really know if there is a big speedup from VTF itself, but the fact that everything is handled in my vertex shader greatly increases the amount of flexibility I have. I never have to write to my vertex buffer again, so it is really efficient: static VBOs are heavily optimized, while dynamic VBOs are slower, if I remember correctly. So you would get a speed boost from that if you were changing your mesh often. Oh, and I just thought of this: you could theoretically use a single terrain object to render hundreds of terrains. You just pass a texture to the render function and the vertex shader takes care of the rest, so it's even more memory efficient :)
9. ## [SOLVED] xyz rotations to match spherical coordinates?

You would first of all have to transpose your matrix. By transpose, we do not mean translate; there is a big difference. What you are doing is setting the translation to zero. Transposing a matrix means taking each element and exchanging it with the element at the mirrored position across the diagonal. For example, the original matrix

| m11 m12 m13 m14 |
| m21 m22 m23 m24 |
| m31 m32 m33 m34 |
| m41 m42 m43 m44 |

becomes

| m11 m21 m31 m41 |
| m12 m22 m32 m42 |
| m13 m23 m33 m43 |
| m14 m24 m34 m44 |

Secondly, you say that the entire world is shifted. Have you reset your view matrix between renderings? If you do not restore the normal matrix (not the billboard matrix) before rendering the rest of the scene, you will get weird results. Finally, about the DirectX sprites: I don't know if they are slow and inaccurate in DX9 because I never used it. In DX10 they work very well; I hardly notice any frame drop from the sprites. However, if you are worried about speed and accuracy, you could create your own sprite class using simple billboarded quads with a texture.
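The transpose itself is a tiny loop; a minimal C++ sketch using the 4x4 layout from the example above:

```cpp
#include <cassert>

struct Mat4 { float m[4][4]; };

// Exchange each element with the one at the mirrored position across the
// diagonal: out[i][j] = in[j][i].
Mat4 transpose(const Mat4& a) {
    Mat4 t;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            t.m[i][j] = a.m[j][i];
    return t;
}
```

For a pure rotation matrix the transpose is also the inverse, which is why transposing the view rotation is the usual way to build a billboard matrix.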
10. ## VTF to do terrain rendering

The reason I use VTF is that I can have only one vertex buffer and one index buffer and tile them, which saves memory; basically what you were saying in your second paragraph. However, I tried small patch sizes like 17x17 and 33x33 and had a hard time rendering a large mesh (4k x 4k). By making the patch larger, such as 129x129, there was a large speed increase (2.5x if I remember correctly). The bottleneck seemed to be the many draw calls that were needed; remember, VBOs are optimized for rendering large numbers of triangles, not for repeatedly rendering small numbers. Also, you can implement LOD easily by simply having more than one index buffer (one for each level). If you want to keep things simple, keep track of the adjacent patches and, in the vertex shader, move some of the vertices on the finer patches to make sure there are no cracks. This keeps you from making many index buffers for each level. Hope this answers your questions.
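A hedged sketch of the one-index-buffer-per-LOD-level idea for an N x N vertex patch (function and parameter names are mine): level 0 uses every vertex, and each further level skips every other one. With patch sizes like 33x33 or 129x129, N - 1 is a power of two, so every level tiles evenly.

```cpp
#include <cassert>
#include <vector>

// Build a triangle-list index buffer for an n x n vertex grid at a given LOD
// level: level 0 uses every vertex, level 1 every second, and so on.
// (n - 1) should be divisible by the step so the patch edges line up.
std::vector<unsigned> gridIndices(unsigned n, unsigned level) {
    unsigned step = 1u << level;
    std::vector<unsigned> idx;
    for (unsigned y = 0; y + step < n; y += step) {
        for (unsigned x = 0; x + step < n; x += step) {
            unsigned i0 = y * n + x;
            unsigned i1 = y * n + x + step;
            unsigned i2 = (y + step) * n + x;
            unsigned i3 = (y + step) * n + x + step;
            // two triangles per quad
            idx.insert(idx.end(), {i0, i2, i1, i1, i2, i3});
        }
    }
    return idx;
}
```

At render time you keep the single vertex buffer bound and just pick the index buffer for the patch's LOD level.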
11. ## [SOLVED] xyz rotations to match spherical coordinates?

I suggest that you look at billboarding. Billboarding uses a matrix to transform the quad so that it is facing the camera. You could also use sprite objects if you are using DirectX (I don't know if OpenGL has this feature, but probably). In DirectX the sprites are automatically facing the camera (once again, OpenGL probably acts the same way, but don't quote me on that). Hope this helps.
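A rough sketch of how that billboard matrix can be constructed from the camera and object positions (spherical billboarding; vector helpers and names are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Build an orthonormal basis whose "forward" points from the quad to the
// camera. right/up become the rotation part of the billboard matrix, and each
// quad corner is placed at center + right * cx + up * cy.
void billboardAxes(Vec3 objPos, Vec3 camPos, Vec3& right, Vec3& up) {
    Vec3 forward = normalize(sub(camPos, objPos));
    Vec3 worldUp = {0.0f, 1.0f, 0.0f};  // assumes the camera is not directly overhead
    right = normalize(cross(worldUp, forward));
    up = cross(forward, right);
}
```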

13. ## trouble with rendertarget...

Sorry, devronious, I can't help with MDX because I have no experience with it. However, it seems that in your code you have dev.RenderState.ZBufferEnable = false; could it be that this is causing the problem? Also, check the return value of this.renderTarget = new Microsoft.DirectX.Direct3D.Texture. Perhaps you are not creating the render target at all. Another thing to check would be to run through in debug mode and see if DrawIndexedPrimitives is actually called; you have some odd code before it that may be blocking the call.
14. ## Adjacent Vertices in a .3ds mesh

So if I understand correctly, you are trying to get per-face lighting using per-vertex lighting? If that is the case, then I guess the best way to do it would be to create some extra vertices (36 total) and calculate the normals afterwards based on the triangle information (from the indices). If you are actually trying to get one normal per vertex, you could do it this way: create temporary extra vertices (3 per triangle) and calculate the face normal on each one (so 36 verts, 6 per quad because there are 2 tris, and 12 face normals). Then iterate through each real vertex (8 in this case) and average the normals of the temporary vertices that have the same position. In other words, you would be averaging the face normals of every triangle that touches the vertex. It might not be the best way to do it, but I found that it worked great on my models. If the method I described is not clear, tell me and I'll re-explain tomorrow... I'm tired right now so I don't know if what I am writing makes any sense :)
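The averaging step, sketched in C++ (names are mine). For brevity this version assumes vertices that share a position already share an index, so it accumulates face normals per index instead of comparing positions; the result is the same smooth normal:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Smooth per-vertex normals: accumulate the face normal of every triangle
// touching a vertex, then normalize the sum.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& pos,
                                const std::vector<unsigned>& idx) {
    std::vector<Vec3> n(pos.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t i = 0; i + 2 < idx.size(); i += 3) {
        unsigned a = idx[i], b = idx[i + 1], c = idx[i + 2];
        Vec3 fn = cross(sub(pos[b], pos[a]), sub(pos[c], pos[a]));  // face normal
        n[a].x += fn.x; n[a].y += fn.y; n[a].z += fn.z;
        n[b].x += fn.x; n[b].y += fn.y; n[b].z += fn.z;
        n[c].x += fn.x; n[c].y += fn.y; n[c].z += fn.z;
    }
    for (Vec3& v : n) v = normalize(v);
    return n;
}
```

For a .3ds mesh with duplicated vertices you would first merge (or group) vertices by position, then run the same accumulation over the groups.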
15. ## directinput questions

You could have a boolean flag that indicates whether the user has released the key. For example, you would first wait for the key. Once you receive the keydown message, you set the flag to false, meaning the key is pressed. Then, once the key is released, you set the flag to true, and you only process the input when the flag is true. Pseudocode:

```
CheckKeyPress() {
    if (key1 is down) {
        keyNumber = 1
        bIsReleased = false
    }
}

CheckKeyRelease() {
    if (key1 is up)
        bIsReleased = true
}
```

In the game loop, you would have:

```
CheckKeyPress()
CheckKeyRelease()
if (bIsReleased && keyNumber != -1) {
    ProcessNumber(keyNumber)  // do your processing of the key number here
    keyNumber = -1            // clear the key press
}
```

(The keyNumber != -1 check keeps you from processing before any key has actually been pressed, since the flag starts out true.)