
cppcdr

Member
  • Content Count

    194
  • Joined

  • Last visited

Community Reputation

168 Neutral

About cppcdr

  • Rank
    Member
  1. I've been playing around with deferred shading lately and had an idea that I wanted to bounce off people who have more experience in the matter. Since we already have the depth as a texture, would it not be possible to add fake geometry to the scene by "extruding" it from a normal map? What I mean is: take the depth texture and add/subtract to it based on the information in the normal map texture. So if the edge of a model has a normal map with, say, a spike in it, the spike would be added to the depth buffer, making it renderable. This would not work for big bumps, because there would be popping as the model rotates and more of the normal texture is exposed. I know it would be necessary to compute new texture coordinates to make up for the shift, but wouldn't it be less costly than sending a few thousand extra polys to the graphics card? The calculations to do this would probably be complex, so I wanted to see if anyone has experimented with this, or if anyone can see a flaw that would make it unusable. Thanks for your time.
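     To make that concrete, here is a rough, untested sketch of the kind of pixel shader pass I have in mind (gDepth, heightMap and depthScale are made-up names, and this ignores the texture coordinate shift I mentioned):

     Texture2D gDepth;      // scene depth from the G-buffer (made-up name)
     Texture2D heightMap;   // height detail stored alongside the normal map (made-up name)
     SamplerState samPoint;

     float depthScale;      // how far a height of 1 pushes the surface toward the camera

     // Writes a displaced depth so later passes "see" the extruded detail.
     float ExtrudeDepthPS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_Depth
     {
         float sceneDepth = gDepth.SampleLevel(samPoint, uv, 0).r;
         float height = heightMap.SampleLevel(samPoint, uv, 0).r;

         // Pull the depth toward the camera where the detail sticks out
         return sceneDepth - height * depthScale;
     }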
  2. cppcdr

    How to make a wavey sheet?

     Try looking into Perlin noise. It is repetitive, but only over long distances. 2D Perlin noise will give you a static heightmap; use 3D Perlin noise to animate it (the first two dimensions are the position, and the third is time). If you really want to, you could even use 4D or 5D Perlin noise to remove some of the repetitiveness. The nice thing about Perlin noise is that it generates a nice smooth mesh while animating, because it is continuous: you will not have any points appearing out of nowhere, they grow in slowly.
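     To give you an idea, the displacement in a vertex shader could look something like this (untested sketch; noiseTex would be a tiling 3D Perlin noise texture you generate yourself on the CPU, and waveScale/waveHeight are made up):

     matrix worldViewProj;
     Texture3D noiseTex;    // tiling 3D Perlin noise, precomputed (made-up name)
     SamplerState samNoise;

     float time;            // seconds, passed in from the application
     float waveScale;       // horizontal frequency of the waves
     float waveHeight;      // vertical amplitude of the waves

     // xy of the lookup is the sheet position, z is time, so the heightmap
     // animates smoothly: no popping, waves grow and shrink continuously.
     float4 WaveVS(float4 pos : POSITION) : SV_POSITION
     {
         float3 coord = float3(pos.xz * waveScale, time * 0.05f);
         pos.y += noiseTex.SampleLevel(samNoise, coord, 0).r * waveHeight;
         return mul(pos, worldViewProj);
     }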
  3. cppcdr

    Simple, dumb, reflection

     Ok, sorry about misunderstanding your question. My best guess would be to fire a ray from the eye to each bottom vertex and reflect it based on the normal. Then do a ray-plane intersection test against a plane placed at the top position in your drawing. The intersection point gives you your texture coordinate (you must of course normalize it to get it into the range of 0 to 1). If you do this, I believe the texture will be rendered correctly. If I'm wrong, post your vertex shader code so I can take a look.
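     Roughly, the math would look like this (untested sketch; planeY is the height of the top plane in your drawing, and texScale is whatever scaling gets the result into the 0 to 1 range):

     float3 eyePos;     // camera position in world space
     float planeY;      // height of the plane the reflected ray should hit
     float2 texScale;   // world-to-texture scaling so the result lands in [0, 1]

     float2 ReflectedTexCoord(float3 worldPos, float3 normal)
     {
         // Ray from the eye to the vertex, reflected about the surface normal
         float3 dir = reflect(normalize(worldPos - eyePos), normal);

         // Intersect the reflected ray with the horizontal plane y = planeY
         // (you would want to guard against dir.y being zero or negative)
         float t = (planeY - worldPos.y) / dir.y;
         float3 hit = worldPos + t * dir;

         // Normalize into the 0-1 texture range
         return hit.xz * texScale;
     }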
  4. cppcdr

    Simple, dumb, reflection

     Look up planar reflections on Google or GameDev, I think that's what you want. There are lots of articles, but this one gave me all the help I wanted: http://www.riemers.net/eng/Tutorials/XNA/Csharp/series4.php It describes how to create a mirror surface (water). There were some minor errors, but I can't remember where. I'll give you a small rundown of the procedure to render reflections:
     1 - Render everything that will be reflected to a texture, using an inverted (mirrored) camera.
     2 - Find the projected texture coordinates of each vertex of the reflecting surface.
     3 - Render the texture from (1) with those texture coordinates on the reflecting surface.
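     For step 2, the projected texture coordinates boil down to something like this (sketch; reflViewProj is the view * projection matrix of the inverted camera from step 1):

     matrix reflViewProj;   // view * projection of the mirrored camera

     // Projects a world-space vertex into the reflection texture.
     float2 ProjectedTexCoord(float4 worldPos)
     {
         float4 clip = mul(worldPos, reflViewProj);

         // Perspective divide, then remap [-1, 1] clip space to [0, 1] texture
         // space (y is flipped because texture coordinates grow downwards)
         float2 ndc = clip.xy / clip.w;
         return float2(0.5f * ndc.x + 0.5f, -0.5f * ndc.y + 0.5f);
     }

     (In practice you would pass clip down to the pixel shader and do the divide per pixel, so the interpolation is perspective-correct.)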
  5. cppcdr

    GForce 8600 sli1 OpenGL issues

     The only thing I can think of is that either the card is not meant to be used in that manner, or you don't have the latest drivers from NVIDIA (many people forget to update regularly).
  6. cppcdr

    VTF to do terrain rendering

     Perhaps, but I thought the number of triangles a geometry shader can output is limited, and that geometry shaders are slow in comparison to the other shader stages, which would keep the maximum size of the caves small. Please correct me if I'm wrong.
  7. cppcdr

    VTF to do terrain rendering

     Well, in terms of overhangs or caves, what I do is replace the patch that has that feature with a 3D model. The adjacent patches automatically adapt to the LOD of the center model (it took a while to figure out how). As far as I know, there are no terrain methods (besides voxel terrain) that allow caves. Also, VTF is really fast on most recent graphics cards, and even if I do lose a few fps (I haven't benchmarked, so I don't really know), the trade-off in ease of use is more than worth it.
  8. cppcdr

    VTF to do terrain rendering

     No, I use vertex texture fetch along with a single VBO to render my terrain. The combination of the two saves memory and seems to render faster (because I can make use of really efficient LOD). I don't really know if there is a big speedup from VTF itself, but the fact that everything is handled in my vertex shader greatly increases the amount of flexibility I have: I never have to write to my vertex buffer again. Static VBOs are heavily optimised, while dynamic VBOs are slower, if I remember correctly, so you would get a speed boost from that if you were changing your mesh often. Oh, and I just thought of this: you could theoretically use a single terrain object to render hundreds of terrains. You just pass a texture to the render function and the vertex shader takes care of the rest. So it is even more memory efficient :)
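     The heart of it is just a handful of lines in the vertex shader (simplified, untested sketch; all the constant names are made up):

     matrix worldViewProj;
     Texture2D heightMap;    // the per-terrain height texture (made-up name)
     SamplerState samHeight;

     float2 patchOffset;     // where this instance of the shared grid sits in the world
     float patchScale;       // spacing between grid vertices for this patch's LOD
     float heightScale;      // world-space height of a white texel
     float terrainSize;      // world-space extent covered by the height map

     // The same static VBO is reused for every patch and every terrain;
     // only these constants and the height texture change between draws.
     float4 TerrainVS(float2 gridPos : POSITION) : SV_POSITION
     {
         float2 worldXZ = gridPos * patchScale + patchOffset;
         float2 uv = worldXZ / terrainSize;
         float h = heightMap.SampleLevel(samHeight, uv, 0).r * heightScale;
         return mul(float4(worldXZ.x, h, worldXZ.y, 1.0f), worldViewProj);
     }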
  9. You would first of all have to transpose your matrix. By transpose, we do not mean translate; there is a big difference. What you are doing is setting the translation to zero. Transposing a matrix is different: you exchange each element with the element at the mirrored position, i.e. row and column swapped. For example, the original matrix

     | m11 m12 m13 m14 |
     | m21 m22 m23 m24 |
     | m31 m32 m33 m34 |
     | m41 m42 m43 m44 |

     becomes:

     | m11 m21 m31 m41 |
     | m12 m22 m32 m42 |
     | m13 m23 m33 m43 |
     | m14 m24 m34 m44 |

     Secondly, you say that the entire world is shifted. Have you reset your view matrix between renderings? If you do not restore the normal matrix for ordinary rendering (not the billboard matrix), you will get weird results. Finally, about the DirectX sprites: I don't know if they are slow and inaccurate in DX9, because I never used them there. In DX10, they work very well; I hardly notice any frame drop from the sprites. However, if you are worried about speed and accuracy, you could create your own sprite class using simple billboarded quads with a texture.
  10. cppcdr

    VTF to do terrain rendering

     The reason I use VTF is that I can have only one vertex and index buffer and tile them, which saves memory (basically what you were saying in your second paragraph). However, I tried small patch sizes like 17x17 and 33x33 and had a hard time rendering a large mesh (4k x 4k). Making the patches larger, such as 129x129, gave a large speed increase (2.5x if I remember correctly); the bottleneck seemed to be the many draw calls that were needed. Remember, VBOs are optimized for rendering large numbers of triangles, not for repeatedly rendering small numbers. Also, you can implement LOD easily by simply having more than one index buffer (one for each level). If you want to keep things simple, keep track of the adjacent patches and, in the vertex shader, move some of the vertices on the finer patches to make sure there are no cracks (rough sketch below). This keeps you from having to make many index buffer variants for each level. Hope this answers your questions.
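     The sketch I mentioned (untested; neighborStep would be the vertex spacing of the coarser neighbour, fed in as a patch constant, and detecting which vertices lie on a shared edge is up to your patch layout):

     // One simple way to kill cracks: snap the grid coordinate of a vertex that
     // lies on a boundary shared with a coarser patch onto the coarse grid, so
     // both patches fetch the height at exactly the same spot.
     float2 SnapToCoarseGrid(float2 gridPos, float neighborStep)
     {
         return floor(gridPos / neighborStep + 0.5f) * neighborStep;
     }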
  11. I suggest that you look at billboarding. Billboarding uses a matrix to transform the quad so that it always faces the camera. You could also use sprite objects if you are using DirectX (I don't know if OpenGL has this feature, but it probably does). In DirectX the sprites automatically face the camera (once again, OpenGL probably acts the same way, but don't quote me on that). Hope this helps.
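     In shader terms, billboarding boils down to something like this (sketch; camRight and camUp are the camera's right and up vectors, which you would extract from the view matrix on the CPU):

     matrix viewProj;
     float size;        // half-width of the quad
     float3 camRight;   // camera right vector, taken from the view matrix
     float3 camUp;      // camera up vector, taken from the view matrix

     // Expands a quad corner around its centre so the quad always faces
     // the camera. corner is one of (-1,-1), (1,-1), (-1,1), (1,1).
     float4 BillboardVS(float3 center : POSITION, float2 corner : TEXCOORD0) : SV_POSITION
     {
         float3 worldPos = center + (corner.x * camRight + corner.y * camUp) * size;
         return mul(float4(worldPos, 1.0f), viewProj);
     }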
  12. Hi, I am currently working on the sky rendering for my engine, based on the paper here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html I have also looked at: http://www.gamedev.net/community/forums/topic.asp?topic_id=461747&PageSize=25&WhichPage=2 I have gotten the sky to look ok, but I'm missing the sun. Maybe I'm mistaken, but shouldn't the shader automatically generate the sun? Here is my shader code:

     matrix worldViewProj;
     float3 cameraPos;
     float time;
     float dispX;
     float dispZ;

     Texture2D colorTexture;
     Texture2D starsTexture;

     static float3 v3LightDir = {0, 0, 0};      // Light direction
     static float3 v3CameraPos = {0, 10.00, 0}; // Camera's current position
     static float3 v3Wavelength = {0.650f, 0.570f, 0.475f};
     static float3 v3InvWavelength = 1 / pow(v3Wavelength, 4); // 1 / pow(wavelength, 4) for RGB channels
     static float fCameraHeight = length(v3CameraPos);
     static float fCameraHeight2 = fCameraHeight * fCameraHeight;
     static float fInnerRadius = 10.0;
     static float fInnerRadius2 = fInnerRadius * fInnerRadius;
     static float fOuterRadius = 10.25;
     static float fOuterRadius2 = fOuterRadius * fOuterRadius;

     // Scattering parameters
     static float ESun = 20.00;
     static float KrESun = 0.0025f * ESun; // Kr * ESun
     static float KmESun = 0.0010f * ESun; // Km * ESun
     static float Kr4PI = 0.0025f * 4 * 3.14159265f;
     static float Km4PI = 0.0010f * 4 * 3.14159265f;

     // Phase function
     static float g = -0.991;
     static float g2 = g * g;

     static float fScale = 4;         // 1 / (outerRadius - innerRadius) = 4 here
     static float fScaleDepth = 0.25; // Where the average atmosphere density is found
     static float fScaleOverScaleDepth = fScale / fScaleDepth; // scale / scaleDepth
     static float fSkydomeRadius = 512; // Skydome radius (allows us to normalize skydome distances etc)

     static int numSamples = 5;
     static float samples = (float)numSamples;

     // Calculates the Mie phase function
     float getMiePhase(float fCos, float fCos2, float g, float g2)
     {
         return 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
     }

     // Calculates the Rayleigh phase function
     float getRayleighPhase(float fCos2)
     {
         //return 1.0;
         return 0.75 + 0.75*fCos2;
     }

     struct PSInput
     {
         float4 Pos : SV_POSITION;
         float4 RayleighColor : COLOR0;
         float4 MieColor : COLOR1;
         float3 Direction : TEXCOORD0;
     };

     SamplerState samLinear
     {
         Filter = MIN_MAG_MIP_LINEAR;
         AddressU = Wrap;
         AddressV = Clamp;
     };

     float scale(float cos)
     {
         float x = 1.0 - cos;
         return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
     }

     //
     // Vertex Shader
     //
     PSInput VS(float4 Pos : POSITION, float2 Tex : TEXCOORD)
     {
         PSInput output;

         v3LightDir.x = 0;
         v3LightDir.y = sin(time/100);
         v3LightDir.z = cos(time/100);

         // Get the ray from the camera to the vertex, and its length (far point)
         float3 v3Pos = Pos / (fSkydomeRadius) * 10.25;
         float3 v3Ray = v3Pos - v3CameraPos;
         float fFar = length(v3Ray);
         v3Ray /= fFar;

         v3Ray = v3Pos - v3CameraPos;
         fFar = length(v3Ray);
         v3Ray /= fFar;

         // Calculate the ray's starting position, then calculate its scattering offset
         float3 v3Start = v3CameraPos;
         float fHeight = length(v3Start);
         //float fDepth = exp(-fHeight/H0);
         float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
         float fStartAngle = dot(v3Ray, v3Start) / fHeight;
         float fStartOffset = fDepth * scale(fStartAngle);

         // Init loop variables
         float fSampleLength = fFar / samples;
         float fScaledLength = fSampleLength * fScale;
         float3 v3SampleRay = v3Ray * fSampleLength;
         float3 v3SamplePoint = v3Start + v3SampleRay * 0.5f;

         // Loop the ray
         float3 color = {0, 0, 0};
         for (int i = 0; i < numSamples; i++)
         {
             float fHeight = length(v3SamplePoint);
             //fDepth = exp(-fHeight/H0);
             fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
             float3 v3Up = {0, 1, 0};
             float fLightAngle = dot(v3LightDir, v3SamplePoint) / fHeight;
             float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
             float fScatter = fStartOffset + (fDepth*(scale(fLightAngle) - scale(fCameraAngle)));
             float3 v3Attenuate = exp(-fScatter * (v3InvWavelength * Kr4PI + Km4PI));

             // Accumulate color
             v3Attenuate *= (fDepth * fScaledLength);
             color += v3Attenuate;

             // Next sample point
             v3SamplePoint += v3SampleRay;
         }

         output.RayleighColor.xyz = color * v3InvWavelength * KrESun;
         output.RayleighColor.w = 1.0f;
         output.MieColor.xyz = color * KmESun;
         output.MieColor.w = 1.0f;
         output.Direction = v3CameraPos - v3Pos;
         output.Pos = mul(Pos, worldViewProj);

         return output;
     }

     //
     // Pixel Shader
     //
     float4 PS(PSInput input) : SV_Target
     {
         float fCos = dot(v3LightDir, input.Direction) / length(input.Direction);
         float fCos2 = fCos*fCos;
         return input.RayleighColor * getRayleighPhase(fCos2) + input.MieColor * getMiePhase(fCos, fCos2, g, g2);
     }

     technique10 Render
     {
         pass P0
         {
             SetVertexShader( CompileShader( vs_4_0, VS() ) );
             SetGeometryShader( NULL );
             SetPixelShader( CompileShader( ps_4_0, PS() ) );
         }
     }

     I've gone through it many times and can't find anything weird about it. Maybe someone can point out what is wrong?
  13. cppcdr

    trouble with rendertarget...

     Sorry, devronious, I can't help with MDX because I have no experience with it. However, it seems that in your code you have dev.RenderState.ZBufferEnable = false; Could it be that this is causing the problem? Also, check the return value of this.renderTarget = new Microsoft.DirectX.Direct3D.Texture. Perhaps you are not creating the render target at all. Another thing to check would be to run through in debug mode and see if DrawIndexedPrimitives is actually called; you have some weird code before it that may be blocking the call.
  14. So if I understand correctly, you are trying to get per-face lighting using per-vertex lighting? If that is the case, then I guess the best way to do it would be to create some extra vertices (36 total) and calculate the normals afterwards based on the triangle information (from the indices). If you are actually trying to get one normal per vertex, you could do it this way: create temporary extra vertices (3 per triangle) and calculate the face normal on each one (so 36 verts, 6 per quad because there are 2 tris, and 12 face normals). Then iterate through each real vertex (8 in this case) and average the normals of the temporary vertices that have the same position. In other words, you average the face normals of every triangle that touches the vertex. It might not be the best way to do it, but I found that it worked great on my models. If the method I described is not clear, tell me and I'll re-explain tomorrow... I'm tired right now, so I don't know if what I am writing makes any sense :)
  15. cppcdr

    directinput questions

     You could have a boolean flag that indicates whether the user has released the key. For example, you would first ask the user for the key. Once you receive the keydown message, you set the boolean flag to false, meaning the key is pressed. Then, once the key is released, you set the flag to true. You only process the input when the boolean flag is true. Pseudocode:

     CheckKeyPress()
     {
         if (key1 is down)
         {
             keyNumber = 1
             bIsReleased = false
         }
     }

     CheckKeyRelease()
     {
         if (key1 is up)
             bIsReleased = true
     }

     In the game loop, you would have:

     CheckKeyPress()
     CheckKeyRelease()
     if (bIsReleased and keyNumber != -1)
     {
         // Do your processing of the key number here
         ProcessNumber()
         keyNumber = -1   // Clear the key press so it is only handled once
     }