majorbrot

Members

  • Content count: 17
  • Community Reputation: 910 Good
  • Rank: Member
  1. You have to be careful with the order of multiplications. The book uses OpenTK, an OpenGL wrapper for .NET, while you are using XNA with DirectX. In the vertex shader you previously had output.Position = mul(<vector>, <matrix>), but now it is mul(<matrix>, <vector>), so you reversed the order. Try changing this and see what happens. There are currently some threads around dealing with the differences; reading those should make it clear (a short sketch of both conventions follows below). Edit: Here are two links: http://www.gamedev.net/topic/655253-eliminating-opengldirectx-differences/   http://www.gamedev.net/topic/655147-oh-no-not-this-topic-again-lh-vs-rh/
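     A minimal HLSL sketch of the two conventions (WorldViewProj is an assumed combined matrix; the argument order must match how the matrix is laid out in memory):
     [CODE]
     // Row-vector convention, common in XNA/DirectX effect code:
     output.Position = mul(input.Position, WorldViewProj);

     // Column-vector convention, as in many OpenGL/OpenTK-style examples
     // (the matrix must be transposed relative to the row-vector case):
     output.Position = mul(WorldViewProj, input.Position);
     [/CODE]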
  2. The simplest thing you can do is use SpriteSortMode.BackToFront; a SpriteBatch.Begin() overload takes it as a parameter. In the Draw() call you can pass a layerDepth (a value between 0 and 1, with 0 being in front). So to assign a layerDepth, do something like layerDepth = (screenHeight - objPos.Y) / screenHeight. I didn't test it, but this or something similar should do the job without that manual sorting (see the sketch below).
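     A small C# sketch of this, assuming objects with Position and Texture members (hypothetical names):
     [CODE]
     spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
     foreach (var obj in objects)
     {
         // Objects lower on screen (larger Y) get a smaller layerDepth,
         // so BackToFront draws them in front.
         float layerDepth = (screenHeight - obj.Position.Y) / screenHeight;
         spriteBatch.Draw(obj.Texture, obj.Position, null, Color.White,
                          0f, Vector2.Zero, 1f, SpriteEffects.None, layerDepth);
     }
     spriteBatch.End();
     [/CODE]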
  3. Loading content efficiently in XNA?

    If 2 models point to the same texture, it should only be loaded once. To access the textures loaded with a model you have to iterate through its meshes and mesh parts; this way you can also grab diffuse colors etc. To handle multiple textures/colors in one model, you can write a simple MeshTag class that contains all the data you need from the BasicEffect, and store it in part.Tag. Something like this should do the job:
    [CODE]
    foreach (ModelMesh mesh in model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            MeshTag tag = new MeshTag();
            tag.Texture = ((BasicEffect)part.Effect).Texture;
            tag.Color = ((BasicEffect)part.Effect).DiffuseColor;
            part.Tag = tag; // Tag is declared as object, so no cast is needed
            // additionally you can set a custom effect here
        }
    [/CODE]
    When trying to access the MeshTag data, don't forget the typecast. Hope that helps, major
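    A minimal MeshTag container could look like this (a sketch; the fields are just the ones used above):
    [CODE]
    public class MeshTag
    {
        public Texture2D Texture;   // diffuse texture from the BasicEffect
        public Vector3 Color;       // BasicEffect.DiffuseColor is a Vector3
        public Effect CustomEffect; // optional replacement effect
    }
    [/CODE]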
  4. The cube-to-sphere transformation is a good starting point. All you have to do is create a cube with tessellated sides and normalize (vertexPosition - planetCenter) in the vertex shader. Multiply this by the planet radius and you're done (sketched below). Regarding C#/XNA: it's possible, have a look at this: http://www.gamedev.net/blog/1302/entry-2250847-bow-shock-a-summary-of-work-done-so-far/ it should give you a great overview of things that can be done.
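     A minimal HLSL vertex-shader sketch, assuming PlanetCenter, PlanetRadius and a WorldViewProjection matrix as effect parameters:
     [CODE]
     float3 PlanetCenter;
     float PlanetRadius;
     float4x4 WorldViewProjection;

     VertexShaderOutput VS(VertexShaderInput input)
     {
         VertexShaderOutput output;
         // Push the tessellated cube vertex out onto the sphere.
         float3 dir = normalize(input.Position.xyz - PlanetCenter);
         float4 spherePos = float4(PlanetCenter + dir * PlanetRadius, 1.0f);
         output.Position = mul(spherePos, WorldViewProjection);
         return output;
     }
     [/CODE]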
  5. [D3D]A problem about pick sequence!

    What about putting all intersecting meshes in a vector, with a pointer to the mesh or some other identifier plus the distance? Then you can iterate through the vector and keep the shortest distance together with its identifier.
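    A quick C# sketch of the idea (Mesh, scene and Intersect are hypothetical placeholders for your own types and ray test):
    [CODE]
    var hits = new List<Tuple<Mesh, float>>();
    foreach (Mesh mesh in scene.Meshes)
    {
        float? dist = Intersect(pickRay, mesh); // assumed ray/mesh test
        if (dist.HasValue)
            hits.Add(Tuple.Create(mesh, dist.Value));
    }
    // The closest hit wins the pick.
    Mesh picked = hits.OrderBy(h => h.Item2).Select(h => h.Item1).FirstOrDefault();
    [/CODE]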
  6. When filling the gBuffer, depth buffer read/write/testing has to be enabled. In the compose pass it should be turned off, because you're just drawing a full-screen quad, meaning the whole screen should be covered and a depth test would only cause problems (see the sketch below). Did you have a look at your render targets? Before combining, does your color target look right? And did you test both methods?
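     In XNA 4.0 terms, the state changes would look roughly like this (DrawGBuffer/DrawFullScreenQuad are placeholders):
     [CODE]
     // G-buffer pass: depth read/write/test enabled.
     device.DepthStencilState = DepthStencilState.Default;
     DrawGBuffer();

     // Compose pass: depth disabled for the full-screen quad.
     device.DepthStencilState = DepthStencilState.None;
     DrawFullScreenQuad();
     [/CODE]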
  7. Yeah, the skybox shader reads the stencil map and uses clip() to discard the pixels with red = 1 (I think it is clip(0.1 - stencil.r)). And depth is no problem, you just have to disable depth testing, because the stencil map already "knows" where the skybox has to be drawn and where it's occluded. Edit: Here is the full order:
     (- disable depth read/write)
     - clear depth buffer to max depth
     - clear gBuffer
     - enable depth read/write
     - fill gBuffer with terrain/models/...
     - disable depth read/write
     - draw lights
     - combine color + lightmap
     - draw skybox
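     A sketch of that skybox pixel shader (StencilSampler, SkyCubeSampler and the input fields are assumed names):
     [CODE]
     float4 SkyboxPS(SkyVSOutput input) : COLOR0
     {
         // Screen-space lookup into the self-made stencil target.
         float stencil = tex2D(StencilSampler, input.ScreenUV).r;
         clip(0.1f - stencil); // discard wherever geometry wrote red = 1
         return texCUBE(SkyCubeSampler, input.Direction);
     }
     [/CODE]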
  8. I implemented deferred rendering with a skybox in XNA. There I just use a "self-made" stencil buffer (XNA discards the stencil buffer when switching the render target), so when rendering the gBuffer I have an extra target, setting the red channel to 1 everywhere something is drawn. After composing the final image I render the skybox where the extra target has a red value of zero. It's really simple, and you can use the unused channels for other data (material IDs or whatever).
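     Binding the extra target is a one-liner in XNA 4.0 (the target names are placeholders):
     [CODE]
     // Bind the G-buffer plus the extra "stencil" target as MRT outputs.
     device.SetRenderTargets(colorTarget, normalTarget, depthTarget, stencilTarget);
     [/CODE]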
  9. VertexBuffer not setting properly?

    It's just a wrong order. Try this:
    - set effect params
    - call Pass.Apply()
    - Draw()
    And you don't have to call Present, as this is done in base.Draw().
    Edit: It's enough to set the params once per frame, then Apply() and draw two or more times if all params stay the same. Otherwise you can update only the values that change per object, Apply() again and draw.
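    In code, the order would look something like this (parameter and buffer names are placeholders):
    [CODE]
    effect.Parameters["World"].SetValue(world); // 1. set effect params
    effect.CurrentTechnique.Passes[0].Apply();  // 2. commit them to the device
    device.SetVertexBuffer(vertexBuffer);
    device.DrawPrimitives(PrimitiveType.TriangleList, 0, triangleCount); // 3. draw
    // No Present() needed here; base.Draw() presents the frame.
    [/CODE]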
  10. The light calculations are all done in world space, so I have a vector for the directional light that is not transformed in any way, and the normals from the terrain aren't either, they're just passed from the vertex to the pixel shader. With the NdotL thing, you mean that technically the sunlight points up? Then that seems to be wrong, I thought it would just be "the right direction". With normal calculation I mean calculating the vertex normals of the terrain. Right now they are calculated when the terrain is created, recalculated when it changes, and then passed as vertex data. My question was whether it is possible to calculate them in the vertex shader. If I remember correctly, I read somewhere that this could be done, but I don't know how, because you don't have access to the neighboring vertices. Thanks, all this stuff makes me feel so newbish again... Edit: I think the calculation in the vertex shader could be done if I had a heightmap. But that's missing, so I see no chance besides generating one. EditEdit: Got the normal mapping working, looks good to me: [img]http://img215.imageshack.us/img215/3678/dsnobug.png[/img] thank you again, it helped me so much.
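      For reference, the usual world-space NdotL convention looks like this (a sketch; LightDirection is assumed to point from the sun toward the scene, which is why it gets negated):
      [CODE]
      float3 n = normalize(input.Normal);              // world-space normal
      float NdotL = saturate(dot(n, -LightDirection)); // flip so it points at the sun
      float3 diffuse = LightColor * NdotL;
      [/CODE]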
  11. I think so too... I just commented out the normal-map stuff; for now it's working, although it seems a bit hacky, because I have to take the negative normal from the vertex. So I'll try figuring out what that is, then I can go a step further and try the normal mapping. For now all normal calculations are done on the CPU, but is there an easy way of doing it on the GPU? I'm relatively new to this, so there's a lot to learn ;) Thanks a lot, you really helped me out.
  12. That is because I output the normals in a texture to look them up in a separate effect. I can't store values in [-1,1] range in a texture, so I transform them, and bring them back into [-1,1] in the other effect. Or am I missing something? Edit: The render target format is Color, so it can only store values in [0,1] range.
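      The pack/unpack pair described here is just (sampler and field names assumed):
      [CODE]
      // When writing the G-buffer: [-1,1] -> [0,1]
      output.Normal.rgb = 0.5f * (normal + 1.0f);

      // When reading in the lighting effect: [0,1] -> [-1,1]
      float3 normal = tex2D(NormalSampler, input.UV).rgb * 2.0f - 1.0f;
      [/CODE]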
  13. Reading your post, it makes sense, but after looking over my shaders I can't find anything wrong. Is it right that the mistake has to be in the terrain shader? The PointLightShader was copied from the sample for testing, so I know it works. And the half point light is already in the lightmap, so the combineEffect can't be the problem. Here's the code for my terrain shader:
     [CODE]
     PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) : COLOR0
     {
         PixelShaderOutput output;

         //+++++++++++++++++++++++++++++++++++
         //++++ C O L O R - S E C T I O N ++++
         //+++++++++++++++++++++++++++++++++++
         //Get color values from the textures and calculate the final texture color
         float3 rTex = tex2D(RTextureSampler, input.UV * rTile);
         float3 gTex = tex2D(GTextureSampler, input.UV * gTile);
         float3 bTex = tex2D(BTextureSampler, input.UV * bTile);
         float3 aTex = tex2D(ATextureSampler, input.UV * aTile);
         float3 baseTex = tex2D(BaseTextureSampler, input.UV * BaseTile);

         float3 baseWeight = clamp(1.0f - input.Weights.x - input.Weights.y - input.Weights.z - input.Weights.w, 0, 1);
         float3 texColor = baseWeight * baseTex;
         texColor += input.Weights.x * rTex + input.Weights.y * gTex + input.Weights.z * bTex + input.Weights.w * aTex;
         output.Color.rgb = texColor;
         output.Color.a = specularIntensity;

         //+++++++++++++++++++++++++++++++++++
         //+++ N O R M A L - S E C T I O N +++
         //+++++++++++++++++++++++++++++++++++
         //Process the vertex normal and bring it into [0,1] range
         float3 vnormal = 0.5f * (input.Normal + 1.0f);
         normalize(vnormal);

         //Get normals from the normal maps (already in [0,1] range)
         float3 baseNorm = tex2D(NormalSampler, input.UV * BaseTile);  // * 2.0 - 1.0;
         float3 rNorm = tex2D(rNormalSampler, input.UV * rTile).rgb;   // * 2.0 - 1.0;
         float3 gNorm = tex2D(gNormalSampler, input.UV * gTile).rgb;   // * 2.0 - 1.0;
         float3 bNorm = tex2D(bNormalSampler, input.UV * bTile).rgb;   // * 2.0 - 1.0;
         float3 aNorm = tex2D(aNormalSampler, input.UV * aTile).rgb;   // * 2.0 - 1.0;

         float3 normal = normalize(baseWeight * baseNorm);
         //Add the vertex and texture normals together
         normal = normalize((normal + input.Weights.x * rNorm + input.Weights.y * gNorm + input.Weights.z * bNorm + input.Weights.w * aNorm) + vnormal);
         output.Normal.rgb = normal;  //0.5f * (normal + 1.0f);
         output.Normal.a = specularPower;

         //+++++++++++++++++++++++++++++++++++
         //++++ D E P T H - S E C T I O N ++++
         //+++++++++++++++++++++++++++++++++++
         //Depth is VertexShaderOutput.Position.z & .w
         output.Depth = input.Depth.x / input.Depth.y;
         output.Stencil = float4(1.0f, 0, 0, 1.0f);

         return output;
     }
     [/CODE]
     Is there something wrong with it? =/ Thank you all, major
  14. You're right, that's the way I do it, too. I played a lot with DepthStencilStates. Either you see no lighting at all or that half sphere. I even tried rotating the model, but it always looks the same. The distance from the light to the camera and the view direction don't change anything... No idea what could be wrong. Thanks for your replies.
  15. Yes, other lighting works as expected too. The terrain is editable, so there shouldn't be a problem with the normals, I would have noticed it before.