JMab

  1. Unwanted blending

    I love this place! I occasionally come here with a problem I think is obscure and difficult, and an expert like you, kauna, points out that the answer is simple and obvious!

    My render target format is DXGI_FORMAT_R8G8B8A8_UNORM_SRGB. The rest of my engine renders in linear space, and I had been trying to integrate the Nvidia OceanCS sample into my engine without even thinking of transforming its gamma-space colors to linear. Thanks!

    Voila!

    Edit: I should add, I'm not quite finished though - the color is close but not quite right (maybe I need to do some other conversions to linear space earlier as well), there are some white triangle artifacts in the top-left side of the ocean, and it still needs to be integrated with my real-time sky component, but it's getting close!
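    For anyone else hitting this: the missing step is just the standard sRGB decode. A minimal sketch in C++ (the helper name is mine, not from the OceanCS sample) of the per-channel conversion to apply to the sample's gamma-space colors before they enter a linear pipeline:

[code]#include <cmath>

// Convert one sRGB (gamma-space) channel in [0,1] to linear space.
// Standard piecewise sRGB decode; apply per color channel.
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}
[/code]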
  2. Hello all,

     I'm having what I think is an odd problem. Here's what an example pixel is doing in the Output Merger stage:

     The final triangle strip outputs a color from the pixel shader of (0.27, 0.30, 0.43, 1), which is what I want as the final color, however it is getting blended with the previous value to give a lighter shade.

     I thought blending in the Output Merger stage was completely controlled by the blend state. This is how my blend state is set:

     With BlendEnable and LogicOpEnable both FALSE, I thought that no blending would be performed. However, I'm getting this unwanted blending. Any idea how to stop it?

     Cheers,
     John
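     In case the screenshot doesn't come through, this is roughly how that state is created (sketched with the D3D11.1 desc so LogicOpEnable is visible; 'device1' and 'context' are assumed to be my ID3D11Device1 and immediate context):

[code]// Blend state with blending and logic ops both disabled.
// Remaining fields are the API defaults; with BlendEnable and
// LogicOpEnable FALSE the pixel shader output should simply overwrite
// the render target, subject only to RenderTargetWriteMask.
D3D11_BLEND_DESC1 desc = {};
desc.RenderTarget[0].BlendEnable           = FALSE;
desc.RenderTarget[0].LogicOpEnable         = FALSE;
desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlend             = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].LogicOp               = D3D11_LOGIC_OP_NOOP;
desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState1* blendState = nullptr;
HRESULT hr = device1->CreateBlendState1(&desc, &blendState);

// Bind with a null blend factor and the default sample mask.
context->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);
[/code]

     (As it turned out in item 1 above, the culprit wasn't the blend state at all, but the _SRGB render target format.)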
  3. I agree that pre- vs. post-increment doesn't matter with regard to the output here.

     I've always defaulted to pre-increment in C++ for loops, as I understand a temporary is needed for the post-increment operator "under the hood", though I'd imagine any optimizing compiler worth its salt should be able to produce just as efficient assembly either way.
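     To illustrate where that temporary comes from, a toy sketch of the canonical operator pair (not code from my engine):

[code]struct Counter
{
    int value = 0;

    // Pre-increment: modify in place, return *this; no temporary needed.
    Counter& operator++()
    {
        ++value;
        return *this;
    }

    // Post-increment: copy the old state, increment, return the copy.
    Counter operator++(int)
    {
        Counter old = *this;
        ++value;
        return old;
    }
};
[/code]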
  4. Bilinear filtering yes, mipmaps no! When I started generating specular maps in ShaderMap2 I was remembering to generate mipmaps in the DX Tex Tool, but I'd forgotten that step recently. Generating mipmaps for the specular map solved the problem!

     Thanks very much, MJP - much appreciated.
  5. Hi D3D experts,

     I've recently tried to move from a single specular intensity value to using a specular map, and it doesn't look good.

     This is what a single specular intensity value looks like:

     Pretty smooth, as you'd expect. This is what it looks like after I try to control the specular intensity with a specular map:

     Not good. Here's what the specular map looks like:

     And here's my HLSL (old code commented out). Specular color was sampled from the specular map. I just noticed that I needlessly sample into a float3 and then only use the x value, but I think that makes no difference:

     [code]// Blinn specular.
toEye = normalize(toEye);
float3 halfway = normalize(toEye + toLight);
float nDotH = saturate(dot(halfway, material.Normal));
//finalColor += pow(nDotH, material.SpecularPower) * material.SpecularIntensity;
finalColor += pow(nDotH, material.SpecularPower) * material.SpecularColor.x;
[/code]

     Is there any way I can get smooth specular highlights again whilst gaining the ability to only apply the highlights to certain parts of the textured model?
  6. Sure, I think I'll need to cull shadow-generating lights reasonably aggressively - I've had that as a TODO in my code for a while, and I'm assuming that culling will be straightforward, given that my scenes will typically be heavy only on point lights, where simple sphere/frustum intersections (see the sketch below) can discard non-scene-affecting lights quickly.

     I'm allowing lights to be flagged as non-shadow-generating, and models as non-shadow-casting and/or non-shadow-receiving, so there are those possibilities for performance optimization.

     It does make me wonder whether I'll really get the benefit out of my deferred shading path, though - it's all well and good to be able to light the scene with dozens to hundreds of non-shadow-generating lights, but as soon as you implement shadowing, you're brought back down to Earth!
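     The sphere/frustum test I have in mind is the usual plane-distance check; a sketch with assumed Plane/Sphere types (plane normals pointing into the frustum):

[code]struct Plane  { float nx, ny, nz, d; };          // normalized, facing inward
struct Sphere { float cx, cy, cz, radius; };

// Conservative sphere-vs-frustum test: reject as soon as the sphere lies
// entirely behind one plane, i.e. the light volume cannot affect the view.
bool SphereIntersectsFrustum(const Sphere& s, const Plane (&planes)[6])
{
    for (const Plane& p : planes)
    {
        float dist = p.nx * s.cx + p.ny * s.cy + p.nz * s.cz + p.d;
        if (dist < -s.radius)
            return false;
    }
    return true; // intersecting or fully inside
}
[/code]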
  7. OK, that makes sense - compile a list of all of my scene camera and shadow-generating light frusta, then use that list when traversing the scene, storing the results in bit flags for querying later in the frame (roughly as sketched below). This means you only have to traverse the scene once, which would be a slight performance improvement over traversing for each frustum.

     I'll give it a go without an acceleration structure - good practice not to try to prematurely optimize!
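     Roughly what I'm picturing, reusing the Plane/Sphere/SphereIntersectsFrustum sketch above and assuming no more than 32 frusta per frame (a Frustum here is just 6 planes):

[code]#include <cstdint>
#include <vector>

struct Frustum { Plane planes[6]; };

// One pass over the scene: one visibility word per entity, where bit i
// set means the entity's bounds touch frusta[i] (camera or light).
void ClassifyVisibility(const std::vector<Sphere>& entityBounds,
                        const std::vector<Frustum>& frusta,
                        std::vector<uint32_t>& visibility)
{
    visibility.assign(entityBounds.size(), 0);
    for (size_t e = 0; e < entityBounds.size(); ++e)
        for (size_t f = 0; f < frusta.size(); ++f)
            if (SphereIntersectsFrustum(entityBounds[e], frusta[f].planes))
                visibility[e] |= 1u << f;
}
[/code]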
  8. OK, thanks Tiago. I can see that is the perfect general approach.

     A lot of frustum culling, though! I've been following an "only implement it when I need it" approach to this engine/game development, and had been getting by with frustum culling the entire list of scene entities against the scene camera. However, given that I'll need 6 additional lists per point light and 1 per spotlight, I'm going to have to implement a spatial subdivision structure, maybe a loose octree, to minimize culling work.

     Thanks again!
  9. Hello all,

     Looking for some collected wisdom on the best strategy for this issue.

     I'm performing simple frustum culling of models, and also shadow mapping combos of point/spot lights and models. That all works fine. However, there is one problem I've just noticed. When I move the camera to a position in the scene where a shadow-casting model becomes frustum culled, even though its shadow should still be seen within the frustum, the model is removed from the render queue and no shadow mapping is performed for that model.

     I'm thinking that the naive general approach would be to maintain a 2nd list of render commands for shadow-casting models that have not been frustum culled (although the list would probably still be reduced by some other scene visibility approach). Or I could go for a specific approach and let the artists tag shadow casters as no-cull when the camera is within a user-defined zone.

     Is there a clever general approach to this? Should I be calculating the volumes of the lights (I can do this in shaders already for debugging), calculating which models intersect those volumes, and stopping them from being frustum culled (see the sketch below for the point-light case)? Is that the best general approach?

     Thanks in advance.
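     For the point-light case, the volume test I'm imagining is just sphere-vs-sphere between a model's bounds and the light's radius (types assumed; a spotlight would want a cone test or its bounding sphere instead):

[code]struct Sphere { float cx, cy, cz, radius; };

// Keep a model in a point light's shadow pass if its bounding sphere
// overlaps the light volume, regardless of camera-frustum culling.
bool CastsIntoLight(const Sphere& modelBounds, const Sphere& lightVolume)
{
    float dx = modelBounds.cx - lightVolume.cx;
    float dy = modelBounds.cy - lightVolume.cy;
    float dz = modelBounds.cz - lightVolume.cz;
    float r  = modelBounds.radius + lightVolume.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
[/code]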
  10. I have Assimp and SDKMesh (for DXSDK sample media) importers, and plan on creating another importer utilising the FBX SDK shortly to support morph targets (Assimp doesn't do morph targets currently).

      I like Assimp. It provides much more than just support for a number of formats for getting data into your engine. It also has a wide range of asset "conditioners": calculating tangents/bitangents, joining/optimizing vertices, removing redundant data, converting spherical and other mappings to standard UVs, ensuring mesh instancing is correct, generating smooth normals, splitting large meshes, triangulation, conversion to other coordinate systems, etc. (see the import sketch below).

      If I want to use or test an asset I haven't created, importing it via Assimp is a good first step for getting it clean.
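      A minimal sketch of how those conditioners are requested as post-process flags at import time (this flag set is illustrative, not my exact configuration):

[code]#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>

// Import a model and run a batch of Assimp's post-process "conditioners".
// Returns nullptr on failure; importer.GetErrorString() has the details.
const aiScene* ImportModel(Assimp::Importer& importer, const char* path)
{
    return importer.ReadFile(path,
        aiProcess_Triangulate              | // polygons -> triangles
        aiProcess_CalcTangentSpace         | // tangents/bitangents
        aiProcess_GenSmoothNormals         | // smooth normals where missing
        aiProcess_JoinIdenticalVertices    | // proper indexed meshes
        aiProcess_RemoveRedundantMaterials |
        aiProcess_GenUVCoords              | // spherical etc. -> standard UVs
        aiProcess_SplitLargeMeshes         |
        aiProcess_FindInstances            | // detect duplicated meshes
        aiProcess_ConvertToLeftHanded);      // D3D-style coordinate system
}
[/code]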
  11. I should add, on the Model/Mesh/SubMesh debate, that I don't think my current structure is future proof, as it doesn't consider Level of Detail (LOD). This structure makes sense to me, but each to their own:

      Model - A collection of Meshes. Linked to by the actor/gameObject/entity.

      Mesh - A collection of SubMeshes. Each Mesh represents a different LOD level of the Model.

      SubMesh - A shaders/material/vertex buffer/index buffer set for a particular LOD level.
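      In code, that hierarchy might look like the following (handle types and the LOD-switch policy are assumptions):

[code]#include <cstdint>
#include <vector>

using BufferHandle   = uint32_t; // opaque resource-manager handles (assumed)
using MaterialHandle = uint32_t;

// One shaders/material/vertex buffer/index buffer set for one LOD level.
struct SubMesh
{
    BufferHandle   vertexBuffer;
    BufferHandle   indexBuffer;
    MaterialHandle material;
};

// One LOD level of the Model.
struct Mesh
{
    float                lodDistance; // camera distance at which this LOD applies
    std::vector<SubMesh> subMeshes;
};

// A collection of Meshes; linked to by the actor/gameObject/entity.
struct Model
{
    std::vector<Mesh> lods;
};
[/code]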
  12. I can explain how this works in my engine currently.

      Considering only model rendering, my resource manager manages vertex shaders, pixel shaders, models, materials and textures.

      A model is a collection of 1-to-many meshes. Each mesh is a subset of the model with exactly 1 material. Each mesh contains a vertex buffer, index buffer, material handle and render type.

      The render type instructs the render queue how to render the mesh. The most common render type is opaque - opaque meshes end up on the standard forward or deferred rendering path. It could also be alpha-clipped, semi-transparent or an overlay; these are sorted to be rendered last, always via forward rendering.

      My scene is a collection of entities. Entities may contain other entities. Each entity is a collection of 1-to-many components. One type of component is a model component. When rendering the scene, the model component handles the OnRender event by constructing a render command for each mesh within the model. The render command contains the render type, material handle, depth (of the model from the camera), model handle, mesh index and a pointer to the owning component (and therefore, indirectly, the entity). This render command is submitted to the render queue.

      Once the entire visibility-culled scene has had a chance to add render commands to the render queue, the render queue is instructed to quicksort the queue. It does this by having all relevant render command information tightly packed (1-byte packing) into 3 uint32_t's.

      The structure looks like this:

      So the queue sorts by the priority: Render Type - Material ID - Model ID - Mesh Index - Depth.

      When rendering the queue:

      A change in render type instructs the render queue to activate the appropriate vertex and pixel shaders.

      A change in material instructs the mesh to set the material parameters in the per-material constant buffer and set the material textures.

      A change in model instructs the mesh to set the model parameters in the per-model constant buffer (e.g. transformation matrices).

      A change in mesh index instructs the mesh to activate the mesh's vertex and index buffers.

      The mesh is then ready to render.
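      I won't reproduce my exact struct here, but to sketch the packing idea with illustrative field widths (a single 64-bit key rather than my 3 uint32_t's): higher-priority fields go in higher bits, so sorting keys as plain integers yields the priority order above.

[code]#include <cstdint>

// Pack the sort fields so a plain integer compare orders the queue by
// RenderType, then MaterialID, ModelID, MeshIndex and quantized Depth.
uint64_t MakeSortKey(uint8_t renderType, uint16_t materialId,
                     uint16_t modelId, uint8_t meshIndex, uint16_t depth)
{
    return (uint64_t(renderType) << 56) |
           (uint64_t(materialId) << 40) |
           (uint64_t(modelId)    << 24) |
           (uint64_t(meshIndex)  << 16) |
            uint64_t(depth);
}
[/code]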
  13. I've built Assimp 3.1.1 (the latest) on Windows 8 using Visual Studio 2013 Express for Windows Desktop.

      I'm no CMake expert, but I just downloaded the latest version of that, pointed it at the root of the Assimp 3.1.1 source directory, and it managed to generate solution and project files for VS 2013.

      I briefly confused myself by pointing CMake at the \Code subdirectory, but once corrected, it all worked fine.

      I already had the last DirectX SDK release (June 2010) installed on my machine, so perhaps that helped it go smoothly.

      The only bit of advice I can give as a CMake newbie is to completely clear its cache if you stuff something up, as incorrect settings will hang around otherwise. I think I may even have paranoidly uninstalled and reinstalled it after realising I was pointing at the wrong Assimp directory.

      This post has reminded me how fun it is to say and type Assimp. So, thank you.
  14. Hi CC Ricers,

      Don't worry, I'm doing the same as you: re-writing my DX9 engine in DX11 on my laptop with a Radeon Mobility HD3650, using Frank Luna's DX11 book to help the upgrade. All I've needed to do with his code so far is change the minimum feature level to D3D10 (see the sketch below) and change the effect files to compile the shaders as SM4.0 rather than SM5.0. I'm relatively early in the book in implementing all the demos, however they all work fine so far.

      Cheers,
      JMab
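      The feature-level part amounts to passing an array that includes the 10.x levels at device creation; a stripped-down sketch:

[code]#include <d3d11.h>

// Ask for 11_0 first, but allow fallback to 10.x hardware like the HD3650.
const D3D_FEATURE_LEVEL requested[] =
{
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
};

ID3D11Device*        device   = nullptr;
ID3D11DeviceContext* context  = nullptr;
D3D_FEATURE_LEVEL    achieved = {};

HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
    &device, &achieved, &context);
// When 'achieved' comes back as 10_x, shaders need compiling with the
// vs_4_0/ps_4_0 profiles rather than SM5.0.
[/code]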
  15. OK, I've got the basics of the automated bone volume generation working (I had some help from another forum). Here's the code, in case anyone is interested:

      [code]// Decompose the bone head world matrix.
XMVECTOR boneHeadScale, boneHeadRotation, boneHeadPosition;
if (!XMMatrixDecompose(&boneHeadScale, &boneHeadRotation, &boneHeadPosition, worldMatrix))
{
    Log::Warn(L"Failed to decompose bone world matrix.");
    continue;
}

// Create the mid-point in local space.
XMVECTOR midPoint = XMVectorSet(0.0f, m_Bones[i].HalfLength, 0.0f, 0.0f);

// Transform the mid-point to world space.
midPoint = XMVector3TransformNormal(midPoint, worldMatrix) + boneHeadPosition;

// Construct the bounding volume world matrix.
worldMatrix = XMMatrixTransformation(XMVectorZero(), XMQuaternionIdentity(), boneHeadScale,
                                     XMVectorZero(), boneHeadRotation, midPoint);
[/code]

      The ragdoll now looks like this:

      [img]http://img13.imageshack.us/img13/1330/ragdollv.jpg[/img]

      I was thinking of letting the artist define the bone volume "width" and "height" in the 3D editor, however I'm now thinking I should be able to automate this too - I should be able to just do min/max calculations on all of the vertices in each bone's vertex group to calculate the volume (roughly as sketched below). Has anyone tried this before, or should I just let the artist define the rest?
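      Here's roughly what I mean by that min/max pass, assuming I can get each bone's vertex group as a list of bone-local positions (types and names are placeholders):

[code]#include <DirectXMath.h>
#include <cfloat>
#include <vector>
using namespace DirectX;

// Fit an axis-aligned extent around the vertices weighted to one bone;
// the half-extents give the volume's width/height/depth automatically.
XMFLOAT3 ComputeBoneHalfExtents(const std::vector<XMFLOAT3>& boneLocalVerts)
{
    XMVECTOR vMin = XMVectorReplicate(FLT_MAX);
    XMVECTOR vMax = XMVectorReplicate(-FLT_MAX);
    for (const XMFLOAT3& v : boneLocalVerts)
    {
        XMVECTOR p = XMLoadFloat3(&v);
        vMin = XMVectorMin(vMin, p);
        vMax = XMVectorMax(vMax, p);
    }
    XMFLOAT3 halfExtents;
    XMStoreFloat3(&halfExtents,
                  XMVectorScale(XMVectorSubtract(vMax, vMin), 0.5f));
    return halfExtents;
}
[/code]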