About turanszkij

Personal Information


  • Twitter
  • Github

Recent Profile Visitors

7270 profile views
  1. turanszkij

    Draw just to get stencil buffer updated

    Should be handled by the driver, but another thing to consider: if you still have a pixel shader that wants to write output while no render target is bound, the DX debug layer will start spamming warning messages, so it's still a good idea to have a null or void PS. You can suppress the warning messages, but I wouldn't recommend it. It might also be interesting to compare which operation is the cheapest and which is the most expensive. I believe from cheapest to most expensive it would be: set pixel shader < set blend state < set render target
  2. turanszkij

    Draw just to get stencil buffer updated

    Setting the renderTargetWriteMask to 0 is a good solution, as Koen said. You can also just bind a null pixel shader, or have your pixel shader output nothing by giving it a void return type - that way you can still do alpha testing, for example.
  3. turanszkij

    Race conditions between thread groups

    You will have to use atomic operations for this. The performance penalty depends on the access pattern - how often the atomic is accessed - and it usually means some small added latency in the shader, which might be acceptable for you, but you have to test it. If the value you are writing is a single counter, you can also use append/consume buffers, which come with a hidden counter that might be implemented in hardware a bit differently from atomic operations on a UAV, and thus be faster.
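    To illustrate why the atomic is needed, here is a CPU-side C++ analogy (names made up, `std::atomic` standing in for `InterlockedAdd` on a UAV in HLSL): many threads reserve unique slots in a shared buffer through one shared counter, and no writes are lost.

    ```cpp
    #include <atomic>
    #include <thread>
    #include <vector>

    // CPU-side analogy of the HLSL pattern: every thread reserves a unique
    // slot in a shared buffer through a single atomic counter.
    int runAtomicAppend(int numGroups, int writesPerGroup)
    {
        std::atomic<int> counter{0};
        std::vector<int> output(numGroups * writesPerGroup, -1);

        auto group = [&](int groupId) {
            for (int i = 0; i < writesPerGroup; ++i) {
                // fetch_add returns the previous value, reserving a unique slot,
                // just like InterlockedAdd(dest, 1, originalValue) in HLSL
                int slot = counter.fetch_add(1);
                output[slot] = groupId;
            }
        };

        std::vector<std::thread> threads;
        for (int g = 0; g < numGroups; ++g) threads.emplace_back(group, g);
        for (auto& t : threads) t.join();
        return counter.load(); // equals numGroups * writesPerGroup: no lost writes
    }
    ```

    Without the atomic, two threads could read the same counter value and overwrite each other's slot - exactly the race between thread groups discussed above.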
  4. In ray tracing, precision loss can also come from the ray being too far from the world origin, so a constant epsilon offset no longer works correctly. There is an interesting article on lightmapping that mentions this and suggests the following workaround: position += sign(normal) * abs(position * 0.0000002) So instead of a constant epsilon, the epsilon increases further away from the origin. Although I remember that it didn't work well for me when I tried it, so I kept the constant epsilon.
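    A per-component C++ sketch of that formula (the constant is the one quoted above; the function name is made up) shows the behavior: near the origin the offset is essentially zero, while far away it grows with the position's magnitude.

    ```cpp
    #include <cmath>

    // Adaptive ray offset, one component at a time: the epsilon scales with
    // the distance from the origin, so far-away surfaces still get a large
    // enough offset to avoid self-intersection.
    float offsetComponent(float position, float normal)
    {
        // sign(normal), as in the HLSL formula above
        float s = (normal > 0.0f) ? 1.0f : ((normal < 0.0f) ? -1.0f : 0.0f);
        return position + s * std::fabs(position * 0.0000002f);
    }
    ```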
  5. Hi, I have experience with both Havok and Bullet physics. Both are easy to integrate and not dependent on platform or graphics API, and both support rigid and soft bodies. Bullet is completely open source under a permissive (zlib) license, while for Havok you have to buy a license and get no access to its source, just precompiled libraries (I tried it several years ago, when it had a free demo for students). You can visit the Bullet forums, which are helpful, but with Havok you will likely get private support, simply because you paid for it. Havok has better debugging tools out of the box. For instance, you get a standalone debug application that visualizes the physics world if you connect it to your game, lets you pick objects with the mouse, etc. Very helpful. With Bullet, you can use its OpenGL debug visualizer (I never tried it), or you can overload callback functions and do your own debug rendering. Right now I am using Bullet physics in my engine and am fairly satisfied with it (although I mainly just use simple stuff). I would recommend trying Bullet first, because it's free. You can take a look at my Bullet physics wrapper, which is less than 500 lines now and supports rigid bodies (sphere, box, convex mesh, concave mesh) and soft bodies (cloth, or any kind of mesh):
  6. turanszkij

    DirectX 11 Device context queston.

    I actually tried it on multiple AMD GPUs, one of them being an RX 470 if I remember correctly. But that was more than 2 years ago; maybe they have caught up by now. I can't check on AMD right now, but I checked on my Intel integrated 620 and that doesn't support it either.
  7. turanszkij

    DirectX 11 Device context queston.

    Before you do any big rewrite using deferred contexts, I warn you that AMD GPUs don't support them (last I checked). Also, the DX11 implementation of these can't be very efficient, because resource dependencies can't be declared in the API, so the driver will serialize your command lists when you call ExecuteCommandList and validate dependencies between them anyway. I'm not saying that there is no gain from this; I think some games used it to success (Civilization V, afaik). Another method would be to have your own command list implementation, which would generate DX11 commands in the end, when you submit it. This would be a bigger task to implement, but it would work on Nvidia, AMD, etc... Anyway, you can Map on deferred contexts with the WRITE_DISCARD and NO_OVERWRITE flags, if I remember correctly, and you can use UpdateSubresource() too. But you can't read back data with MAP_READ, or read query results.
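    The "own command list" idea can be sketched in a few lines of C++ (class name made up; the recorded lambdas are stand-ins for real ID3D11DeviceContext calls): worker threads record closures, and the main thread replays them on the immediate context in order.

    ```cpp
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Minimal hand-rolled command list: record on any thread, replay on the
    // thread that owns the immediate context.
    class MyCommandList
    {
    public:
        void record(std::function<void()> command)
        {
            commands.push_back(std::move(command));
        }

        // Replay in recorded order; returns how many commands ran.
        std::size_t execute()
        {
            std::size_t count = commands.size();
            for (auto& c : commands) c();
            commands.clear();
            return count;
        }

    private:
        std::vector<std::function<void()>> commands;
    };
    ```

    A real version would record typed commands (set state, draw, map/update) instead of closures, but the submission model is the same.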
  8. The fact that it works on your Nvidia GPU while DX11 gives an error means that a GPU vendor is not required to implement sampling from this format. I wouldn't rely on it. If you really need to use it, you can access a pixel directly with the bracket operator, like this:

    Texture2D<float3> myTexture;
    // ...
    float3 color = myTexture[uint2(128, 256)]; // load the pixel at coordinate (128, 256)

    Then you can do linear filtering in shader code.
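    To show what "do linear filtering in shader code" means, here is a CPU-side C++ sketch of the same math (single-channel texture for brevity; the four array reads correspond to the bracket-operator loads you would do in the shader):

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Manual bilinear filter: fetch the four nearest texels and blend them.
    // uv in [0,1] mapped straight onto the texel grid (texel-center convention
    // omitted for brevity).
    float sampleBilinear(const std::vector<float>& texels, int width, int height,
                         float u, float v)
    {
        float x = u * (width - 1);
        float y = v * (height - 1);
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        int x1 = std::min(x0 + 1, width - 1);
        int y1 = std::min(y0 + 1, height - 1);
        float fx = x - x0, fy = y - y0;

        float c00 = texels[y0 * width + x0]; // myTexture[uint2(x0, y0)], etc.
        float c10 = texels[y0 * width + x1];
        float c01 = texels[y1 * width + x0];
        float c11 = texels[y1 * width + x1];

        float top    = c00 + (c10 - c00) * fx; // lerp along x
        float bottom = c01 + (c11 - c01) * fx;
        return top + (bottom - top) * fy;      // lerp along y
    }
    ```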
  9. turanszkij

    Beginning developing

    It's great to hear I'm not the only one. That is not true in my experience, but you need to think long term and keep at it. My general advice would be to keep programming on the side, and one day you will be good. But don't just wait for it: you need to practice, practice, practice, and apply for jobs - that's how you will know when you are good enough. Also try to keep a balance, and practice/pursue other interests. For me, these are drawing and martial arts (tae kwon-do) at the moment. Both of those can also be mastered with a lot of practice, and they really help me stay motivated and not burn out on programming.
  10. Yes, this is usually also called "ray unprojection" from screen (projected) space to world space. I took that piece from my "getPositionFromdDepthbuffer" helper function.
  11. You should be able to multiply the screen-space position by the inverseViewProjection matrix, divide by .w, subtract from the camera position, and finally normalize. I haven't tested this now, but it should look something like this:

    float4 worldPos = mul(float4(screenPos.xy, 1, 1), inverseViewProjection);
    worldPos /= worldPos.w;
    float3 eyeVec = normalize(cameraPos - worldPos.xyz);

    I think this is what you are looking for. But as ChuckNovice already mentioned, the frustum corners solution would be faster to compute.
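    The same unprojection math, hand-rolled in C++ for checking on the CPU (row-vector * matrix, matching HLSL's mul(v, M) with row-major storage; function and struct names are made up):

    ```cpp
    // Minimal float4 and row-vector * 4x4 matrix multiply.
    struct Float4 { float x, y, z, w; };

    Float4 mulRowVector(Float4 v, const float m[16])
    {
        return { v.x * m[0] + v.y * m[4] + v.z * m[8]  + v.w * m[12],
                 v.x * m[1] + v.y * m[5] + v.z * m[9]  + v.w * m[13],
                 v.x * m[2] + v.y * m[6] + v.z * m[10] + v.w * m[14],
                 v.x * m[3] + v.y * m[7] + v.z * m[11] + v.w * m[15] };
    }

    // screenPos.xy in clip space, z = 1 (far plane), w = 1,
    // then the perspective divide by w.
    Float4 unprojectToWorld(float sx, float sy, const float inverseViewProjection[16])
    {
        Float4 p = mulRowVector({ sx, sy, 1.0f, 1.0f }, inverseViewProjection);
        return { p.x / p.w, p.y / p.w, p.z / p.w, 1.0f };
    }
    ```

    With the identity matrix in place of the real inverseViewProjection, the input point should come back unchanged, which is a handy sanity check.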
  12. I don't have time to look over all the code here, but I'll just leave a suggestion: familiarize yourself with RenderDoc, Nvidia Nsight, or even the Visual Studio graphics debugger. It will save you in the long run. Probably some of your GPU-side data is stale, and you can inspect all of it with a graphics debugger.
  13. turanszkij

    CopyResource with BC7 texture

    I think it should be fine when copying to/from the same format, but once you need to copy anything to a compressed format that is not the same format, you are out of luck! Or you must convert it on the CPU first. Maybe you could convert it in a compute shader, write it out to a linear buffer, then upload that to the compressed-format texture, but I never tried this in DX11.
  14. I would rewrite that bit of code you linked just a little bit:

    while (!gameOver)
    {
        // Process AI.
        for (int i = 0; i < aiComponents.Count(); i++)
        {
            aiComponents[i].update();
        }

        // Update physics.
        for (int i = 0; i < physicsComponents.Count(); i++)
        {
            physicsComponents[i].update();
        }

        // Draw to screen.
        for (int i = 0; i < renderComponents.Count(); i++)
        {
            renderComponents[i].render();
        }

        // Other game loop machinery for timing...
    }

    (Previously the loops iterated numEntities times.) Now you don't assume that each entity has all the components; you just update however many components there are. To access one component from another, I have the component manager know which entity a given component belongs to, so I can query another component manager by that entity ID. Then I either get back a component or nothing, depending on whether the entity registered a component of that type for itself or not. But that said, I am still in the process of developing this, so I might not have full insight.
  15. No, this is describing a system where an entity is just a number. Then we also have component managers/collections for every possible component type. We can add a component to a component manager with an entity ID, and that means the entity received that component. The component manager itself can decide how it wants to implement looking up a component by entity ID. You don't have a "GameEntity with every possible component" - only if you registered the entity with every possible component manager. The "for (int i = 0; i < numEntities; i++)" loops iterate over a specific component manager of your choosing, which will go through every component, regardless of which entity owns it.
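    A quick C++ sketch of that scheme (all names here are illustrative, not from a specific library): an entity is just an ID, and each component type lives in its own manager that maps entity ID to component.

    ```cpp
    #include <unordered_map>

    using Entity = unsigned int;

    // One manager per component type; lookup by entity ID can be implemented
    // however the manager likes - a hash map is the simplest choice.
    template <typename T>
    struct ComponentManager
    {
        std::unordered_map<Entity, T> components; // iterate this in the update loop

        void add(Entity e, const T& c) { components[e] = c; }

        // nullptr if the entity never registered this component type
        T* lookup(Entity e)
        {
            auto it = components.find(e);
            return it == components.end() ? nullptr : &it->second;
        }
    };

    struct Position { float x, y; };
    struct Velocity { float dx, dy; };
    ```

    An entity registered only with the Position manager simply returns nothing from the Velocity manager's lookup - no "god object" holding every possible component.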