About turanszkij

  1. Hi, I have experience with both Havok and Bullet physics. Both are easy to integrate, are not tied to a platform or graphics API, and both support rigid and soft bodies. Bullet is completely open source under the permissive zlib license, while for Havok you have to buy a license and you get no access to its source, just precompiled libraries (I tried it several years ago, when it had a free demo for students). You can visit the Bullet forums, which are helpful, but with Havok you will likely get private support, simply because you paid for it.

     Havok has better debugging tools out of the box. For instance, you get a standalone debug application that visualizes the physics world if you connect it to your game, lets you pick and drag objects with the mouse, etc. Very helpful. With Bullet, you can use its OpenGL debug visualizer (I never tried it), or you can overload callback functions and do your own debug rendering.

     Right now I am using Bullet in my engine and am fairly satisfied with it (although I mainly just use the simple stuff). I would recommend trying Bullet first, because it's free. You can take a look at my Bullet physics wrapper, which is less than 500 lines now and supports rigid bodies (sphere, box, convex mesh, concave mesh) and soft bodies (cloth, or any kind of mesh): https://github.com/turanszkij/WickedEngine/blob/master/WickedEngine/wiPhysicsEngine_Bullet.cpp
  2. turanszkij

    DirectX 11 Device context question.

    I actually tried it on multiple AMD GPUs, one of them being an RX470 if I remember correctly. But that was more than 2 years ago, so maybe they have caught up by now. I can't check on AMD right now, but I checked on my integrated Intel HD 620 and that also doesn't support it.
  3. turanszkij

    DirectX 11 Device context question.

    Before you do any big rewrite using deferred contexts, I warn you that AMD GPUs don't support driver command lists (last I checked). Also, the DX11 implementation of these can't be very efficient, because resource dependencies can't be declared in the API, so the driver will serialize your command lists when you call ExecuteCommandList and validate dependencies between them anyway. I'm not saying that there is no gain from this; I think some games used it to success (Civilization V, afaik). Another method would be to have your own command list implementation that generates the actual DX11 commands when you submit it. This is a bigger task to implement, but it works on Nvidia, AMD, etc. Anyway, you can Map on deferred contexts with the WRITE_DISCARD and NO_OVERWRITE flags if I remember correctly, and you can use UpdateSubresource() too. But you can't read back data with MAP_READ, or read query results.
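    The "own command list implementation" idea can be sketched in plain C++: worker threads record commands into a software command list, and the main thread replays them on the immediate context at submit time. The ImmediateContext type below is a hypothetical stand-in for the real ID3D11DeviceContext calls, just to show the recording/replay structure.

    ```cpp
    #include <functional>
    #include <vector>

    // Hypothetical stand-in for the real immediate context (e.g. ID3D11DeviceContext).
    struct ImmediateContext {
        int drawCalls = 0;
        void Draw(int vertexCount) { (void)vertexCount; drawCalls += 1; }
    };

    // A software command list: worker threads record into it,
    // the main thread replays it on the immediate context.
    class CommandList {
    public:
        void RecordDraw(int vertexCount) {
            // Capture the parameters now, execute the real API call later.
            commands.push_back([=](ImmediateContext& ctx) { ctx.Draw(vertexCount); });
        }
        // Replay all recorded commands (call this from the main thread only).
        void Submit(ImmediateContext& ctx) {
            for (auto& cmd : commands) cmd(ctx);
            commands.clear();
        }
    private:
        std::vector<std::function<void(ImmediateContext&)>> commands;
    };
    ```

    The win over DX11 deferred contexts is that you control the recording cost and it behaves the same on every vendor; the trade-off is that you pay the replay cost on the main thread.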
  4. The fact that it works on your Nvidia GPU while DX11 gives an error means that GPU vendors are not required to support sampling from this format. I wouldn't rely on it. If you really need to use it, you can access a pixel directly with the bracket operator like this:

        Texture2D<float3> myTexture;
        // ...
        float3 color = myTexture[uint2(128, 256)]; // load the pixel at coordinate (128, 256)

     Then you can do the linear filtering yourself in shader code.
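    Doing the linear filtering yourself just means loading the four neighboring texels and blending them by the fractional coordinates. Here is a rough C++ sketch of that bilinear math (the texel layout and function name are made up for illustration); the same arithmetic translates directly to HLSL with four bracket-operator loads.

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Bilinear filter over a w x h single-channel texel grid, with uv in [0, 1].
    // Mirrors what you would write in a shader using four myTexture[uint2(...)] loads.
    float SampleBilinear(const std::vector<float>& texels, int w, int h, float u, float v) {
        float x = u * (w - 1), y = v * (h - 1);
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
        float fx = x - x0, fy = y - y0; // fractional position between texels
        auto at = [&](int xi, int yi) { return texels[yi * w + xi]; };
        float top    = at(x0, y0) * (1 - fx) + at(x1, y0) * fx;
        float bottom = at(x0, y1) * (1 - fx) + at(x1, y1) * fx;
        return top * (1 - fy) + bottom * fy;
    }
    ```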
  5. turanszkij

    Beginning developing

    It's great to hear I'm not the only one. That is not true in my experience, but you need to think long term and keep at it. My general advice would be to keep programming on the side, and one day you will be good. But don't just wait for it: you need to practice, practice, practice, and apply for jobs; that's how you will know when you are good enough. Also try to keep a balance and practice/pursue other interests. For me, this is drawing and martial arts (tae kwon-do) at the moment. Both of these can also be mastered with a lot of practice, and they really help me stay motivated and not burn out on programming.
  6. Yes, this is usually also called "ray unprojection" from screen (projected) space to world space. I also took that piece from my "getPositionFromdDepthbuffer" helper function.
  7. You should be able to multiply the screen-space position by the inverseViewProjection matrix, divide by .w, subtract from the camera position, and finally normalize. I haven't tested this just now, but it should look something like this:

        float4 worldPos = mul(float4(screenPos.xy, 1, 1), inverseViewProjection);
        worldPos.xyz /= worldPos.w;
        float3 eyeVec = normalize(cameraPos - worldPos.xyz);

     I think this is what you are looking for. But as ChuckNovice already mentioned, the frustum corners solution would be faster to compute.
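    For reference, the same unprojection math can be sketched in plain C++. The matrix convention follows HLSL's mul(rowVector, matrix); all type and function names here are made up for illustration.

    ```cpp
    #include <cmath>

    struct Float3 { float x, y, z; };
    struct Float4 { float x, y, z, w; };

    // mul(v, M): row vector times row-major 4x4 matrix, matching the HLSL convention.
    Float4 Mul(const Float4& v, const float m[4][4]) {
        return {
            v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0] + v.w * m[3][0],
            v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1] + v.w * m[3][1],
            v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2] + v.w * m[3][2],
            v.x * m[0][3] + v.y * m[1][3] + v.z * m[2][3] + v.w * m[3][3],
        };
    }

    Float3 Normalize(Float3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Unproject a clip-space position on the far plane (z = 1) back to world space,
    // then build the normalized vector from that point toward the camera.
    Float3 EyeVector(float screenX, float screenY,
                     const float invViewProj[4][4], Float3 cameraPos) {
        Float4 worldPos = Mul({ screenX, screenY, 1.0f, 1.0f }, invViewProj);
        worldPos.x /= worldPos.w; worldPos.y /= worldPos.w; worldPos.z /= worldPos.w;
        return Normalize({ cameraPos.x - worldPos.x,
                           cameraPos.y - worldPos.y,
                           cameraPos.z - worldPos.z });
    }
    ```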
  8. turanszkij

    How to do Cascaded Shadow Map

    It would be best if you said where exactly you are stuck, but here is a quick general guide: it means that you place (for example) 3 shadow maps in front of the camera, each farther away from the previous one and covering a bigger area. You can do it sort of like shooting a ray from the camera in the direction the camera is facing, then choosing 3 points on that ray. For each point, place a shadow map big enough to cover what you need. It would be best to compute the sizes programmatically, but I suggest just hard coding them first. The view matrix of a cascade is the inverse of the cascade's scale * rotation * translation matrix; combine it with an orthographic projection (for a directional light). Render a shadow map using that matrix. I won't detail how to render a shadow map; you should already be familiar with that if you are doing cascaded shadows.
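    The "choose points along the camera ray and place a big enough shadow map at each" step might be sketched like this in C++. The hard-coded distances and sizes are arbitrary placeholders, in the spirit of hard coding them first; a real engine would derive them from frustum splits.

    ```cpp
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Cascade {
        Vec3 center;      // where this shadow map is centered in world space
        float halfExtent; // half-size of the orthographic projection box
    };

    // Place `count` cascades along the camera's forward ray, each one farther
    // away than the previous and covering a bigger area.
    std::vector<Cascade> PlaceCascades(Vec3 camPos, Vec3 camDir, int count) {
        std::vector<Cascade> cascades;
        float distance = 10.0f; // hard-coded starting distance (placeholder)
        float extent = 20.0f;   // hard-coded starting size (placeholder)
        for (int i = 0; i < count; ++i) {
            Cascade c;
            c.center = { camPos.x + camDir.x * distance,
                         camPos.y + camDir.y * distance,
                         camPos.z + camDir.z * distance };
            c.halfExtent = extent;
            cascades.push_back(c);
            distance *= 3.0f; // each cascade is farther away...
            extent *= 3.0f;   // ...and covers a bigger area
        }
        return cascades;
    }
    ```

    From each cascade you would then build the light's view matrix (the inverse of the cascade's placement transform) and an ortho projection of size 2 * halfExtent.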
  9. turanszkij

    How to do Cascaded Shadow Map

    This should give you a good starting point: https://docs.microsoft.com/en-us/windows/desktop/DxTechArts/cascaded-shadow-maps
  10. I don't have time to look over all the code here, but I'll just leave a suggestion: familiarize yourself with RenderDoc, Nvidia Nsight, or even the Visual Studio Graphics Debugger. It will save you time in the long run. Probably some of your GPU-side data is stale, and you can inspect all of it with a graphics debugger.
  11. turanszkij

    CopyResource with BC7 texture

    I think it should be fine when copying between the same formats, but once you need to copy anything to a compressed format that is not the same format, you are out of luck! Or you must convert it on the CPU first. Maybe you could convert it in a compute shader, write it out to a linear buffer, then upload that to the compressed-format texture, but I never tried this in DX11.
  12. I would rewrite that bit of code you linked just a little bit:

        while (!gameOver)
        {
            // Process AI.
            for (int i = 0; i < aiComponents.Count(); i++)
            {
                aiComponents[i].update();
            }

            // Update physics.
            for (int i = 0; i < physicsComponents.Count(); i++)
            {
                physicsComponents[i].update();
            }

            // Draw to screen.
            for (int i = 0; i < renderComponents.Count(); i++)
            {
                renderComponents[i].render();
            }

            // Other game loop machinery for timing...
        }

     (Previously the loops iterated numEntities times.) Now you don't assume that each entity has all the components; you just update however many components there are. For accessing one component from another, I have the component manager know which entity a given component belongs to, so I can query another component manager by that entity ID. I then either get back a component or nothing, depending on whether the entity had registered a component of that type. But that said, I am still in the process of developing this, so I might not have full insight yet.
  13. No, this is describing a system where an entity is just a number. Then we also have component managers/collections for every possible component type. We can add a component to a component manager with an entity ID, which means that the entity received that component. The component manager itself can decide how it wants to implement looking up a component by entity ID. You don't have a "GameEntity with every possible component", unless you registered the entity with every possible component manager. The "for (int i = 0; i < numEntities; i++)" loops iterate over a specific component manager of your choosing, which visits every component of that type, regardless of which entity owns it.
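    A minimal sketch of such a component manager, assuming an entity is a plain integer ID (all names here are illustrative, not from any particular engine): components are stored contiguously for cache-friendly iteration, and a lookup table maps entity ID to array index.

    ```cpp
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using Entity = uint32_t; // an entity is just a number

    // Stores all components of one type contiguously and remembers
    // which entity owns each one.
    template <typename Component>
    class ComponentManager {
    public:
        // Register a component for this entity and return it for initialization.
        Component& Create(Entity entity) {
            lookup[entity] = components.size();
            components.push_back(Component{});
            owners.push_back(entity);
            return components.back();
        }
        // Returns the entity's component, or nullptr if none was registered here.
        Component* GetComponent(Entity entity) {
            auto it = lookup.find(entity);
            return it == lookup.end() ? nullptr : &components[it->second];
        }
        size_t GetCount() const { return components.size(); }
        Entity GetEntity(size_t index) const { return owners[index]; }
        Component& operator[](size_t index) { return components[index]; }
    private:
        std::vector<Component> components;         // packed, iterated linearly
        std::vector<Entity> owners;                // owners[i] owns components[i]
        std::unordered_map<Entity, size_t> lookup; // entity -> index
    };
    ```

    The game loop iterates a manager with operator[] over GetCount() entries, while cross-component access goes through GetComponent(entity), which may return nothing.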
  14. I have just started experimenting with ECS too. I want to address both Data-Oriented Design and the Entity-Component System. I really like the pattern with no Entity object, where entities are just unique numbers. The following article describes both of these goals in detail: http://gameprogrammingpatterns.com/data-locality.html
  15. I'm not completely sure that I understood the question. Your vertex buffer input will not be clamped. The output from the vertex shader to the pixel shader will be clipped outside the [-1, 1] range for SV_Position, because it specifies clip-space positions. The TEXCOORD semantic has no special meaning here, so no extra clamping will be done.