ChuckNovice

Member
  • Content count

    27
  • Joined

  • Last visited

Community Reputation

108 Neutral

About ChuckNovice

  • Rank
    Member

Personal Information

  • Industry Role
    Programmer
  • Interests
    Programming

Recent Profile Visitors

940 profile views
  1. ChuckNovice

    Multiple Game Loops?

    Pausing all the game loops and keeping only one active is no different from having a single loop handle everything, so I don't see what you would gain from doing this. "Multiple" game loops are more commonly used in editors such as 3ds Max to display multiple viewports, and even there it's usually just one loop that dispatches the work evenly across all the viewports.

    Your effort would be much better invested in something like a cache and a level-of-detail system. For example, you could initially load a very low-resolution version of your textures / 3D models and display that first while the fully detailed version is loading. You could also cache levels that have been loaded previously: when switching between your characters, don't automatically release all the level resources unless you really need to free memory. That way everything remains loaded if you switch back to that character (see the sketch below).

    We could point you in a better direction if you can tell us exactly which part of your level loading needs to be optimized. Is it resource loading from disk? Is it procedural level generation, where the logic of building the level is the bottleneck?
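
    Here is a minimal sketch of that caching idea in C#. The Level class and the least-recently-used eviction policy are placeholders I made up for illustration; substitute your own resource types and whatever policy fits your game:

        using System;
        using System.Collections.Generic;

        // Hypothetical stand-in for whatever a loaded level actually holds.
        public class Level
        {
            public string Id;
            public Level(string id) { Id = id; }
            public void Load() { /* load textures/models from disk here */ }
            public void Unload() { /* release CPU/GPU resources here */ }
        }

        public class LevelCache
        {
            private readonly Dictionary<string, Level> _resident = new Dictionary<string, Level>();
            private readonly List<string> _useOrder = new List<string>(); // most recent last
            private readonly int _capacity;

            public LevelCache(int capacity) { _capacity = capacity; }

            public Level Get(string id)
            {
                Level level;
                if (!_resident.TryGetValue(id, out level))
                {
                    level = new Level(id);
                    level.Load();
                    _resident[id] = level;
                }

                // Track usage so we only ever evict the least recently used level.
                _useOrder.Remove(id);
                _useOrder.Add(id);

                // Only unload when we really must free memory.
                while (_resident.Count > _capacity)
                {
                    string oldest = _useOrder[0];
                    _useOrder.RemoveAt(0);
                    _resident[oldest].Unload();
                    _resident.Remove(oldest);
                }
                return level;
            }
        }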
  2. ChuckNovice

    Add floor in skybox

    Hello,

    First, there is no such thing as an infinite ground. At most, the ground will be clipped by the far plane of your camera projection, and for depth precision you usually don't want that far plane at an absurd distance. I think what you are looking for is a simple fog function: in your pixel shader, blend each pixel toward a fog color the closer it gets to the far clip plane. The fog color can either be a manual value that matches your skybox, or you can sample the color from the horizon of your cubemap. The ground and objects will then appear to fade into the skybox in the distance (the blend math is sketched below).

    A skybox is supposed to be centered around the camera. I've never heard of a skybox moving up or down. Could you detail that part of the question a little more, please?
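
    The fog blend mentioned above is just a saturate and a lerp. Here it is in C# for illustration; the same math translates directly to HLSL, and the start/end distances are made up:

        using System;

        public static class Fog
        {
            // Linear fog factor: 0 before fogStart, 1 at fogEnd
            // (fogEnd would typically sit near the far clip plane).
            public static float Factor(float viewDepth, float fogStart, float fogEnd)
            {
                float t = (viewDepth - fogStart) / (fogEnd - fogStart);
                return Math.Max(0.0f, Math.Min(1.0f, t)); // saturate
            }

            // Blend one channel of the lit color toward the fog color (lerp).
            public static float Blend(float lit, float fog, float factor)
            {
                return lit + (fog - lit) * factor;
            }
        }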
  3. I omitted that part in my previous comment because I prefer letting someone more experienced than me answer that one. I haven't used immediate mode for my draw calls in almost 10 years.
  4. This part is not necessarily true. Drawing 3D first and then 2D causes a little bit of overdraw wherever the rasterized pixels of the 2D part hide what's behind them. In my project I draw the 2D first, to both my color buffer and my stencil buffer, and then draw the 3D part only where the stencil has not been written. This saves the heavy shading of the 3D scene on pixels that would never be displayed anyway, but it can introduce problems if you use post-processing effects such as bloom/blur or anything else that makes pixels bleed into their neighbors. The two stencil states involved are sketched below.

     I handle the skybox in pretty much the same manner: at the last stage of my deferred pipeline I sample and write the skybox color wherever neither the 3D nor the 2D has written anything to the stencil buffer. (I don't rasterize a box/dome around the camera the common old way.) Knowing that, I never need to clear my render targets when a skybox is bound to my scene, since every pixel is guaranteed to be overwritten, and that saves a lot of precious bandwidth.
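
     In SharpDX terms, the two depth-stencil states could look roughly like this. This is a sketch of the idea rather than my exact code; the stencil reference values (1 for the 2D pass, 0 for the 3D pass) are illustrative:

         using SharpDX.Direct3D11;

         public static class StencilStates
         {
             // 2D/UI pass: always pass, write the stencil ref value (1).
             public static DepthStencilState CreateUiState(Device device)
             {
                 var desc = new DepthStencilStateDescription
                 {
                     IsDepthEnabled = false, // the 2D pass doesn't need depth
                     DepthWriteMask = DepthWriteMask.Zero,
                     DepthComparison = Comparison.Always,
                     IsStencilEnabled = true,
                     StencilReadMask = 0xFF,
                     StencilWriteMask = 0xFF,
                     FrontFace = new DepthStencilOperationDescription
                     {
                         Comparison = Comparison.Always,
                         PassOperation = StencilOperation.Replace, // stencil = ref
                         FailOperation = StencilOperation.Keep,
                         DepthFailOperation = StencilOperation.Keep
                     }
                 };
                 desc.BackFace = desc.FrontFace;
                 return new DepthStencilState(device, desc);
             }

             // 3D pass: only shade where the stencil still equals the ref (0).
             public static DepthStencilState CreateSceneState(Device device)
             {
                 var desc = new DepthStencilStateDescription
                 {
                     IsDepthEnabled = true,
                     DepthWriteMask = DepthWriteMask.All,
                     DepthComparison = Comparison.Less,
                     IsStencilEnabled = true,
                     StencilReadMask = 0xFF,
                     StencilWriteMask = 0,
                     FrontFace = new DepthStencilOperationDescription
                     {
                         Comparison = Comparison.Equal, // pass only if stencil == ref
                         PassOperation = StencilOperation.Keep,
                         FailOperation = StencilOperation.Keep,
                         DepthFailOperation = StencilOperation.Keep
                     }
                 };
                 desc.BackFace = desc.FrontFace;
                 return new DepthStencilState(device, desc);
             }
         }

     Usage would be context.OutputMerger.SetDepthStencilState(uiState, 1) before the 2D pass, then SetDepthStencilState(sceneState, 0) before the 3D pass.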
  5. Hello Jens, Just by looking at the code, I don't see where you are populating the "indices" list. It seems to me that it is always empty.

     EDIT: Another thing. I see that every time you meet a face instruction (the "f" token) you completely discard all your arrays and create new ones to repopulate from scratch. This is really not optimal and will result in long loading times even on relatively simple models. A sketch of the appending approach is below.
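
     Something along these lines instead, where the lists only ever grow. This is a rough sketch of an OBJ-style loader that ignores normals, UVs and error handling:

         using System;
         using System.Collections.Generic;
         using System.IO;

         // Keep appending to the same lists instead of throwing the
         // arrays away on every "f" line.
         var positions = new List<float[]>();
         var indices = new List<int>();

         foreach (string line in File.ReadLines("model.obj"))
         {
             string[] tokens = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
             if (tokens.Length == 0)
                 continue;

             if (tokens[0] == "v")
             {
                 // Invariant culture so "1.5" parses the same on every machine.
                 positions.Add(new[]
                 {
                     float.Parse(tokens[1], System.Globalization.CultureInfo.InvariantCulture),
                     float.Parse(tokens[2], System.Globalization.CultureInfo.InvariantCulture),
                     float.Parse(tokens[3], System.Globalization.CultureInfo.InvariantCulture)
                 });
             }
             else if (tokens[0] == "f")
             {
                 // OBJ indices are 1-based; "1/2/3"-style tokens keep only
                 // the position index here (triangulated faces assumed).
                 for (int i = 1; i <= 3; i++)
                     indices.Add(int.Parse(tokens[i].Split('/')[0]) - 1);
             }
         }

         Console.WriteLine($"{positions.Count} vertices, {indices.Count} indices");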
  6. The orthographic matrix of your directional light needs to be translated and scaled so that it covers the entire camera frustum. Find the 8 corners of your camera frustum and transform each corner by your light's view matrix; that will give you where those 8 points are located in light space. You can then find the minimum/maximum bounds of those points and use them to build an orthographic projection that covers the whole camera frustum (see the sketch below). It took me about a week to get it right a long time ago, so it's normal if you have to mess around a bit.

     Also, depending on your requirements, a single shadow map may not have enough precision to give good results if you allow your camera to look toward the horizon. This is where a technique such as cascaded shadow maps (CSM) is useful (https://msdn.microsoft.com/en-us/library/windows/desktop/ee416307(v=vs.85).aspx).
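
     Roughly like this with the SharpDX math types. A sketch under my assumptions: D3D-style NDC (z in 0..1), an existing light view matrix, and no extra margin for casters behind the frustum:

         using SharpDX;

         public static class ShadowMath
         {
             // Fit a directional light's orthographic projection around the
             // camera frustum. The function name is mine.
             public static Matrix FitLightOrtho(Matrix cameraViewProj, Matrix lightView)
             {
                 // The 8 corners of the camera frustum, expressed in NDC space.
                 Vector3[] corners =
                 {
                     new Vector3(-1, -1, 0), new Vector3(1, -1, 0),
                     new Vector3(-1,  1, 0), new Vector3(1,  1, 0),
                     new Vector3(-1, -1, 1), new Vector3(1, -1, 1),
                     new Vector3(-1,  1, 1), new Vector3(1,  1, 1),
                 };

                 Matrix invCameraViewProj = Matrix.Invert(cameraViewProj);
                 Vector3 min = new Vector3(float.MaxValue);
                 Vector3 max = new Vector3(float.MinValue);

                 foreach (Vector3 corner in corners)
                 {
                     // NDC -> world space -> light space.
                     Vector3 world = Vector3.TransformCoordinate(corner, invCameraViewProj);
                     Vector3 light = Vector3.TransformCoordinate(world, lightView);
                     min = Vector3.Min(min, light);
                     max = Vector3.Max(max, light);
                 }

                 // An ortho box that exactly covers the frustum as seen by the light.
                 return Matrix.OrthoOffCenterLH(min.X, max.X, min.Y, max.Y, min.Z, max.Z);
             }
         }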
  7. Hi, I will let someone else answer your main question (the best way to copy from one vertex buffer to another), but this information should be very relevant to you. I've read in many places that it is common practice to create your dynamic vertex buffers with extra space at the end to avoid recreating the whole buffer for every byte you want to append. Let's say you always create your vertex buffer with 1.5x the needed size: you won't have to recreate it until your vertex data outgrows that extra room (see the sketch below). I may be wrong on this next one, but I believe I've also read somewhere that DX11 actually does something like that for us under the hood with dynamic buffers. So that's one thing you can do to reduce the number of buffer creations.
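
     The growth logic itself is tiny. A sketch, with the actual D3D11 buffer creation and re-upload left as a comment since it depends on your setup:

         using System;

         // Grow capacity by 1.5x so that most appends fit into the existing
         // buffer instead of forcing a recreation.
         public class GrowableVertexBuffer
         {
             private int _capacityInBytes;

             public void EnsureCapacity(int requiredBytes)
             {
                 if (requiredBytes <= _capacityInBytes)
                     return; // still fits; no buffer recreation needed

                 // Grow by 1.5x, or straight to the required size if bigger.
                 _capacityInBytes = Math.Max(requiredBytes, (_capacityInBytes * 3) / 2);

                 // ...release the old D3D11 buffer here, create a new one of
                 // _capacityInBytes, and re-upload the existing vertex data.
             }
         }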
  8. ChuckNovice

    DirectX12 adds a Ray Tracing API

    None yet, other than the denoising stuff they've been talking about, as you said. However, standardizing all this under an API is the first step toward making it happen. It would be hard for me to believe that in the coming years video cards won't have dedicated hardware to accelerate this in some way, or at least existing hardware improved with raytracing in mind. Whatever happens, anything that pours effort and money into this area of 3D graphics is good news to me.
  9. To be fair, I've never used Direct2D before, but I can already see that your D2D context has no knowledge of the DX11 device that you created. I found this easy little example: https://english.r2d2rigo.es/2012/07/04/basic-direct2d-drawing-with-sharpdx/ I can already see a few things you're doing differently. The example makes sense; if you follow it closely you'll get it working. For example, they create their D2D device by passing it the DXGI interface of the DX11 device, like so: SharpDX.Direct2D1.Device d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice2); The relevant steps are condensed below.
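
     Condensed from that example (a sketch; the swap-chain and target-bitmap part that follows in the article is omitted):

         using SharpDX.Direct3D;
         using SharpDX.Direct3D11;

         // The key point: the Direct2D device is created FROM the Direct3D
         // device's DXGI interface, so both APIs share one underlying device.
         // BgraSupport is required for Direct2D interop.
         var d3dDevice = new SharpDX.Direct3D11.Device(
             DriverType.Hardware, DeviceCreationFlags.BgraSupport);

         using (var dxgiDevice = d3dDevice.QueryInterface<SharpDX.DXGI.Device2>())
         {
             var d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);
             var d2dContext = new SharpDX.Direct2D1.DeviceContext(
                 d2dDevice, SharpDX.Direct2D1.DeviceContextOptions.None);
             // ... create the D2D target bitmap from your swap chain here.
         }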
  10. ChuckNovice

    DirectX12 adds a Ray Tracing API

    Raytracing has indeed existed since the '70s. The difference is that there is finally an effort from the vendors to make this area more viable: we now have a platform for it, and we know the hardware will evolve with raytracing in mind, possibly with very optimized hardware circuits for it. Now that we have an API dedicated to this, we can develop against it knowing that the vendors will improve it over time and that we won't have to rewrite everything from scratch. We are no longer on our own in this. That's what the hype is about.
  11. The IntPtr that a Device expects in its constructor is definitely not the handle of your form, and that's why you get this error. Please create your device with one of the other constructor overloads, which accept an Adapter and a few creation flags, or try the static CreateWithSwapChain method on the Device class. It's the SwapChain that needs to know about your form handle, since it is the one responsible for presenting the buffer to the specified control (see the sketch below).
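
     For example, along these lines; a sketch of the usual SharpDX setup, where the form and the mode/format values are stand-ins for your own:

         using SharpDX.Direct3D;
         using SharpDX.Direct3D11;
         using SharpDX.DXGI;
         using Device = SharpDX.Direct3D11.Device;

         var form = new System.Windows.Forms.Form(); // stand-in for your render window

         // The form handle belongs in the swap chain description, not the device.
         var desc = new SwapChainDescription
         {
             BufferCount = 1,
             ModeDescription = new ModeDescription(
                 form.ClientSize.Width, form.ClientSize.Height,
                 new Rational(60, 1), Format.R8G8B8A8_UNorm),
             IsWindowed = true,
             OutputHandle = form.Handle, // <-- the handle goes here
             SampleDescription = new SampleDescription(1, 0),
             SwapEffect = SwapEffect.Discard,
             Usage = Usage.RenderTargetOutput
         };

         Device device;
         SwapChain swapChain;
         Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None,
             desc, out device, out swapChain);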
  12. We would first need to see what happens in that GetTransform function. Secondly, we usually build the view matrix from the camera information, not the other way around, so you're confusing me a little on that part. Can you explain your scenario a little more?
  13. ChuckNovice

    DirectX12 adds a Ray Tracing API

  13. From what I've read so far, it seems the acceleration structure will all be handled by the API/vendor, with good optimizations for dynamic objects. Did anyone see anything that suggests we will still have to build that structure ourselves?
  14. ChuckNovice

    DirectX12 adds a Ray Tracing API

  14. This is major news and I can't wait to try it out. I hope we get to see those changes reflected in SharpDX not long after.
  15. ChuckNovice

    Send one color per surface to shader?

  15. If your mesh is guaranteed to use the same color for the whole draw call, you can pass the color in a constant buffer (see the sketch below). Otherwise, I think you'll have to provide the colors in the mesh as you are currently doing. Unless you implement one of the more modern material techniques, you'll be stuck with the two methods above.
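
     A sketch of the constant-buffer route with SharpDX; the struct, slot number, and helper names are mine:

         using System.Runtime.InteropServices;
         using SharpDX;
         using SharpDX.Direct3D11;
         using Buffer = SharpDX.Direct3D11.Buffer;

         // One color per draw call through a constant buffer. Constant
         // buffers must be a multiple of 16 bytes; a single float4 fits.
         [StructLayout(LayoutKind.Sequential)]
         public struct PerDrawData
         {
             public Vector4 Color; // float4 in HLSL
         }

         public static class PerDrawColor
         {
             // Created once, reused for every draw call.
             public static Buffer Create(Device device)
             {
                 return new Buffer(device, Utilities.SizeOf<PerDrawData>(),
                     ResourceUsage.Default, BindFlags.ConstantBuffer,
                     CpuAccessFlags.None, ResourceOptionFlags.None, 0);
             }

             // Called before each draw call with that draw's color.
             public static void Apply(DeviceContext context, Buffer buffer, Vector4 color)
             {
                 var data = new PerDrawData { Color = color };
                 context.UpdateSubresource(ref data, buffer);      // upload the color
                 context.PixelShader.SetConstantBuffer(0, buffer); // bind to slot b0
             }
         }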