lubbe75

Member
  • Content Count: 43
  • Joined
  • Last visited

Community Reputation

122 Neutral

1 Follower

About lubbe75

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Design
    Programming


  1. That's what I do. It's just that I take a number (for maximum buffer size) out of thin air. It feels wrong, but it works.
  2. OK, that's interesting. I have a GeForce GTX 1060 6GB, driver version 390.65, and after upgrading to driver 398.11 I still get the same results. So I'm working around the problem by creating several small upload buffers, but I don't know at what size I should cut off and start another upload buffer. Buffers that are too small are inefficient, and buffers that are too big trigger the error. For now I just cap each buffer at around 100 MB, and it works (a chunked-allocation sketch follows this list).
  3. I want to stress my application by rendering lots of polygons. I use DX12 (SharpDX) and I upload all vertices to GPU memory via an upload buffer. In normal cases everything works fine, but when I try to create a large upload buffer (CreateCommittedResource) I get a non-informative exception from SharpDX, and the device is removed. GetDeviceRemovedReason (DeviceRemovedReason in SharpDX) returns 0x887A0020: DXGI_ERROR_DRIVER_INTERNAL_ERROR. My guess is that the driver can't find a contiguous chunk of memory big enough for the buffer (the card has 6GB, and I'm currently trying to allocate a buffer smaller than 1GB). So how do I deal with this? I could create several smaller buffers instead, but I can't just wrap the CreateCommittedResource call in a try-catch block: the exception is serious enough to remove the device, so any further attempts fail after the first try. Can I somehow know beforehand what size is OK to allocate? (A budget-query sketch follows this list.)
  4. lubbe75

    SharpDX (DX12) and WPF?

    Sorry for answering myself again (I seem to be rather lonely in this quest)... The only way forward, as I see it, is to share a surface between DX12 and DX9 (or DX11): render with DX12 and present with DX9 (D3DImage in WPF). It sounds tricky enough. Any little code example of how to do this would be more than welcome (a sketch follows this list).
  5. lubbe75

    SharpDX (DX12) and WPF?

    Ah... The problem now seems to be that SwapChainPanel is not available in desktop WPF; it is for UWP applications (Windows Store apps, Xbox). So I still don't see a way forward to run DirectX 12 (with or without SharpDX) in a WPF application. Back to square one again.
  6. lubbe75

    SharpDX (DX12) and WPF?

    Thanks for explaining! I will begin with a small SharpDX example before moving on to my "beast" of code.
  7. lubbe75

    SharpDX (DX12) and WPF?

    Thanks, but SharpDX (at least the way I use it) needs to render to a RenderForm (a WinForms control). Is there a tiny example out there somewhere using SharpDX and WPF?
  8. I have a WinForms project that uses SharpDX (DirectX 12). The SharpDX library provides a RenderForm (based on System.Windows.Forms.Form). Now I need to convert the project to WPF instead. What is the best way to do this? I have seen someone pointing to a library, SharpDX.WPF on CodePlex, but according to its info it only provides support up to DX11. (Sorry if this has been asked before; the search function seems to be down at the moment.)
  9. As far as I understand, there is no real random or noise function in HLSL. I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient, and that the standard way is to use a pre-calculated noise texture, but anyway... does anyone have any quick-and-dirty HLSL shader code that fakes water normals and doesn't look too repetitive? (A sketch follows this list.)
  10. Excellent... Adding a bit of extra space at the end sounds like a good idea, as does extending when necessary by first copying the existing buffer. Thanks! Now I just need to write code that works.
  11. Can I extend a vertex buffer in DirectX 12 (actually SharpDX with DX12)? I am already set up and happily rendering along when I suddenly realise I need more vertices to render. I could of course scrap the vertex buffer and create a new, larger one, but the problem is that I have already discarded my vertex data on the CPU side to save memory. I could create another vertex buffer instead and make sure to use both, one after the other, when rendering my frames. That would be OK, but what if I need to do this extension 1000 times? I would end up with 1001 vertex buffers, and I guess that would really kill performance. What if I could simply extend my vertex buffer in place? Is there a way? Let me guess the answer: there isn't. But maybe there is a way to copy the old vertex buffer into a new, larger one? I have already used an upload buffer and copied it into my vertex buffer; can I use the same technique to copy my old vertex buffer into a new one? Does anyone have example code where this is done (a copy-based sketch follows this list)? If I can't extend the buffer in place, I guess this is what I will try. Have a great weekend!
  12. I think it's better to reduce the precision in my case, since reading from another constant buffer would probably take too long. What is the equivalent of R8G8B8A8_UInt or R8G8B8A8_UNorm in HLSL? I can only see 16-bit components mentioned in the documentation, no 8-bit types. (An input-layout sketch follows this list.)
  13. I do have hundreds of thousands of vertices (thousands of meshes), so I would save quite a bit of data. A constant buffer? I suppose that's the best way. I am drawing everything in bundles, so I suppose I would need to store all the colors in one long array (one value per mesh) and then index that array correctly when recording my draw calls. Makes sense.
  14. What is the best practice when you want to draw a surface (for instance a triangle strip) with a uniform color? At the moment I send vertices to the shader, where each vertex has both position and color information. Since all vertices of a given triangle strip have the same color, I thought I could reduce memory use by sending the color separately somehow. A vertex could then be represented by three floats instead of seven (xyz instead of xyz + rgba). Does it make sense? What's the best practice? (A per-mesh constant-buffer sketch follows this list.)
  15. lubbe75

    Overlay in DX12?

    My first thought was naturally to draw normal System.Drawing.Graphics elements on top of the 3D rendering, right after presenting the swap chain and waiting for the GPU to catch up. Unfortunately it didn't work: nothing gets drawn, at least not on top. If you have any method that works, please let me know. The area I'm drawing onto is a SharpDX.Windows.RenderForm (which inherits from System.Windows.Forms.Form). So I guess it's the long route of drawing more 3D polygons instead (an overlay-pass sketch follows this list).
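Code sketches

For the upload-buffer problem in posts 2 and 3: a minimal sketch of the chunked-allocation workaround, assuming an existing SharpDX.Direct3D12 Device and a DXGI 1.4 Adapter3 for querying the process's video memory budget. The 100 MB chunk size is the arbitrary cap from post 2, not a documented driver limit, and CreateChunkedUploadBuffers is an illustrative name.

```csharp
using System;
using System.Collections.Generic;
using SharpDX.Direct3D12;
using DXGI = SharpDX.DXGI;

static List<Resource> CreateChunkedUploadBuffers(
    Device device, DXGI.Adapter3 adapter, long totalBytes)
{
    const long ChunkSize = 100L * 1024 * 1024; // ~100 MB per upload buffer

    // DXGI 1.4 reports the OS-assigned video memory budget; checking it up
    // front avoids asking the driver for more than it can currently give.
    var info = adapter.QueryVideoMemoryInfo(0, DXGI.MemorySegmentGroup.Local);
    if (totalBytes > info.Budget)
        throw new InvalidOperationException(
            "Requested size exceeds the current video memory budget.");

    // Many small committed upload buffers instead of one huge one.
    var buffers = new List<Resource>();
    for (long offset = 0; offset < totalBytes; offset += ChunkSize)
    {
        long size = Math.Min(ChunkSize, totalBytes - offset);
        buffers.Add(device.CreateCommittedResource(
            new HeapProperties(HeapType.Upload),
            HeapFlags.None,
            ResourceDescription.Buffer(size),
            ResourceStates.GenericRead));
    }
    return buffers;
}
```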
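For the shared-surface idea in post 4: a hedged sketch of the D3D9Ex / D3DImage presentation side, following the common D3DImage interop pattern. hwnd, width, height and d3dImage are assumed to exist, and the Surface.CreateRenderTarget overload taking a shared handle is the SharpDX binding I'm assuming here. One caveat: D3D9Ex only opens legacy shared handles, so getting the D3D12 render target into this surface typically goes through a D3D11 detour (for example D3D11On12) rather than straight from D3D12.

```csharp
using System;
using System.Windows;
using System.Windows.Interop;
using SharpDX.Direct3D9;

// 1) D3DImage requires a D3D9Ex device on Vista and later.
var d3d9 = new Direct3DEx();
var pp = new PresentParameters
{
    Windowed = true,
    SwapEffect = SwapEffect.Discard,
    PresentationInterval = PresentInterval.Immediate,
    BackBufferWidth = 1,   // dummy; we never present through this device
    BackBufferHeight = 1,
    DeviceWindowHandle = hwnd
};
var device9 = new DeviceEx(d3d9, 0, DeviceType.Hardware, hwnd,
    CreateFlags.HardwareVertexProcessing | CreateFlags.Multithreaded | CreateFlags.FpuPreserve,
    pp);

// 2) A shared render target; 'sharedHandle' is what the other API opens so
//    both sides address the same video memory.
var sharedHandle = IntPtr.Zero;
var surface9 = Surface.CreateRenderTarget(device9, width, height,
    Format.A8R8G8B8, MultisampleType.None, 0, false, ref sharedHandle);

// 3) Hand the surface to WPF. After each frame is copied into the shared
//    resource, mark the image dirty so WPF recomposes.
d3dImage.Lock();
d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surface9.NativePointer);
d3dImage.AddDirtyRect(new Int32Rect(0, 0, width, height));
d3dImage.Unlock();
```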
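For the water-normal question in post 9: a quick-and-dirty HLSL sketch (HLSL because that is what the post asks for). It takes the gradient of a sum of three sine waves, with a cheap hash standing in for real noise; worldPos (coordinates on the water plane) and time are assumed inputs, and every constant is a tuning knob rather than a derived value.

```hlsl
// Cheap shadertoy-style hash; not a real random function, just decorrelated
// enough to break up visible tiling.
float hash(float2 p)
{
    return frac(sin(dot(p, float2(12.9898, 78.233))) * 43758.5453);
}

// Fake water normal from the analytic gradient of
// h(p) = sum_i a_i * sin(k_i * dot(p, d_i) + w_i * t).
float3 FakeWaterNormal(float2 worldPos, float time)
{
    float2 d1 = normalize(float2( 1.0,  0.3));
    float2 d2 = normalize(float2(-0.4,  1.0));
    float2 d3 = normalize(float2( 0.7, -0.8));

    // a_i * k_i * cos(...) factors, shared by both partial derivatives.
    float c1 = 0.50 * 0.9 * cos(dot(worldPos, d1) * 0.9 + time * 1.3);
    float c2 = 0.25 * 2.1 * cos(dot(worldPos, d2) * 2.1 + time * 1.7);
    float c3 = 0.15 * 4.7 * cos(dot(worldPos, d3) * 4.7 + time * 2.3);

    float dhdx = c1 * d1.x + c2 * d2.x + c3 * d3.x;
    float dhdz = c1 * d1.y + c2 * d2.y + c3 * d3.y;

    // Normal of the height field y = h(x, z).
    float3 n = float3(-dhdx, 1.0, -dhdz);

    // Tiny per-cell jitter to push visible repetition out of view.
    float2 cell = floor(worldPos * 8.0);
    n.xz += 0.04 * (float2(hash(cell), hash(cell + 17.0)) - 0.5);

    return normalize(n);
}
```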
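For the buffer-growing question in posts 10 and 11: a sketch of the copy-into-a-larger-buffer route using CopyBufferRegion, so the vertex data never has to round-trip through the CPU. It assumes an open GraphicsCommandList, that oldBuffer currently sits in the VertexAndConstantBuffer state, and that the caller waits on a fence before releasing the old buffer; GrowVertexBuffer is an illustrative name.

```csharp
using SharpDX.Direct3D12;

static Resource GrowVertexBuffer(
    Device device, GraphicsCommandList cmdList,
    Resource oldBuffer, long usedBytes, long newSizeBytes)
{
    // New default-heap buffer with headroom so the next growth comes later.
    var newBuffer = device.CreateCommittedResource(
        new HeapProperties(HeapType.Default),
        HeapFlags.None,
        ResourceDescription.Buffer(newSizeBytes),
        ResourceStates.CopyDestination);

    // Old buffer: vertex buffer -> copy source.
    cmdList.ResourceBarrierTransition(oldBuffer,
        ResourceStates.VertexAndConstantBuffer, ResourceStates.CopySource);

    // GPU-side copy of the live vertices.
    cmdList.CopyBufferRegion(newBuffer, 0, oldBuffer, 0, usedBytes);

    // New buffer: copy destination -> vertex buffer.
    cmdList.ResourceBarrierTransition(newBuffer,
        ResourceStates.CopyDestination, ResourceStates.VertexAndConstantBuffer);

    // Release oldBuffer only after the GPU has executed this command list.
    return newBuffer;
}
```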
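For the 8-bit type question in post 12: HLSL has no 8-bit scalar types, but none are needed. Declaring the vertex element as R8G8B8A8_UNorm makes the input assembler expand it to a float4 in [0, 1] before the shader sees it, so the HLSL side keeps using float4. A sketch of the input layout, with illustrative semantic names:

```csharp
using SharpDX.Direct3D12;
using SharpDX.DXGI;

// 12 bytes of position + 4 bytes of color = 16 bytes per vertex,
// instead of 28 bytes with a full float4 color.
var inputElements = new[]
{
    new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0),
    new InputElement("COLOR",    0, Format.R8G8B8A8_UNorm, 12, 0),
};

// HLSL side stays unchanged:
//   struct VSInput { float3 pos : POSITION; float4 color : COLOR; };
```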
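For the uniform-color thread in posts 13 and 14: a sketch of the per-mesh constant-buffer idea from post 13; one float4 per mesh in a single upload buffer, bound at a different offset per draw through a root CBV. The root-parameter index (1) is an assumption about the root signature, and the 256-byte padding per color follows from D3D12's requirement that CBV GPU addresses be 256-byte aligned.

```csharp
using System;
using System.Runtime.InteropServices;
using SharpDX;
using SharpDX.Direct3D12;

static void RecordColoredDraws(
    Device device, GraphicsCommandList commandList, Vector4[] meshColors)
{
    const int SlotSize = 256; // CBV addresses must be 256-byte aligned

    // One upload-heap buffer holding one padded color per mesh.
    var colorBuffer = device.CreateCommittedResource(
        new HeapProperties(HeapType.Upload),
        HeapFlags.None,
        ResourceDescription.Buffer(SlotSize * meshColors.Length),
        ResourceStates.GenericRead);

    // Write each mesh's color into its 256-byte slot.
    IntPtr mapped = colorBuffer.Map(0);
    for (int i = 0; i < meshColors.Length; i++)
        Marshal.StructureToPtr(meshColors[i], mapped + i * SlotSize, false);
    colorBuffer.Unmap(0);

    for (int i = 0; i < meshColors.Length; i++)
    {
        // Root parameter 1 is assumed to be a root CBV in the root signature.
        commandList.SetGraphicsRootConstantBufferView(
            1, colorBuffer.GPUVirtualAddress + i * SlotSize);
        // ... record the draw call(s) for mesh i here ...
    }
}
```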
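For the overlay question in post 15: GDI drawing tends to lose against a DX12 swap chain's presentation, so the "more polygons" route is the usual answer. A sketch of what that pass could look like at the end of the frame's command list; overlayPso (a pipeline state created with IsDepthEnabled = false) and overlayVbView are assumed to exist, and the overlay shader is assumed to output clip-space positions directly.

```csharp
using SharpDX.Direct3D;
using SharpDX.Direct3D12;

static void RecordOverlay(GraphicsCommandList commandList,
    PipelineState overlayPso, VertexBufferView overlayVbView)
{
    // Depth testing is off in overlayPso, so these triangles always land
    // on top of the scene recorded earlier in the same command list.
    commandList.PipelineState = overlayPso;
    commandList.PrimitiveTopology = PrimitiveTopology.TriangleStrip;
    commandList.SetVertexBuffer(0, overlayVbView);
    commandList.DrawInstanced(4, 1, 0, 0); // one screen-space quad
}
```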