mellinoe

Member
  • Content count

    12
  • Joined

  • Last visited

Community Reputation

4 Neutral

About mellinoe

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

Social

  • Twitter
    effyneber
  • Github
    mellinoe
  1. A few questions that can help shed light on the problem:
     • What depth range are you rendering -- e.g. what are your near and far planes? Where are most of your objects located? If the range is too large, then there may not be enough precision in the buffer where it matters.
     • What depth format are you using? A larger format could help.
     • Are you using the reverse-Z technique with a 32-bit floating-point depth buffer? I recently implemented that and saw significant improvements.
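The reverse-Z suggestion in that last bullet can be demonstrated numerically. A minimal sketch (Python/NumPy, with an illustrative near/far range -- not code from the posts): two distant surfaces one unit apart collapse to the same 32-bit depth value under a conventional mapping, but stay distinguishable under reverse-Z, because reverse-Z puts distant geometry near 0.0, where float32 has far more resolution.

```python
import numpy as np

def ndc_depth(z, n, f, reverse=False):
    """Map a positive view-space distance z to [0, 1] NDC depth
    (D3D-style projection). reverse=True swaps the near/far roles."""
    if reverse:
        return n * (f - z) / (z * (f - n))   # 1 at near, 0 at far
    return f * (z - n) / (z * (f - n))       # 0 at near, 1 at far

n, f = 0.1, 10000.0          # a deliberately large depth range
z1, z2 = 5000.0, 5001.0      # two distant surfaces 1 unit apart

# Conventional Z: both depths land within one float32 ULP of each
# other near 1.0, so a 32-bit buffer cannot tell them apart.
c1, c2 = ndc_depth(z1, n, f), ndc_depth(z2, n, f)
print(abs(c1 - c2), np.spacing(np.float32(c1)))

# Reverse-Z: the same pair is separated by thousands of ULPs near 0.0.
r1, r2 = ndc_depth(z1, n, f, reverse=True), ndc_depth(z2, n, f, reverse=True)
print(abs(r1 - r2), np.spacing(np.float32(r1)))
```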
  2. The code modifying gl_Position.z is the standard modification intended to switch from OpenGL's clip space ([-w, w]) to Vulkan's ([0, w]). How are you compiling your shaders to SPIR-V? Some tools have an option to automatically insert the depth-range fixup (and the inverted-Y fixup) -- perhaps that is what's happening in your case.
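The depth-range fixup being discussed is the one-liner `gl_Position.z = (gl_Position.z + gl_Position.w) / 2.0;` appended to a vertex shader. A quick numeric sketch of what that remap does (Python, endpoints only):

```python
def gl_to_vulkan_z(z_clip, w_clip):
    """Remap OpenGL clip-space depth [-w, w] to Vulkan's [0, w],
    mirroring the shader fixup (z + w) / 2."""
    return (z_clip + w_clip) / 2.0

w = 3.0
print(gl_to_vulkan_z(-w, w))  # GL near plane maps to 0 (Vulkan near)
print(gl_to_vulkan_z(w, w))   # GL far plane stays at w (Vulkan far)
```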
  3. mellinoe

    Limited number of Uniform Buffers?

    I also saw this after updating my validation layers recently. My program was using 13 uniform buffers instead of the maximum of 12, so I just combined two of them -- not really a general-purpose solution. The limit feels frustratingly low compared to the other graphics APIs I support, so I'd also be interested in hearing why it exists. By the way, I use a GTX 770 and have the same limit (12), which seems unusual considering how much newer your card is. Perhaps it's a global driver limit rather than one tied to the specific chip?
  4. mellinoe

    DirectX - Vulkan clip space

    I flip the coordinates in my vertex shader, but I also generate my shaders from another representation, and the fixup happens "automatically". A similar fixup is done in my OpenGL shaders to reconcile the clip-space differences there (z-range).
  5. Like @Alberth said, strings are going to be deadly for performance, especially if you are using them this way (to serialize individual bytes of a massive byte array). Your goal should be to have zero strings involved in your entire serialization pipeline (unless you are actually serializing a string...). If you are serializing mesh data (or terrain data, etc.), then there's no reason to have a string at any point. You're dealing with geometric data (presumably), not text.

     Question: what does the actual deserialized data look like? Your engine doesn't deal with a string or with a byte array when it's rendering this terrain data, right? That is just another intermediate representation. Your engine must be dealing with something that contains an array of vertices, or some similar data structure, in order to draw the terrain/mesh. Instead of thinking about how you can read and write your intermediate string, you should be thinking about how you can most efficiently read and write the actual data you're interested in.
  6. Any particular reason you are serializing to a string, rather than using a binary format of some kind? Given that you already seem to have a byte[] array ready to go, you could just spit that out directly, rather than passing it through an intermediate string representation. Writing a byte array to a Stream or file is trivial, and reading it back is just as easy. It will also be many times faster than converting to and from a string.
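The binary round trip described above is only a few lines. A sketch in Python for illustration (the thread is C#, where `BinaryWriter` and `Stream.Write` play the same role as `struct` and `BytesIO` here; the vertex data is hypothetical):

```python
import io
import struct

# Hypothetical mesh data: interleaved (x, y, z) positions.
vertices = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]

# Write path: pack the floats straight into bytes -- no string ever exists.
buf = io.BytesIO()
for v in vertices:
    buf.write(struct.pack("<3f", *v))   # little-endian, 3 floats per vertex
raw = buf.getvalue()                    # 2 vertices * 12 bytes = 24 bytes

# Read path: unpack directly back into the structure the engine uses.
restored = [struct.unpack_from("<3f", raw, i * 12) for i in range(len(vertices))]
```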
  7. This is not accurate. There is no difference between `System.Boolean` and `bool`: the latter is simply the C# language alias for the former, and there is no functional difference in their usage. I don't think there's anything dictating how much space primitive types take on the stack -- that is an implementation detail and likely depends on the runtime and optimization level being used. Traditionally, `System.Boolean` is not considered a blittable type because the Win32 boolean type (`BOOL`) is 32 bits, whereas the .NET representation is 1 byte. This behavior probably made sense when .NET was first designed, primarily for Win32 systems.
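The size mismatch behind the blittability rule can be shown with `ctypes`, which models the native side of the boundary (a sketch, not from the original post):

```python
import ctypes

# Win32's BOOL is a typedef for a plain 32-bit int, while .NET's
# System.Boolean (C# `bool`) is stored as a single byte.  This size
# mismatch is why the type is not blittable and must be converted
# when marshaling across the managed/native boundary.
win32_bool_size = ctypes.sizeof(ctypes.c_int)   # BOOL: 4 bytes
dotnet_bool_size = ctypes.sizeof(ctypes.c_bool) # 1 byte, like System.Boolean
print(win32_bool_size, dotnet_bool_size)
```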
  8. mellinoe

    Urho3D Graphics abstraction

    I just treat this as a quirk of the Direct3D11 backend's state tracking behavior. It's not legal for users to bind incompatible resources to the pipeline through my library's regular API, and they get a descriptive error message if they try. Nevertheless, the D3D11 context might get into a state where there are resource conflicts after a sequence of legal operations, because unused resources aren't removed. I could probably aggressively purge those unused resources, but it would be unnecessary most of the time.

    EDIT: To clarify: I'm only talking about the cases where the user hasn't made a mistake. If they try to do something actually illegal (e.g. bind a Texture as both read and read-write), then you should catch that separately and give them a descriptive error.
  9. mellinoe

    Urho3D Graphics abstraction

    I deal with a similar issue in my graphics library. My API is not exactly like what you've posted (it's closer to Vulkan/D3D12, with multiple resources bound as a single set), but a similar problem needs to be tackled in the Direct3D11 backend. In my case, I keep track of all SRVs and UAVs that are currently bound, and check for invalid combinations whenever a new SRV or UAV is bound. For example, if you try to bind an SRV for Texture A to the context, I check whether Texture A has any bound UAVs. If it does, those are removed. Then the SRV is bound to the context and added to the map that tracks SRV state. The same process applies when a UAV is bound. A bit hand-wavey (I can elaborate if it doesn't make sense), but overall it's not a very complicated system. Urho3D might be doing something more clever than I am.
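The tracking scheme described above can be sketched in a few lines (Python pseudocode for the idea; the real backend would be C# driving the D3D11 context, and the class and method names here are hypothetical):

```python
class BindingTracker:
    """Hazard tracking in the style described above: binding a resource
    for reading (SRV) evicts any bound writable view (UAV) of the same
    resource, and vice versa, before the new binding goes live."""

    def __init__(self):
        self.srvs = {}  # slot -> resource bound for reading
        self.uavs = {}  # slot -> resource bound for read-write

    def bind_srv(self, slot, resource):
        # Remove conflicting UAVs of the same resource first.
        for s in [s for s, r in self.uavs.items() if r is resource]:
            del self.uavs[s]
        self.srvs[slot] = resource

    def bind_uav(self, slot, resource):
        # Mirror image: evict SRVs of the same resource.
        for s in [s for s, r in self.srvs.items() if r is resource]:
            del self.srvs[s]
        self.uavs[slot] = resource

tracker = BindingTracker()
texture_a = object()
tracker.bind_uav(0, texture_a)   # bound for writing
tracker.bind_srv(3, texture_a)   # binding for reading evicts the UAV
```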
  10. @Hodgman I am indeed using a D3D style matrix. I mentioned at the end that I made a couple of modifications that are supposed to account for those differences, which I found from some older discussion threads. I suppose I can go the whole way and try to start with a GL-style matrix and see if it helps. EDIT: Hrm, using a GL-style matrix didn't seem to affect the results much.
  11. I'm currently working on adding a reflective surface to my project. I have most of it down and working properly, but I am struggling with a technique I've read about online (and which seems to be widely used): "Modifying the Projection Matrix to Perform Oblique Near-Plane Clipping".

      Without the technique, I'm able to render my reflective surface correctly, but only if I delete all of the objects behind the reflection plane. I'm aware that I could add some code to my pixel shader to clip fragments that are behind the custom clipping plane, but I'd like to avoid doing that if I can. Without the oblique near plane: https://i.imgur.com/ZxnwKNX.jpg

      I've followed several different versions of the technique, but I can't seem to get it right. Invariably, I end up with weird distortions: the little "preview image" at the top shows the rendered reflection view, and you can see that the reflected view is completely warped, stretching out infinitely rather than just having a regular upside-down perspective.

      The function that modifies the projection matrix is really small, so I'm not sure where my mistake is. The one thing I've modified from the original technique: I scale the clip plane by (1 / dot(clip, q)) instead of (2 / dot(clip, q)), and removed a +1 at the end, because I am using a [0, 1] clip space -- I've seen this mentioned in some places online. Regardless, changing it back doesn't help.

      Can anyone point me in the right direction here? The relevant code can be seen here. Any help would be much appreciated.
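For reference, a sketch of the oblique near-plane modification for a [0, 1] clip space, using the 1/dot(clip, q) scaling mentioned above. This is an illustration under assumed conventions (column vectors, right-handed view space looking down -z, D3D-style [0, 1] depth), not the poster's actual code:

```python
import numpy as np

def perspective_01(n, f):
    """Perspective projection: view space looks down -z, NDC depth in [0, 1]."""
    return np.array([
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, f / (n - f), n * f / (n - f)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def make_oblique(proj, clip_plane):
    """Replace the near plane with clip_plane (given in camera space,
    with the camera on the plane's negative side).  The [0, 1] variant
    writes the plane, scaled by 1/dot(clip, q), straight into row 3."""
    m = proj.copy()
    c = np.asarray(clip_plane, dtype=np.float64)
    # q: the far-plane point opposite the clip plane, pulled back to view space.
    q = np.linalg.solve(proj, np.array([np.sign(c[0]), np.sign(c[1]), 1.0, 1.0]))
    m[2] = c / np.dot(c, q)
    return m

proj = perspective_01(0.5, 100.0)
plane = np.array([0.0, 0.0, -1.0, -5.0])   # camera-space plane z = -5
oblique = make_oblique(proj, plane)

# A point lying on the clip plane lands exactly on the new near plane (z = 0).
on_plane = oblique @ np.array([1.0, 2.0, -5.0, 1.0])
```

One check worth doing when debugging the warped result: verify that points on the mirror plane produce clip-space z of exactly 0 after the modification; if they don't, the plane is likely in the wrong space or the wrong sign convention.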
  12. Hi all, first-time poster here, although I've been reading posts here for quite a while. This place has been invaluable for learning graphics programming -- thanks for a great resource!

      Right now, I'm working on a graphics abstraction layer for .NET which supports D3D11, Vulkan, and OpenGL at the moment. I have implemented most of my planned features already, and things are working well. The remaining features I am planning are compute shaders and some flavor of read-write shader resources. At the moment, my shaders can just get simple read-only access to a uniform (or constant) buffer, a texture, or a sampler.

      Unfortunately, I'm having a tough time grasping the distinctions between all of the different kinds of read-write resources that are available. In D3D alone, there seem to be five or six different kinds of resources with similar but different characteristics. On top of that, I get the impression that some of them are more or less obsoleted by the newer kinds, and don't have much of a place in modern code. There seem to be a few pivots:
      • The data source/destination (buffer or texture)
      • Read-write or read-only
      • Structured or unstructured (?)
      • Ordered vs. unordered (?)

      These are just my observations based on a lot of MSDN and OpenGL doc reading. For my library, I'm not interested in exposing every possibility to the user -- just trying to find a good middle ground that can be represented cleanly across APIs and is good enough for common scenarios. Can anyone give a sort of overview of the different options, and perhaps compare/contrast the concepts between Direct3D, OpenGL, and Vulkan? I'd also be very interested in hearing how other folks have abstracted these concepts in their libraries.