About mellinoe

  1. mellinoe

    Renderdoc can see my model but...

    For me, the next step is usually to rule out depth/stencil/backface issues. I'd go over to the "Texture Viewer" tab, select the color/depth output textures, and check these "Overlay" visualizations:
    - Highlight Drawcall: it should show something, since your mesh is visible in the VS Output window.
    - Depth Test + Stencil Test (if you're using stencil): it should be green. If it's red, then something is wrong with your depth buffer or depth-test settings.
    - Backface Cull: it should be green; red means the geometry is being culled.
  2. mellinoe

    API Performance

    This is a hard question to answer, because any comparison of graphics APIs will inevitably depend on the drivers you are using, the operating system, and so on. That said: OpenGL and OpenGL ES are very similar (who would have guessed?), but in many cases they are mutually exclusive. Mobile systems don't really support "regular" OpenGL, so your only option there is OpenGL ES. Some desktop vendors only support OpenGL (Intel, I believe), whereas others support both GL and GLES (Nvidia). You'd have to ask a driver developer to be sure, but my suspicion is that where both are supported, a huge chunk of the driver code is shared between GL and GLES, and there is unlikely to be any performance difference, because the APIs are so similar.

    On another front: OpenGL ES is missing a lot of modern features that "regular" OpenGL has, and many of those features are aimed at performance. For example, modern OpenGL ES still doesn't support direct state access, which is a handy extension for avoiding pointless bind/unbind calls. Since it's missing some of those performance-centric features, OpenGL ES could end up a little slower, all other things being equal. But if you stick to the functionality that is more-or-less identical between GL and GLES, I would be surprised to see a difference on the same hardware.
  3. mellinoe

    Vulkan and C#

    I'm assuming that when you say "implementation of Vulkan in C#", you are referring to bindings that let you call into Vulkan. I maintain one such set of bindings for .NET: https://github.com/mellinoe/vk. I've built it mainly for myself, for internal use in my abstract graphics library Veldrid, where it provides the FFI for my Vulkan backend. As such, it's fairly specific to my needs: an extremely "raw" set of unsafe bindings, with no fancy wrapping or marshalling, intended to be used identically to Vulkan in other languages. It does not support Vulkan 1.1 yet, but I haven't seen other bindings support that yet, either.

    There are a couple of other options that might be more focused on "public consumption" than mine. https://github.com/discosultan/VulkanCore looks good, and is a higher-level set of bindings with more of an intermediate layer between you and native Vulkan. https://github.com/FacticiusVir/SharpVk is another that I've seen used. I don't have personal experience with either of these.
  4. The way you are checking the RGB channels in your pixel shader seems odd to me. Can't you just upload an alpha channel with your cursor texture, for example, and just pass that alpha along when you sample your texture?
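    As a sketch of the suggestion above -- an RGBA cursor texture whose authored alpha is simply passed through (the binding layout and variable names are hypothetical):

```glsl
#version 450

// Hypothetical binding/locations; adjust to match your pipeline.
layout(set = 0, binding = 0) uniform sampler2D CursorTexture;

layout(location = 0) in vec2 fsin_UV;
layout(location = 0) out vec4 fsout_Color;

void main()
{
    // No per-channel RGB checks needed: the alpha channel authored into
    // the texture travels with the texel and is passed straight through.
    fsout_Color = texture(CursorTexture, fsin_UV);
}
```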
  5. A few questions that can help shed light on the problem:
    - What depth range are you rendering -- e.g. what are your near and far planes, and where are most of your objects located? If the range is too large, then there may not be enough precision in the buffer where it matters.
    - What depth format are you using? A larger format could help.
    - Are you using the reverse-Z technique with a 32-bit floating-point depth buffer? I recently implemented that and saw significant improvements.
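    A hedged sketch of the reversed-Z setup mentioned in that last question, using System.Numerics' row-vector convention and a [0, 1] depth range. The function name is made up; pair it with a 32-bit float depth format, clear depth to 0, and a GREATER depth comparison:

```csharp
using System;
using System.Numerics;

static class ReversedZ
{
    // Infinite-far reversed-Z perspective: depth is 1 at the near plane
    // and approaches 0 at infinity, concentrating float precision where
    // the standard mapping wastes it.
    public static Matrix4x4 CreatePerspective(float fovY, float aspect, float near)
    {
        float f = 1.0f / MathF.Tan(fovY * 0.5f);
        return new Matrix4x4(
            f / aspect, 0f, 0f,   0f,
            0f,         f,  0f,   0f,
            0f,         0f, 0f,  -1f,
            0f,         0f, near, 0f);
    }
}
```

Note that System.Numerics' built-in Matrix4x4.CreatePerspectiveFieldOfView rejects near >= far, which is why the matrix is built by hand here.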
  6. The code modifying gl_Position.z is the standard modification intended to switch from OpenGL's clip space (-w -> w), to Vulkan's (0 -> w). How are you compiling your shaders to SPIR-V? Some tools have an option to automatically insert the depth-range fixup (and the inverted-Y fixup) -- perhaps that is what's happening in your case.
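    For reference, the manual fixup usually sits at the end of a GL-style vertex shader, like this (matrix and attribute names are placeholders); if your compilation tool already inserts it automatically, doing it by hand as well would apply it twice:

```glsl
#version 450

// Hypothetical transform; replace with your own uniforms.
layout(set = 0, binding = 0) uniform Transform { mat4 u_MVP; };
layout(location = 0) in vec3 in_Position;

void main()
{
    gl_Position = u_MVP * vec4(in_Position, 1.0);
    // Remap depth from OpenGL clip space [-w, w] to Vulkan's [0, w]:
    gl_Position.z = (gl_Position.z + gl_Position.w) * 0.5;
    // Vulkan's framebuffer Y axis points down, hence the common Y flip:
    gl_Position.y = -gl_Position.y;
}
```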
  7. mellinoe

    Limited number of Uniform Buffers?

    I also saw this after updating my validation layers recently. In my program, I was using 13 uniform buffers instead of the maximum of 12, so I just combined two of my buffers -- not really a general-purpose solution. The limit feels frustratingly low compared to the other graphics APIs I support, so I'd also be interested in hearing why this limitation exists. By the way, I use a GTX 770 and have the same limit (12). That seems unusual considering how much newer your card is -- it seems like a global driver limit, not necessarily linked to the chip.
  8. mellinoe

    DirectX - Vulkan clip space

    I flip the coordinates in my vertex shader, but I also generate my shaders from another representation, and the fixup happens "automatically". A similar fixup is done in my OpenGL shaders to reconcile the clip-space differences there (z-range).
  9. Like @Alberth said, strings are going to be deadly for performance, especially if you are using them this way (to serialize individual bytes of a massive byte array). Your goal should be to have zero strings involved in your entire serialization pipeline (unless you are actually serializing a string...). If you are serializing mesh data, terrain data, etc., then there's no reason to have a string at any point: you're dealing with geometric data (presumably), not text.

    Question: what does the actual deserialized data look like? Your engine doesn't deal with a string or a byte array when it's rendering this terrain data, right? That is just another intermediate representation. Your engine must be dealing with something that contains an array of vertices, or some similar data structure, in order to draw the terrain/mesh. Instead of thinking about how you can read and write your intermediate string, think about how you can most efficiently read and write the actual data you're interested in.
  10. Any particular reason you are serializing to a string, rather than using a binary format of some kind? Given that you already seem to have a byte[] array ready to go, you could just spit that out directly, rather than passing it through an intermediate string representation. Writing a byte array to a Stream or file is trivial, and reading it back is just as easy. It will also be many times faster than converting to and from a string.
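    A minimal sketch of that direct approach (the file path and the source of the byte[] are hypothetical):

```csharp
using System.IO;

static class TerrainIO
{
    // Write the existing byte[] straight to disk -- no string round-trip.
    public static void Save(string path, byte[] terrainBytes)
        => File.WriteAllBytes(path, terrainBytes);

    // Reading it back is a single call as well.
    public static byte[] Load(string path)
        => File.ReadAllBytes(path);
}
```

For structured data, wrapping a FileStream in BinaryWriter/BinaryReader lets you write vertices field-by-field without ever allocating a string.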
  11. This is not accurate. There is no difference between `System.Boolean` and `bool`: the latter is simply the C# language alias for the former, and there is no functional difference in their usage. I don't think there's anything dictating how much space primitive types take on the stack -- that is an implementation detail and likely depends on the runtime and optimization level being used. Traditionally, `System.Boolean` is not considered a blittable type because the Win32 BOOL type is 32 bits, whereas the .NET representation is 1 byte (8 bits). This behavior probably made sense when .NET was first designed, primarily for Win32 systems.
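    Where that size mismatch matters -- at interop boundaries -- it's usually handled explicitly. A sketch (the struct and field names are hypothetical):

```csharp
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct NativeOptions
{
    // Marshal as a 1-byte value, matching C/C++ bool:
    [MarshalAs(UnmanagedType.I1)]
    public bool OneByteFlag;

    // Marshal as a 4-byte Win32 BOOL (the marshaler's default for bool):
    [MarshalAs(UnmanagedType.Bool)]
    public bool Win32Flag;
}
```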
  12. mellinoe

    Urho3D Graphics abstraction

    I just treat this as a quirk of the Direct3D11 backend's state tracking behavior. It's not legal for users to bind incompatible resources to the pipeline through my library's regular API, and they would get a descriptive error message if they did that. Nevertheless, the D3D11 context might get into a state where there are resource conflicts after a sequence of legal operations, because unused resources aren't removed. I could probably purge those unused resources aggressively, but it would be unnecessary most of the time. EDIT: To clarify, in response to this: I'm only talking about the cases where the user hasn't made a mistake. If they try to actually do something illegal (e.g. bind a Texture as both read and read-write), then you should catch that separately and give them a descriptive error.
  13. mellinoe

    Urho3D Graphics abstraction

    I deal with a similar issue in my graphics library. My API is not exactly like what you've posted (it's closer to Vulkan/D3D12, with multiple resources being bound as a single set), but there's a similar problem to tackle in the Direct3D11 backend. In my case, I just keep track of all SRVs and UAVs that are currently bound, and check for invalid combinations when a new SRV or UAV is bound. For example, if you try to bind an SRV for Texture A to the context, I'll check whether Texture A has any bound UAVs. If it does, those are removed. Then the SRV is bound to the context and added to the map that tracks SRV state. The same process applies when a UAV is bound. A bit hand-wavey (I could elaborate if it doesn't make sense), but overall it's not a very complicated system. Urho3D might be doing something more clever than I am.
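    To make the hand-waving concrete, here's a rough sketch of that tracking scheme. The resource keys and the unbind helpers are placeholders, not a real D3D11 wrapper API:

```csharp
using System.Collections.Generic;

class SrvUavTracker
{
    private readonly Dictionary<object, int> _srvSlots = new Dictionary<object, int>();
    private readonly Dictionary<object, int> _uavSlots = new Dictionary<object, int>();

    public void BindSrv(object resource, int slot)
    {
        // An SRV and a UAV for the same resource can't be bound together,
        // so evict any conflicting UAV before binding the SRV.
        if (_uavSlots.TryGetValue(resource, out int uavSlot))
        {
            _uavSlots.Remove(resource);
            UnbindUav(uavSlot);
        }
        _srvSlots[resource] = slot;
    }

    public void BindUav(object resource, int slot)
    {
        if (_srvSlots.TryGetValue(resource, out int srvSlot))
        {
            _srvSlots.Remove(resource);
            UnbindSrv(srvSlot);
        }
        _uavSlots[resource] = slot;
    }

    // These would null out the corresponding slot on the D3D11 context.
    private void UnbindSrv(int slot) { }
    private void UnbindUav(int slot) { }
}
```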
  14. @Hodgman I am indeed using a D3D style matrix. I mentioned at the end that I made a couple of modifications that are supposed to account for those differences, which I found from some older discussion threads. I suppose I can go the whole way and try to start with a GL-style matrix and see if it helps. EDIT: Hrm, using a GL-style matrix didn't seem to affect the results much.
  15. I'm currently working on adding a reflective surface to my project. I have most of it down and working properly, but I am struggling with a technique I've read about online (and which seems to be widely used): "Modifying the Projection Matrix to Perform Oblique Near-Plane Clipping". Without the technique, I'm able to render my reflective surface correctly, but only if I delete all of the objects behind the reflection plane. I'm aware that I could add some code to my pixel shader to clip fragments that are behind the custom clipping plane, but I'd like to avoid that if I can. Without the oblique near plane: https://i.imgur.com/ZxnwKNX.jpg

    I've followed several different versions of the technique linked above, but I can't seem to get it right. Invariably, I end up with weird distortions like this: the last little "preview image" at the top shows the rendered reflection view. You can see that the reflected view is completely warped, and stretches out infinitely rather than just having a regular upside-down perspective. The function that modifies the projection matrix is really small, so I'm not sure where my mistake is. The one thing I've changed from the link above is that I scale the clip plane by (1 / dot(clip, q)) instead of (2 / dot(clip, q)), and removed a +1 at the end, because I am using a [0, 1] clip space. I've seen this mentioned in some places online; regardless, changing it back doesn't help.

    Can anyone point me in the right direction here? The relevant code can be seen here. Any help would be much appreciated.
