KarimIO

Member
  • Content count

    114
  • Joined

  • Last visited

Community Reputation

271 Neutral

About KarimIO

  • Rank
    Member

Personal Information

  • Interests
    Art
    Audio
    Design
    Programming
  1. STB Image

    It works great here, been using it for years.
  2. Hey guys! Three questions about uniform buffers: 1) Is there a benefit to Vulkan's and DirectX's per-stage binding for constant/uniform buffers? In these APIs, and NOT in OpenGL, you must specify which shader stage receives each buffer. Why is this? To allow more slots? 2) I'm building a wrapper over these graphics APIs and was wondering how to handle passing parameters. I currently use my own JSON format to describe material and shader formats, in which I can specify which shaders get which uniform buffers. I was thinking of supporting ShaderLab (Unity's shader format) instead, as this would let people jump over easily enough and ease the learning curve. But as far as I can tell, ShaderLab does not support multiple uniform buffers at all, let alone specifying which parameters go where. So to work around this, I was just going to bind all uniform buffers to all shader stages. Is that a big problem? 3) Do you have any references on how to organize material uniform buffers? I may be optimizing too early, but I've seen people say what a toll this can take.
  3. Sorry for the late reply, but does this mean I can just make everything RGBA without negative consequences?
  4. Okay, thanks! I've built my own DDS exporter and importer, so I can easily modify it to support this. I was just confused because, according to MSDN, most of the channel-related flags apply to uncompressed formats. Also, I'll let the question about whether or not DXT1_RGB exists stand, for curiosity's sake.
  5. Isn't that just for uncompressed data? Or for all of it?
  6. Hey guys, is there a difference in performance between GL_COMPRESSED_RGB_S3TC_DXT1_EXT and GL_COMPRESSED_RGBA_S3TC_DXT1_EXT? I want to use the latter in all cases if possible, because DDS files give me no reliable way to check whether the alpha bit is used.
  7. But those issues can be mitigated by using features like parallax cubemapping and SSAO. Not fully, to be sure, but when it's the high-frequency data that matters, I don't see how cramming in more low-frequency data can help all that much. And how would they even be combined? I skimmed through most of this and it's quite interesting so far. That was my thought process for the most part, but I think many are able to combine lightmaps and SH quite well. Sounds interesting! I'll try this out, then. Do you have any articles about this technique?
  8. I was considering only using those kinds of approaches on smaller areas, if at all (RSM, VoxelGI, and the like would be supported in volumes) because they're so expensive. But I'll check out your link, thanks
  9. Hey guys, are lightmaps still the best way to handle static diffuse irradiance, or is SH used for both diffuse and specular irradiance now? Also, do any modern games bake direct light into lightmaps, or is all direct lighting handled by shadow maps now? Finally, how is SH usually baked? Thanks!
  10. Hey guys, so I was wondering how modern terrain and water geometry works, both with and without tessellation. Essentially: 1) Is geoclipmapping still the best CPU tessellation technique? 2) Is geoclipmapping still used alongside hardware tessellation? 3) Is non-tessellated water just flat? Are there any other (reasonable) ways to simulate it? Do people use geoclipmapping for that too? Thanks!
  11. DX11 Copy Z-Buffer in DirectX

    To clarify for those finding this later (and so someone can correct me if I'm wrong): this is a good solution, but you cannot use it with the default depth buffer in OpenGL. This can be solved by rendering through an FBO in the middle, and doing something that doesn't output depth, such as post-processing, at the end.
  12. DX11 Copy Z-Buffer in DirectX

    Forgive me if this is marginally off topic, but with OpenGL, is it done this way?

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
  13. DX11 Copy Z-Buffer in DirectX

    Are you suggesting sharing an image/texture (specifically the depth buffer) between two framebuffers? I had considered that but I thought it might have issues.
  14. Hey guys, I'm trying to add transparent objects to my deferred-rendered scene. The only issue is the z-buffer. As far as I know, the standard way to handle this is copying the buffer; in OpenGL, I can just blit it. What's the equivalent in DirectX? And are there any alternatives to copying the buffer? Thanks in advance!
  15. 3D Do models go inside a BSP?

    I didn't mean frustum culling of pixels; I'm aware that happens automatically. I meant culling of subtrees and entire objects or areas. Awesome! Thanks! I'll check it out soon. And is this used in addition to the previously mentioned techniques, or instead of them? Thanks again, JoeJ and Hodgman!