
Community Reputation

382 Neutral

About DiharaW

  1. I didn't know creating CBV descriptors was cheap. I should give that a try. I doubt I could use root CBVs for everything, though, since they take up a lot of root-signature space. Now I have to figure out a way to wrap this neatly alongside Vulkan. Thank you!
  2. So you're copying per-object constants between draw calls instead of copying everything at once into a single large buffer? And wouldn't copying descriptors every frame add a lot of overhead? How do you handle dynamic constants in your Vulkan path?
  3. So these days I'm writing a D3D12/Vulkan abstraction for a project and I've hit a wall tackling resource binding. In an older renderer I wrote, I put all of my per-object uniforms into one big Uniform Buffer/Constant Buffer, copied all the data in one go, and bound ranges of it for each object in the scene using glBindBufferRange (GL) and VSSetConstantBuffers1/PSSetConstantBuffers1 (D3D11.1). It seemed more efficient than copying between draws. The same thing can be done in Vulkan by creating a dynamic uniform buffer and providing dynamic offsets to vkCmdBindDescriptorSets. But when it comes to Direct3D 12, I haven't seen an equivalent approach yet. The only thing I've come up with so far is to create multiple Constant Buffer Views into a descriptor table and bind each one by adding the appropriate offset to the descriptor table's GPU handle:

      // Get the descriptor increment size
      const UINT cbvSrvDescriptorSize = pDevice->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

      // Handle for the first object's constant buffer view
      CD3DX12_GPU_DESCRIPTOR_HANDLE cbvSrvGpuHandle1(pCbvSrvHeap->GetGPUDescriptorHandleForHeapStart(), 0);
      pCommandList->SetGraphicsRootDescriptorTable(0, cbvSrvGpuHandle1);
      // Draw first object...

      // Offset by one descriptor to get the handle for the second object's constant buffer view
      CD3DX12_GPU_DESCRIPTOR_HANDLE cbvSrvGpuHandle2(pCbvSrvHeap->GetGPUDescriptorHandleForHeapStart(), cbvSrvDescriptorSize);
      pCommandList->SetGraphicsRootDescriptorTable(0, cbvSrvGpuHandle2);
      // Draw second object...

     While this approach works fine, it doesn't match Vulkan's dynamic-offset approach, so I can't make a clean abstraction over the two. So how would you guys go about handling this? And I'd love to know your approaches to abstracting the Vulkan/D3D12 binding model in general.
  4. I prefer to give interface names the I-prefix. For file names I just use lowercase separated by underscores, without the I-prefix, because, like you said, i_my_class.cpp looks really weird. So, for example, I'd have a base header like render_device.h containing the IRenderDevice interface, and several implementations in files like render_device_d3d12.h, render_device_vk.h, etc.
  5. How Much Do You Program Outside of Work?

    I work as an iOS developer by day, so all my game-dev-related programming is done outside of work. I'm finishing up my last year of college while working a full-time job, so my spare time is very limited now. It's sad, really.
  6. Multithreading questions

    Yeah, I was trying to say that having multiple mutexes is a bad idea. Guess I didn't convey my idea properly. How would a semaphore work to limit the number of simultaneous accessors? I'm curious because I've only used semaphores for signalling events (i.e. kicking off a GPU submit job, etc.).
  7. Multithreading questions

    When both threads try to access a queue protected by a single shared mutex, they will access it one after the other, in a serial manner. And yes, while one thread holds the mutex, the other threads will be blocked until they can acquire it too. If your goal is to protect the shared resource (the queue, in this case), having two mutexes would mean that two threads can access the resource simultaneously. If that's what you want, then cool. But if you want to make the queue thread-safe, sharing a single mutex is the way to go.
  8. Hi guys! These days I'm trying to add support for Direct3D 11 to my rendering engine, which currently supports OpenGL 3.3 upwards. While writing the abstraction I hit a bit of a roadblock: input layouts. To my knowledge, in Direct3D 11 you have to define an Input Layout per shader (by providing shader bytecode), whereas in OpenGL you make glVertexAttribPointer calls for each attribute. Currently I am using VAOs and store the attribute locations in them by calling glVertexAttribPointer after binding the buffers, like so:

      glGenVertexArrays(1, &VAO);
      glGenBuffers(1, &VBO);
      glGenBuffers(1, &IBO);

      glBindVertexArray(VAO);

      glBindBuffer(GL_ARRAY_BUFFER, VBO);
      glBufferData(GL_ARRAY_BUFFER, Vertices.size() * sizeof(Vertex), &Vertices[0], GL_STATIC_DRAW);

      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
      glBufferData(GL_ELEMENT_ARRAY_BUFFER, Indices.size() * sizeof(GLuint), &Indices[0], GL_STATIC_DRAW);

      // then do the attrib pointer calls before unbinding the VAO
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)0);
      // ......

      glBindVertexArray(0);

     But since doing this per-VAO won't work with Direct3D, I have come up with the following: call glVertexAttribPointer every frame for each shader that uses a particular vertex layout (similar to calling IASetInputLayout).

     Question 1: Is this a good idea? Will calling glVertexAttribPointer so often affect performance? How do you guys handle this in your engines?

     Question 2 (bonus :D): Should I use vertex buffers and index buffers in my OpenGL implementation without VAOs (because Direct3D does not have such a thing)? Or should I try to emulate VAOs in Direct3D somehow (an array of buffers? that's awful, I think).

     I know it's a lot of questions, but it's been driving me nuts. I really want to hear your thoughts. Thanks in advance, guys!
  9. I actually did a bit more reading and realized that my initial understanding was in fact correct; it's just the wording in some articles that threw me off and confused me a little. Until yesterday I was using Blinn-Phong, but now I've got the Cook-Torrance specular working, along with Image Based Lighting. It seems to work, apart from the black spots on the edges. Is that something wrong with the Fresnel?

     Edit: This is with a roughness of 0.1. I guess I'll try a different distribution and geometry calculation.

     Ahh, thank you!! I found those links really helpful!
  10. Thank you so much! That cleared up a lot of things! In your answer to my second question, by "color" do you mean "albedo"? And what about the values you sample from the specular map? Is that no longer useful with PBR, or do you need both a specular map and a roughness map? And is the final fragment color determined like this? FinalColor = Kd * max(NdotL, 0) + Ks * Rs + Ambient
  11. So I've been learning OpenGL for a few months, and I wrote a simple engine for the sake of learning and implementing new techniques. I've implemented a considerable amount of stuff like SSAO, HDR, Bloom, Cascaded Shadow Maps, Variance Shadow Maps and so on. I recently started reading up on Physically Based Rendering and ended up realizing that my knowledge of lighting itself isn't very good. I live in Sri Lanka and programmers here don't even know what graphics programming is, so asking these questions on this forum is my only option.

     What are the Ka, Kd, Ks terms in a usual Blinn-Phong shader? Are they material properties? Some shaders represent them as floats and some as vec3s. Or is Kd equal to "max(NdotL, 0.0)" and Ks equal to "pow(max(NdotH, 0.0), 32.0)", in which case are they the 'intensities' of each? Or are these three terms just constants meant to be controlled by artists?

     In PBR, are the Kd and Ks terms determined using the metalness and roughness values? I've been reading this article: http://www.codinglabs.net/article_physically_based_rendering_cook_torrance.aspx , and in it it says the sum of Ks and Kd cannot exceed 1.

     I've been trying to implement a Cook-Torrance specular following this article: http://content.gpwiki.org/index.php/D3DBook:(Lighting)_Cook-Torrance but I have some confusion regarding it. Should "pow(max(NdotH, 0.0), 32.0)" be replaced by the "Rs" term from the code in that article? When I tried that, the ambient-lit areas looked pitch black and the specular highlights were all wrong.

     Why does the Cook-Torrance shader in the CodingLabs article use multiple samples while the GPWiki article does not? Is it to accurately represent the distribution of light?

     ...and I think that's all the questions off the top of my head. Really sorry if this is a lot, but I'm still a beginner and need a lot of clarification. Anyway, I hope you guys can help me out! Thanks in advance!