_void_

Members
  • Content count

    99
Community Reputation

862 Good

About _void_

  • Rank
    Member

Personal Information

  • Interests
    Education
    Programming
  1. @galop1n Yep, you are right. I was using Sample instead of SampleLevel. That solved the issue. Do you know if MinMax filtering is supported by default in D3D12? If not, how do you check whether it is supported? Thanks!
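
    A minimal capability-check sketch, assuming (as with the D3D11 MinMaxFiltering cap) that min/max filter reduction availability tracks Tiled Resources Tier 2 support:

    #include <d3d12.h>

    // Returns true if the adapter reports Tiled Resources Tier 2 or higher,
    // which is assumed here to imply min/max filter reduction support.
    bool MinMaxFilteringLikelySupported(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &options, sizeof(options))))
            return false;
        return options.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_2;
    }
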
  2. Hello guys, I would like to use MinMax filtering (D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT) in a compute shader. When I compile the compute shader against cs_5_0 I get the error "error X4532: cannot map expression to cs_5_0 instruction set". I also tried compiling against cs_6_0 and got "unrecognized compiler target cs_6_0", which I do not really understand, as cs_6_0 is supposed to be supported. According to MSDN, D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT should "Fetch the same set of texels as D3D12_FILTER_MIN_MAG_MIP_POINT and instead of filtering them return the maximum of the texels. Texels that are weighted 0 during filtering aren't counted towards the maximum. You can query support for this filter type from the MinMaxFiltering member in the D3D11_FEATURE_DATA_D3D11_OPTIONS1 structure". I am not sure this documentation applies here, as it is talking about Direct3D 11, and D3D12_FEATURE_DATA_D3D12_OPTIONS does not seem to provide this kind of check. The Direct3D device is created with feature level D3D_FEATURE_LEVEL_12_0 and I am using VS 2015. Thanks!
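
    In case it helps narrow things down, a minimal sketch (with a hypothetical descriptor handle) of requesting the MAXIMUM reduction filter on the API side through a sampler descriptor:

    #include <d3d12.h>

    // Creates a point sampler that returns the per-channel maximum of the
    // fetched texels. "cpuHandle" is assumed to point into an existing
    // sampler descriptor heap.
    void CreateMaxPointSampler(ID3D12Device* device, D3D12_CPU_DESCRIPTOR_HANDLE cpuHandle)
    {
        D3D12_SAMPLER_DESC desc = {};
        desc.Filter   = D3D12_FILTER_MAXIMUM_MIN_MAG_MIP_POINT;
        desc.AddressU = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
        desc.AddressV = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
        desc.AddressW = D3D12_TEXTURE_ADDRESS_MODE_CLAMP;
        desc.MaxLOD   = D3D12_FLOAT32_MAX;
        device->CreateSampler(&desc, cpuHandle);
    }
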
  3. It does not really matter which option you choose:

    1. One ring buffer of size = number of meshes * number of vertices per mesh * number of back buffers, or
    2. A ring buffer per mesh, each of size = number of vertices per mesh * number of back buffers.

    If you read the data directly from the upload heap, this would suffice. If you decide to copy the data from the upload heap to a default heap before reading, you would need to duplicate it in the default heap as well.
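
    A minimal sketch of the sizing arithmetic for option 1 (one shared ring buffer in an upload heap); all the counts below are hypothetical:

    #include <cstdint>

    const uint32_t kBackBufferCount = 3;
    const uint32_t kMeshCount       = 64;
    const uint32_t kVerticesPerMesh = 1024;
    const uint32_t kVertexStride    = 32;   // assumed size of one vertex in bytes

    const uint64_t kBytesPerFrame   = uint64_t(kMeshCount) * kVerticesPerMesh * kVertexStride;
    const uint64_t kTotalBufferSize = kBytesPerFrame * kBackBufferCount;

    // Each frame writes into its own slice, so the GPU can still read the
    // slices recorded for the frames that are in flight.
    uint64_t MeshOffset(uint32_t frameIndex, uint32_t meshIndex)
    {
        return kBytesPerFrame * frameIndex
             + uint64_t(meshIndex) * kVerticesPerMesh * kVertexStride;
    }
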
  4. Would whole-heartedly recommend - a great explanation of the D3D11 API! I really liked the chapters on resources and deferred shading when I was reading it back then.
  5. DX12 PIX for Windows released yesterday

    How does PIX fit in with the VS Graphics Debugger? Will the VS Graphics Debugger continue to exist?
  6. Yeah, thank you guys for the clarification :-)
  7. Hello guys :-) I am working on a render pass which produces output in one execution and then reads that output in the next execution, and this has to be repeated a number of times. I am wondering if I could implement this with one ExecuteIndirect call using bindless resources. For each dispatch I would generate an indirect argument as an encoded 32-bit constant containing the indices of the input/output textures. Please see the code below.

    Texture2D<float3> g_InTextures[];    // Contains Tex1, Tex2 as SRV
    RWTexture2D<float3> g_OutTextures[]; // Contains Tex1, Tex2 as UAV

    cbuffer IndirectArg32BitConstant : register(b0)
    {
        // first 16 bits contain the input texture index,
        // second 16 bits contain the output texture index
        uint g_TextureIndex;
    }

    void CS(...)
    {
        uint inIndex  = g_TextureIndex & 0xFFFF;
        uint outIndex = g_TextureIndex >> 16;
        uint2 texLoc = ...;
        g_OutTextures[outIndex][texLoc] = g_InTextures[inIndex][texLoc];
    }

    The drawback of this approach is that the resources need to be transitioned between read and write states between dispatches, which is not feasible with ExecuteIndirect arguments. Instead, I thought I could declare g_InTextures as RWTexture2D as well, so that no resource transitions are needed:

    RWTexture2D<float3> g_InTextures[];  // Contains Tex1, Tex2 as UAV
    RWTexture2D<float3> g_OutTextures[]; // Contains Tex1, Tex2 as UAV
    // (the rest of the shader stays the same)

    But now I do not know if this is sufficient from a read/write synchronization point of view. Could the dispatches issued by ExecuteIndirect be overlapped by the driver? Could we start reading a UAV while the previous dispatch is still writing to it? Do you think it is a good idea to implement ping-pong rendering this way? Seems like a neat trick if possible :-)
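
    Not necessarily the setup described above - a minimal sketch of a command signature whose per-dispatch record packs the 32-bit texture-index constant next to the dispatch arguments, assuming root parameter 0 is that root constant:

    #include <d3d12.h>

    // One record in the indirect argument buffer.
    struct IndirectDispatch
    {
        UINT                     textureIndices;  // input index in the low 16 bits, output index in the high 16 bits
        D3D12_DISPATCH_ARGUMENTS dispatchArgs;
    };

    ID3D12CommandSignature* CreateDispatchSignature(ID3D12Device* device, ID3D12RootSignature* rootSig)
    {
        D3D12_INDIRECT_ARGUMENT_DESC args[2] = {};
        args[0].Type                             = D3D12_INDIRECT_ARGUMENT_TYPE_CONSTANT;
        args[0].Constant.RootParameterIndex      = 0;
        args[0].Constant.DestOffsetIn32BitValues = 0;
        args[0].Constant.Num32BitValuesToSet     = 1;
        args[1].Type = D3D12_INDIRECT_ARGUMENT_TYPE_DISPATCH;

        D3D12_COMMAND_SIGNATURE_DESC desc = {};
        desc.ByteStride       = sizeof(IndirectDispatch);
        desc.NumArgumentDescs = 2;
        desc.pArgumentDescs   = args;

        ID3D12CommandSignature* signature = nullptr;
        device->CreateCommandSignature(&desc, rootSig, IID_PPV_ARGS(&signature));
        return signature;
    }

    As far as I know, nothing synchronizes the individual dispatches issued by a single ExecuteIndirect call, so a dispatch reading a UAV that the previous dispatch wrote would still need a UAV barrier in between, and that cannot be expressed through the indirect arguments themselves.
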
  8. Guys, thank you for the input! Really appreciated. Hmmm, this somehow slipped my mind. It should work in my case!
  9. Hi guys! Are there any recommendations on the number of draw/dispatch calls that you would/could record per command list? I am working on a light propagation volume algorithm where you repeatedly need to run the same shader in ping-pong fashion over the input/output results. I am wondering if I could record all the passes in one command list :-) Thanks
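
    A minimal sketch of recording all the ping-pong passes into a single command list, assuming a hypothetical root constant at parameter 0 selects which volume is read and which is written; the UAV barrier makes each pass's writes visible to the next:

    #include <d3d12.h>

    void RecordPingPongPasses(ID3D12GraphicsCommandList* cmdList,
                              UINT numIterations, UINT groupsX, UINT groupsY, UINT groupsZ)
    {
        for (UINT i = 0; i < numIterations; ++i)
        {
            // Root constant telling the shader which volume is input vs output
            // this iteration (hypothetical root parameter layout).
            cmdList->SetComputeRoot32BitConstant(0, i & 1, 0);
            cmdList->Dispatch(groupsX, groupsY, groupsZ);

            // UAV barrier between iterations; a null resource means
            // "all pending UAV accesses must complete".
            D3D12_RESOURCE_BARRIER barrier = {};
            barrier.Type          = D3D12_RESOURCE_BARRIER_TYPE_UAV;
            barrier.UAV.pResource = nullptr;
            cmdList->ResourceBarrier(1, &barrier);
        }
    }
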
  10. I was able to clarify things via mail communication with Peter-Pike himself :-) His answers are listed below:

    1) "Light color" in computer graphics is more often the emitted radiance; incident radiance is the projection of all the lights (and possibly indirect light). The key distinction is that it is light arriving at some location. A point (or spherical) light will have incident radiance that is just 1/d^2 in general.

    2) If incident radiance is in the range [0,1], then outgoing radiance will always be in the range [0,1] if you have energy conservation - i.e. BRDFs are normalized (so diffuse is albedo/pi - I'll get to this more later), etc. This is just a result of the physics of light transport (otherwise you could generate "free energy").

    3) So "normalization" is a bit overloaded - for diffuse reflectance, the 1/pi normalizes the cosine term (the integral of a cosine over the hemisphere is pi, so the "normalized cosine kernel", which is cos/pi, integrates to 1).

    4) I am not sure I am totally following you, but if you look at the diffuse reflectance equation, integral( albedo * L(s) * cos( theta ) / pi ), you can factor the albedo (a [0,1] number that is just the diffuse reflectance) out of the integral, and you are left with two other terms (the above integral is in a coordinate system where the normal is Z). The whole idea of the "normalization" section is that if you have some type of light projected into SH, you want to solve for an "intensity" so that it reflects unit radiance. If this is a directional light, L(s) is just sum_i( y_i(d) y_i(s) ), where d is the direction to the light. T(s) is just the cos(theta)/pi term from the reflectance - so it is the projection of a normalized cosine into SH. The reflectance integral reduces to the dot product of the light projected into SH (L(s)) and the reflectance (T(s) in the paper), due to the orthogonality of the basis (the integral of the product of two functions expressed in an orthogonal basis is just the dot product of their coefficient vectors). So that section is about how to reason about the values to assign to different light types so that they reflect "unit radiance" with a diffuse material. This is done just by making the normal point at the light and solving for the unknown intensity "c" from that equation (V is really "T" - and the symmetry of the light types means we only need to look at the ZH basis functions).

    5) What I said above: it is how you make an energy-conserving diffuse BRDF (albedo/pi); cos/pi is the "normalized clamped cosine function" (clamped because it really is max[0,cos]/pi, i.e. just integrated over a hemisphere).

    6) This "normalization" is just a way to pick "[0,1]-ish numbers" to assign as light intensities. If you are doing real PBR, you don't do any of this stuff. Though in practice we do something similar - the values you assign to a light are specified so that a diffuse material with unit albedo pointed at the light 10 meters (or some fixed distance) away reflects some amount of light (specified in f-stops generally). This is useful whether you use SH or not, i.e. all of this math can be done with analytic equations, other BRDFs, light sources, etc. The point of that section is to solve for a modifier to a [0,1] value that makes specifying light intensities more intuitive; that's pretty much it.

    Unfortunately I don't have the original document, otherwise I would change 1/dot(L,V) to 1/dot(L,T).
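
    To make the reduction in 4) concrete: since the basis is orthonormal, integral( L(s) * T(s) ds ) = sum_i( L_i * T_i ) = dot(L, T); requiring the scaled light c * L(s) to reflect unit radiance then means c * dot(L, T) = 1, i.e. c = 1 / dot(L, T), which is the corrected form of the 1/dot(L,V) expression mentioned above.
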
  11. Hello! I am struggling to digest the Normalization section (page 12) of Stupid Spherical Harmonics by Sloan. In essence, I do not understand the need for normalization or how the author derives the formulas. Can you help me with the following questions please:

    1. Do we represent incident radiance by what we call light color in computer graphics?
    2. If we use incident radiance in the range from 0 to 1, is there no guarantee that the output radiance will be in the range from 0 to 1 as well? Why?
    3. What does Sloan actually mean by normalization? Is it the procedure which guarantees that the output radiance will be in the range from 0 to 1?
    4. He provides an integral which computes a scale factor c, where c is found to be 1/dot(L, V). What is V? Is it the output radiance direction? He does not expand much on this. How was this derived?
    4. The transfer function when we compute output radiance includes both the cosine lobe and the BRDF. Sloan does not seem to include the BRDF contribution. Even if we have a diffuse surface where the BRDF is constant, we should include the material albedo. How do we treat it then?
    5. He mentions the term "normalized clamped cosine in SH". What does "normalized" mean for the clamped cosine function?
    6. Sloan talks about the "normalized clamped cosine" function. It feels like it should be sufficient to use only this function for the normalization. Should the incident radiance function not participate in the normalization factor computation?

    Thanks
  12. fries, now it is clear. Thanks for the great explanation!
  13. Hello guys! I am having a hard time trying to understand the subject. Sloan, in Stupid Spherical Harmonics (the Projection from Cube Maps section), provides pseudo-code for the implementation.

    1. He mentions that the result has to be normalized, and I do not really understand what he means here (the 4*PI/fWtSum multiplier).
    2. Shouldn't multiplying by the solid angle of the projected cube map texel on the unit sphere be sufficient?

    Also, I found another implementation of the procedure in the Physically Based Rendering book by Matt Pharr. The authors expand more on how the solid angle on the unit sphere is related to the texel area in the cube map, using the formula dW = dA / (x*x + y*y + 1)^(3/2). Their implementation looks the following way:

    Vector w(u, v, 1.0f);
    float dA = 1.0f / powf(Dot(w, w), 3.0f / 2.0f);
    coeffs[k] += texelColor * Ylm[k] * dA * (4.0f * resolution * resolution);

    3. I still cannot see how they calculate dA (the texel area) in this code snippet. Is it really the texel area (if their original notation is to be believed)? It looks like dA is in fact the solid angle dW.
    4. Again, I do not really understand the role of the multiplier (4.0f * resolution * resolution).

    I would be very appreciative if you could provide some insight into the derivation behind the integration process. Thanks!
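
    For reference, a minimal sketch (not Sloan's or PBRT's exact code) of projecting one cube map face with an explicit per-texel solid angle; EvalSH9 is a hypothetical 9-term SH basis evaluation, and texels are assumed to be row-major RGB floats:

    #include <math.h>

    void EvalSH9(const float dir[3], float sh[9]); // assumed to exist elsewhere

    // Projects the +Z face of an N x N cube map into 9 SH coefficients and
    // accumulates the total solid angle covered by the texels.
    void ProjectPosZFaceIntoSH(const float* texels, int N, float shCoeffs[9][3], float* weightSum)
    {
        for (int t = 0; t < N; ++t)
        {
            for (int s = 0; s < N; ++s)
            {
                // Texel center in [-1,1]^2 on the face plane at distance 1.
                float u = 2.0f * (s + 0.5f) / N - 1.0f;
                float v = 2.0f * (t + 0.5f) / N - 1.0f;

                // Solid angle of the texel: planar area (2/N)^2 times the
                // projection factor 1 / (u^2 + v^2 + 1)^(3/2).
                float dW = 4.0f / (N * N * powf(u * u + v * v + 1.0f, 1.5f));
                *weightSum += dW;

                // Direction through the texel center (+Z face).
                float invLen = 1.0f / sqrtf(u * u + v * v + 1.0f);
                float dir[3] = { u * invLen, v * invLen, invLen };

                float sh[9];
                EvalSH9(dir, sh);

                const float* color = texels + 3 * (t * N + s);
                for (int k = 0; k < 9; ++k)
                    for (int c = 0; c < 3; ++c)
                        shCoeffs[k][c] += color[c] * sh[k] * dW;
            }
        }
        // After all six faces, Sloan's 4*PI / fWtSum factor rescales the
        // coefficients so the accumulated per-texel solid angles sum exactly
        // to the area of the unit sphere, compensating for the approximation
        // made when estimating each texel's solid angle.
    }
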
  14. Thank you guys for clarifying things - I was a little bit confused.
  15. Hello! I have been working on a light-weight implementation of a custom OBJ file loader :-) To be specific, I am attempting to load the Sibenik Cathedral from http://graphics.cs.williams.edu/data/meshes.xml. I have noticed that some texture coordinates in the OBJ file are negative (that is, outside the range [0, 1]), such as:

    vt -0.764533 0.452165 0.000000
    vt -0.767727 0.428138 0.000000
    vt -0.783781 0.555808 0.000000
    vt -0.764533 0.547835 0.000000

    How do you interpret the negative values here? Thank you!
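
    A minimal sketch of the two usual ways to handle coordinates outside [0, 1], assuming wrap/repeat is the intended behaviour (so u = -0.764533 addresses the same place as u = 0.235467):

    #include <math.h>
    #include <d3d12.h>

    // Option 1: keep the values as-is and let the sampler wrap them.
    D3D12_SAMPLER_DESC MakeWrapSampler()
    {
        D3D12_SAMPLER_DESC desc = {};
        desc.Filter   = D3D12_FILTER_MIN_MAG_MIP_LINEAR;
        desc.AddressU = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
        desc.AddressV = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
        desc.AddressW = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
        desc.MaxLOD   = D3D12_FLOAT32_MAX;
        return desc;
    }

    // Option 2: remap into [0, 1) in the loader, if the pipeline expects that.
    float WrapTexCoord(float t)
    {
        return t - floorf(t); // -0.764533f -> 0.235467f
    }
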