DaOnlyOwner

Member
  • Content Count: 8
  • Joined

  • Last visited

Community Reputation: 135 Neutral

About DaOnlyOwner

  • Rank: Newbie

Personal Information

  • Interests: |programmer|
  1. For performance reasons, Sparse Voxel Octrees should not be used: updating the octree simply takes too long when a light moves or when there are moving objects. Volume textures are easier to handle, but updating them takes even longer because you have to invalidate everything. That is why Nvidia uses clipmaps, I guess, where moving things doesn't force a full revoxelization. Thanks for the link to The Tomorrow Children though, I'm having a look at it.
  2. Thank you, JoeJ. I think I will just evaluate a constant then. I agree, but Nvidia's VXGI implementation uses cone tracing too (though with a different approach for storing the voxels), and they achieve some fairly pleasant results, even for diffuse GI.
  3. Hey guys, I am implementing the following paper (Interactive Global Illumination using Voxel Cone Tracing): https://hal.sorbonne-universite.fr/LJK_GI_ARTIS/hal-00650173v1 (the paper can be downloaded for free if you look around). Basically, the author suggests storing radiance, color etc. in the leaves of an octree, mipmapping that into the higher levels of the tree, and then using cone tracing to compute two-bounce global illumination. The octree is ready, and I now want to inject radiance into the leaf nodes. For this task I use the suggested method and render a "light-view" map from the perspective of the light. I use physically based materials, so the actual computation cannot be precalculated, because solving the rendering equation for a specific voxel also depends on the viewing and light directions. I have seen some implementations that just use the Lambertian BRDF as a simplification, but that will likely worsen the quality of the resulting frame, wouldn't it? My idea is to calculate the result (using the BRDF from UE4) for more than one viewing direction and interpolate between them at runtime; this process would have to be repeated whenever a light changes. So my question is: how should I handle this problem? Or should I just use the Lambertian BRDF and not care? (A sketch of that fallback follows the post list below.) Thanks
  4. Thank you, I use that method. However, the problem I face is that with a BSSRDF you have to integrate over the surface area and over the incoming rays of the scattering part. This changes the rendering equation (for IBL), so I can't use the method described by Karis, right?
  5. Hello guys, I have a fully working PBR and IBL implementation and now want to integrate translucency into my engine. I read up on translucency and decided to go with https://colinbarrebrisebois.com/2011/03/07/gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurface-scattering-look/. I don't understand how this works with IBL: a point light is fine, but what do you do with an environment map? My understanding of a BSSRDF is that a fragment is shaded according to many light rays entering at various points on the surface and scattering through the medium, plus the usual diffuse and specular parts. I can see how the thickness map approximates that when there is only a point light and a constant ambient term. An environment map, however, would have to be sampled many times, because the ambient term is no longer constant, so the light rays scattering through the surface carry a different color every time. I think the described method doesn't fit a PBR implementation, does it? So my question: how can I implement this method with IBL, and are there better approaches to translucency that fit PBR better? (A sketch of the point-light version follows the post list below.) Thanks
  6. Okay, so it turns out that mat3 builds matrices in column-major order, i.e. the vector arguments become columns. So if I do mat3(Tangent, Bitangent, Normal) I am actually already assigning the transposed matrix (see the small illustration after the post list below). Anyway, thank you.
  7. Okay, so in my shader I do this (Z-up, objects have no transformations):
     [...] out mat3 TBN [...]
     TBN = mat3(0,1,0, 0,0,1, 1,0,0) * transpose(mat3(Tangent, Bitangent, Normal));
     So mat3(Tangent, Bitangent, Normal) should build this:
     [ Tangent.x    Tangent.y    Tangent.z   ]
     [ Bitangent.x  Bitangent.y  Bitangent.z ]
     [ Normal.x     Normal.y     Normal.z    ]
     and after the transpose() I should end up with
     [ Tangent.x  Bitangent.x  Normal.x ]
     [ Tangent.y  Bitangent.y  Normal.y ]
     [ Tangent.z  Bitangent.z  Normal.z ]
     so now I should be in world space.
     1) I am using M * V, assuming M is the change-of-basis matrix and V is the normal.
     2) Right-handed coordinate system, Z-up.
     Thank you ;)
  8. Hi guys, I am trying to teach myself graphics programming, but sometimes I stumble over some things. I am trying to convert a normal sampled from a normal map into world space. My thoughts on this:
     A vector x can be expressed as a linear combination of three scalar values a1, a2, a3: x = e1*a1 + e2*a2 + e3*a3 = [e1, e2, e3] * (a1, a2, a3), where e1, e2, e3 are vectors that form a basis of a vector space.
     Assume M is the change-of-basis matrix that converts from the basis [e1, e2, e3] to [f1, f2, f3], i.e. M * [e1, e2, e3] = [f1, f2, f3]. To get M I do the following: M = [f1, f2, f3] * [e1, e2, e3]^T   (1)
     In this special case let [e1, e2, e3] be [Tangent, Bitangent, Normal] (they are basis vectors, right?). To convert a sampled vector v = [Tangent, Bitangent, Normal] * v' to world space I do the following:
     M * [Tangent, Bitangent, Normal] * v' = [W1, W2, W3] * v'
     => M * v = [W1, W2, W3] * v'
     =(1)=> [W1, W2, W3] * [Tangent, Bitangent, Normal]^T * v =? "point in world space"
     However, this produces wrong results. I read online that M should simply be [Tangent, Bitangent, Normal], so I implemented it that way and it works. But I also want to understand the math behind it. So why is my approach not working, and what is the math behind the correct solution? (A short derivation follows the post list below.) Thank you guys
  9. Okay, thank you :D So I am just going to follow the implementation from the course notes.
  10. Hey guys, I know there have been a lot of threads about this topic, but there are still some things I don't really understand. My goal is to implement image-based lighting and physically based shading. To do this I have to solve the lighting integral. I am following the Epic Games course notes, but one thing bothers me: they apply a Monte Carlo estimator to the equation using importance sampling. So far so good. Then they split the sum into two sums and multiply those together so that both can be precomputed, which is an approximation. Why can't you just calculate it so that you end up with: [attachment=33361:CodeCogsEqn.gif] (essentially splitting the integral around the sum)? The first integral can be precalculated with spherical harmonics (to which I have a question too, but I might ask that in another post) and is exactly the same as in Epic's presentation. The second integral differs from Epic's course notes in that it has Li(l) in it, but that shouldn't be a problem, because fspec depends on l too, so it doesn't add a new dependency. In the fragment shader you would just look up both textures with the appropriate values and add the results together. Can somebody please help me? I have only worked with Phong shaders and OpenGL 2.0 in the past, and I don't understand why this seems so complicated to me. (The split-sum approximation from the course notes is written out after this list.) Thank you very much. DaOnlyOwner
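
Regarding post 3: below is a minimal GLSL sketch of the Lambertian fallback discussed there, i.e. injecting direct radiance from the light-view map into a leaf-level voxel volume. It is not code from the paper; it assumes a single directional light and a light-view map that stores world position, normal and albedo per texel, and every name in it (lightWorldPos, voxelRadiance, volumeOrigin, ...) is a placeholder.

#version 450
layout(local_size_x = 16, local_size_y = 16) in;

// Light-view ("reflective shadow map" style) buffers, one texel per visible surface point.
layout(binding = 0) uniform sampler2D lightWorldPos;
layout(binding = 1) uniform sampler2D lightNormal;
layout(binding = 2) uniform sampler2D lightAlbedo;
// Leaf-level radiance volume (a volume texture stands in for the octree leaves here).
layout(binding = 3, rgba16f) uniform writeonly image3D voxelRadiance;

uniform vec3  lightDir;      // unit vector from the surface toward the directional light
uniform vec3  lightColor;
uniform vec3  volumeOrigin;  // world-space minimum corner of the voxel volume
uniform float voxelSize;     // edge length of one leaf voxel

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);

    vec3 P      = texelFetch(lightWorldPos, texel, 0).xyz;
    vec3 N      = normalize(texelFetch(lightNormal, texel, 0).xyz);
    vec3 albedo = texelFetch(lightAlbedo, texel, 0).rgb;

    // Lambertian assumption: outgoing radiance is view-independent, so a single
    // RGB value per leaf voxel is enough and no viewing direction is needed here.
    vec3 radiance = (albedo / 3.14159265) * lightColor * max(dot(N, lightDir), 0.0);

    // Write into the leaf voxel containing this surface point
    // (averaging / atomic accumulation of texels that land in the same voxel is omitted).
    ivec3 voxel = ivec3((P - volumeOrigin) / voxelSize);
    imageStore(voxelRadiance, voxel, vec4(radiance, 1.0));
}

With a view-dependent BRDF like UE4's, the stored value would have to become directional (for example the small set of viewing directions to interpolate between that the post proposes), which is exactly the trade-off the question is about.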
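
Regarding post 5: a GLSL transcription of the per-light translucency term from the linked GDC 2011 article, as a sketch (the original is HLSL; names are shortened and the light-attenuation factor is left out).

// N, V, L are the usual unit vectors (L points from the surface toward the light).
// thickness comes from the precomputed local-thickness map of the article;
// distortion, power, scale and ambient are the artist-tweakable constants.
vec3 translucency(vec3 N, vec3 V, vec3 L, vec3 lightColor, vec3 albedo,
                  float thickness, float distortion, float power,
                  float scale, float ambient)
{
    vec3  LTLight = L + N * distortion;   // bend the light vector "through" the surface
    float LTDot   = pow(clamp(dot(V, -LTLight), 0.0, 1.0), power) * scale;
    float LT      = (LTDot + ambient) * thickness;
    return albedo * lightColor * LT;      // added on top of the regular diffuse/specular result
}

For the IBL part this is only an untested idea, not something from the article: the constant ambient term could be replaced by a sample of the diffuse irradiance map in the direction -N (the light arriving at the back of the surface), but whether that approximation is good enough is exactly what the post is asking.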
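
Regarding posts 6 and 7: a small self-contained illustration of the column behaviour of the GLSL mat3 constructor (the function and variable names are arbitrary; T, B and N are assumed to be world-space and roughly orthonormal).

// mat3(c0, c1, c2) takes its arguments as COLUMNS, so mat3(T, B, N) already maps
// tangent-space coordinates to world space: TBN * n = T*n.x + B*n.y + N*n.z.
vec3 tangentToWorld(vec3 n, vec3 T, vec3 B, vec3 N)
{
    mat3 TBN = mat3(normalize(T), normalize(B), normalize(N));
    return normalize(TBN * n);
    // transpose(TBN) goes the other way (world -> tangent) for an orthonormal basis,
    // which is why the extra transpose() in post 7 undid the intended transform.
}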
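
Regarding post 8: one way to write down why M = [Tangent, Bitangent, Normal] is already the whole transform, assuming T, B and N are expressed in world coordinates and the value n' sampled from the normal map is a coordinate triple with respect to that basis:

\mathbf{n}_{\text{world}}
  = n'_x \mathbf{T} + n'_y \mathbf{B} + n'_z \mathbf{N}
  = \underbrace{\begin{bmatrix} \mathbf{T} & \mathbf{B} & \mathbf{N} \end{bmatrix}}_{M}
    \begin{pmatrix} n'_x \\ n'_y \\ n'_z \end{pmatrix}

The formula M = [f1, f2, f3] * [e1, e2, e3]^T assumes both bases are written out in one common coordinate system. Taking that system to be world space, [f1, f2, f3] is the identity, so the formula yields [T, B, N]^T, which for an orthonormal basis is the inverse map (world space to tangent space). Applying it to the sampled normal, which is already a tangent-space coordinate triple rather than a world-space vector, is what produced the wrong results.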
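
For post 10, here is the split-sum approximation from the Epic Games course notes written out (the integral is over the hemisphere, l_k are the N importance-sampled light directions and p is their PDF):

\int_{\Omega} L_i(\mathbf{l})\, f(\mathbf{l},\mathbf{v})\, \cos\theta_{\mathbf{l}} \, d\mathbf{l}
\;\approx\; \frac{1}{N}\sum_{k=1}^{N}
  \frac{L_i(\mathbf{l}_k)\, f(\mathbf{l}_k,\mathbf{v})\, \cos\theta_{\mathbf{l}_k}}{p(\mathbf{l}_k,\mathbf{v})}
\;\approx\; \left(\frac{1}{N}\sum_{k=1}^{N} L_i(\mathbf{l}_k)\right)
            \left(\frac{1}{N}\sum_{k=1}^{N}
              \frac{f(\mathbf{l}_k,\mathbf{v})\, \cos\theta_{\mathbf{l}_k}}{p(\mathbf{l}_k,\mathbf{v})}\right)

The first bracket is what gets baked into the pre-filtered environment map and the second into the 2D environment-BRDF lookup texture; the question in the post is whether the alternative split in the attached equation avoids the error introduced by multiplying the two averages.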