neoragex

Member
  1.   The purpose of linear auto-exposure (linear AE) is to adjust the overall brightness level across a wide dynamic range. It matters first and foremost because the human eye is much more sensitive to brightness than to color/hue. The point here is: NEVER talk about tonemapping before you have proper exposure.

Meanwhile, the purpose of non-linear tonemapping is to adjust your eye's perception of color and hue within a much narrower dynamic range, given a specific overall brightness level. Common standard tonemapping methods include Reinhard, filmic, Uncharted 2, ACES, etc. Most of the time the quality of tonemapping is very subjective; it's all about personal preference. Compared with the AE op, tonemapping usually involves dynamic range compression and contrast adjustment, which is why we call it a 'non-linear' operation. A minimal sketch of the two stages follows below.
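
A minimal GLSL sketch of the two stages, assuming the exposure scalar comes from a separate auto-exposure pass (this is my own illustration, not any particular engine's code):

    vec3 applyExposureAndTonemap(vec3 hdrColor, float exposure)
    {
        vec3 exposed = hdrColor * exposure;        // linear AE: one scale for the whole range
        vec3 mapped  = exposed / (exposed + 1.0);  // Reinhard: non-linear range compression
        return pow(mapped, vec3(1.0 / 2.2));       // gamma for display
    }

Note how the exposure multiply is a pure linear scale, while the Reinhard curve compresses the range differently at every brightness level.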
  2.   Yes. An iso-voxel in the emittance volume has only one solid attribute, but an aniso-voxel has multi-directional attributes. One common representation of an aniso-voxel is 6-axis color (see the sketch below), but I think the SH representation is more appropriate for this situation. Rumor has it that aniso-voxels help prevent the exaggerated diffuse color bleeding in a typical SVO-style GI setup. Unfortunately I have not tried it, because of its terrible memory cost, especially when you don't have a decent out-of-core voxel management mechanism. Maybe the VTR feature in DX11.3 is the cure! :P

Sorry for my rushed reply, JoeJ. I haven't finished reading your reply. I need more time to think about your suggestions, and I will post my responses in this thread. :)
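
For reference, a 6-axis color voxel is usually sampled by blending the three faces the cone direction points at, weighted by the squared direction components; a minimal sketch (all names are mine):

    // Blend the three facing attributes by dir^2; the squared components
    // of a normalized direction sum to 1, so the weights are normalized.
    vec4 sampleAnisoVoxel(vec4 posX, vec4 negX, vec4 posY, vec4 negY,
                          vec4 posZ, vec4 negZ, vec3 dir)
    {
        vec3 w = dir * dir;
        return w.x * (dir.x >= 0.0 ? posX : negX)
             + w.y * (dir.y >= 0.0 ? posY : negY)
             + w.z * (dir.z >= 0.0 ? posZ : negZ);
    }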
  3.   I don't think doing simple forward shading without shadow mapping during voxelization is a proper method, because it ignores the occlusion relationships between voxels from the start. In my opinion, close attention should be paid to every place that might affect the quality of occlusion. In fact, decent maintenance of occlusion relationships is the very gist of VCT compared with other RTGI solutions such as LPV, RSM, deep G-buffers, etc. I also don't think the subsequent aniso-mipmap filtering is enough to compensate for the loss of occlusion info, because it is too coarse compared with the shadow map...

On the other hand, doing full forward shading during voxelization (with shadow maps) is also not recommended because of the redundant fragment rasterization. The performance loss caused by fragment shading during voxelization would be too heavy to afford, especially when using some advanced shading technique such as PBR.

The RSM-style way is quite simple because the light's shadow map is already at my disposal, so I decided to use RSM for the light injection of the first bounce (a sketch of the idea is below). Of course, given the limitations of RSM, the direct light injection result is not pretty. My strategy to overcome this is to compute a second bounce: after RSM injection, I dispatch one diffuse VCT (4 lobes are enough) from every voxel that has non-null opacity to get its 2nd-bounce light injection. By doing so I find the quality of my light injection is greatly improved.
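
To make the RSM injection idea concrete, here is a minimal GLSL compute sketch of the first-bounce injection; all names and bindings are my own assumptions, and it glosses over the case of several RSM texels landing in one voxel:

    #version 450
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(binding = 0) uniform sampler2D rsmFlux;      // reflected flux per RSM texel
    layout(binding = 1) uniform sampler2D rsmWorldPos;  // world-space position per texel
    layout(binding = 2, rgba16f) uniform image3D emittanceVolume;
    uniform vec3  volumeOrigin;   // world-space min corner of the voxel volume
    uniform float voxelSize;      // world-space size of one voxel

    void main()
    {
        ivec2 texel   = ivec2(gl_GlobalInvocationID.xy);
        vec3 worldPos = texelFetch(rsmWorldPos, texel, 0).xyz;
        vec3 flux     = texelFetch(rsmFlux, texel, 0).rgb;
        ivec3 voxel   = ivec3((worldPos - volumeOrigin) / voxelSize);
        // NOTE: a real implementation needs atomic averaging here, since
        // many RSM texels can map to the same voxel.
        imageStore(emittanceVolume, voxel, vec4(flux, 1.0));
    }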
  4.   You can't. You will need 'linear exposure' + 'non-linear tonemapping'. Reinhard is only one tonemapping method.
  5. Hello guys :D  I am back. During the past few days I worked hard to make some progress, with little time left to update my test results. So here is what I have got, which gives me some confidence to continue this work.

I have tried JoeJ's suggestion about MSAA coverage density. Unfortunately, my results show that its contribution to visual quality is not very obvious. This leads me to think that maybe corrected edge opacities (namely, voxel anti-aliasing) are not so crucial for the quality of the results. I am not sure, but that is what I got.

One of the most important things for quality is the light injection and the occlusion of the aniso-voxels (and their mipmaps). I used a traditional RSM way to inject the light. Although I am not satisfied with the light-injected emittance volume and its VCT specular results, I find the VCT diffuse results are not bad, to a certain degree. Below is a demo shot of my VCT specular. Does anybody have experience with improving that?

I heard there exist some possible solutions to improve VCT quality by using an SH opacity volume with an SH representation (SEGI) or 6-face aniso-opacity volumes (Tomorrow's Children), etc. But I am not sure which way is better for me. What's your opinion? :)
  6.   Hmmm... This idea sounds like coverage-to-alpha. Sorry for my late reply. I will have a quick try and post a more detailed description of the problem and the result here. :)
  7.   Yes. Just like the traditional SVO way, I used the diffuse texture's transparency for the voxel opacity, and I also implemented the 6-face aniso-mipmapping. But the cone tracing result is still not good enough... I already realized that I should use associated color (premultiplied alpha) for this, but I don't know what exactly 'alpha-blend multiple fragments that fall on one voxel' means. Would you be kind enough to show me more details? (My current understanding is sketched below.)

I also noticed that Tomorrow's Children used a 6-face surface voxelization method, which treats the mesh itself as an aniso-voxelized (instead of iso-voxelized) volume. They also pointed out that solid voxelization would be greatly helpful for the cone tracing. Does anybody have related experience with this approach?

Yes, I do realize there may be some relationship between the MSAA coverage and the real occupancy of the voxel. But the question is: HOW to implement this? In fact, I have used 8x multi-sampling in my GPU voxelization implementation, but so far I have not worked out an effective way to derive the correct opacity (density) of voxels from their 2D sub-pixel coverage.

What I am really concerned with is IMPROVING the quality of the voxel cone tracing. If a traditional GPU voxelization is the best that I can do, I will go with it. Maybe I should focus on the cone tracing instead? What do you think? :ph34r:
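
In case it helps others following along, my understanding of 'alpha-blend multiple fragments that fall on one voxel' is the atomic running-average trick from Crassin & Green's sparse voxelization chapter in OpenGL Insights; a minimal sketch (the image name and layout are my own):

    // Average all fragments landing in one voxel via a compare-and-swap
    // loop on an r32ui image; the alpha channel stores the fragment count.
    layout(r32ui) coherent volatile uniform uimage3D voxelImage;

    void imageAtomicAvgRGBA8(ivec3 coord, vec3 color)
    {
        uint newVal  = packUnorm4x8(vec4(color, 1.0 / 255.0));
        uint prev    = 0u;
        uint current = imageAtomicCompSwap(voxelImage, coord, prev, newVal);
        while (current != prev)
        {
            prev = current;
            vec4 rgba = unpackUnorm4x8(current);
            float n   = rgba.a * 255.0;                     // fragments blended so far
            vec3 avg  = (rgba.rgb * n + color) / (n + 1.0); // running average
            newVal    = packUnorm4x8(vec4(avg, (n + 1.0) / 255.0));
            current   = imageAtomicCompSwap(voxelImage, coord, prev, newVal);
        }
    }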
  8. Hi everyone, recently I have been working on my SVO-style GI system and I have a big problem: HOW to get a correct opacity (instead of a binary occupancy) per voxel during GPU voxelization, which is crucial for the quality of the volumetric cone tracing. I am using a traditional 6 main-axis voxelization method with multi-sampling, and I am wondering if I can get a non-binary opacity per voxel from fragment voxelization (one idea is sketched below). I also noticed that the GDC talk on Tomorrow's Children introduced some interesting voxelization methods, but unfortunately they didn't give details about the calculation of voxel occupancy. Any suggestions would be appreciated!
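
One direction I have been considering, as an unverified sketch: derive a fractional opacity from the MSAA coverage mask inside the voxelization fragment shader by counting the set sample bits.

    // Fragment-shader snippet: gl_SampleMaskIn holds one bit per covered
    // sample, so bitCount / sampleCount approximates the fraction of the
    // projected texel footprint the triangle actually covers.
    float coverageToOpacity()
    {
        int covered = bitCount(gl_SampleMaskIn[0]);
        return float(covered) / 8.0;   // assuming 8x MSAA
    }

The per-fragment opacity would then still need to be accumulated per voxel, since several fragments from different triangles can land in the same voxel.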
  9.   Thanks for your hints @agleed! Your reply about the projection window makes the answer far clearer. But I think I still found some small bugs in the explanation.

I do think the world-space area covered by a single texel in a shadow map may not be constant, considering the perspective projection. It depends on the linear depth (in the light's coordinate system) of every single texel in the shadow map. So the 'lightPos.z' in the above code does not stand for the distance between the light and a FIXED PROJECTION WINDOW; it is the linear depth (in the light's coordinate system) of the corresponding texel in the shadow map.

What do you think? :)
  10. Does anybody know how to calculate the world-space area covered by one texel in a shadow map (for a directional light or spot light)? I remember some paper gave a MAGIC formula to calculate this, such as:

    float calculateSurfelAreaLight(vec3 lightPos)
    {
        return (4.0 * lightPos.z * lightPos.z * f_tanFovXHalf * f_tanFovYHalf) / (i_RSMsize * i_RSMsize);
    }

But I can't find the full derivation of this formula. How is it derived? What is the relationship between the area and the light's FOV? Any suggestions or paper references about this would be greatly appreciated!
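
For what it's worth, here is my own attempt at the derivation, assuming a spot light's perspective projection (for a directional light's orthographic projection the per-texel area would simply be constant): at linear depth z, the frustum's cross-section (the 'projection window') is 2*z*tan(fovX/2) wide and 2*z*tan(fovY/2) high, and dividing its area by the RSMsize x RSMsize texel grid gives exactly the formula above.

    // Sketch of the derivation, step by step:
    //   frustum width  at depth z:  w(z) = 2 * z * tan(fovX / 2)
    //   frustum height at depth z:  h(z) = 2 * z * tan(fovY / 2)
    //   cross-section area:         A(z) = w * h = 4 * z^2 * tanX * tanY
    //   per-texel area:             A(z) / (RSMsize * RSMsize)
    float texelWorldArea(float linearDepthZ, float tanFovXHalf,
                         float tanFovYHalf, float rsmSize)
    {
        float w = 2.0 * linearDepthZ * tanFovXHalf;
        float h = 2.0 * linearDepthZ * tanFovYHalf;
        return (w * h) / (rsmSize * rsmSize);
    }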
  11. @JoeJ and @Frenetic Pony: Thanks! Your suggestions will be very helpful. I will take a look and give it a try.
  12.   Agreed. It may be a problem. The spheremap seems like a very raw representation of a texel's radiance. I can't see its advantage over a proper spherical representation, except for its efficiency, which comes at the cost of accuracy. :wink:
  13. Every texel in The Order: 1886's lightmap is an SG with 9 lobes. Note that one SG lobe needs 3 parameters: direction (2 scalars), radiance (3 scalars), and width (1 scalar), so we need 6 scalars to represent one lobe. In practice, The Order: 1886 hardcoded the direction and width of every lobe to cover the sphere uniformly, so they only need 9 (lobes) x 3 (radiance) = 27 scalars per texel. Note that these 27 lobe coefficients can be HDR values. When baking, a 3x3 env map (3x3x6 = 36 scalars) per texel is not as easy to store in the lightmaps as the 27 SG scalars. Also note that when sampling the env map, bilinear filtering cannot cross a face boundary; the SG is in fact more suitable than an env map for representing a spherical function under interpolated evaluation. When doing the lighting evaluation, the SG representation is also far more efficient than an env map by exploiting the orthogonality of its basis, according to The Order: 1886 course (a sketch of the evaluation is below). I also noticed that The Order used SG mostly for rough materials. So in my limited opinion, as long as it is better than SH & the H-basis, it is a win... All of the above is why I think SG is better than an env map. :) Sorry for my rushed reply...
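
To illustrate why the runtime evaluation is cheap, a minimal sketch of a 9-lobe SG reconstruction, assuming the lobe axes and a shared sharpness were fixed at bake time as described above (all names are mine):

    // One SG lobe is amplitude * exp(sharpness * (dot(axis, dir) - 1));
    // with axes and sharpness hardcoded, only the 9 RGB amplitudes
    // (27 scalars) are stored per lightmap texel.
    const int NUM_LOBES = 9;
    uniform vec3  sgAxes[NUM_LOBES];   // hardcoded uniform sphere coverage
    uniform float sgSharpness;         // shared lobe width

    vec3 evaluateSG(vec3 amplitudes[NUM_LOBES], vec3 dir)
    {
        vec3 radiance = vec3(0.0);
        for (int i = 0; i < NUM_LOBES; ++i)
            radiance += amplitudes[i] * exp(sgSharpness * (dot(sgAxes[i], dir) - 1.0));
        return radiance;
    }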
  14. Thanks @JoeJ! Your explanation of the HL2 ambient cube is pretty clear. What I really want to do is improve the quality of my irradiance volume, so I have to find some transfer-basis solution to represent (and compress) the irradiance cubes. I have implemented it with SH3 for now, but I'm quite worried about the 'negative ringing artifacts' and the 'low-band-filtered, directionless property' of SH under HDR lighting, just like MJP said in another thread. I have also found the quality of my irradiance volumes hard to improve, even though I have some not-really-bad results (see the attachment). It's hard to tune, very sensitive to volume position, and shows a lot of light leaks and banding artifacts. I haven't implemented any raytracing or lightmap solution yet. Is it possible to get proper diffuse indirect lighting ONLY BY baking irradiance volumes (with some fancy transfer-basis solution)? Or MUST I implement some lightmap solution such as radiosity or raytracing to get that? (One idea I may try against the ringing is sketched below.)

Any suggestions?
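
One thing I may try against the ringing, sketched below after the 'windowing' idea from Sloan's Stupid SH Tricks (the window shape and parameter here are my own guesses): attenuate the higher SH bands before evaluation, trading some directionality for fewer negative lobes.

    // Hann-style window over the L1 and L2 bands of 9 RGB SH coefficients;
    // band l occupies indices l^2 .. l^2 + 2l. Smaller windowL attenuates
    // the higher bands more aggressively.
    void windowSH9(inout vec3 sh[9], float windowL)
    {
        for (int l = 1; l <= 2; ++l)
        {
            float w   = 0.5 + 0.5 * cos(3.14159265 * float(l) / windowL);
            int first = l * l;
            for (int m = 0; m < 2 * l + 1; ++m)
                sh[first + m] *= w;
        }
    }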
  15. Thanks for your kind reply @danybittel :)  I realize my description of the question was poor... :unsure:  so I have modified my thread to make it clearer. What I really want to do is improve the quality of my diffuse indirect lighting.

Am I right that your answer is exactly Half-Life 2's ambient cube interpolation (aka HL2 basis interpolation)? I found some illustrations of its results. I'm not quite satisfied with its quality, and I am also not quite sure how well it performs under HDR lighting... But what surprises me is that Tom Clancy's The Division (by Ubisoft, 2016) also adopted this method to interpolate their indirect-irradiance cubes... Could anybody comment on this? (My understanding of the ambient cube evaluation is sketched below.)
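
For reference, my understanding of the HL2 ambient cube evaluation itself, as a minimal sketch (the probe interpolation would then just blend these six colors, e.g. trilinearly):

    // Six irradiance colors (+X, -X, +Y, -Y, +Z, -Z) blended by the
    // squared normal components, which sum to 1 for a unit normal.
    vec3 evaluateAmbientCube(vec3 cube[6], vec3 n)
    {
        vec3 nSq = n * n;
        return nSq.x * (n.x >= 0.0 ? cube[0] : cube[1])
             + nSq.y * (n.y >= 0.0 ? cube[2] : cube[3])
             + nSq.z * (n.z >= 0.0 ? cube[4] : cube[5]);
    }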