About ChenMo

  • Rank

Personal Information

  • Role
  • Interests

Recent Profile Visitors

830 profile views
  1. Thanks for sharing, JoeJ. After reading your post, I added another ray attribute to my RLSL ray shader that carries the albedo only. I multiply the surface albedo into rl_InRay.albedo at each bounce. Then, at the accumulation stage, I accumulate rl_InRay.color * rl_InRay.albedo for bounced rays and rl_InRay.color for non-bounced rays. It works well. : ) Here are two images.
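The accumulation scheme in the post above can be sketched in plain Python. This is a hypothetical stand-in, not the actual RLSL shader: the `Ray` class models OpenRL's `rl_InRay` attributes, and `accumulate` is an invented name for the accumulation stage.

```python
from dataclasses import dataclass

@dataclass
class Ray:
    color: list    # radiance carried by the ray (RGB)
    albedo: list   # product of surface albedos picked up along the path
    bounced: bool  # has the ray bounced at least once?

def accumulate(rays):
    """Sum ray contributions: bounced rays are weighted by their
    accumulated albedo product, primary (non-bounced) rays are not."""
    total = [0.0, 0.0, 0.0]
    for r in rays:
        if r.bounced:
            contrib = [c * a for c, a in zip(r.color, r.albedo)]
        else:
            contrib = list(r.color)
        total = [t + c for t, c in zip(total, contrib)]
    return total
```

A white bounced ray attenuated by a 0.5 grey albedo chain contributes half intensity, while a primary ray contributes its color unmodified.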
  2. The two images above are the result of baking without taking albedo into account; there is no bleeding.
  3. Yes, I did not describe it clearly yesterday. : ) I had taken albedo into account when doing the path tracing, so I got color bleeding. I just tried what you mentioned, and the bleeding is gone: no bleeding happens at all. I think there is no way for bleeding to appear, because no albedo was taken into account when baking; all the irradiance was calculated from the light color alone, which is vec3(1.0, 1.0, 1.0) right now. I hope I have described it clearly. : )
  4. Yes, as you mentioned, the albedo resolution will be limited by the lightmap resolution if I take albedo into the irradiance calculation. But I am confused about whether albedo should go into the rendering equation at all. For example, when a ray hits a red area, the reflected color will be red too; but if I don't take albedo into the calculation, the resulting irradiance won't be red, so I lose the red bleeding when sampling the lightmap. I am also reading the post JoeJ pointed me to. It's excellent!
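The distinction being worked out above can be shown with a toy model (the function name is hypothetical, and this is a sketch of the idea, not the thread's actual baker): the receiving texel's own albedo is left out of the baked value, because it will be applied at shading time from the full-resolution albedo map, while the albedos of intermediate bounce surfaces stay in the product, which is exactly what carries color bleeding into the lightmap.

```python
def irradiance_contribution(light_color, bounce_albedos):
    """Contribution of one light path to a baked texel.

    The receiving texel's own albedo is deliberately excluded
    (applied later from the albedo map), but each intermediate
    bounce multiplies in that surface's albedo, preserving bleeding."""
    c = list(light_color)
    for albedo in bounce_albedos:
        c = [x * a for x, a in zip(c, albedo)]
    return c
```

A direct path from a white light stays white, while a path that bounced off a pure red wall arrives red, so red bleeding survives in the baked irradiance even though the receiver's albedo is excluded.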
  5. Yes, having reflections there would be really cool; I think it's worth doing.
  6. Hi JoeJ, I will go read the blog you posted to learn more about it. Thank you!
  7. Hi, guys. I am developing a path-tracing baking renderer based on OpenGL and OpenRL. It can bake scenes like the following, and I am glad it can bake bleeding diffuse color. : ) I store the irradiance directly, like this: albedo * diffuse color has already gone into the irradiance calculation when baking, direct and indirect together. After baking, I use the lightmap directly in the OpenGL fragment shader. I think I have got something wrong, because most game engines don't do it this way. Which kind of data should I store in the lightmaps? I need diffuse only. Thanks in advance!
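One common answer to the question above, offered here as background rather than as the thread's conclusion: store albedo-free irradiance in the lightmap and apply the surface albedo in the fragment shader at run time. A minimal Python sketch of the shading-time reconstruction (`shade_diffuse` is a hypothetical name; in practice this would be a few lines of GLSL):

```python
import math

def shade_diffuse(albedo, irradiance):
    """Diffuse radiance from a baked irradiance lightmap.

    The Lambertian BRDF is albedo / pi, so outgoing radiance is
    (albedo / pi) * E, where E is the irradiance stored in the
    lightmap. Albedo comes from the full-resolution albedo texture,
    so its detail is not limited by lightmap resolution."""
    return [a * e / math.pi for a, e in zip(albedo, irradiance)]
```

Because the albedo multiply happens per fragment, a low-resolution lightmap can still light a high-resolution albedo texture without blurring it.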
  8. You are welcome! I'm happy it could help you. I will come back here if I make some progress. Good luck!
  9. Hi @JoeJ, some time has passed since we last talked in this thread. I have been working on our lightmap project these days. Because you had mentioned the problems with seams, I went looking and found a useful method and implementation posted by Sylvan: https://www.sebastiansylvan.com/post/LeastSquaresTextureSeams/. I hope it may help you. : ) I have integrated it into our project, and it works well after some necessary changes. Here are two images with seams and another one after stitching.
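For flavor, here is a deliberately degenerate sketch of the idea behind Sylvan's least-squares seam stitching (all names hypothetical, and this is an assumption about the method, not code from it): the real method constrains bilinearly interpolated samples on both sides of a seam to agree while softly pulling every texel toward its original value, then solves the resulting least-squares system. In the trivial case where seam texels pair up one-to-one, that minimizer is just the average of each pair.

```python
def stitch_seam(texels_a, texels_b):
    """Degenerate 1:1 case of least-squares seam stitching.

    Minimizing (a - a0)^2 + (b - b0)^2 subject to a == b gives
    a = b = (a0 + b0) / 2 for each matched texel pair. The full
    method instead constrains interpolated samples along the seam,
    which couples neighboring texels into one global linear system."""
    stitched = [(a + b) / 2.0 for a, b in zip(texels_a, texels_b)]
    return stitched, stitched
```

After stitching, both sides of the seam sample to identical values, which is what removes the visible discontinuity.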
  10. Hi there. Twenty days have passed since my last post. I have done some work on our lightmap baking project, and I will share some experience about this thread here. I did as JoeJ said before: call UVAtlasPartition(), scale the charts, then call UVAtlasPack() (or implement your own packing). Sadly, it generates the same chart size (covering the whole mesh) after packing no matter how I scale the charts. So I gave up on UVAtlas and tried thekla_atlas instead. thekla_atlas now works well except on a few meshes; I will debug those in the future. I also processed the first-colocal index info to make the triangles as connected as possible. If you want more details about this, see https://github.com/Thekla/thekla_atlas/issues/18, where the author answered a question about it. I am going to tackle the seam problem now. : )
  11. I'm so happy to see you have worked so deeply on this. I have not gone that deep yet. I will observe the packing results from UVAtlas and thekla_atlas, and keep an eye on the input and output vertex counts. As you mentioned, UVAtlas can work well if I add additional seam info to the input mesh. I don't want our artists to do more work than before, so I will spend some time solving the segmentation problem. I will do the partition work first tomorrow, and then the rest; there is plenty of work waiting for me. I am glad I can discuss this with you further, and I will share my experience here next. : )
  12. Regarding packing, I hope it can pack well according to the scaled UV values; I will try it out. I have noticed that the output vertex count after the atlas-creation call is higher than the input vertex count, so I disabled vertex optimization when loading the mesh. As a result, no vertices are merged, and I don't need to reorder the triangle indices after generating the UV atlas: the input and output meshes now have the same vertex count and triangle list. I think doing this may help you resolve the bad partition problem, or you can merge some edges manually to control the chart count. My test mesh has 100629 vertices, and the generated UV atlas contains about 400 charts. I have also tried out lightmap UV generation in Unreal; it looks like it gets a better result than UVAtlas, and the chart distribution is neat. I won't try to learn how it works from Unreal's source code unless I cannot get an acceptable result from UVAtlas.
  13. I am so happy to see a good and straightforward way to finish this work, and I think I have got it thanks to your post. I think it will mostly work well, although I had never before come across how to calculate the area of a triangle. I will implement it as you mentioned on Monday; there are a few details left to handle, e.g. the stretching parameter and so on. Thank you very much, @JoeJ, and I will maintain this thread until all the work is done.
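For reference, the triangle-area computation mentioned above is standard: the area of a 3D triangle is half the magnitude of the cross product of two of its edge vectors. A minimal sketch:

```python
def triangle_area(a, b, c):
    """Area of the 3D triangle (a, b, c): half the magnitude of the
    cross product of edge vectors (b - a) and (c - a)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5
```

Summing this over a chart's triangles gives the world-space area needed for density-based chart scaling.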
  14. I am implementing baking in our engine, and I ran into a problem with assigning per-object UVs in the lightmap atlas. I am using UVAtlas to generate lightmap UVs; most unwrapped meshes have a UV range of [0, 1) no matter how big they are. I want them to have the same UV density so that they pack well into the lightmap atlas. I have tried thekla_atlas for the same thing too, but it seems it cannot unwrap UVs according to mesh size. As far as I can see, unwrapping the UV coordinates in world space could solve this, since all meshes would share the same scale, but I don't want to spend a lot of time writing and debugging that code. I am wondering whether there are existing methods I don't know about that can scale each lightmap UV to the same density. Thanks in advance. : )
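One way to get uniform texel density across charts (an assumption about the approach, not what UVAtlas or thekla_atlas do internally): scale each chart's UVs by the square root of the ratio of its world-space area to its current UV-space area, so that after scaling, UV area is proportional to world area for every chart; a global texels-per-world-unit constant is then applied at pack time. The function name is hypothetical:

```python
def chart_uv_scale(world_area, uv_area):
    """Scale factor giving a chart uniform texel density.

    Scaling all of a chart's UVs by s multiplies its UV area by s^2,
    so choosing s = sqrt(world_area / uv_area) makes the scaled UV
    area equal to the chart's world-space area."""
    return (world_area / uv_area) ** 0.5
```

For example, a chart covering 4 world units of surface but only 1 unit of UV area gets its UVs scaled by 2, quadrupling its UV area to match.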
  15. ChenMo

    DoF - near field bleeding

    Nice work. I have been working on depth of field for a few days, and I studied the same paper as you. After reading what you two said, I understand it better now. Thanks. Could you upload the demo again? The original link is 404.