theagentd

Member Since 15 Nov 2010
Offline Last Active Sep 23 2016 06:44 PM

Posts I've Made

In Topic: What is the exact correct normal map interpretation for Blender?

15 August 2016 - 08:45 PM

I'm pretty sure the blurring is just a different style of filling out the unused space of the normal map. The internals look identical after all. That normal map was baked with Substance Painter.
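For illustration, here is roughly what that "filling out" (edge padding / dilation) does, as a minimal Java sketch. This is only the general idea, assuming a coverage mask from the bake; it is not Substance Painter's actual implementation, which appears to blur or diffuse the border colors outward instead while leaving the covered texels untouched.

public final class NormalMapPadding {

    // One dilation step: uncovered texels copy the average of their covered neighbours.
    // rgb is [width][height][3]; covered marks texels that were actually rasterized by the bake.
    public static void dilateOnce(float[][][] rgb, boolean[][] covered) {
        int w = rgb.length, h = rgb[0].length;
        boolean[][] newCovered = new boolean[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                newCovered[x][y] = covered[x][y];
                if (covered[x][y]) continue;
                float r = 0, g = 0, b = 0;
                int n = 0;
                for (int dx = -1; dx <= 1; dx++) {
                    for (int dy = -1; dy <= 1; dy++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h || !covered[nx][ny]) continue;
                        r += rgb[nx][ny][0];
                        g += rgb[nx][ny][1];
                        b += rgb[nx][ny][2];
                        n++;
                    }
                }
                if (n > 0) {
                    rgb[x][y][0] = r / n;
                    rgb[x][y][1] = g / n;
                    rgb[x][y][2] = b / n;
                    newCovered[x][y] = true;
                }
            }
        }
        for (int x = 0; x < w; x++) {
            System.arraycopy(newCovered[x], 0, covered[x], 0, h);
        }
    }
}

Repeating this a handful of times pushes the border colors outward so that bilinear filtering and mipmapping don't bleed the background color into the UV seams.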


In Topic: What is the exact correct normal map interpretation for Blender?

13 August 2016 - 08:04 PM

Sifting through the entire source code of Blender would be a huge amount of work, especially since it's written in a programming language I'm not particularly experienced with... Otherwise, that would indeed be the "easiest" solution.

 

Here are the results of a baked normal map:

http://imgur.com/a/nWoOA

This normal map for a simple smooth cube was generated using Substance Painter. We made sure the mesh was triangulated, but the end result still sucks. The mesh does get closer to the right result (Y is inverted here, as you can see in the normal map, which looked the most correct), but it's... "wobbly" and uneven even though it should be perfectly flat like the original high-poly model... It's clear that I'm using a different tangent basis from, well, everything else in the entire world, it seems.

 

 

EDIT: Ahaa! This: http://gamedev.stackexchange.com/questions/128023/how-does-mikktspace-work-for-calculating-the-tangent-space-during-normal-mapping seems to be exactly what I'm looking for!!! Of course it's unanswered...
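For my own notes, this is the standard per-triangle tangent derivation that tangent-space generators like MikkTSpace build on, as a minimal Java sketch. It is NOT Blender's exact code (a full MikkTSpace implementation also splits and welds vertices, weights contributions by angle and area, and orthonormalizes in a specific order), so treat it as an assumption to verify:

public final class TangentSketch {

    // Returns {tx, ty, tz, sign} for one triangle, given positions p0..p2, UVs uv0..uv2
    // and a face/vertex normal. The sign tells the shader which way to flip the
    // bitangent it reconstructs from cross(normal, tangent).
    public static float[] triangleTangent(float[] p0, float[] p1, float[] p2,
                                          float[] uv0, float[] uv1, float[] uv2,
                                          float[] normal) {
        float[] e1 = sub(p1, p0), e2 = sub(p2, p0);
        float du1 = uv1[0] - uv0[0], dv1 = uv1[1] - uv0[1];
        float du2 = uv2[0] - uv0[0], dv2 = uv2[1] - uv0[1];
        float det = du1 * dv2 - du2 * dv1;
        if (Math.abs(det) < 1e-12f) {
            // Degenerate UVs: fall back to an arbitrary tangent.
            return new float[]{1, 0, 0, 1};
        }
        float r = 1.0f / det;
        float[] tangent = {
            (e1[0] * dv2 - e2[0] * dv1) * r,
            (e1[1] * dv2 - e2[1] * dv1) * r,
            (e1[2] * dv2 - e2[2] * dv1) * r
        };
        float[] bitangent = {
            (e2[0] * du1 - e1[0] * du2) * r,
            (e2[1] * du1 - e1[1] * du2) * r,
            (e2[2] * du1 - e1[2] * du2) * r
        };
        float sign = dot(cross(normal, tangent), bitangent) < 0.0f ? -1.0f : 1.0f;
        float[] t = normalize(tangent);
        return new float[]{t[0], t[1], t[2], sign};
    }

    private static float[] sub(float[] a, float[] b) {
        return new float[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    private static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    private static float[] cross(float[] a, float[] b) {
        return new float[]{a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]};
    }

    private static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(dot(v, v));
        return new float[]{v[0] / len, v[1] / len, v[2] / len};
    }
}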


In Topic: What is the exact correct normal map interpretation for Blender?

13 August 2016 - 05:29 AM

Thanks a lot for your response; we'll be sure to set up our Blender settings correctly. However, the problem I'm talking about comes from very subtle errors in direction, not obvious things like inverted normals or inverted Y coordinates. I need to figure out the exact algorithm Blender uses for normal mapping so I can use the same normals, tangents, bitangents and normalization steps as it does; otherwise subtle errors will be introduced... Still, thank you for all that information, we'll be sure to take it all into consideration.
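To make the subtlety concrete, here is the kind of per-pixel reconstruction I mean, written as plain Java standing in for shader code. Whether the interpolated vectors are renormalized before this step, and whether the bitangent is interpolated or rebuilt from the normal, tangent and handedness sign, are exactly the choices that have to match the baker; the version below is just one assumption, not Blender's confirmed behaviour:

public final class NormalMapping {

    // Reconstructs a world-space normal from a tangent-space normal map sample,
    // using the interpolated vertex normal, tangent and a per-vertex handedness sign.
    public static float[] perturbNormal(float[] interpNormal,   // interpolated vertex normal
                                        float[] interpTangent,  // interpolated vertex tangent (xyz)
                                        float tangentSign,      // handedness, +1 or -1
                                        float[] mapSample) {    // normal map texel in [0,1]
        // Decode the texel from [0,1] to [-1,1].
        float[] n = {mapSample[0] * 2 - 1, mapSample[1] * 2 - 1, mapSample[2] * 2 - 1};

        // Rebuild the bitangent from normal, tangent and sign instead of interpolating it.
        float[] b = cross(interpNormal, interpTangent);
        b[0] *= tangentSign;
        b[1] *= tangentSign;
        b[2] *= tangentSign;

        // world = n.x * T + n.y * B + n.z * N, normalized at the very end.
        float[] world = {
            n[0] * interpTangent[0] + n[1] * b[0] + n[2] * interpNormal[0],
            n[0] * interpTangent[1] + n[1] * b[1] + n[2] * interpNormal[1],
            n[0] * interpTangent[2] + n[1] * b[2] + n[2] * interpNormal[2]
        };
        float len = (float) Math.sqrt(world[0] * world[0] + world[1] * world[1] + world[2] * world[2]);
        return new float[]{world[0] / len, world[1] / len, world[2] / len};
    }

    private static float[] cross(float[] a, float[] b) {
        return new float[]{a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]};
    }
}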

 

There are many more reasons this could happen. It would be nice if you uploaded an image showing the model in Blender and one showing it in your engine. Uploading the texture would also help.
 

We're still trying to learn our way around Blender, but I will try to do this as soon as I can.


In Topic: Computing an optimized mesh from a number of spheres?

16 April 2016 - 09:02 AM

Thanks for all the awesome responses! I'm not entirely sure I can implement this myself, though... Hopefully I can find some Java libraries that take care of most of it for me. And here I thought I had a new idea... >___>

 

The only thing I had really heard about before was meta-balls.
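For reference, the metaballs idea boils down to summing a falloff per sphere into a scalar field and extracting an isosurface of it (for example with a marching cubes library). Here is a minimal Java sketch of just the field evaluation, with a classic inverse-square kernel picked purely for illustration; it is not meant as the optimized-mesh solution discussed in the thread:

public final class MetaballField {

    public static final class Ball {
        final float x, y, z, radius;

        Ball(float x, float y, float z, float radius) {
            this.x = x;
            this.y = y;
            this.z = z;
            this.radius = radius;
        }
    }

    // Field value at (px, py, pz); the surface is where the value crosses a threshold (e.g. 1.0).
    public static float evaluate(java.util.List<Ball> balls, float px, float py, float pz) {
        float sum = 0;
        for (Ball b : balls) {
            float dx = px - b.x, dy = py - b.y, dz = pz - b.z;
            float d2 = dx * dx + dy * dy + dz * dz;
            // Classic inverse-square falloff; other kernels (Wyvill, Gaussian) work too.
            sum += (b.radius * b.radius) / Math.max(d2, 1e-6f);
        }
        return sum;
    }
}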


In Topic: A phenomenological scattering model

03 April 2016 - 09:59 AM

I gave it a whack, just to see. The method is so simple, it's easy to integrate.
 


The "DepthWeighted" image is using the "A phenomenological scattering model" maths. I'm just using the same weighting algorithm that was presented in the paper. And just reflections here -- no transmission/refraction.
There are some more screenshots here (with some trees and grass and things).
 
It seems to work best when there are few layers. For certain types of geometry, it might be ok. But some cases can turn to mush.

Assuming a lighting method similar to the paper's, I didn't notice the weighting hurting precision too much... If you have a lot of layers, you're going to get mush anyway.

The worst case for weighting issues might be distant geometry with few layers (given that this will be multiplied by a small number, and divided again by that number). Running the full lighting algorithm will be expensive, anyway -- so maybe it would be best to run a simplified lighting model.

It may work best for materials like the glass demo scene from the paper. That scene is dominated by the refractions (which aren't affected by the weighting or sorting artifacts).

What about a more interesting example with different colors overlapping? Also, can you provide shader code? I have a test program just begging me to implement this algorithm in it. xd
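In the meantime, here is a rough sketch (plain Java standing in for shader code) of the generic weighted-blended compositing that WBOIT-style techniques, including, as far as I understand, this one, build on. The depth-based weight function below is just a placeholder, not the exact weights from the paper:

public final class WeightedBlendedOit {

    // Accumulated over all transparent fragments of a pixel:
    float accumR, accumG, accumB, accumW; // sum of weight * alpha * color, and sum of weight * alpha
    float revealage = 1.0f;               // product of (1 - alpha)

    // Called once per transparent fragment (straight, non-premultiplied color, alpha, view-space depth).
    void addFragment(float r, float g, float b, float a, float viewDepth) {
        float w = weight(a, viewDepth);
        accumR += r * a * w;
        accumG += g * a * w;
        accumB += b * a * w;
        accumW += a * w;
        revealage *= (1.0f - a);
    }

    // Resolve the pixel against the opaque background color.
    float[] resolve(float bgR, float bgG, float bgB) {
        float denom = Math.max(accumW, 1e-5f);
        float tr = accumR / denom, tg = accumG / denom, tb = accumB / denom;
        return new float[]{
            tr * (1.0f - revealage) + bgR * revealage,
            tg * (1.0f - revealage) + bgG * revealage,
            tb * (1.0f - revealage) + bgB * revealage
        };
    }

    // Placeholder depth-based weight: nearer fragments count more, scaled by coverage.
    private static float weight(float alpha, float viewDepth) {
        float d = Math.max(0.0f, Math.min(1.0f, 1.0f - viewDepth * 0.01f));
        return Math.max(1e-2f, alpha * d * d * d);
    }
}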

 

Oh, missed the link. Well... That's really not very impressive sadly... It seems like an improvement over WBOIT, but not by much...

